You’re all probably going to be sick of me by the time the robot revolution comes, but until then, I will have bone after bone to pick with the machine.
In August, CBS reported that Aaron Pelczar, a journalist for Wyoming news outlet the Cody Enterprise, was caught using generative artificial intelligence to write and publish news articles.
One of the stories Pelczar generated was a piece on a shooting in Yellowstone National Park, which featured the line, “This incident serves as a stark reminder of the unpredictable nature of human behavior, even in the most serene settings,” according to the Associated Press. Following the incident, Pelczar resigned.
Before Pelczar, Sports Illustrated was found to be publishing AI-generated stories credited to reporters who didn’t exist, according to PBS. The news caused many outlets to reassess or reaffirm their policies on AI.
The AP recently added language to its guidelines stating that AI “cannot be used to create publishable content and images for the news service,” though it has reported using the technology for financial-earnings articles and some sports coverage.
USA Today has incorporated AI-generated “key point” summaries at the top of its articles, with disclaimers at the bottom reading: “The Key Points at the top of this article were created with the assistance of Artificial Intelligence (AI) and reviewed by a journalist before publication. No other parts of the article were generated using AI.”
As AI becomes more common, we must be clear and cautious about our rules.
The Daily Lobo has no formal AI policy yet, though the unwritten one is “Do not use AI.” How strange that we may have to put “don’t use a robot to make things up” in a journalist’s handbook.
All jobs, or at least most of them, do a service to our fellow man. Doctors save lives. Electricians keep the lights on. Servers, cashiers and baristas keep the wheels of convenience greased. We all do our part, and we do what we promised to do.
As journalists, we have one simple but incredibly important job: to tell the truth.
As soon as you put something in print, you enter into an unspoken contract with your reader: You promise that you have done the reporting and are telling the truth. In return, the reader ideally listens to you and walks away with new information.
This is how it ought to work. It’s symbiosis; we’re the birds who pick gristle out of crocodile teeth, pulling away the rot of untruth and leaving behind what the public needs in order to live well-informed lives.
If we take a shit in the mouth of our crocodile, we deserve to be eaten.
According to Axios, public trust in journalists has reached an all-time low, with more than one in three Americans saying they don’t trust journalists at all. Pair that with a decline in public trust in AI, which has dropped from 61% to 53% over the past five years, and we’re looking at a world where our readers don’t believe in us at all.
In the case of Pelczar’s Yellowstone article, the truth was important. Those were people’s lives, and journalists should do the service of reporting accurately. Not only is it dishonest to use AI to write about people’s lives, it’s disrespectful.
When you use AI without a disclaimer, you fail to tell the truth. You are using the implied truth of a news article to say something made up by a computer.
It isn’t just Pelczar. AI is being used in journalism closer to home, too.
On March 5, New Mexico News Port — the outlet associated with the University of New Mexico’s communication and journalism department — ran an article about changes to state voting laws accompanied by an AI-generated visual.
If you know what you’re looking for, the use of AI for the image is obvious. The signs meant to read “vote” have jumbled letters, the United States flag has the wrong number and positioning of stars and stripes, people melt into buildings, etc. However, if you don’t know what you’re looking for, it’d be easy to accept the image as real, especially because it’s coming from a news source you can theoretically trust.
Gwyneth Doland, faculty advisor for News Port, said she generated the image intending it to resemble a painting or an illustration rather than a photograph.
“I don't think any human would think that was a photograph,” Doland said.
The image ran without a caption disclosing that it had been AI-generated, though Doland said it was supposed to have one.
“I think the important point (is) AI is a tool that trained journalists can use to do their work more efficiently and more effectively … this is a tool, and a tool can be used for good or evil, but the tool exists and people are using it,” Doland said.
I hear ideas like this a lot in conversations about AI. I question the notion that all tools are inherently neutral. Firstly, all tools have a purpose — something they are made to do. In the case of AI that specifically generates images and text, its major purpose is to reduce the need for human creation and analysis, which are essential for accurate, honest and quality journalism.
Secondly, AI works by scraping existing art and writing from human artists, then rescrambling it without crediting or compensating the artists or source material it used.
I don’t believe AI’s position as a “tool” inherently gives it or its use moral neutrality. A guillotine is also a tool. That doesn’t mean we ought to, or need to, use it.
I recognize that News Port is a small publication with limited resources for photographers and photography. But any breach of the contract journalists have with the public, for any reason, feels like a failure on our part.
Where do we draw the line? What’s to stop us from lying about something bigger, bolder, more important? How can we, in good faith, ask the public to trust us again?
I think of the quote hung in the Daily Lobo newsroom, a motto of sorts: “Get it first, but first, get it right.”
I just hope it doesn’t get any worse.
Addison Fulton is the culture editor for the Daily Lobo. She can be reached at culture@dailylobo.com or on X @dailylobo