Artificial Intelligence has been one of the most talked-about phenomena of the past year, and AI’s impact on journalism has a lot of people concerned, including members of Congress on both sides of the aisle. The concerns range from enabling Big Tech to be even more efficient at repackaging and distributing content originally generated by local media, without compensating the originators, to increasing the flow of misinformation.
At a hearing before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law last week, media executives and academics gave their views on the additional pressure that adding AI to the mix of issues facing journalism creates.
Now, much of this testimony relates to the big media companies, but the impact on local journalism could be significant also – we small operators just didn’t have a seat at the table to give our views on the matter. Those of us working in the little “j” journalism world have concerns about the use of our content, both to train AI systems and when it is reproduced without compensation, but the oligopolistic nature of the competition for advertising concerns us more. Economies of scale make it very difficult to compete with huge companies like Google, Facebook and Microsoft that dominate the market for all forms of advertising.
In researching this column, I found the website of an organization called Free Press Unlimited (www.freepressunlimited.org), and an article there that discussed the use of AI tools by and for journalists. It talked specifically about ChatGPT, which has been all over the news since its debut in 2022.
Journalists using ChatGPT for research find that, while the software pulls information from across the internet, not all of that information is accurate, and the results need close fact-checking. That’s manageable for reputable publications, but some online publications have been set up that are essentially automated, running AI-generated stories that contain misleading and outright false information. These websites exist solely to sell digital advertising around the content, competing with more traditional print and online media.
NewsGuard (newsguardtech.com), which according to its website “provides transparent tools to counter misinformation for readers, brands, and democracies,” has identified over 600 websites that operate primarily on generative AI-produced content with little human oversight. Articles on some of these sites have erroneously reported the deaths of celebrities and world leaders, described fictitious events, and presented old events as current.
I’m not entirely negative about the impact of Artificial Intelligence on our profession. I’ve read that AI-assisted search tools can help journalists find information quickly – hopefully with sourcing included. I understand it can help with transcribing interviews. It can quickly summarize reports and documents. It’s apparently good at automating repetitive tasks, something I thought computers were supposed to do anyhow. Supposedly, AI helps them do it better.
AI has been called both the greatest innovation in a generation and an existential threat to our society. I doubt it is either – probably something in between. It’s a tool that can be beneficial when used for good, and dangerous in the hands of the malevolent. I don’t want to come off as a Luddite, but I’m going to have to be shown how it will make my life easier. Personal computers were supposed to do that for our industry when they came into general use in the 1980s. What they actually did was enable us to do more work in a given amount of time – usually. I expect AI will have the same effect.