How AI, rather than being a danger to journalism, is helping some newsrooms fulfil journalism’s core function: protecting democracy and holding power to account.
In the digital newsroom of Cuestión Pública, a small investigative outlet in Bogotá, Colombia, the usual flurry of Discord messages suddenly took on a different tone. It was late 2023, and the team, working remotely from across the capital, were about to reach a significant milestone: their in-house AI tool, Odin, had drafted and published its first X thread, with human oversight. The thread’s content, drawn from a database of meticulously selected information, was triggered by a current event involving a governor in the Colombian department of Atlántico. As Odin processed this information, it selected the most relevant details to craft a series of X posts that resonated with the day’s news, marking a revolutionary step in digital journalism for Cuestión Pública.
Cuestión Pública’s team primarily investigates issues of public interest and holds power to account. Its investigations are presented in a distinctive sassy tone, often accompanied by cartoonish illustrations.
Claudia Báez, data journalist and co-founder of Cuestión Pública, was the driving force behind bringing AI into the newsroom. “I only see opportunities with AI and journalists,” she tells me enthusiastically. “The intersection has a lot of possibilities.”
After the tweet went live, the virtual workspace buzzed with reactions. Among the commotion, one journalist wrote: “Oh, so that’s what the future looks like.”

Moments like these have been happening inside and outside newsrooms around the planet as AI technology takes the world by storm. In newsrooms particularly, AI has been a topic of contention, with many approaching the technology cautiously and others rejecting it outright.
“People are afraid,” says Claudia Báez, “and it’s good to be cautious. But they should at the very least build a table or something to ride this technological wave.”
This wave is washing through everything from bustling newsrooms in London to small outlets in remote corners of the world. AI is reshaping how news is gathered, written and disseminated – and it’s here to stay.
Cuestión Pública is a small newsroom, with 18 employees working mostly remotely across Colombia, but with the help of AI they’ve been able to increase the scale and reach of their investigative work.
“With AI, we can convert long-form investigations into many different products for different audiences,” Báez says. “We want to educate people to make better decisions and improve social decision making.”
Their award-winning investigations have been widely shared by Colombian mainstream media, discussed in Congress, and have contributed to the drafting of a law requiring politicians to disclose any conflicts of interest that might affect their policymaking.
While Claudia’s team and many others approach AI with enthusiasm, much of the public views the use of the technology in newsrooms with distrust: a Reuters Institute poll found that 30% or fewer of respondents trust the news media to use generative AI responsibly.
Since AI became more widely available over the past year, a number of headlines have confirmed the sceptics’ worries.

In June the New York Times reported on BNN Breaking, a site based in Hong Kong that had published multiple incorrect articles as a result of what appeared to be generative AI errors. One of its articles, featured on MSN.com among other places, falsely portrayed Irish talk show host Dave Fanning as being sued for sexual misconduct. Another article was found to have invented quotes from a source, and yet another made up a panellist in a piece about a literature festival.
“It is a fundamentally new kind of news”
The site would churn out hundreds and at times even thousands of stories a day – too many to verify. BNN Breaking is no longer active following the Times’s article and a lawsuit by Dave Fanning.
More recently, X has been testing a new automated stories feature via Elon Musk’s AI company, xAI. The feature automatically creates personalised articles for users based on X posts, which has unsurprisingly led to a number of false reports, including the supposed death of Noam Chomsky (who is still very much alive as of the publication of this article). Unlike BNN Breaking, this feature is likely here to stay.
Headlines like these have left many in the news industry wary of adopting the technology in their newsrooms.
Rhys Everquill, co-founder of the Great Central Gazette, a small independent newspaper in Leicester, tells me that he has been very cautious about exploring the use of artificial intelligence and has no plans ever to publish anything that is AI-generated. The reason, he says, is that they “have significant concerns about current AI tools and their ability to fabricate plausible-sounding but factually inaccurate information.”
Standing on this technological frontier that will transform how news is gathered, written and consumed, a crucial question looms: How can newsrooms harness the power of AI while staying true to the core principles of journalism?
To understand where the issues with artificial intelligence lie, one must first understand how it works. AI has been around for a while. The term ‘artificial intelligence’ was coined in 1956 by John McCarthy, off the back of Alan Turing’s research in the 1950s. It took until the 1990s and early 2000s for artificial intelligence to achieve its first landmark results, including IBM’s Deep Blue program defeating the reigning world chess champion.
As computing power kept roughly doubling every couple of years through the 2000s, AI found its way into a number of industries, such as banking, marketing and entertainment. But it worked almost exclusively behind the scenes, and public understanding of artificial intelligence didn’t go much further than Terminator’s Skynet.
“ChatGPT threatens the New York Times’s ability to provide its services.”
Then in 2019, OpenAI released GPT-2 to the public. Like the chatbots that have since been circulating in the news, such as Claude by Anthropic or Meta’s Llama 3, it is a Large Language Model, or LLM. LLMs are trained on huge sets of data. That data includes much of the written content on the internet, including Wikipedia and copyrighted journalistic writing, which is the reason the New York Times sued OpenAI back in December 2023.
In the landmark case, the news company alleged that the creation and training of OpenAI’s LLMs illegally used copyrighted material and that ChatGPT “threatens the Times’s ability to provide that service”. The case has since inspired eight other publications to follow suit and remains unresolved.
Large language models are built on machine learning algorithms: trained on this data, they find connections and patterns that they then apply to new input to generate responses.
In short, LLMs have been fed enough text to learn how sentences are structured and, as a result, to generate text themselves. The model knows what text should look like to such an extent that it can produce it: it builds entire sentences by repeatedly predicting the word most likely to follow the words it has already generated.
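To make that idea concrete, here is a minimal, illustrative sketch in Python. It uses a tiny bigram model – simply counting which word follows which in a toy corpus – rather than a neural network, so the corpus, the function name and the greedy “always pick the most probable word” rule are simplifications for demonstration only. Real LLMs learn from billions of documents and sample from a probability distribution rather than always taking the single top word.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the vast amount of text an LLM is trained on.
corpus = (
    "the newsroom publishes the investigation and the newsroom "
    "publishes the follow-up and the public reads the investigation"
).split()

# Count which word follows which: a drastically simplified stand-in
# for the patterns an LLM learns at scale.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start, length=8):
    """Repeatedly pick the most probable next word, as described above."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # e.g. "the newsroom publishes the newsroom publishes ..."
```

Even this crude version shows the principle (and one of its pitfalls: taken too literally, prediction-by-frequency quickly loops and repeats itself).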
To help me write this article, I decided to enlist the help of Anthropic’s AI model Claude 3.5, which is said to have a more “human” and empathetic approach than ChatGPT (when I asked him whether I could give him a nickname, he refused multiple times). Ethan Mollick, a professor who has worked extensively with AI, called Claude 3 “the (AI) most likely to freak you out right now.”
After setting some ground rules for myself (no text generation for the article, every fact to be double-checked), I started treating Claude like a personal assistant, asking him to create images, summarise interviews and give me feedback on where I could improve my writing. His help was incredibly valuable and I quickly got used to coming to him for feedback, but when I asked him to find the dates of a number of events for me, I encountered the first issue: all of the dates were wrong.
This phenomenon is called hallucination and occurs quite often – it’s why BNN Breaking was not a trustworthy news source and why xAI’s new product publishes false news. The AI, besides having no accountability, has difficulty separating fact from fiction, which puts it in direct conflict with journalism’s core principle of reporting the truth.

Image: ChatGPT-4o. Prompt: “Create an image of an AI hallucinating.”
Yet as AI shows increasingly impressive capabilities in text creation, the opportunities for an information-based industry like journalism become hard to ignore.
Claudia Báez sees artificial intelligence as an opportunity. “It’s such a massive destructive technology,” she says, “that can be a big opportunity for independent media if they start to invest now.”
She is not alone in her enthusiasm: a 2023 Journalism AI survey of 105 news organisations found that 90% were using AI in news production, up from 66% in 2019, and 80% were using it for news distribution, up from 50% in 2019.
Felix Simon of the Tow Center for Digital Journalism interviewed 134 journalists from 35 news organisations in the US, UK and Germany, including The Guardian, The Financial Times, The Sun and The Washington Post, on their use of AI in the commercial, editorial and technological domains.
He reports that artificial intelligence is now the “talk of the town” in the news industry and that “many news industry leaders have high hopes for AI to not just be the next big thing, but to be the ‘big thing’ that delivers for their industry.”
“AI will form part of the journalistic toolbox going forward”
Throughout his interviews across the industry, he found that news organisations around the world have been scrambling to come up with AI strategies in the past three years.
“As a journalist starting out today,” he tells me, “you will just need to have at least a grounding of what it can do, because it will form part of the journalistic toolbox going forward.
“There’s no point in saying, ‘oh, no, I don’t want to have anything to do with this.’ I just don’t think that’s a realistic position to have.”
Despite the blistering pace at which AI capabilities have advanced, his research found that rather than completely transforming workflows, “many of the most beneficial applications of AI in news are relatively mundane” and that AI has generally not proved to be a silver bullet.
He says the core motivation to use AI in news organisations has generally come from a need to increase efficiency. “On the one hand,” Simon says in his report, “there are experts and practitioners who believe that AI will significantly free up journalists, allowing them to focus on more creative and strategic tasks while the technology takes care of the grunt work.” Others, he says, are more sceptical and argue that AI will only have a limited effect on productivity.
Many major news organisations have been putting large amounts of money into developing AI tools for their journalists to use. Some have been relatively secretive about the development of those tools, and none of their journalists responded to my requests for an interview.
Others have been quite open. The Guardian recently showcased an AI tool it is developing, implemented as a browser extension, which can give journalists a list of potential headlines, opening comments and images for an article. It can also recast the article the journalist is looking at into different formats, such as a short explainer, a list of quotes or facts, or a timeline.
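The Guardian has not published the internals of its tool, but assistants of this kind typically amount to wrapping a large language model in a carefully worded prompt. The sketch below is a hypothetical illustration using the OpenAI Python SDK; the model name, prompt wording and suggest_headlines helper are my assumptions, not the Guardian’s implementation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_headlines(article_text: str, n: int = 5) -> str:
    """Ask an LLM for headline ideas for a draft article (illustrative only)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "You are an assistant for newsroom sub-editors. "
                        "Suggest clear, accurate headlines. Do not invent facts."},
            {"role": "user",
             "content": f"Suggest {n} possible headlines for this article:\n\n{article_text}"},
        ],
    )
    return response.choices[0].message.content

# A human editor still reviews every suggestion before anything is published.
print(suggest_headlines(open("draft_article.txt").read()))
```

The same pattern, with a different prompt, would cover the other formats mentioned above: an explainer, a list of quotes or a timeline.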
While the prospects are exciting, Simon says there are naturally limitations to how AI can be integrated into news organisations. “There’s only so much you can do in terms of automating content production, especially if it’s audience facing,” he says, “if you always have to have a human in the loop to check the content for accuracy.
“The commitment to producing factually correct news 100% of the time, which journalism should be about, then limits how much you can do at scale with this in terms of content production.”
Most of the tools being developed by major news companies are aimed at improving efficiency. They aren’t transforming existing workflows or creating entirely new types of journalism, at least not yet.
The latter is what interests David Caswell, former Executive Product Manager at BBC News, who led the AI in journalism challenge back in April 2023. The challenge was created as an accelerator programme to help a number of smaller newsrooms quickly develop AI tools with their limited resources. The newsrooms were given a $5,000 grant by the Open Society Foundations and guidance from Caswell, who acted as lead consultant.
Claudia Báez’s Cuestión Pública was one of the participants. For Caswell, the most interesting outcome of the challenge is that AI has only recently reached the stage where newsrooms can develop and deploy effective tools in a short amount of time without a dedicated team of developers. “You don’t need a lot of technical skills or capital investment,” Caswell tells me.
“The main barrier for adopting AI in your newsroom is confidence”
“[Cuestión Pública] have built this knowledge base about Colombian society, and then they have this very sassy, sarcastic editorial voice and they successfully fine-tuned GPT-3.5 to speak in that voice. So now they have this quite sophisticated system and they’re not data scientists.”
He argues that this now gives smaller newsrooms a disproportionate advantage compared to large newsrooms, “because they’re more agile and experimental.”
“If you can get these newsrooms just up to this point,” he says, “where they’ve got confidence in their own ability to figure it out, they’ll just go, and that’s the core of the challenge.”
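Caswell doesn’t detail how Cuestión Pública’s fine-tuning was done, but on OpenAI’s platform the process broadly boils down to uploading examples of the desired voice and starting a fine-tuning job. The sketch below is a hypothetical illustration using the OpenAI Python SDK; the file name, the example messages and the choice of base model are assumptions on my part, not Cuestión Pública’s actual setup.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical training data: each line of voice_examples.jsonl pairs a neutral
# prompt with a reply written in the outlet's sassy editorial voice, e.g.
# {"messages": [{"role": "user", "content": "Summarise the governor's contract scandal."},
#               {"role": "assistant", "content": "Oh look, another 'coincidence' worth millions..."}]}
training_file = client.files.create(
    file=open("voice_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on a base chat model (the model choice is an assumption).
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id)  # once finished, the fine-tuned model can be called like any other
```

The point Caswell makes stands either way: none of this requires a team of data scientists, just a set of good examples and the confidence to experiment.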
The applications of AI in news generally fall into two categories: output, meaning AI tools that assist with the distribution of news, and input, AI tools that help with gathering and processing the information that goes into it.
Zamaneh Media is an independent news outlet in exile, based in Amsterdam, that does investigative reporting on Iran. With limited resources they’ve developed a tool called “Newsletter Hero”, which allows them to package their long-form investigative work into bite-sized newsletters, helping them reach a wider audience and increase engagement.
Impact monitoring is another important but resource-intensive area, and many smaller newsrooms don’t have the capacity to do it effectively. Agência Pública, an independent investigative outlet based in São Paulo, Brazil, developed a tool called Pública IQ, which enables them to track the performance of their articles by scanning for links and keywords online.
“We can save now like 75% of the time we used to [for impact monitoring],” Marina Dias, who worked on the development of the tool tells me.
“We have now a very functional prompt,” she adds, “and this helped us to better understand and improve our methods, and I think this is really incredible.”
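Pública IQ’s internals aren’t public, but the core of this kind of impact monitoring can be surprisingly simple: fetch a list of pages and check them for links to your articles or for key phrases. The sketch below, using only Python’s standard library, is a hypothetical illustration; the URLs, keywords and function name are placeholders, not Agência Pública’s actual code.

```python
from urllib.request import urlopen, Request

# Hypothetical inputs: pages to monitor, and the article links / key phrases to look for.
PAGES_TO_SCAN = ["https://example.com/politics", "https://example.org/media-roundup"]
SIGNALS = ["apublica.org/2024/05/investigation", "Pública investigation"]

def find_mentions(pages, signals):
    """Return which monitored pages mention our article links or key phrases."""
    mentions = []
    for url in pages:
        request = Request(url, headers={"User-Agent": "impact-monitor-sketch"})
        try:
            html = urlopen(request, timeout=10).read().decode("utf-8", errors="ignore")
        except OSError:
            continue  # skip pages that fail to load
        hits = [s for s in signals if s.lower() in html.lower()]
        if hits:
            mentions.append((url, hits))
    return mentions

for page, hits in find_mentions(PAGES_TO_SCAN, SIGNALS):
    print(page, "->", hits)
```

A real system would add scheduling, a larger list of sources and a dashboard, but the time savings Dias describes come from automating exactly this kind of repetitive checking.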
The Quint, an independent investigative news outlet in India, has become expert at detecting AI-generated images and has built workflows around this. They are now trying to combat fake news by identifying images, deepfakes and audio that are going viral and are likely to be artificially generated. Ritu Kapur, co-founder of The Quint, tells me: “Basically we are using AI to counter the use of AI.”
While all of these news outlets have been approaching AI with enthusiasm, they are confident that their journalistic integrity and quality remain unaffected.
“One of the things we did pretty early on,” says Ritu Kapur, “and we update that every three months, is come up with very, very, very detailed AI use guidelines for the editorial team.
“It took a lot of collective head banging to determine what AI use is OK and what AI use crosses a line.”
The guidelines are quite similar across the different organisations. A common bare-minimum ground rule is double-checking everything before publication.
The vast majority of news organisations steer clear of using any AI-generated text in anything audience-facing. Marina Dias of Agência Pública says: “Our main guideline is that Pública’s journalists don’t use AI to write a story.”
The Quint has gone a step further and blocked AIs from reading its content, out of fear that they could misrepresent material from The Quint’s articles: “It can mix and match [our articles] with something that it picks up from somewhere else,” says Ritu Kapur, “and mangle it into something else that’s incorrect.”
While most newsrooms don’t publish AI-generated images in order to stay on the safe side, The Quint does, but makes sure it is done responsibly: “Everything has to be well labelled and have human oversight,” Kapur says.
In his report, Felix Simon summarises the current impact of AI quite well:
“As with any new technology entering the news, the effects of AI will neither be as dire as the doomsayers predict, nor as utopian as the enthusiasts hope.”
There are certainly challenges that AI brings to the news and wider media landscape. With realistic-looking text and images now possible to create instantly, and the technology only likely to improve, the volume of false information is set to skyrocket. Some purely profit-oriented tabloids may be tempted to mass-produce news content beyond the capacity of human oversight, as BNN Breaking did.
As the use of AI develops there will likely be the occasional blunder from a major publication, but journalistic integrity and the reader’s desire for the truth remain unchanged. As new applications for the technology are discovered, so are its limits, and it’s clear that high-quality, human-led journalism isn’t going to disappear.
On the other hand, AI offers great opportunities to the news economy. Large language models are a brand new tool in the journalist’s toolbox, one that can cut the time spent on tedious tasks such as interview transcription, search engine optimisation and other performance-related work.
AI has helped some small newsrooms increase their reach by assisting with information gathering, processing and distribution, allowing them to better hold power to account. David Caswell even argues that this could close some of the gap between small newsrooms and large news organisations, which may have bigger budgets but are less flexible because of their size.
Still, AI is unlikely to pull journalism out of its current crisis, as new problems arise, such as xAI creating “news articles” based on a handful of posts, or Google Search offering up AI-generated answers that drive traffic away from news sites.
The future will bring many exciting new types of journalism. Claudia Báez says there is a future in interactive or even gamified journalism – something her newsroom is currently exploring with “Game of Votes”, a Game of Thrones-themed interactive map (Westeros, but shaped like Colombia) in which the houses represent the different political parties and can be clicked to reveal the royal families (members of parliament) and their values.
Some of the large news outlets are working on adapting articles to the reader’s preferred way of consuming them. Think text-based articles summarised into bullet points, or automatically generated audio or video versions, some of which are already on offer. Some readers may even prefer a different writing style: “Read me this article in the style of Dr. Seuss.”
Despite the whiplash-inducing pace at which AI is developing and being incorporated into every aspect of media, the core, on-the-ground reporting likely won’t be going anywhere and, by following a strong set of guidelines as it has always done, journalism will be alright.