AI in the Newsroom: What Journalists Are Actually Using It For
AI is being adopted in newsrooms worldwide, but the reality is more cautious and more specific than the hype suggests. A look at what AI is actually being used for — and where the limits are.
The discourse around AI in journalism oscillates between two extremes: either AI will replace journalists entirely, or it's a gimmick that produces unreliable slop and will never do real reporting. Neither position reflects what is actually happening in newsrooms.
What is actually happening is more specific, more cautious, and more interesting than either narrative. Journalists and editors are adopting AI tools for defined tasks where automation provides clear value — while maintaining explicit editorial control over the decisions that determine what gets published.
Where AI Is Being Deployed
Transcription and note processing. The most widespread current use of AI in newsrooms is for transcribing interviews. Tools like Otter.ai, Descript, and integrated features in recording software have replaced manual transcription for many reporters. A 60-minute interview that previously required 3-4 hours to transcribe manually can now be processed in minutes. The transcript requires checking for errors, particularly with names, technical terms, and accents, but the time saving is real.
Text-pattern analysis on large datasets. Investigative journalism that involves processing large sets of documents — government procurement records, corporate filings, court documents — increasingly uses AI to identify patterns that would take human review far longer. The AP has used automated systems to report on financial results for years; more recent applications involve sifting through leaked datasets.
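In practice, much of this screening starts simpler than "AI" suggests. A minimal sketch of the idea — frequency analysis over structured records, with invented data and an invented threshold, not any outlet's actual pipeline:

```python
from collections import Counter

# Minimal sketch: flag vendors that appear unusually often in
# procurement records, with contract values clustered just under
# an approval threshold. Data and threshold are invented for
# illustration; real investigations combine many such screens
# with manual review by reporters.

records = [
    {"vendor": "Acme Ltd", "amount": 49_900},
    {"vendor": "Acme Ltd", "amount": 49_800},
    {"vendor": "Acme Ltd", "amount": 49_950},
    {"vendor": "Borealis Co", "amount": 12_000},
]

counts = Counter(r["vendor"] for r in records)

# Repeated contracts just under a 50,000 approval threshold are a
# classic red flag worth handing to a human for follow-up.
flagged = [
    vendor
    for vendor, n in counts.items()
    if n >= 3
    and all(r["amount"] > 45_000 for r in records if r["vendor"] == vendor)
]
print(flagged)
```

The point of the sketch is the division of labour it implies: the machine surfaces candidates; the judgement about whether a pattern is a story remains human.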
Automated reporting of structured data. The production of straightforward data stories — sports results, financial results, election results in predictable formats — has been partially automated at several outlets. The Washington Post's Heliograf system generated thousands of short articles on the 2016 election results automatically. This is most appropriate where the news is the data itself, not an interpretation of it.
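Systems of this kind are essentially template fillers over structured feeds. A hedged sketch of the technique — the field names, phrasing, and data are assumptions for illustration, not Heliograf's actual implementation:

```python
# Minimal sketch of template-based automated reporting: render a
# short race summary from structured result data. Illustrative
# only; field names and wording are invented, not the Washington
# Post's actual Heliograf templates.

def election_blurb(result: dict) -> str:
    """Fill a fixed sentence template from a structured result record."""
    margin = result["winner_votes"] - result["loser_votes"]
    return (
        f"{result['winner']} defeated {result['loser']} in "
        f"{result['race']} by {margin:,} votes "
        f"({result['winner_votes']:,} to {result['loser_votes']:,})."
    )

race = {
    "race": "the 5th District House race",
    "winner": "A. Example",
    "loser": "B. Sample",
    "winner_votes": 152_431,
    "loser_votes": 148_902,
}
print(election_blurb(race))
```

Note what the template cannot do: every factual claim in the output comes directly from the input record, which is exactly why this approach works only where the news is the data itself.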
First-draft assistance and summarisation. Many journalists use large language models (LLMs) to draft article structures, generate headline options, or produce initial summaries of documents that they then substantially rewrite. The resulting text is not published without significant human editing — it's a starting point, not a finished product.
SEO and metadata optimisation. AI tools are being used to suggest tags, categories, and metadata for articles — lower-stakes tasks with a clear efficiency rationale.
Where Newsrooms Have Drawn Hard Lines
The adoption of AI in journalism has been accompanied by explicit policies about where it should not be deployed.
Unverified AI-generated facts. The most consequential failure mode of current LLMs is confabulation — generating plausible-sounding but false factual claims. Newsrooms that have experimented with generating news content from LLMs have found rates of factual error that make the approach unusable without intensive human review that eliminates the efficiency gain.
CNET ran AI-generated articles in 2022-23 and quietly retracted or corrected dozens of them when errors were identified. The reputational cost was significant. The experience became a cautionary case study about the difference between AI's ability to produce fluent prose and its ability to produce accurate information.
AI-generated quotes. No responsible newsroom publishes AI-generated quotes attributed to real people. This is both a basic factual accuracy requirement and a legal issue — generating text attributed to a named individual creates defamation and impersonation risks.
Autonomous publication without human review. Even in outlets with the most automated workflows, a human is in the publication loop. AI identifies, processes, and drafts — humans decide what runs.
The Deeper Tension
The most important question in AI and journalism isn't about specific tasks — it's about the definition of journalism itself.
Journalism, at its most valuable, involves judgement: deciding what is important, what the public needs to know, what questions require investigation, whose perspectives have been ignored. These are decisions that require understanding of social context, power dynamics, history, and values. Current AI systems cannot perform them — not because the technology could never get there, but because they demand exactly the human grasp of significance and accountability that distinguishes journalism from information retrieval.
The risk isn't that AI will replace journalists wholesale. The risk is that commercial pressure to reduce costs leads to the replacement of journalism with the appearance of journalism — fluent text that references no real sources, involves no real reporting, and is accountable to nobody.
That distinction is what editorial standards exist to enforce.
What This Means for Readers
When you're reading an article, there is currently no reliable way to tell whether AI was involved in its production. Some outlets disclose AI use; many don't. Some use AI extensively in early drafts without disclosure; others use it only for transcription.
The most reliable signal remains not the production process but the output: Are there named sources? Is primary evidence cited? Is the article bylined by a person you can find with a public profile? Does the article demonstrate knowledge that would require actual reporting?
The presence of these elements doesn't guarantee absence of AI, but their absence tells you something meaningful about how a piece was produced.
Sources: Reuters Institute, "How news organisations are using AI" (2024); CNET AI story retraction analysis (NPR, 2023); AP automation guidelines (2023); Columbia Journalism Review AI in Newsroom reports.