I'm writing this in my personal capacity as someone who recently began working at the Tarbell Center for AI Journalism—my colleagues might see things differently and haven’t reviewed this post.
The rapid development of artificial intelligence could prove to be one of the most consequential technological transitions in human history. As we approach what may be a critical period in AI development, we face urgent questions about governance, safety standards, and the concentration of power in AI development.
Yet many in the general public are not aware of the speed of AI development, nor the implications powerful AI models could have for them or society at large. Society’s capacity to carefully examine and publicly debate AI issues lags far behind their importance.
This post makes the case for why journalism on AI is an important yet neglected path to remedy this situation.
AI journalism has a lot of potential
I see a variety of ways that AI journalism can helpfully steer AI development.
Journalists can scrutinize proposed regulations and safety standards, helping improve policies by informing relevant actors of potential issues or oversights.
- Example article: Euractiv’s detailed coverage of the EU AI Act negotiations helped policymakers understand the importance of not excluding general-purpose models from the act, which may have played a key role in shaping the final text.[1]
- Example future opportunity: As individual US states draft AI regulations in 2025, careful analysis of proposed bills could help prevent harmful loopholes or unintended consequences before they become law.
Journalists can surface (or amplify) current and future risks, providing activation energy for policymakers and other actors to address these risks.
- Example article: 404 Media's investigation revealed that an a16z-funded AI platform generated images that “could be categorized as child pornography,” which led a cloud computing provider to terminate its relationship with the platform and contributed to broader industry action to reduce AI-generated CSAM.
- Example future opportunity: Surfacing examples of AI-enabled surveillance within governments could highlight the dangers of AI-enabled concentration of power.
Journalists can focus public discussion on important topics by writing explainers or connecting current news to underdiscussed topics, increasing the salience of these topics for key decision makers and voters.
- Example article: Ian Hogarth’s Financial Times article on the “race to god-like AI” highlighted AI risks and reportedly played an important role in generating the political will to set up the UK’s Frontier AI Taskforce, and later the UK AISI. It was one of the Financial Times’ most-read articles of the year.
- Example future opportunity: Reporting on the state of US AI regulation could highlight the gap between the US electorate’s views on AI regulation and existing regulation.
Journalists can reveal new information via in-depth investigations of companies and regulators, creating accountability pressure for them to uphold their commitments and act in the public interest.
- Example article: Vox Future Perfect’s investigation of OpenAI revealed sketchy NDAs; OpenAI promptly changed them, but the episode evidenced untrustworthy behavior from the company.
- Example future opportunity: Publishing investigations into whether AI labs are meeting their voluntary commitments.
Overall, I think quality journalism plays an important role in how accurately the public and key decision-makers understand AI development, and in whether they can shape it so that it doesn’t endanger many or benefit only a privileged few.
The state of AI journalism
While AI journalism has a lot of potential, I don’t think it’s currently on track to meet it:
- Click-driven revenue models often don't incentivize the in-depth reporting needed for quality AI journalism.
- The field is severely understaffed relative to its importance. Shakeel Hashim, my colleague at Tarbell, estimates that there are only about 20 journalists worldwide covering AI full-time, and roughly 100 covering it part-time.[2] While journalism broadly is losing jobs, AI is one of the few topics newsrooms are trying to expand their coverage of.
- The technical and political complexity of AI makes it challenging for journalists without expertise to cover effectively. The demands of journalism also make it hard to take time off to learn.
- Many outlets don't take the possibility of rapid AI development seriously, treating AGI discussions as mere marketing hype.
See Shakeel Hashim’s essay ‘The media reckons with AGI’ for an appeal to journalists to treat the possibility of transformative AI more seriously.
Building the next generation of AI journalists
AI journalism needs more talented people who:
- Understand the technical and political aspects of AI
- Can think carefully about how new AI developments influence existing complex systems
- Care about truth-seeking, not defending an ideology
- Want to help society navigate this crucial transition
Historically, people with these qualities in the AI safety orbit have either pursued AI governance or technical AI safety. This has left the AI journalism path (and other communication paths) relatively neglected, to the point where grantmakers value the marginal AI journalist more than the marginal AI policy or technical alignment researcher.
As more people wake up to rapid AI progress and look for sense-makers, they shouldn’t just find people trying to sell them something. We need journalists who can investigate claims, explain developments, and hold powerful actors accountable.
But I want to emphasize that journalism differs from advocacy. Good journalism leads with questions rather than answers. Its influence comes from digestible, factual coverage of important questions. If you just want to broadcast conclusions or advocate for a specific solution, journalism may not be right for you.
See Tarbell’s mission statement for more info on how my colleagues and I view the role of journalism on AI.
In brief: We believe that society could soon have to reckon with AI systems as good as or better than humans at nearly all cognitive tasks. But we don’t suppose we know exactly what to do about that, or that any group has a monopoly on important perspectives here. So we want to empower journalists to do what they do best: ask tough questions, challenge assumptions, and help the public understand complex developments through thorough reporting and clear storytelling.
The Tarbell Fellowship
If you're interested in AI journalism, consider applying to the Tarbell Fellowship. The program provides:
- Training in both AI and journalism fundamentals
- A $50,000 stipend
- A 9-month placement at outlets like TIME, MIT Tech Review, The Guardian, Lawfare, ChinaTalk, South China Morning Post, UnderstandingAI, Bloomberg, and The Bureau of Investigative Journalism.[3]
- Mentorship from experienced journalists
Applications for the 2025 cohort close February 28th.
[1] Or at least that’s what we’ve heard anecdotally. As with policy research, it’s difficult to quantify or get significant insight into the impact of a specific piece.
[2] Excluding Tarbell Fellows.
[3] The concrete work of Tarbell Fellows depends a lot on where they are placed, which they partially decide. For example, placements at Lawfare will focus on long technical explainers for a national security audience, while placements at The Information may focus more on breaking news.
I think it would be a huge mistake to condition support for AI journalism on object-level views like this. Being skeptical of rapid AI development is a perfectly valid opinion to have, and I think it's pretty easy to make a case that the actions of some AI leaders don't align with their words. Both of the articles you linked seem perfectly fine and provide evidence for their views; you just disagree with the conclusions of the authors.
If you want journalism to be accurate, you can't prematurely cut off the skeptical view from the conversation. And I think skeptical blogs like Pivot-to-AI do a good job at compiling examples of failures, harms, and misdeployments of AI systems: if you want to build a coalition against harms from AI, excluding skeptics is a foolish thing to do.
I think this is really fair pushback, thanks! Skeptical coverage of AI development is legitimate. I think the way I wrote this over-implied that articles like these are a failing of journalism; the marketing hype claim is not baseless.
But I'm torn. I still think there's something off about current AI coverage, and this could be a valid reason to want more journalism on AI. Most articles seem to default to either full embrace of AI companies' claims or blanket skepticism, with relatively few spotlighting the strongest version of arguments on both sides of a debate.
Also, I think my core point stands without conditioning on object-level views: we need more journalists who can dig deep into AI development. More investigation and scrutiny from all angles would serve us better than our current situation of relatively thin coverage.
"Most articles seem to default to either full embrace of AI companies' claims or blanket skepticism, with relatively few spotlighting the strongest version of arguments on both sides of a debate. " Never agreed with anything as strongly in my life. Both these things are bad and we don't need to choose a side between them. And note that the issue here isn't about these things being "extreme". An article that actually tries to make a case for foom by 2027, or "this is all nonsense, it's just fancy autocomplete and overfitting on meaningless benchmarks" could easily be excellent. The problem is people not giving reasons for their stances, and either re-writing PR, or just expressing social distaste for Silicon Valley, as a substitute.
I agree that it's important to be very careful about introducing biases into the field of journalism.
On the other hand, I suspect the issue they are highlighting is more that some people are so skeptical that they don't bother engaging with this possibility or the arguments for it at all.