
Journalism, genAI, and why human oversight in the newsroom is more important than ever

"We cannot make good news out of bad practice." - Edward R. Murrow, American broadcast journalist and war correspondent

Good journalism depends on facts and credibility, and artificial intelligence presents both opportunities and challenges in this regard. The real priority, however, is understanding that human insight is key to making the most of AI while avoiding pitfalls. As the Council of Europe’s Guidelines on the responsible implementation of artificial intelligence systems in journalism make clear, "the ability to exercise human oversight and control is an important ethical and also legal requirement for the responsible deployment of journalistic AI."


Now, obviously, some journalistic tasks lend themselves to genAI more than others. And for each task, there will be a range of contexts, limitations, risks, and benefits. Yet, the common thread in all these tasks is the need for human oversight.


On the positive side, AI can be a powerful tool to create content, boost research and fact-checking capabilities, and improve audience engagement. On the other hand, AI can also be misused to spread misinformation and undermine confidence in the stories reported.


No matter how advanced genAI becomes, it can't replace the expertise and perspective of living, breathing human media professionals. Their everyday involvement ensures that AI is used ethically, accurately, and responsibly, staying true to industry best practices and the values of the organisation - and that’s why it’s crucial for editors, writers, and investigators to be equipped with the practical skills and guidance needed to mitigate risk.


Here are some practical steps to help editors and journalists ensure human control over AI:


Accountability demands leadership.


When journalists or news organisations fail to be accountable — for example by being opaque about their sources, showing bias, or distorting facts — public trust erodes. To help avoid this from a governance perspective, there should be someone senior who is clearly accountable for AI implementation and deployment, as well as the practical outcomes and implications of its use. Their role should be to provide staff and correspondents with clarity and direction, but also to set and enforce standards, and to take responsibility when things go wrong. Assigning a senior editor or similar figure to oversee AI ensures that there is a human decision-maker who can intervene and make difficult calls as required, and demonstrates to others that the business takes its duties of care and integrity seriously.


Use AI with caution for sensitive stories.


It may be stating the obvious, but certain things requiring expert judgement, creativity, nuance or discretion might simply fail to meet proper journalistic standards if they're outsourced to genAI. For these stories or tasks, a level of human oversight and approval should be maintained at each phase of story development and publication, especially when the tangible impact on the readership or contributors could be significant. For example, an exposé on a corruption scandal may put a whistleblower at risk, and as we saw with the Covid-19 pandemic, news coverage about the safety of a vaccination can influence people's decisions to get the jab (or not). Relying solely on AI for stories such as these could lead to physical, financial, reputational, or emotional harm - and hopefully, these are things that an experienced professional with real-world insight would catch.



A woman - ostensibly at a protest in Russia - wearing a copy of СТАТЬЯ 19 (Article 19 of the Russian Constitution) as a vest, with the words "НЕ РАБОТАЕТ" (DOES NOT WORK) written across it.

No detail is too insignificant.


Even minor omissions or overstatements can impact a story's overall message and how it is received. Take, for instance, an AI-generated report on an economic policy. The algorithm might correctly outline the basic framework, but fail to capture the nuanced implications of its roll-out, perhaps by overlooking critical socio-economic factors or historical contexts that a human journalist would consider. This might happen if the algorithmic training data was insufficient, or the prompt (instruction) given to generate the report wasn't quite right. In our hypothetical scenario, omissions in the story result in confusion or incorrect assumptions about the policy's potential impact: in turn, voter behaviour is inadvertently manipulated and/or the journalists attract criticism. Not good. For this reason, even well-written, plausible-looking genAI texts should be manually reviewed, ideally by someone not involved in the prompt or generation itself. What we're looking for here is not just factual inaccuracies, but incomplete narratives and subtle misrepresentations. This requires critical thinking that AI alone cannot provide - at least, not yet.
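
To make that last point a little more concrete, here's a toy sketch (in Python, with entirely made-up names and a hypothetical Draft structure - not any real newsroom system) of the "second pair of eyes" rule: whoever prompted or generated the draft cannot be the one who signs it off.

```python
from dataclasses import dataclass


@dataclass
class Draft:
    headline: str
    prompted_by: str  # who wrote the prompt and generated the text


def assign_reviewer(draft: Draft, desk_editors: list[str]) -> str:
    """Return an editor who had no hand in generating the draft."""
    eligible = [editor for editor in desk_editors if editor != draft.prompted_by]
    if not eligible:
        raise ValueError("No independent reviewer available - hold publication.")
    return eligible[0]


draft = Draft(headline="What the new fiscal rules mean for renters", prompted_by="asha")
print(assign_reviewer(draft, desk_editors=["asha", "ben", "carmen"]))  # -> ben
```

In reality this check would sit inside a CMS or editorial workflow tool, but the principle is the same: independence of review is enforced, not assumed.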

Obtain proper permissions - or at the very least, good legal advice.


When using genAI, the training data or inputs (e.g. those used to contextualise prompts) might involve a mix of personal data, copyrighted materials, and confidential information. It's always best practice to obtain proper permissions (e.g. licences for IP, and consent or another lawful basis for processing personal data), because a third party's rights in the data may otherwise be infringed or violated through AI. And here's some free legal advice: most AI platforms are provided on an "as-is" basis, which means that the liability (fault) for such infringements is the user's, not the tech company's.


That said, there are often legitimate carve-outs or alternatives worth exploring. Looking at copyright as a primary example, some specific situations allow copyrighted materials to be used without direct permission from the copyright holder, say if fair dealing (UK) / fair use (U.S.A.) or another exception applies. But again, this really depends on the particular AI model being used, as well as the use case of the content in question. Amongst other things, fair dealing won't apply if the content in question is a photograph and the context is the reportage of current events. See? It's a convoluted and highly contentious area, so human judgement and oversight are essential - perhaps in the form of advice from a (human!) lawyer.


A whole industry has evolved around prompts, aimed at helping users get the most out of their genAI tools

Explainable AI (XAI) tools can help to avoid bias.


The "black box" problem refers to the opaque nature of AI systems. If the algorithm's logic and decision-making processes are not transparent or understandable, it becomes difficult (or even impossible) to interpret, justify, or trust the AI's decisions. Take for instance the potential for bias: AI systems can unintentionally perpetuate or exacerbate biases present in their training data, which amongst other things can harm vulnerable or historically oppressed people. A clear example would be a reporter using AI to generate an image to accompany her story on knife crime, only for the tool to return a caricature of a Black man.


As a personal anecdote, DALL-E 2 was released shortly after I returned from maternity leave. I asked it to generate an image of "a lawyer holding a baby". It perhaps won't surprise you to know that each and every one of the multiple images generated showed male lawyers! My example is somewhat trivial, but extrapolated at scale, it's easy to see how racism, sexism, and other prejudices can spread in this way.


Human oversight is crucial to identify and mitigate this sort of bias, and this is where explainable AI (XAI*) tools come in. They're designed to enable people to scrutinise why certain conclusions were reached, or how specific content was generated. XAI tools, also marketed as "transparent box" or "white box" tools, can help human content reviewers better understand how the systems arrived at a particular decision in the first place. And once understood, safeguards can be put in place to prevent bias and other problems in the future.


*Not to be confused with Elon Musk's company of the same name (styled xAI)...! Also, the Partnership on AI has extensive commentary on this whole sub-industry of XAI tools, with players including academic R&D labs, open-source communities, and enterprise software solutions providers.
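
For a flavour of what this looks like in practice, here's a minimal sketch using LIME, one of the better-known open-source explainability libraries, alongside scikit-learn for a toy topic classifier. Everything here - the classifier, the labels, the headlines - is invented purely for illustration; the point is simply that the tool surfaces which words pushed the model towards a given label, giving a human reviewer something concrete to interrogate.

```python
# Illustrative sketch: a toy "crime vs health" topic classifier, plus LIME
# to show which words drove a particular prediction.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data - in a newsroom this would be a real model and real archive.
texts = [
    "stabbing reported in city centre",
    "new vaccine rollout announced",
    "knife seized by police after incident",
    "hospital praises vaccination uptake",
]
labels = [0, 1, 0, 1]  # 0 = "crime", 1 = "health"

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["crime", "health"])
explanation = explainer.explain_instance(
    "police investigate stabbing near hospital",
    model.predict_proba,   # LIME perturbs the text and re-queries the model
    num_features=5,
)

# Each tuple is (word, weight): positive weights push towards "health",
# negative ones towards "crime". Skewed or spurious cues show up here.
print(explanation.as_list())
```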


My beautiful visual representation of the "black box" problem, created by me with Wix's AI image generator!


Build trust with your audience through transparency.


The principle of transparency, to borrow from the Society of Professional Journalists' code of ethics, "means taking responsibility for one’s work and explaining one’s decisions to the public." In practical terms, transparency involves open and honest disclosure about the processes, sources, and objectives behind reporting. Worth noting is that there's a separate (and admittedly somewhat academic) conversation to be had about what "transparency" actually involves, and whether it even increases trust. For our purposes however, I like what a recent Nieman Lab report has to say: "while transparency isn’t a fail-safe and rarely a perfect solution, it helps people identify credible information and journalists whose work they trust."


One straightforward suggestion is to clearly label articles, images, or videos as being produced by generative AI - like I've done with my black box picture, above. But we can go further here, and disclose how AI tools influence editorial decisions more broadly, such as story selection or topic prioritisation. For instance, if an AI system is used to identify trending topics or to suggest story ideas based on behavioural analytics, editors should consider being transparent about this. By doing so, readers gain insight into the editorial process, and the role AI plays in shaping the news agenda. It could also go some way, I believe, towards assuaging certain anxieties about the use of AI in news reporting - not unlike a nutritional label on a bar of chocolate, perhaps (let's be honest: we'll still eat the chocolate...).
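
As a purely illustrative sketch of how such a disclosure might travel with an article through a content management system, here's a small Python example - the field names, structure, and wording are my own invention rather than any particular publisher's schema or standard.

```python
from dataclasses import dataclass


@dataclass
class AIDisclosure:
    tools_used: list[str]       # e.g. ["image generator"]
    editorial_uses: list[str]   # e.g. ["topic prioritisation", "headline suggestions"]
    reviewed_by: str            # the accountable human editor


def render_label(d: AIDisclosure) -> str:
    """Produce the reader-facing 'nutrition label' for the article footer."""
    return (
        "AI disclosure: generative tools (" + ", ".join(d.tools_used) + ") "
        "assisted with " + ", ".join(d.editorial_uses) + ". "
        f"All content was reviewed and approved by {d.reviewed_by}."
    )


print(render_label(AIDisclosure(
    tools_used=["Wix AI image generator"],
    editorial_uses=["illustrating the 'black box' problem"],
    reviewed_by="the author",
)))
```

The same information could just as easily be carried as structured metadata rather than footer text, which would make the disclosure machine-readable as well as human-readable.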


Further guidance and resources...


For more on this important issue, I strongly recommend reviewing the Council of Europe's Explanatory Memorandum to Recommendation CM/Rec(2022)11, which includes insight on safeguarding editorial standards. If you're interested in the "black box" problem and transparency, check out the Turing Institute's work on the right to explanation. Other valuable resources include those from the Poynter Institute for Media Studies, and the UK Parliament's Future of News inquiry on impartiality, trust and technology.


I'm also very excited to be speaking about AI and journalism at the following events:


18 June: I'll be joining One World Media, a media charity that supports, trains and champions journalists and documentary filmmakers across the global south, for their Media Freedom Lab. I'll be speaking in the session The newsroom: challenges and solutions (event by invitation only), and the following night I'll be attending their Awards Ceremony at the Curzon in Soho.

4 July: I'll be speaking on The AI Revolution: Should We Be Issuing Warnings or Welcomes? for the Centre for Investigative Journalism's Summer Conference, hosted at Goldsmiths, University of London, in south London. I'll also be attending the conference the day before, for networking and learning.


23 July: I'll be presenting at a Westminster Media Forum online seminar, Priorities for tackling disinformation and use of deepfakes in the UK. Amongst other things, the panel I'll be joining will discuss counter-disinformation initiatives and media literacy skills.



Screenshot of event information for the Centre for Investigative Journalism's summer conference.














