Newsroom AI and the ethical challenge: Things to consider when writing your guidelines

Accuracy, fairness, trust - did the job of journalism get a lot more difficult with the arrival of GenAI in 2023? 

AI in the news industry is not a new phenomenon. It has been seeping into the business for many years. The most visible area has been recommendation - delivering story suggestions to readers via algorithm. Newsrooms work with these systems and they are an accepted part of how digital news operates. Most journalists did not think much about them because they do not directly affect their day-to-day work. The ground shifted 12 months ago.

On 30th November 2022, OpenAI unveiled ChatGPT.  The AI space had been a place for specialists - developers and highly skilled data journalists - but suddenly sophisticated tools were accessible to everyone by simply typing in a question.  People signed up to play in record numbers, and many of them, it seemed, were journalists.  

News organisations began to talk about the great opportunities that would open up with the use of Generative AI (if only they could figure out what to do with it). Some set up teams and began experimenting immediately. Others noticed a warning light flashing: with staff already playing with these new generative tools, some guidelines for use were needed. 2023 has been the year of AI experimentation. It has also been a year of reflection on the ethical concerns for the industry, for those who work in it and for those who use its products.

There are serious issues to address, and setting about writing guidelines for the ethical use of a set of developing technologies is daunting. The JournalismAI project at LSE/Polis found that 60% of journalists taking part in its latest survey were concerned about the impact of AI. It found: “For journalists, the central question is, how do we integrate AI technologies in journalism while upholding journalistic values like accuracy, fairness, accountability, and transparency.”

Those values are among the fundamental principles that good news organisations aspire to uphold. Journalism should be well placed to deal with this; it is used to applying these values across its activities. It now needs to apply them to the role of AI. AI is not a distinct ethical question; it is an addition to the existing ethical framework.

Easy to say, more difficult to do perhaps, but many news organisations have begun the work and there are high quality examples to reference if you are about to draft or develop guidance for your own organisation.  Getting on the front foot and letting the audience into your thinking about how you’ll deal with this new landscape is a sensible approach.  The first to do this was US tech title Wired, which published its policy in March 2023. It acknowledged in doing so that the policy would change, “We recognize that AI will develop and so may modify our perspective over time, and we’ll acknowledge any changes in this post.”

Many have referenced this need for flexibility in how their policies will have to adapt as technologies develop and use cases for a new raft of AI tools are found.  One route through this has been to start by succinctly laying out some broad principles, as we see at The Guardian and the BBC in the UK, and at Rappler in the Philippines, among others. 

Broad principles provide a strong foundation and a route into addressing issues in more detail. It may be best to approach these as a series of questions that you need to answer. Here’s my opening list:

Is your challenge or opportunity best addressed by using AI? 

AI has been in a hype cycle over the last 12 months. There is a FOMO persuading some that AI may be the answer no matter the question. What looks like a case for GenAI may be better dealt with through more controllable automation technologies, or even left in human hands.

Do you understand the technology?

There’s a wide knowledge disparity in most news organisations. AI systems,  especially GenAI technologies, are complex. Gaining an understanding of the basic principles of how they operate is a vital first step. Newsrooms should be able to explain how their news production processes work. If they cannot, how can they exercise control over them? 

How will you correct bias within AI tools? 

The training data used to develop an AI may be flawed if it is not fully representative, causing social, racial or gender bias. AI has the potential to amplify this issue. The operation of an algorithm may also introduce bias. Early use of image generators showed racial and gender bias in the images produced - e.g. doctors depicted as male, nurses as female. Measures need to be put in place to identify and counter this.

Can you guarantee accuracy? 

This is a fundamental question for any news organisation, as accuracy is the bedrock of trust with your users. GenAI systems are known to ‘hallucinate’ (i.e. make stuff up), so you cannot guarantee that they are accurate. Even in text summarisation, considered a safer task, the best-performing LLMs show a 3% failure rate, according to this study. No news organisation is aiming for just 97% accuracy. AI outputs need to be checked. Which brings us to…

What is the role of human oversight of AI? 

AI needs an editor. News companies deploying GenAI technologies are doing so with a human in the loop to oversee the process, or to fact-check and edit the outputs. AI can mean fewer people involved in a mundane process; it will seldom mean no people. An AI programme cannot be held responsible for accuracy or accountable for error. Responsibility and accountability must rest with identifiable individuals.

Will you be transparent about the use of GenAI with your audience? 

The automatic answer to this must be YES. The question is what you tell them to ensure they are properly informed. Some editors I have spoken to express concern that the blanket use of an AI disclaimer may lead the audience to assume wrongly that the ‘robots were doing all the work’.  There are issues about broader AI literacy but that does not excuse news organisations from an obligation to be clear and transparent about how the news is produced. 

How do you carry out newsroom change in an ethical way? 

Journalists have been fretting about their jobs being taken by robots for many years. GenAI has brought that prospect closer, with early use cases focused on ‘efficiency’ - that is, cheaper and quicker with fewer people. The question for any news organisation that achieves savings from AI is how those savings will be used. Will the time saved go into higher-quality work? Will the money saved (or at least some of it) be reinvested?

What about your relationship with the new tech overlords? 

Early battle lines were drawn over how publisher content was used as training data without permission. This is both an ethical and a business issue. The News/Media Alliance, an industry body representing publishers of news, magazines and books, has demanded recognition of intellectual property rights, recompense for access to content, and transparency and fairness in usage. In July, one of the first official deals for content access was struck between the US news agency Associated Press (AP) and OpenAI, the makers of ChatGPT. Others may follow, but smaller organisations may struggle. An increasing number of news companies are preventing AI companies from crawling their websites.
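By way of illustration (the details will vary by publisher and by crawler): blocking OpenAI’s GPTBot crawler typically means adding two lines to a site’s robots.txt file -

User-agent: GPTBot
Disallow: /

- with similar entries for any other AI crawlers a publisher chooses to exclude. Which crawlers to block, if any, is an editorial and commercial decision for each organisation.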

Navigating the ethics of AI means nuance and complexity. One thing that will help is how you frame the issue at the start. In 2018, Gina Chua, then at Reuters, used the phrase “cybernetic newsroom” to describe an approach to AI and automation technologies based not on a binary of human versus robot but on the best combination of the two. In my own work I have found that thinking about AI as a set of tools that journalists can put to use to better serve their audience is a helpful starting point.

2023 has been a watershed moment for technology in journalism. Journalism has always adapted to technological development.  It will continue to do so. 

Gary Rogers is a Senior Newsroom Strategy Consultant at Fathm, leading the work on newsroom AI.  In 2017, Gary co-founded RADAR AI, a UK news agency that combines data journalism with AI tools to produce stories at scale. 
