
May 13, 2025

Have journalists skipped the ethics conversation when it comes to using AI?

Credit: Pixabay/CC0 Public Domain

Artificial intelligence (AI) is being used in journalistic work for everything from transcription and translation to writing and publishing stories.

It's even being used in cases where time-strapped reporters don't have time to do the work themselves.

What's lagging behind all this experimentation are the important conversations about the ethics of using these tools. This disconnect was evident when we interviewed journalists in a mix of newsrooms across Canada from July 2022 to July 2023, and it remains a problem today.

We conducted semi-structured interviews with 13 journalists from 11 Canadian newsrooms. Many of the people we spoke to told us that they had worked at multiple newsrooms throughout their careers.



What journalists told us

Some of what we heard was reassuring. One journalist told us: "The one thing that we are very particular about when we use this technology is that our editors always have the ability to override what the machine is doing."

At the same time, however, it became clear that many journalists are still operating in the ethical equivalent of the Wild West.

In many cases, journalists we spoke to talked about just following their gut when it came to deciding whether using a given AI tool for a given task was ethical. As one of our interviewees put it: "There's a rule book in my head."

When we asked interviewees how they knew their colleagues at the same publication followed the same ethical code they did when using AI, most could not answer except to imply that their co-workers wouldn't have been hired if they didn't share the same principles. One journalist said, "I've worked there for 14 years now … I can't think of anyone whose ethics I would disagree with."

Getting the ethics of AI right, and being seen to be doing so, is important because journalism has a trust problem and needs to do everything possible to reverse the trend.

Multiple studies have shown that Canadian audiences want to know if AI tools are being used in newsrooms, and want disclosure for journalism created using AI.

AI and news

Audiences, meanwhile, are being fed a steady diet of examples that illustrate how using AI tools to create journalistic work can go very wrong.

News organizations might think they're being transparent with audiences about how much content is being created using AI, but our research finds the evidence is mixed at best, especially in circumstances where AI generates the content and an editor approves it in the content management system before it is published.

In one memorable Zoom interview, an editor walked us through the AI-generated content in an article posted online, saying that it was clearly identified as AI on the webpage.

However, when we shared the page with them, they were shocked to discover there was no information anywhere on it indicating the article was AI-generated. They said it would be fixed immediately, but when we last checked, the article still said nothing about the AI tool used to generate it.

While we gathered data from interviews, newsrooms in Canada started releasing guidance through internal emails and public policies. It is hard to find any language in publicly accessible policies that refers explicitly to how AI is being used or the ethics surrounding such use. It's also unclear who is involved in conversations about ethical AI use in newsrooms, and who is not.

As one journalist we interviewed put it: "I think my frustration personally comes from again the lack of openness to have this conversation about AI, and the urgency of it, because I think … we're so busy trying to survive, we don't realize that having this conversation about AI will help us survive."

Our research suggests journalists and news organizations are still struggling, in the midst of rapid technological change, to arrive at a shared understanding of AI tools, their usage, the limitations of the underlying programming, and best practices that build rather than erode trust.

Provided by The Conversation


This article has been reviewed according to Science X's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility:

fact-checked
trusted source
written by researcher(s)
proofread
