AI Assistants Mislead Users in Nearly Half of News Responses, EBU–BBC Study Shows

EBU and BBC call for more transparency to fight misinformation

Artificial intelligence (AI) assistants distort or misrepresent news content in almost half of their responses, a study released Wednesday by the European Broadcasting Union (EBU) and the BBC has found.

The research analyzed 3,000 answers generated by leading AI-powered assistants to news-related questions. The systems tested included OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity. The study evaluated responses for factual accuracy, source attribution, and the ability to separate fact from opinion.

The study covered 14 languages and revealed widespread inconsistencies, highlighting the risks for users who rely on AI tools for news. The findings come as media regulators and news organisations increasingly warn about the spread of misinformation through generative AI models.

The EBU and BBC said the study shows the urgent need for transparency in how AI assistants process and deliver news, cautioning that their growing use could blur the line between verified journalism and synthetic information.

The research found that 45% of AI responses contained at least one significant problem, while 81% had some form of issue.

Reuters has reached out to the companies for comment on the findings.

Google, which develops the Gemini assistant, has previously said it welcomes feedback to improve the platform and make it more useful for users.

OpenAI and Microsoft have acknowledged that “hallucinations”—when AI generates false or misleading information, often due to incomplete data—are a challenge they are working to fix.

Perplexity claims on its website that its “Deep Research” mode achieves 93.9% factual accuracy.

Sourcing Errors

The study found that a third of AI responses contained serious sourcing mistakes, including missing, misleading, or incorrect attribution.

Gemini had the highest rate of sourcing issues, with 72% of responses affected, compared to less than 25% for other AI assistants.

Accuracy problems were found in 20% of all AI responses, including outdated or incorrect information. Examples included Gemini wrongly reporting changes to laws on disposable vapes and ChatGPT listing Pope Francis as the current Pope months after his death.

Twenty-two public-service media organisations from 18 countries—including France, Germany, Spain, Ukraine, Britain, and the United States—participated in the study.

As AI assistants increasingly replace traditional search engines for news, the EBU warned this could undermine public trust.

“When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation,” EBU Media Director Jean Philip De Tender said.

According to the Reuters Institute’s Digital News Report 2025, around 7% of all online news consumers and 15% of those under 25 use AI assistants for news.

The report urges AI companies to take responsibility and improve how their AI assistants respond to news-related queries.
