<p>A large international study coordinated by the <a href="/tags/ebu/" rel="tag">#EBU</a> and led by the <a href="/tags/bbc/" rel="tag">#BBC</a> found that AI assistants misrepresent news content 45% of the time across different languages and platforms, with <a href="/tags/gemini/" rel="tag">#Gemini</a> performing the worst.</p><p>[…] Key findings: </p><p>• 45% of all AI answers had at least one significant issue.<br>• 31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.<br>• 20% contained major accuracy issues, including hallucinated details and outdated information.<br>• Gemini performed worst, with significant issues in 76% of responses – more than double the rate of the other assistants – largely due to its poor sourcing performance.<br>• A comparison between the BBC’s results from earlier this year and this study shows some improvement, but error rates remain high.</p><p><a href="https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content" rel="nofollow" class="ellipsis" title="www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content"><span class="invisible">https://</span><span class="ellipsis">www.bbc.co.uk/mediacentre/2025</span><span class="invisible">/new-ebu-research-ai-assistants-news-content</span></a></p><p><a href="/tags/aihype/" rel="tag">#aihype</a> <a href="/tags/llm/" rel="tag">#llm</a> <a href="/tags/openai/" rel="tag">#openai</a> <a href="/tags/perplexity/" rel="tag">#perplexity</a> <a href="/tags/chatgpt/" rel="tag">#chatgpt</a></p>