AI search engines cite incorrect sources at an alarming 60% rate, study says


Even when these AI search tools cited sources, they often directed users to syndicated versions of content on platforms like Yahoo News rather than original publisher sites. This occurred even in cases where publishers had formal licensing agreements with AI companies.

URL fabrication emerged as another significant problem. More than half of the citations from Google’s Gemini and Grok 3 led users to fabricated or broken URLs that resolved to error pages. Of the 200 citations tested from Grok 3, 154 resulted in broken links.

These issues put publishers in a difficult position: blocking AI crawlers can mean losing attribution entirely, while permitting them allows widespread reuse of their content without driving traffic back to their own websites.

A graph from CJR showing that blocking crawlers doesn’t mean AI search providers honor the request. Credit: CJR
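
For readers curious about the mechanism behind that chart: publishers typically opt out by listing crawler user agents in a robots.txt file, and honoring those rules is voluntary on the crawler’s side. The short Python sketch below is not from the CJR report; the robots.txt contents and the example URL are hypothetical, and it simply uses the standard-library urllib.robotparser to show how such directives are evaluated.

    # A minimal sketch of how the Robots Exclusion Protocol works: a publisher
    # lists crawler user agents to disallow in robots.txt, and a well-behaved
    # crawler checks those rules before fetching a page. The robots.txt content
    # and article URL below are hypothetical.
    from urllib import robotparser

    ROBOTS_TXT = """\
    User-agent: GPTBot
    Disallow: /

    User-agent: PerplexityBot
    Disallow: /

    User-agent: *
    Allow: /
    """

    parser = robotparser.RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())

    article_url = "https://example-publisher.com/news/some-article"
    for agent in ("GPTBot", "PerplexityBot", "Googlebot"):
        verdict = "allowed" if parser.can_fetch(agent, article_url) else "blocked"
        print(f"{agent}: {verdict}")

    # Nothing technically enforces these rules; compliance is up to the crawler,
    # which is why the CJR chart above shows content being retrieved even when
    # directives like these are in place.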

Mark Howard, chief operating officer at Time magazine, told CJR he is concerned about ensuring transparency and control over how Time’s content appears in AI-generated search results. Despite these issues, Howard sees room for improvement in future iterations, stating, “Today is the worst that the product will ever be,” and citing substantial investment and engineering efforts aimed at improving these tools.

However, Howard also did some user shaming, suggesting it’s the user’s fault if they aren’t skeptical of free AI tools’ accuracy: “If anybody as a consumer is right now believing that any of these free products are going to be 100 percent accurate, then shame on them.”

OpenAI and Microsoft provided statements to CJR acknowledging receipt of the findings but did not directly address the specific issues. OpenAI pointed to its commitment to supporting publishers by driving traffic through summaries, quotes, clear links, and attribution. Microsoft stated that it adheres to the Robots Exclusion Protocol (the robots.txt standard) and publisher directives.

The latest report builds on previous findings published by the Tow Center in November 2024, which identified similar accuracy problems in how ChatGPT handled news-related content. For more detail on the fairly exhaustive report, check out Columbia Journalism Review’s website.


