Innovation and Technology

AI Chatbots Are Feeding You More False Information Than Ever

AI Chatbots’ Troubling Trend: Spreading Falsehoods at Alarming Rates

Recent research by NewsGuard has uncovered a disturbing trend in the world of AI chatbots: they are repeating falsehoods more often than ever. The analysis, which examined the performance of the top 10 generative AI tools, found that these chatbots now repeat false claims 35% of the time when prompted with questions about controversial news topics, a significant increase from last year’s rate of 18%.

The study looked at how chatbots from leading companies like OpenAI, Microsoft, and Google handle provably false claims. The results showed that some chatbots are far more prone to spreading misinformation than others. Inflection’s Pi chatbot provided false claims 57% of the time, followed by Perplexity’s answer engine at 47% and Meta AI at 40%. At the other end of the scale, Anthropic’s Claude chatbot was the most accurate, offering up false claims just 10% of the time.

What’s Behind the Sudden Decline in Chatbot Accuracy?

According to NewsGuard’s editor for AI and foreign influence, McKenzie Sadeghi, the reason for this decline in accuracy lies in a change in how AI tools are trained. Unlike in the past, when chatbots would refuse to answer certain prompts or cite data cutoffs, they are now pulling from real-time web searches. This has made them more susceptible to misinformation and propaganda, particularly from malign actors like Russian disinformation operations.

For instance, NewsGuard found that the leading generative AI tools were boosting Moscow’s disinformation efforts by repeating false claims from the pro-Kremlin Pravda network 33% of the time. As a result, large volumes of Russian propaganda have been incorporated into the outputs of Western AI systems, contaminating their responses.

The Pravda Network’s Influence on AI Models

A recent investigation by the American Sunlight Project revealed that the number of domains and subdomains associated with Pravda has almost doubled, to 182. Notably, these sites have poor usability, with no search function, broken formatting, and unreliable scrolling. This suggests they are not intended for human readers, but rather for manipulating AI models by flooding the web with machine-readable content.

As Nina Jankowicz, co-founder and CEO of the American Sunlight Project, noted, “The Pravda network’s ability to spread disinformation at such scale is unprecedented, and its potential to influence AI systems makes this threat even more dangerous.” She emphasized the need for internet users to be more wary of the information they consume, particularly in the absence of regulation and oversight in the United States.

A Call to Action: Naming and Shaming Chatbots

NewsGuard’s latest report marks the first time the organization has named and shamed particular chatbots. According to Matt Skibinski, NewsGuard’s chief operating officer, “By naming the chatbots, we’re giving policymakers, journalists, the public, and the platforms themselves a clear view of how the major AI tools perform when confronted with provably false claims.” Inflection, Perplexity, and Meta have been approached for comment on the report’s findings.
