
Is AI Truly Woke?


Introduction to AI and Its Impact

AI is likely to be the most transformative technology of our lifetimes. However, even as a firm believer in the good it can do, I can see that there is a huge amount of hype and confusion around it. Some of the biggest and most powerful corporations have bet the house on selling it to us. It’s also a highly contentious subject, with many people rightly concerned about its possible impact on jobs, privacy, and security.

Concerns About AI

Another frequently voiced fear is that AI will be used to create disinformation that furthers political narratives or even influences our democratic choices. Two claims come up repeatedly: the first is that AI can be used to spread extremist beliefs and perhaps even create extremists; the second is that AI output veers toward the “woke” – a term originally used by African American civil rights activists but now most frequently used by conservatives to refer to progressive or pro-social-justice ideas and beliefs.

Does AI Have A Left-Wing Bias?

Conservative and right-wing commentators frequently claim that AI, and the Silicon Valley culture where much of it originates, have a left-wing bias. There does seem to be at least some evidence to back up these beliefs: a number of studies, including one by the University of East Anglia in 2023 and one published in the Journal of Economic Behavior & Organization, make the case that this is true. Of course, generative AI doesn’t actually have a political opinion – or any opinions, for that matter. Everything it “knows” comes from data scraped from the web.

Understanding AI’s Data Sources

If that data happens to support a progressive consensus – for example, if the majority of climate science data supports the theory that climate change is man-made – then the AI is likely to present this as true. Beyond presenting facts with an apparently left-wing slant, some of the research found that AI models would simply refuse to process “right-wing image generation” requests. And when prompts describe images featuring progressive talking points like “racial-ethnic equality” or “transgender acceptance,” the results are more likely to show positive imagery (happy people, for example).

Can AI Turn Us Into Extremists?

While some commentators worry that AI will turn everyone into liberals, others are more concerned that it will be used to radicalize people or further extremist agendas. The International Centre for Counter-Terrorism, based in The Hague, reports that terrorist groups already make wide use of generative AI to create and spread propaganda. This includes using fake images and videos to spread narratives that align with their values. Terrorist and extremist groups, including Islamic State, have even released guides demonstrating how to use AI to develop propaganda and disinformation.

The Role of Humans in AI Bias

Again, this is a case of humans using AI to persuade people to adopt their views rather than an indication that AI is extreme or prone to suggesting extreme ideas and behaviors. However, one inherent risk with AI is its capacity to reinforce extreme views through the algorithmic echo-chamber effect. This happens when social media and news platforms use AI to suggest content based on past engagement. The result is that users are shown more of what they already agree with, creating “echo chambers” where people repeatedly see content that mirrors their existing beliefs. If those beliefs are extreme, AI can amplify their effect by serving up similar, ever more radical content.
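The echo-chamber dynamic can be illustrated with a toy simulation. This is a minimal sketch, not any real platform’s algorithm: it assumes a naive recommender that mostly serves the category a user has engaged with most, with only occasional random exploration, and it uses two hypothetical content labels.

```python
import random

def recommend(history, catalog, explore_rate=0.1):
    """Pick the next item: usually the category the user has engaged
    with most so far, occasionally a random one (exploration)."""
    if not history or random.random() < explore_rate:
        return random.choice(catalog)
    # Exploit: serve whichever category dominates the engagement history.
    return max(set(history), key=history.count)

def simulate(steps=1000, seed=42):
    """Start from a single 'partisan' engagement and let the feedback
    loop run; return the final share of partisan content in the feed."""
    random.seed(seed)
    catalog = ["moderate", "partisan"]
    history = ["partisan"]  # one initial click is enough to seed the loop
    for _ in range(steps):
        history.append(recommend(history, catalog))
    return history.count("partisan") / len(history)
```

Even with 10% exploration, a single early engagement tips the "most engaged" category permanently, so the feed converges to roughly 95% partisan content – the self-reinforcing loop the article describes.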

Can AI Really Influence The Way We Think?

It’s essential to remember that while AI is likely to play an increasing role in shaping the way we consume information, it can’t directly change our beliefs. It should also be noted that AI can help counter these threats: it can detect bias in training data that could lead to biased responses, and it can find and remove extremist content from the internet. Nevertheless, there is clearly a perception, which appears to be justified, that groups of all political affiliations will inevitably use it to try to steer public opinion.

Conclusion

Understanding where misinformation comes from and who might be trying to spread it helps us to hone our critical-thinking skills and become better at understanding when somebody (or some machine) is trying to influence us. These skills will become increasingly important as AI becomes more ingrained in everyday life, no matter which way we lean politically.

FAQs

  • Q: Can AI have a political bias?
    A: AI itself doesn’t have opinions, but the data it’s trained on can reflect biases, which it may then present as factual.
  • Q: Can AI be used to spread extremist beliefs?
    A: Yes, AI can be used to create and spread propaganda, but this is a result of human action rather than AI’s inherent properties.
  • Q: How can AI influence political opinions?
    A: AI can influence opinions indirectly by creating echo chambers where users are shown content that aligns with their existing views, potentially amplifying extreme beliefs.
  • Q: Can AI detect and remove bias or extremist content?
    A: Yes, AI can be used to detect bias in data and remove extremist content from the internet, helping to counter misinformation and radicalization efforts.