A Responsible Optimism Approach to AI


Danger and Opportunity Ahead

The Rise of AI: A Double-Edged Sword

With great power comes great responsibility, and that is certainly the case now that the power of AI is being unleashed. Is it amplifying bias? Is it delivering erroneous information? Does it violate intellectual property or copyrights? Is it opening the door to even greater malfeasance than we have seen to date in the digital era?

The Risks of AI

Is everyone ready for all this? Kind of. We can't be doomsayers when it comes to AI issues, but we do need to be proactive about keeping AI responsible. Just over half of organizations, 58%, have some grasp of the risks involved in their AI efforts, a recent PwC survey of 1,001 executives found. And while there is interest in delivering responsible AI, only 11% of executives can say they have fully moved forward with responsible AI initiatives.

The Need for Responsible AI

Thinkers and doers across the business landscape agree that we are entering an age of great danger, and great opportunity. "Ultimately, we all want our technology to be the safest and most sophisticated in the world," said Arun Gupta, CEO of NobleReach Foundation. "The question is not whether this technology should be regulated, but how we ensure we have the talent and innovation infrastructure, both in government and the private sector, to unlock AI's benefits while mitigating its dangers."

Building an Infrastructure for Responsible AI

In many cases, AI itself can help mitigate some of these dangers, Gupta added. "We must build an infrastructure that supports responsible AI optimism." An AI-optimism approach means "investing in initiatives that focus on trusted and secure AI," Gupta said. "We must maintain an open dialogue between industry, academia, and government as risks evolve. We need to bring the brightest minds and best research to solve problems and maximize AI’s positive societal impact."

The Importance of Human Oversight

There is a "lack of transparency and guardrails in the datasets used to train AI models and the potential bias and discrimination that may result from it," said Thomas Phelps, CIO of Laserfiche and member of the SIM Research Institute Advisory Board. "If AI is employed without human oversight, the wrong decision or recommendation could be made in critical areas such as law enforcement, court systems, credit and lending, insurance coverage, healthcare, or even employment matters."

The Risk of AI-Based Manipulation

Another risk AI poses is the specter of AI-based manipulation, something that its developers and proponents have yet to fully get their arms around. For example, the answers that conversational AI systems provide can impact how people think, warned David Shrier, professor at Imperial College Business School and author of Welcome to AI. "A very small number of people, privately employed, decide on what kind of answers these companies provide you," Shrier continued. "What’s worse, since many of these systems are self-learning, they are susceptible to manipulation. If you contaminate the data that goes into these AIs, you can corrupt them."

The Need for Transparency and Regulation

It’s important, then, "to protect the rights of individuals, and the intellectual property of people who shape ideas," said Shrier. "The average consumer or worker doesn’t realize how much they’ve been giving away to certain large tech platforms. We have to do this in a way that doesn’t damage economic productivity and competitiveness." More broadly, Shrier added, "as we hand over decisions to artificial intelligences, like who gets a loan, or whether or not a car will brake when a person steps in front of it, how do we know that the algorithm is giving us the correct answer?"

The Public’s Attitude Towards AI

Significantly, people are clamoring for, not fearing, AI. But they are also willing to accept restraints in exchange for responsible use of AI. "We want to have these amazing technologies in our lives, much as we wanted the convenience of having cars to get around," said Shrier. "We eventually learned to live with brake lights and windshield wipers and seat belts and airbags, all of which made our cars safer. We need the equivalent for AI."

Conclusion

As new technologies emerge, the industry finds ways to make them more secure and compliant, much as it did with data privacy controls and data portability, Shrier noted. "You used to not be able to very easily move your banking data or your phone number from one company to another. Yet, when privacy regulations came along, technology companies with their deep broad base of innovation and resources were able to figure out how to comply."

FAQs

  • Q: What are the risks of AI?
    A: AI can amplify bias, deliver erroneous information, and violate intellectual property or copyrights.
  • Q: Is everyone ready for AI?
    A: Not yet, but many organizations are taking steps to be proactive about keeping AI responsible.
  • Q: What is the need for responsible AI?
    A: To ensure that AI technology is used to benefit society while minimizing its risks.
  • Q: How can we ensure responsible AI?
    A: By building an infrastructure that supports responsible AI optimism, investing in initiatives that focus on trusted and secure AI, and maintaining an open dialogue between industry, academia, and government.