Innovation and Technology

“Godfather Of AI” Launches Nonprofit For Safer Systems

Introduction to LawZero

Computer scientist Yoshua Bengio, often referred to as the “godfather” of AI, has launched a nonprofit aimed at creating AI systems that prioritize safety over business interests. The organization, called LawZero, “was founded in response to evidence that today’s frontier AI models are developing dangerous capabilities and behaviors, including deception, self-preservation and goal misalignment,” reads a statement posted to its website. “LawZero’s work will help to unlock the immense potential of AI in ways that reduce the likelihood of a range of known dangers associated with today’s systems, including algorithmic bias, intentional misuse and loss of human control.”

Background on Yoshua Bengio

Bengio is a worldwide leader in AI and a co-recipient of the 2018 Turing Award, the Association for Computing Machinery’s prestigious annual prize that’s sometimes called the Nobel Prize of Computing. He won the award alongside two other deep-learning pioneers — Geoffrey Hinton, another “godfather of AI” who worked at Google, and Yann LeCun — for conceptual and engineering breakthroughs, made over decades, that have positioned deep neural networks as a critical component of computing. Bengio also serves as scientific director at Mila (Montreal Institute for Learning Algorithms), an AI research institute. Now, he’ll add LawZero president and scientific director to his resume.

What Are The Main AI Safety Concerns?

While artificial intelligence has sparked considerable excitement across industries — and Bengio recognizes its potential as a driver of significant innovation — it’s also led to mounting concerns about possible pitfalls. Generative AI tools are capable of producing text, images and video that spread almost instantly over social media and can be difficult to distinguish from the real thing. Bengio has called for slowing the development of AI systems to better understand and regulate them. “There is no guarantee that someone in the foreseeable future won’t develop dangerous autonomous AI systems with behaviors that deviate from human goals and values,” the University of Montreal professor wrote in a blog post announcing why he’d signed a 2023 open letter calling for a slowdown in the development of some AI tools.

Structure and Funding of LawZero

LawZero is structured as a nonprofit “to ensure it is insulated from market and government pressures, which risk compromising AI safety,” the statement says. LawZero started with $30 million in funding and says it’s assembling a team of world-class AI researchers. Together, the scientists are working on a system called Scientist AI, which LawZero calls a safer, more secure alternative to many of the commercial AI systems being developed and released today.

What Could A Safer AI System Look Like?

Scientist AI is non-agentic, meaning it doesn’t have agency or work autonomously, but instead behaves in response to human input and goals. “Such AI systems could be used to provide oversight for agentic AI systems, accelerate scientific discovery and advance the understanding of AI risks and how to avoid them,” LawZero says. “LawZero believes that AI should be cultivated as a global public good—developed and used safely towards human flourishing.”

Conclusion

The launch of LawZero marks an important step towards prioritizing safety in AI development. With its focus on creating non-agentic AI systems and its commitment to transparency and accountability, LawZero has the potential to make a significant impact in the field of AI research. As the use of AI continues to grow and expand into new areas, it is essential that we prioritize safety and responsibility in its development.

FAQs

  • What is LawZero?: LawZero is a nonprofit organization aimed at creating AI systems that prioritize safety over business interests.
  • Who founded LawZero?: LawZero was founded by Yoshua Bengio, a computer scientist and co-recipient of the 2018 Turing Award.
  • What is Scientist AI?: Scientist AI is a non-agentic AI system being developed by LawZero, which behaves in response to human input and goals.
  • Why is AI safety important?: AI safety is important because AI systems have the potential to develop dangerous capabilities and behaviors, including deception, self-preservation, and goal misalignment.
  • How is LawZero funded?: LawZero launched with $30 million in initial funding.