Prudent Guardrails For The AI Race

The rapid development of Artificial Intelligence (AI) has sparked a heated debate about the need for regulations to ensure its safe and responsible use. As AI systems become increasingly powerful, there is a growing concern that they could pose significant risks to society if not properly controlled. In response to these concerns, some states in the US have started to take matters into their own hands, introducing laws that aim to mitigate the potential dangers of AI.

Introduction to RAISE Act

The Responsible AI Safety and Education (RAISE) Act, introduced in New York, requires developers of large AI systems to maintain a safety plan, disclose critical safety incidents, and conduct risk assessments before releasing their models. The law specifically targets high-compute frontier systems, whose capabilities could reshape markets or affect national security. By focusing on these systems, RAISE aims to prevent catastrophic harms, such as those involving biosecurity or critical infrastructure.

Key Provisions of RAISE Act

One of the key provisions of RAISE is that it only applies to the largest builders of AI systems, those spending over $100 million on the final training run of a model. This narrow focus is intended to avoid burdening startups or open-source researchers who are working on smaller systems. The law also requires developers to disclose critical safety incidents and to have a plan in place for mitigating risks. Additionally, if testing shows that a model poses an unreasonable risk, it cannot be released.

Broader Implications of AI Regulation

The introduction of laws like RAISE and California’s SB 53 marks a significant shift in the way that AI is regulated. These laws demonstrate that states are taking a proactive approach to addressing the potential risks of AI, rather than waiting for federal action. By setting practical boundaries for AI development, states are creating a framework for responsible innovation that can help to prevent catastrophic harms.

Expert Opinions on AI Regulation

Experts in AI and cybersecurity are divided on the effectiveness of laws like RAISE. Some, like Vineeta Sangaraju, security solutions engineer at Black Duck, see RAISE as a landmark step toward responsible innovation in AI. Others, like John Watters, CEO of iCOUNTER, are more skeptical, arguing that rules cannot easily constrain data sets or prevent dual-use applications. Despite these differing views, there is a growing recognition that progress in AI needs to be accompanied by friction, in the form of regulations that ensure safe and responsible use.

Conclusion

The development of AI regulations like RAISE and SB 53 marks an important step toward ensuring the safe and responsible use of AI. While implementing and enforcing these regulations will be challenging, they represent a crucial move toward mitigating the potential risks of AI and ensuring that its benefits are realized in a sustainable and responsible way.
