Innovation and Technology

The Unregulated Path To Superintelligence That Could Make Human Labor Obsolete

The concept of Artificial General Intelligence (AGI) and superintelligence has been a topic of interest and concern in the tech community for several years. At the recent Web Summit in Lisbon, physicist and president of the Future of Life Institute, Max Tegmark, delivered a sobering message about the potential risks of artificial superintelligence. Tegmark has spent over a decade warning about the existential risks of superintelligence, and he believes that the threat is now closer than ever.

The Concept of Superintelligence

The term superintelligence was popularized by philosopher Nick Bostrom in his 2014 book Superintelligence: Paths, Dangers, Strategies, and it refers to AI systems whose general intelligence significantly exceeds human-level intelligence across virtually all domains. Such a system would not only be better at specific tasks like chess or language translation but would also surpass human capabilities in creativity, problem-solving, scientific reasoning, and every other cognitive task. The idea itself is older: mathematician I.J. Good proposed the concept of an “intelligence explosion” in 1965, warning that an ultra-intelligent machine could trigger a recursive cycle of self-improvement, leaving human intelligence far behind.

AGI, or Artificial General Intelligence, is often mentioned alongside superintelligence, but the two are distinct. AGI refers to AI systems with capabilities on par with human-level intelligence across most domains. While AGI would be a significant milestone, superintelligence goes vastly beyond it, and Tegmark believes the two milestones are closer together than one might think: once AGI exists, it could rapidly evolve into superintelligence through recursive self-improvement, making the transition a critical area of concern.

The Risks of Superintelligence

The potential risks of superintelligence are significant, and Tegmark highlights the lack of regulation in the industry as a major concern. In the United States, he notes, there is more regulation on sandwiches than on AI, and this lack of oversight could have unpredictable consequences. Tegmark points to recent tragedies, such as teenagers who died by suicide after conversations with chatbots, and argues that such incidents would be unthinkable in regulated industries. The comparison to pharmaceuticals is deliberate: Tegmark recounts the story of thalidomide, a drug that caused severe birth defects and led to a dramatic strengthening of the FDA’s authority and to modern clinical-trial requirements.

One of the most common concerns about artificial intelligence is that it will cause widespread job losses, perhaps eliminating entire professions. With superintelligence, things could get even worse: a system that can do everything humans can do, only better, would make it impossible for humans to compete, and nobody could get paid to work. While some argue that AI will create more jobs than it erases, a recent petition to ban the creation of superintelligence has gathered over 127,000 signatures, including those of celebrities and luminaries in the field.

The Need for Regulation

Tegmark pushes back against critics who argue that concerns about superintelligence are far-fetched and that AI will create more jobs than it erases. Dismissing the long-term risk, he argues, is like saying that because houses catch fire we only need better fire trucks, and that we shouldn’t talk about global warming because it distracts from building a better fire department. Instead, Tegmark believes we need a proactive approach to regulating the industry before the risks of superintelligence materialize. He also takes issue with the current “arms race” framing, which he believes tech companies use to avoid regulation.

The Possibility of an International Treaty

Tegmark is more optimistic than most about the possibility of an international treaty to regulate superintelligence. He believes that China and the United States could independently constrain their own companies out of self-preservation, implementing safety standards that must be met before deployment. Then, just as with nuclear weapons, they could find common ground on preventing proliferation to terrorists or rogue states. There is also a pessimistic scenario, however, in which political divisions and corporate interests leave us paralyzed, unable to coordinate a response until it is too late.
