OpenAI co-founder warns 'superintelligent' AI must be controlled to prevent possible human extinction

An OpenAI co-founder warned in a Tuesday blog post that superintelligence has the potential to lead to human extinction if not controlled or steered away from "going rogue."

A co-founder of artificial intelligence leader OpenAI is warning that superintelligence must be controlled to prevent the extinction of the human race.

"Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction," OpenAI co-founder Ilya Sutskever and head of alignment Jan Leike wrote in a Tuesday blog post, adding that they believe such advances could arrive as soon as this decade.

They said managing such risks would require new institutions for governance and solving the problem of superintelligence alignment: ensuring AI systems much smarter than humans "follow human intent." 

"Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us and so our current alignment techniques will not scale to superintelligence," they wrote. "We need new scientific and technical breakthroughs."

To solve these problems within four years, they said, they are leading a new team and dedicating 20% of the compute OpenAI has secured to date to the effort.

"While this is an incredibly ambitious goal and we’re not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem," they said. 

In addition to work on improving current OpenAI models like ChatGPT and mitigating their risks, the new team is focused on the machine learning challenges of aligning superintelligent AI systems with human intent.

Its goal is to devise a roughly human-level automated alignment researcher, using vast amounts of compute to scale it and "iteratively align superintelligence." 

In order to do so, OpenAI will develop a scalable training method, validate the resulting model and then stress test its alignment pipeline. 

The company's efforts are backed by Microsoft.

Reuters contributed to this report.
