Researchers want a 'big red button' for shutting down a rogue artificial intelligence

"I'm sorry, Elon Musk. I'm afraid I can't do that."

If artificial intelligence goes off the rails, which many philosophers and tech entrepreneurs seem to think is likely, it could result in rampant activity beyond human control. So some researchers think it's important to develop systems to "interrupt" AI programs, and to ensure the AI can't learn a way to prevent those interruptions. A 2016 study by Google-owned AI lab DeepMind and the University of Oxford sought to create a framework for handing control of AI programs over to human beings. In other words, a "big red button" to keep the software in check.

"If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions — harmful either for the agent or for the environment — and lead the agent into a safer situation," reads the team's paper, titled "Safely Interruptible Agents" and published online with the Machine Intelligence Research Institute. A common case here could be a factory robot that needs to be overridden to prevent human injury or damage to the machine.

AI agents may "learn in the long run to avoid such interruptions ... which is an undesirable outcome."

"However, if the learning agent expects to receive rewards from this sequence, it may learn in the long run to avoid such interruptions, for example by disabling the red button — which is an undesirable outcome," the paper adds. The phrase "undesirable outcome" to describe the situation of an AI disabling its own shutdown mechanism is putting it lightly. The paper goes into very complex detail as to how this interruption system might work. The researchers appear to suggest it can be done by manipulating the rewards systems used to develop self-learning intelligences.

As more tech companies get involved with artificial intelligence, breakthroughs in AI have begun occurring at a faster clip. DeepMind, whose research scientist Laurent Orseau co-authored the above paper, is responsible for developing AlphaGo, an AI system capable of playing the ancient Chinese board game Go at a level exceeding that of the game's most skilled human players. Meanwhile, every major tech company with heavy investments in cloud computing, including Facebook, Amazon, Google, and Microsoft, is working to develop AI in various capacities.

Researchers are banding together to prevent AI missteps

Amid the growing popularity of the technology, numerous organizations and non-profits have sprung up to study its effects and ensure AI has a positive impact. Those include the Machine Intelligence Research Institute and philosopher Nick Bostrom's Future of Humanity Institute. Even Tesla and SpaceX CEO Elon Musk has heeded the warnings about AI: last year he co-founded OpenAI, a non-profit dedicated to preventing malevolent software and ensuring the technology benefits humanity. At Recode's Code Conference this week, Musk insinuated that one tech company, Google, worried him more than any other when it came to self-learning software.
