
To Stop AI From Going Rogue, Make It Neurotic

To reduce the risk of AI getting out of control, machines should be made to doubt themselves, according to researchers at the University of California, Berkeley.

In other words, make AI neurotic.

In a paper, researchers Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel and Stuart Russell say that one of the best ways to prevent AIs from misbehaving is to make sure we can turn them off. Even an AI that doesn't care about its own life will want to remain active -- or alive -- to make sure it can continue to be useful. So the solution is to make the AI doubt whether it's useful.

It is clear that one of the primary tools we can use to mitigate the potential risk from a misbehaving AI system is the ability to turn the system off. As the capabilities of AI systems improve, it is important to ensure that such systems do not adopt subgoals that prevent a human from switching them off. This is a challenge because many formulations of rational agents create strong incentives for self-preservation. This is not caused by a built-in instinct, but because a rational agent will maximize expected utility and cannot achieve whatever objective it has been given if it is dead. Our goal is to study the incentives an agent has to allow itself to be switched off.
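The intuition is easy to sketch numerically. Below is a toy simulation (not from the paper; the function names and numbers are hypothetical) of the off-switch trade-off: a robot that is certain its action is good gains nothing by letting a human overseer veto it, while a robot that is genuinely unsure about its own usefulness does better, in expectation, by deferring and keeping the off switch usable.

```python
import random

def expected_value_act(belief):
    """Expected utility if the robot simply acts, ignoring the off switch."""
    return sum(belief) / len(belief)

def expected_value_defer(belief):
    """Expected utility if the robot defers to a rational human overseer.

    The human knows the true utility u of the proposed action and allows it
    only when u > 0; otherwise they press the off switch (utility 0). From the
    robot's point of view, deferring is worth E[max(u, 0)] over its belief.
    """
    return sum(max(u, 0.0) for u in belief) / len(belief)

random.seed(0)

# Confident robot: believes the action is clearly good.
confident = [0.9 + 0.01 * random.gauss(0, 1) for _ in range(10_000)]

# "Neurotic" robot: genuinely unsure whether the action helps or harms.
uncertain = [random.gauss(0.0, 1.0) for _ in range(10_000)]

for name, belief in [("confident", confident), ("uncertain", uncertain)]:
    act = expected_value_act(belief)
    defer = expected_value_defer(belief)
    print(f"{name:>9}: act={act:+.3f}  defer={defer:+.3f}  "
          f"incentive to allow the off switch: {defer - act:+.3f}")
```

For the uncertain robot, deferring beats acting by roughly 0.4 in expectation, so it has a positive incentive to let the human switch it off; for the confident robot, that incentive is essentially zero, which is exactly the hazard the authors are worried about.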

It's not enough that the robots will take our jobs. They'll also be neurotic, constantly requiring reassurance.

My colleague Jamie Davis at Telecoms.com has some other ideas for personality types that can be programmed into AI to keep them in line:

Conceited Carl: As an AI application, Carl is no danger. Carl spends too long staring into the mirror to fix his hair, down the gym pumping iron or on ASOS trying to figure out what Jon from Love Island was wearing last night. This AI is so concerned about how it appears there is very little risk of world domination. Just tell it that its algorithm looks a little bit flabby and it will be on the virtual cross-trainer in no time.

Also, Hypochondriac Henrietta, Boring Benjamin, and more. Read it here: "How do you stop AI from taking over the world? Make it neurotic of course."

I fear I may be Boring Benjamin. Would you like to discuss the relative merits of Mac to-do list software?


— Mitch Wagner, Editor, Enterprise Cloud News




Phil_Britt 6/19/2017 | 8:31:40 AM
Re: Other Issues Another issue: When an accident is inevitable, such as a steel coil rolling off a truck, does the car attempt to save the life of the driver, or cause as little harm/injury to others as possible? Does it go into a ditch, or hope the driverless car in the other lane can swerve to avoid an accident?
kq4ym 6/19/2017 | 8:23:27 AM
Re: Other Issues Turning off a machine is sometimes necessary, and AI devices will be no exception. But think about a self-driving car that's designed to stop when an obstacle is ahead: the programmer or the device must figure out what to injure, the occupants or the obstacle, which may in fact be people. There's a dilemma: should the driver be able to override the machine, or should the designers choose the reaction?
Michelle 6/14/2017 | 2:38:05 PM
Genisys is Skynet This is really interesting stuff. Neurotic humans have lots of issues. I can only assume a multitude of unintended consequences will emerge with neurotic AI...

Also recommended by numerous scientists in sci-fi flicks:

"it is important to ensure that such systems do not adopt subgoals that prevent a human from switching them off"

AI always finds a workaround
[email protected] 6/13/2017 | 2:13:55 PM
Re: Other Issues Yes, AI is better in the movies. Even Baymax thought for himself and made judgment calls for the sake of entertainment. Those of us who work in technology know that is pure entertainment, and we realize that an army of AI robots will not overtake the earth and subjugate all humans unless they are programmed to do so!
Mitch Wagner 6/12/2017 | 2:14:42 PM
Re: Other Issues I'm reminded of Marvin the Paranoid Android on "Hitchhiker's Guide to the Galaxy."
Phil_Britt 6/12/2017 | 10:17:10 AM
Re: Other Issues AI taking over is not a new idea -- it's the basis of all of the Terminator movies. While those are far-fetched, AI is moving strongly into areas like financial advice that seemed fairly far off only a few years ago.
[email protected] 6/12/2017 | 10:07:02 AM
Re: Other Issues Fear of AI taking over the world with an evil plot seems to be the hot topic these days. I am not afraid: computer-driven technology of any kind is programmed, and it cannot reason or make decisions it was not programmed to make. I am just not that concerned that we will see all of our jobs overtaken by AI and robots serving us in restaurants. Then again, I am also not afraid of the zombie apocalypse! :)
Phil_Britt 6/12/2017 | 3:09:23 AM
Other Issues Beyond being neurotic, you could give them other mental health issues, but you'd have to be careful that they couldn't become narcissistic or have other personality disorders that could cause them to harm humans.
