The march towards robots taking over our everyday lives moves ahead inexorably. Self-driving cars are just the latest manifestation of the trend.
Recent research now shows that in order to make them more human, we need to give them flaws similar to those of real people so that we take them more seriously (How perfect is too perfect? Research reveals robot flaws are key to interacting with humans).
My question: how soon before robots get mental illnesses as an unforeseen byproduct of this trend, or just because of the sheer complexity of the beasts?
Isaac Asimov foresaw part of the problem, namely that robots could harm us through violence. So he proposed his famous Three Laws of Robotics in the hope of preventing this. The problem he didn’t foresee is that robots might be unintentionally nasty to us by developing mental illnesses.
That’s not a new idea either, as you would know from the 1968 movie “2001: A Space Odyssey” and its depiction of the malevolent computer, HAL. Mentally ill robots pose a new type of problem: you can see a robot that gets violent, but you don’t necessarily know that a computer or robot is mentally ill until it does something weird or bad. By then it might be too late.
So the problem goes well beyond what we have tended to see as the main issue, that robots might hurt us physically. Unlike with a human, you are unlikely to quickly notice telltale signs such as unusual facial expressions or body language, at least not any time soon. So the problem is that you might have a robot that acts sane but isn’t.
And we also have to recognize that if a robot has a mental illness, violence isn’t likely to be the main problem. According to one expert in mental illness in humans, “People with serious mental illness are 3 to 4 times more likely to be violent than those who aren't. But the vast majority of people with mental illness is not violent and never will be.” (Myth vs. Fact: Violence and Mental Health).
We can expect robots to be the same. How about a depressed robot that simply can’t be bothered to tell us that one of our machines is not working correctly? That our car is going to crash? How about a car-driving robot that wants to commit suicide, with or without you in the back seat?
I have already posted previously about the huge societal problem of mental illness (“The Invisible Pandemic”). It is growing steadily more serious under the pressures of stress, drugs and other causes as yet unknown, possibly bacterial or viral.
We tend as a species to focus on problems that are readily visible. Mental illness in robots could be a lot more dangerous precisely because you can’t easily see it.
And it’s not just a problem of straight-out mental dysfunction. How about behavioral issues such as cognitive biases that warp a robot’s judgment, just as we humans also make poor decisions under the influence of unconscious cognitive biases?
How about a robot that makes poor decisions because it is subject to confirmation bias? The illusion-of-control bias (which was at least part of the problem with HAL in “2001”)? The status quo bias?
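To make the idea concrete, here is a minimal, hypothetical sketch (not from any real robot) of how confirmation bias could be baked into a decision routine: an agent that discounts evidence contradicting its current belief ends up stubbornly wrong even when the data clearly point the other way. All names and parameters here are illustrative assumptions.

```python
# Hypothetical sketch: a belief updater with a built-in confirmation bias.
# A bias factor below 1.0 shrinks the weight given to disconfirming evidence,
# so the biased agent clings to its initial belief despite contrary data.

def update_belief(belief, evidence, bias=1.0):
    """Nudge belief (a probability in 0..1) toward each observation (0 or 1).

    bias=1.0 treats confirming and disconfirming evidence equally;
    bias<1.0 models confirmation bias by discounting disconfirmation.
    """
    for e in evidence:
        confirms = (e == 1) == (belief >= 0.5)
        weight = 0.1 if confirms else 0.1 * bias
        belief += weight * (e - belief)
    return belief

# A mostly-negative evidence stream: an unbiased agent should drift below 0.5.
data = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]

unbiased = update_belief(0.7, data, bias=1.0)
biased = update_belief(0.7, data, bias=0.1)  # barely listens to disconfirmation

print(f"unbiased belief: {unbiased:.2f}")  # drifts down toward the data
print(f"biased belief:   {biased:.2f}")    # stays stubbornly high
```

Both agents see identical data; only the weighting differs. That is exactly what makes such a flaw hard to spot from the outside: the biased agent still responds to input and looks like it is learning.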
You might argue that we can program all this out of our robots or computers by careful design. And you might indeed do so if the techniques you employ are deterministic. But what if the learning techniques built into the robot are nondeterministic, such as neural nets or genetic algorithms? Then you don’t know what learning will result. In the case of very complex learning routines, who’s to say that such techniques won’t lead to unintended lessons? Just as happens with us humans.
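The point about nondeterminism can be shown in a few lines. Below is a toy genetic algorithm (an illustrative sketch, not any production system) that evolves a simple classification rule: different random seeds produce different rules of equal fitness, so the designer cannot predict in advance which “lesson” was actually learned.

```python
# Toy genetic algorithm illustrating nondeterministic learning outcomes.
# Different random seeds can evolve different weight vectors that all
# solve the task, so the learned internal rule is not predictable.
import random

# Training set: label is 1 only when the sum of both inputs exceeds 1.
DATA = [((a, b), int(a + b > 1)) for a in (0, 1) for b in (0, 1)]

def fitness(weights):
    """Count how many training examples the linear rule classifies correctly."""
    w0, w1, bias = weights
    return sum(int((x[0] * w0 + x[1] * w1 + bias > 0) == label)
               for x, label in DATA)

def evolve(seed, generations=200):
    """Evolve a population of linear rules; the outcome depends on the seed."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]  # keep the fittest half
        # Refill the population with mutated copies of random parents.
        pop = parents + [[w + rng.gauss(0, 0.2) for w in rng.choice(parents)]
                         for _ in range(10)]
    return max(pop, key=fitness)

for seed in (1, 2, 3):
    best = evolve(seed)
    print(f"seed {seed}: weights {[round(w, 2) for w in best]}, "
          f"fitness {fitness(best)}/4")
```

Run it and each seed typically reports full or near-full fitness with visibly different weights. Scale that up from three weights to millions, and verifying what a robot has really learned becomes the hard problem the paragraph above describes.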
I think the problems of mental illness and cognitive bias are going to loom much larger with computers and robots. It’s likely that they will form new frontiers in computer hacking, and therefore be much more difficult to detect than garden-variety DDoS attacks, phishing, or data theft. They are going to be even more difficult to detect than viruses such as Stuxnet and its newer variants.
From an attacker’s perspective it might be a much better strategy just to plant the seeds of a mental illness or a cognitive bias into a robot so that its existence might never be discovered. How about a virus that tends to prefer Russian to American products? Or that favors a Chinese drone in a dogfight? You get the idea.
Unfortunately, along with all the benefits that robots bring come new problems. Robo-psychiatry is likely to be the big new job after cyber-security has become totally passé.