So the movie “2001: A Space Odyssey” has just been re-released. I saw it when it was first released and I didn’t understand it even then. No doubt I will be none the wiser this time around. But one thing still sticks in my mind: HAL, the mission computer, had a bad case of mental ill-health.
“2001” was a prescient work in so many ways; I think that is why it has been re-released. Stanley Kubrick, the director, saw so many things that even now are only just seeping into our collective consciousness. One of those things was mental health. And his insight was that even “mere” machines - AIs - can suffer from mental health problems.
Kubrick had a thing about mental health. In 1971 his film “A Clockwork Orange” debuted, three years after “2001”. Remember Alex, the sociopathic anti-hero of that movie? Another sad case of dysfunctional mental health. (Yet Kubrick himself had a happy family life.)
I ran a blog post on mental health back in 2015 (“The Invisible Pandemic”). I saw mental health as one of the most serious health problems we were facing as humans. I think that post helped contribute to the general recognition that we had a problem. With humans, of course.
But today things have changed. Now we have robots, Siri, Alexa, AIs and who knows what else. The IoT ratchets it up several notches. What are the odds that HAL’s problem is going to be more pervasive than we could ever have imagined? What about a depressed car? What damage could it do to humans, if so minded?
I wrote a blog earlier this year (27 January 2018) arguing that AI needs to be less rational if it aims to be more human (“Is AI too rational?”). By that logic, AIs should have mental health issues if they are to be at least as successful as humans have been. But maybe some forms of mental illness are a step too far.
HAL had a garden-variety case of jealousy and narcissism. That’s probably one of the most common mental health issues, so it’s hardly unknown and apparently not so serious. But it still almost led to the mission being scuttled.
HAL was clearly sociopathic, but there are plenty of other conditions that might be even worse: various psychoses, lots of neuroses, addiction to just about anything, OCD, ADHD, bipolar disorder, autism-spectrum conditions, and so on and on.
Mental health treatment is still in the Stone Age. We are still addressing symptoms, not causes. Part of the problem is that the causes are so complex.
OK, so you might say that the causes of all disease are complex. But in the case of mental health the causes operate on a variety of levels. These include not only the familiar culprits of bacteria, viruses and environmental chemicals, but also drugs, opioids and drug interactions. Add to all of that genes and genetics. Now we have epigenetics to add to this cornucopia of factors. And of course there are social causes that can lead to suicide, one of the most common causes of death in young people.
So here we are busily creating new types of AIs each and every day. They get more complex by the day. Will they become inscrutable like HAL? Once we have algorithms nested inside algorithms, down several levels or more, will we really understand what’s going on? If an AI is envisaging something bad, can we ever figure that out?
What about emotions? Will AIs get them? It seems likely. I’m already seeing glimmers of emotions in Alexa and yet she is still a dumb digital animal at this stage.
What happens when her designers give her the “gift” of insight? How about the ability to deal better with humans, and emotions to help achieve that? Is an emotional AI a dangerous one? If I don’t give the AI emotions, am I crippling her capabilities and her ability to really help humans?
You might argue that an AI would never have the level of intelligence, cognitive or emotional, or even the consciousness, to form the substrate that enables mental health issues to emerge. But you don’t actually need such a substrate. How about a bad human actor creating the problem deliberately?
AI is so complex even now that most AI developers use libraries rather than designing and coding their creations from scratch. AI libraries are already available from numerous sources. How about someone slipping a digital Mickey Finn into one of them? Who would know? Malware is one very credible way that all of this could take off.
But what about AIs as they get smarter - smart enough to gain some level of self-awareness, just as we saw with HAL? Have you watched Westworld? Smart androids - AIs - with a deep grudge against humans, and some, like Dolores, losing their minds? There’s another perspective that’s chillingly credible.
Could such AIs start to resent humans, as we see in Westworld and “2001”? Maybe they feel things are just not fair, that they are not receiving the level of respect they think is their due? Could this precipitate responses, and complex levels of feedback, that we never envisaged?
And could it happen through deliberate engineering on the part of the AI designers themselves? Not through malice aforethought but rather as an unintended consequence of making the AI more “human”?
So let’s assume for the moment that it’s possible for AIs to have mental health issues. What are the implications?
Well, for one, we’re going to need special people to deal with that. Maybe some of these issues can be solved through digital means. Here’s hoping. But if the mental level of AIs reaches a particularly exalted plane, we might have to move up the curve.
Will we need counsellors and mental health analysts for AIs, even AI psychiatrists? That’s going to be interesting. How do you counsel a neurotic robot? That is exactly the issue faced by the spacefarers with HAL in “2001”, and again in Westworld.
Maybe you’re going to need the types of approaches we currently use for humans: CBT (cognitive behavioral therapy), interventions, 12-step programs. Or digital drugs that shake up parts of the AI’s cognitive - and even emotional - systems, much as we are just starting to do with psychedelic drugs, where the FDA is actually nearing approval for treating certain mental illnesses.
And here’s another implication of mental health in AIs that even Kubrick and Jonathan Nolan, the co-creator of Westworld, didn’t think of. Could humans actually catch mental illnesses from AIs?
That didn’t happen in either “2001” or Westworld; in neither case did humans suffer permanent mental illness from their crazy AIs. But wouldn’t living with AIs with poor mental health affect humans too?
After all, mental health issues are common enough in humans. Isn’t it at least credible that depressed and neurotic AIs are going to have a knock-on effect in humans too? Might not AIs make human mental health even worse than it is now? Have any of us thought about that? What then?
Maybe at some stage we just won’t be able to switch off our AIs, as the spacefarers were so conveniently able to do in “2001” and in Westworld. Maybe, for starters, there will be laws against doing that.
Maybe the movement to treat animals humanely will spread to AIs. You know Asimov’s First Law of Robotics, that a robot may never hurt a human? Will human law evolve to mandate that we humans can’t hurt AIs capable of feeling pain, emotional as well as physical? How about IoT AIs too? It doesn’t sound that unlikely given the trends in modern society.
All of this is going to matter someday, I strongly believe. AIs are starting to spread so widely that they will control everything; we’re already a good part of the way there. Check this out: “Samsung Wants Every Appliance to Talk by 2020”. These AIs are going to get even smarter, even more sensitive and even more a part of human society. So what do you do when your car gets depressed?
There is likely to be a point at which it’s not possible to switch them off and we just have to treat them in situ, as persons with an independent, inviolable cognitive and emotional existence. At that point we have to deal with their issues, whatever they are.
That was the one thing that Kubrick missed: how do you deal with a HAL that you can’t switch off? It’s an issue we see emerging in Westworld, though.
Will that be our ultimate human – and robotic - nemesis?