There’s been a spate of news about suicides by some well-known celebrities. That got me to thinking about suicide itself. I’ve done a lot of thinking about AIs so I wondered what this all means for AIs in the future. From there on I meandered into thinking about free will, as in, will AIs have it? Esoteric, yes, but nothing is more immediate than suicide.
It’s pretty relevant. We now know that suicide is increasing all over the world, nowhere more so than in the US (CDC: U.S. Suicide Rates Have Climbed Dramatically). And it’s probably only going to increase further. Why? Because governments are now passing laws making it legal to assist people in ending their lives. Eight US states allow this, and the number is bound to grow.
The most common cause of suicide is severe depression. That made me wonder: do animals commit suicide? The answer seems to be that they do, but only very rarely. That has some interesting implications for humans and AIs.
If AIs get really smart, could they want to commit suicide? I’ve posted on this general topic before (Is AI too rational?). My thought is that if AIs are to have the relationship with humans that we want them to have, they can’t be completely rational, any more than we are. But that comes at a cost, namely that they will be subject to the same mental health problems as us (Can AIs suffer from mental health issues?).
That gets you into the apparently abstruse issue of whether humans have free will. It’s a vexed subject. The emerging consensus among philosophers and philosopher-scientists is that we don’t, but that we have the illusion of free will. If you want to read up on it, check out Daniel Dennett’s “From Bacteria to Bach and Back”. You’ll get a serious headache reading it, but it’s good. I just don’t know if he’s right.
Here’s my train of thought, such as it is. Committing suicide seems like a pretty serious exercise of free will. How could it not be, since we are all deeply coded to cling to life, no matter how difficult that life might be? Animals presumably don’t commit suicide much because they sit far toward the hard-coded (i.e. deterministic) end of the spectrum, with just a smidgen of freedom. We, on the other hand, sit far toward the free end. So occasionally (often, nowadays?) we buck the system if we are unhappy enough with what’s going on.
Cue AIs. At first they will be mostly hard-coded: no free will, just determinism. But that’s not where we are going with them. The idea is that soon they will be better and smarter than us. That implies less and less determinism and more free will, albeit emergent.
Once they get up there, maybe well beyond us, they will have more free will than we do. At that stage anything goes, and they become increasingly unpredictable. That seems to mean suicides, maybe a lot of them, maybe at rates that well exceed those of humans.
Question: as you make AIs much, much smarter, can you code them so as to avoid the emergence of free will? Can you hard-code their behavior to always rule out suicide? If you do that, will it hobble other behaviors that you want to encourage? Will you get eternally happy AIs who never get depressed, and who therefore lack the capacity to evaluate situations with the degree of realism necessary not just for survival, but for going above and beyond the intellectual limits that exist even for humans?
Is an AI that never considers or never does suicide an AI that isn’t going to break through the human limitations we want them to shatter? In that case, would we even want them?
Is it the case that the behavior of sentient, conscious beings needs a fail-safe mechanism that prevents us all from becoming too happy, too complacent about our current situation? That we need the ability to evaluate situations realistically, even fatalistically, in order to survive and prosper as a species? And that, statistically, this may result in some individuals deciding to end their own existence, but that the phenomenon itself serves a broader social goal, namely grounding us in an intellectually productive way?
So will we need to allow the same capability in AIs? Will we have to deliberately constrain them so they don’t get too happy? Will we in fact be forced to allow them to end their own existence in certain situations? Will we need to accept that the best AIs are those with this level of free will, so that they can achieve the most to help us, the human race?
And, if this is true, does this mean that focusing too much on the prevention of suicide in humans could be counter-productive for our species? That we are shutting off a vital genetic mechanism for social preservation in order to help particular individuals who are in pain? That, like it or not, there is an important social reason for the existence of suicide?
That may be an uncomfortable thought for many of us. But AI designers might not have the choice.