AI has been the talk of the town at CES. There are suitcases that follow you, yet more smart speakers, and that's the least of it.
But I'm feeling uneasy about all this AI stuff. Part of my concern is that it all sounds too easy, too pat: just stuff even more intelligence into your gadgets and they get better, right? Intelligence these days is not just next to Godliness, it's above it.
I've got this nagging feeling that we're only going to reach the Godhead once artificial intelligence gets less smart, just like us highly imperfect humans: irrational, temperamental, contrary and just plain stubborn. But then I have to wonder, even if we could get an AI to be that ornery, what it would buy us. I continue to think it's an important issue we're all missing, but I can't quite put my finger on why it's so important, even though I just know it is.
Here's a start. Can an AI predict things accurately if it doesn't understand how imperfect and warped our thinking processes are? Can you trust something that predicts things based on a perfectly rational calculus when it's patently obvious that we are all perfectly irrational at times, and some of us all the time? Just look at global politics right now to see how truly irrational humans can be.
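To make that concrete, here's a toy sketch (my illustration, not a claim about any actual system): a predictor built on a perfectly rational expected-value calculus gets a classic human choice wrong. The gamble is from Kahneman and Tversky's 1979 prospect theory experiments, where roughly 80% of subjects preferred a sure $3000 to an 80% chance of $4000, even though the gamble has the higher expected value.

```python
def expected_value(outcome, probability):
    """Expected monetary value of a simple one-outcome gamble."""
    return outcome * probability

sure_thing = expected_value(3000, 1.0)  # 3000.0
gamble = expected_value(4000, 0.8)      # 3200.0

# A purely rational calculus predicts people take the gamble...
rational_prediction = "gamble" if gamble > sure_thing else "sure thing"

# ...but in the original experiment the large majority took the sure thing.
observed_majority = "sure thing"

print(rational_prediction, observed_majority)  # gamble sure thing
```

The mismatch is the whole point: a model that assumes rationality confidently predicts the wrong behavior.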
And if we get AIs that are unbelievably smart, that never think stupid, weird, crazy, even catatonic things, can we ever get the insights of a Van Gogh, an Einstein, a Beatle or a Musk? Can AIs even be weird or crazy? Can an AI be unconstructively irrational, just for the sheer damned sake of it, to piss everyone off? Coz it's good for them? Pour encourager les autres?
What about ethics and morality? Can we teach AIs about those? Isaac Asimov gave us the first three laws of robotics, but there are another 10,000-plus. They may or may not be rational, and many are probably totally stupid. How do we model that, and then teach AIs about it? Do we even have models that cover them?
And what about pesky sidebars like religion and belief? Do they come into the purview of an AI? Can AIs be believers, or even agnostics? How would that affect their internal calculi and decisions? If they don't know that irrational can be good, how can we trust those smart suitcases not to do something bad or lousy? And what about drones, where the problem is even more imminent?
Much as I admire and respect the engineers and scientists who are making the AIs, I don't know that they've got that in them. Not because they're not smart, but precisely because they are. Because it's not in our culture to admire the irrational and sometimes terrible impulses that humans unpredictably experience, impulses that nonetheless lead to great and amazing things, willy-nilly or otherwise.
Do we even have a science of irrationality to provide us with guidance? Kind of: the psychology of personality, and behavioral disciplines such as behavioral economics and behavioral finance. But the latter are not even taught at undergrad level, which shows you where the academics are at: disbelief in the power and importance of the irrational, even though it surrounds us all and is slowly destroying us.
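That science does offer usable math. A minimal sketch of the prospect-theory value function from behavioral economics (Kahneman and Tversky), using their standard published parameter estimates: curvature alpha = beta = 0.88 and loss-aversion coefficient lambda = 2.25.

```python
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory subjective value of a gain or loss x,
    measured relative to a reference point (x = 0)."""
    if x >= 0:
        return x ** alpha          # diminishing sensitivity to gains
    return -lam * ((-x) ** beta)   # losses weighted ~2.25x more heavily

# Losing $100 hurts more than gaining $100 feels good:
print(pt_value(100), pt_value(-100))
```

Two lines of code capture a systematic human "irrationality" (loss aversion) that a pure expected-value calculus simply cannot represent.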
I can’t see AI making any sort of quantum leap until we get over this existential hurdle. AI will get smarter and more powerful but in doing so I think it’s going to miss the forest for the trees. When you discount the irrational you are blinded to some of the most important questions in human existence.
AI is starting to come hard up against these obstacles right now. Take the famous question asked of self-driving cars: how does the car decide whether to kill a child or a senior when it has no choice but to hit one of them? Tough, right?
The headlong flight into rational AI is taking us in the wrong direction. It’s much too one-dimensional.
If we're going to solve the world's rapidly growing social and political problems, we're going to have to think about how AI confronts the irrational.
We need a new and more human AI. Let’s call it artificial irrationality.
That’s what will make for the right AI for humans, not just engineers.