My recent post on AI argued that AI is not currently interesting because it’s really just more incremental thinking based on the same boring paradigms – in effect, party tricks (“AI – Let’s move beyond party tricks!”).
Yep, I’ve seen the latest celebratory article in The New York Times about the amazing things they’re doing with Google Translate (“The Great A.I. Awakening”), but that’s still the same old story: neural nets are hardly new, and Google’s translation project is still using brute force via its massive databases of translation terms.
So you might be thinking that I’m an AI-denier, a caveman just like the climate-change equivalents. Well, maybe.
But I believe that AI will only get interesting once we get quantum computers. They are getting very close. Just as I have predicted that quantum computing will disrupt current stock trading (“High Frequency Trading Trashed by Quantum Computers?”), so I think that quantum computers will disrupt current AI and essentially make it obsolete. That’s because with quantum computers we will get a massive leg up and finally be able to have real AI.
What do I mean by real AI? Well, firstly, it’s not about Siri giving you flight schedules or telling you when Alexander the Great died. Current AI is really the application of current, incremental ways of thinking to everyday human problems. That’s ok, but it’s not AI, at least not what I want AI to be.
We will never have real AI unless we get a machine to think outside the box (e.g., “Are our brains wormholes?”), think irrationally, explore stupid ideas (see my post “Is Astrology a Branch of Neuroscience?”) and fall in love. Quantum computers open up these possibilities.
Indeed it’s possible that our brain really is a quantum computer (see my post “Is our brain really a quantum computer? Idiots savants may know the answer.”). It’s becoming clear that biology makes use of quantum effects and may even be based on them. That’s one of the reasons I believe that we won’t get real AI until we can apply quantum computers to AI.
What I want to ask Siri is stuff like: What is the meaning of life? Will we ever get machine consciousness? How can we beat Alzheimer’s in 2 years? Stuff I’d ask an alien, not my local schoolteacher (apologies to schoolteachers everywhere).
Apologies also to “The Hitchhiker’s Guide to the Galaxy”. And yep, I’m serious.
Is that too much to ask? If the AI guys don’t want to do that, they should choose a different gig.
But there’s an even larger problem with AI, even with the idea of quantum AI.
It’s fashionable these days amongst the literati to discourse about the Singularity, the epoch which starts when machines become more intelligent than humans. Check out Ray Kurzweil’s book “The Age of Spiritual Machines: When Computers Exceed Human Intelligence.” I think there’s a big disconnect here. The reason is the focus on intelligence.
For me, you only get the Singularity once machines are fully conscious, irrespective of their intelligence. The real inflection point comes when a machine has subjective experience and we can show that it does. It’s consciousness that should be the real point of AI, not intelligence, no matter how high.
What’s the difference? Intelligence is about power. Consciousness is about sensitivity. Sensitivity means more awareness of the aims, feelings and pain of others. More intelligence doesn’t lead us to be more peaceful and caring; if anything the opposite. More consciousness brings about the harmony part. Isn’t that what we want?
So as long as AI and the Singularity focus on intelligence, they’re focusing on failed paths that are even more likely to result in the destruction of humans, human values and any aliens we happen to encounter.
You remember those failed paths right? Wars, genocides, planetary destruction, cruelty to people, kids, animals and even our own families. Unbreathable air, undrinkable water, unsustainable ecologies, mass extinctions caused by humans themselves. Weapons of mass destruction, terrorism, industrialized hate. Did I miss anything?
We can call that approach the weaponization of AI. That’s why discussion of quantum computers is so inordinately focused on their use in encryption and decryption.
If we focus on consciousness we take the high road. It means we want to improve the human race, the planet, and be nice to any aliens, or for that matter those terrestrial animals that cross our wayward and often-wanton path.
It means we have decided to use AI for nice things, not nasty ones like wars, better nukes, and encryption so we can beat enemies and friends alike. It means a choice for ecological and cosmological sustainability, not for the achievement of the next quantum jump in war-making, hate and tools to make the powerful and evil even more so.
Quantum computers promise to be able to do that because they can handle massive numbers of probabilistic feedback loops over superposed possibilities. That paves the way for consciousness to the extent that it’s an emergent property of cognitive functioning. It’s likely to lead to a sense of machine awareness of subjective experience that a normal computer can’t achieve, both because of its massively limited processing power and because it computes in essentially the wrong way.
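For readers who haven’t met superposition before, here’s a toy sketch of the idea, a purely classical simulation of a single qubit using textbook quantum mechanics (the Hadamard gate and the Born rule – standard material, nothing specific to this post). A real quantum computer holds all the amplitudes at once; here we just emulate the arithmetic:

```python
import math

# A qubit's state as a pair of amplitudes: [amplitude of |0>, amplitude of |1>]
ket0 = [1.0, 0.0]  # definitely |0> -- a "classical-like" starting state

def hadamard(state):
    """Hadamard gate: turns a definite state into an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

psi = hadamard(ket0)                  # now (|0> + |1>) / sqrt(2)
probs = [amp ** 2 for amp in psi]     # Born rule: probability of each outcome

print(probs)  # both measurement outcomes equally likely
```

The point of the toy example is the scaling: one qubit needs 2 amplitudes, but n qubits need 2^n, which is exactly why simulating quantum systems classically hits a wall and why a machine that holds those superpositions natively computes in a fundamentally different way.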
AI should be used for good, not more harm, the road it’s set on right now. We’ve got to realize that the real aim of AI should be the creation of machine consciousness, not merely machine intelligence.
Then we ourselves can be more conscious, more sensitive and more sustainable. We should aim to build conscious machines that are truly moral, not like us. They can help us overcome the nasty impulses we have deep within us. They can be our partners to build a better world, which we patently aren’t going to do on our own.
Of course, people might say that there’s no guarantee that machines can ever achieve consciousness. But what’s the alternative?
The true aim of AI should be to improve ourselves, our machines and the universe, not to make things even worse than they already are.