AI is So Passé

I suppose you’ve seen that Google is having problems with its AI folks. Maybe it needs some AI to fix them? The cobbler’s shoes, perhaps?

It seems that everyone is using AI in their apps these days. Voice assistants like Alexa are touted for it, and who can claim they aren’t getting smarter? If you have an app these days, it might be death if you can’t claim it is AI-powered or has machine learning (ML) under the hood. But I fear that we might be claiming too much. What is intelligence anyway? And if we still don’t know, what in particular endows these apps with it?

I, for one, would want to see something more than smart algorithms. In fact, I have a particular criterion: I think these apps should be at the very least just a little conscious. What do I mean by that? That the app, as it goes about its daily chores, manages to intuit whether or not it’s meeting the real, even if unexpressed, needs of its customers.

That is, it understands the gaps in its repertoire that its makers haven’t realized yet. That it can look to the future and see not just whether the app is achieving its stated mission, but whether it is going to be able to contribute to the causes of global sustainability and human happiness. Is that too much to ask?

Now you might legitimately argue that asking for any sort of consciousness in apps is just being totally unrealistic. After all, what is consciousness anyway, and how the hell does it come about in our brain or anywhere else? It’s not for nothing that the consciousness issue is often termed the “hard” problem.

If AI Isn’t Quantum, It’s Not Real AI

I’ve been thinking about these weighty issues after reading a piece by Stuart Hameroff, a researcher in the area of quantum consciousness. He has partnered with the famous British physicist and Nobel Laureate Sir Roger Penrose on leading-edge, idiosyncratic theories of quantum biology.

I’ve been aware of Hameroff’s work for a while, but it’s daunting stuff. It has far-reaching implications for AI and the future of human society. One aspect is the much-discussed Singularity. According to Ray Kurzweil, the Singularity will be upon us by 2045, though others believe it will come sooner. That epochal event, you might recall, is when machines surpass us in intelligence. Yes, exactly what AI is supposed to do. So, is 2045 on or not?

It all depends on what you believe AI is. My perspective is that there are two sorts of AI: the weak version and the strong one. The weak version is what passes for AI today, that is, ML of various hues, plus genetic algorithms and neural networks. Nice but not earth-shattering.

Strong AI is Really Thoughtful

The strong one is quantum consciousness. That is, intelligence orders of magnitude higher than what we have right now, based on quantum effects rather than the crude, electrically based models we assume now. Strong AI is way, way beyond what is possible using neurons alone, no matter how many you have.

That’s my bet for the future of AI. It’s the strong form which results in quantum consciousness. That’s the ticket for all those apps we really want to develop sometime in the future.

If you want to mug up on quantum consciousness and what it means for AI, check here. But be warned, it’s not for the faint-hearted. On the other hand, if you want to see where strong AI is headed, you need to be aware of this stuff. It’s not necessarily the truth, or any credible version of it, but it’s the first time I’ve seen anyone explain in one place how consciousness evolved, mixing in all the gory and mind-numbing biophysical and chemical details.

The Quantum Underground

The nub is that neurons are just a kiddy way of explaining intelligence, and that the real action goes on in what Hameroff and Penrose term “the quantum underground”: pockets of nonpolar solubility in bodily cells, especially neurons. Specifically, the action takes place in microtubules within neurons, whose computational capacity is orders of magnitude higher than what neurons alone can provide.

That’s what I mean by strong AI. In my view, once you are able to re-engineer that, you get to the Singularity, but not before. So the AI we are developing now is nice to play with, but it’s just for kids and can never do the hard stuff. In fact, unless you have consciousness in there somewhere, it’s not AI, no matter how you dress it up.

So how do we get quantum apps anyway? Way above my pay grade. But the betting is that you might need a quantum computer somewhere in the picture, although it probably won’t be too useful because it can only work at just a little above zero kelvin. Maybe you can use plants, which manage quantum superposition at ambient temperatures. Of course, you could also tap into the microtubules within the neurons in your own brain as a neat shortcut. Maybe Elon Musk’s Neuralink can be jury-rigged to do that.

Of course, we’re not going to get strong AI anytime soon, so we only have the weak variant to play with for the moment, and so we should. Over the next couple of decades, we’re going to get further into strong AI as real quantum computers come along, and as biological science figures out how to create sustained quantum interactions in vivo without decoherence. That will lead to a form of limited consciousness, not necessarily similar to that possessed by a living form. At that point we will have the framework for strong AI.

In the meantime, I am fine with you talking about AI, as long as it is qualified by the word “weak.”