Right now AI is really hot. But to me it’s boring, showing no imagination. Siri still performs at the level of a very dumb five-year-old. Self-driving cars work, but the tech is painfully incremental. Basically, AI is all about performing party tricks. When will it get truly smart, even shocking?
I am sure you have heard of the famous British science fiction writer Arthur C. Clarke. He coined a famous phrase: “Any sufficiently advanced technology is indistinguishable from magic.”
This article makes some suggestions about where AI should be headed. What I’m going to suggest is that if these suggestions don’t lead to things we would find shocking or even think impossible, the technology behind them isn’t advanced enough to be considered magic. In that case we’re wasting our time.
Here are my thoughts.
Lean AI (LAI): It’s very powerful but very, very lean and efficient. For example, what if I could make a fantastically intelligent machine using just 10 biological cells? Or 10 atoms? Or maybe one atom? There are going to be lots of occasions when we’ll want LAI. I’ll give you some examples below, just in case you’re a bit doubtful about the idea. That would be magic, right?
Augmented AI (AugI): That means my brain plus something else that makes it ultra-ultra intelligent, beyond our wildest imaginings. It’s kind of like a runner using prosthetic blades that let them run much faster than an unaided person. Sounds like it survives the magic cut.
Alien AI (ALI): That means AI at the level of an advanced alien. Of course it’s so far off that you might justifiably think there’s no way we can even imagine it, let alone define and construct it. But we could have fun trying, right? And even if we never finish constructing ALI, the attempt would teach us a lot about how to get there. What’s the difference between magic and God-like aliens anyway?
Lean AI: Doing Everything with (Almost) Nothing
Here’s an example. I recently watched a science video. It showed a bacterium chasing a much smaller microbe, maybe even a large virus. Every time the little guy tried to get away, the bacterium turned with him and continued the chase. The bacterium ignored all the other microbes nearby. Its aim was to eat the little guy. And in the end it did.
Now how many cells were in that bacterium? Just one; a bacterium is a single cell. That’s lean. Yet it was able to chase down this little guy and eat it. That’s what big animals do. Yet this organism was tiny. It was an ultra-tiny machine that was amazingly intelligent. It sure looks like magic to me, so I’m going to accept this as a good example of LAI.
Let’s take another example: birds. Birds do some pretty amazing things other than fly. They can navigate with pinpoint accuracy to places thousands of miles away. Without a mobile phone, too! That’s really impressive.
But we now know that many kinds of birds can actually construct and use tools, made out of things like cardboard and sticks. And we now know that insects can construct and use tools too, and that they also have sophisticated social skills and techniques.
We used to think that humans were the only animals that could use tools, and that we could do it because we had really big brains. Only big brains like ours were supposed to give an animal our advanced social skills. Other animals weren’t supposed to be able to do that: their brains were so tiny, they didn’t have enough mental horsepower to pull off this prodigious feat.
But we were wrong! Birds and even insects have LAI too, with tiny brains. How do they do it? We have no idea. So it looks like magic as well. I’m going to accept that as another example of LAI.
Somehow we’re going to have to construct LAI ourselves. Maybe we start by trying to do it computationally, using math and virtual reality to see how tiny machines can be so smart. Or we try making ultra-smart nano-machines. But we need to start on it, no matter how it’s done.
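To make the computational route concrete, here’s a toy sketch in plain Python (all names and numbers are my own inventions, purely for illustration): a “predator” whose entire controller is one rule, steer straight at the prey, reliably runs down a randomly drifting target, much like the bacterium in the video. The point is how little machinery pursuit behavior actually needs.

```python
import math
import random

def chase(steps=200, speed_ratio=1.2, catch_radius=0.5, seed=1):
    """Toy pursuit: a predator with a one-rule 'brain' chases a
    randomly drifting prey. Returns the step at which the prey is
    caught, or None if it escapes within the step budget."""
    rng = random.Random(seed)
    px, py = 0.0, 0.0           # predator position
    qx, qy = 10.0, 10.0         # prey position
    prey_speed = 1.0
    pred_speed = speed_ratio * prey_speed   # predator is slightly faster
    for t in range(steps):
        # Prey drifts one unit in a random direction each step.
        a = rng.uniform(0, 2 * math.pi)
        qx += prey_speed * math.cos(a)
        qy += prey_speed * math.sin(a)
        dx, dy = qx - px, qy - py
        d = math.hypot(dx, dy)
        if d <= catch_radius:
            return t
        # The predator's entire controller: step straight at the prey.
        step = min(pred_speed, d)           # don't overshoot
        px += step * dx / d
        py += step * dy / d
        if d - step <= catch_radius:
            return t
    return None
```

Because the predator closes the gap by at least 0.2 units per step no matter what the prey does, the chase always ends in a capture. One rule, no memory, no model of the world: that is about as lean as a pursuit controller can get.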
Ultra-smart nano-brains sound magical to me!
Augmented intelligence (AugI): Making much more from what you already have
There’s increasing evidence that humans use only a small fraction of their total potential intelligence. Much of our thinking is unconscious; we do not consciously direct much of our intelligence. So in fact we have vastly more intelligence than we actually use. We see this in disciplines such as behavioral economics and finance.
But there is one striking piece of evidence for this: people called savants. These people often appear to be intellectually impaired, but they actually have amazing intellectual abilities, often mathematical, sometimes artistic. This ability was highlighted in the American film “Rain Man,” starring Dustin Hoffman, whose character uses his prodigious memory to count cards and win at blackjack in a casino. The character was inspired by a real person.
So the brain has amazing capabilities but the vast majority of us have never been able to use them. In fact our brain might have the best form of “artificial” intelligence because it’s also natural intelligence. What if we could find out how to use this incredible ability?
Maybe there is a way, using drugs, mental exercise, or even a machine or prosthesis, to unlock this ability. In that case we would not call it artificial intelligence, since it is partly natural. Instead we would call it “augmented” intelligence, or AugI. It would be a little like augmented reality, which overlays artificial content on our natural perception of reality.
How would we augment it? There is now a huge amount of brain research being conducted using MRI (magnetic resonance imaging). That would be one place to start. Maybe the machine to augment our intelligence would be a “hat” that fits over the head, as we sometimes see in science fiction films. Or it could be drugs, or some other route.
But in any of these cases we would be augmenting our intelligence dramatically. So this would be another route to AI.
One route to AugI would be a new understanding of the brain. We still don’t have a real explanation of how the brain works.
We used to think it worked through neurons that signal electrically and communicate with each other, a bit like a computer. But it’s now clear that this model is incomplete.
We now know that another class of cells, called glial cells, makes up roughly half of the cells in the brain (the old claim that they were 90% has since been revised). We used to think these cells played no role in thinking, but it now looks like they play a huge one. Instead of communicating via electrical impulses, they rely largely on chemical signaling. We really don’t understand how this works. If we did, we could perhaps start to use the brain in different ways for thinking, maybe like savants do.
AugI would be a totally new route to AI. It would have the massive advantage that it starts with natural intelligence, and we have some ways to understand that. If we find out enough of how glial cells work we could probably find other routes to AugI that we still don’t know about.
We can’t ignore the intelligence mechanisms we already have. They give us a head start on finding out more about AI. It’s likely that the future of AI is not just disembodied computers which are totally separate from us. It’s just as likely that AI will be some form of AugI. So we need to follow this route too.
Our brains are magical as it is, since we still have no real idea how they work. So this approach layers even more magic on top. A “…riddle, wrapped in a mystery, inside an enigma, layered right on top of magic” (apologies to Sir Winston Churchill).
Alien AI (ALI) – We’ve got to learn how to think like aliens if we want to find them
You’ve probably heard of SETI, the search for extraterrestrial intelligence. These days it’s a big deal, with lots of films about it. Aliens are probably really, really intelligent, at least if they can get here from the stars. So if we learned how they think, that would give us another route to AI, and presumably a huge boost to our own intelligence.
But how do aliens think? That would be pretty useful to know, right? If we could discover that, it would give us yet another route to AI. This would be the most magical form of AI of all, since we would have to imagine AI that was super-intelligent, much smarter than any AI we have ever thought of.
Where would you even start in trying to crack this problem? After all, we’ve never seen an alien, so we have no idea how they think. But one way would be to imagine the extreme environments in which aliens might exist, and try to figure out how an intelligent being would need to think in order to function in those environments. Here are a few ideas on how to do that:
- Intelligence in multiple dimensions
- Intelligence in multiple universes
Intelligence in multiple dimensions: Did you ever hear of string theory? It’s a theory of fundamental physics. It hasn’t been proved yet, and maybe never will be, but it’s an intriguing theory. Partly that’s because it predicts there are 11 dimensions. So if the theory were true, there would be 7 dimensions here in our universe that we can’t see, at least not easily.
But it also implies there may be other universes where you can see some or all of those dimensions, and not necessarily the ones we see. And there could well be alien life forms living in those other dimensions. How would they look to us? More importantly, how would they think? What would be the form and structure of the intelligence of an alien living in other dimensions?
Now you might think that’s all too theoretical and not useful to us. But pure mathematics is full of findings that looked useless at the time and later turned out to be very useful indeed. And if we can figure out some answers to the question, it would probably give us some radical new ways of thinking, which would in turn open up new approaches to AI.
So how would you conduct that research? I’m sure there’s a variety of methods, but two occur to me immediately: you could simulate intelligence in other dimensions both mathematically and computationally. Maybe some virtual reality would help too. I suspect it would yield results that would probably be radical, and could lead to new forms of AI being discovered.
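As a flavor of what such a simulation might look like, here’s a toy sketch in plain Python (the function and its parameters are invented for illustration, not an established method): a trivial agent seeks the peak of a smooth landscape in an arbitrary number of spatial dimensions, 11 here as a nod to string theory. The rule "keep a random step only if it improves the score" works unchanged in any number of dimensions, so dimensionality becomes just a knob you can turn.

```python
import random

def hill_climb(dims=11, steps=3000, step_size=0.1, seed=0):
    """Toy agent in `dims`-dimensional space: take small random
    steps and keep only the ones that improve fitness.
    Returns the best fitness found (0 is the peak)."""
    rng = random.Random(seed)

    def fitness(x):
        # Smooth landscape with its peak (value 0) at the origin.
        return -sum(v * v for v in x)

    # Start somewhere random in an 11-dimensional box.
    x = [rng.uniform(-5, 5) for _ in range(dims)]
    best = fitness(x)
    for _ in range(steps):
        # Propose a small random step in all dims at once.
        cand = [v + rng.gauss(0, step_size) for v in x]
        f = fitness(cand)
        if f > best:            # greedy: accept only improvements
            x, best = cand, f
    return best
```

Nothing about the agent’s “thinking” mentions the number 11; change `dims` to 3 or 26 and the same behavior emerges. That is the kind of question such simulations could probe: which features of intelligence are dimension-independent, and which would have to change.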
Intelligence in multiple universes: I am sure you have heard of the theory of the multiverse, that is, numerous other universes, maybe even an infinite number. We don’t have to worry about whether it’s true, because it provides another way for us to research new forms of AI. If there were indeed other universes, how would intelligence look in them?
Have you heard of the physical constants? These are the basic values of certain phenomena in our universe, things such as the speed of light and the strength of gravity. Scientists have speculated that in other universes these constants might be different. In that case those universes might be totally different. They might not be able to host life, or intelligence. Or maybe they could host forms of life that are radically different from ours.
But here’s the issue. What would intelligent life look like in other universes with different physical constants? That would be another approach to exploring AI. Then we would be looking at the nature and structure of intelligence in environments that are massively different to our own.
This approach could also include looking at universes where the senses might be totally different. For example, what if there were no sight, touch, smell, hearing, or taste? What then?
Again, I think this could all be explored both mathematically and computationally to provide totally new approaches to AI. Maybe researchers could examine animals or humans that lack some of their senses to see what happens then. There are animals, insects, and bacteria that live in extreme environments that might give us some clues: animals that live deep in the sea, where there is no light, for example.
That’s what I mean by magic AI (thanks again to Arthur C.).
Let’s do it!