This is the official Perth blog site for posts, comments, and other contributions about leadership, behavioral finance and economics, and about management generally, as well as other related topics that take our fancy.

A Turing test for AIs that have passed the Turing test

You know the Turing test, right? A judge converses by typing with an unseen respondent and must guess whether it's a human or a machine. The test has been run for many years. Usually the humans have won, but the AIs are catching up.

So now some inventive academics (yep, they even exist) have devised a new Turing test: instead of holding a typed conversation, you just have to give one word. The judges then must decide, based on that one word alone, whether you're human or not.

A bit inscrutable, right? How can they even do that? If you need to ask, I shouldn't tell you. But I can tell you which word judges found most convincingly human.

That word? “Poop.” Go figure.
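To make the one-word protocol concrete, here's a toy sketch in Python. Everything in it is made up for illustration: the candidate words, the judge model, and the probabilities are my own assumptions, not anything from the actual study.

```python
import random

# Toy sketch of the one-word Turing test: each simulated "judge" sees a
# single word and guesses whether a human or an AI produced it. The
# words and the judge's behavior below are hypothetical, purely to
# illustrate the protocol.

CANDIDATE_WORDS = ["poop", "love", "banana", "algorithm", "please"]

def judge_says_human(word, rng):
    # Hypothetical judge model: quirky, visceral, emotional words read
    # as "human"; dry or formal words read as "machine".
    quirky = {"poop", "love", "banana"}
    p_human = 0.8 if word in quirky else 0.4
    return rng.random() < p_human

def run_trial(word, n_judges=100, seed=0):
    # Fraction of judges who guessed "human" for this one word.
    rng = random.Random(seed)
    votes = sum(judge_says_human(word, rng) for _ in range(n_judges))
    return votes / n_judges

for word in CANDIDATE_WORDS:
    print(f"{word:10s} judged human {run_trial(word):.0%} of the time")
```

The point of the sketch is just that a single word carries a surprising amount of signal: aggregate enough judges and the "most human" words separate cleanly from the rest.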

In 2014 the Turing test was apparently won by a "13-year-old Ukrainian boy" (actually a chatbot) that convinced a third of the human judges it was human. The writing is on the wall that in the not-too-distant future the Turing test will be essentially obsolete. That will be when AI chatbots routinely convince all human judges that they are human. We're far from being there yet, though.

There's something called the Loebner Prize, a competition based on the Turing test. A program called Mitsuku has now won it four times. But the sense is that the competition itself is getting tired. That's what I believe too. Been there, done that; what's next is the interesting question.

The Turing test had as its aim that a machine should exhibit intelligent behavior you can't distinguish from a human's. Leaving aside the obvious conundrum of how often a human actually exhibits intelligent behavior, the test is essentially about the appearance of human intelligence, as portrayed in a text chat. Let's say we get there. What's the next set of goalposts?

It seems to me that intelligence is a very flawed goal. Intelligent behavior doesn't necessarily mean human behavior, which can often be very unintelligent, irrational, and flawed. It's normal human behavior to have mental disorders (see my recent post "Mental health needs disruption on a massive scale!"). A conversation that is 100% rational isn't normal, which means it isn't quintessentially human. And isn't quintessentially human what we really want?

I have a list of my own requirements when it comes to having an AI that is human, natural, and believable. Intelligence is only one of the desiderata, and it isn't necessarily even the most important one. How about wisdom rather than intelligence, for a start?

Here are some of the other things I want my AI/chatbot to possess:

How about consciousness? OK, it's difficult; so what? We've just got to figure it out, that's all. (For my take see "Is the immune system the seat of consciousness?") If the AI isn't conscious, can we really trust it? Don't we want it to have amazing insights, which can only come from consciousness?

What about willpower? Don't we want our AIs to have characteristics like courage, determination, guts, and obstinacy in the face of insurmountable obstacles? I tackled that one recently; see "Can AIs have will-power?". No doubt I'll have a completely different idea at some stage, when the spirit (shouldn't AIs be subject to that too?) moves me.

Passion: don't we want our AIs to be passionate on occasion? Who wants a stolid AI anyway? I want my AI to be inspiring when needed, to help me go the extra mile. That's what we expect our leaders of all stripes to be and do, right? A bit of the old Schwarzenegger swagger, perhaps? "I'll be back."

Mentally ill: Up to half of all humans have a mental disorder at some point in their lifetime. Is that a critical marker of humanity? Isn't that a marker of the type of person who might break through, if she doesn't fail? I want Van Gogh, not Mr. Spock. Does that make me a bad person and a logic-phobic AI-hater? See my post on mental health for more (or less) insight ("Can AIs suffer from mental health issues?").

In the same vein, will some AIs commit suicide? Is suicide the ultimate marker of free will we can expect from a true, conscious, self-aware AI? Is the depression that drives suicide telling us there is something unique about the self-awareness of the being, AI or otherwise? That it must have the stamp of uber-humanness that we cannot guarantee any other way? See also my post on AI suicide ("Will AIs Commit Suicide?") for even more gothic thoughts.

I want imagination. Not just any old imagination, either; I want a fevered imagination, one that sometimes goes over the edge, as we might expect from a good old-fashioned mental disorder. The imagination I want is colorful, not merely mathematical (although it might sometimes be that), and maybe even sometimes beyond the pale. Because that's what we humans are like too. Otherwise an AI will not be able to pass my advanced uber-Turing test.

And, for the curtain call, irrationality. Humans are often irrational, so shouldn't their mechanical doppelgangers be too? Is a perfectly rational AI too unbelievable to have a meaningful interaction with humans? Something any human would see through in any type of Turing test? Isn't what we are really looking for wisdom, not just or mainly intelligence? Check out my post about that particular snark ("Can we create irrational AI?").

There you have it, folks. All that Turing-test stuff we've been thinking about is old and passé. We need a new test to meet the requirements of today, not the requirements of Turing's time, prescient though he was.

Get to it you AI guys!



