Voice control is hot. Amazon announced a voice-controlled microwave late last year; naturally, it works with Alexa, the belle that everyone wants to take to the ball. But even voice control is starting to feel passé. Sure, Alexa can take orders and answer everyday questions. But what if Alexa knew whether you were angry, skeptical or even deceitful when you asked? Then it could interact in numerous new, and maybe very disquieting, ways.
Detecting emotions in voices is already well on the way (see here for a list of companies developing the technology). So, we can be pretty sure Amazon is working on this for Alexa as we speak. You might think that Alexa deciding you are angry based on your voice boggles the mind, but in actuality we’re well past that point.
And remember, I’m not even talking about facial recognition, which is proceeding apace. That field, too, is working on cracking the code of emotions as displayed in facial expressions; there, the mot du jour is micro-expressions. So smart speakers coupled with a camera, a common thing now, can read your emotions from both voice and facial cues. It’s all downhill from there.
Of course, there are some possibly beneficial uses of this technology. Numerous, in fact. Cognitive behavioral therapy for mental health is one. Leadership assessment and training is another. Sales training. Career advice. And so on and on. So there’s much more boggling to come.
Do you remember Hal, the wayward computer in the movie “2001: A Space Odyssey”? Hal murdered all the astronauts except one. They were onto him, but he/she/it still outmaneuvered them with a bravura performance of deception, one worthy of the most Machiavellian human. It was lucky one got away. The emerging question now is: how many humans will be so lucky once AI truly flourishes?
In the movie we see Hal carefully observing the astronauts to gauge their thoughts and state of mind. Even when they go into a soundproof module to outwit him by not letting him hear what they are saying, he still manages to glimpse them talking inside the module (which has glass walls) and to read their lips. Crafty, right? Sounds impossible? Nope, check out this article about ALR (automatic lip-reading) software. Director Stanley Kubrick was truly ahead of his time.
How soon will Alexa ship with this capability, by the way?
How does this all work? Here’s one way. Instead of just sending out fake news on its own, a bad guy can track how you react to it emotionally, using both voice and facial recognition, to see whether you are negative or positive on it, or somewhere in between. Then it can take action based on your reaction. It can do this not just for individuals but also for teams, groups, cultures or even countries. In other words, it can take fake news to the next frightening level.
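That feedback loop can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: the stub "classifier" just maps made-up cue names to coarse labels, where a real system would plug in actual voice and facial emotion-recognition models.

```python
# Hypothetical sketch of the react-and-adapt loop described above.
# classify_reaction() is a stub; the cue names are invented, not any real API.

def classify_reaction(cue: str) -> str:
    """Stub emotion classifier: maps an observed cue to a coarse label."""
    negative_cues = {"frown", "raised_voice", "sigh"}
    positive_cues = {"smile", "nod", "laugh"}
    if cue in negative_cues:
        return "negative"
    if cue in positive_cues:
        return "positive"
    return "neutral"

def next_message(current: str, reaction: str) -> str:
    """Escalate, reframe, or repeat the message based on the reaction."""
    if reaction == "positive":
        return current + " [amplify: push a stronger version]"
    if reaction == "negative":
        return current + " [reframe: try a different angle]"
    return current  # neutral: repeat and keep measuring

# One pass of the loop: observe a cue, classify it, adapt the message.
message = "seed story"
for observed_cue in ["smile", "frown"]:
    message = next_message(message, classify_reaction(observed_cue))
```

The disquieting part is how little machinery the loop itself needs; all the sophistication lives in the emotion classifiers, which, as noted above, are already being built.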
So how about the broader uses of AI for deceit? It seems inevitable that it will be pressed into service, not just by the military and intelligence agencies, but also, quickly, in commerce. Competitive intelligence and spoofing are going to be big uses. How about deceiving potential mates on match.com? Or, for that matter, how about your better half?
Remember Operation Bodyguard, the Allied operation in World War II to deceive the Germans about the real landing target in Normandy? It worked, right? How about 10,000 Bodyguard-class deceptions running at any given time in the world? And that’s just for government and military use. How about the millions of spoofing and deception operations by regular people like you and me (you and me excepted, of course)?
It seems to me that the first ports of call for AI deception operations will be cyber-operations to deceive national governments about the intentions of their perceived military threats. Competitive deception will be close on their heels. But the potential for personal use is huge. Will there be apps for that? Very likely, I would think.
The list obviously goes on. There’s already a problem of deception in academic research. Likely there’ll be an app for that too. Spoofed citations anyone? Who needs fake papers anyway?
On this telling, AI democratizes fake news so that anyone can produce it, deceitfully and credibly. Of course, there will be anti-deceit apps too, but chances are they will always be at least one step behind.
Think about this. It’s the industrialization of fake news, but now with emotional content and cues added. Governments or even individuals can mount massive operations using what we can call “weapons of mass deception”. It’s like the Russian operation against the 2016 US elections, but now run by any country or individual that wants to destabilize something: a government, company, organization, or society.
We tend to think of AI doing good things, like making medical diagnoses quicker, cheaper and more accurate, or enabling cars to drive themselves more safely. But we forget about Hal and the ever-present motives of bad actors, in government, private companies and society generally.
One way or the other, we’re going to have to figure this out. We need things like anti-deception social and technical architectures, not to mention university courses in the same.
Nor can we stop any of this. The genie is out of the bottle. AI is still seen as a social force for good, even if there are a few blemishes. That will have to change, and probably will, after the citizens of countries have been sucker-punched a few times.
The implications for global conflict and for democracy don’t seem so good, though. This gives the bad guys a really big stick to beat the good guys with.
We are entering a new age: the era of weapons of mass deception. They will probably do far more harm than the traditional weapons of mass destruction, because there will be no barrier to using them.
The only question now is not whether AI will be deceptive, but how amazingly deceptive it will be.
We won’t have long to find out.