Over the past few weeks I’ve been getting more chatty with my tech. Voice control is becoming more standard in today’s gadgets, and from smart home tech to smartwatches to fitness companions, the age of voice input is definitely upon us.
Yet while these leaps of progress should be reason to celebrate, I’m just not at the point where I can ask an inanimate object for a weather update without feeling like a prized asshat. I knew I wasn’t the only one, so I did a little crowdsourcing to gauge why other people might also not yet be at ease talking to technology. Turns out, my assumption was right.
The responses I got underlined two aspects of the problem: the bit where we're talking to the robots, and the bit where they're talking back. To get to the point where we'll happily chat to Google or Siri on the bus without feeling the judging looks of others, both parts need to be reconciled.
Yelling at a bit of plastic, no matter how good its conversation skills may be, is going to feel odd for a long time to come simply because it's unfamiliar – but it gets a lot less strange when it actually works.
Whether it's talking to Google Now, Siri, or any of the myriad smart home appliances, the conversation is often too stilted and unlike speaking to another human – but we're gradually moving past this. For example, I've recently been testing the Oakley Radar Pace glasses, which use Intel's natural-language processing to let you chat with the AI in a more conversational manner; Google is doing something similar with its Assistant chatbot. The moment you remove the need for pre-set commands, the technology starts disappearing behind the voice.
Sadly, it’s usually not long before I’m repeating or rephrasing sentences so the AI can understand me; the cracks start to show, and once again I’m “that guy” yelling at his tech in the street, and all too aware of it.
In those brief moments where the conversation gets into a more natural rhythm, I become slightly less embarrassed, and I glimpse the future technological singularity that we keep getting warned about, where robots outstrip human intelligence.
Which brings me to the second bit: the stuff that's being fed back. In robotics there's a hypothesis called the uncanny valley, which posits that our emotional responses to robots change as they become more humanlike.
We respond more positively the more lifelike they are, up to a point where the likeness is so close that we start feeling repulsion instead. Finally, when robots become nearly indistinguishable from humans, our response shifts positive again.
With AI, it feels like we’re still in that bit before the drop, where AI is too stupid to evoke repulsion but not smart enough to ease the peculiarity.
The usefulness of the conversation matters too. For the most part I don't feel I gain much by asking my smartwatch questions I could just as easily answer by pulling out my phone. But take, say, Amazon's Alexa, which I use mostly for playing music and reading out the latest news – things that otherwise take a bit more effort – and the exchange feels more worthwhile. The utility justifies the weirdness of talking to an object.
We all know how rapidly AI is advancing, but it’s hard to guess how long it will be before it’s smart enough to stop us feeling like chumps when we talk to it. I think you can compare it to smartphone payments: yes, you feel silly holding up the queue in Pret trying to get Apple Pay to scan, but one day in the not-too-distant future, when everyone is doing it, no one will care.