How to make devices sound less like robots: add lovable flaws

Devices are increasingly communicating with us via audio, asking us for voice inputs and responding with their own voice as well. But most of these dialogues still sound like humans giving commands to robots, and not like conversations. That’s a problem, argued Human Machine Interactions founder and CEO Brant Ward at Gigaom’s Roadmap conference in San Francisco on Tuesday.

“If the system is apathetic, so is the user,” Ward said, arguing that communication with devices should be more like in the movie AI. “The onus is on the designers of these systems to make us want to use them,” he said, adding that this is especially important in an IoT world where more and more devices communicate with us without traditional screens.

So how can you make devices sound less like monotonous robots? Ward argued that a lot of it has to do with adding human flaws: filler words like “so,” “you know” or “anyway,” or even accents and other quirks that make a voice imperfect, and thus less sterile.

Devices should also learn about situations and contexts before reacting to them. “The system has to know you in order to fully engage you,” Ward said, suggesting that a device should be able to react to workload, weather and behavioral patterns to become more human-like.

The good news is that we may not be that far away from these kinds of flawed but lovable devices. “We are entering an era of meaningful interaction,” he said.

