Can the Internet hear me now?

For the past two decades, the web has been optimized for sight and touch. This is about to change in a big way. Apple’s Siri, Microsoft’s Cortana, and the Amazon Echo are ushering in an era of voice-controlled devices and services. Soon, Facebook M will join the fray. These digital assistants reveal the beginnings of a transformation: increasingly, we will interact with the web, and everything it contains, primarily by voice.

There’s one glaring problem with this brave new world: as the Internet of Things talks back to us, much of what it has to say will be for our ears only. How do we keep these conversations private and personal? Answer: with hearables. I spoke to several people in the budding hearables industry. All, no surprise, are big believers in the technology.

Better hearing means smarter listening

San Francisco-based Nuheara makes a wearable for the ear — scheduled to go on sale in early 2016 — that allows the wearer to connect with specialized voice-enabled apps.

David Cannington, co-founder at Nuheara, told me that his company believes most of us will soon “be interacting with our smart devices more and more with our voices than our fingers.” The newly launched Apple TV speaks to just such a vision, though Cannington believes we are only at the start of this great transformation.

Soon, our cars, refrigerators, thermostats, subway turnstiles and all manner of futuristic devices will speak to us directly. The possibilities, he says, “range from personal digital assistants, to translation on the fly, to AI-driven voice enabled apps.”

Cannington’s hope, of course, is that Nuheara will be the device of choice for relaying these voices to our brains while cutting through all the noise. “As thousands of developers around the world [start] working on voice recognition apps, we plan to be their partner of choice as an innovative hearing technology platform.”

Gadgets that listen better than ears

Hearables will shape not just what we hear but how we hear it. Noah Kraft, CEO and co-founder of Doppler Labs, told me his company is “focused on hearables because we think that hearing enhancement and optimization of real world experiences like concerts is a new scenario that can dramatically improve life’s most amazing experiences.”

Want to pump up the bass at a live concert? Remove extraneous noise while having a conversation on a crowded bus? That’s what Doppler Labs earbuds are designed to achieve. The product, which began as a Kickstarter, is expected to ship in December at a cost of $199 a pair.
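To make “pumping up the bass” concrete in signal-processing terms, here is a minimal Python sketch of the general idea — not Doppler Labs’ actual algorithm, just an illustrative filter: split out the low frequencies, amplify them, and mix them back in.

```python
import numpy as np
from scipy import signal

def bass_boost(audio, sample_rate, cutoff_hz=250.0, gain_db=6.0):
    """Boost frequencies below cutoff_hz by roughly gain_db decibels.

    A crude stand-in for the kind of real-time filtering a hearable
    might apply: isolate the low band with a Butterworth low-pass
    filter, scale it, and add it back to the original signal.
    """
    b, a = signal.butter(2, cutoff_hz / (sample_rate / 2), btype="low")
    low_band = signal.lfilter(b, a, audio)
    extra_gain = 10 ** (gain_db / 20.0) - 1.0  # applied to the low band only
    return audio + extra_gain * low_band

# Demo: a 100 Hz tone (bass) and a 2 kHz tone, one second each.
sr = 16_000
t = np.arange(sr) / sr
bass_tone = np.sin(2 * np.pi * 100 * t)
treble_tone = np.sin(2 * np.pi * 2000 * t)

boosted_bass = bass_boost(bass_tone, sr)      # noticeably louder
boosted_treble = bass_boost(treble_tone, sr)  # nearly untouched
```

A real earbud would do this continuously on the live microphone feed, with far more sophisticated filters — the point is simply that “hearing the world through software” reduces to filtering and remixing an audio stream in real time.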

Think altering how we hear is unnecessary? Not so fast. Our ability to control screens — what we see — has become so commonplace that we may not realize just how much potential there is in controlling what and how we hear. As Kraft told me, “in 5 years everyone will be signal processing their hearing 24×7, allowing for cool AR (augmented reality) scenarios but also passively ensuring they never hear noise again.”

Doppler isn’t alone in this vision. It’s rumored that Apple’s recent hiring of a key Microsoft audio engineer reflects the company’s interest in augmented reality. After all, artificially augmented sounds created just for you will make virtual experiences seem radically more realistic — and also way more cool.

“Once people put a computer, speaker and mic in their ear they will of course be able to do telephony, alerts and virtual assistant scenarios, but we think hearing optimization is a better way to get people to adopt hearables,” Kraft told me.

Why hearables may be overdue for innovation

The vision for hearables, pun intended, is still in its early days. Design and battery life are the primary stumbling blocks holding back the industry. Scott Amyx, founder and CEO of Amyx+McKinsey, which provides strategy and support for the wearables industry, told me that current ear-based wearables remain “fatiguing and inconvenient.” However, he is very optimistic about the long term.

“Advancements in battery life and sensor miniaturization will allow for smaller, more comfortable forms for extended use,” Amyx said. “As objects around us awaken, information will be seamlessly communicated to us via hearables.” In fact, Amyx, like so many others, believes the “primary human-computer interface” will not be touch, keys or screens, but voice — and voice requires hearing.

There was a time when we might all have fretted over the intrusiveness of a voice speaking directly into our ear. But with Siri and Cortana becoming more commonplace, and with a billion people constantly staring into their smartphones, the idea of the web inside our ear may actually be long overdue.