Expect Labs, the company behind the MindMeld app and the MindMeld API for creating intelligent apps, has released a new service that specializes in voice-powered searches for movies and TV shows (and, theoretically, anything). The company is hailing it as a victory for couch potatoes, who could be saved the effort of lifting their fingers to find stuff to watch, but it’s content providers that would really win if the API works well.
Basically, Expect Labs hopes its new service will empower a horde of apps that mimic the Amazon Instant Video voice search capability of the Kindle Fire tablet, co-founder and CEO Tim Tuttle (pictured above, on the left) said. Cable providers, Netflix, Hulu or any company providing access to digital content could target consumers at the point, as he explained it, “when you’re sitting on your couch and you just want to talk your way to find something to watch, you don’t want to use those terrible on-screen guides.” (Although the voice search can also work with other types of content, such as news or shopping apps.)
They could be mobile apps or smart apps on set-top boxes, or, more likely, mobile apps communicating with smart TVs or set-top boxes. Tuttle said about 80 percent of Expect Labs’ media-industry users fit into the latter category, opting to use the mobile device as the point of contact rather than having customers speak to their TVs directly. From what he’s seeing, he added, “The TV just becomes the place you display the content you want to view in a large-screen format.”
Tuttle said this new voice-search service is different from the original MindMeld API in its focus and simplicity. He called the latter a “Swiss Army knife” API for hardcore developers who wanted to build any variety of contextual apps, but the new service consists essentially of a graphical web crawler for pointing MindMeld at the content an app needs to learn (from a site like IMDB, for example, or users’ own data) and a few lines of code for adding in voice navigation. What hasn’t changed, Tuttle said, is Expect Labs’ focus on “developing these language models around a specific content domain.”
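The article doesn’t show any actual MindMeld code, but the idea Tuttle describes — ingest a specific content domain (say, a movie catalog crawled from a site like IMDB), then treat a transcribed voice query as plain text to search against it — can be sketched as a self-contained toy. Everything below (the class, the ranking scheme, the stopword list) is my own illustration of the concept, not Expect Labs’ API:

```python
# Toy illustration of domain-specific natural-language search
# (NOT the MindMeld API): index a small "crawled" movie catalog,
# then match a transcribed voice query against it.

import re
from collections import defaultdict

# Words too generic to help ranking within a movie catalog.
STOPWORDS = {"a", "an", "the", "of", "me", "show", "about", "movie", "movies"}

def tokenize(text):
    """Lowercase, split on non-alphanumerics, drop stopwords."""
    return [t for t in re.split(r"[^a-z0-9]+", text.lower())
            if t and t not in STOPWORDS]

class DomainIndex:
    """Inverted index over one content domain (here, movies)."""

    def __init__(self):
        self.postings = defaultdict(set)  # token -> set of doc ids
        self.titles = {}                  # doc id -> display title

    def add(self, doc_id, title, description):
        """Ingest one item, as a crawler feeding the index would."""
        self.titles[doc_id] = title
        for token in tokenize(title + " " + description):
            self.postings[token].add(doc_id)

    def search(self, spoken_query):
        """Rank titles by how many query tokens they match."""
        scores = defaultdict(int)
        for token in tokenize(spoken_query):
            for doc_id in self.postings.get(token, ()):
                scores[doc_id] += 1
        ranked = sorted(scores, key=lambda d: -scores[d])
        return [self.titles[d] for d in ranked]

# "Crawl" a tiny catalog.
index = DomainIndex()
index.add(1, "King of the Kickboxers", "A martial artist hunts a kickboxing movie ring.")
index.add(2, "Bloodsport", "An underground full-contact fighting tournament.")
index.add(3, "The Princess Bride", "A fairy-tale adventure about true love.")

# By the time a voice query reaches the index, it is just text.
print(index.search("show me movies about kickboxing"))  # ['King of the Kickboxers']
```

The point of the sketch is the division of labor Tuttle describes: the hard, reusable part is building the language model and index around one content domain; the app-side voice plumbing then reduces to passing transcribed text into a search call.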
This type of capability could be a boon for content companies if it works reasonably well. Natural language search makes it a lot easier to find what you really want rather than just going by genre, actor or title, and voice is a more natural interface for it than text. I think it’s not so much about being lazy (although crafting keyword searches does seem mentally taxing for some) as it is about finding a media app that delivers what consumers expect. We’ve already proven we’ll pay for Netflix and Roku and Apple TV; now imagine if we could actually find the stuff we want to watch.
In a truly intelligent world, though, it seems we’d ditch the mobile devices and talk directly to smart TVs and content devices such as Roku or Apple TV. We’re already heading in that direction with voice APIs for other connected devices because it’s more convenient. The less we have to switch from app to app, or get up to grab our phone from the other room, the happier we’ll be. All of this, of course, assumes there’s intelligence built in so each device or app knows when it’s the target of a command.
Ideally, I could say “Show me movies about kickboxing” into the ether while sitting on my couch and see all the titles come up on whatever services I use. Then I’d say “Play King of the Kickboxers” and it would happen. Hmm, maybe it is about laziness after all.
For more on Tuttle’s vision for the future of artificial intelligence, check out this Structure Data session featuring him, SwiftKey’s Ben Medlock and True Ventures’ Om Malik.