Though there’s a lot of activity around real-time search, structured search, and social search, nearly all the searching we do today is for text, with text. But there are some really interesting things happening in non-text search, as well, and their potential is just beginning to show.
Searching for images on Google is actually text-based, since it relies on metadata associated with the images. But some developers are finding ways to search actual images as opposed to text.
PhotoSketch and GazoPa find pictures that roughly resemble anything you draw into an online interface. More impressive is Imprezzeo, which, rather than matching ad hoc drawings, matches actual images you submit and ranks the results from most similar to least. It even includes a text option to get you started, so you can begin by searching for the word “horses,” pick the result that matches your needs, and find several more like it.
There’s obvious value in being able to retrieve images with that kind of speed and precision. I can also imagine the technology being applied to online shopping services, for example — a sort of Pandora for clothes, art or furniture that helps you find items similar to those you already like.
TinEye from Idée Labs uses a pattern-recognition algorithm to analyze images down to the pixel and find lookalikes on the web, allowing photographers to track their wares online or consumers to protect their online family photos. But sophisticated image matching could potentially be applied in more meaningful ways. Law enforcement agencies already use face-recognition tools to match police sketches against databases of criminals. If Coke Zero can amass a vast store of faces and match lookalikes within it just for a marketing campaign, surely the same technology can be applied as a powerful crime-fighting tool.
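Pixel-level lookalike matching of this sort is often built on perceptual hashing, which reduces an image to a compact fingerprint that barely changes when the image is lightly edited. Here is a minimal sketch of one common technique, a difference hash (“dHash”), run on tiny hand-made grayscale grids; whether TinEye’s actual algorithm resembles this is purely an assumption.

```python
def dhash(pixels):
    """Fingerprint a grayscale image (rows of 0-255 brightness values).
    Each bit records whether a pixel is brighter than its right neighbor,
    so the hash captures the image's gradient structure, not exact values."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits: a small distance means similar images."""
    return sum(x != y for x, y in zip(a, b))

# Made-up data: an "image", a lightly edited copy, and an unrelated image.
original = [[10, 60, 20, 200, 90], [30, 30, 80, 80, 10],
            [5, 120, 120, 40, 40], [90, 90, 15, 200, 15]]
slightly_edited = [[12, 58, 22, 198, 92], [28, 32, 78, 82, 8],
                   [7, 118, 122, 38, 42], [88, 92, 13, 202, 13]]
different = [[200, 10, 90, 20, 60], [80, 10, 30, 80, 30],
             [40, 5, 40, 120, 120], [15, 200, 90, 15, 90]]

h0, h1, h2 = dhash(original), dhash(slightly_edited), dhash(different)
print(hamming(h0, h1))  # -> 0: the small edits leave the fingerprint intact
print(hamming(h0, h2))  # -> 9: the unrelated image is far away
```

Ranking search results “from most similar to least,” as Imprezzeo does, then becomes a matter of sorting candidates by this distance. Production systems first shrink every image to the same small size so fingerprints are comparable regardless of resolution.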
Using voice in search queries is becoming more common all the time. Last week, for example, Sprint introduced voice-activated search on its Samsung Intrepid phone using an interface from Tellme (which has offered a stable of voice-activated information, like movie showtimes and weather, for years).
Searching for actual sounds is less common but exciting enough to warrant further exploration. One example is Melody Catcher, a novel service that lets you search for a song by “playing” it into a web-based piano. But it is not nearly as well known as Shazam, the iPhone app that recognizes and identifies songs it hears. Shazam has already become profitable and recently earned a funding injection from Kleiner Perkins Caufield & Byers, leaving it well positioned to explore the possibilities of audio search as it pours more capital into the business.
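A toy sketch shows why a hummed or piano-played query can find a song even in the wrong key: reduce each melody to its contour of up/down/repeat steps (the so-called Parsons code) and match contours instead of exact notes. That Melody Catcher works this way is an assumption, and the catalog below is entirely hypothetical.

```python
def contour(notes):
    """MIDI note numbers -> a 'U'/'D'/'R' string describing pitch movement."""
    return "".join(
        "U" if b > a else "D" if b < a else "R"
        for a, b in zip(notes, notes[1:])
    )

# Hypothetical catalog; the note numbers are illustrative, not exact scores.
CATALOG = {
    "Ode to Joy":    [64, 64, 65, 67, 67, 65, 64, 62],
    "Frere Jacques": [60, 62, 64, 60, 60, 62, 64, 60],
}

def search(played_notes):
    """Return titles whose melodic contour contains the query's contour."""
    q = contour(played_notes)
    return [title for title, notes in CATALOG.items() if q in contour(notes)]

# The same opening phrase transposed up a fifth still matches by contour.
print(search([71, 71, 72, 74, 74]))  # -> ['Ode to Joy']
```

Shazam’s problem is harder still, since it must match noisy recordings rather than clean notes; its published approach fingerprints peaks in the audio spectrogram, but the contour idea above conveys the same principle of matching on invariant structure.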
If images can be challenging to search using text, then video should be even harder, right? Not necessarily, since video contains more indexable information than images. Unlike TinEye and Imprezzeo, though, video search still relies on metadata, so it’s essentially still text-for-text searching. But there’s some interesting activity here, too.
A startup called AnyClip won a lot of attention at last month’s TechCrunch50 event for promising to let users find “any moment from any film ever made.” Like YouTube, the service is searched by entering text queries. However, the site crowdsources metadata, letting users add information about each clip that will help future searches. That metadata is important because lines of dialogue, while they’re an easy way to find and mark movie moments, are not the only way people want to search for them.
Movies are AnyClip’s first focus — which it might try to monetize as a discovery engine that leads users to watch more films — but its founders imagine migrating to other areas eventually. Sports and TV shows come to mind, but what else could it do? Surveillance video, perhaps?
As search evolves in these three areas, the next interesting question to examine will be how each type of search could be used in combination with both traditional and emerging types of search.