If Watson is going to be a $10B biz, IBM had better figure out the cloud

During an event Thursday morning officially unveiling the new IBM(s ibm) Watson Group, division head Mike Rhodin said something very true about the challenge of taking cognitive computing mainstream: “Eras are not ours alone … We make markets, we create entire industries and that’s what we’re going to do with this.”

However, he added a couple minutes later, “We recognize that the power of this technology is really what it can do for everyone, and to get to everyone, we need help.”


Most of the business press coverage of the new division has focused on two storylines: (1) This is a huge opportunity for IBM, maybe bigger than the mainframe was 50 years ago, and (2) Watson has been a revenue disappointment for IBM thus far. Both are probably true. Generating just $100 million in revenue over the last three years, as the Wall Street Journal reported, isn’t great. But generating $10 billion a year by 2023, as CEO Ginni Rometty reportedly expects the IBM Watson division to do, would represent a significant business.

I think the Watson developer cloud and API platform that IBM announced in November is the key to making Watson everything IBM expects it to be. That’s the channel by which IBM can actually reach a community of innovators and push cognitive computing beyond the low-hanging fruit that Watson is tackling right now. But IBM has to do it right.


The same old, same old isn’t enough

Frankly, as great as it is to improve health care, as laudable as it is to try to cure cancer, and as practical as it is to try to make better retail predictions, these aren’t exactly mind-blowing applications for something that IBM touts as pretty much the most revolutionary model of computing ever. Really, they’re the same types of problems that companies have been targeting with big data technologies for years. Health care is already a major area of focus for startups and established big data players alike. Better recommendations are nice, but in very few industries will they make or break companies.

And IBM appears to be seeing about the same results as other companies pushing big data technology, such as Hadoop. There’s some money coming in, but it’s not yet a billion-dollar business. There’s potential for really big deals, but it probably means slogging through long proofs of concept and deployment cycles — IBM has been working with its current health care partners for years now — because this stuff is complicated, even if some Watson services are delivered via the cloud. There are existing data-management systems and applications in place. In the case of health care, law, finance and other fields IBM is targeting with Watson, wrong decisions are costly.

If anything, IBM is probably behind the curve in these areas, and Watson might not be the way to jump ahead of it, at least at scale. Many CIOs are already trying to figure out a strategy around Hadoop and the IT ecosystem that has taken shape around it, which promises cutting-edge analytics and even machine learning capabilities. Watson is different from those technologies and maybe better at some things, but I wouldn’t want the task of having to explain those differences, and then why it makes sense to divert attention and budget away from existing strategies and toward Watson.

It’s possible that some other technologies will prove just as good as Watson at certain things — perhaps sooner and at a lower price. Investors have put hundreds of millions of dollars into machine learning and artificial intelligence startups over the past few years, some targeting specific industries such as health care and retail, while others are more general-purpose. They’re out in the field, they’re innovating and they’re not going anywhere just because Watson is now strolling into town.

APIs equal innovation

Cloud computing — specifically, exposing Watson’s cognitive computing capabilities as APIs — actually gives IBM an edge. When capabilities are easy (and relatively cheap) to consume, innovative people and companies do all sorts of great things. Just look at the growth of Amazon(s amzn) Web Services into a multi-billion-dollar business in less than 10 years. We always had computers and storage, but AWS opened up a whole new way of consuming them and inspired developers to build all sorts of new products.
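To make the point concrete, here’s a rough sketch of what consuming a cognitive capability as a web API looks like from a developer’s side. Everything here is hypothetical — the endpoint, the request payload and the response fields are invented for illustration and don’t reflect any actual Watson or AWS contract — but the shape is typical: a few lines of HTTP plumbing instead of an in-house deployment.

```python
# Hypothetical sketch: consuming a cognitive-computing capability as a
# plain HTTP API. The endpoint, payload shape and response fields below
# are invented for illustration, not a real Watson API contract.
import json
import urllib.request

API_URL = "https://example.com/v1/sentiment"  # placeholder endpoint

def build_request(text, api_key):
    """Package a text-analysis call as an ordinary HTTP POST."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_key,
        },
    )

def parse_response(raw_json):
    """Pull the label and confidence out of a (hypothetical) JSON reply."""
    doc = json.loads(raw_json)
    return doc["sentiment"]["label"], doc["sentiment"]["score"]

# A canned response stands in for the network round-trip.
sample = '{"sentiment": {"label": "positive", "score": 0.92}}'
print(parse_response(sample))  # ('positive', 0.92)
```

That’s the whole integration surface: no cluster to provision, no model to train, just a request and a response — which is exactly why the consumption model lowers the barrier for experimentation.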

[Chart: AWS revenue estimates.]

AWS’s reach doesn’t stop at basic computing, either. Startups using its Elastic MapReduce Hadoop service have found some really innovative use cases for that type of data processing, and because of the consumption model they are, on average, probably much more creative than companies deploying Hadoop clusters internally. And while I don’t have the numbers, I’d argue that Elastic MapReduce generates more revenue than many might assume.

I spoke with AlchemyAPI founder and CEO Elliot Turner a few months ago, and after noting how widely businesses use his company’s core deep learning APIs, he pointed out that the company gives away API access free to academic researchers. They’re investigating how it applies to all sorts of areas beyond the company’s current text-analysis services, and, Turner said, deep learning — which is just one slice of IBM’s vision for cognitive computing — will soon power “all sorts of really cool applications that … none of us are even thinking about now.”

To IBM’s credit, it appears to understand this and talked all the right talk about its developer cloud in November. Now it just needs to actually offer up a platform that meets the needs of a broad base of developers in terms of cost, capabilities, openness and accessibility. It didn’t get off on the right foot (see below), but there’s plenty of time.

Expect to hear us talk in depth about this topic at our Structure Data conference in March (stay tuned for more details). A main theme of the conference is using data to build entirely new products and capabilities rather than just using “big data” as a euphemism for “better business intelligence.” Cognitive computing represents possibly the pinnacle of big data capabilities for the next decade, but I don’t see how it can reach its full potential unless it’s exposed to a community of developers with new ideas on how to use it.