
Pivotal hopes its new big data pricing makes it a real platform

Pivotal, the big data and cloud computing spinoff from EMC and VMware, has announced a new license model for its suite of big data products that could make it more palatable for buyers.

Rather than paying by the terabyte here and by the node there, and having different prices everywhere, customers can now pay by the computing core and use those cores to run any of the company’s database products. So, for example, a company with a license for 50 cores could run 30 cores of the Greenplum database and 20 cores of the GemFire database one day, and then 50 cores of the company’s HAWQ SQL-on-Hadoop query engine the next day.

Customers buying subscriptions to this new package of products can use Pivotal HD, the company’s Hadoop distribution, on an unlimited basis for no extra charge.

Source: Pivotal

According to Michael Cucchi, senior director of product marketing at Pivotal, the old model of different pricing for different products made it difficult for some customers to justify buying Pivotal’s software. That’s a problem because the company’s vision of building a platform for the next generation of data-driven applications depends on customers having all the components and being willing to keep all their data in one centrally available place.

“We’re telling customers they have to capture [all their data] to be competitive,” he said of the company’s previous licensing models, “yet they’re being charged, or taxed, for every terabyte they put into their data lake.” Now, he added, “[We’re] basically dumping the idea that you should be taxed on how much you store. … [Users can] subscribe to this and store absolutely everything.”

In theory, the new pricing model could be a boon for Pivotal’s bottom line for years to come. Assuming customers keep adding data to their Hadoop environments, they’ll keep wanting, or needing, more cores to analyze it. Cucchi declined to comment on the exact pricing, but said it would be “closer to pure-play Hadoop vendors’ pricing” (i.e., Cloudera, Hortonworks or MapR) than to traditional data-management vendor pricing (i.e., Oracle or IBM).

(Listen to Hugh Williams, Pivotal’s senior vice president of research and development, talk about the company’s big data strategy on our Structure Show podcast below.)

[soundcloud url="https://api.soundcloud.com/tracks/129714318?secret_token=s-VQfwj" params="color=0092ff&auto_play=false&show_artwork=true" width="100%" height="166" iframe="true" /]

One big ding on Pivotal’s strategy might be that its collection of big data technologies was largely developed a decade or more ago, before the advent of Hadoop, and has been retrofitted to work with it. Pivotal will argue that’s just fine, because it has a lot of engineers to do this work and is essentially bringing best-of-breed database capabilities to Hadoop, but others will argue it’s best to use technologies designed from the ground up to work with Hadoop. There certainly are enough of them emerging in the open source community, and even from Hadoop vendors, to make the latter a valid argument.

Maybe, then, Pivotal’s biggest advantage is its focus on applications with Pivotal Labs and its open source Cloud Foundry platform. Right now, Cucchi said, Cloud Foundry users can call the company’s various big data technologies as services, but they’re technically very different from the traditional software versions. For Pivotal to really deliver on its platform vision, all of its pieces will have to come together as one, and the company is working toward that, he said.

Pivotal CEO (and former VMware CEO and Microsoft VP) Paul Maritz sat down with Om Malik to talk about the next generation of applications at our Structure Data conference last month. Check out that conversation below.

[protected-iframe id="b1dbfc57c827da7f12c7ae88d8240fd1-14960843-6578147" info="http://new.livestream.com/accounts/74987/events/2795568/videos/45577724/player?autoPlay=false&height=360&mute=false&width=640" width="640" height="360" frameborder="0" scrolling="no"]