Intel: We’ve always been serious about microservers. No, really

Sometimes it’s fun to watch a giant tap dance.

That’s essentially what happened today when chip giant Intel (s intc) hosted a call with Matt Adiletta, an Intel Fellow who, all the way back in 2006, was charged with figuring out what blade servers were all about. His journey of discovery took him to nascent cloud service providers, financial services CIOs and even Andy Bechtolsheim, and culminated in Intel’s embrace of what it calls microservers: highly dense, low-power machines aimed at emerging workloads.

Adiletta’s narrative danced around two facts: that Intel faces growing competition in this sector from established and new chip firms using the ARM architecture, and that Intel has been pretty late to the microserver party (although it did coin the term). Even today, while parading an Intel Fellow before the press, the chip giant seemed decidedly unenthusiastic about the segment and reluctant to claim it will be a big business.

Wait: how big is this microserver market?

When asked if he thought microservers would represent more than 10 percent of the market in the next three to four years (a number Intel has stuck with since 2011), Adiletta hedged, saying, “I think it’s too early to tell, but it’s a reasonable first approximation … the software is evolving … but I think what we have to do is assume it is at least that.” He then deflected by noting that while he and Intel may be unsure, “Our customers don’t know either.”

Adiletta also deflected questions about Intel’s decision to buy interconnect assets that might lead to the creation of fabrics for such highly dense servers, or might allow Intel to integrate a switch onto a system on a chip for scale-out environments. Instead, the call was an attempted history lesson on how Intel has long believed in this sector, even though its initial foray into microservers back in 2011 was rushed and looked hastily assembled, a reaction to ARM getting aggressive about the data center market.

Now that ARM has a bevy of server makers and chip firms embracing the idea of the ARM architecture in the data center, a growing software ecosystem, and 64-bit chips coming next year, Intel seems to be trying to walk the line between downplaying the market and assuring customers that it is ready for “wimpy cores.”

Intel embraces the big.LITTLE strategy too

In general, Adiletta took as many opportunities as possible to point out how Intel has the chops to manage the data center and give enterprise customers what they want, even with a lower-performance, low-power processor such as Atom, while underplaying the architecture changes that Intel has been making to Atom to get it ready for the server market. Adiletta also echoed the same big.LITTLE strategy that ARM has laid out for its next-generation chips, namely that Intel will have faster, brawnier Xeon cores that can be combined with lower-performance, more power-efficient Atom cores.

He offered the example of a Hadoop cluster, a common use case for parallelized wimpy cores (see here for an x86 example or here for an ARM-based one). For the name nodes that send the processing job to the data nodes, a more powerful Xeon core works better, while the processing itself could be handled by smaller Atom cores, he said.
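To make the division of labor concrete, here is a minimal sketch (hypothetical Python, not Hadoop code or anything Intel described) of that split: one coordinator node standing in for a brawny Xeon-class name node, and several worker nodes standing in for wimpy Atom-class data nodes that take the parallel chunks of a job.

```python
# Hypothetical illustration of the big/little cluster split described above:
# a single "brawny" coordinator keeps the bookkeeping, while "wimpy" workers
# take the embarrassingly parallel chunks of the job round-robin.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    core_class: str                 # "brawny" (Xeon-class) or "wimpy" (Atom-class)
    tasks: list = field(default_factory=list)

def dispatch(job_chunks, name_node, data_nodes):
    """Name node records coordination work; chunks fan out to data nodes."""
    name_node.tasks.append(f"coordinate {len(job_chunks)} chunks")
    for i, chunk in enumerate(job_chunks):
        data_nodes[i % len(data_nodes)].tasks.append(chunk)

name_node = Node("nn0", "brawny")
data_nodes = [Node(f"dn{i}", "wimpy") for i in range(4)]
dispatch([f"map-{i}" for i in range(8)], name_node, data_nodes)
# The brawny node only coordinates; each wimpy node gets two map chunks.
```

The names and the round-robin policy are invented for illustration; the point is simply that the serial coordination work lands on the faster core while the parallel work spreads across the power-efficient ones.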

In the end, this call cemented what we already know about the coming fight between Intel and those pushing ARM-based products in the server market — Intel thinks it has the legacy software and understanding of what server customers need, while ARM will tout core designs that will consume less power.

And as much as it can’t stand to admit it, Intel is worried about losing the microserver part of its server business to ARM — a business that will probably end up being more than 10 percent of the market.