ARM (s armh) has created a new family of processor cores designed with two demands in mind: users' appetite for always-on computing and the need for more efficient computing in the data center. The new Cortex-A50 family of cores will be available to chipmakers at the end of next year, and ARM expects devices containing those cores to hit the market in 2014 and 2015.
Unlike Intel (s intc) or even Qualcomm (s qcom), ARM doesn't build or sell chips; instead it licenses its technology to chipmakers, who take the ARM IP and build chips or systems on a chip around those core designs. The new family comes in a "big" version, the A57, that offers 64-bit processing and more powerful cores, and a "little" version, the A53, that is also 64-bit (and 32-bit compatible) and is aimed at the mobile market. According to ARM, the big A57 core will deliver three times the performance of today's mobile phone chips at the same power consumption, while the little A53 core will deliver four times the power efficiency of today's phone platforms along with better performance than the current generation.
Computing is no longer a desk job or sold by the server
Our computing habits have changed in the last five years. Where we once may have sat at a desk and completed our computing tasks, we now wake up and roll over in bed to check our email on phones, before maybe moving to a tablet, a connected car and then finally to a laptop or desktop at work. As we hop from machine to machine we expect a similar and continuous experience, which we get thanks to web services that most of us use in the browser or via client apps.
To meet that demand, web companies are deploying millions of servers in data centers the size of warehouses. At that scale things change — not just the focus on power consumption, but also the ability to use hardware tuned to a specific workload. For example, Facebook (s fb) isn't one app, it's a combination of more than 20 different services tied together with software. And because Facebook is so huge, those services can require a lot of computing resources. Facebook doesn't buy servers, it buys racks, and at that level of hardware consumption, buying a rack of ARM-based servers may slightly increase the management load on the operations team, but the savings on power can make the trade worthwhile.
The combination of these shifts on the user side and on the web services side is why ARM sees a chance to get into the data center. It's also why players from AMD and Dell to even Intel are embracing heterogeneous computing. For the last 20 years, much like Ford's original Model T that only came in black, you could have any instruction set you wanted as long as it was x86. But with the rise of webscale computing, the cloud and even broader use of high-performance computing, companies want variety.
ARM’s response is modular building blocks
So Intel is working on its MIC architecture, Nvidia is putting graphics processors in servers and AMD is embracing ARM, x86 and GPUs. Startups ranging from Tilera to Adapteva are also trying to bring new architectures to market. ARM's approach with its latest architecture (ARMv8) is to emphasize power efficiency even at the expense of peak performance. It has always done this in the mobile market, where poor battery life can doom a device to the scrap heap, even if the graphics are vivid and the applications are speedy.
Companies that license these new cores can mix big cores with little cores or build systems containing big cores and ARM graphics cores, or any number of configurations to meet the needs of the device and market they are building for.
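Mixing big and little cores only pays off if work lands on the right core: light background tasks on the efficient cores, demanding ones on the fast cores. A minimal sketch of such a placement heuristic in Python — the 40 percent threshold and the function itself are illustrative assumptions, not ARM's actual scheduling logic:

```python
# Illustrative sketch of a big/little placement heuristic.
# The 40 percent load threshold is an assumption for this sketch,
# not ARM's or any operating system's real scheduling policy.

BIG_THRESHOLD = 0.40  # loads above this fraction go to a big core

def place_task(load: float) -> str:
    """Pick a core type for a task given its recent CPU load (0.0 to 1.0)."""
    if not 0.0 <= load <= 1.0:
        raise ValueError("load must be between 0.0 and 1.0")
    return "big" if load > BIG_THRESHOLD else "little"

print(place_task(0.05))  # a background sync stays on an efficient little core
print(place_task(0.90))  # a game or video encode moves to a fast big core
```

In a real system the operating system scheduler makes this call continuously, migrating threads between clusters as their load changes; the point here is just that the policy, not the hardware alone, determines the power savings.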
The two new cores will also eventually bring 64-bit processing to the mobile device arena, which Noel Hurley, VP of marketing and strategy for ARM's processor division, said is important because people are creating more content on mobile devices (it will also give ARM a credible core for the laptop market). All of this depends on software that can run on the ARM instruction set, but on both the consumer side and in the data center market ARM is building out ecosystem partners.
The new cores should first appear in chips built on the 28- and 20-nanometer process nodes, and will scale down to 14 nanometers and the newer chipmaking processes that build up instead of out. As the process node shrinks and more transistors are crammed onto the chip, expect additional performance and energy gains.
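To put rough numbers on those shrinks: under idealized first-order scaling, transistor density grows with the square of the ratio of feature sizes. Real processes deviate from this ideal, so treat the figures below as back-of-the-envelope estimates, not foundry data:

```python
# Idealized (first-order) density scaling: transistor area shrinks roughly
# with the square of the feature size, so density grows as (old/new)**2.
# Real process nodes deviate from this, so these are rough estimates only.

def density_gain(old_nm: float, new_nm: float) -> float:
    """Estimated transistor-density multiplier when moving between nodes."""
    return (old_nm / new_nm) ** 2

print(round(density_gain(28, 20), 2))  # ~1.96x going from 28nm to 20nm
print(round(density_gain(20, 14), 2))  # ~2.04x going from 20nm to 14nm
```

Each step roughly doubles the transistor budget, which is where the headroom for those additional performance and energy gains comes from.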
For those still looking for gigahertz performance numbers, Hurley said the new A50 family will deliver clock speeds ranging from 1.3 gigahertz to 3 gigahertz, depending on how ARM licensees tweak their designs. At that point I wonder if we can still get away with calling a 3GHz ARM-based design a wimpy core. However, Ian Ferguson, who heads ARM's server ambitions (and who doesn't use the phrase wimpy core), noted that ARM isn't expressing its server goals in terms of the traditional enterprise.
“What we’re not saying is that we’re going to blaze on into traditional enterprise infrastructure … that is not the space we’re planning to attack,” Ferguson said. “We want places where the server is the business.” And as we’ve stated before, that space is where much of the growth in servers will come from in the coming years. Seeing this, ARM has developed a family of processor cores that can be configured to meet the needs of all-day computing on both the user side and the server side.