Power efficiency comes down to the chip. Again.
We’ve been hearing for the past couple of years that, despite strong energy-efficiency efforts in the data center such as hot- and cold-aisle containment, the biggest power savings come from the hardware itself. Last year’s paper in Nature Climate Change made precisely this point: it’s the hardware, not the facilities or access to clean power, that has the potential to make the biggest difference in power savings, and thus in reducing greenhouse gas emissions.
Some systems integrators are now taking this a step further, arguing that within the hardware, it’s really the processor that makes the most difference. From a recent TechTarget article:
“Data centers aren’t saving money through better energy efficiencies or cooling technologies as they are through the power consumption improvements built into server chips themselves,” said Greg Carl, solutions architect with SyCom Technologies, a client-focused IT systems integration company based in Richmond, Va.
Typically, it can take three years or longer for a midsize or larger green data center to realize a return on its energy-efficiency investments.
The calculus that now needs to happen is the energy ROI of swapping out older servers for newer ones with power-efficient processors and hardware optimized for the cloud. That calculus should also reinforce the emphasis that Intel, AMD, and the ARM startups are placing on building servers around lower-power processors and other server hardware such as interconnect fabrics.
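To make that calculus concrete, here is a minimal back-of-the-envelope sketch of the kind of energy-ROI arithmetic described above. Every number in it is an illustrative assumption (wattages, fleet sizes, consolidation ratio, electricity rate, PUE, and server price are all hypothetical), not data from the article or the Nature Climate Change paper:

```python
# Back-of-the-envelope energy ROI for a server refresh.
# All figures are illustrative assumptions, not measured data.

HOURS_PER_YEAR = 24 * 365
KWH_PRICE = 0.12   # assumed utility rate, $/kWh
PUE = 1.6          # power usage effectiveness: facility overhead multiplier

def annual_energy_cost(watts_per_server: float, servers: int) -> float:
    """Yearly electricity cost for a fleet, including cooling/facility overhead."""
    kwh = watts_per_server * servers * HOURS_PER_YEAR / 1000
    return kwh * PUE * KWH_PRICE

# Hypothetical refresh: 100 older 450 W servers replaced by 40 newer
# 300 W servers (assuming cloud-optimized hardware consolidates ~2.5:1).
old_cost = annual_energy_cost(450, 100)
new_cost = annual_energy_cost(300, 40)
savings_per_year = old_cost - new_cost
capex = 40 * 5000          # assumed $5,000 per new server
payback_years = capex / savings_per_year

print(f"Annual energy savings: ${savings_per_year:,.0f}")
print(f"Payback period: {payback_years:.1f} years")
```

Under these made-up numbers the refresh pays for itself in a few years, and the consolidation ratio matters as much as the per-chip wattage, which is the point: the payback hinges on what the new processors let you retire, not just on facility tweaks.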
It’s taken a few years, but I think even the folks at places like Intel are seeing which way the wind is blowing.