Pulling in GPUs to lower power costs in the data center
A few years ago, Microsoft debuted a new Xbox console with a chip that combined a CPU and GPU (graphics processing unit) on a single piece of silicon. And as with many chip innovations, such as ARM's low-power chips, many wondered whether the approach might make sense in the data center.
Well, AMD is headed in just that direction, announcing new support for something it dubs heterogeneous system architecture (HSA). HSA makes both CPU and GPU resources on a server available, routing work to whichever processor can best handle it. The idea is that GPUs' parallel computing abilities can lower costs by letting data centers offload key cloud processing tasks like data analytics and gaming. Critical to this strategy is the possibility that GPUs, working in parallel, can execute more tasks within a lower power envelope. It's for this reason that big cloud providers like Amazon have embraced GPUs in their data centers.
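To make the data-parallel idea concrete, here is a minimal sketch in plain Python with NumPy. The task (mean-centering and squaring a batch of readings, a toy analytics step) is hypothetical and not from AMD's announcement; the point is that each element is transformed independently, which is exactly the pattern GPUs spread across thousands of small cores.

```python
import numpy as np

# Hypothetical analytics step: mean-center and square a batch of readings.
# Each output element depends only on its own input (plus one shared mean),
# so the work is embarrassingly parallel -- the shape of workload that
# GPU-style hardware executes efficiently at low power per task.
def center_and_square(readings):
    # A single data-parallel expression; NumPy applies it elementwise,
    # the same way a GPU kernel would run one thread per element.
    return (readings - readings.mean()) ** 2

readings = np.array([1.0, 2.0, 3.0, 4.0])
print(center_and_square(readings))  # -> [2.25 0.25 0.25 2.25]
```

On a CPU, NumPy vectorizes this across SIMD lanes; on heterogeneous hardware, the same elementwise pattern is what gets handed to the GPU side.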
Again and again we're seeing server hardware reimagined with lessons from mobile and gaming. And in the end, it will be end uses like mobile and gaming that drive much of the traffic in data centers.