Moving data center compute to where power is cheapest

For years now, data centers have grown more aware of their power consumption costs, a trend that has allowed IT giants with significant capital, like Google and Facebook, to experiment with clean energy investments, low-power servers, and best practices for building green data centers.

One of the more advanced efficiency prospects has always been the idea that compute tasks could be shifted to wherever clean power exists. Given the extremely low utilization rates at a typical data center, I've long wondered whether it might be possible to move processing tasks among data centers to optimize access to either clean energy or the best energy pricing.

For a few years now, this idea has percolated through the data center space, but I recently got to observe a demo from the folks at Power Assure that seeks to do just that: move compute tasks seamlessly between multiple data centers based on energy pricing.

During the demo, Power Assure’s CEO Pete Malcolm moved a compute load between two data centers, one in Sacramento, CA and one in Ashburn, VA. The screenshot below shows the user interface that Power Assure has built to manage and visualize these tasks along with a graph indicating utility rates.

Those rates matter because many data centers sit in areas with time-of-use pricing, and the advantage of swinging compute loads between data centers lies in moving compute to wherever pricing is optimal. That pricing can change throughout the day, and as smart meter deployment edges toward 100 percent over the next 6-7 years, utilities will be able to track customer power use every 15 minutes and thus easily roll out time-of-use pricing.
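The placement decision described above can be sketched in a few lines. This is a minimal illustration, not Power Assure's actual system: the site names, rate figures, and peak-hour windows are all made up for the example.

```python
# Illustrative time-of-use rates ($/kWh) per site. These figures are
# invented for the sketch, not actual utility tariffs.
TOU_RATES = {
    "sacramento": {"peak": 0.24, "off_peak": 0.11},
    "ashburn":    {"peak": 0.18, "off_peak": 0.09},
}

# Hypothetical peak-pricing windows, in each site's local hours.
PEAK_HOURS = {
    "sacramento": range(12, 18),  # 12:00-18:00 local
    "ashburn":    range(14, 20),  # 14:00-20:00 local
}

def current_rate(site: str, local_hour: int) -> float:
    """Return the $/kWh rate in effect at a site for a given local hour."""
    band = "peak" if local_hour in PEAK_HOURS[site] else "off_peak"
    return TOU_RATES[site][band]

def cheapest_site(local_hours: dict) -> str:
    """Pick the site with the lowest rate right now. A real scheduler
    would also weigh migration cost, latency, and available headroom."""
    return min(local_hours, key=lambda s: current_rate(s, local_hours[s]))

# At 13:00 in Sacramento (peak) and 16:00 in Ashburn (peak),
# Ashburn's peak rate is still cheaper, so the load would move east.
print(cheapest_site({"sacramento": 13, "ashburn": 16}))  # -> ashburn
```

In practice the comparison would fold in the energy cost of the migration itself, but the core of the optimization is exactly this kind of rate lookup repeated throughout the day.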

To be sure, the advantages of seamlessly and quickly transferring compute loads between data centers go beyond optimal energy pricing. The number one concern for data center operators is uptime and reliability, and in the event that a data center goes down, it's critical that operators can move application tasks from the main data center to a backup.

But the reality is that many data center operators have little practice doing this because the events are relatively rare. The net result is that failure rates for moving to a backup data center during an outage are often high.

So while there's likely an energy ROI in Power Assure's model, the business case from data center operators' perspective also comes from the ability to practice moving compute loads among multiple data centers every day. Making this a standard operating practice opens up the possibility of more reliable data centers that respond to outages in a practiced, standardized way.

The visualization capabilities also let data center operators fully understand their utilization rates and how much headroom they have for the peak events that can overwhelm and bring down a data center.
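The headroom calculation behind that kind of dashboard is simple arithmetic; this sketch uses invented capacity and load figures to show the idea.

```python
def headroom(capacity_kw: float, current_load_kw: float) -> tuple:
    """Return (utilization fraction, remaining headroom in kW).
    A real dashboard would aggregate this from per-rack telemetry."""
    utilization = current_load_kw / capacity_kw
    return utilization, capacity_kw - current_load_kw

# Example: a 500 kW facility drawing 150 kW is at 30% utilization,
# leaving 350 kW of headroom to absorb a peak event.
print(headroom(500.0, 150.0))  # -> (0.3, 350.0)
```

Seeing that number continuously, rather than estimating it, is what lets an operator decide whether a site can safely absorb a migrated load.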

Going forward, I'd love to see not just utility rates integrated into the visualization platform, which is a great step, but one day clean energy sourcing information as well, such as whether a data center is getting coal power or solar power, and at what cost. For now, the ability to proactively move compute among different data centers, and thus different utility markets, will push data center operators to factor energy pricing into their operational decisions.

Adam Lesser, Analyst, Gigaom Research
