Are Liquid-Cooled Servers Coming to a Data Center Near You?

In recent weeks, a server cooling concept that greatly reduces data center power consumption has been causing a bit of a stir. During the Supercomputing 2009 conference, two startups, Iceotope and Green Revolution Cooling, promised to drastically reduce the energy it takes to keep computing hardware cool by doing something a little unconventional. Their solution: dunking server hardware in liquid.

Not just any liquid, of course, but inert fluids that won’t short out the electronics. As I discussed a couple of weeks ago in this post at Earth2Tech, the concept works because liquids are much better at transferring heat than air. Now the preferred cooling technology of some computer hobbyists is poised to invade data centers. But will it catch on?

The one thing these liquid-cooled systems have going for them is huge savings, plain and simple. Iceotope claims that its technology can cut data center cooling costs by a whopping 93 percent. Green Revolution Cooling, for its part, says its system can cut total data center power consumption nearly in half (45 percent). What’s not to like?

The prospect of saving hundreds of thousands of dollars a year in energy costs is certainly an alluring one, but on its own it’s not enough to win over data center operators. Here are a few stumbling blocks that liquid-cooling outfits, and anyone else hoping to bank on exotic cooling technologies, have to clear before they find success in the marketplace.
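
To get a feel for what those percentages could mean in dollars, here is a minimal back-of-the-envelope sketch in Python. The facility size, electricity price and cooling share of the load are my own illustrative assumptions, not figures from Iceotope or Green Revolution Cooling.

    # Back-of-the-envelope estimate of annual savings from cutting cooling energy.
    # All inputs are illustrative assumptions, not vendor-supplied figures.
    HOURS_PER_YEAR = 8760

    facility_load_kw = 1000.0     # assumed total facility draw: 1 MW
    cooling_share = 0.40          # assumed fraction of that power spent on cooling
    price_per_kwh = 0.10          # assumed electricity price in dollars
    cooling_cut = 0.93            # Iceotope's claimed reduction in cooling cost

    cooling_kw = facility_load_kw * cooling_share    # 400 kW of cooling load
    kw_saved = cooling_kw * cooling_cut              # 372 kW eliminated
    annual_savings = kw_saved * HOURS_PER_YEAR * price_per_kwh

    print(f"Cooling load:   {cooling_kw:.0f} kW")
    print(f"Power saved:    {kw_saved:.0f} kW")
    print(f"Annual savings: ${annual_savings:,.0f}")  # roughly $326,000

With those made-up inputs, a 93 percent cut in a 400 kW cooling load is worth on the order of $300,000 a year, which is exactly the ballpark the vendors are pitching.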

Warranty, Service and Repair: For better or worse, server manufacturers design their systems to operate in air-cooled server rooms. Until OEMs and vendors give the official go-ahead (remember, some of them have their own cooling systems to sell), expect demand to be muted. Without IBM’s or HP’s blessing, for example, IT departments will be reluctant to subject their hardware to potentially warranty-voiding modifications. In this video, Green Revolution Cooling’s Co-President, Christiaan Best, describes some of the changes servers have to undergo, such as removing fans and sealing hard drives. Deploying servers is never plug-and-play simple, but this adds a whole new level of customization to an already complex undertaking. Even Green Revolution Cooling’s own warranty (which covers modified servers for three years and may surpass native coverage in some cases) may not be enough to sway CIOs and IT managers.

Making Room: Iceotope’s system is meant to emulate the traditional server rack. Problem solved. Green Revolution Cooling’s system (PDF), on the other hand, is a rack on its back. Designed so that 1U servers slot into it vertically, the company’s enclosure sits low on the floor instead of towering over IT personnel. As you might guess, most data centers weren’t planned and built with this layout in mind.

Culture Change: And let’s not forget the inherent messiness of dealing with liquids of any kind. IT folk, by and large, like to keep their hands dry. Retrofitting data centers for liquid cooling carries heavy switching costs that will prove challenging for the startups offering such systems.

Are any of these challenges deal-breakers? The warranty issue alone is enough to scare many IT shops off. It will take a lot of convincing from Iceotope and Green Revolution Cooling, the kind of convincing that usually comes through official partnerships and vendor certifications (remember, those same vendors are also competitors of a sort). If they can pull it off, they’ll find that some data center operators don’t mind getting their hands wet in exchange for some big energy savings.

Question of the week

Is your IT department willing to get its hands wet for some major energy savings?

3 Comments

  1. I’d like to point out the lower up-front costs (30% lower build-out costs) and easier maintenance of a horizontal system vs. a typical vertical system. I don’t claim to know all vertical systems or to be an expert in any one; I’m just pointing out characteristics I’ve seen. For disclosure, I work for the horizontally-mounted company mentioned in the article.

    First, a vertical orientation means that each server has to be individually sealed, because otherwise fluid would fall out the side and/or not completely fill the server case. Looking at a competitive product that uses a vertical orientation, it seems that each motherboard sits within an inner case, and two motherboard cases are encased within an outer case. Making changes or doing maintenance seems to require opening the sealed box: undoing the ~2 fasteners on the outer container, removing the 6 fasteners that mount the inner container to the outer container, disconnecting the water-filled coils, removing the 44 fasteners on the sealed inner container to get to the motherboard itself, draining the coolant, and replacing the desiccant.

    GR Cooling’s orientation, where servers slot in but are not attached, allows removal in around 60 seconds by simply lifting the server out, with nothing to unbolt. I’m guessing the author of this article saw us show this at SC 09, as we ran nearly a hundred maintenance demonstrations there. I would assume that sealing each server with its own cooling coils has a very different cost point than GR Cooling’s system.

    We believe that our cost to build out a data center is roughly 30% less, which works out to over $1,000 per server for most places, assuming a standard build-out cost of $10/Watt (a rough sketch of that math appears after the comment thread). So it’s not just energy savings, but a massive up-front cost savings. I haven’t seen other people discuss up-front savings.

    A horizontal orientation does not mean lower density. If a large percentage of data center space is hot/cold aisles and air-flow space, then that space goes away. Also, racks can fit hotter equipment and/or more servers. Net/net, we believe density goes up.

    So, our system offers 1) lower total costs (up-front and overall), 2) lower energy use, and 3) the potential for higher power densities.

    1. Hi mark10,

      Thanks for your insider’s point of view. Are you finding data center operators receptive?

  2. Pedro-

    We came out of stealth mode at SC 09 in late November. Our first unit goes into a large data center in early Q1 2010, and we have had a whole lot of interest from the HPC community.

    I think your points about warranty, service, and culture change were all good ones, and this is the best-thought-out article we’ve read on the subject. We don’t expect the standard IT shop to move away from an OEM server warranty easily. Servers rarely fail, but that’s perhaps not the issue.

    That said, who are our early adopters? A few people care about energy. The Supercomputing/High Performance Computing (HPC) crowd, however, also cares a lot about high power density. For them, lower costs or energy use is nice, but having computers melt is a bigger issue. They also often build their own servers, so they are much less concerned about warranty. We’re about to have a high-density product that can handle server overclocking; it wouldn’t be backed by an OEM warranty, of course, but the server owners wouldn’t care, because overclocking would likely void the warranty by itself.

    Also, I wouldn’t assume that we won’t get OEM warranty acceptance anytime soon. In the last three weeks we’ve talked to representatives of all but one of the four largest server manufacturers. They all think this is a very interesting technology, although they of course have questions. We believe we can make servers more reliable by preventing corrosion (air causes arcing and corrosion), removing fans (mechanical things break) and reducing hot spots. When one OEM decides to support it or licenses the cooling technology, the rest will follow.

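
For readers who want to check the build-out math mark10 describes in the first comment, here is a minimal sketch; the per-server wattage is an assumption of mine, since the comment only supplies the $10/Watt build-out cost and the 30 percent savings claim.

    # Rough check on the commenter's build-out arithmetic.
    # Only the $10/W cost and the 30% savings figure come from the comment;
    # the per-server wattage is an assumed value for illustration.
    build_out_cost_per_watt = 10.0   # dollars per watt of provisioned capacity
    savings_fraction = 0.30          # claimed build-out savings
    watts_per_server = 350           # assumed average provisioned power per server

    build_out_per_server = watts_per_server * build_out_cost_per_watt   # $3,500
    savings_per_server = build_out_per_server * savings_fraction        # $1,050

    print(f"Build-out cost per server:  ${build_out_per_server:,.0f}")
    print(f"Claimed savings per server: ${savings_per_server:,.0f}")

At roughly 350 W of provisioned power per server, a 30 percent reduction on a $10/Watt build-out lands just above the $1,000-per-server figure cited in the comment.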
