We Can Call It A Cloud, But It’s Still Hardware

Executive Summary

Consumers and businesses are grabbing their movies, business software and computing power when they want them, and storing them “in the cloud” when they don’t. Thanks to wireless networks and an increasing number of broadband-connected appliances, content can be accessed anywhere there’s a connection and a screen. As I discussed last month, that has led to large changes in the market for computers and cell phones, as well as in the devices themselves. But just because data is no longer stored on a PC’s hard drive or a wall of DVDs doesn’t mean it isn’t stored somewhere.

Behind Every Cloud is a Server (Lots of Them)

Behind every software-as-a-service product, movie-streaming service and online photo album there’s a mess of servers and storage gear that holds your stuff and directs the requests coming in via the web to the appropriate content. And there’s more of this stuff coming in every day. For example, Facebook stores over 850 million photos each month. The amount of information created, captured and replicated in 2007 was 281 exabytes (or 281 billion gigabytes) according to IDC.

This has led to changes in the types of servers offered and the way such servers are sold. There’s a move toward mega data centers filled with thousands of servers devoted to running a single company’s product (such as a search engine, social network or even a cloud computing offering). Two years ago, Dell recognized that these buyers would be better served by custom servers built with off-the-shelf parts. So Dell created its Data Center Solutions group, which is now the fifth largest server vendor in the world, despite having fewer than 50 customers.

HP reacted to the same shift by combining its high-performance computing sales efforts with those aimed at its biggest server buyers to create its Scalable Computing Initiative. The trend toward large-scale data centers has also made it possible for specialty hardware vendors such as Rackable and Sun to offer their own boxes optimized for large web-scale buildouts. The sector is hot enough that even Cisco decided it needed to hop into the server equipment market, launching its Unified Computing System in March to compete against the established players.

While mega data centers aren’t going away, they’re also very expensive to build and operate. To help customers add capacity quickly and cheaply, HP, Sun and Rackable have built data centers housed in shipping containers, which they argue are good for both incremental growth and energy efficiency. Google and Microsoft are both using containerized data centers jam-packed with commodity servers.

Taming Energy Hogs With Specialty Chips

However, whether housed in shipping containers or hyper-scale buildings, data centers still consume a great deal of electricity, especially when they are packed with general purpose CPUs. Analysts say energy can account for 10-25 percent of the total annual cost of running a data center. One of the biggest drivers of those costs is the x86-based processor powering each machine. The chips can draw between 85 and 130 watts, and they generate heat that has to be dispersed by fans and air conditioning. A general rule of thumb is that for every dollar spent powering the servers, data center operators will spend another 50 cents to a dollar cooling those machines.
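To put that rule of thumb in rough numbers, here’s a quick back-of-the-envelope calculation. The server count, per-server draw and electricity rate are illustrative assumptions rather than reported figures; only the cooling ratio comes from the rule of thumb above.

```python
# Back-of-the-envelope data center energy cost, using the cooling rule of
# thumb cited above. Server count, per-server wattage and electricity price
# are hypothetical, illustrative values.

HOURS_PER_YEAR = 24 * 365


def annual_energy_cost(servers, watts_per_server, price_per_kwh, cooling_factor):
    """Return (power_cost, cooling_cost) in dollars per year.

    cooling_factor is the extra spend on cooling per dollar of server power;
    the rule of thumb puts it between 0.5 and 1.0.
    """
    kwh = servers * watts_per_server / 1000.0 * HOURS_PER_YEAR
    power_cost = kwh * price_per_kwh
    return power_cost, power_cost * cooling_factor


if __name__ == "__main__":
    # Hypothetical example: 10,000 servers drawing 300 W each (CPU plus the
    # rest of the box), power at $0.07/kWh, cooling at the low end (0.5).
    power, cooling = annual_energy_cost(10_000, 300, 0.07, 0.5)
    print(f"Power:   ${power:,.0f}/year")
    print(f"Cooling: ${cooling:,.0f}/year")
```

At those assumed rates, power alone runs close to $2 million a year, with roughly another $1 million for cooling, which is why shaving even a few watts per chip matters at scale.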

Given these calculations, and the fact that general purpose x86 chips aren’t always the best choice for certain types of jobs, vendors such as Nvidia, IBM, Sun and Texas Instruments, as well as specialty companies such as SiCortex, are pushing different chip architectures to save on energy. For example, the IBM RoadRunner supercomputer unveiled last year combines IBM’s Cell processors with AMD x86 chips, making it one of the most efficient supercomputers around. IBM is placing its Cell chips in servers for corporate computing as well.

Also with an eye on power savings, Texas Instruments is readying digital signal processing (DSP) chips to go inside servers for math-intensive tasks, while researchers at Lawrence Berkeley National Lab are trying to build a supercomputer out of DSPs. DSPs, which can perform high-level math at lower power, fit the broader trend of using the lowest-power chip suited to the job. Right now the market for these specialty chips is in high performance computing, but vendors hope it will move downstream into corporate and web-scale data centers in a few years.

However, if specialty chips invade the data center, they will likely be used in conjunction with CPUs or alongside a variety of other specialty chips performing different jobs. Such heterogeneous architectures could be easier to manage in a cloud computing structure or a platform as a service, because a developer can assign each job to the processor best suited to it without worrying about programming for a non-x86 architecture. This is an area of research that may change the data center over the next decade.
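A minimal sketch of what that job assignment might look like is below. The task types, processor names and routing table are hypothetical illustrations of the idea, not any vendor’s actual scheduler or API.

```python
# Sketch of how a cloud or platform-as-a-service layer might route work
# across a heterogeneous pool of processors. Everything here is an assumed,
# simplified illustration of the concept described above.

from dataclasses import dataclass


@dataclass
class Task:
    name: str
    kind: str  # e.g. "general", "parallel_math", "signal_processing"


# Assumed routing policy: send each kind of work to the chip best suited to it,
# so the developer never has to target a non-x86 architecture directly.
ROUTING_TABLE = {
    "general": "x86 CPU",
    "parallel_math": "GPU / Cell",
    "signal_processing": "DSP",
}


def dispatch(task: Task) -> str:
    """Pick a processor type for a task; fall back to the general-purpose CPU."""
    target = ROUTING_TABLE.get(task.kind, "x86 CPU")
    print(f"Scheduling {task.name!r} on {target}")
    return target


if __name__ == "__main__":
    for t in [Task("web request", "general"),
              Task("physics model", "parallel_math"),
              Task("audio transcode", "signal_processing")]:
        dispatch(t)
```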

Delivering Content Everywhere Through the Cloud

As we gain the ability to connect our consumer-facing devices back to these mammoth server farms through the Internet, there are opportunities for existing chipmakers and hardware vendors to profit from providing the hardware behind the cloud. But for content to be truly ubiquitous across multiple devices delivered from the cloud, licensing issues still need to be worked out, not only with entertainment providers such as movie studios, but also with software companies such as Microsoft and Oracle, which need to adapt their licensing models to reflect the reality of the new computing world.

Stacey Higginbotham is a Staff Writer for GigaOM.
