The server architecture debate rages on

Big processors or little processors, scale-up or scale-out, on-premises or in the cloud: The answers might not be as easy as one would think. Web-style, scale-out architectures have been gaining acceptance since Google (s goog) popularized them earlier this century, but big, expensive machines still have their proponents. Likewise, low-power server processors are getting more attention by the day, but even Google says they have their limits. And what about cloud computing?

Is parallel processing necessary?

Take parallel processing, for example, which has found permanent homes in countless server farms and high-performance computing clusters around the world. On his personal blog, a bioinformatics programmer named Jeremy Leipzig argues that for certain tasks, it’s better to put the money into one big, powerful machine than to try to distribute a task across dozens of commodity machines. (Hat tip to Todd Hoff at High Scalability for first covering Leipzig’s post.) Much of Leipzig’s argument has to do with bioinformatics workloads that he says are simply better suited to single-node systems, but there’s also the issue of complexity. Writing parallel applications takes a lot of time, he notes, which is a real cost when a team needs to try new things constantly, or when a workload will only run for a couple of days.
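A toy sketch (my illustration, not Leipzig's) of the complexity gap he describes, using worker threads to stand in for a cluster of commodity nodes:

```python
from concurrent.futures import ThreadPoolExecutor


def analyze(chunk):
    # Stand-in for a per-chunk analysis step (hypothetical workload).
    return sum(x * x for x in chunk)


def run_single_node(data):
    # One big machine: the whole dataset fits, so this is a single call.
    return analyze(data)


def run_scale_out(data, workers=4):
    # A cluster of small nodes (simulated here with threads): the same task
    # now needs code to partition the input, dispatch the pieces, and merge
    # the partial results -- and a real cluster adds serialization,
    # scheduling, and failure handling on top of this.
    n = len(data)
    chunks = [data[i * n // workers:(i + 1) * n // workers]
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(analyze, chunks)
    return sum(partials)
```

Both versions compute the same answer; the scale-out version is simply more code to write, debug, and maintain, which is the overhead Leipzig says isn't worth it for short-lived or exploratory jobs.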

Big system, or server farm?

Fair enough, but assuming parallel processing is the answer to your problem, cost might become a real consideration. Larry Dignan at ZDNet (s cbs) broke down a new report that compares the cost of building an open-source, scale-out database to handle YouTube’s (s goog) traffic versus building an Oracle (s orcl) Exadata-based architecture for the same job. The contrast is staggering: The Oracle system would cost $589.4 million in hardware and software, plus $99 million a year in support and maintenance. The open-source system, on the other hand, would cost only $104.2 million in hardware, and $15.1 million a year for maintenance and support.
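Using the figures quoted above, a quick back-of-the-envelope calculation (mine, not the report's) shows how the gap compounds once annual support is included:

```python
def total_cost(upfront_m, annual_m, years):
    # Upfront hardware/software plus recurring support, in millions of USD.
    return upfront_m + annual_m * years


years = 3
oracle = total_cost(589.4, 99.0, years)       # Exadata-based build
open_source = total_cost(104.2, 15.1, years)  # commodity scale-out build

print(f"{years}-year cost: Oracle ${oracle:.1f}M vs open source ${open_source:.1f}M")
# -> 3-year cost: Oracle $886.4M vs open source $149.5M
```

Over three years, the open-source option comes in at roughly one-sixth the cost, and the ratio only grows as the annual support bills pile up.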

Now, this particular comparison doesn’t take into account factors such as workload-specific appropriateness, quality of support or any performance gains from integrated systems, but the price difference alone might be enough to make the decision for some organizations. Ultimately, the report’s authors contend, high-margin vendors like Oracle, EMC (s emc) and HP (s hpq) could have a tough time keeping up with smaller vendors selling less-expensive — or even open-source — software to run on commodity hardware.

Wimpy cores or brawny cores?

Having decided to go with less-expensive hardware and some third-party software, more questions arise. Right now, a big one might be whether to choose standard server processors such as Intel (s intc) Xeon or AMD (s amd) Opteron, or something energy-efficient like Intel Atom or even ARM (s armh). You can already find vendors willing to sell you the latter type, and big-name customers have already taken the plunge. SeaMicro, which sells a server packed with 256 dual-core Atom processors, already has Mozilla and eHarmony on board as customers. eHarmony, in fact, runs its Hadoop cluster on SeaMicro gear.

But low-power processors have their critics, including Google’s Urs Hölzle. In a research note published last year, Hölzle makes the distinction between brawny-core processors such as Xeon and wimpy-core processors such as Atom or those from ARM. While wimpy-core processors can certainly scale at a lower cost than brawny-core processors can, Hölzle notes that they run up against limits such as Amdahl’s Law and incur higher software-development costs to optimize for the new architecture. Further, he notes, wimpy-core servers might lead to greater costs for DRAM, cabling and other related hardware, and might actually lead to low utilization rates, somewhat mitigating the effects of their energy efficiency.
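Amdahl’s Law, which Hölzle invokes, caps the speedup from adding cores by the serial fraction of the work. A small sketch of the arithmetic (illustrative numbers, not Hölzle’s):

```python
def amdahl_speedup(serial_fraction, n_cores):
    # Amdahl's Law: speedup = 1 / (s + (1 - s) / n), where s is the
    # fraction of the work that must run serially. That serial slice
    # limits how much adding cores can ever help.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)


# If 10% of a request is inherently serial, spreading the rest across
# 16 slow cores yields only a 6.4x speedup...
print(amdahl_speedup(0.10, 16))      # 6.4

# ...and even infinite cores can never beat 1/s = 10x.
print(amdahl_speedup(0.10, 10_000))  # ~9.99
```

This is why Hölzle argues that many slow cores can be a poor trade against fewer fast ones: the serial portion of each request runs slower on a wimpy core, and no amount of parallelism buys it back.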

A research team at the University of Wisconsin came to a similar conclusion, but acknowledged that wimpy cores still might have a place:

Our study presents evidence that for complex data processing workloads, a scale-out solution of a low-power low-end CPU-based cluster may not be as cost-effective (or produce equivalent performance) as a smaller scale-out cluster of traditional high-end server nodes. …

While our results suggest that wimpy node clusters are not suited for complex database workloads, it does open up the area of hybrid (heterogeneous) cluster deployment. Hybrid cluster deployment strategies, job scheduling, and scaleup analysis are interesting avenues of future research.

To the cloud?!

None of this takes into account the option of running workloads in the cloud. You want lots of small instances? No problem. You want a single instance with 96GB of RAM and 32 cores? You can have that, too. Amazon Web Services (s amzn) will even give you a 10 GbE cluster complete with Nvidia GPU co-processing. But the economics of moving to the cloud, as well as inherent security and performance concerns, can make that decision fairly complex.

The plethora of choices for application architecture and delivery model is great if you like variety, but I don’t envy anyone tasked with choosing the system on which to spend their limited budget dollars.

Image courtesy of Flickr user MrFaber.