Open Compute to deliver big changes in servers and data centres

While much less prominent than some January events – namely CES – the recent Open Compute summit may end up revealing more about the direction of big computing than any individual vendor announcement.

Facebook initially organised the Open Compute Project (OCP), and this was the group's fourth such summit in the past 18 months or so. A number of big data-centre operators – ranging from hosting companies to major financial firms – are now members, and much of the industry now shows up to exhibit and offer support. The idea is to redesign the modern server – initially for computing, but potentially for storage as well – in ways that reflect what the largest data centres need: better scalability and less reliance on proprietary solutions.

The first step is a new design for racks, known as the Open Rack specification. This uses rack units that are wider and slightly taller than those in existing servers. A standard rack unit today (a 1U server) is 19in wide; with Open Rack, a single rack unit would be 21in wide. The new size is designed to fit three motherboards or five 3.5in drives side-by-side, heading towards even denser servers. Note that in the Open Rack plan, servers don't have their own power supplies; instead, the rack has multiple supplies to power each server.

The concept isn't all that different from the blade servers being offered today by Cisco, Dell, HP, and IBM, but this is an open specification, whereas today's solutions tend to be proprietary. This should lead to more cost competition. (Also note that the size of the rack frame or chassis is still about 24in wide, so Open Rack products should fit into existing data centres.) HP and Dell, among others, have already shown products that fit into an Open Rack design.

Within the Open Rack, the plan is eventually to have different "sleds" – a compute module with two processors and a small amount of memory and storage, a DRAM module, a storage module, and a flash storage module – all connected at very high speeds. These modules should be able to be mixed and matched, and, more importantly, each can be replaced on its own schedule. (Flash memory typically wears out faster than hard drives, for instance, and CPUs are often upgraded every two years or so because compute demands track Moore's Law, while other components may well stay in place for five or six years.)

One new specification is called the Open Common Slot for processors. Based on PCI-Express, this should allow processors from any vendor that supports it to go into an Open Rack server. Traditional x86 server vendors Intel and AMD both indicated support, as did Applied Micro and Calxeda, both of which were showing their low-power ARM-based servers. In addition, AMD and Intel said they have developed Open Rack motherboards: AMD's Roadrunner and Intel's Decathlete.

A lot of progress seems to be happening in interconnects for such servers. Intel says it is shipping samples of a 100Gbps silicon photonics module, and that it is developing specifications for using the interconnect for CPU, memory, and networking cards within a rack. Meanwhile, Mellanox was showing a new system that includes controllers and a top-of-rack switch that can run Infiniband at up to 56Gbps.
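To put those interconnect speeds in perspective, here is a rough calculation of how long it would take to move a terabyte across each link. This is a sketch based purely on the quoted line rates; it ignores protocol overhead, encoding, and any real-world contention, so actual transfer times would be longer.

```python
# Time to move 1 TB over each interconnect mentioned above,
# using raw line rate only (no protocol overhead or encoding).
payload_bits = 1 * 10**12 * 8  # 1 TB (decimal) in bits

for name, gbps in [("100Gbps silicon photonics", 100),
                   ("56Gbps InfiniBand", 56)]:
    seconds = payload_bits / (gbps * 10**9)
    print(f"{name}: {seconds:.1f} s per TB")  # 80.0 s and ~142.9 s
```

The gap matters at rack scale: shaving a minute off every terabyte shuffled between sleds adds up quickly when memory, storage, and compute are disaggregated.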

Other parts of the OCP are working on the Open Vault storage project (known as Knox), which will allow up to 30 drives in a 2U Open Rack chassis. A number of the big names in storage are supporting at least parts of this, including EMC, Fusion-io, Hitachi Global Storage, and SanDisk, with Fusion-io showing an ioScale card that can have up to 3.2TB of flash memory.
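A quick sketch shows why 30 drives in 2U is notable. The per-drive capacity below is an assumption on my part – 4TB was a typical high-capacity 3.5in drive at the time – not a figure from the Open Vault specification.

```python
# Rough raw-capacity sketch for an Open Vault ("Knox") chassis:
# 30 x 3.5in drives in a 2U Open Rack chassis.
drives_per_chassis = 30
tb_per_drive = 4          # assumed capacity, not part of the spec
chassis_height_u = 2

raw_tb = drives_per_chassis * tb_per_drive
print(f"{raw_tb} TB raw per {chassis_height_u}U chassis")   # 120 TB
print(f"{raw_tb / chassis_height_u:.0f} TB raw per rack unit")  # 60 TB
```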

Initially, much of the emphasis for Open Compute has come from Facebook, which started the project to deal with the massive amounts of data it needs to store, move, and compute each day. At the summit, Facebook repeated some of its usage statistics: it has one billion users, who generate about 4.2 billion likes, posts, and comments per day, and upload about 350 million photos per day. As a result, Facebook needs an additional 7 petabytes of storage per month just for photos – and that figure keeps growing.
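Those two figures imply an average stored size per photo, which is worth a back-of-envelope check. This assumes 30-day months and decimal units (1 PB = 10^15 bytes), and ignores replication, thumbnails, and deletions.

```python
# Back-of-envelope: average bytes stored per uploaded photo,
# from 350M photos/day and 7 PB of new storage per month.
photos_per_day = 350_000_000
new_storage_pb_per_month = 7

photos_per_month = photos_per_day * 30            # assume 30-day month
bytes_per_month = new_storage_pb_per_month * 10**15
avg_bytes_per_photo = bytes_per_month / photos_per_month

print(f"~{avg_bytes_per_photo / 10**6:.2f} MB stored per photo")  # ~0.67 MB
```

A figure under a megabyte per photo is plausible once compression and resizing are accounted for, which suggests the headline numbers are internally consistent.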

Facebook also explained that it runs about 40 major services and 200 minor ones, but has now split them up so that each runs on one of five standard server types: Web, Database, Hadoop, Haystack (photos), and Feed (lots of CPU and memory). The concept behind Open Compute is to let Facebook more easily customise its servers for each service, and to swap components in and out – from different vendors and on different schedules – so it's both more flexible and more cost-efficient.

Of course, those of us who run data centres for businesses have the same general goals, although most of us don't have nearly the same scale. For now, my guess is the big users of the Open Compute concepts will be the largest data centres, similar to how the big cloud providers have been the impetus for the OpenStack cloud platform.

Indeed, there's certainly overlap in the thinking between these groups, but over time, the concepts should become mainstream. I wouldn't be surprised to see businesses of all sizes being able to order Open Rack servers, thus gaining access to the cost and agility improvements they promise. It will take a couple of years, but the idea is certainly promising.

Michael J. Miller is Chief Information Officer at Ziff Brothers Investments, a private investment firm. Mr. Miller, who was editor-in-chief at PC Magazine from 1991-2005, authors this blog for PC Magazine to share his thoughts on PC-related products. No investment advice is offered in this blog. All duties are disclaimed. Mr. Miller works separately for a private investment firm which may at any time invest in companies whose products are discussed in this blog, and no disclosure of securities transactions will be made.

