Intel's new leadership has been spending a lot of time talking about mobile strategy, but the chip giant wants us all to remember that innovation in the datacentre remains a top priority.
Speaking at Intel's Datacenter Day, Intel Datacenter Group general manager Diane Bryant made the case for a new revolution in backend computing, pledging that her company would lead the way in driving key changes to how servers, networking, and storage are utilised to deliver better efficiencies, quicker service delivery, and lower costs.
"It is truly a very, very exciting time in the industry. We're going through a fundamental transformation in the way that IT is used," Bryant said, describing a move towards "human-centric" computing as the next logical step following the "computer-centric" and "network-centric" eras that preceded this next big transformation of the datacentre and back-end infrastructure.
"Today, we look at IT as the service. IT is no longer supporting the business, rather IT is the business," she said. The new focus, building upon earlier pushes to add automation and efficiency to datacentre operations, is to deliver rapid service to users through the cloud and the massive build-out of mobile devices like smartphones and the so-called Internet of Things.
Bryant made the case that back-end operations can be improved in five key areas:

- utilisation rates for servers;
- customisation of solutions for specific workloads, including the development of System-on-a-Chip (SoC) products;
- adding "intelligence" to the edge of telecom networks to deliver content faster and more cheaply;
- standardisation of network solution stacks; and
- moving from a static storage model to a dynamic one, depending more heavily on non-volatile Flash memory for better utilisation of resources.
Xeon E3 Coming in 2014
Intel also announced a new product in its Xeon product family, set for a 2014 release — a 14nm Xeon E3 processor, which Bryant called Intel's "first SoC based on a high-performance core."
The unnamed chip will incorporate Intel's next-generation, 14nm "Broadwell" processor architecture for the company's flagship Xeon and Core product lines. Unlike other high-performance products in the upcoming Broadwell offering, this chip will have integrated I/O, fabric, and accelerators on the same die as the CPU, making it a true SoC with more in common with Intel's low-power Atom products for the datacentre.
"We'll be delivering the best of both worlds, high-performance and high-density," Bryant said.
Going forward, Intel is also planning non-SoC, Broadwell-based Xeon products for next year. The company isn't saying what the thermals of those chips will be just yet, but Bryant pointed out that the lowest-power products in successive Xeon E3 generations have gone from 20 watts to 17 watts to 13 watts over the past three years.
Intel has also been racing to ready its low-power Atom products for datacentre installations while moving Atom towards the company's latest silicon fabrication processes in line with Xeon and Core.
This year, Intel pushed out its first 22nm Xeon and Core products ahead of the "Avoton" and "Rangeley" generation of 22nm Atoms, which won't arrive until later in 2013. But Bryant said that 2014 will mark the convergence of Atom's fabrication process with that of the Xeon and Core families, when Intel is set to introduce a 14nm Atom architecture code named Denverton.
Intel's current and future SoCs are just one part of the company's ambitious and wide-ranging strategy for transforming the datacentre — and at least as much of that work will be driven by software optimisation as by more advanced hardware.
For example, Intel plans to add computational capabilities to the five million or so base stations operated by telecoms around the world, creating what it calls a Software Defined Network (SDN), Bryant said. Doing so would strip tremendous amounts of latency out of delivering content and services to users at the edge of cellular networks, she maintained.
"An SDN could provision a new requested service in minutes rather than taking two to three weeks," Bryant said, elaborating on the inefficiencies of many current networks that rely on proprietary solution stacks for load balancing, gateway management, and firewalls.
Intel's proposed open network reference design works with both the company's own infrastructure solutions and the different stacks provided by various vendors around the world, she said.
On the server and storage side of things, Bryant made the case for more dynamic use of assets to jack up utilisation rates and cut costs. Server racks ought to be able to deliver "pooled compute, pooled memory, and pooled I/O for application-driven allocation of resources that makes for greater efficiency," she said.
Meanwhile, Intel is also adding much more customisation of hardware solutions to its portfolio, as back-end requirements move from general-purpose towards a diversity of datacentre workloads, each made more efficient with specific optimisations.
For storage, a big hang-up for IT has been the over-allocation of resources for app delivery: "nobody wants storage to be the bottleneck for an app, so too much is requested, more than is needed," Bryant said.
Intel claims that simply moving from hard disk drives (HDDs) to solid state drives (SSDs) can, in certain instances, greatly increase the pace at which storage installations can crunch through data in order to sort it properly. The chip giant is also pushing new software models and accelerators which it says can sort 1TB of data in just seven minutes, a job that takes HDD-based storage systems four hours.
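As a rough sanity check on that claim (the four-hour and seven-minute figures are Intel's, not independently verified), the implied speedup works out as follows:

```python
# Intel's claimed times to sort 1TB of data, as stated at Datacenter Day.
hdd_minutes = 4 * 60  # HDD-based storage system: four hours
ssd_minutes = 7       # SSDs plus new software models and accelerators

speedup = hdd_minutes / ssd_minutes
print(f"Implied speedup: roughly {round(speedup)}x")  # roughly 34x
```

In other words, Intel is claiming better than a thirtyfold improvement, not merely an incremental one.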
In the end, Intel is looking to completely "re-architect the data centre," Bryant said, in an effort to promote a new "virtuous cycle" of computing, which will see the accelerating needs of mobile-device-equipped end users and growing machine-to-machine infrastructure met by a more scalable, efficient back end.
"We need technology solutions that are easy to deploy, lower cost, and easier to deploy at scale," she said.
Intel's Ronald Kasabian also tried to demystify Big Data, telling us that the firm expects us to produce 10 times as much data in 2016 as we do now.
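Taking that tenfold projection at face value, and assuming it spans the three years from 2013 to 2016 (the timeframe is our reading, not Kasabian's exact wording), the implied compound annual growth rate looks like this:

```python
# Projected data growth: 10x over an assumed three-year span (2013 to 2016).
growth_factor = 10
years = 3

# Compound annual growth rate needed to hit 10x in three years.
annual_rate = growth_factor ** (1 / years)
print(f"Data volume would need to grow about {annual_rate:.2f}x per year")
```

That works out to data volumes more than doubling every year, which puts the scale of the "Big Data" pitch into perspective.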