One of the topics we’ve discussed at some length is how the benefits of process node shrinks have gotten smaller over the years. One of the consequences of this transition is that the definition of a process node – as in, what does or does not constitute a 28nm or 22nm process – has become increasingly fuzzy.
Technically, the process node is the size of the gap between the source and drain of a transistor. Once upon a time, it also referred to the size of the transistor gate itself, but that hasn’t been true since the mid-1990s. Part of what complicates the question is the fact that DRAM, NAND, and logic (CPUs) all have different characteristics. There are many pertinent feature sizes, not just the metric we measure in a “node,” and adjusting them impacts performance, power consumption, and die size.
The chart below, produced by IEEE Spectrum, shows how the concept of “half-pitch,” “node,” and “gate” were originally synchronised at Intel but decoupled later. Gates shrank more quickly than half pitches (half the distance between two identical features/transistors), then levelled off.
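To see why the decoupling matters, it helps to spell out the arithmetic a node name used to imply. Here's a minimal sketch; the 28nm-to-20nm figures are textbook examples, not any particular foundry's data:

```python
# Illustrative only: if a "node" name tracked a real linear feature size,
# an ideal shrink would scale die area by the square of the linear ratio.

def ideal_area_scale(old_nm: float, new_nm: float) -> float:
    """Fraction of the old die area an ideally shrunk design would occupy."""
    return (new_nm / old_nm) ** 2

# A textbook 28nm -> 20nm shrink would roughly halve the area:
print(round(ideal_area_scale(28, 20), 2))  # -> 0.51
```

Once node names stop tracking any real pitch, this back-of-the-envelope arithmetic no longer predicts anything about an actual chip, which is exactly the problem.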
And it’s absolutely true that the entire process is getting fuzzier. SanDisk’s 1Y NAND flash node, announced earlier this year, is built on the same technology as its 1X node but improves on memory cell size nonetheless. The fact that SanDisk is claiming improvements without moving to a new process shows just how vague the entire concept of a “node” truly is.
Still, there’s no doubt that the underlying scaling problem is getting worse. In the past, performance gains were delivered mostly through straight feature shrinks. For the last decade, however, we’ve seen companies deploy a huge variety of additional technologies, from SOI and strained silicon to high-k metal gate and FinFETs.
So, process nodes have transformed from actual engineering descriptions that referred to specific feature sizes into marketing terms designed to capture the benefits of those improvements rather than literally describing the size of any particular feature. This isn’t actually a problem, provided that the benefits exist. The universe is full of marketing terms that are no longer functionally anchored to the standards that gave them life, as evidenced by the fact that we still talk about motors in terms of horsepower.
As the difficulties mount, foundries are talking about advances in fluffier terms and planning hybrid approaches that their customers aren’t necessarily thrilled with. GlobalFoundries’ hybrid 20nm/14nm process technology, for example, is expected to offer the power savings and performance advantages of a die shrink, but not the physically smaller size. No one is entirely sure what to make of this. Conventional logic says this should put GF at a disadvantage compared to TSMC, but that assumes GF’s competitors can offer the “full” advantages of 14nm in the first place.
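The hybrid trade-off can be made concrete with a toy model. The numbers and names below are assumptions for illustration, not GlobalFoundries figures: the idea is simply that a hybrid node pairs new-node transistors (for power and performance) with old-node metal pitch (so density stays put).

```python
# Hedged, illustrative model of a "hybrid" process. All scaling factors
# are invented for the sake of the example, relative to a 20nm baseline.

from dataclasses import dataclass

@dataclass
class Process:
    name: str
    area_scale: float   # die area relative to the 20nm baseline
    power_scale: float  # switching power relative to the baseline

baseline    = Process("20nm planar", area_scale=1.0, power_scale=1.0)
full_shrink = Process("full 14nm",   area_scale=0.5, power_scale=0.65)  # assumed
hybrid      = Process("14nm transistors on 20nm metal",
                      area_scale=1.0, power_scale=0.65)

# The hybrid inherits the baseline's area but the shrink's power:
assert hybrid.area_scale == baseline.area_scale
assert hybrid.power_scale == full_shrink.power_scale
```

Whether that half-measure is a disadvantage depends, as the paragraph above notes, on whether competitors can actually deliver the full-shrink column at all.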
IEEE Spectrum’s recent discussion of these issues, however, is a bit more doom and gloom than may actually be warranted. While it’s true that conventional silicon scaling is drawing to a close, that doesn’t mean we’ve exhausted the possibilities for improving designs.
Die shrink challenges force innovation in other areas
Why has Intel poured money into developing FinFETs? Because planar silicon doesn’t scale very effectively past 28nm. Why push into graphene, or carbon nanotubes, or more mundane advances like 3D chip stacking? Because our ability to shrink feature sizes is hitting a wall. Why research the GreenDroid concept, where certain common functions are implemented directly in semi-programmable hardware? Because die area is something we have plenty of, even when power is not. Lithography expert Chris Mack has discussed what he calls the “design gap” – the difference between the chips we could be designing and the chips that are actually being built.
The reason transistor counts have continued to rise on CPUs over the last few years has less to do with the actual core logic and more to do with the other components we now integrate on-die. We now integrate multiple CPU cores, each with its own cache bank – an Ivy Bridge-E carries more cache than my 1997 desktop PC, which cost a grand at the time, had RAM. And even so, we could be building chips far larger than we do. The race is on to find better ways to build cores, rather than simply driving feature size shifts, because the costs are rising too quickly for any company to play that game forever.
There were 19 companies still actively building leading-edge capacity at the 130nm node. At 22nm/20nm, there are five (GlobalFoundries arguably belongs on that list as well). UMC is going to be active at 20nm eventually, but its ramp is running well behind the primary companies. In 1987, all of the top 20 semiconductor companies owned their own leading-edge fabs. In 2010, just seven did. As the cost of moving to new processes skyrockets, and the benefits shrink, there’s going to come a time when even Intel looks at the potential value and says: “Nah.”
But this doesn’t mean the death of improvements in silicon. On the contrary, there are potential solutions and improvements that haven’t been explored before, partly because the emphasis and cash have always flowed towards the next process shrink. The benefit of finding ways to improve a specific process has always been weighed against the expectation that a smaller node would be available in 18 to 36 months. As that situation changes, foundries will find ways to improve the final product via other methods.