At Aberdeen Group in the 1990s, I learned one fundamental writing technique: Get to the point, and don't waste time with history. However, the history of Moore's Law is the exception that proves (meaning, "tests and thus clarifies") the rule: it provides lessons that should guide our computer-industry strategies even today, as I hope to show in this series of blog posts.
A Wealth of Technology Options
In 1976, when I started out programming the Intel 8080 microprocessor at NCR Corp., Electronics Magazine and EE Times were filled with news and ads about upcoming developments in many alternative methods of storage and processing. Not only were disk and tape, today's survivors, quite different approaches from transistor-based storage and processors; within and outside of transistor technology, there were quite a few alternatives worth remembering:
· Magnetic drum;
· CCDs (charge-coupled devices);
· Magnetic bubble memory;
· ECL;
· Silicon on sapphire;
· Josephson junctions.
Within 10 years, most of these had practically vanished from
the bulk of computing, while ECL hung on for another 5-10 years in some
minicomputer and mainframe shops. So why
did the silicon-based architecture that Moore is really talking about (the one
used by triumphant competitor Intel) crowd all of these out? To say that it was because silicon plus the
transistor was a better technology is pure hindsight.
In my ever so humble opinion, the reason was the superior
ability of silicon transistors to allow:
1. Learning-curve pricing;
2. Generation-leapfrogging design; and
3. Limitation of testing and changes.
Together, these form an integrated long-term process that
allows effective achievement of Moore’s Law, generation after generation, by
multiple companies.
Let’s take these one at a time.
Learning-Curve Pricing
Few now remember how shocking it was when chip companies started pricing their wares below cost, not to mention making handsome profits out of it. The reason was that the silicon-transistor-chip fabrication process turned out to benefit from a "learning curve": as designers, fab-facility execs, and workers learned and adjusted, the cost of producing each chip dipped, eventually by as much as 50%. By pricing below initial cost, chip companies could crowd out the ECL and silicon-on-sapphire folks, whose chips were faster but who would reach comparable densities only later; and by the time an ECL design could catch up, why, the next generation of silicon-transistor chips was ahead again. Note that Moore's Law does not just imply greater storage in the same size of chip; it implies that the cost per unit of storage drops on the order of 50% per year, and learning-curve pricing is a strong impetus for that kind of rapid cost decrease.
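To make the economics concrete, here is a minimal sketch of learning-curve pricing, assuming the textbook experience-curve model in which unit cost falls by a fixed fraction each time cumulative volume doubles. The 30% drop per doubling, the first-unit cost, and the below-cost launch price are all illustrative assumptions, not chip-industry figures.

```python
import math

def unit_cost(cumulative_units, first_unit_cost=10.0, drop_per_doubling=0.30):
    """Experience-curve model: cost falls by a fixed fraction per doubling
    of cumulative volume (all numbers here are illustrative assumptions)."""
    b = math.log2(1.0 - drop_per_doubling)   # negative exponent
    return first_unit_cost * cumulative_units ** b

# Price below the *initial* cost, betting that volume will push cost under it.
price = 0.6 * unit_cost(1)   # 40% below first-unit cost
for volume in (1, 1_000, 100_000, 10_000_000):
    cost = unit_cost(volume)
    status = "profit" if price > cost else "loss"
    print(f"{volume:>12,} units: cost ${cost:7.4f} -> {status} of ${abs(price - cost):.4f}/chip")
```

The point of the sketch is the shape of the curve, not the particular numbers: once cumulative volume climbs, the "below-cost" price turns into a comfortable margin, which is exactly the bet the silicon vendors were making.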
Generation-Leapfrogging Design
A second distinguishing feature of the silicon-transistor-chip process is the ability to start planning for generation 3 as soon as generation 1 comes out, or for generation 4 as soon as generation 2 comes out. Competitors were far more often unable to anticipate the kinds of shrinkage-related challenges that lie two generations ahead. By contrast, silicon-transistor-chip companies, then and now, create whole "leapfrog teams" of designers that handle alternate releases.
The luxury of tackling problems 4 years ahead of time means
a far greater chance of extending Moore’s Law as shrinkage problems such as
channel overlap and overheating become more common. It therefore tends to widen the difference
between competing technologies such as ECL (not to mention IBM Labs’ old favorite,
Josephson junctions) and silicon, because each silicon
generation is introduced faster than the competitor technology’s. And this ability to "leapfrog design"
is greatly improved by our third “superior ability”.
Limitation of Testing and Changes
As far as I know, the basic process of creating a silicon chip has not changed much in the last 55 years. On top of a silicon "substrate", one lays a second layer that has grooves in it corresponding to transistors and the connections between them. A "mask" blocks silicon from being laid down where a groove is supposed to be. Attach metal to the chip edges to allow electrical input and output, and voila! A main-memory chip, an SSD chip, or a processor chip.
Note what this means for the next generation of chips. To speed up the next generation of chips, and
increase the amount stored on a same-sized chip at about the same cost, one
simply designs a mask that shrinks the distance between channels and/or
shortens the distance between transistors on the same channel.
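To see why this shrink is such a powerful lever, here is a small sketch of the arithmetic, assuming a roughly 0.7x linear shrink per generation and a roughly constant cost for a same-sized die. The starting feature size, transistor count, and die cost are illustrative assumptions, not historical data.

```python
LINEAR_SHRINK = 0.7     # assumed feature-spacing multiplier per generation
DIE_COST = 5.0          # assumed roughly constant cost per same-sized die ($)

feature_nm = 10_000.0   # ~10-micron features, an early-1970s-style starting point
transistors = 2_300     # roughly 4004-class transistor count, for illustration

for generation in range(8):
    cost_per_kilotransistor = DIE_COST / transistors * 1_000
    print(f"gen {generation}: {feature_nm:7.0f} nm features, "
          f"{transistors:>9,} transistors/die, "
          f"${cost_per_kilotransistor:7.4f} per 1,000 transistors")
    feature_nm *= LINEAR_SHRINK                        # shrink the mask
    transistors = int(transistors / LINEAR_SHRINK**2)  # area scales as the square
```

Each 0.7x linear shrink roughly halves the area per transistor (0.7 squared is about 0.49), which is why the transistor count in the sketch doubles every generation while the die cost stays put.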
Contrast this with, say, developing a faster, more
fuel-efficient car for the same price. Much
more of the effort in the car's design goes into researching relatively new
technologies -- aerodynamics, fuel injection, etc. As a result, each generation is very likely
to achieve less improvement compared to the silicon chip and requires more
changes to the car. Moreover, each
generation requires more testing compared to the silicon chip, not only because
the technologies are relatively new, but also because the basic architecture of
the car and the basic process used to produce it may well change more. Hence the old joke, "If a car were like a
computer, it would run a million miles on a tank of gas and cost a dollar by
now." (There's an additional line by the car folks that says, "and
would break down five days a year and cost millions to fix", but that's no
longer a good joke, now that cars have so many computer chips in them).
Of course, most next-generation design processes fall in
between the extremes of the silicon chip and the car, in terms of the testing
and changes they require. However, fewer
changes and less testing per unit of improvement in the silicon chip gave it
another edge in speed-to-market compared to competing technologies -- and the
same thing seems to be happening now with regard to SSDs (transistor
technology) vs. disk (magnetism-based technology).
Applying Lesson 1 Today
The first and most obvious place to apply insights into the crucial role of long-term processes in driving cost per unit down along Moore's Law is in the computer industry and related industries. Thus, for example, we may use it to judge the likelihood of technologies like memristors superseding silicon. It appears that the next 5 years will see a slowed but continuing improvement in speed, density, and cost per unit in the silicon-transistor sphere, at least to the point of 10-nm fab facilities. Thus, challenger technologies will carry the handicap of unfamiliarity while chasing a rapidly moving target. Hence, despite their advantages on a "level playing field", these technologies seem unlikely to succeed in the next half-decade. Likewise, we might expect SSDs to make further inroads into disk markets, as disk technology improvements have been slowing more rapidly than silicon-processor improvements, at least recently.
Another area might be energy technology. I guessed back in the 1980s, based on my understanding of the similarity of the solar and transistor processes, that silicon solar-energy wafers would be on a fairly rapid downward cost-per-unit track, while oil had no such process; and so it seems to have proved. In fact, solar is now price-competitive despite its stupendous disadvantage in "sunk-cost" infrastructure, making its success almost certainly a matter of when rather than if. Moreover, now that better batteries from the likes of Tesla are arriving, oil's advantage of storage for variable demand is significantly lessened (although battery technology does not appear to follow Moore's Law).
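The crossover logic in that paragraph fits in a few lines of arithmetic: if one technology's cost per unit declines by a steady percentage each year while the incumbent's stays roughly flat, the only question is when the curves cross. The starting costs and the 20%-per-year decline below are illustrative assumptions, not actual solar or oil figures.

```python
declining_cost = 5.00    # assumed $/unit for the improving technology (e.g., solar)
flat_cost = 1.00         # assumed $/unit for the incumbent (e.g., oil), roughly flat
ANNUAL_DECLINE = 0.20    # assumed fractional cost drop per year

year = 0
while declining_cost > flat_cost:
    year += 1
    declining_cost *= (1 - ANNUAL_DECLINE)

print(f"Crossover after ~{year} years: improving tech at ${declining_cost:.2f}/unit "
      f"vs. incumbent at ${flat_cost:.2f}/unit")
```

Whatever the exact inputs, a process that reliably compounds a cost decline beats a static one eventually; that is the sense in which success becomes a matter of when rather than if.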
Above all, an understanding of the role of process in the success of silicon-transistor-chip technology can allow us to assess as-yet-unannounced new technologies with appropriate skepticism. So the next time someone says quantum computing is coming Real Soon Now, the appropriate response is probably, to misquote St. Augustine, "By all means, give me quantum computing when it's competitive -- but not just yet."