Thursday, May 28, 2015

Lessons of Moore's Law History 2: It's the Software


In my first post about the lessons of Moore’s Law, I talked about how the silicon-transistor-chip process drove long-term cost-per-unit decreases and crowded out competitor technologies – but I didn’t talk about the winners and losers among the companies that were all using a similar Moore’s-Law process.  Why did some prosper in the long run while others fell by the wayside?  The answer, again, I believe, has important implications for computer-industry and related-industry strategies today.

Low Barriers to Entry

In 1981, the trade and political press was abuzz with the rise of the Japanese, who, it seemed, would soon surpass the US in GDP and dominate manufacturing – which was, in that era’s mindset, the key to dominating the world economy.  In particular, the Japanese seemed set to dominate computing, because they had taken the cost lead in the Moore’s-Law silicon-transistor chip manufacturing process – that is, in producing the latest and greatest computer chips (we were then approaching 64K bits per memory chip, iirc).  I well remember a conversation with Prof. Lester Thurow of MIT (my then teacher of econometrics at the Sloan School of Management), at the time the economist guru of national competitiveness, in which he confidently asserted that the way was clear for the Japanese to dominate the world economy.  I diffidently suggested that they would be unable to dominate in software, and in particular in the microprocessors supporting that software, because of language barriers and the difficulty of recreating the US application-development culture – and that therefore they would not dominate computing.  Thurow brushed off the suggestion:  software, he said, was a “rounding error” in the US economy.
There are two points hidden in this particular anecdote:  first, that there are in fact low barriers to entry in a silicon-transistor-chip process-driven industry such as memory chips, and second, that the barriers to entry are much higher when software is a key driver of success.  In fact, that is one key reason why Intel succeeded while the Mosteks and TIs of the initial industry faded from view:  where many other US competitors lost ground as first the Japanese, then the Koreans, and finally the Taiwanese came to dominate memory chips, Intel deliberately walked away from memory chips in order to focus on evolving microprocessors according to Moore’s Law and supporting the increasing amount of software built on top of Intel processor chips.  And it worked.

Upwardly Mobile

It worked, imho, primarily because of an effective Intel strategy of "forward [or upward] compatibility."  That is, Intel committed unusually strongly to the notion that as component density, processor features, and processor speed increased in the next generation according to Moore's Law, software written for today's generation would still run, and at approximately the same speed as today or faster.  In effect, as far as software was concerned, upgrading one's systems was simply a matter of swapping the new generation in and the old one out -- no recompilation needed.  And recompilation of massive amounts of software is a Big Deal.
Push came to shove in the late 1980s.  By that time, it was becoming increasingly clear that the Japanese were not going to take over the booming PC market, although some commentators insisted that Japanese "software factories" would be superior competitors to the American software development culture.  The real tussle was for the microprocessor market, and in the late 1980s Intel's main competitor was Motorola.  In contrast to Intel, Motorola emphasized quality -- similar to the Deming precepts that business gurus of the time insisted were the key to competitive success.  Motorola chips would come out somewhat later than Intel's, but would be faster and a bit more reliable.  And then the time came to switch from a 16-bit to a 32-bit instruction word -- and Motorola lost massive amounts of business.
Because of its focus on upward compatibility, Intel chose to come out with a version that sacrificed some speed and density.  Because of its focus on quality, Motorola chose to require use of some 32-bit instructions that would improve speed and fix some software-reliability problems in its 16-bit version.  By this time, there was a large amount of software using Motorola's chip set.  When users saw the amount of recompilation that would be required, they started putting all their new software on the Intel platform.  From then on, Intel's main competition would be the "proprietary" chips of the large computer companies like IBM and HP (which, for technical reasons, never challenged Intel's dominance of the Windows market), and a "me-too" company called Advanced Micro Devices (AMD).
The story is not quite over.  Over the next 20 years, AMD pestered Intel by adopting a "lighter-weight" approach that emulated Intel's instruction set (so most if not all Intel-based apps could run on AMD chips) while using a RISC-style core internally for higher performance.  As long as Intel kept to its other famous saying ("only the paranoid survive"), it was always ready to fend off the latest AMD innovation with new capabilities of its own (e.g., embedding graphics processing in the processor hardware).  However, at one point it started to violate its own tenet of software support via upward compatibility:  it started focusing on the Itanium chip.
Now, I believed then and I believe now that the idea embedded in the Itanium chip -- EPIC, or "explicitly parallel" computing, an instruction-word design in which the compiler, rather than the hardware, decides which instructions can run together -- is a very good one, especially when we start talking about 64-bit computing.  True, there is extra complexity in generating the instruction bundles, but there are real efficiency and therefore speed advantages in taking that scheduling burden off the hardware.  But that was not what croaked Itanium.  Rather, Itanium was going to require new compilers and recompilation of existing software.  As far as I could see at the time, neither Intel nor HP fully comprehended the kind of effort that was going to be needed to minimize recompilation and optimize the new compilers.  As a result, the initial prospect faced by customers was of an extensive long-term effort to convert their existing code -- and most balked.  Luckily, Intel had not completely taken its eye off the ball, and its alternative forward-compatible x86 line, after some delays that allowed AMD to make inroads (indeed, Intel ended up adopting AMD's 64-bit extensions), moved smoothly from 32-bit to 64-bit computing.

History Lesson 2

By and large, the computer industry has now learned the lesson of the importance of upward/forward compatibility.  In a larger sense, however, the Internet of Things raises again the general question:  Whose platform and whose “culture” (enthusiastic developers outside the company) should we bet on?  Is it Apple’s or Google’s or Amazon’s (open-sourcing on the cloud) or even Microsoft’s (commonality across form factors)?  And the answer, Moore’s Law history suggests, is to bet on the platforms that will cause the least disruption to existing systems and still support the innovations of the culture.
I heard an interesting report on NPR the other day about the resurgence of paper.  Students, who need to note down content that rarely can be summarized in a tweet or on a smart phone screen, are buying notebooks in droves, not as a substitute but as a complement to their (typically Apple) gear.  In effect, they are voting that Apple form factors, even laptops, are sometimes too small to handle individual needs for substantial amounts of information at one’s fingertips.  For those strategists who think Apple is the answer to everything, this is a clear warning signal.  Just as analytics is finding that using multiple clouds, not just Amazon’s, is the way to go, it is time to investigate a multiple-platform strategy, whether it be in car electronics and websites, or in cloud-based multi-tenant logistics functionality.
And yet, even smart software-focused platform strategies do not protect the enterprise from every eventuality.  There is still, even within Moore’s-Law platforms, the periodic disruptive technology.  And here, too, Moore’s Law history has something to teach us – in my next blog post.

Friday, May 22, 2015

Lessons of Moore's Law History 1: It's the Process


At Aberdeen Group in the 1990s, I learned one fundamental writing technique:  Get to the point, and don’t waste time with history.  However, the history of Moore’s Law is the exception that proves (meaning, “tests and thus clarifies”) the rule – its history provides lessons that should guide our computer-industry strategies even today.  As I hope to prove in this series of blog posts.

A Wealth of Technology Options

In 1976, when I started out programming the Intel 8080 microprocessor at NCR Corp., Electronics Magazine and EE Times were filled with news and ads about upcoming developments in many alternative methods of storage and processing.  Not only were disk and tape, today’s survivors, quite different approaches from transistor-based storage and processors; within and outside of transistor technology, there were quite a few alternatives that are worth remembering:

· Magnetic drum;
· CCDs (charge-coupled devices);
· Magnetic bubble memory;
· ECL (emitter-coupled logic);
· Silicon on sapphire (SoS);
· Josephson junctions.
Within 10 years, most of these had practically vanished from the bulk of computing, while ECL hung on for another 5-10 years in some minicomputer and mainframe shops.  So why did the silicon-based architecture that Moore is really talking about (the one used by triumphant competitor Intel) crowd all of these out?  To say that it was because silicon plus the transistor was a better technology is pure hindsight.

In my ever so humble opinion, the reason was the superior ability of silicon transistors to allow:

1. Learning-curve pricing;
2. Generation-leapfrogging design; and
3. Limitation of testing and changes.
Together, these form an integrated long-term process that allows effective achievement of Moore’s Law, generation after generation, by multiple companies. 
Let’s take these one at a time.

Learning-Curve Pricing

Few now remember how shocking it was when chip companies started pricing their wares below cost – not to mention making handsome profits out of it.  The reason was that the silicon-transistor-chip fabrication process turned out to benefit from a “learning curve”:  as designers, fab-facility execs, and workers learned and adjusted, the cost of producing each chip dipped, eventually by as much as 50% or more.  By pricing below initial cost, chip companies could crowd out the ECL and SoS folks, who reached the same densities later even though their chips were faster; and by the time ECL design could catch up, why, the next generation of silicon-transistor chips was ahead again.  Note that Moore’s Law implies not only greater storage in the same size of chip; it implies that the cost per storage unit falls by something like a third to a half each year, and learning-curve pricing is a strong impetus for that kind of rapid cost decrease.
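
To make the arithmetic concrete, here is a minimal sketch of learning-curve pricing, with purely illustrative numbers (the starting cost, the 25%-per-doubling learning rate, and the price point are all assumptions, not industry data):

import math

# Learning-curve pricing sketch: each doubling of cumulative volume is assumed
# to cut unit cost by 25%, and the vendor prices at half the initial unit cost
# from day one.  All numbers are illustrative, not industry data.
INITIAL_COST = 10.00      # dollars per chip at cumulative volume 1 (assumed)
LEARNING_RATE = 0.25      # cost reduction per doubling of volume (assumed)
PRICE = 0.5 * INITIAL_COST

def unit_cost(cumulative_volume):
    """Cost per chip after producing cumulative_volume chips in total."""
    doublings = math.log2(max(cumulative_volume, 1))
    return INITIAL_COST * (1 - LEARNING_RATE) ** doublings

for volume in (1, 100, 10_000, 1_000_000):
    cost = unit_cost(volume)
    print(f"volume {volume:>9,}: cost ${cost:6.2f}, price ${PRICE:5.2f}, "
          f"margin ${PRICE - cost:+7.2f} per chip")

The first chips sell at a loss; by the time cumulative volume reaches the millions, the margin is handsome, and a latecomer starting at the top of its own learning curve cannot match the price.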

Generation-Leapfrogging Design

A second distinguishing feature of the silicon-transistor-chip process is the ability to start planning for generation 3 once generation 1 comes out, or for generation 4 once generation 2 comes out.  Competitors in other technologies were far more often unable to anticipate the kinds of challenges that shrinking feature sizes would pose two generations ahead.  By contrast, silicon-transistor-chip companies then and now create whole “leapfrog teams” of designers that handle alternate releases.
The luxury of tackling problems 4 years ahead of time means a far greater chance of extending Moore’s Law as shrinkage problems such as channel overlap and overheating become more common.  It therefore tends to widen the difference between competing technologies such as ECL (not to mention IBM Labs’ old favorite, Josephson junctions) and silicon, because each silicon generation is introduced faster than the competitor technology’s.  And this ability to "leapfrog design" is greatly improved by our third “superior ability”.

Limitation of Testing and Changes

Afaik, the basic process of creating a silicon chip has not changed much in the last 55 years.  On top of a silicon "substrate", one builds up patterned layers corresponding to transistors and the connections between them.  A "mask" controls where light exposes each layer, and hence where material is etched away or deposited.  Attach metal contacts at the chip's edges to allow electrical input and output, and voila!  A main-memory chip, an SSD chip, or a processor chip.
Note what this means for the next generation of chips.  To speed up the next generation, and to increase the amount stored on a same-sized chip at about the same cost, one essentially designs a new mask that shrinks the spacing between lines of transistors and/or shortens the distance between transistors on the same line.
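
A back-of-the-envelope sketch of why a mask shrink is so powerful (the starting values and the 0.7x shrink factor are purely illustrative, and the scaling is idealized):

# Idealized scaling arithmetic: shrinking every linear dimension by a factor s
# fits 1/s**2 as many transistors into the same chip area, at roughly the same
# cost per chip.  Starting values and the 0.7x shrink are illustrative.
feature_nm = 1000.0        # assumed starting feature size, nanometers
transistors = 10_000       # assumed starting transistor count per chip
cost_per_chip = 10.0       # assumed roughly constant cost per chip, dollars
shrink = 0.7               # classic ~0.7x linear shrink per generation

for gen in range(1, 6):
    feature_nm *= shrink
    transistors = int(transistors / shrink ** 2)
    print(f"gen {gen}: ~{feature_nm:5.0f} nm features, "
          f"{transistors:>9,} transistors, "
          f"${cost_per_chip / transistors:.2e} per transistor")

A 0.7x linear shrink roughly doubles the transistor count every generation, which is Moore's Law in a nutshell, and most of the engineering effort is concentrated in the mask rather than in reinventing the whole process.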
Contrast this with, say, developing a faster, more fuel-efficient car for the same price.  Much more of the effort in the car's design goes into researching relatively new technologies -- aerodynamics, fuel injection, etc.  As a result, each generation is very likely to achieve less improvement compared to the silicon chip and requires more changes to the car.  Moreover, each generation requires more testing compared to the silicon chip, not only because the technologies are relatively new, but also because the basic architecture of the car and the basic process used to produce it may well change more.  Hence the old joke, "If a car was like a computer, it would run a million miles on a tank of gas and cost a dollar by now." (There's an additional line by the car folks that says, "and would break down five days a year and cost millions to fix", but that's no longer a good joke, now that cars have so many computer chips in them).
Of course, most next-generation design processes fall in between the extremes of the silicon chip and the car, in terms of the testing and changes they require.  However, fewer changes and less testing per unit of improvement in the silicon chip gave it another edge in speed-to-market compared to competing technologies -- and the same thing seems to be happening now with regard to SSDs (transistor technology) vs. disk (magnetism-based technology). 

Applying Lesson 1 Today

The first and most obvious place to apply insights into the crucial role of long-term processes in driving cost per unit according to Moore's Law is in the computer industry and related industries.  Thus, for example, we may use it to judge the likelihood that technologies like memristors will supersede silicon.  It appears that the next 5 years will see a slowed but continuing improvement in speed, density, and cost per unit in the silicon-transistor sphere, at least to the point of 10-nm fab facilities.  Challengers will therefore carry the handicaps of an unfamiliar, immature process while chasing a rapidly moving target.  Hence, despite their advantages on a "level playing field", these technologies seem unlikely to succeed in the next half-decade.  Likewise, we might expect SSDs to make further inroads into disk markets, as disk technology improvements have been slowing more rapidly than silicon-processor improvements, at least recently.

Another area might be energy technology.  I guessed back in the 1980s, based on my understanding of the similarity of the solar and transistor processes, that silicon solar-energy wafers would be on a fairly rapid downward cost/unit track, while oil had no such process; and so it seems to have proved.  In fact, solar is now price-competitive despite its stupendous disadvantage in "sunk-cost" infrastructure, making success almost certainly a matter of when rather than if.  Moreover, now that better batteries from the likes of Tesla are arriving, oil's advantage of storage for variable demand is significantly lessened (although battery technology does not appear to follow Moore's Law).

Above all, understanding of the role of the process in the success of silicon-transistor-chip technology can allow us to assess as-yet-unannounced new technologies with appropriate skepticism.  So the next time someone says quantum computing is coming Real Soon Now, the appropriate response is probably, to misquote St. Augustine, "By all means, give me quantum computing when it's competitive -- but not just yet."

Monday, May 11, 2015

Climate Change and Harvard Magazine: My Jaw Dropped


For my sins – and especially the sin of having gone to Harvard University at one point – I used to get an alumnus copy of Harvard Magazine.  I cancelled my subscription last year, when a fatuous article by a Harvard professor of environmental science recommended that the US allow the Keystone XL pipeline without the slightest effort to explain why James Hansen (far more expert in climate science than the professor) had said “Keystone XL means game over for the environment”.  At that time, it was clear to me that Harvard Magazine’s editors had not the slightest idea that there was anything wrong with the article, and not the slightest intention of boning up on the subject because of the gravity of the issue.
However, other members of the family still get the rag, and I took a brief look at an article in the latest issue about “why the US may be on the cusp of a [sustainable] energy revolution.”  And, although I was braced for something clueless, my jaw still dropped.  Here’s the quote (condensed):
“Beyond the connection between long-term climate change and the use of energy … lies another stark reality: the total amount of fossil fuel on earth is finite, and so, inevitably, eventually, ‘We will run out.’”
Not sure why this is so awful a framing of the article’s issues?  Try this equivalent.  In the middle of global nuclear war, Harvard Magazine publishes the following:
“We need to think about beginning to substitute something else for atomic bombs, because, at this rate, in about twenty years (long after the rubble has stopped bouncing) we will run out of uranium.”
And the rest of the article shows absolutely no consciousness of what climate science is predicting.  The focus is on wind power.  Double the wind speed and you get roughly eight times the power, since wind power scales with the cube of wind speed, so the article argues that the middle (windiest) 1/6 to 1/3 of the US should be covered with turbines – no mention that average wind speed is likely to increase by a third over the next 30 years, nor that wind patterns are likely to change (as they are already doing in winter because of the disruption of the jet stream), so that the middle third of the US may not remain the windiest.  Offshore wind farms are good with regard to wind speed but more expensive to build – no mention that East Coast waters are likely to rise by a foot over the next 35 years, drowning low-lying turbines, or that higher-category hurricanes that might overstress the turbines are also becoming more likely.  And then there’s the assertion that biofuels are renewable – it’s not clear what the article is referring to, since corn ethanol production and use can in some cases involve more carbon emissions than oil, while cellulosic ethanol is not yet widely used.
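
For the record, the relevant physics is the cube law for available wind power; a quick check with illustrative numbers (the rotor area is an assumption, and this ignores the Betz limit and turbine losses):

# Available wind power scales with the cube of wind speed: P = 0.5 * rho * A * v**3.
# Air density is the standard sea-level value; the swept area is assumed for
# illustration, and real turbines capture well under half of this (Betz limit).
RHO = 1.225        # air density, kg/m^3
AREA = 5000.0      # assumed rotor swept area, m^2

def wind_power_megawatts(v):
    return 0.5 * RHO * AREA * v ** 3 / 1e6

for v in (6.0, 12.0):
    print(f"{v:4.1f} m/s -> {wind_power_megawatts(v):5.2f} MW of available wind power")
# Doubling the wind speed yields 8x the available power, not 4x.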
To summarize:  Harvard Magazine is by no means a set of climate change deniers – there is no sense that the truth is something to twist in order to make money for oneself, no matter the consequences.  Rather, Harvard Magazine is a representative of what I would call, in drug addiction terms, enablers.  Their minds are simply not open to the possibility that delay involves exponentially increasing costs and deaths.  And so, they allow deniers to carry the day by sheer weight of inertia, just as enablers multiply addiction’s costs simply by deciding that they don’t need to face the addiction problem just yet.
I am reminded of someone once saying of an outrageous act, “It wasn’t just immoral – it was downright stupid.”  But, heavens, how could someone like Harvard be stupid?  In the words of Westbrook Pegler, that last sentence “was writ ironical.”

Monday, May 4, 2015

HP: Memristors And Things Past


As noted in the latest issue of MIT Technology Review, HP’s research arm is making a major bet on memristors, seeking to deliver a major performance boost beyond the traditional processor/memory/disk hierarchy.  More specifically, as presented in the Review, HP would replace memory, SSDs, and disk with “flat” memristor memory, and then communicate within the new processor-memory architecture over optical links, at light speed, rather than over today’s slower electrical connections.  In the Review, HP projects initial product ship as next year, although it appears from Wikipedia that 2018 is a likelier date.
It is exceptionally difficult to tell whether the resulting products would really outdistance SSDs and cables over the long haul, because HP is making its bet at an unusually early stage in the technology’s development.  However, my skepticism is not of the technology, nor of its potential to deliver major and scalable performance benefits beyond the transistors beloved of Moore’s Law.  Rather, I believe that a critical success factor for HP will be delivery of an operating system (HP has christened the concept The Machine, and the operating system Machine OS) plus the attendant compilers and interpreters.  To put it bluntly, HP has been late to realize that it needs software development savvy in its DNA, and past experience with HP software development suggests that even if the technology pans out, HP will probably fail at giving business and consumer customers a reason to insert Machine OS in place of the old-architecture operating systems.
Let’s dig further into the details.
More On Memristors

As Wikipedia notes, it is not clear even now that what HP is touting is indeed, strictly speaking, a memristor.  This matters only if, as fab facilities are developed and the technology tries to scale like the transistor, it turns out that HP’s technology is not only not a memristor but also cannot scale as the memristor should.  That’s one unusual technology risk for HP.
The second technology risk is that the actual “unit form” of the memristor memory has also not yet been specified and verified.  Apparently, a “crossbar latch” approach has been tested at a small scale with good results, but it is not certain whether that’s the best approach or if it will scale appropriately to memory chips.  If it does, HP estimates that 1 cubic centimeter can contain 1 petabit of memory (all accessible at the same tens-of-nanoseconds speed).
Other details of the memristor seem less relevant at this time.  For example, a true memristor effectively keeps a “memory” of past changes of state, so that, as with a disk but unlike with main memory, if you turn off the power and then turn it on again, the memristor will be able to recover its value at the time of shutoff (or previous values).  This type of “remembrance” does not seem important in computations, although it certainly might help with machine-learning experiments.
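
For the curious, here is a minimal sketch of the idealized "linear ion drift" model often used to describe HP-style memristors (the parameter values below are assumptions, purely for illustration).  The point is simply that the device's resistance depends on the charge that has flowed through it, so the state sits unchanged once the current stops:

# Idealized "linear ion drift" memristor model (all parameters assumed, purely
# for illustration).  Resistance depends on an internal state w, the width of
# the doped region, and w changes only while current flows; so the resistance
# the device was left with persists when power is removed.
R_ON, R_OFF = 100.0, 16_000.0   # bounding resistances, ohms (assumed)
D = 10e-9                       # device thickness, meters (assumed)
MU = 1e-14                      # dopant mobility, m^2/(V*s) (assumed)

def resistance(w):
    x = w / D
    return R_ON * x + R_OFF * (1 - x)

def simulate(voltages, w, dt=1e-3):
    """Apply a sequence of voltage samples; return the final internal state."""
    for v in voltages:
        i = v / resistance(w)              # current through the device
        w += MU * (R_ON / D) * i * dt      # state drifts only while i != 0
        w = min(max(w, 0.0), D)            # keep the state within physical bounds
    return w

w0 = 0.1 * D                               # a "fresh" device (assumed state)
print(f"before write:    {resistance(w0):8.0f} ohms")
w1 = simulate([1.0] * 500, w0)             # "write": half a second at 1 volt
print(f"after write:     {resistance(w1):8.0f} ohms")
w2 = simulate([0.0] * 500, w1)             # "power off": no voltage, no drift
print(f"after power-off: {resistance(w2):8.0f} ohms (state retained)")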
Remembering Itanium

Again citing MIT Technology Review, initial plans are to create a “Linux over memristors” OS first, called Linux++, which would not be a “clean slate” optimization for the new “flat” memory, and then a version called Carbon that would be optimized for flat memory and open sourced.  Left unexamined are such questions as “Do I need to recompile my Linux apps so that they’ll work on Linux++, as was the case between Red Hat and Novell Linux?”  If most apps must be recompiled, business use of the new OSes is probably in most cases a non-starter, and even cloud providers might question the business case.
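
To make concrete what a "clean slate" optimization for flat memory might mean at the application level, here is a minimal sketch using a memory-mapped file as a stand-in for byte-addressable, persistent memristor memory (the file name and layout are hypothetical, and a real Machine OS interface would presumably look quite different):

# Minimal sketch of "flat", byte-addressable persistent storage: a data value
# lives directly in a persistent region and is read and updated in place,
# instead of being serialized through file-system block I/O.  A memory-mapped
# file stands in for memristor memory; the file name and layout are hypothetical.
import mmap
import os
import struct

REGION = "persistent_region.bin"     # stand-in for a memristor memory region
SIZE = 4096

if not os.path.exists(REGION):       # initialize a "fresh" region once
    with open(REGION, "wb") as f:
        f.write(b"\x00" * SIZE)

with open(REGION, "r+b") as f:
    region = mmap.mmap(f.fileno(), SIZE)
    counter, = struct.unpack_from("<Q", region, 0)    # read a value in place
    struct.pack_into("<Q", region, 0, counter + 1)    # update it in place
    region.flush()                                    # make the update durable
    print(f"this program has run {counter + 1} time(s)")

Whether existing Linux apps could keep their current block-I/O model, or would need rework along these lines, is exactly the sort of transition question the Review leaves open.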
We have been here before.  When HP committed to Intel's Itanium line, I remember from my briefings that they did not at all appreciate the importance of having well-tuned compilers and interpreters, with minimal recompilation involved, from the get-go.  I believe that Itanium's EPIC ("explicitly parallel") instruction-set approach was a very good idea at the time, but without a clear transition strategy, HP's effort to implement it was pretty close to doomed at the start:  it could never gain critical mass in the market.
I see no indication yet that HP has the software-development prowess to succeed in an even larger app conversion.  This is especially true because HP’s tradition since the early ‘90s has by and large been “let customers pick other companies’ software where needed”, which is fine until you really need in-house software to boost your new hardware technology, and quickly.
Is The Era of the Hardware And/Or Services Critical Success Factor Over?

It is generally understood that HP is not in a great strategic position these days, having decided to split its PC business from its enterprise server side into two separate companies, and having seen a small but consistent decline in revenues (together with a larger early dip in profits) in its immediate past.  I do not, however, view HP as wildly less successful than others of the traditional hardware/services-biased companies, including IBM.  There does seem to be a continuum ranging from purer hardware/services offerings to predominantly software-related competitive advantage, and it correlates with revenue trends:  HP is down more than IBM, which does less well than Oracle, which does less well than Microsoft.  Apple may seem to be an outlier, with its smart-phone hardware, but the differentiator that allowed Apple to croak Nokia's hardware focus was the swipe touch screen -- software all the way.
If I am right, then HP really needs to do something about innovative-software development.  If it were me, I’d bet the farm on a better form of agile development than other large firms are practicing, plus meshing agile marketing with agile software development in a major way.  Despite welcoming some major agile-development upgrades in firms like IBM, I believe there’s plenty of room for additional goodness.
Failing that, it’s hard to see how HP turns itself around, as one company or two, via the memristor route or something less risky, in the next few years.  To misquote Santayana, even those who have hardware that remembers for them are doomed to repeat software-driven history – and that ain’t good.