In my first post about the lessons of Moore’s Law, I talked
about how the silicon-transistor-chip process drove long-term cost/unit
decreases and crowded out competitor technologies – but I didn’t talk about
winners and losers who were all using a similar Moore’s Law process. Why did some prosper in the long run and some
fall by the wayside? The answer, again,
I believe, has important implications for computer-industry and
related-industry strategies today.
Low Barriers to Entry
In 1981, the trade and political press was abuzz with the
rise of the Japanese, who, it seemed, would soon surpass the US in GDP and
dominate manufacturing – which was, in that era’s mindset, the key to dominating
the world economy. In particular, the
Japanese seemed set to dominate computing, because they had taken the cost lead
in the Moore’s-Law silicon-transistor chip manufacturing process – that is, in
producing the latest and greatest computer chips (we were then approaching 64K
bits per chip, iirc).  I well remember a
conversation with Prof. Lester Thurow of MIT (my then teacher of econometrics
at Sloan School of Management), at the time the economist guru of national
competitiveness, in which he confidently asserted that the way was clear for
the Japanese to dominate the world economy.
I diffidently suggested that they would be unable to dominate in
software, and in particular in microprocessors supporting that software,
because of language barriers and difficulties in recreating the US application
development culture, and therefore would not dominate computing.  Thurow brushed off the suggestion: Software, he said, was a “rounding error” in
the US economy.

There are two points hidden in this particular anecdote: first, that there are in fact low barriers to entry in a silicon-transistor-chip process-driven industry such as memory chips, and second, that the barriers to entry are much higher when software is a key driver of success. In fact, that is one key reason why Intel succeeded while the Mosteks and TIs of the initial industry faded from view: where many other US competitors lost ground as first the Japanese, then the Koreans, and finally the Taiwanese came to dominate memory chips, Intel deliberately walked away from memory chips in order to focus on evolving microprocessors according to Moore’s Law and supporting the increasing amount of software built on top of Intel processor chips. And it worked.
Upwardly Mobile
It worked, imho, primarily because of an effective Intel
strategy of "forward [or upward] compatibility." That is, Intel committed unusually strongly
to the notion that as component compression, processor features, and processor
speed increased in the next generation according to Moore's Law, software
written to use today's generation would still run, and at approximately the
same speed as today or faster. In
effect, as far as software was concerned, upgrading one's systems was simply a
matter of swapping the new generation in and the old one out -- no recompilation
needed. And recompilation of massive
amounts of software is a Big Deal.

Push came to shove in the late 1980s. By that time, it was becoming increasingly clear that the Japanese were not going to take over the booming PC market, although some commentators insisted that Japanese "software factories" would be superior competitors to the American software development culture. The real tussle was for the microprocessor market, and in the late 1980s Intel's main competitor there was Motorola. In contrast to Intel, Motorola emphasized quality -- in line with the Deming precepts that business gurus of the time insisted were the key to competitive success. Motorola chips would come out somewhat later than Intel's, but would be faster and a bit more reliable. And then the time came to switch from 16-bit to 32-bit instruction words -- and Motorola lost massive amounts of business.
Because of its focus on upward compatibility, Intel chose to come out with a version that sacrificed some speed and density. Because of its focus on quality, Motorola chose to require use of some 32-bit instructions that would improve speed and fix some software-reliability problems in its 16-bit version. By this time, there was a large amount of software using Motorola's chip set. When users saw the amount of recompilation that would be required, they started putting all their new software on the Intel platform. From then on, Intel's main competition would be the "proprietary" chips of the large computer companies like IBM and HP (which, for technical reasons, never challenged Intel's dominance of the Windows market), and a "me-too" company called Advanced Micro Devices (AMD).
The story is not quite over. Over the next 20 years, AMD pestered Intel by adopting a "lighter-weight" approach that emulated Intel's processors (so most if not all Intel-based apps could run on AMD chips) but used a RISC-style internal design for higher performance. As long as Intel kept to its other famous saying ("only the paranoid survive"), it was always ready to fend off the latest AMD innovation with new capabilities of its own (e.g., embedding graphics processing in the processor hardware). However, at one point it started to violate its own tenet of software support via upward compatibility: It started focusing on the Itanium chip.
Now, I believed then and I believe now that the idea embedded in the Itanium chip -- "EPIC" (explicitly parallel) computing, or instruction-word design that did not insist that all words be the same length -- is a very good one, especially when we start talking about 64-bit words. True, there is extra complexity in processing instructions, but great efficiency and therefore speed advantages in not insisting on wasted space in many instruction words. But that was not what croaked Itanium. Rather, Itanium was going to require new compilers and recompilation of existing software. As far as I could see at the time, neither Intel nor HP fully comprehended the kind of effort that was going to be needed to minimize recompilation and optimize the new compilers. As a result, the initial prospect faced by customers was of extensive long-term effort to convert their existing code -- and most balked. Luckily, Intel had not completely taken its eye off the ball, and its alternative forward-compatible line, after some delays that allowed AMD to make inroads, moved smoothly from 32-bit to 64-bit computing.
History Lesson 2
By and large, the computer industry has now learned the
lesson of the importance of upward/forward compatibility. In a larger sense, however, the Internet of
Things raises again the general question:
Whose platform and whose “culture” (enthusiastic developers outside the
company) should we bet on? Is it Apple’s
or Google’s or Amazon’s (open-sourcing on the cloud) or even Microsoft’s (commonality
across form factors)? And the answer,
Moore’s Law history suggests, is to bet on the platforms that will cause the
least disruption to existing systems and still support the innovations of the
culture.

I heard an interesting report on NPR the other day about the resurgence of paper. Students, who need to note down content that rarely can be summarized in a tweet or on a smartphone screen, are buying notebooks in droves, not as a substitute but as a complement to their (typically Apple) gear. In effect, they are voting that Apple form factors, even laptops, are sometimes too small to handle individual needs for substantial amounts of information at one’s fingertips. For those strategists who think Apple is the answer to everything, this is a clear warning signal. Just as analytics users are finding that multiple clouds, not just Amazon’s, are the way to go, it is time to investigate a multiple-platform strategy, whether it be in car electronics and websites or in cloud-based multi-tenant logistics functionality.
And yet, even smart software-focused platform strategies do not protect the enterprise from every eventuality. There is still, even within Moore’s-Law platforms, the periodic disruptive technology. And here, too, Moore’s Law history has something to teach us – in my next blog post.