As noted in the latest issue of MIT Technology Review, HP’s research arm is making a major bet on memristors, seeking to deliver a major performance boost beyond the traditional processor/memory/disk hierarchy. More specifically, as presented in the Review, HP would replace memory, SSDs, and disk with “flat” memristor memory, and then communicate within the new processor-memory architecture at light speed over optical-fiber-type links rather than via today’s slower electron-speed interconnects. In the Review, HP projects initial product shipments for next year, although Wikipedia suggests 2018 is a likelier date.
It is exceptionally difficult to tell whether the resulting products would really outdistance SSDs and cables over the long haul, because HP is making its bet at an unusually early stage in the technology’s development. My skepticism, however, is directed neither at the technology nor at its potential to deliver major and scalable performance benefits beyond the transistors beloved of Moore’s Law. Rather, I believe that a critical success factor for HP will be delivery of an operating system (HP has christened the concept The Machine, and the operating system Machine OS) plus the attendant compilers and interpreters. To put it bluntly, HP has been slow to realize that it needs software-development savvy in its DNA, and past experience with HP software development suggests that even if the technology pans out, HP will probably fail to give business and consumer customers a reason to adopt Machine OS in place of old-architecture operating systems.
Let’s dig further into the details.
More On Memristors
As Wikipedia notes, it is not clear even now that what HP is touting is indeed, strictly speaking, a memristor. This matters only if, as fab facilities are developed and the technology tries to scale the way the transistor did, it turns out that HP’s device not only is not a memristor but also cannot scale as a true memristor should. That’s one unusual technology risk for HP.
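For reference, the “strictly speaking” question turns on Leon Chua’s original 1971 definition, under which the memristor is a fourth fundamental circuit element relating charge and flux linkage. In that formulation, memristance acts as a charge-dependent resistance:

```latex
M(q) \;=\; \frac{d\varphi}{dq},
\qquad
v(t) \;=\; M\!\bigl(q(t)\bigr)\,i(t)
```

A device qualifies as an ideal memristor only if its resistance depends solely on the total charge that has flowed through it; the debate Wikipedia records is over whether HP’s device actually satisfies that definition.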
The second technology risk is that the actual “unit form” of the memristor memory has also not yet been specified and verified. Apparently, a “crossbar latch” approach has been tested at a small scale with good results, but it is not certain whether that’s the best approach or if it will scale appropriately to memory chips. If it does, HP estimates that 1 cubic centimeter can contain 1 petabit of memory (all accessible at the same tens-of-nanoseconds speed).
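Taking the article’s 1-petabit-per-cubic-centimeter figure at face value, some quick arithmetic puts it in more familiar units (the density claim is HP’s; the conversion below is just unit math):

```python
# Convert the claimed memristor density (1 petabit per cubic centimeter)
# into bytes and terabytes. Only the 1 Pb/cm^3 figure comes from the
# article; everything else is straightforward unit conversion.

PETABIT = 10**15        # bits in a petabit (decimal)
BITS_PER_BYTE = 8

bits_per_cm3 = 1 * PETABIT
bytes_per_cm3 = bits_per_cm3 / BITS_PER_BYTE       # 1.25e14 bytes
terabytes_per_cm3 = bytes_per_cm3 / 10**12         # 125 TB

print(f"{terabytes_per_cm3:.0f} TB per cubic centimeter")
```

In other words, the claim amounts to roughly 125 terabytes in a sugar-cube-sized volume, all at uniform tens-of-nanoseconds access speed.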
Other details of the memristor seem less relevant at this time. For example, a true memristor effectively keeps a “memory” of past changes of state, so that, as with a disk but unlike with main memory, if you turn the power off and on again, the memristor can recover its value at the time of shutoff (or previous values). This type of “remembrance” does not seem important for most computations, although it certainly might help with machine-learning experiments.
Again citing MIT Technology Review, initial plans are to create a “Linux over memristors” OS first, called Linux++, which would not be a “clean slate” optimization for the new “flat” memory, and then a version called Carbon that would be optimized for flat memory and open sourced. Left unexamined are such questions as “Do I need to recompile my Linux apps so that they’ll work on Linux++, as was the case between Red Hat and Novell Linux?” If most apps must be recompiled, business use of the new OSes is probably in most cases a non-starter, and even cloud providers might question the business case.
We have been here before. When HP committed to Intel’s Itanium line, I remember from my briefings that HP did not at all appreciate the importance of having well-tuned compilers and interpreters, with minimal recompilation involved, from the get-go. I believe that Itanium’s very-long-instruction-word (VLIW-style) instruction-set approach was a very good idea at the time, but without a clear transition strategy, HP’s effort to implement it was pretty close to doomed at the start: it could never gain critical mass in the market.
I see no indication yet that HP has the software-development prowess to succeed in an even larger app conversion. This is especially true because HP’s tradition since the early ‘90s has by and large been “let customers pick other companies’ software where needed”, which is fine until you really need in-house software to boost your new hardware technology, and quickly.
Is The Era of the Hardware And/Or Services Critical Success Factor Over?
It is generally understood that HP is not in a great strategic position these days, having decided to split its PC business from its server side into two separate companies, and having posted a small but consistent decline in revenues (together with a larger early dip in profits) in its immediate past. I do not, however, view HP as wildly less successful than the other traditional hardware/services-biased companies, including IBM. There does, however, seem to be a continuum ranging from purer hardware/services offerings to predominantly software-related competitive advantage, and it correlates with revenue trends: HP is down more than IBM, which does less well than Oracle, which does less well than Microsoft. Apple may seem to be an outlier, with its smartphone hardware, but the differentiator that allowed Apple to croak Nokia’s hardware focus was the swipe touch screen – software all the way.
If I am right, then HP really needs to do something about innovative software development. If it were up to me, I’d bet the farm on a better form of agile development than other large firms are practicing, plus meshing agile marketing with agile software development in a major way. While I welcome some major agile-development upgrades at firms like IBM, I believe there’s plenty of room for additional goodness.
Failing that, it’s hard to see how HP turns itself around, as one company or two, via the memristor route or something less risky, in the next few years. To misquote Santayana, even those who have hardware that remembers for them are doomed to repeat software-driven history – and that ain’t good.