Monday, September 26, 2011

The Real Effects of Computing Brilliance

Recently, L.E. Modesitt, a noted and highly perceptive (imho) science fiction author with a long background in business, government, education, and the like, wrote a blog post in which he decried the contributions of “brilliant minds, especially at organizations like Google and Facebook” to “society and civilization.” He noted Eric Schmidt’s remarks in a recent interview that young people from colleges were brighter than their predecessors, and argued that they had applied their brilliance in “pursuit of the trivial … [and] of mediocrity.” More specifically, he claimed that their brilliance has helped lead to the undermining of literary copyright and therefore of an important source of ideas, the creation of bubble wealth, the destabilization of the financial system, and potentially the creation of the worst US political deadlock since the Civil War.

Here, however, Mr. Modesitt steps onto my turf as well as his own. I can claim to have been involved with computers and their results since I was in the same freshman class in college as Bill Gates (Junior, thank you). I have had a unique vantage point on those results that combines the mathematical, the technological, the business, and even the political, and have enjoyed an intimate acquaintance with, and given long-running attention to, the field. I am aware of the areas in which this generation (as well as the previous two) has contributed, and their effects. And I see a very different picture.

Let’s start with Mr. Schmidt, who is a very interesting man in his own right. Having been at Sun Microsystems (but not heading it) in its glory days of growth, struggle, and growth (1983-1997), he became CEO of Novell as it continued its slide into oblivion, and then, in 2001, became a key part of the necessary transition of Google from its founders to successors who could move the company beyond one person’s vision. Only in the last role was he clearly involved with successful technology leadership, as Sun was a master at marketing average technology as visionary, while Novell’s influence on industry innovation by that point was small.

All that being said, Mr. Schmidt is speaking from the viewpoint of having seen all three of computing’s great creative “generations,” waves of college-bred innovators who were influenced by comparable generations of their societies, and who combined that social influence with their own innovations to create the stew that is the Web of today. I would divide those generations as follows:

1. Graduating from college in 1971-1981, the “baby boomer/socially conscious”.
2. Graduating from college in 1982-1995, the “individual empowerment Gen Xer”.
3. Graduating from college in 1996-2007, the “Web is the air we breathe” generation.

Where Mr. Modesitt sees only some of the effects of all three generations from the outside, I believe that Mr. Schmidt sees from within the way that generation 3 incorporates all the technological advances of the previous generations and builds on them. Therefore, where Mr. Modesitt is too negative, I believe that Mr. Schmidt is too positive. Generations 1 and 2 have created a vast substructure – all the technologies of computing and the Internet that allow Generation 3 to move rapidly to respond to, or anticipate, the needs of each segment of a global society. Because the easiest needs to tackle are the most “trivial”, those will be most visible to Mr. Modesitt. Because Generation 3 appears to move from insight to insight faster by building on that infrastructure, Mr. Schmidt sees them as more brilliant.

Let’s get down to cases: the Web, and Google. The years of Generation 3 were the years when computing had a profound effect on the global economy: it gave productivity more than a decade of 2.5-3% growth, a kind of “sunset re-appearance” of the productivity gains of the Industrial Revolution. This effect, however, was not due to Generation 3. Rather, Generation 2’s creation of desktop productivity software, unleashed in the enterprise in the early 1990s, drove personal business productivity higher, and the advent of the business-usable Web browser (if anything, a product of Generation 1’s social consciousness) allowed online sales that cut out “middleman” businesses, cutting product and service costs sharply.

But where Generation 3 has had a profound effect is in our understanding of social networks. Because of its theoretical work, we now understand in practical terms how ideas propagate, where the key facilitators and bottlenecks are, and how to optimize a network. These are emphatically not what we think of when we talk about the value of computing; but they are the necessary foundation for open source product development, new viral and community markets, and the real prospect of products and services that evolve constantly with our needs instead of becoming increasingly irrelevant (see my blog post on Continuous Delivery).

Has that made life better? I would say, that’s like asking if Isaac Newton’s calculus was a good idea, given that it has mainly been used to calculate trajectories for artillery. Ideas can be used for good or ill; and often the inventor or creator has the least control over their long-term use by others.

But let’s play the game, anyway. Undermining copyright? I would blame that more on Generation 2, which used the Web to further its libertarian ideologies that now form a key part of the “Web ethos”. Financial algorithms and bubble wealth? Sorry, that’s a product of the broader society’s reaction to the perceived (but not necessarily real) lawlessness and government “do-gooding” of the Vietnam era, so that law and order and “leave me alone” permitted business deregulation and lack of outside-the-box thinking about the risks of new financial technologies. Political deadlock? Same thing. “Leave me alone” created a split between Vietnam-era social consciousness and previous generations, while removing later generations from the political process except as an amoral or totally self-interested game. In both those cases, no computing Generation could have done very much about it except to enable it.

What are the most profound ideas and impacts of all three Generations’ “brilliant minds”? I would say, first, the idea of a “meta-product” or “meta-thing” (today’s jargon probably would call it a “virtual reality”), an abstraction of a physical good or process or person that allows much more rapid analysis and change of what we need. To a far greater degree than we realize, our products are now infused with computer software that not only creates difficulties in carrying out tasks for poor Mr. Modesitt, but also gives an array of capabilities that can be adjusted, adapted, evolved, and elaborated much more rapidly than ever before. Mr. Modesitt, I know, rightly resents the stupidities of his word processing software; but there is simply no way that his typewriter hardware by now would have the spelling and grammar checking capabilities, much less the rapid publishing capabilities, of that same word processor. It may be more work to achieve the same result; but I can guarantee that, good or bad, many of the results achieved could not have been attained forty years ago. Our computing Generations did that.

The second, I believe, is that of global information. The Web, as imperfect as it is, and as security-ridden as it has become, has meant a sea-change in the amount of important information available to most if not all of us. When I was in college, the information readily available was mostly in textbooks and bookstores, which did not begin to capture the richness of information in the world. The key innovation here was not so much the search engine, which gave a way to sort this information, but the global nature of the Web user interface, the embedding of links between bits of information, and the invention of “crowdsourcing” communities, of which Wikipedia is by far the most successful as a pointer from information area to information area. Say all you like about the lies, omissions, and lack of history behind much of this global information pool; it is still better than a world in which such information appeared not to exist. Our computing Generations did that, too.

The third idea, the one I mentioned above, is understanding of, and ability to affect the design of, social networks. Facebook is only the latest variant on this theme, which has shown up in Internet groups, bulletin boards, open source and special-interest communities, smartphone and text-message interfaces, file sharing services, blogs, job sites, and other “social networks.” The Arab Spring may be an overhyped manifestation of its power; but I think it is fair to say that some of the underlying ideas about mass peaceful regime change could never have been global or effective without computing’s “brilliant minds.” I give Generation 3 most of the credit for that.

So here’s an example of why I disagree with Mr. Modesitt: a couple of years ago, I sat down to understand climate change. At first, the best information came from old-style books, mostly from the local public library. But soon, I was using search engines and Web links to dig deeper, to see the connections between what the books were saying, and then Wikipedia to point me to research articles and concept explanations that allowed me to put it all into a coherent whole, adding my own technology, economic, business, political, and government experience and knowledge to fill in the gaps. No set of books, then or now, could possibly have given me enough information to do this. No phone network could possibly have shown me the people and institutions involved. No abacus could have put the statistics together, and no typewriter or TV could have given me such a flexible window into the information. The three great ideas of computing’s three Generations made such a thing possible.

So, no, I don’t believe that much of what this Generation of brilliant minds has been working on is at all trivial or mediocre, despite appearances – not to mention the contributions of the other two Generations – nor has its effect been predominantly bad. Rather, it reminds me of the remark in Dilbert that it is the work of a few smart, unprepossessing drones that moves the world forward, hindered rather than helped by their business or the greater society. Despite society’s – and sometimes their own – best efforts, these Generations’ blind pursuit of “the new new thing”, I say, has produced something that overall is good. In every sense.

Monday, September 12, 2011

Human Math and Arctic Sea Ice

Over the last year and a half, instead of following baseball and football, I have been following the measurements of Arctic sea ice. Will it entirely melt away at minimum? If so, when? Will it continue to melt past that point? Will the Arctic reach the point of being effectively ice-free year-round? If so, when? Will Arctic sea ice set a new record low in extent this year? In area? In volume? By the way, the answers to the last three questions in 2011 are already, by some measures, yes, yes, and yes.

In the process, I have become entirely addicted to the Arctic Sea Ice web site, the Arctic sea ice equivalent of a fantasy football league. There, I and other scientist and math wannabes can monitor and pit ourselves against near-experts, and plunder much of the same data available to scientists to try our own hand at understanding the mechanics of the ebb and flow of this ice, or to make our own predictions. Periodically, scientific research pops up that deepens our understanding in a particular area, or challenges some of our assumptions. Strange weather patterns occur as we near minimum, making the final outcome unpredictable. In fact, this year we still don’t know whether the game is over for the year or not – whether we are going into overtime.

But what disturbs me about the glimpse I am getting into science, this late in life, is that I keep seeing half-veiled glimpses of what I might call “data snow blindness,” not just from us newbies, but from some of the scientific research noted in the site. By that I mean that those who analyze Arctic sea ice data tend to get so wrapped up in elaborating the mathematical details – say, of a decrease in extent at minimum, and of how short-term and long-term changes in sea-ice albedo, the North Atlantic Dipole Anomaly, cloud cover, and increased Arctic surface temperatures can predict that minimum – that they seem to forget what extent data really does and does not convey, and how other factors not yet clearly visible in extent data should play an increasing role in future extent predictions. They keep talking about “tipping points” and “equilibria” and “fundamental changes in the system” that do not appear to be there. And so, what surfaces in the media, with few if any exceptions, gives an entirely misleading picture of what is to come.

So let me lay out my understanding of what the data is really saying, and how people seem to be going wrong. Bearing in mind, of course, that I’m not the greatest mathematician in the world, either.

The Model
I find that the easiest way to visualize what’s going on with Arctic sea ice is to think of a giant glass full of water, with an ice cube floating on top that typically almost but not quite fills the top of the glass. In the summer, the sun shines on the ice and top of the water, adding heat on top; in winter, that heat turns off. In summer and winter, the water below the ice is being heated, although a little less so in winter.

This does not quite complete the picture. You see, water is constantly flowing in from the south on one side and out to the south on another. As it hits the ice, some of it freezes on, and that ice moves through the cube until it exits on the other side – a transit averaging about five years, apparently. These currents also act to break the ice apart as it is melting in summer, so (especially at the edge) there are lots of little “cubelets”. Where the ice is solidly packed together in the glass, “area” – the amount of ice we see from above – is the same as “extent” – the region in which there is enough ice to be easily detectable. But where there are cubelets, extent is much greater than area – as much as 70% more, according to recent figures.

One more important point: the depth of Arctic sea ice is not at all the same everywhere. This was true when the first nuclear sub approached the Pole underwater decades ago, and it is true as the Polarstern measures depth from above this year. The age of the ice, the point in the year in which it first froze, and where it is in relation to the Pole all factor in; but even in small regions, “thickness” varies. Generally, less than half the ice is almost exactly the same thickness as the average, with lots above and lots below. I find it useful to think of a normal curve of sea-ice thickness more or less peaking at the average.

In a state of “equilibrium” such as apparently has existed for perhaps 5-10 million years, the Arctic cube fills at least 70% of the Arctic in summer and just about all of the Arctic in winter. In summer, the sun melts the top and sides of the cube. However, since we only see the top, which only peeks out a little bit from the top of the water, we don’t see the way that the bottom of the cube rises in the water – the “balance point” between “salt” water below and “no-salt” ice above goes up. In the same way, in winter, we don’t see the way that the balance point goes down deeper.

Now suppose the globe starts warming. As it happens, it warms faster in the Arctic than in the equator, and it warms in both the near-surface air and in the water that is flowing through the Arctic underneath the ice, year-round. Estimates are that the air temperatures in summer in or near the Arctic have reached more than 10 degrees higher, and the rate of rise is accelerating; likewise, the ocean temperature is up more than a degree, and the rate of rise is accelerating. What we would expect to happen is that there is less and less ice during the summer – and also less and less ice during the winter, because the water underneath that cube is warmer, and the “balance point” is higher. And the rate we would be losing ice would be accelerating.

The Measures
Now how do we measure what’s going on with the ice? We point our satellites at the Arctic, and we measure in three ways: extent, area, and volume. Extent and area we have explained above; volume is area times average thickness.

To get extent, we divide the Arctic into regions, and then into subregions. If, say, more than 15% of the subregions in a region show ice, then that’s part of the ice extent. The area is the extent times the percent of subregions that show all ice (this is a very simplified description). There are difficulties with this, such as the fact that satellites tend to find it difficult to distinguish between melt ponds at the top of the ice during melting and melting of the ice “all the way down”; but it appears that those cause only minor misestimations of “actual” extent and area. Storms that wash over “cubelet” ice can cause temporary fairly large downward jumps in what the satellite perceives as ice, and hence in area; but again, this goes away as the storm subsides. All in all, the measuring system gives a pretty good picture (by now) of what’s going on with area and extent.
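To make the extent/area distinction concrete, here is a minimal sketch in Python. The grid, cell size, and concentration values are all invented for illustration; the only thing taken from the text is the 15% threshold rule and the idea that area weights each cell by its ice concentration while extent counts the whole cell.

```python
import numpy as np

# Hypothetical 4x4 grid of sea-ice concentration (fraction of each
# cell covered by ice), as a satellite might report it.
CELL_AREA_KM2 = 625.0  # assume each grid cell is 25 km x 25 km

conc = np.array([
    [0.95, 0.90, 0.80, 0.10],
    [0.85, 0.70, 0.40, 0.05],
    [0.60, 0.30, 0.20, 0.00],
    [0.10, 0.05, 0.00, 0.00],
])

# Extent: total area of cells with at least 15% ice.
extent = np.sum(conc >= 0.15) * CELL_AREA_KM2

# Area: concentration-weighted sum; only the ice itself counts.
area = np.sum(conc) * CELL_AREA_KM2

print(extent, area, extent / area)
```

With these made-up numbers, extent comes out half again as large as area – the kind of gap the “cubelets” discussion above describes for broken edge ice.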

Volume is much more difficult to get at – because it’s very hard to get an accurate picture of thickness from a satellite. Instead, we create a model of how thick each part of the ice should be at any time of the year, and then check it out “on foot”: walking around on the ice (or icebreaking) and testing.

Now here’s what most people either don’t know or don’t think about: that model has been tested against samples almost constantly since it was first developed decades ago, and it is pretty darn accurate. When it has said, this year, that ice is as little as a meter thick near the North Pole, vessels went out and, lo and behold, found a large patch of ice 0.9 meters thick near the North Pole. That’s clearly not something that many scientists were anticipating six years ago as likely – but the model was predicting it.

It’s the Volume, Stupid
All right, let’s get back to the model. Suppose we have equilibrium, and we start an accelerating rise in air and ocean temperatures. What changes would we expect to see in real extent, area, and volume year to year?

Well, think about what’s happening. The water heating is nibbling away at the “balance point” from beneath, and at the edges. The air heating is nibbling away at the top of the ice during the summer, and expanding the length of the summer a bit, and heating the surface water at the edges a bit. So it all depends on the importance of water heating vs. air heating. If water heating has little effect compared to air heating, what happens to volume is not too far from what happens to area and extent: They start plummeting earlier and go farther down, but during the winter, when just about the whole basin gets frozen again, they go back to about where they were – a little less, because the air is more often above freezing at the very edge of the ice even in the depth of winter.

Now suppose water heating has a major role to play. This role shows up mostly as “bottom melt”; it shows up year-round in about the same amounts; and, until nearly all the ice is melted, it shows up almost completely in volume. So here is what you’ll see when you look at extent, area, and volume. Volume goes down for quite a while before it’s really clear that area and extent are changing. Then area and extent start going down at minimum, but not much, if at all, at maximum. Then the melt moves the balance point up and up, and some of that normal curve of thickness starts reaching “negative thickness” as the melt reaches the surface of the ice; that means area starts moving down fast, and extent with it. Then, in a year or two, half of the ice reaches “negative thickness”, so that area seems to be cut in half and extent by one-third, and the same thing happens the next year, while the rate of volume decrease actually starts slowing. Then, in two more years, effectively less than 1% of the maximum ice area shows up at minimum, and less than 15% of the extent.

This is where lots of folks seem to fail to understand the difference between a measure and what it’s measuring. If you look at the volume curve, until the last few years before it reaches zero it seems to be accelerating downward – then it starts flattening out. But what’s actually happening is that some parts of the ice have reached the point where they’re melting to zero – others aren’t, because there’s a great variation in ice thickness. If we represented those parts of the ice that have melted as “negative thickness”, then we would continue to see a “volume” curve accelerating downward. Instead, we represent them as “zero thickness”, and the volume delays going to zero for four or five years.
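The “negative thickness” effect is easy to simulate. The sketch below uses entirely invented numbers: thickness across the pack is drawn from a normal curve, the mean declines at an accelerating (quadratic) rate, and the measured volume clips melted-out ice at zero instead of letting it go negative.

```python
import numpy as np

# Toy illustration of why measured volume flattens near zero:
# thickness is roughly a normal curve, its mean declines at an
# accelerating rate, but melted-out ice is recorded as zero
# thickness, never "negative thickness".

rng = np.random.default_rng(0)
spread = 0.8                            # std dev of thickness (m), assumed
samples = rng.standard_normal(100_000)  # one draw, reused each "year"

for year in range(10):
    mean_t = 3.0 - 0.05 * year**2       # accelerating decline of the mean (m)
    h = mean_t + spread * samples
    signed = h.mean()                   # the underlying trend, negatives included
    measured = np.clip(h, 0.0, None).mean()  # what the volume measure reports
    print(year, round(signed, 2), round(measured, 2))
```

In the early years the two track each other; once the mean nears zero, the measured value stays above zero and declines ever more slowly, even though the underlying trend keeps accelerating downward – exactly the delayed-zero behavior described above.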

So what are our measures telling us about the relative importance of water and air heating? First of all, the volume curve is going down, and going down at an accelerating rate. From 1980 (start of the model) to 2005, average thickness at minimum area (a good proxy for thickness at ice minimum), went from more than 3 meters to a little more than 2 meters. From 2006 to 2010, it has gone from there to a little more than 1 meter. If it were to follow the same path until 2013 or 2014, then “volume” would go to zero then at minimum, with the model’s measures of volume going to zero perhaps 3 years later, as explained in the last paragraph.
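The extrapolation in that paragraph can be sketched directly: fit an accelerating (quadratic) curve through the three average-thickness points the text gives and find where it crosses zero. The fit method is my own choice; the data points are the text’s.

```python
import numpy as np

# Fit a quadratic through the three stated thickness points and
# find where it reaches zero thickness at minimum.
years = np.array([1980.0, 2005.0, 2010.0])
thickness = np.array([3.0, 2.0, 1.0])   # meters at minimum, per the text

coeffs = np.polyfit(years - 1980.0, thickness, 2)  # exact fit to 3 points
zero_year = 1980.0 + max(np.roots(coeffs).real)    # later zero crossing
print(round(zero_year))  # about 2014
```

The crossing lands right around 2013-2014, agreeing with the paragraph’s back-of-envelope estimate.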

But there’s another key fact about volume: volume at maximum went down by about the same amount (roughly 15 thousand cubic kilometers) from 1980 to 2010 as volume at minimum (roughly 14 thousand cubic kilometers). In other words, instead of volume springing back during the winter almost to what it was 30 years ago, it’s almost exactly tracking the loss of volume the rest of the year. The only way this happens is if water melting from the bottom is a major part of the loss of volume from year to year.

Let’s sum up. The samples show that volume is going down at an accelerating rate throughout the year. The accelerating drop in volume shows that bottom melt is driving accelerating loss of Arctic sea ice year-round. Thus, we can predict that area and extent will soon take dramatic drops not yet obvious from the area and extent data themselves, and that these will approach zero at minimum in the next 7 years or so. In other words, the key to understanding what’s going on, and what should happen next, is far more the measure of volume – rightly understood – than area or extent.

Where’s The Awareness?
And yet, consistently, scientists and wannabes alike seem to show a surprising degree, not just of disagreement with these projections, but also of a seeming lack of awareness that there should be any issue at all. Take some of the models that participants in Arctic Sea Ice are using to predict yearly minimum extent and area. To quote one of them, “volume appears to have no effect.” The resulting model kept being revised down, and down, and down, as previous years that had shorter summers, slower rates of melt for a given weather pattern, and less thin ice, proved bad predictors.

Or, take one of the recent scientific articles that showed – a very interesting result – that if, for example, the Arctic suddenly became ice free right now, it would snap back to “equilibrium” – the scientist’s phrase – within a few years. But why was the scientist talking about “equilibrium” in the first place? Why not “long-term trend”? It is as if the scientist was staring at the volume figures and saying, well, they’re in a model, but they’re not really nailed down, are they, so I’ll just focus on area and extent – where the last IPCC report talked about zero ice in 2100, maybe. So suppose we substituted the volume’s “long-term trend” for “equilibrium”, what would we expect? Why, that a sudden “outlier” dip in volume, area, or extent would return, not to the same level as before, but to where the long-term accelerating downward curve of projected volume would say it should return. And, sure enough, the “outlier” volume dip in 2006-2007 and extent dip in 2007 returned only partway to the 2005 level – in 2009 – before resuming their decrease at an accelerating rate.

And that brings us to another bizarre tendency: the assumption that any declines in area and extent – and, in the PIOMAS graph, in volume – should be linear. Air and ocean temperatures are rising at an accelerating rate; so an accelerating downward curve of all three measures is more likely than a linear decline. And, sure enough, since 2005, volume has been consistently farther and farther below PIOMAS’ linear-decline curve.
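A synthetic example (arbitrary units, made-up numbers) shows why the linearity assumption fails in exactly the way described: when a series declines at an accelerating rate, a straight-line fit systematically sits above the latest data points.

```python
import numpy as np

# A series that declines at an accelerating (quadratic) rate.
t = np.arange(0, 31)                  # "years since 1980"
volume = 30.0 - 0.012 * t**2          # arbitrary units, invented numbers

linear_fit = np.polyval(np.polyfit(t, volume, 1), t)
quad_fit = np.polyval(np.polyfit(t, volume, 2), t)

# The straight-line trend overshoots the end of the record, while
# the accelerating model matches it.
print(volume[-1], round(linear_fit[-1], 2))
print(np.abs(volume - quad_fit).max())
```

The most recent points fall consistently below the linear trend line – the same signature the text reports for volume versus PIOMAS’ linear-decline curve since 2005.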

Then there’s “tipping point”: the idea that somehow, if Arctic sea ice falls below a certain level, it will inevitably seek some other equilibrium than the one it used to have. Any look at the volume data tells you that there is no such thing as a tipping point there. Moreover, a little more thought tells you that as long as air and ocean temperatures continue their acceleration upwards, there is no such thing as a new equilibrium either, short of no ice year-round – which is a kind of equilibrium, if you don’t consider the additional heat being added to the Arctic afterwards. Sooner or later, the air and ocean temperature at all times in winter stays above roughly minus 2 degrees C – about the freezing point of salt water – so that the water can’t expel its salt and freeze. And that’s not hypothetical; some projections have that happening by around 2060, if not before.

And, of course, there’s “fundamental change in the system”, which implies that somehow, before or after now, the Arctic climate will act in a new way, which will cause a new equilibrium. No; the long-term trend will cause basic changes in climate, but will not be affected by them in a major way.

Follow-On Failures
The result of this data snow blindness is an inability to see the possibility – and the likelihood – of far faster additional effects that spread well beyond the Arctic. For instance, the Canadian Prime Minister recently welcomed the idea that the Northwest Passage is opening in late summer, because Canada can skim major revenues off the new summer shipping from now on. By the data I have cited above, it is likely that all the Arctic will be ice-free in part of July, August, and September by 2020, and most of the rest of the year by 2045, and very possibly by 2035. Then shippers simply go in one end of the Arctic via the US or Russia, and out the other end via the Danes (Greenland) or Russia, and bypass Canada completely.

And then there’s the Greenland land ice. Recent scientific assessments confirm that net land ice loss has doubled each decade for the last three decades. What the removal of Arctic sea ice means is the loss of a plug that was preventing many of the glaciers from sliding faster into the sea. Add a good couple of degrees of increased global warming from the fact that the Arctic sea ice isn’t reflecting light back into space any more, and you have a scenario of doubling net Greenland land ice loss for the next three decades as well. This, in turn, leads to a global sea level rise much faster than the 16 feet (or, around the US, maybe 20 feet) rise that today’s most advanced models project by assuming that from now on, Greenland land ice loss will rise linearly (again, why linearly? Because they aren’t thinking about the reality underlying the data). And, of course, the increased global warming will advance the day when West Antarctica starts contributing too.
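A toy calculation makes the doubling-versus-linear gap vivid. The starting rate is invented; the point is only the arithmetic of the two assumptions over six decades.

```python
# Contrast ice loss that doubles each decade with loss that stays
# linear at the initial rate. First-decade loss is an arbitrary unit.
first_decade_loss = 1.0

decades = 6
doubling_total = sum(first_decade_loss * 2**d for d in range(decades))
linear_total = first_decade_loss * decades

print(doubling_total, linear_total)  # 63.0 vs 6.0
```

Over six decades, doubling yields more than ten times the cumulative loss of the linear assumption; that ratio, not the made-up units, is what the models’ choice of trend shape is deciding.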

The Net-Net: The Burden of Proof

As a result of these instances of data snow blindness, I continually hear an attitude that runs something like this: I’m being scientifically conservative. Show me why I shouldn’t be assuming a possibility of no long-run change, or a new equilibrium short of year-round lack of Arctic ice, or a slow, linear descent such as shows up in the area and extent figures.

What I am saying -- what the data, properly understood, are saying to me – is that, on the contrary, the rapid effects I am projecting are the most likely future outcomes – much more rapid declines in area and extent at minimum in the very near future, and an ice-free Arctic and accelerating Greenland land ice loss over the 20-30 years after. Prima facie, the “conservative” projections are less likely. The burden of proof is not on me; it’s on you, to disprove the null hypothesis.
So what about it, folks? Am I going to hear the same, tired arguments of “the data doesn’t show this yet” and “you have to prove it”, or will you finally start panicking, like those of us who understand what the data seem to be saying? Will you really understand which mathematics applies to the situation, or will you entrance yourselves with cookbook statistics and a fantasy world of “if this goes on”? It may be lonely out here; but it’s real.

Wednesday, September 7, 2011

The Third Age of Software Development: Andreessen, Thoughtworks, and the Agile World

Recently there passed in front of me two sets of thoughts: one, an article by Marc Andreessen (once a Web software developer, now a VC) in the Wall Street Journal about how software is “eating the world”; the other, a piece by a small development tools/services company called Thoughtworks about a new technique called Continuous Delivery.

Guess which one I think is more important.

Still, both make important points about changes in computing that have fundamental effects on the global economy. So let’s take them one at a time, and then see why I think Continuous Delivery is more important – indeed, IMHO, far more important. A brief preview: I think that Continuous Delivery is the technique that takes us into the Third Age of Software Development – with effects that could be even more profound than the advent of the Second Age.

To Eat the World, First Add Spices

A very brief and cursory summary of Andreessen’s argument is that companies selling software are not only dominant in the computer industry, but have disrupted (Amazon for books, Netflix for movies) or are “poised to disrupt” (retail marketing, telecom, recruiting, financial services, health care, education, national defense) major additional industries. The result will be a large proportion of the world’s industries and economy in which software companies will be dominant – thus, software will “eat the world”.

My quarrel with this argument is not that it overstates the case, but rather that it understates it. What Andreessen omits, I assume in the interest of conciseness, is that in far more industries and companies, software now is the competitive differentiator between companies, even though the companies’ typical pedigree is primarily “hardware” or “services”, and the spending on software is a small percentage of overall costs.

My favorite examples, of course, are the company providing audio and video for stadiums, which relies on software to synchronize the feeds to the attendees better than the competition, and Boeing, which would not be able to provide the differentiating cost-saving use of composites were it not for the massive amounts of software code coordinating and monitoring the components of the Dreamliner. In health care, handling the medical health record and support for the new concept of the medical home requires differentiating software to glue together existing tools and health care specialists, and allow them to offer new types of services. It isn’t just the solo software company that is eating the world; it’s the spice of software differentiation in the mostly non-software businesses that is eating these companies from within.

So Andreessen’s article signals a growing awareness in the global economy of the importance not just of computing, but of software specifically, and not just as a separate element of the economy, but as a pervasive and thoroughly embedded aspect of the economy whose use is critical to global economic success or failure. How far we have come, indeed, from the point 30-odd years ago when I suggested to Prof. Thurow of MIT that software competitive advantage would allow the US to triumph over the Japanese economic onslaught, and he quite reasonably laughed the idea to scorn.

I do have one nit to pick with Andreessen. He trots out the assertion, which I have heard many, many times in the past 35 years, that companies just can’t find the programming skills they need – and this time, he throws in marketing as well. Every time people have said this, even at the height of the Internet boom, I have observed that a large body of programmers trained in a previous generation of the technology is being overlooked. Passing those programmers over may sound sensible; but, as anyone should know by now, it is part of the DNA of any decent programmer to constantly acquire new knowledge within and beyond a skill set – good programmers are typically good programmers in any arena, from Web-site design to cloud “program virtualization”.
Meanwhile, as late as a year ago I was hearing about the college pipeline of new programmers being choked off because students saw no future jobs (except in mainframes!). And I can attest that available, technically savvy marketers in the Boston area – a well-known breeding ground – are still thick on the ground, while prospective employers there continue to insist that program managers linking marketing and programming have at least five years of experience in that particular job. It just won’t wash, Mr. Andreessen. The fault, as is often the case, is probably not mainly in the lack of labor, but in companies’ ideological blinders.

The Idea of Continuous Delivery

Continuous Delivery, as Thoughtworks presents it, aims to develop, upgrade, and evolve software by constant, incremental bug fixes, changes, and addition of features. The example cited is that of Flickr, the photo sharing site, which is using Continuous Delivery to change its production web site at the rate of ten or more changes per day. Continuous Delivery achieves this rate not only by overlapping development of these changes, but also by modularizing them in small chunks that still “add value” to the end user and by shortening the process from idea to deployment to less than a day in many cases.
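The mechanics behind that ten-deploys-a-day rate can be sketched in a few lines: every small change runs through the same automated, gated sequence of checks, so that deploying becomes cheap and routine rather than a ceremony. This is only an illustrative sketch – the stage names and checks are my own invention, not Thoughtworks’ or Flickr’s actual pipeline.

```python
# Minimal sketch of an automated deployment pipeline: each small change
# passes through the same gated stages, and a failure at any stage stops
# the deploy. Stage names and checks are hypothetical illustrations.

def run_pipeline(change, stages):
    """Run a change through each stage in order; stop at the first failure."""
    for name, check in stages:
        if not check(change):
            return f"rejected at {name}"
    return "deployed"

# Each stage is a (name, predicate) pair over a change's recorded results.
STAGES = [
    ("unit tests", lambda c: c.get("tests_pass", False)),
    ("integration", lambda c: c.get("integrates", False)),
    ("smoke test", lambda c: c.get("smoke_ok", False)),
]

print(run_pipeline({"tests_pass": True, "integrates": True, "smoke_ok": True}, STAGES))
# deployed
print(run_pipeline({"tests_pass": True}, STAGES))
# rejected at integration
```

The point of the sketch is that the per-change cost is nearly zero once the gates are automated, which is what makes many small daily changes cheaper in aggregate than a few large releases.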

Continuous Delivery, therefore, is a logical end point of the whole idea of agile development – and, indeed, agile development processes are the way that Thoughtworks and Flickr choose to achieve this end point. Close, constant interaction with customers/end users is in there; so is the idea of changing directions rapidly, either within each feature’s development process or by a follow-on short process that modifies the original. Operations and development, as well as testing and development, are far more intertwined. The shortness of the process allows such efficiencies as “trunk-based development”, in which the process specifically forbids multi-person parallel development “branches” and thus avoids their inevitable communication and collaboration time, which in a short process turns out to be greater than the time saved by parallelization.
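A common companion to trunk-based development – not described in the article, but widely used to make it safe – is the feature flag: unfinished work merges to the trunk daily but stays invisible to users until its flag is switched on. A hedged sketch, with hypothetical flag names and an in-memory flag store standing in for real configuration:

```python
# Feature flags let incomplete features live on the trunk without being
# exposed, so everyone can commit to one branch. The flags and flag store
# here are illustrative only.

FLAGS = {"new_photo_viewer": False, "fast_upload": True}

def is_enabled(flag):
    """Look up a flag; unknown flags default to off."""
    return FLAGS.get(flag, False)

def render_page():
    parts = ["header"]
    if is_enabled("new_photo_viewer"):   # still in development on trunk
        parts.append("new viewer")
    else:
        parts.append("old viewer")
    if is_enabled("fast_upload"):        # finished, switched on in production
        parts.append("fast upload")
    return parts

print(render_page())
# ['header', 'old viewer', 'fast upload']
```

Flipping a flag, rather than merging a long-lived branch, becomes the act of release – which is precisely what removes the communication and collaboration overhead of parallel branches that the process forbids.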

I am scanting the many fascinating details of Continuous Delivery in the real world in order to focus on my main point: it works. In fact, as agile development theory predicts, it appears to work better than even other agile approaches. Specifically, Thoughtworks and Flickr report:

• A much closer fit between user needs and the product over time, because it is being constantly refreshed, with end-user input an integral part of the process;
• Less risk, because the small changes, introduced into production through a process that automatically minimizes risk and involves operations, decrease both the number of operational bugs and the time to identify and fix each; and
• Lower development costs per value added, as loosely measured by the percentage of developed features used frequently as opposed to rarely or never.

The Ages of Software Development

I did my programming time during the First Age of software development, although I didn’t recognize it as such at the time. The dominant idea then was that software development was essentially a bottleneck. While hardware and network performance and price-performance moved briskly ahead according to Moore’s Law, software developer productivity on average barely budged.

That mattered because in order to get actual solutions, end users had to stand in line and wait. They stood in line in front of the data center, while IT took as long as two years to generate requested new reports. They stood in line in front of the relatively few packaged application and infrastructure software vendors, while new features took 9 months to arrive and user wish lists took 1 ½ years to satisfy. They stood in line in front of those places because there was nowhere else to turn, or so it seemed.

I noted the death of the First Age and the arrival of the Second Age in the late ‘90s, when a major Internet vendor (Amazon, iirc) came in and told me that more than half of their software was less than six months old. Almost overnight, it seemed, the old bottlenecks vanished. It wasn’t that IT and vendor schedules were superseded; it was that companies began to recognize that software could be “disposable.” Lots of smaller-scale software could be developed independent of existing software, and its aggregate, made available across the Internet by communities, as well as mechanisms such as open source, meant that bunches of outside software could be added quickly to fill a user need. It was messy, and unpredictable, and much of the new software was “disposable” wasted effort; but it worked. I haven’t heard a serious end user complaint about 2-year IT bottlenecks for almost a decade.

Andreessen’s “software eating the world”, I believe, is a straightforward result of that Second Age. It isn’t just that the new development approach delivers needed software faster, letting software take on complex new tasks sooner than physical products alone could – say, constantly tuning a car’s fuel mixture in software rather than waiting for an equivalent set of valves to be designed and tested. It is also that the exponential leap in the amount of resulting software means that, for any given product or feature, software to do the job is becoming easier to find than people or machines.
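The fuel-mixture example can be made concrete as a tiny feedback loop: software corrects the mixture toward a target every cycle, a kind of continuous adjustment that a fixed mechanical valve cannot make. The numbers and gain below are made up for illustration.

```python
# Toy sketch of software-based engine tuning: a proportional controller
# nudges the fuel-air mixture toward a target ratio on every cycle.
# Gain, cycle count, and values are illustrative only.

def tune_mixture(current, target, gain=0.5, cycles=10):
    """Repeatedly correct the mixture by a fraction of the remaining error."""
    for _ in range(cycles):
        current += gain * (target - current)
    return current

mixture = tune_mixture(current=12.0, target=14.7)
print(round(mixture, 2))
# 14.7 -- converges toward the target ratio
```

Shipping a new control strategy is then a software change, deployable in days, where the mechanical equivalent would mean retooling a physical part.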

However, it appears clear that even in the Second Age, software as well as physical products retain their essential characteristics. Software development fits neatly into new product development. New products and services still operate primarily through more or less lengthy periods of development of successive versions. Software may take over greater and greater chunks of the world, and add spice to it; but it’s still the same world.

The real-world success of Continuous Delivery, I assert, signals a Third Age, in which software development is not only fast in aggregate, but also fast in unitary terms – so fast that the upgrade of a single application through feature additions and changes seems “continuous”. Because of the Second Age, software is now pervasive in products and services. Add the new capabilities, and all software-infused products/services – which is to say, all products/services – start changing constantly, to the point where we come to view continuous product change as natural. Products and services that are fundamentally dynamic, not successions of static versions, represent a massive change to the global economy.

But it goes even further. These Continuous-Delivery product changes also more closely track changes in end user needs. They also increase the chances of success of introductions of the “new, new thing” in technology that are vital to a thriving, growing global economy, because those introductions are based on an understanding of end user needs at this precise moment in time, not two years ago. According to my definition of agility – rapid, effective reactive and proactive changes – they make products and services truly agile. The new world of Continuous Delivery is not just an almost completely dynamic world. It is an almost Agile World. The only un-agile parts are the rest of the company processes besides software development that continue, behind the scenes of rapidly changing products, to patch up fundamentally un-agile approaches in the same old ways.

And so, I admit, I think Thoughtworks’ news is more important than Andreessen’s. I think that the Third Age of Software Development is more important than the Second Age. I think that changing to an Agile World of products and services is far, far more important than the profound but comparatively superficial changes that the software infused in products and services via the Second Age has caused and will cause.

Final Thoughts and Caveats

Having heralded the advent of a Third Age of Software Development and an Agile World, I must caution that I don’t believe that it will fully arrive any time in the next 2-3 years, and perhaps not for a decade, just as it took the Second Age more than a decade to reach the point of Andreessen’s pronouncements. There is an enormous amount of momentum in the existing system. It took agile development almost a decade, by my count, to reach the critical mass and experience-driven “best practices” it has achieved that made Continuous Delivery even possible to try out. It seems logical that a similar time period will be required to “crowd out” other agile new-product development processes and supersede yet more non-agile ones.

I should also add that, as it stands, it seems to me that Continuous Delivery has a flaw that needs to be worked on, although it does not detract from its superiority as a process. Continuous Delivery encourages breaking down features and changes into smaller chunks that reflect shorter-term thinking. This causes two sub-optimal tendencies: features that “look less far ahead”, and features that are less well integrated. To encapsulate this argument: the virtue of a Steve Jobs is that he has been able to see further ahead of where his customers were, and that he has been able to integrate all the features of a new product together exceptionally well, in the service of one, focused vision rather than many, diffused visions.

Continuous Delivery, as it stands, tilts the balance toward more present-focused features, integrated much as a politician aggregates the present views of his or her constituents. Somehow, Continuous Delivery needs to re-infuse the new-product development process with the ability to be guided at appropriate times by the strong views of an individual “design star” like Steve Jobs – else the Agile World will lose some of its ability to deliver the “new, new thing.” And it would be the ultimate irony if agile development, which aims to release programmers from the stultifying, counter-productive constraints of corporate development, wound up drowning their voices in a sea of other (end-user) views.

But who cares? In this latest of bad-news seasons, it’s really nice to look at something that is fundamentally, unreservedly good. And the Agile World brought about by Thoughtworks’ Continuous Delivery’s Third Age of software development, I wholeheartedly believe, is Good News.