Wednesday, December 28, 2011

Methane Talk-Down: Partial

One of the true joys of learning about science – as opposed to, say, economics – is that eventually you can usually get to a scientific summary that clears up many of the distortions that popular reports create. In the midst of wading through yet another cherry-picked-evidence blog post (this one on methane) by Andrew Revkin of the NY Times, it suddenly occurred to me that I should check out Justin Gillis of the Times, whose posts have been praised iirc by Joe Romm of climateprogress fame. Gillis’ reporting still seemed a little superficial to me, but he had a link to a 2006 scientific summary of the research about methane and climate change, an oldie but goodie where I found the answers to many of my questions. My recent blog post on methane laid out the doomsday scenario that I fear; Chapter 6 of this summary, as Rachel Maddow would say, talked me down – but only partially.

Because the broad scenario that I laid out is not drastically affected by the information in the summary, it is easier to lay out the summary’s picture of methane and then, at the end, note how this may affect my scenario. I will focus on methane clathrates, since the changes to everything else are less substantial. And, of course, I am sure that more misconceptions remain – because a summary article of ongoing research can’t be expected to answer everything. Anyway, let’s begin.

Methane Clathrates, Water Methane

Last time, I presented a very summarized picture of natural-source methane as coming from three sources: methane clathrates under the sea, permafrost on land at high latitudes, and peat bogs next to the permafrost or in the tropics. It turns out that the picture is a bit more complicated, and the complications matter.

To start with, methane clathrates form and remain stable in sea-floor sediment only under particular combinations of sea temperature and pressure from the sea above, which limit them to sea floors somewhere between 200 meters and 1000 meters below sea level. In other words, the water has to be near zero C (at those depths seawater hovers just above freezing), and the sea floor has to be deeper than 200 meters and shallower than 1000 meters below sea level. Between those two limits, the deeper the sea floor, the thicker the zone in the sediment where clathrate can exist. Guesstimates for a typical clathrate “stability zone” thickness might be 250-300 meters. Btw, a confusing part of the scientific lingo apparently refers to Arctic clathrates as “subsea permafrost.”
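
A minimal sketch of that window in code. The 200/1000 meter limits and the 250-300 meter “typical” thickness are the summary’s rough figures; the linear growth of zone thickness with depth is my own toy assumption, purely to illustrate “the deeper the sea floor, the thicker the zone”:

```python
# Illustrative sketch of the clathrate stability window described above.
# The 200 m and 1000 m limits and the 250-300 m "typical" thickness are the
# text's rough figures; linear growth with depth is my own toy assumption.
def stability_zone_thickness_m(sea_floor_depth_m: float) -> float:
    """Guess the thickness of the sediment zone where clathrate stays stable."""
    if not 200 <= sea_floor_depth_m <= 1000:
        return 0.0   # too shallow (too warm / too little pressure) or too deep
    fraction = (sea_floor_depth_m - 200) / (1000 - 200)
    return 275.0 * fraction   # midpoint of the 250-300 m guess, at the deep end

for depth in (150, 300, 600, 1000):
    print(f"{depth} m sea floor -> ~{stability_zone_thickness_m(depth):.0f} m zone")
```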

What happens to melt the clathrates? Either the water next to the sea floor warms up, or warmer temps further up the sea slope trigger the equivalent of a mudslide on the sea floor that basically slices through the clathrate, stirs up everything above the slice as a cloud of sediment, and melts all the clathrate above the new sea floor. That is what they think happened at Storegga, a site near Norway where there is a “crater” 30 km across that may have released a gigaton of carbon (as methane – CH4), all at once.

Now here’s an odd part. We are used to thinking of gas coming up to the surface in bubbles and releasing itself into the atmosphere when the bubble pops. Not so with clathrate methane – according to the models, most bubbles pop long before they rise the 200 meters or more to the surface. Instead, one of several things happens: the methane rises to the surface dissolved in the water (it is “buoyant”) and then releases into the atmosphere; or it is eaten by methane-eating bacteria; or it converts (typically to carbon dioxide) en route. Initial indications are that a small percentage of melted clathrate rises to the surface with the water and is released into the atmosphere as methane, effectively immediately; a large percentage is eaten by bacteria, which convert it into carbon dioxide that the sea surface then releases to the atmosphere as oceanic and atmospheric CO2 equalize; and a medium-sized percentage converts to carbon dioxide without going through the bacteria, to be released into the atmosphere as carbon dioxide in the same way.
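
Here is that three-way split as a mass balance. The summary gives only qualitative sizes (“small”, “large”, “medium-sized”), so the fractions below are invented placeholders, not data – a sketch of the bookkeeping, nothing more:

```python
# Hypothetical mass balance for melted clathrate methane, following the three
# fates above. The summary gives only "small", "large", and "medium-sized"
# percentages, so these fractions are invented placeholders, not data.
FRACTIONS = {
    "vents as methane (dissolved in rising water)": 0.10,  # "small"
    "eaten by bacteria, vents as CO2":              0.60,  # "large"
    "converts to CO2 abiotically, vents as CO2":    0.30,  # "medium-sized"
}
assert abs(sum(FRACTIONS.values()) - 1.0) < 1e-9

def fate_of_melt(methane_gt: float) -> dict:
    """Split a methane release (gigatons) among the three fates."""
    return {fate: methane_gt * f for fate, f in FRACTIONS.items()}

print(fate_of_melt(1.0))   # a Storegga-sized, ~1 Gt release
```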

The methane clathrates in the Arctic seas contain perhaps 50%-80% of all clathrates. They are also by far the most likely to be affected by global warming, since the swings in water temperature – from increased sunlight on newly ice-free water, and from warmer sun-warmed currents arriving from the south – are widest there.

Other Methane Sources

The picture of land-based methane sources also needs amendment. It appears that much of the methane stored in permafrost is stored in peat within the permafrost – which can extend as far down as 200 meters or so. Meanwhile, wetlands at whatever latitude are generators of methane, the Amazon as much as Ireland. When the permafrost melts, the water plus peat turns into a bog that (under global warming) is maintained by increased precipitation: that’s what often drives increased methane production.

Here, the translation to the atmosphere is more clear. Melting of permafrost releases any methane locked in the ice (but not in clathrates), and also creates new constantly-emitting sources of methane. Likewise, wetlands inject methane directly into the atmosphere.

Now we come to the tricky part. We are accustomed to thinking of methane in the atmosphere as separate from carbon dioxide. Not so. What often happens to methane in the atmosphere is that it “oxidizes” – typically, a hydroxyl (OH) radical breaks off one of the hydrogen atoms to form H2O (water), while the rest becomes a methyl group (CH3) that eventually breaks down to carbon dioxide. In other words, much of the methane tossed into the atmosphere actually winds up as the major greenhouse gas, and stays up there for 150-250 years.
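
In conventional notation, that first step is the standard hydroxyl-radical sink; the subsequent steps, which are fast, are abbreviated here:

```latex
\mathrm{CH_4 + OH^{\bullet} \longrightarrow CH_3^{\bullet} + H_2O}
\qquad\text{then}\qquad
\mathrm{CH_3^{\bullet} \longrightarrow \cdots \longrightarrow CO_2}
```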

What’s the Effect? Um …

OK, so now the scientist wants to figure out what the global-warming effect of unlocking all that methane is going to be. The problem is that we have two sources of comparison, and neither of them is great.

The first is to use what happens over the 10,000-20,000 years immediately after a Milankovitch-cycle minimum (a “glaciation”) as a model. Using that model, scientists have pretty well determined that in such times of rising global temperatures, the amount of methane in the atmosphere probably doesn’t vary by a heck of a lot, and its effect on global temps compared to atmospheric carbon is pretty minimal. Methane melt in general might have a role in things like sea-ice melting near Greenland, which has been shown to have surprisingly wide effects on global climate, but most of the good candidates for that type of melt (subsea, permafrost, wetlands) just don’t make a strong case for themselves.

The problem with this type of analysis is that it looks only at periods when most of the ice remains – because that’s what happens at the peak temps of a Milankovitch cycle. We have almost certainly moved above those peak temps in the last couple of decades, and so we are in much less charted waters. For a period much more comparable, you have to go back to the PETM – 55 million years ago.

OK, in the PETM, temps were 5-10 degrees C warmer than now. Increases in carbon in the atmosphere just don’t seem to be enough to justify those warmer temps. So for a while, there were theories floating around that methane was the complete reason for that kind of warming – no carbon needed. That would have been nice, since figuring out why carbon suddenly spiked in the first place, not to mention why the time period of this rapid warming was around 20,000 years as the latest research suggests, has been a headache. Bad news: there simply doesn’t seem to be a natural source of methane that comes near to explaining the whole temperature rise, not to mention keeping going for 20,000 years. So it looks like we have a choice between carbon emissions plus “unknown”, and carbon plus methane. Tentatively, the scientists are voting for carbon plus methane.

But the PETM isn’t great as a model, either. The problem there is that things happened slowly compared to today. If we say that the carbon atmospheric-concentration rise then happened over the course of 20,000 years, well, our carbon rise appears to be happening over 350 years – and it may very well double the rise of the PETM over the course of those 350 years. In other words, this is happening at least a hundred times faster. And, as we’ve seen in the case of carbon, that can mean that the positive follow-on effects happen well before the negative “stabilizer” effects. So, for example, don’t necessarily expect the magical munching methane sea bacteria to appear in the Arctic and save the day.

OK, so the models we have aren’t great. Can we at least use them for some guesstimates?

Preliminary Guesses

Well, the scientists have done the guessing for me. The key sentences I find in Chapter 6 say, more or less (with the usual caveats about my understanding), that four amounts are roughly equal: the atmospheric methane from natural sources pre-Industrial Revolution; the methane added from human sources since then; the methane likely to be added at some point by all natural sources except subsea methane; and the potential addition from subsea methane itself. In other words, in a worst-case scenario with 2006 models, at some point in the next 300 years we might expect atmospheric methane at four times its 1850 level.
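
In code, that chain of equalities reduces to trivial arithmetic. I take the pre-industrial natural amount as 1 unit; the equalities are the summary’s, the unit is mine:

```python
# The summary's chain of equalities, taking the pre-Industrial-Revolution
# natural amount as 1 unit (the equalities are the summary's; the unit is mine).
pre_industrial_natural = 1.0
human_added_since      = 1.0   # "equals the amount added from human sources"
future_non_subsea      = 1.0   # likely additions from natural, non-subsea sources
future_subsea          = 1.0   # potential additions from subsea methane

worst_case = (pre_industrial_natural + human_added_since
              + future_non_subsea + future_subsea)
print(worst_case, "times the 1850 level")   # -> 4.0
```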

How much added heating would that translate to? Again, reading between the lines, perhaps 1 degree C from the methane alone. However, if we take the PETM as a model, it might be more like 2 degrees C. And that’s the maximum, so we can all semi-relax, right?

Well, no. You see, there are two problems. First of all, there’s the fact that much of that methane is going to convert to carbon dioxide while it’s up there. Second, there’s the fact that the more methane gets into the atmosphere from now on, the longer it sits there. The 2006 estimate was that methane hangs in the atmosphere an average of 9 years; but at twice the concentration, I think we can count on it sitting up there for 12-18 years on average. So those two things should add another ½-1 degree C to the “additive effect” of methane in the atmosphere.
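
A sketch of why the lifetime matters: in a simple one-box picture, the methane standing in the atmosphere at any moment is roughly the emission rate times the average lifetime, so stretching 9 years to 12-18 years at the same emissions raises the standing amount by a third to a factor of two:

```python
# One-box sketch: at rough steady state, the methane standing in the
# atmosphere is (emission rate) x (average lifetime). The 9-year lifetime
# is the 2006 figure; 12-18 years is the guess in the text.
def steady_state_burden(emission_rate: float, lifetime_years: float) -> float:
    return emission_rate * lifetime_years

base = steady_state_burden(1.0, 9)
for lifetime in (12, 18):
    ratio = steady_state_burden(1.0, lifetime) / base
    print(f"lifetime {lifetime} yr -> standing methane x{ratio:.2f}")
# -> x1.33 at 12 years, x2.00 at 18 years, with no change in emissions
```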

And then, of course, there’s the question of methane that converts to carbon dioxide before it gets into the atmosphere. Here, the summary didn’t really have much to add in the long term. Even by their time-frame estimates, all that methane-to-carbon-dioxide, even if it doesn’t get there in the next 100 years, will almost certainly show up in the next 1000 years. So it’s a more extreme version of my “pay me now or pay me later” scenario – except that we can at least hope that by the time the methane-turned-carbon-dioxide shows up, we will have managed to cut down on our human-caused carbon emissions and the amount in the atmosphere will have begun to go downhill significantly.

All in all, not great, but not as bad as my full doomsday scenario. Instead of 6 degrees C from methane-turned-CO2, perhaps 2-3, although that increase will stick around for maybe twice as long; instead of 7-9 degrees C from methane-stayed-methane over the next 160 years, perhaps 1-2 degrees over the next 300-500 years. And it will happen more gradually, so it won’t be really noticeable, probably, for the next 30-40 years. Except …

I’m Not All the Way Down

Read carefully the interview with the head of the survey of methane releases in the Siberian Sea. He states, effectively, that the diameter of the “craters” I referred to earlier had increased by up to 100 times this year, and that this methane was “bubbling to the surface.” If you look at the 2006 summary, neither is supposed to happen. Very little methane should rise to the surface as bubbles, as noted above, and the methane hydrates should not suddenly take a big jump in melting: a 5 degrees C increase in water temps (a jump of 2.1 degrees C has been observed since 1984) should cause perhaps 1 meter’s worth or less of methane hydrate to melt over the next 40-80 years. And it can’t be explained as mudslides, since it has happened in quite a few places.

So why would scientists’ models be wrong? Well, in the first place, they assume that relative sea-water warming will occur only in a short window in the summer, when the ice is melted and the sun warms the top of the water. However, the surface ice is now thinner in winter than before, and the water carried by currents from the south is warmer. Clearly, it’s very possible that scientists are underestimating the amount of melting going on during the rest of the year. Add this to the known problems with the original model developed in 1995, and you have some, but maybe not all, of the increase in clathrate atmospheric methane release explained.

The second flaw may be the modeled prediction that very little methane melt will rise to the surface as bubbles. Why might this model be wrong? I don’t have a clear answer from the summary – it could be that the turbulence of the water keeps the bubble from popping, although that seems unlikely. One thing seems clear: the magical munching methane bacteria are nowhere to be seen.

And the third flaw, which also affects the land methane emissions rate, is a major underestimate in the models of the rate of global warming. The models implicitly assume that the Arctic sea ice won’t melt entirely in summer before somewhere between 2030 and 2100, and year-round perhaps never – that one seems clearly wrong. Therefore, they underestimate the speed of the follow-on effects, including much faster warming of water within 100 meters of the surface, which would inevitably mean much faster warming at the 200-500 meter level – sorry, that’s not “deep ocean.”

In other words, what the latest information is telling us is that the semi-comforting story I just gave you is almost certainly an underestimate. The “true” effect of methane is somewhere between my doomsday estimate and the one above – except that the roles of methane-stayed-methane and methane-turned-carbon-dioxide have switched, because we now know that much of that atmospheric methane is going to change to carbon dioxide while it’s up there.

I find the logic of the summary convincing as well as semi-comforting; so if I had to guess, I would say that the net effect is somewhere between 3-5 degrees C, mainly in carbon dioxide, and spiking over the next 40-150 years before leveling off. But that’s a complete guess. Until I understand just how the models went wrong, I’m only partially talked down from my panic. So here’s to the New Year: It will be a season of hope, it will be a season of despair, it will be a season of enormous impatience until the first scientific explanations come out.

Wednesday, December 21, 2011

Methane: The Final Shoe

Recently, Neven’s blog on Arctic sea ice (neven1.typepad.com) featured a new post on recent scientific observations of methane – observations that Neven said made him “sick to my stomach.” I am not as easily panicked – I reserved my stomach sickness for a recent British report about how most life in the oceans, except jellyfish, will be dead within the century unless we do far more than we are doing. However, I do understand his reaction. Effectively, these reports indicate that the dreaded “final shoe” of global warming, the one reinforcing side-effect of global warming that we hoped against hope would not happen, appears to be partially beginning to drop. Moreover, it seems clear to me that most if not all folks, even those who are aware of methane’s role in climate, are underestimating its potential impact in causing additional and more rapid murder, disaster, and then catastrophe.

So here is my understanding of methane’s role in our tragedy – for yes, some small tragedy is unavoidable now, even without methane’s impact and even if we do everything we can from now on. I am sure that as an amateur I am missing or misrepresenting some points. I am also pretty sure that most amateur commentators are doing far worse. If I were you, I would not take comfort from any of this post’s stumbles or missteps.

How It Works

Methane is CH4, or a carbon atom with four hydrogen atoms. It is, imho, the second most important “greenhouse gas”, and to understand its effects it is best to compare it with carbon tossed into the atmosphere and combined with oxygen to form CO2, or carbon dioxide.

Here’s how carbon emissions that form carbon dioxide work (excess detail stripped away). The carbon combines with oxygen – mostly during combustion itself – to form carbon dioxide. Most of that carbon dioxide sits in the atmosphere for perhaps 150 years, and most of the carbon has fallen to earth again within 250 years. While it is up there, doubling the amount of carbon (in CO2 form) in the atmosphere adds about 3 degrees C to global temperatures, and double that in the far north and south, especially in the winter. The “normal” concentration of carbon in the atmosphere is 250-280 ppm (parts per million), and we are presently somewhere around 395 ppm.
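
For the arithmetic-minded: the standard rule of thumb is that warming grows with the logarithm of concentration – a fixed increment per doubling. A minimal sketch using the figures above (3 degrees C per doubling, 280 ppm pre-industrial baseline):

```python
# The standard logarithmic rule of thumb: each doubling of CO2 adds a
# fixed temperature increment (the ~3 degrees C per doubling cited above).
from math import log2

SENSITIVITY_C_PER_DOUBLING = 3.0   # the text's figure
PREINDUSTRIAL_PPM = 280.0          # top of the 250-280 ppm range

def warming_c(ppm: float) -> float:
    """Equilibrium warming vs. pre-industrial, per the log2 rule of thumb."""
    return SENSITIVITY_C_PER_DOUBLING * log2(ppm / PREINDUSTRIAL_PPM)

print(f"{warming_c(395):.1f} C")   # today's ~395 ppm -> ~1.5 C
print(f"{warming_c(560):.1f} C")   # a full doubling  -> 3.0 C
```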

Methane emissions work in a similar fashion, but with some important differences. In the first place, methane is typically stored in the earth and emitted as a gas – i.e., not as carbon but as CH4. Once it gets into the air, it can either oxidize to form carbon dioxide – hence increasing that greenhouse gas – or remain as methane. If it stays methane, most of it stays in the atmosphere for 10 years, and most is gone after 15 years. So what’s the problem?

Well, the problem is that while methane is in the atmosphere it has up to 70 times the impact on global warming of comparable amounts of carbon dioxide. I find it useful here to imagine the old image of keeping a ping-pong ball in the air with jets blowing from beneath. If I toss a ball of carbon dioxide in the air, it stays up there for 200 years. With methane, I have to keep blowing like crazy – or, if you like, adding the same number of new balls of methane every 12 years. But if I somehow kept an amount of methane in the atmosphere comparable to doubled carbon dioxide, then naive scaling says I would drive up temps not by 3 degrees C, but by something like 100 degrees C. No, we haven’t gotten to the worrisome part yet.

In other words, the effects of increased amounts of atmospheric methane, piled on top of increasing carbon dioxide from other sources, fall somewhere in between two extremes. At one extreme, all the methane turns into carbon dioxide, and hangs there for 150-250 years. As we will see, that means that carbon dioxide may double or quadruple compared to global warming without intervention by methane, for an additional 3-6 degree C global warming. At the other extreme, all the methane stays methane. As we will see, a reasonable guess for its effects then is a 12-18 degree C additional increase starting somewhere around 20 years from now and going for 120 years, and then fading out. To put it bluntly: we roast more now (stays methane) or we fry more later (changes to carbon dioxide).

The Real Worry

So where are these new methane emissions coming from? Mainly, there are two potential types of source. The first is human-caused activity: just as we emit more carbon by burning fossil fuels as our population and industry grow, so emissions of methane from industry, personal tasks, and increasing populations of farm animals like cows increase accordingly. The second is methane frozen in the earth between periods of unusual global warming. That methane lies in three main places:

i. The shallow Arctic seas, especially the shallow Siberian sea north of Russia, where it is frozen not in ice but in a comparable substance known as a clathrate;

ii. The permafrost of Siberia and northern Canada/Alaska, where it is locked in frozen ground tens of meters deep on top of unfrozen earth;

iii. The peat bogs further south, where it is often mixed with water.

The methane emissions from our first type of source have caused methane in the atmosphere to shoot up fairly steadily over the last 100-odd years, so that methane is now a significant contributor to today’s global warming. It would be a really excellent idea to cut down on it. However, to some extent that increase has apparently leveled off. No, what really scares us over the next 200 years is the second set of sources.

The last time the globe apparently became (not when it was, when it became) this warm or warmer – maybe 55 million years ago, in what is called the Paleocene-Eocene Thermal Maximum, or PETM for short – it seems very likely that methane from the second source type was indeed emitted in quantity as methane, as that is a very good explanation for why temps actually went a little higher than the amount of carbon dioxide in the air would seem to dictate. However, that should not give us comfort. Ken Caldeira in Nature notes that methane of this type was stored in much smaller quantities then. That would mean that methane from sources i-iii emitted now would either (a) have similar effects over a much longer period of time or (b) would have much greater effects over the same period of time. So which is it, (a) or (b)?

Well, one obvious factor in deciding between (a) and (b) is how fast our global temps are going up already, before we start emitting i-iii. Once we start that faster rate of emissions, of course, that will speed up global warming even further, so we can bet that a faster initial rate of global-temp increase will keep methane emissions higher right throughout the process. And every available bit of evidence points to the fact that we are warming already much faster than in the PETM – because we humans are emitting carbon stored in the earth as “fossil fuels” (really, mostly decayed vegetable matter) much faster.

All right, so it’s faster. Is it fast enough to worry about? Here we have to consider sources i, ii, and iii separately. Methane clathrates are apparently a big honking source of methane, according to scientific estimates. No one is entirely sure about how fast these clathrates will “melt” and methane bubbles will rise to the surface, once they start melting. However, they are sitting in shallow seas and they start right at the surface of the sea-floor. We know what it takes: warming of the water above the clathrates. And that has been happening, as the Arctic sea ice in that area at the top of the water melts in the summer where it hasn’t before, the sun beats down on the newly-exposed water to heat it, and warmer water from the south moves in.

Now let’s consider ii. Joe Romm at www.thinkprogress.com has an extended post focusing on this source. The net of what he has to say is: Methane stored in permafrost is comparable in amount to methane stored in clathrates – big and honking. Permafrost melting is already underway at a brisk pace. Projections that are unrealistically conservative about how fast global warming will occur project that methane/carbon dioxide emissions from that permafrost will reach a high level about 20 years from now and continue at that level or somewhat higher for 120 years, at which point most of the permafrost will be gone. Make your own adjustments – however you adjust, it’s going to reach a higher level than that sooner, and stay there for a shorter period of time. Is that enough, by itself, to worry about? You betcha.

Then there’s iii – peat bogs and wetlands, even in the tropics. It’s not clear that there is as much methane there, or that it will be released as quickly. Remember, the further south (north, for the Southern Hemisphere) you go, the slower the rate of global warming. But it’s very clear that it’s happening. That was what the Russian summer fires were all about: global warming led to warmer temperatures that dried up the peat bogs and they went up in smoke, releasing methane. My totally random guesstimate is that peat bog methane emissions will follow much the same trajectory as those in i and ii, and will therefore have ½ to ¼ the impact at any one time or overall of either i or ii.

Now let’s reach ahead and note that things get drastically worse if all of these emissions increases happen over the same period of time – somewhere around 2-2.5 times worse. Luckily, so far emissions source i has not yet kicked in. Scientists report that as of 2010, there were no atmospheric signs of unusual methane or CO2 from Arctic sea sources. Be careful. There’s a trick in that statement.

Doing the Math

Before we hurry on to our conclusion, let’s pause and see if we can nail down a little better what those effects are likely to be if all three sources of the second type fire off fast at the same time. In particular, let’s assume a scenario like that predicted for source ii, only this time with all three sources emitting like crazy.

One estimate has the amount of stored methane, converted to carbon dioxide, at about 7 times the amount in the atmosphere right now. Let’s assume that, starting 20 years from now, this emits about 1/3 of itself at a fairly steady rate over the course of the following 160 years. So, 2.3 times 400 ppm or 920 ppm is the amount added to the atmosphere by 2170, on top of the existing amount (400 ppm) and the amount estimated to be added to the atmosphere by 2100 under “business as usual” (900 ppm). We’re up to about 7-9 degrees C global warming somewhere between 2100 and 2150. Even if we cut our emissions to zero today (totally unrealistic), we’re up to 5-7 degrees Celsius.
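
Here’s that arithmetic spelled out, with the temperature step reusing the logarithmic per-doubling rule sketched above. All inputs are the rough figures from the text, so treat the output as back-of-envelope only:

```python
# The paragraph's arithmetic, spelled out. All inputs are the rough figures
# from the text; the temperature step reuses the log2 per-doubling rule
# sketched earlier (3 C per doubling, 280 ppm pre-industrial baseline).
from math import log2

current_ppm       = 400.0
stored_ch4_as_co2 = 7 * current_ppm        # "about 7 times the amount now"
methane_addition  = stored_ch4_as_co2 / 3  # one-third emitted over 160 years
print(f"{methane_addition:.0f} ppm")       # -> 933, i.e. the text's ~920

bau_addition_2100 = 900.0                  # "business as usual" addition by 2100
total_ppm = current_ppm + bau_addition_2100 + methane_addition

print(f"{3.0 * log2(total_ppm / 280.0):.1f} C")  # -> ~9.0, top of the 7-9 range
```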

If you want to be gloomy, you can assume almost all the methane, turned to carbon dioxide, vents in the same time period. Add on another 1800 ppm, and then add on another 500 ppm for continuing “business as usual” between 2100 and 2170. Now we’re talking 12 degrees C.

OK, same thing, but it all stays methane. If you remember, this is over 160 years, but methane falls from the sky after about 12 years, so at any one time we’re talking about 12/160 = 1/13.33 of the equivalent amount of carbon dioxide on an ongoing basis over those 160 years. However, that methane has perhaps 33 times the effect on temps while it’s up there. So starting 20 years from now, there is an overall jump of about 8 degrees C on top of the effect from carbon dioxide noted above – and that effect lasts for 160-odd years. That baked-in non-methane carbon dioxide effect is going to be around 3 degrees C under the most optimistic of assumptions, and could go as high as 9 degrees C. And remember, if you want to be gloomy, tack on an additional 6 degrees C from emitting almost all the methane.
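
And the stays-methane jump, reproduced the same way. The 12/160 duty cycle and the 33x potency are the text’s figures; the log rule is the same sketch as above:

```python
# Rough reproduction of the "about 8 degrees C" jump for the stays-methane
# case, using the text's inputs and the same log2 rule as above.
from math import log2

co2_equiv_total = 2.33 * 400            # the same ~930 ppm CO2-equivalent as above
standing_stock  = co2_equiv_total * 12 / 160   # fraction aloft at any one time
potency         = 33                    # methane vs. CO2 while aloft (text's figure)

effective_ppm = standing_stock * potency       # ~2300 ppm CO2-equivalent
print(f"{3.0 * log2((400 + effective_ppm) / 400):.1f} C")   # -> ~8.3
```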

So here’s your two extremes. If you’re lucky, it’ll all go up as methane, and fall right down again. Now we’re talking 11-23 degrees C global warming between 2030 and 2190, and we fall down to a nice comfortable 6 degrees C after that. If you’re unlucky, it’ll all go up as carbon dioxide, in which case we’ll see maybe 8 degrees C of global warming from 2050 to about 2330. By the way, initial estimates are that more of it will rise as carbon dioxide.

The best part of this analysis is that I left out other “positive forcings.” In particular, I left out the fact that all this warming is going to turn part of the ocean (the Arctic and Antarctic) and part of the land (all those nasty glaciers) darker, from white (snow) and off-white (ice) to dark brown and green (land) and dark blue (ocean). Darker colors store heat. I’m not sure how much warming effect that will have, but scientific estimates suggest indirectly (it’s included in some scientific estimates suggesting 3-6 degrees additional warming beyond that due to carbon dioxide in the atmosphere) that it’s likely to be at least an additional 1 degree C. Icing on the cake. Or not icing.

The Best Part

Now back to that trick. You noticed, didn’t you, that I said “as of 2010.” Well, the recent article cited by Neven said that a Russian scientist reported that in their annual sample of Siberian Sea methane emissions, which they had been doing for 20 years, for the first time ever they were seeing, not “funnels” of tens of meters across from which methane was bubbling up, but lots of funnels “more than a thousand meters across”. Do the math: that’s between 1000 and 10,000 times the rate they had ever seen before. He was very confident that results across much of the Siberian Sea would be similar. Is that enough to signal the start of Arctic sea methane emissions on the scale we’ve been talking about? How can it not be enough?
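
The math here is just area scaling: if the emission rate scales with funnel area, it goes as the square of the diameter. Reading “tens of meters” as 10-30 meters:

```python
# "Do the math": if the emission rate scales with funnel area, it goes as
# the square of the diameter. "Tens of meters" is read here as 10-30 m.
old_diameters_m = (10, 30)
new_diameter_m  = 1000          # "more than a thousand meters across"

for old in old_diameters_m:
    factor = (new_diameter_m / old) ** 2
    print(f"{old} m -> {new_diameter_m} m: x{factor:,.0f} per funnel")
# -> x10,000 from 10 m funnels, x1,111 from 30 m funnels: roughly the
#    1000-to-10,000-fold range cited above, per funnel, before counting
#    how many more funnels there are
```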

Now let’s add the usual caveats: wide variance inherent in the estimates, lack of confirming evidence in some areas, uncertainties in data collection, blah, blah. The scientist’s reaction to these is to minimize the impact by stating the most likely impact of which he or she can be certain. The realistic reaction is to ask what is the impact of median likelihood, with equal likelihood of a lesser or greater impact – and, as far as I can tell, that’s what I’ve given you.

And, by the way, don’t bother to object that present projections don’t show this. Guess what – most models don’t consider the impact of even one natural methane source behaving this way, and the rest consider (and only recently) just one (permafrost).

Like I said, I don’t get sick to my stomach about this – because I did my own guesstimates more than a year ago and got my puking done then. I’m still hoping that the methane shoe will drop more slowly; and also that I’ll win the lottery. Right now, the latter seems more likely. Happy holidays, all.

Tuesday, December 20, 2011

The Psychological Failure of Libertarianism

One of the theories of government that I have been considering lately is this: the long-term success or failure of a system of government is determined in many cases by how it treats its insane. Or, more precisely, how it avoids discarding people with psychological problems unnecessarily while dealing with people with psychological problems who potentially can gain power over other people.

Until the early 20th century, we did not even have the vocabulary to face this question. I remember one student of history who was quoted as saying that our reading of history before about the 12th century should be tempered by the fact that everyone, from Aristotle to Abelard, was absolutely nuts – that they led lives of trauma and were genetically predisposed to serious psychological illnesses. Yet no history before about 1920 that I can recall ever raised such a question.

Nevertheless, we can guess at some of the ways certain systems of government approached the matter. On the one hand, Charles VI of France was treated as intermittently mad while Henry V of England invaded, and so was George III of England (although we now think his illness may have been an inherited medical condition); while the unclassifiable were confined to madhouses such as Bedlam in unbelievable squalor that certainly did not improve their plight. On the other hand, people like Napoleon (one theory posits he had extreme ADHD or a Type A personality, since he was always hyperactive and slept little) and Alexander (at the very least, megalomania exacerbated by alcoholism) were rewarded with excessive power despite the destructive effects on society of over-accommodating their insanity – so that not only military dictatorship but also hereditary kingship tended to choke on its own psychological excesses.

Democracy (or, more properly, until the early 1900s, republicanism) as practiced in Britain and the United States did offer some “checks and balances” on power which served to limit the power of the insane whose insanity was undetectable: a General Sherman gave every indication of occasionally going off the deep end, but was never in a position of political power where he could really ruin things in the long run – except for his power over Native Americans.

However, with the popularization of Freud’s theories, the veil was ripped away from many of the clearly problematic psychologies of power, and it has never been possible again to accept with equanimity addiction, sadism, psychopathy, narcissism, and other conditions that are deeply serious when allied with power to govern others. In this light, while non-democratic societies that hide and deny psychological problems are clearly inferior, just as clearly our democracies have far to go in facing up to the psychological problems of those in power, and in treating the psychologically disabled in ways that ensure we don’t waste those who can contribute, rather than typecasting them or stuffing them in jails to be forgotten.

Rejecting the Deeper World

The effect of our new understanding of psychology has been subtle but pervasive. I see it most clearly in a sharp divide occurring about 1920 in literature. I remember picking up a stray “woman’s novel” once from around that time, and marveling at how its differences cast any novel from 1910 and before as flat, bland, and somehow shallow – the stick figure of the madwoman was revealed to have a past of tragedy that not only shaped itself into depression and alcoholism, but led, through painful examination of all the “unmentionables” repressed for the sake of survival, to a very nuanced hope. Contrast that with a Samuel Butler, a G.A. Henty, or even a Mark Twain!

At the same time, however, it has led Western society, at least, into an uncomfortable and apparently unending examination of the psychological characteristics of our leaders. We have learned, for example, that our habit of demanding fidelity to our wishes from our male legislators while subjecting them to incredible amounts of campaign contempt has led to amazing numbers of them seeking unconditional adoration from groupies, and frequenting whorehouses where they alternately dominate and are spanked. We have in some cases learned to accommodate leaders who have been treated for depression or who smoked pot when they were young, perhaps because we have begun to realize just how understandable and relatively unimportant such traits are in a leader.

The psychological viewpoint has also affected how we view major “new” theories of the government from the past. Communism – to be clearly distinguished from socialism, which seeks to balance a greater “insurance” role for government with continuing democracy – has fallen short in practice, we now assert, not only for its other sins, but also because its secrecy and naïve view of human psychology (“from each according to his abilities, to each according to his needs”, to misparaphrase Marx) leads more often to pathologically paranoid leaders and the depression and alcoholism of the command-and-control economy. So does Fascism. A capitalism-driven government – or, if you prefer, an oligarchy, not of the landed but of the moneyed rich – falters as a theory when we consider the narcissism and Type A obsessive-compulsive behavior of its leaders, the so-called “captains of industry and finance,” and its failure to face the highly irrational behavior of its participants that leads to “bubbles”, “imperfect markets”, “Ponzi schemes”, and above all to “doubling down on failure.”

And then there is libertarianism – the only theory, in my opinion, that can be said to have gained currency after the arrival of psychology on the scene. What is striking about libertarianism is that, superficially at least, it takes no notice of psychology’s insights; in fact, it acts as if they don’t exist. When previous theories were developed, ignoring the deeper world of psychology’s insights was understandable, since those insights had not yet been developed; in libertarianism, it amounts to an – unconscious! – rejection of those insights. What’s going on? What does it mean for libertarianism in practice?

Libertarianism in Theoretical Practice

I have been hearing about libertarianism since the early 1980s, when it was all the rage in the political discussion groups on the nascent Internet – in fact, the incessant posts about how “libertarianism solves this!” were early markers of today’s trolls who crowd out all other viewpoints. My question, then and now, was: OK, how does it really work in practice? Ayn Rand’s The Fountainhead, which I had picked up 15 years before, was of very little help. Even at 15, I could see that the climactic trial scene was utterly unrealistic (the prosecutor in any sane trial would, instead, be talking about law and order and how the protagonist should be locked up to prevent future acts that would kill people); and this was the case of an isolated libertarian in a non-libertarian society, not a fully-fledged functioning libertarian culture. Likewise, Robert Heinlein’s The Moon Is a Harsh Mistress seems to show a complete (prison) libertarian society; but proponents seem to keep forgetting that at the end, when supposedly the truly independent libertarian society is launched, it immediately breaks down in “political infighting”, and the protagonist leaves for hopefully more libertarian pastures.

In fact, the best portrayal of a libertarian society I have seen is in a gem of a science-fiction novel by a long-underappreciated author, Eric Frank Russell. It is really three essays in political theory, in which a benign military invasion seeks to restore three planetary societies to “normal” government, and fails miserably at each point. The first is a Mafia government, where the inhabitants reject return to “normalcy” by treating every military move as a mobster power play. The second is a government of narcissists, which simply regards everyone else as incredibly ugly and therefore to be exterminated or at least removed from sight immediately. The third is the most appealing to the protagonist – and to us.

In it, the whole society functions by a form of barter known as “obs” – short for obligations. If you want something from someone, you do something for that person (or someone else, in trade) in return – there is a very clear relationship between work and reward. Attempts at domination are met, confusingly, with “myob!” – short for “mind your own business.” The person who tries to get things for free is progressively blocked until, if he does not stop, he cannot get anything at all, and starves to death.

At this point, readers of the Laura and Mary books should begin to suspect what makes this society so appealing – it’s very like the frontier society described in those books. Pa and Ma are utterly self-reliant and function well in a society of other self-reliant types, where everyone feels an obligation to help others and expects others to help them in the same measure, where payment is as much by that kind of trade as by exchange of goods for money, where the children are brought up to behave in the same way, with fewer stereotypes about “helpless women”, and where outsiders are few.

And yet, the Laura and Mary books, written as they are from the viewpoint of a pre-psychology frontier society, cannot help but note things that call into question the long-term stability of such a society, and form a sharp contrast with Mr. Russell’s society. The violent, disturbed young student in Laura’s class is handled without regulations, yes, but by application of a whip that will in all likelihood lead to an adulthood in which the student views the whip as the answer to everything – “might makes right.” The government that fails to get the trains through in The Long Winter is rightly viewed as incompetent by Pa; but, as technically dexterous as people are, there is no sign that the frontier society is anywhere near providing the same level of technology as the train and distribution system that keeps folk alive and keeps people from going stir-crazy. When Almanzo proves incompetent at farming and goes back to his expertise in running a country store, while Laura wrecks her health trying to pick up the slack at the farm while pregnant, Pa comes to the rescue, yes, but the result is that Laura moves back to a non-frontier, non-libertarian society, where there is a better social safety net. The drunks, the rowdy miners, the rancher-farmer friction of much of the rest of the American West, and the attendant psychological problems are mostly offstage, but we can still perceive that they are out there. The problems of class (the Olesons), of isolation (The Long Winter), of excess disease (Mary’s blindness) are all psychologically-related strains that this society can only handle up to a point, and whose solutions are far from ideal.

Libertarian Power Vulnerability

At this point, the libertarian will typically answer with a variant of “Here’s how libertarianism solves the problem!” – usually an assumption that today’s or tomorrow’s technology (computer or otherwise) will allow a very loosely coupled massive collection of frontier societies to function without the psychological stresses caused by hunger, disease, and so on. But that is not my point. Because we now come to the psychologically disturbed who seek excess power in such a society.

Note Mr. Russell’s way of solving the problem of the self-centered, amoral individual. Leave aside the fact that piling more “obs” on a person may confront that person with reality, but may also create a burden greater than an individual who is partly but not wholly self-centered can bear. No, my objection is rather that such a solution still leaves the door wide open to exploitation by people with known and dangerous psychological conditions – in many ways, wider open than in other forms of government.

Consider, for example, the psychopath. The psychopath has no objection to working hard – although for the life of him he cannot understand why other people care. Rather, the psychopath is often superb at counterfeiting the well-adjusted, hard-working individual, while behind the scenes he seeks control and immediate gratification of whims. Want to see how he behaves in his private life? Myob! He will happily undertake all sorts of obs for what he views as useless things in return for more and more power; and he is exceptionally good at convincing you that what he is doing is natural and normal and desirable.

In most societies, even if the psychopath himself is not recognized, at least the existence of abnormal power is, and so psychopaths are kept under more control via laws, competition from other psychopaths, and, more recently, recognition of and monitoring for the condition. In libertarianism, by definition, psychopaths are supposed to be kept in check “automagically”, by the reality of the “ob” plus the perception of their saner fellows. But, as we have seen, the psychopath is not impeded by work to satisfy “obs”, nor by inability to deceive those linked in the “ob” chain. On the contrary, the loosely-coupled nature of this society means that if the technology cannot pick it up (and why would a libertarian trust a computer Big Brother?) then the victims simply don’t compare notes – myob! And the psychopath has every incentive to reach out to other psychopaths in other communities to increase power without threat to himself. The result is a libertarian veneer over the worst of totalitarian societies – a kind of libertarian Animal Farm.

A closely allied problem is that of the narcissist. The narcissist in fact has a great incentive to work hard, and a superficial self-confidence that counterfeits sanity, but, like the psychopath, she has a total lack of empathy, and she also has a strong and violent reaction to criticism (btw, in case you wondered, both conditions affect both sexes). The object of the narcissist, therefore, is to avoid such criticism at all costs, by surrounding herself with sycophants controlled by massive obs. Narcissists also tend to reach armed truces with other narcissists in other communities leading to mutual admiration societies, with Mutually Assured Criticism replacing Mutually Assured Destruction as a deterrent to competition between them.

In most societies, the narcissist is easier to detect than the psychopath, and criticism that cannot be held at bay is the ultimate control – which we can sometimes achieve in both democracies and autocracies by occasional but nonetheless somewhat effective random acts of criticism/punishment backed by laws. But while Nellie Oleson may have been easy to counter in a frontier society, a massive collection of such societies allows the Nellie Olesons of the world a surprising power to combine and hold up “fashions” as a tool for power. Subtly, the freedom of the libertarian to think different thoughts is corrupted into the freedom to buy into the narcissist’s idea of fashion. Again, the libertarian idea that such things can be handled “automagically” simply enables a libertarian veneer over a world in which we respond as the narcissist wants, with useless obs traded for narcissist praise and control.

I pass over addiction here because, in fact, it can be argued that libertarianism has an equally effective method of dealing with it – which is, I believe, the method outlined by Mr. Russell: refusal to be blinded by promises not backed up by fulfillment of obs, and effective withdrawal of the object of addiction – kill or cure. However, I would note that this is adequately effective only if you believe that approaches that seek to end the addiction before starvation arrives are useless. I happen to believe that the carrot of psychological encouragement and the medical fix, as well as the stick, does better, and so libertarianism, which by its nature tends to reject the first two, would in the end prove less effective. In any case, the addict’s excesses limit his or her destructive effect on any scheme of government to the relatively short run – hence the short reign of some of Rome’s Augustan emperors – so the point is moot.

And now we can revert to the problem of how a government handles the psychologically powerless who are disturbed. I would repeat, again, that the reasons I view libertarianism as not as good a societal solution as many of the others cited above – the unusual power it gives to certain types of psychological disturbance through its inability to recognize those types – do not apply to the case of the psychologically ill who are powerless. Here, I rely only on my own experience of having a hand in the care of the autistic. The help of others was like that of a frontier community helping with each other’s burdens: well-meant, but in almost all cases – whether it involved the autistic person being asked to take responsibility for his or her entirely automatic destructive actions, or the helper volunteering creative thought based on irrelevant experience about what the condition was and what to do – almost invariably counterproductive. The help of businesses was entirely focused on what made money for the purveyor, with as little regard as possible for the needs of a particular as opposed to the “average” autistic person. The help of government was bumbling, bureaucratic, intrusive, and in most cases the most effective of all, because it bore some vague relation to both the common good and solid science – particularly as regards psychology. We can argue endlessly about whether “libertarianism can solve that problem!” The fact is, the libertarian communities I hear about out there don’t have psychological smarts, aren’t set up to inhale or accept scientific knowledge about Tourette’s or ADHD, and tend to be misogynistic and indiscriminate in their enthusiasms; so even if the nature of libertarianism does not preclude decent handling of the psychological illness of the powerless, I see absolutely no sign of real-world libertarianism coming up with such a solution in the foreseeable future.

A Better Way

And yet, as I have noted, we perceive some merit in the ideas that libertarianism promotes. This merit, I would say, lies entirely in two aspects of its criticism of present-day societies and governments. Firstly, yes, there should be a much tighter link between actions and consequences. We respond positively, rightly, to the notion that we have to earn as much as, but not more than, we get in return, and that we have to pay the right price for doing wrong things – because it tells us that our irrepressible self-esteem is truly earned. Secondly, it is indeed reasonable to be concerned about increasing powerlessness and threat of blackmail in one’s life as governments intrude, even to try to make things better. It is this kind of merit, I believe, that leads a scientist like James Hansen to protest that he is a bit of a libertarian himself, even as he is being viciously attacked by climate deniers in the guise of libertarians.

And yet, libertarianism, like anarchism, is effectively defined by its negatives. Rather than accommodating a theory of human psychology that requires consideration of structures to contain the harmful societal effects of mental illness, while preserving the potential of those temporarily ill for later contributions, libertarianism “solves” the problems of skewed action/reward pairs and powerlessness by abolishing such structures entirely. The result, by libertarianism’s own definition, is a much greater inability to cope with society-destabilizing and destructive psychological conditions such as psychopathy and narcissism.

And so, sadly, we must return to trying to improve the systems we already have in these areas. These, at least, have made some effort to accommodate psychology’s insights; and therefore their structures are to some extent pre-adapted to the task. However, our development of these structures has up to now been almost entirely reactive, involving after-the-fact regulations and sanctions, leaving the powerful but disturbed always one step ahead in adapting to the new rules of the game.

A better answer to the action/reward problem, I believe, is to embed a deliberate integration of psychology and action/reward fine-tuning into the workings of the government. Yes, like everything else, it can be diverted by the powerful. But introducing the notion of psychopathology into the fabric of normally-functioning society and power structures makes the notion a fundamental part of life that the powerful cannot quite argue away. Too blatant a violation, and only sweeping aside other powerful people to alter the government will do; and that’s a solution that even a psychopath will usually find difficult. The action/reward fine-tuning can then be undertaken as limited by psychology concerns, rather than used to justify deregulation leading to libertarian dystopias.

In the powerlessness area, we are faced with the dilemma that societal success and growth increase the distorting power bases that affect us. Our technological might and numbers allow us to create better monitoring technologies for threats like al Qaeda – technologies which in turn are very hard to point away from us, much less to prevent the powerful in government and outside it from using to keep us in line, whatever that means. As our numbers and tasks increase, the power of our vote decreases. The answer here, I believe, lies less in the checks and balances and regulations that are harder and harder to preserve and extend, and more in the extension of fundamental limits on any powerful person’s ability to throw sand in the gears, by the constitutional elevation of scientific fact as a limit on all government. It is now time to refuse to permit laws that define pi as 3.00, as the Indiana legislature was once reported to have tried. And that job should not be the sole duty of the courts, but also of scientists whose independence and peer-review process is preserved constitutionally as a necessary part of a free society. The best way to prevent burgeoning power from ever being used against us unjustly is to require that its users never distort the truth beyond recognition in their efforts to do so; the powerful will view changing these rules as not worth the bother, while this kind of rule places a better limit on the powerful, no matter what their tactics.

And finally, the better way would ensure that we continue to improve our handling of the powerless and psychologically ill, as we have been doing, free of interference from those who would wipe out such efforts in the name of libertarianism and the like. I am no automatic fan of today’s efforts in this area. I am, I think, fully aware of their misuse, not least by some psychologists. I simply assert that we are much, much better off than in the days when Bedlam was the proper place for the autistic – and further improvements lie along much the same path.

Viewed in terms of human psychological excesses, it seems clear to me that libertarianism is by its nature a cure that is worse than the disease. Within our present frustrating systems, there lie the seeds of a way at least marginally better than today’s governmental mess. One way to grow those seeds is to do a better job of facing the realities of human psychology when allied with power, leading to alterations within the general framework of those systems. Libertarianism offers good criticisms but also a poor solution that rejects psychology. We should incorporate those criticisms into a government that handles human psychology better and therefore rejects libertarianism.

Thursday, December 8, 2011

Energy Usage and IT: Is the Name Not a Clue?

At the IBM Systems and Technology Group analyst briefing two days ago, IBM displayed three notable statistics:

1. The global amount of information stored has been growing at 70-100% per year for the last 5 years, with the result that the amount of storage has been growing by 20-40% per year;

2. The amount of enterprise expenditures for datacenter power/cooling has grown by more than 10-fold over the last 15 years, with the result that these expenditures are now around 16% of system TCO – equal to the cost of the hardware, although well below the also-rising costs of administration;

3. Datacenter energy usage has doubled over the last five years.

These statistics almost certainly underestimate the growth in computing’s energy usage, inside and outside IT. They focus on infrastructure in place 5 years ago, ignoring a highly likely shift to new or existing data centers in developing countries that are likely to be more energy-inefficient. They also ignore the tendency to shift computing usage outside of the data center and into the small-form-factor devices, from the PC to the iPhone, that are proliferating in the rest of the enterprise and outside its virtual walls. Even without those increases, it is clear that computing has moved from an estimated 2% of global energy usage 5 years ago to somewhere between 3% and 4%. Nor has more energy usage in computing led to a decrease in other energy usage – at best, it has had a minimal effect. In other words, computing has not been used to increase energy efficiency or decrease energy use by more than marginal amounts – not because the tools are not beginning to arrive, but rather because enterprises and governments are not yet using them to monitor and improve energy usage in an effective enough way.
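
For what it’s worth, the compound annual growth rates implied by those statistics are easy to check:

```python
# Implied compound annual growth rates behind the briefing's statistics.
def annual_rate(total_growth_factor: float, years: int) -> float:
    return total_growth_factor ** (1 / years) - 1

print(f"{annual_rate(2, 5):.1%}")    # energy use doubled in 5 years     -> ~14.9%/yr
print(f"{annual_rate(10, 15):.1%}")  # power/cooling spend 10x in 15 yrs -> ~16.6%/yr
```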

And yet, since the very beginning there have been voices – mine among them – pointing out that this was a significant problem and that there were ways to move much more aggressively. I remember giving a speech in 2008 to IT folks, in the teeth of the recession, stressing that the problem would only get worse if ignored, that doing something about it would in fact have a short payback period, and that tools for making a major impact were already there. Here we are, and the reaction of the presenters and audience at the STG conference is that the rise in energy usage is no big deal, that datacenters are handling it just fine with a few tweaks, and that IT should focus almost exclusively on cutting administrative costs.

All this reminds me of a Robin Williams comedy routine after the Wall Street implosion. Noting the number of people blindly investing with Bernard Madoff, pronounced “made off” as in “made off with your money”, Robin simply asked, “Was the name not a clue?” So, I have to ask, “energy usage”: is the name not a clue? What does it take to realize that this is a serious and escalating problem?

The Real Danger

Right now, it is all too easy to play the game of “out of sight, out of time, out of mind.” Datacenter energy usage seems as if it is easily handled over the next few years. Related energy usage is out of the sight of corporate. Costs in a volatile global economy that stubbornly refuses to lift off (except in “developing markets” with lower costs to begin with), not to mention innovations to attract increasingly assertive consumers, seem far more urgent than energy issues.

However, the metrics we use to determine this are out of whack. Not only do they, as noted above, ignore the movement of energy usage to areas of lower efficiency, but they also ignore the impact of the Global 10,000 moving in lockstep to build on instead of replacing existing solutions.

Let’s see how it has worked up to now. Corporate demands that IT increase capabilities while not increasing costs. The tightness of the constraints and the existence of less-efficient infrastructure cause IT to increase wasteful scale-out computing almost as much as fast-improving scale-up computing, and also to move some computing outside the data center – e.g., Bring Your Own Device – or overseas – e.g., to an available facility in Manila that is cheaper to provision because it is not comparably energy-optimized at the outset. Next year, the same scenario plays out, only now there is an even larger stock of inefficient physical and hardware infrastructure that would have to be rebuilt from scratch. And on it goes.

But all this would mean little – just another little cost passed on to the consumer, since everyone’s doing it – were it not for two things; two Real Dangers. First, the same process that is slowing the response to energy inefficiency is also eroding the enterprise’s ability to monitor and control energy usage effectively, once it finally gets around to it. More of the energy usage that should be under the company’s eye is moving to developing countries and to employees/consumers using their own private energy sources inside the walls, so that the barriers to monitoring are greater and the costs of implementing it are higher.

Second – and this is longer-term but far more serious – shifts to carbon-neutral economies are taking far too long, so that every government and economy faces an indefinite future of rising expenditures to cope with natural disasters, decreasing food availability, steadily increasing human (and therefore plant/office/market) migration, and growing energy inefficiency as heating/cooling systems designed for one balance of winter and summer become increasingly inappropriate for a new one. While all estimates are speculative, the ones I find most realistic indicate that over the next ten years, assuming nothing effective is done, these factors will cause the global economy to underperform by up to 1% per year, and by up to double that by 2035. That, in turn, translates into narrower profit margins, driven both by underperforming consumer demand and by rising energy and infrastructure maintenance costs – hitting the least efficient first, but hitting everyone eventually.

The Blame and the Task

While it’s easy to blame the vendors or corporate blindness for this likely outcome, in this case I believe that IT should take its share of the blame – and of the responsibility for turning things around. IT was told five years ago that this was a problem. Even had corporate been unwilling to worry that far ahead, IT should at least have considered the likely effects of five years of inattention and pointed them out to corporate.

That, in turn, means that IT bears an outsized responsibility for acting now. As I noted, I see no signs that the vendors are unwilling to provide solutions for those willing to be proactive. In the last five years, carbon accounting, monitoring within and outside the data center, and “smart buildings” have taken giant leaps, while solar technologies, at whatever cost, are far more easily implemented and accessed if one doesn’t double down on the existing utility grid. Even within the datacenter, new technologies were introduced 4 years ago, by IBM among others, that should have reduced energy usage by around 80% out of the box – more than enough to deliver a decrease instead of a doubling of energy usage. The solutions are there. They should be implemented comprehensively and immediately – and, by and large, they have not been.

Alternate IT Futures

I am usually very reluctant to criticize IT. In fact, I can’t remember the last time I laid the weight of the blame on them. In this case, there are many traditional reasons to lay the primary blame elsewhere, and simply suggest that IT look to neat new vendor solutions to handle urgent but misdirected corporate demands. But that raises the question: who will change the dysfunctional process? Who will change a dynamic in which IT claims cost constraints prevent it from acquiring “nice to have” energy tools, while corporate’s efforts to respond to consumers’ “green” preferences only brush the surface of a sea of energy-usage practices embedded in the organization?

Suppose IT does not take the extra time to note the problem, identify solutions, and push for moderate-cost efforts even when strict short-term cost considerations seem to indicate otherwise. The history of the past five years suggests that, fundamentally, nothing will change in the next five, and the enterprise will be deeper in the soup than ever.

Now suppose IT is indeed proactive. Maybe nothing will happen; or maybe the foundation will be laid for a much quicker response when corporate does indeed see the problem – in which case, in five years, the enterprise as a whole is likely to be on a “virtuous cycle” of increasing margin advantages over the passive-IT laggards.

Energy usage. Is the name not a clue? What will IT do? Get the clue or sing the blues?

Thursday, November 3, 2011

Rethinking Printing

A recent piece about HP’s plan to assert the key role of its printer division in modern computing, and move it beyond the “cash cow” status it presently seems to have by redefining printers’ use cases, left me underwhelmed. As a printer/copier/scanner/fax machine user, I could not see major benefits to me (or for that matter, to the typical SMB) from the proposed “paper is still relevant, cut printing costs, use the printer as a mailbox” strategy.

Still, it made me think. If I Ruled the World, how would I like to redesign things? How could printing technology play a key role in the Web 3.0 of the future? What follows is, I realize, an exercise in fantasy – it will probably never happen. Still, I think it might form a useful framework with which to think about what printing will really do in the future – and what it won’t.

Printing a Hamburger

One of my favorite old ads was one for A-1 steak sauce that began by saying, “My friends, what is a hamburger?” The point being, of course, that it was chopped steak, and therefore people should consider A-1 instead of ketchup. I would suggest that the whole printer area would benefit from asking, “My friends, what is a printer?”

My answer would be a “virtual” one: printing/scanning/copying is about creating a computer representation of a physical image – call it a graphic – that can be displayed on a wide variety of form factors. Thus, a scanner can take these physical graphics from the outside world; a “snapshot” can take such a graphic from a video stored inside the computer; software can intercept physical representations being sent to output devices from applications – internal representations of photos, Web pages, and reports, check images, signed legal documents. These standardized representations of graphics can then be customized for, and streamed to, a wide variety of form factors: computer screens, cell phone and laptop displays, printers, email messages (attachments), or fax machines (although I tend to think those are fading away, replaced by email PDFs).

Is this Enterprise Content Management? No. The point of such a “gateway”, that represents many graphic formats in a few ways and then customizes for a wide variety of physical displays, is that it is aimed at physical display – not at managing multiple users’ workflow. Its unit is the pixel, and its strength is the ability to utterly simplify the task of, say, taking a smartphone photo and simultaneously printing, emailing, faxing, and displaying on another worker’s screen.

One of those output devices – and probably the most useful – is the printer/scanner/copier. But the printer is not the core of the solution; the core is distributed broker software, like the Web’s email servers, that passes the common representations from physical store to physical store and routes them to “displays” on demand. Today’s printer is simply the best starting point for creating such a solution, because it does the best job of capturing the full richness of a graphic.

We are surprisingly close to being able to do such a thing. Documents or documents plus signatures can be converted into “PDF” graphics; photos into JPGs and the like; emails, instant messages, Twitter, and Facebook into printable form; screen and Web page printing is mature; check image scanning has finally become somewhat functional; and we are not far from a general ability to do a freeze-frame JPG from a video, just by pressing a button. But, somehow, nobody has put it all together in such a “gateway.”
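To show how little is actually missing, here is a minimal sketch of such a gateway in Python. Nothing in it is a real product or API; every class and function name is invented for illustration:

```python
# Minimal sketch of the "gateway" idea described above. All names are
# invented for illustration; this is not a real product or API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Graphic:
    """One standardized, pixel-level representation of a physical image."""
    pixels: bytes
    width: int
    height: int
    source: str   # e.g. "scanner", "video snapshot", "app print stream"

class Gateway:
    """Routes one common graphic representation to many physical displays."""
    def __init__(self) -> None:
        self._sinks: dict[str, Callable[[Graphic], None]] = {}

    def register(self, name: str, sink: Callable[[Graphic], None]) -> None:
        self._sinks[name] = sink

    def dispatch(self, g: Graphic, targets: list[str]) -> None:
        # One capture, many simultaneous "displays": printer, email, screen.
        for t in targets:
            self._sinks[t](g)

gw = Gateway()
gw.register("printer", lambda g: print(f"print {g.width}x{g.height} from {g.source}"))
gw.register("email", lambda g: print(f"email {g.width}x{g.height} as attachment"))
gw.dispatch(Graphic(b"", 640, 480, "smartphone photo"), ["printer", "email"])
```

The point of the sketch: once everything is reduced to one common Graphic representation, adding a new “display” – fax, wall screen, mobile printer – is one registration call, not a new product.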

Extreme Fantasy

As a side note, I’d like to ask if the following is at all feasible. Visualize your smartphone in your mind for a second. Suppose you had optional add-ons in back. They would contain 4-5 pieces of phone-sized “image paper” in one slim add-on, and a laser-type “print head” that would place the appropriate color on each pixel of the paper. A button or screen icon on the front would allow printing of whatever was displaying on the screen – including, as I suggested, a “snapshot” of a video.
Can it be done? I don’t know. I know that I would love to have such a thing for my mobile devices, even crippled as it probably would be. Remember the old Polaroid OneStep? The joy of that was the immediacy, the ability to share with someone physically present a photo that was not really high-quality, but was still a great topic of conversation.

Why haven’t printer vendors moved more in this direction? Yes, I know that attempts to provide cut-down attachable printers for laptops haven’t sold. But I think that’s because the vendors have failed to distinguish two use cases. One is the highly mobile case, where you’re running around a local office or factory and need to print from whatever form factor you’re carrying: small laptop, tablet, or even cell phone. That’s the case where all you need is the ability to print out 4-5 pages’ worth of info, bad quality and all. For that, you need a very light printer, preferably one that attaches to the form factor so that you can carry both – like strapping two books together.

The second is the “mobile office” case, where you go somewhere and then sit and work. In that case, you need full power, including a multi-sheet scanner/copier feeder – why don’t printer vendors realize how useful that is? It should be light enough so it can be carried like a briefcase or laptop, but it doesn’t have to be attachable; and it should be wireless and work with a MiFi laptop. Above all, it should be foldable so that it’s compact and its parts don’t fall off when you carry it.

Sad Conclusions

I realize that some of the details of the above fantasy may be unrealistic. The real point of the fantasy is that paper and the printer are not dead, no matter what; because, as HP pointed out, people keep using them more. And, I believe, that’s because it is just so useful to have a “static” representation of lots of information to carry around with you. Nothing on the horizon replaces that need – not the cell phone or laptop with its small display, nor the immobile PC or TV, nor even the tablet, with its book support but its inability to “detach” valued excerpts for file-cabinet storage and real-world sharing.

But not being dead doesn’t mean wild success. I believe that HP’s printer “vision” is still too parochial, because it fails to detach the core value proposition from a particular physical device. It may be safe to extrapolate present markets; but I believe that such a marketing approach falls far short of print technology’s potential.

Still, I don’t believe that printer vendors will do what I fantasize. The risks of failure are too great, and the present revenues too comforting. No, it seems likely to me that five years from now, the printer business will still be seen as a cash cow – because nobody asked what a hamburger is.

Pity.

Friday, October 21, 2011

Update to Libya War Post

Well, things have gotten better -- but not much.

A quick look at the archives of Britain's Guardian -- which did better than any US paper -- reveals far more coverage of Qadaffi's death than of the fact that the war was ongoing, or even that the rebels had finally eliminated the last prominent resistance in Sirte. And as late as a month ago, it was talking as if the military part of the war was over, while discussing an assessment of NATO "military success" as if that was the key element in military victory.

Meanwhile, Michele Bachmann, a Presidential candidate, was quoted 2 days ago (more or less) as saying, "Obama got us into Libya; now he's getting us into Africa." Aside from the obvious but probably accidental geographical mistake, it seems clear she believed that the US was deeply involved in the Libya war and would continue to be involved in the near future.

This is just the commentary on the war; one might also cite the persistent failure to note Qadaffi's role in horrendous wars in Liberia, or the complete nonsense of any references to al Qaeda.

Still, the press did manage to finally understand that the war is really effectively over as of about the date of Qadaffi's death, that the rebels are indeed functioning as a government in all occupied territories, and that despite overclaims by the rebels, their statements about what was going on were far more credible than those by either regime spokesmen or remote reporters. Best of all, some news organizations finally got their reporters' butts out of their hotels and were able to confirm specific rebel successes.

Finally, let me recall this approximate quote from my previous post: "the war is not over ... at least a month, and in the worst case, two months more." It's a little less than two months, and here we are. In the meantime, I can't recall one news organization making that obvious projection. Meanwhile, Wikipedia still managed to provide me with clear, accurate news about the war ahead of all major news organizations about 1/3 of the time, by looking carefully at published NATO briefings.

I can't wait to see what our news organizations and politicians do for an encore. Probably forget all about Libya except to use it as a convenient stick figure for fear, uncertainty, and doubt. At this point, I am tempted to echo the advice of Professor Higgins' mother in My Fair Lady:

Stick to the weather and health.

Steve Jobs

I believe that most folks were expecting an announcement of Steve’s death sometime soon. And yet, it is clear from most computing industry reactions to the announcement that there was something different and valuable about Steve, and that it is not clear who will succeed him in providing that unique “something.” But what is it?

I believe that it’s a much more subtle thing than most folks realize – because they don’t understand the mistakes he made along the way to becoming what he was just before he died, and how he managed to change himself just enough to begin to deliver user-friendly innovation after user-friendly innovation to meet the real needs of the consumer.


My Own Jobs History: The Early Apple Years
In his tenure as leader of Apple from its founding until his ouster, Steve Jobs displayed many of the traits that were later associated with his recent success. He was relentlessly focused on a “vision” for each release of an Apple product, and the vision was his vision, and everything in the product was highly integrated in the service of that vision. Thus, a product like the Apple IIc had its own hardware, a different processor (i.e., not Intel), its own operating system (i.e., not DOS), and, above all, strict controls over how applications could be built on top of that operating system.

The results of that style were significant success and much more significant failure. As part of the overall PC revolution, Apple shared in the rapid growth of the consumer market. Steve Jobs’ vision, evidenced in “cool” products fitted for students and graphics departments, gave Apple a big share of those submarkets. But these submarkets never led to other markets – because application developers flocked to the more open free-for-all of Microsoft and its then-competitors. At the computer software sections of stores in those days, you could find plenty of game, spreadsheet, and “do-it-yourself business” offerings on the PC side, and very few on the Apple side. And that, in turn, led to low market share for Apple in the consumer market, and in the business market as well.

There were other side-effects of Steve Jobs’ “vision” in those days. For a couple of years, I worked in a business workgroup environment using Macs, and I can tell you that the reality was not all it was cracked up to be. The print server software – a key part of our operation – frequently crashed, at bad times. The storage on Macs was prone to total failure, and there was a big risk of having all our work and records for a couple of years destroyed – as happened to me. I used PCs for years before and after; they were less user-friendly, and more prone to the “blue screen of death” in the abstract, but in the real world I could do a lot more and have a lot less risk of serious failure. And I suspect that the real reason for the difference was that things like print servers and storage just weren’t the focus of Steve’s “vision”.

At Apple, reportedly, Steve Jobs’ new-product-development style was a mixed bag. On the one hand, his insistence on user-interface quality and consistency clearly produced solutions that consumers intuitively liked, and his desire for innovation satisfied his programmers’ own urge to innovate. On the other hand, it seemed clear that his relentless upsets and negativity when the product wasn’t what he wanted were wearing after a while, gave little scope to the Apple programmers’ own creativity, and showed some blindness to what consumers really wanted. It is significant that Apple was able to spin off or foster few if any now-large software companies. FileMaker had and has excellent and useful database-design software, but is far smaller today than a Quicken.

Had Steve Jobs not been eased out at Apple back then, I am not at all convinced that he would have delivered the successful innovations of today. If I had been a board member tasked with deciding what to do, I would have weighed the short-term innovations like the Mac (which was a major step in graphical user interface [GUI] design, and had come about because Jobs stuck with the idea after the fizzle of the Lisa) against the long-term isolation to niche markets, and the waste of resources in never-win markets like Apple servers, and I might have agreed that it was time to try someone else.

A Decade in Exile
Initially, after his departure from Apple, Steve Jobs appeared to be positioning himself to make a return, but not necessarily to succeed any better once he did so. Specifically, Steve founded a company called NeXT, which aimed at producing the “next generation of operating system” using an object-oriented microkernel. Applied to a PC like the Apple Macintosh, this theoretically should have produced all sorts of benefits: better application performance, easier development of applications – especially GUI-oriented ones – on top of the operating system (because object-oriented programming was somewhat faster for development than the existing “structured” programming), and faster turn-around on new versions of the operating system supporting all sorts of new capabilities. And, to a small extent, that was what happened when Steve returned from exile to Apple and made the NeXT operating system the basis of Mac OS X, up through the latest version (Lion).

However (again looking ahead), the effect was limited. Of course, at that point, Apple had nowhere to go but up, and in its traditional educational and graphics markets, the effect of faster delivery of Jobs’ “vision” products was to attract a whole new generation to Apple; but there was no clear indication that Jobs would be able to capture more than about 10% of the PC market, which was his high point during the early Apple years. In other words, the results show us that the effects of Steve Jobs’ NeXT “vision” were limited, and might have been short-term in and of themselves. The Steve Jobs of NeXT would probably have saved Apple; he probably would not have achieved more than a fraction of Apple’s present success.

It may seem odd to point to Pixar as the source of most of the computer-industry success that we have come to associate with Steve; but I believe that that is where he made a significant alteration in style that permitted that success. In very crude terms, I think he learned to ally with others. More specifically, he learned how to make deals with other companies that compounded his company’s revenues, how to allow companies (and later developers and consumers) to take his ideas and run with them, and how to accept ideas from others in areas he knew less about, and incorporate them into products in areas he did know very well.

The story of Pixar up to now is briefly told: five years of plowing money into a startup before Toy Story took off, an unequal alliance with Disney that allowed survival and later led to a major share in Disney, and the ability of Pixar to deliver innovation in computer-generated animation movie after movie, combined with solid commercial success. But the key element in that success story was Jobs’ ability to strike and maintain a deal with Disney that infused not only Jobs’ “vision” but also Disney’s sense of commercial entertainment and third-party animators’ “vision” into Pixar, allowing it to repeat its initial successes without wandering off course.

The Real Triumph

When Steve Jobs returned to Apple, I – remembering his past history – was wary of becoming too enthusiastic. And, indeed, until the arrival of the iPhone, it wasn’t completely clear that things had changed. But there were two things about the iPhone that signaled a significantly different Jobs. First, the touch screen was done right. Not only was it (somewhat) innovative and user-friendly in its simplicity, but it was entertaining. It seemed to me that Steve had taken what he had learned at Pixar and applied it to giving a broad set of consumers a new “cool” tool.

The second key difference was that iPhone did development right. Yes, there was reluctance and there were limits initially on iPhone app creation, but the old Steve Jobs might very well have kept tight control on app development forever. The new Steve Jobs allowed outside developers to get a little compensation for each app, and, as they say, the rest is history – a history that shows that the result is not short-term but long-term evolutionary innovation of iPhone and the smart phone.

Things like iPad and iCloud are very much a repeat of the same story. The tablet has a long history of failure, and indeed the old Apple took full part in the belief that what mattered was making it easy for people to scribble on a computer screen. But iPad – and its competitors – were and are mainly about touch-screen commands and downloading books and movies at lower costs for use on the go. That’s a smart phone and media industry insight as much as a computer industry one. And, of course, Steve Jobs’ “vision” embodied in the user interface persuaded most consumers to accept the innovation and take the next big step forward. While iCloud will probably wind up making less of a smash, its development and introduction appear like iPhone and iPad in miniature.

Envoi
And so, I salute the Steve Jobs of the last few years as having indeed delivered major innovation that is unique and valuable. But I assert that the real question should be not who, if anyone, will succeed him in his “vision”-type style, but who will be able to deliver those kinds of results, whatever their style.

You see, there are two types of failure inherent in Steve’s story. One is the failure of the “visionary” who may be right about the general direction of innovation, but undercuts that innovation by “I only see my way” implementation of the vision. The other, which we are much more apt to perceive, is the incessant failure of others in the computer industry to pay the price initially to implement the vision, or even to have the innovative chops at all.

That the later Steve Jobs avoided both failures is, I think, an indication that at any time he would be a pretty rare bird. But I would argue that slowly, reluctantly, corporations of all stripes are coming to realize that innovation that both connects to the customer constantly and taps into company or third-party creativity is a Good Thing. So I am hoping that we will see more successors to Steve Jobs, not in his style, but in his result of successful innovation – as long as these same companies ensure that one creative voice integrates all these strands of innovation together, version after version, product after product.

Therefore, Steve Jobs, rest in peace. I always liked the Vachel Lindsay poem that went “Sleep softly, Eagle Forgotten, under the stone … To live in mankind is far more than to live in a name.” In this case, I can hope that Steve, warts and all, will live in innovators of the future. And that will be far more important to the computer industry and us than his legacy, however great, to Apple.

Monday, September 26, 2011

The Real Effects of Computing Brilliance

Recently, L.E. Modesitt, a noted and highly perceptive (imho) science fiction author with a long background in business, government, education, and the like, wrote a blog post in which he decried the contributions of “brilliant minds, especially at organizations like Google and Facebook” to “society and civilization.” He noted Eric Schmidt’s remarks in a recent interview that young people from colleges were brighter than their predecessors, and argued that they had applied their brilliance in “pursuit of the trivial … [and] of mediocrity.” More specifically, he claimed that their brilliance has helped lead to the undermining of literary copyright and therefore of an important source of ideas, the creation of bubble wealth, the destabilization of the financial system, and potentially the creation of the worst US political deadlock since the Civil War.

Here, however, Mr. Modesitt steps onto my turf as well as his own. I can claim to have been involved with computers and their results since I was in the same freshman class in college as Bill Gates (Junior, thank you). I have had a unique vantage point on those results that combines the mathematical, the technological, the business, and even the political, and have enjoyed an intimate acquaintance with, and given long-running attention to, the field. I am aware of the areas in which this generation (as well as the previous two) has contributed, and their effects. And I see a very different picture.

Let’s start with Mr. Schmidt, who is a very interesting man in his own right. Having been at Sun Microsystems (but not heading it) in its glory days of growth, struggle, and growth (1983-1997), he became CEO of Novell as it continued its slide into oblivion, and then, in 2001, a key part of the necessary transition of Google from its founders to successors who could move the company beyond one person’s vision. Only in the last role was he clearly involved with successful technology leadership, as Sun was a master at marketing average technology as visionary, while Novell’s influence on industry innovation by that point was small.

All that being said, Mr. Schmidt is speaking from the viewpoint of having seen all three of computing’s great creative “generations,” waves of college-bred innovators who were influenced by comparable generations of their societies, and who combined that social influence with their own innovations to create the stew that is the Web of today. I would divide those generations as follows:

1. Graduating from college in 1971-1981, the “baby boomer/socially conscious”.
2. Graduating from college in 1982-1995, the “individual empowerment Gen Xer”.
3. Graduating from college in 1996-2007, the “Web is the air we breathe” generation.

Where Mr. Modesitt sees only some of the effects of all three generations from the outside, I believe that Mr. Schmidt sees from within the way that generation 3 incorporates all the technological advances of the previous generations and builds on them. Therefore, where Mr. Modesitt is too negative, I believe that Mr. Schmidt is too positive. Generations 1 and 2 have created a vast substructure – all the technologies of computing and the Internet that allow Generation 3 to move rapidly to respond to, or anticipate, the needs of each segment of a global society. Because the easiest needs to tackle are the most “trivial”, those will be most visible to Mr. Modesitt. Because Generation 3 appears to move from insight to insight faster by building on that infrastructure, Mr. Schmidt sees them as more brilliant.

Let’s get down to cases: the Web, and Google. The years of Generation 3 were the years when computing had a profound effect on the global economy: it gave productivity more than a decade of 2.5-3% growth, a kind of “sunset re-appearance” of the productivity gains of the Industrial Revolution. This effect, however, was not due to Generation 3. Rather, Generation 2’s creation of desktop productivity software, unleashed in the enterprise in the early 1990s, drove personal business productivity higher, and the advent of the business-usable Web browser (if anything, a product of Generation 1’s social consciousness) allowed online sales that cut out “middleman” businesses, cutting product and service costs sharply.

But where Generation 3 has had a profound effect is in our understanding of social networks. Because of its theoretical work, we now understand in practical terms how ideas propagate, where the key facilitators and bottlenecks are, and how to optimize a network. These are emphatically not what we think of when we talk about the value of computing; but they are the necessary foundation for open source product development, new viral and community markets, and the real prospect of products and services that evolve constantly with our needs instead of becoming increasingly irrelevant (see my blog post on Continuous Delivery).
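To make “facilitators and bottlenecks” concrete, here is a toy Python sketch on an invented six-node network – no real data, no network-science library, just breadth-first search:

```python
# Toy illustration (invented graph, no real data): which nodes are the
# key facilitators for idea propagation?

from collections import deque

graph = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
    "d": ["c", "e"], "e": ["d", "f"], "f": ["e"],
}

def hops_to_reach_all(g, seed):
    """BFS: hops until an idea from `seed` reaches everyone, inf if it can't."""
    seen, frontier, hops = {seed}, deque([(seed, 0)]), 0
    while frontier:
        node, h = frontier.popleft()
        hops = max(hops, h)
        for nbr in g.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, h + 1))
    return hops if len(seen) == len(g) else float("inf")

# Remove each node in turn; the network falls apart (inf) only when one
# of the bridge nodes "c", "d", or "e" is removed.
for cut in graph:
    g2 = {n: [x for x in nb if x != cut] for n, nb in graph.items() if n != cut}
    print(cut, hops_to_reach_all(g2, next(iter(g2))))
```

Knock out an ordinary node and the idea still spreads; knock out a bridge node and propagation stops. That, in miniature, is the kind of practical network insight I am crediting to Generation 3.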

Has that made life better? I would say, that’s like asking if Isaac Newton’s calculus was a good idea, given that it has mainly been used to calculate trajectories for artillery. Ideas can be used for good or ill; and often the inventor or creator has the least control over their long-term use by others.

But let’s play the game, anyway. Undermining copyright? I would blame that more on Generation 2, which used the Web to further its libertarian ideologies that now form a key part of the “Web ethos”. Financial algorithms and bubble wealth? Sorry, that’s a product of the broader society’s reaction to the perceived (but not necessarily real) lawlessness and government “do-gooding” of the Vietnam era, so that law and order and “leave me alone” permitted business deregulation and lack of outside-the-box thinking about the risks of new financial technologies. Political deadlock? Same thing. “Leave me alone” created a split between Vietnam-era social consciousness and previous generations, while removing later generations from the political process except as an amoral or totally self-interested game. In both those cases, no computing Generation could have done very much about it except to enable it.

What are the most profound ideas and impacts of all three Generations’ “brilliant minds”? I would say, first, the idea of a “meta-product” or “meta-thing” (today’s jargon probably would call it a “virtual reality”), an abstraction of a physical good or process or person that allows much more rapid analysis and change of what we need. To a far greater degree than we realize, our products are now infused with computer software that not only creates difficulties in carrying out tasks for poor Mr. Modesitt, but also gives an array of capabilities that can be adjusted, adapted, evolved, and elaborated much more rapidly than ever before. Mr. Modesitt, I know, rightly resents the stupidities of his word processing software; but there is simply no way that his typewriter hardware by now would have the spelling and grammar checking capabilities, much less the rapid publishing capabilities, of that same word processor. It may be more work to achieve the same result; but I can guarantee that, good or bad, many of the results achieved could not have been attained forty years ago. Our computing Generations did that.

The second, I believe, is that of global information. The Web, as imperfect as it is, and as security-ridden as it has become, has meant a sea-change in the amount of important information available to most if not all of us. When I was in college, the information readily available was mostly in textbooks and bookstores, which did not begin to capture the richness of information in the world. The key innovation here was not so much the search engine, which gave a way to sort this information, but the global nature of the Web user interface, the embedding of links between bits of information, and the invention of “crowdsourcing” communities, of which Wikipedia is by far the most successful as a pointer from information area to information area. Say all you like about the lies, omissions, and lack of history behind much of this global information pool; it is still better than a world in which such information appeared not to exist. Our computing Generations did that, too.

The third idea, the one I mentioned above, is understanding of, and ability to affect the design of, social networks. Facebook is only the latest variant on this theme, which has shown up in Internet groups, bulletin boards, open source and special-interest communities, smartphone and text-message interfaces, file sharing services, blogs, job sites, and other “social networks.” The Arab Spring may be an overhyped manifestation of its power; but I think it is fair to say that some of the underlying ideas about mass peaceful regime change could never have been global or effective without computing’s “brilliant minds.” I give Generation 3 most of the credit for that.

So here’s an example of why I disagree with Mr. Modesitt: a couple of years ago, I sat down to understand climate change. At first, the best information came from old-style books, mostly from the local public library. But soon, I was using www.climateprogress.com and neven1.typepad.com to dig deeper, to see the connections between what the books were saying, and then Wikipedia to point me to research articles and concept explanations that allowed me to put it into a coherent whole, adding my own technology, economic, business, political, and government experience and knowledge to fill in the gaps. No set of books, then or now, could possibly have given me enough information to do this. No phone network could possibly have shown me the people and institutions involved. No abacus could have put the statistics together, and no typewriter or TV could have given me such a flexible window to the information. The three great ideas of computing’s three Generations made such a thing possible.

So, no, I don’t believe that much of what this Generation of brilliant minds has been working on is at all trivial or mediocre, despite appearances – not to mention the contributions of the other two Generations – nor has its effect been predominantly bad. Rather, it reminds me of the remark in Dilbert that it is the work of a few smart, unprepossessing drones that moves the world forward, hindered rather than helped by their business or the greater society. Despite society’s – and sometimes their own – best efforts, these Generations’ blind pursuit of “the new new thing”, I say, has produced something that overall is good. In every sense.

Monday, September 12, 2011

Human Math and Arctic Sea Ice

Over the last year and a half, instead of following baseball and football, I have been following the measurements of Arctic sea ice. Will it entirely melt away at minimum? If so, when? Will it continue to melt past that point? Will the Arctic reach the point of being effectively ice-free year-round? If so, when? Will Arctic sea ice set a new record low in extent this year? In area? In volume? By the way, the answers to the last three questions in 2011 are already, by some measures, yes, yes, and yes.

In the process, I have become entirely addicted to the Arctic Sea Ice web site (neven1.typepad.com), the Arctic sea ice equivalent of a fantasy football league. There, I and other scientist and math wannabes can monitor and pit ourselves against near-experts, and plunder much of the same data available to scientists to try our own hand at understanding the mechanics of the ebb and flow of this ice, or to make our own predictions. Periodically, scientific research pops up that deepens our understanding in a particular area, or challenges some of our assumptions. Strange weather patterns occur as we near minimum, making the final outcome unpredictable. In fact, this year we still don’t know whether the game is over for the year or not – whether we are going into overtime.

But what disturbs me about the glimpse I am getting into science, this late in life, is that I keep seeing half-veiled glimpses of what I might call “data snow blindness” – not just from us newbies, but from some of the scientific research noted on the site. By that I mean that those who analyze Arctic sea ice data tend to get so wrapped up in elaborating the mathematical details – say, how short-term and long-term changes in sea-ice albedo, the North Atlantic Dipole Anomaly, cloud cover, and increased Arctic surface temperatures can predict a decrease in extent at minimum – that they seem to forget what extent data really conveys and does not convey, and how other factors not yet clearly affecting extent data should play an increasing role in future extent predictions. They keep talking about “tipping points” and “equilibria” and “fundamental changes in the system” that do not appear to be there. And so, what surfaces in the media, with few if any exceptions, gives an entirely misleading picture of what is to come.

So let me lay out my understanding of what the data is really saying, and how people seem to be going wrong. Bearing in mind, of course, that I’m not the greatest mathematician in the world, either.

The Model
I find that the easiest way to visualize what’s going on with Arctic sea ice is to think of a giant glass full of water, with an ice cube floating on top that typically almost but not quite fills the top of the glass. In the summer, the sun shines on the ice and top of the water, adding heat on top; in winter, that heat turns off. In summer and winter, the water below the ice is being heated, although a little less so in winter.

This does not quite complete the picture. You see, water is constantly flowing in from the south on one side and out to the south on another. As it hits the ice, it freezes, and moves through the cube until it exits on the other side – a transit of about five years, on average, apparently. These currents also act to break the ice apart as it melts in summer, so (especially at the edge) there are lots of little “cubelets”. Where the ice is solidly packed together in the glass, “area” – the amount of ice we see from above – is the same as “extent” – the region in which there is enough ice to be easily detectable. But where there are cubelets, extent is much greater than area – as much as 70% more, according to recent figures.

One more important point: the depth of Arctic sea ice is not at all the same everywhere. This was true when the first nuclear sub approached the Pole underwater decades ago, and it is true as the Polarstern measures depth from above this year. The age of the ice, the point in the year in which it first froze, and where it is in relation to the Pole all factor in; but even in small regions, “thickness” varies. Generally, less than half the ice is almost exactly the same thickness as the average, with lots above and lots below. I find it useful to think of a normal curve of sea-ice thickness more or less peaking at the average.

In a state of “equilibrium” such as apparently has existed for perhaps 5-10 million years, the Arctic cube fills at least 70% of the Arctic in summer and just about all of the Arctic in winter. In summer, the sun melts the top and sides of the cube. However, since we only see the top, which only peeks out a little bit from the top of the water, we don’t see the way that the bottom of the cube rises in the water – the “balance point” between “salt” water below and “no-salt” ice above goes up. In the same way, in winter, we don’t see the way that the balance point goes down deeper.

Now suppose the globe starts warming. As it happens, it warms faster in the Arctic than in the equator, and it warms in both the near-surface air and in the water that is flowing through the Arctic underneath the ice, year-round. Estimates are that the air temperatures in summer in or near the Arctic have reached more than 10 degrees higher, and the rate of rise is accelerating; likewise, the ocean temperature is up more than a degree, and the rate of rise is accelerating. What we would expect to happen is that there is less and less ice during the summer – and also less and less ice during the winter, because the water underneath that cube is warmer, and the “balance point” is higher. And the rate we would be losing ice would be accelerating.

The Measures
Now how do we measure what’s going on with the ice? We point our satellites at the Arctic, and we measure in three ways: extent, area, and volume. Extent and area we have explained above; volume is area times average thickness.

To get extent, we divide the Arctic into regions, and then into subregions. If, say, more than 15% of the subregions in a region show ice, then that’s part of the ice extent. The area is the extent times the percent of subregions that show all ice (this is a very simplified description). There are difficulties with this, such as the fact that satellites tend to find it difficult to distinguish between melt ponds at the top of the ice during melting and melting of the ice “all the way down”; but it appears that those cause only minor misestimations of “actual” extent and area. Storms that wash over “cubelet” ice can cause temporary fairly large downward jumps in what the satellite perceives as ice, and hence in area; but again, this goes away as the storm subsides. All in all, the measuring system gives a pretty good picture (by now) of what’s going on with area and extent.
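For the concretely minded, here is that extent/area bookkeeping as a toy Python calculation. The 15% cutoff is the standard one; the grid of per-cell ice concentrations and the cell size are invented:

```python
# Toy version of the simplified extent/area description above. The grid
# of concentrations (fractions 0..1) and the cell size are invented.

CELL_KM2 = 625.0        # assume each grid cell covers 25 km x 25 km
THRESHOLD = 0.15        # the standard 15% concentration cutoff

grid = [
    [0.00, 0.10, 0.40],
    [0.20, 0.95, 1.00],
    [0.05, 0.60, 1.00],
]

extent = sum(CELL_KM2 for row in grid for c in row if c >= THRESHOLD)
area = sum(CELL_KM2 * c for row in grid for c in row if c >= THRESHOLD)

print(f"extent: {extent:.0f} km^2")   # every cell >= 15% counts in full
print(f"area:   {area:.0f} km^2")     # cells weighted by concentration
# Broken "cubelet" ice is why extent can exceed area by a wide margin.
```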

Volume is much more difficult to get at – because it’s very hard to get an accurate picture of thickness from a satellite. Instead, we create a model of how thick each part of the ice should be at any time of the year, and then check it out “on foot”: walking around on the ice (or icebreaking) and testing.

Now here’s what most people either don’t know or don’t think about: that model has been tested against samples almost constantly since it was first developed decades ago, and it is pretty darn accurate. When it has said, this year, that ice is as little as a meter thick near the North Pole, vessels went out and, lo and behold, found a large patch of ice 0.9 meters thick near the North Pole. That’s clearly not something that many scientists were anticipating six years ago as likely – but the model was predicting it.

It’s the Volume, Stupid
All right, let’s get back to the model. Suppose we have equilibrium, and we start an accelerating rise in air and ocean temperatures. What changes would we expect to see in real extent, area, and volume year to year?

Well, think about what’s happening. The water heating is nibbling away at the “balance point” from beneath, and at the edges. The air heating is nibbling away at the top of the ice during the summer, and expanding the length of the summer a bit, and heating the surface water at the edges a bit. So it all depends on the importance of water heating vs. air heating. If water heating has little effect compared to air heating, what happens to volume is not too far from what happens to area and extent: They start plummeting earlier and go farther down, but during the winter, when just about the whole basin gets frozen again, they go back to about where they were – a little less, because the air is more often above freezing at the very edge of the ice even in the depth of winter.

Now suppose water heating has a major role to play. This role shows up mostly as “bottom melt”; it shows up year-round in about the same amounts; and, until nearly all the ice is melted, it shows up almost entirely in volume. So here is what you’ll see when you look at extent, area, and volume. Volume goes down for quite a while before it’s really clear that area and extent are changing. Then area and extent start going down at minimum, but not much if at all at maximum. Then the melt moves the balance point up and up, some of that normal curve of thickness starts reaching “negative thickness”, some of the melt reaches the surface of the ice, and area starts moving down fast, with extent following. Then, in a year or two, half of the ice reaches “negative thickness”, so that area seems to be cut in half and extent by one-third; the same thing happens the next year, while the rate of volume decrease actually starts slowing. And then, in two more years, effectively less than 1% of the maximum ice area shows up at minimum, and less than 15% of the extent.

This is where lots of folks seem to fail to understand the difference between a measure and what it’s measuring. If you look at the volume curve, until the last few years before it reaches zero it seems to be accelerating downward – then it starts flattening out. But what’s actually happening is that some parts of the ice have reached the point where they’re melting to zero – others aren’t, because there’s a great variation in ice thickness. If we represented those parts of the ice that have melted as “negative thickness”, then we would continue to see a “volume” curve accelerating downward. Instead, we represent them as “zero thickness”, and the volume delays going to zero for four or five years.
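For the mathematically inclined, here is a small Python sketch of that effect, using the normal curve of thickness described earlier with a steadily declining mean. All numbers are illustrative, not PIOMAS output; melted ice is clamped at zero, just as in the measured volume:

```python
# Thickness ~ Normal(mean, sd); melted ice is clamped at zero, so measured
# volume per unit area is E[max(thickness, 0)]. Illustrative numbers only.

import math

def measured(mean_m: float, sd_m: float = 1.0):
    """Surviving-area fraction and measured volume per unit area."""
    z = mean_m / sd_m
    surviving = 0.5 * (1 + math.erf(z / math.sqrt(2)))   # P(thickness > 0)
    # E[max(thickness, 0)] for a normal distribution:
    volume = mean_m * surviving + sd_m * math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    return surviving, volume

for mean in (3.0, 2.0, 1.0, 0.5, 0.0, -0.5):
    area_frac, vol = measured(mean)
    print(f"mean {mean:+.1f} m -> area {area_frac:4.0%}, measured volume {vol:.2f} m")
```

Run it and the pattern above appears: the underlying mean falls steadily, but measured volume flattens as it nears zero, while area suddenly collapses.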

So what are our measures telling us about the relative importance of water and air heating? First of all, the volume curve is going down, and going down at an accelerating rate. From 1980 (the start of the model) to 2005, average thickness at minimum area (a good proxy for thickness at ice minimum) went from more than 3 meters to a little more than 2 meters. From 2006 to 2010, it went from there to a little more than 1 meter. If it were to follow the same path, “volume” would go to zero at minimum around 2013 or 2014, with the model’s measure of volume going to zero perhaps 3 years later, as explained in the last paragraph.

But there’s another key fact about volume: volume at maximum went down by about the same amount (15 million cubic meters) from 1980 to 2010 as volume at minimum (14 million cubic meters). In other words, instead of volume springing back during the winter almost to what it was 30 years ago, winter volume is almost exactly tracking the loss of volume in the rest of the year. The only way this happens is if water melting the ice from the bottom is a major part of the loss of volume from year to year.

Let’s sum up. The samples show that volume is going down at an accelerating rate throughout the year. The accelerating drop in volume shows that bottom melt is driving accelerating loss of Arctic sea ice year-round. Thus, we can predict that area and extent will soon take dramatic drops not obvious from the area and extent data themselves, and that these will approach zero in the next 7 years or so. In other words, the key to understanding what’s going on, and what should happen next, is far more the measure of volume – rightly understood – than area or extent.

Where’s The Awareness?
And yet, consistently, scientists and wannabes alike seem to show a surprising degree, not just of disagreement with these projections, but also of a seeming lack of awareness that there should be any issue at all. Take some of the models that participants in Arctic Sea Ice are using to predict yearly minimum extent and area. To quote one of them, “volume appears to have no effect.” The resulting model kept being revised down, and down, and down, as previous years – with their shorter summers, slower rates of melt for a given weather pattern, and less thin ice – proved bad predictors.

Or take one of the recent scientific articles, which showed – a very interesting result – that if, for example, the Arctic suddenly became ice-free right now, it would snap back to “equilibrium” – the scientist’s phrase – within a few years. But why was the scientist talking about “equilibrium” in the first place? Why not “long-term trend”? It is as if the scientist were staring at the volume figures and saying: well, they’re in a model, they’re not really nailed down, are they, so I’ll just focus on area and extent – where the last IPCC report talked about zero ice in 2100, maybe. Suppose we substituted the volume’s “long-term trend” for “equilibrium”; what would we expect? Why, that a sudden “outlier” dip in volume, area, or extent would return, not to the same level as before, but to where the long-term accelerating downward curve of projected volume says it should return. And, sure enough, the “outlier” volume dip in 2006-2007 and extent dip in 2007 returned only partway to the 2005 level – in 2009 – before resuming their decrease at an accelerating rate.

And that brings us to another bizarre tendency: the assumption that any declines in area and extent – and, in the PIOMAS graph, in volume – should be linear. Air and ocean temperatures are rising at an accelerating rate; so an accelerating downward curve of all three measures is more likely than a linear decline. And, sure enough, since 2005, volume has been consistently farther and farther below PIOMAS’ linear-decline curve.
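The point is easy to demonstrate. Here is a quick Python sketch on synthetic data – invented numbers, not the real PIOMAS series – showing how a straight-line fit systematically misses an accelerating decline:

```python
# Fit a line and a quadratic to an invented, accelerating decline.
# These are made-up numbers, not PIOMAS data.

import numpy as np

t = np.arange(32)                     # years since 1980
true = 17.0 - 0.01 * t ** 2           # accelerating downward curve
rng = np.random.default_rng(0)
obs = true + rng.normal(0.0, 0.3, size=t.size)

lin = np.polyfit(t, obs, 1)           # straight-line fit
quad = np.polyfit(t, obs, 2)          # accelerating fit

def rms(residuals):
    return float(np.sqrt(np.mean(residuals ** 2)))

print("linear RMS residual:   ", round(rms(obs - np.polyval(lin, t)), 3))
print("quadratic RMS residual:", round(rms(obs - np.polyval(quad, t)), 3))
```

The quadratic’s residuals are just noise; the linear fit’s residuals run positive in the middle years and negative at the ends – that is, recent observations fall ever farther below the straight line, exactly as volume has done since 2005.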

Then there’s “tipping point”: the idea that somehow if Arctic sea ice falls below a certain level, it will inevitably seek some other equilibrium than the one it used to have. Any look at the volume data tells you that there is no such thing as a tipping point there. Moreover, a little more thought tells you that as long as air and ocean temperatures continue their acceleration upwards, there is no such thing as a new equilibrium either, short of no ice year-round – which is a kind of equilibrium, if you don’t consider the additional heat being added to the Arctic afterwards. Sooner or later, the air and ocean temperature at all times in winter reaches above minus 4 degrees C, so that water can’t expel its salt and freeze. And that’s not hypothetical; some projections have that happening by around 2060, if not before.

And, of course, there’s “fundamental change in the system”, which implies that somehow, before or after now, the Arctic climate will act in a new way, which will cause a new equilibrium. No; the long-term trend will cause basic changes in climate, but will not be affected by them in a major way.

Follow-On Failures
The result of this data snow blindness is an inability to see the possibility – and the likelihood – of far faster additional effects that spread well beyond the Arctic. For instance, the Canadian Prime Minister recently welcomed the idea that the Northwest Passage is opening in late summer, because Canada can skim major revenues off the new summer shipping from now on. By the data I have cited above, it is likely that all of the Arctic will be ice-free in part of July, August, and September by 2020, and for most of the rest of the year by 2045, very possibly by 2035. Then shippers simply go in one end of the Arctic via the US or Russia, come out the other end via the Danes (Greenland) or Russia, and bypass Canada completely.

And then there’s the Greenland land ice. Recent scientific assessments confirm that net land ice loss has doubled each decade for the last three decades. What the removal of Arctic sea ice means is the loss of a plug that was preventing many of the glaciers from sliding faster into the sea. Add a good couple of degrees of increased global warming from the fact that the Arctic sea ice isn’t reflecting light back into space any more, and you have a scenario of doubling net Greenland land ice loss for the next three decades as well. This, in turn, leads to a global sea level rise much faster than the 16 feet (or, around the US, maybe 20 feet) that today’s most advanced models project by assuming that from now on, Greenland land ice loss will rise linearly (again, why linearly? Because they aren’t thinking about the reality underlying the data). And, of course, the increased global warming will advance the day when West Antarctica starts contributing too.
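Some quick arithmetic shows how brutal per-decade doubling is. In this Python sketch, the starting loss rate is my assumed round number for illustration; the roughly 360-gigatons-of-ice-per-millimeter conversion is the standard one:

```python
# Compounding the "doubling each decade" claim. The starting rate is an
# assumed round number; ~360 Gt of ice raises global sea level by 1 mm.

GT_PER_MM = 360.0
rate_gt_per_year = 250.0      # assumed net Greenland loss rate today

total_mm = 0.0
for decade in (1, 2, 3):      # three more decades of doubling
    total_mm += rate_gt_per_year * 10 / GT_PER_MM
    print(f"after decade {decade}: ~{total_mm:.0f} mm of sea level rise")
    rate_gt_per_year *= 2     # loss rate doubles each decade
```

Most of the rise arrives in the final decade; that back-loading is exactly what a linear extrapolation misses.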

The Net-Net: The Burden of Proof

As a result of these instances of data snow blindness, I continually hear an attitude that runs something like this: I’m being scientifically conservative. Show me why I shouldn’t be assuming a possibility of no long-run change, or a new equilibrium short of year-round lack of Arctic ice, or a slow, linear descent such as shows up in the area and extent figures.

What I am saying -- what the data, properly understood, are saying to me – is that, on the contrary, the rapid effects I am projecting are the most likely future outcomes – much more rapid declines in area and extent at minimum in the very near future, and an ice-free Arctic and accelerating Greenland land ice loss over the 20-30 years after. Prima facie, the “conservative” projections are less likely. The burden of proof is not on me; it’s on you, to disprove the null hypothesis.

So what about it, folks? Am I going to hear the same tired arguments of “the data doesn’t show this yet” and “you have to prove it”, or will you finally start panicking, like those of us who understand what the data seem to be saying? Will you really understand which mathematics applies to the situation, or will you entrance yourselves with cookbook statistics and a fantasy world of “if this goes on”? It may be lonely out here; but it’s real.