Sunday, January 27, 2013

The Sad Implications of Two 2012 Climate Change Surprises


While reading posts recently on one of my favorite blogs (Neven's superlative Arctic Sea Ice blog), I realized that I hadn't yet tried to summarize in my own mind the two major climate-change surprises that scientists recognized last year.  What follows is my take on those surprises:

  1. Arctic methane is venting much faster than expected; and
  2. The weather effects of global warming are stronger than expected.


What are the implications?  Let’s take them one at a time.

Arctic Sea Ice and Methane

It still amazes me that most people did not see the likelihood that Arctic sea ice was going to take a nosedive to near zero in the 2013-2016 period, because when I first took a look in 2009, all I had to apply was some basic math:  exponential and normal curves.

Until recently, there was only one serious attempt to assess the volume of Arctic sea ice:  Maslowski's PIOMAS model.  As I understand it, people tended to dismiss the model because Maslowski essentially said, "this is how I model Arctic sea ice dynamics, and therefore the sea ice volume should change over time in such-and-such a way," and because his model assumed, without proof, that changes in Arctic Ocean water temperature played a major role in the increased melting over time.  However, once I saw that this model was constantly reality-tested against on-site sampling, even though each sample covered only a small part of the overall Arctic Ocean, I concluded that its long-term trends were likely to be real.  And what Maslowski's model showed, at any time of the year, was an exponentially decreasing Arctic sea ice volume.

By the way, recent CryoSat observations have shown definitively that, if anything, Maslowski's model has underestimated the rate of volume decrease.

So why, I asked myself, did I not see corresponding decreases in Arctic sea ice area and extent?  As I looked at the dynamics of Arctic sea ice, I realized that could only happen if Arctic sea ice thickness were uniformly distributed from, say, zero to twice the average thickness (at any time of year).  What was really going on was that a certain percentage of the sea ice survived from year to year to become second-year ice, third-year ice, and so on; but due to currents, the average age of Arctic sea ice back in 1980 was about five years – sooner or later, ice frozen at one end of the Arctic would reach the other end and head south into warmer waters, there to inevitably melt.  So thickness (with a little adjustment for the age of the ice) had much more of a normal distribution around the average. And that, in turn, meant that accelerated volume losses at, say, the minimum would only show up in area and extent when we reached the fat part of the curve – which, it is now apparent, occurred in 2012.

By the way, the same logic also says that volume, area, and extent will not go all the way to zero around 2014 – 2015; by then we will have reached the other, thin end of the distribution, and the exponential decrease in volume will flatten out.  That's why I fully expect to see around 1-5% of the ice remaining at minimum until sometime around 2016-2020.
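To make the distribution argument of the last two paragraphs concrete, here is a minimal sketch in Python. The numbers in it – the starting mean thickness, the spread, and the rate of thinning – are invented by me for illustration; they are not PIOMAS output or anything from Maslowski's model. The sketch only shows the shape of the behavior: volume falls steadily while extent barely moves, until the thinning reaches the fat part of the thickness distribution, after which extent collapses and only the thick tail of the ice lingers.

```python
# Toy model: thickness across the ice pack is roughly normally distributed;
# each year a growing amount of cumulative thinning is subtracted everywhere.
# Cells thinned past zero drop out of "extent"; what remains adds to "volume".
# All numbers are invented for illustration only.
import numpy as np

rng = np.random.default_rng(42)

n_cells = 200_000
thickness0 = np.clip(rng.normal(2.5, 0.8, n_cells), 0.0, None)  # metres, assumed
v0 = thickness0.mean()

print(f"{'year':>6} {'thinning (m)':>13} {'volume %':>9} {'extent %':>9}")
for year in range(2000, 2026, 2):
    thinning = 0.12 * (year - 2000)            # cumulative thinning, assumed
    remaining = np.clip(thickness0 - thinning, 0.0, None)
    volume = remaining.mean() / v0 * 100       # percent of initial volume
    extent = (remaining > 0).mean() * 100      # percent of cells still iced
    print(f"{year:>6} {thinning:>13.2f} {volume:>9.1f} {extent:>9.1f}")
```

In this toy run, volume falls to roughly half while extent drops only a few percent; extent then collapses once the thinning reaches the bulk of the distribution, and a tail of thick ice lingers at the end.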

So, as I’ve said, I expected that Arctic sea ice would begin to visibly disappear around now, and I expected the climate-change implications of this – including the fact that methane clathrates on the Russian Arctic continental shelf would begin to release their methane.  What I (and apparently others) did not expect was the scale of that release.  A Russian expedition sampling methane bubbling to the surface found huge plumes of the stuff – hundreds of times more than previous research had suggested might be the case.

Before I go on to discuss this, let me cycle back to the (expected) implications of Arctic sea ice melt.  Today’s models simply do not include melt to near zero at minimum in the 2013-16 time period, followed by a likely melt to near zero at all times of the year between 2035 and 2045. This, in turn, will not directly lead to more carbon emissions.  What it will do is decrease Arctic Ocean albedo (from off-white ice that reflects the sun’s heat to dark blue water that absorbs it during spring, summer, and fall), and therefore warm the Arctic Ocean’s contribution to global air and ocean temperatures.  That region is already 15-20 degrees Fahrenheit above normal during the summer; we are talking another 25-35 degrees by 2045, taking it to 20-25 degrees above normal during the winter. 

The increased water and air temperatures should therefore (a) accelerate methane clathrate melt, including in the deeper waters nearer the North Pole, and (b) both cool and warm the winters of more temperate zones – with the “warm” predominating over time.  How can (b) be?  Well, warmer Arctic air has more energy, and therefore pushes south against the jet stream more strongly, creating weather in which unusually cold Arctic air periodically reaches farther south.  However, that same Arctic air is steadily warming over time, to the point where by 2050 it should be as warm as or warmer than southern winter air was in, say, 1980. 

Implications of the Methane Surprise

Part of the problem with assessing methane’s implications is that most if not all scientists have not factored in the implications of 2015-2045 Arctic sea ice melt for temperatures just a bit further south.  This matters because methane clathrate melt should be understood as one half of a “double whammy”:  Arctic clathrate melt happening at the same time as permafrost melt.

Since (up to a point) methane has a very short half-life in the atmosphere (say, 8-10 years), you need a much greater rate of methane release into the atmosphere than of carbon dioxide (yes, methane contains carbon too, but methane has a much greater effect on global warming per ppm than carbon dioxide) to achieve a comparable warming effect over time. And yet, studies of the warming 55 million years ago, when the rate of warming was much less, indicate that methane had a major role in causing what Joe Romm at www.climateprogress.com calls “Hell and High Water”, with 90% of species wiped out.
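As a back-of-the-envelope illustration of that short-lifetime point, here is a small sketch that assumes the 8-10 year half-life quoted above and counts in units of carbon, so that each unit of decayed methane becomes one unit of CO2 (treated as effectively permanent on this timescale). It is just the decay arithmetic, not a climate model, and it says nothing about the relative warming per unit of each gas: a one-time pulse of methane is mostly gone within a few decades, so sustaining methane’s warming effect requires a continuing release, while the CO2 it decays into simply accumulates and stays.

```python
# Decay of a one-time methane pulse, using the text's ~9-year half-life.
# Units are "units of carbon," so each unit of decayed CH4 becomes one unit
# of CO2, which is treated as effectively permanent on this timescale.
import math

half_life_ch4 = 9.0                          # years, per the 8-10 year range above
k = math.log(2) / half_life_ch4              # annual decay constant

ch4 = 100.0                                  # methane pulse at year 0 (arbitrary units)
co2_from_ch4 = 0.0

for year in range(101):
    if year in (0, 10, 20, 50, 100):
        print(f"year {year:>3}: CH4 remaining = {ch4:6.1f}, CO2 produced = {co2_from_ch4:6.1f}")
    decayed = ch4 * (1 - math.exp(-k))       # fraction oxidized this year
    ch4 -= decayed
    co2_from_ch4 += decayed
```

Running the same loop with a constant yearly release instead of a single pulse levels off at a steady elevated methane burden of roughly the annual release divided by the decay constant, which is why a sustained increase in the release rate matters so much more than a one-off burp.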

Scientists have made a persuasive case that, even if methane clathrates and permafrost are starting to melt at the same time (i.e., even if we factor in Arctic sea ice melt), methane emissions will not reach a “danger point” where their effects in the atmosphere rival those of carbon emissions any time soon – and therefore will miss the main danger period of carbon emissions, before the depletion of fossil-fuel reserves begins to decrease those emissions on its own. The problem is that the Russian observations indicate that those reassurances rest on assumptions about the rate of methane clathrate “bubbling” that far underestimate its rate and/or amount.

So what, then, are the likely implications of this methane surprise?  As far as I can see, there are no “likely” implications, because the range of possible methane “bubble” rates, and therefore emission rates, over the next 40 years is so wide.   Nor is it clear to me, given that these emissions are occurring in such a localized northern area, just how wide an effect on global warming there will be.  However, my best guess is that over the next 40 years there will be a significant, localized effect:  methane emissions will increase Russian and lower-Arctic-sea heat retention by perhaps 25% over what it would otherwise have been, with a corresponding increase in average Arctic temperature and in Russian permafrost methane/carbon release.  This, in turn, may add perhaps ½ degree Celsius to global warming over the next 40 years – and a comparable amount over at least the 50 years following – always remembering that a major fraction of released methane turns into carbon dioxide, which hangs around in the atmosphere for a hundred years or so on average.  In other words, the major effect of the surprise may be a more long-term one:  the arrival of an additional ½-1 degree Celsius of “thousand-year” global warming now rather than later, when it would have less practical effect.   

The Weather Effect Surprise and Implications

Taking my cue from James Hansen and Joe Romm, I had guesstimated in 2009 that we in the US would first see constant, undeniable reminders that global warming is real in the 2020-2025 timeframe.  These reminders would include not only “hundred-year” hurricane wind speeds and scorching summers that created Dust Bowl conditions in many areas, but also an overall burden of disasters that reached 0.1% of GDP even for a country like the US.  I believe it was Heidi Cullen who imagined NYC missing a massive hurricane in 2017 and getting one in 2041, by which time the city was prepared, so that the sewers did not back up and overflow and cause hundreds of thousands of deaths from disease.

But, as Joe Romm noted in his blog, the things that should have been expected in 2020-2025 seem to be happening in 2011-2012, ranging from devastating Australian rainstorms to stupendous Russian wildfires to Hurricane Sandy and its 13-foot storm surge (still short of the 20-odd-foot surge that might force the sewer outlets to be closed and the sewers to back up, but enough to flood the subways and make downtown Manhattan, Queens, and Staten Island disaster areas).  It appears that the monetary cost of disasters globally, according to insurers like Munich Re, is 10 times what it was a decade ago, and there is no reason why it should not double or triple again over the next 10 years – meaning that the timetable for effects on global GDP should perhaps be moved up by 3-5 years.
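Just to make the growth arithmetic explicit, here is a sketch using only the rates quoted above plus an arbitrary loss threshold of my own choosing: tenfold growth in a decade works out to about 26% a year, while “double or triple” over a decade is roughly 7-12% a year, and the gap between those last two is what determines how many years earlier any fixed loss threshold arrives.

```python
# Growth-rate arithmetic for the disaster-loss figures quoted above.
# The 2.5x threshold below is arbitrary, chosen only to illustrate the
# "moved up by a few years" effect of faster growth.
import math

def annual_rate(factor, years):
    """Compound annual growth rate implied by growing `factor`x over `years`."""
    return factor ** (1 / years) - 1

for label, factor in [("10x per decade", 10), ("3x per decade", 3), ("2x per decade", 2)]:
    r = annual_rate(factor, 10)
    doubling = math.log(2) / math.log(1 + r)
    print(f"{label:>15}: {100 * r:4.1f}%/yr, doubling time ~{doubling:4.1f} years")

# Years until losses reach an arbitrary threshold of 2.5x today's level,
# under the "double" vs. "triple" per-decade growth rates.
threshold = 2.5
for factor in (2, 3):
    r = annual_rate(factor, 10)
    years_needed = math.log(threshold) / math.log(1 + r)
    print(f"{factor}x per decade: 2.5x threshold reached in ~{years_needed:4.1f} years")
```

In that toy comparison, tripling rather than doubling per decade pulls the arrival of the (arbitrary) threshold forward by about five years, which is in the same ballpark as the 3-5 year shift guessed at above.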

The surprise is not that global warming is happening faster than predicted – globally, 2012 was actually about average for the last decade, which in turn means that it was one of the “dips” in our steady, accelerating global temperature ascent. The surprise is that the effects on weather were larger than expected.  What I suspect is that forecasts simply assumed that certain catastrophic events would happen more frequently than before, but could not predict that these catastrophes would spread to areas they had not hit before (devastating tornadoes in western Massachusetts, for example) – areas less adapted to a new set of weather patterns. We are reaching the point where we have not only extremes of existing weather patterns, but also new climates that produce new weather patterns.

And so I also suspect that the effects of the “weather is changing faster than we expect” surprise are, like the methane surprise, bad but unpredictable.  I anticipate that the El Niño/La Niña cycle and the North Atlantic Oscillation patterns that have driven weather around here since time immemorial (i.e., the last 5,000 years) are changing, but will manifest first as longer versions of the extremes of those cycles – and that’s a total guess.  Certainly, an extended El Niño would mean even greater Dust Bowl conditions and summer heat over a large extent of the US for longer than ever before, even leaving out the effects of the ongoing global warming.  Initial predictions show the US, except the Northeast and Pennsylvania, in catastrophic drought conditions in 2050 – is it possible it could happen before then?

Boy, I hope not. But, as a clueless Presidential candidate noted in 2008, hope is not a plan.

Conclusions

Overall, oddly enough, the implications of these surprises for me are not great.  I concluded in 2009 that we desperately needed to cut carbon emissions in absolute terms by 40% between 2010 and 2020, and by another 40% between 2020 and 2030.  Since then, with extremely minor exceptions, all the major countries in the world have utterly wasted their time in that regard.  In fact, my definition of functional insanity is oil companies and countries treating Arctic sea ice melt as an opportunity for increased drilling for fossil-fuel carbon pollutants, and the United States treating a Keystone XL pipeline – which would lock in tar-sands extraction that sharply increases the likelihood of the end of all life on Earth – as an opportunity worth considering, much less coming relatively close to actually implementing it.

So, to my eyes, the two 2012 surprises simply speed up what’s coming and worsen its effects in relatively minor ways – and that will be a price worth paying if it makes people wake up now and start doing something globally effective.  Except that there’s little sign as yet that people and their leaders are even beginning to understand the urgency of an adequate scale of action.

I wonder what new surprises 2013 will bring?  I could really use some good news.

Tuesday, January 22, 2013

2012 Election: Big Data Makes It Easy to Shoot Yourself In The Foot


A recent post in MIT Technology Review about the use of Big Data by the parties in the 2012 election offers fascinating insights into what works – and what doesn’t.  Both sides had operations specifically designed to glean new insights about voters and apply them to helping their candidate win, and the article interviews the director of each effort shortly after the election.  As a result, we see long-term Big Data projects whose results have just been tested against the reality of the voter “market” and whose project reviews are relatively untainted by post-project “spin.”

I should note that I consider the article’s grandiose claims about the meaning of the Obama campaign’s relative success in Big Data analytics to be overblown.  Luckily, those claims take up only the first 20% of the article. The rest provides fascinating insight into two approaches to Big Data, one of which demonstrably worked better than the other.

Problem #1: Big Data as a “Hot Topic” vs. as a Strategy

The first contrast between the campaigns was that the Obama campaign chose to bring its analytical systems in-house, whereas the Romney campaign chose to use existing software and hardware run by someone else.  In fact, the Obama campaign committed major bucks to a Vertica solution way back in 2009.  This raises the question, however:  why did the two campaigns take such different approaches? After all, the Romney campaign also started quite early, around the same time, and had no real constraints on money spent.

The answer, I think, is that the Obama campaign viewed in-depth analytics and trawling the Internet for more voter insights as a strategy, while for the Romney campaign the concept of Big Data was fuzzier – more of a “hot topic” than a strategy.  We have all seen CEOs who say, “Everyone’s talking about this new technology; we must therefore do it.”  The flavor of Romney-campaign thinking, as reported in the article, was, “Big Data is clearly effective at the corporations we look at; we must therefore get something that purports to do Big Data and use it somehow to understand voters better.”

The result was a cautionary tale.  The Romney system involved a third party repeatedly merging two sets of legacy-application data.   As a result, breakdowns occurred under the stress of the late campaign, analytics was unnecessarily delayed, and so were the candidate’s responses. The Obama campaign suffered no such problems. It is not clear how much the Obama campaign’s faster response time helped (although it apparently helped change Obama’s style after the first debate).  However, it is clear that the Romney campaign’s fuzzy and delayed analytics helped lead to Problem #2.

Problem #2:  Hearing the “Customer of One” vs. Hearing What You Want To Hear

One of the great shocks of the campaign was that, in an unprecedented way, the Romney campaign and the Republican Party as a whole deluded themselves about what was going to happen.  And yet the campaign’s Big Data analysis mined a long tradition of understanding voter blocs, and could supposedly be cross-checked against a broad array of pollsters – most of whom were themselves deluded by their political biases.

As it turned out, even the closest estimate underestimated Obama’s margin by about 0.5%.  Moreover, on election day the Romney campaign actually thought it had a better-than-even chance of winning. In reality, to win it would have had to amass at least 5.3% more of the vote in a couple of battleground states than it did. To put it another way, it probably would have had to shift the vote percentages by more than 5% nationwide to win.  They weren’t even in the same ballpark.

By the way, for Nate Silver fans, he was off by about 1.5%.

There appeared to be two key insights that made this election different.  First, perhaps one-third of voters could only be reached for polling via cell phone – and they voted in a significantly different way.  Second, only perhaps 5% of the electorate reached the Labor Day start of the campaign with a changeable vote – again, a major difference from 20-40 years ago.  Gaffes, debates, Hurricane Sandy – none of it made that much difference. 

Given this, the Romney campaign’s one faint hope was voter turnout – unprecedented turnout for him in battleground states. And yet there was not the slightest appreciation of this in the campaign.  Instead, the Romney campaign and related groups spent enormous sums on TV ads in the obvious markets, to the point where one station in Ohio was running more ads than programming. The crime wasn’t that this was wasted money; it was that relatively little money was spent on “get out our vote.”

In fact, most of the Romney campaign Big Data folks’ attention was on a series of seemingly baffling Obama ads in small markets targeting small demographics.  As a result, by election day, the Big Data folks were suffering from the “fog of war”, trying to counter something they did not understand.

Again, by contrast, the Obama campaign was focused on a new insight garnered from its use of Big Data to identify the “customer of one”.  More specifically, it realized that although most votes were unchangeable, smaller pockets of “conservative” voters could actually change their minds if policies they valued and prioritized were brought to their attention. These voters could be reached by, say, social-program-highlighting ads aimed at conservative women in more rural Ohio markets.

An under-appreciated part of these ads was that they contained real content.   After all, “conservative” voters were being saturated with generic Romney ads.  To succeed, the Obama ads needed to point to real, verifiable programs.

And now we come to Problem #3.

Problem #3, the Endgame: Big Data Tunnel Vision vs. a Loosely-Coupled Strategy

The end of every political campaign is “get out the vote”:  maximize the number of one’s supporters who actually vote on election day (and, of course, the increasing percentage who vote before then). And this is where the most startling difference between the campaigns’ approaches to Big Data emerges.

It is apparent from various reports that on election day, the Romney campaign was prepared to focus its “turn out the vote” efforts on key demographics in battleground states, fine-tuned by what its Big Data analytics was telling it.  And then, right when voter-turnout efforts were supposed to start, the feed from its system went down.  And it stayed down for most of the day.

But what almost passes belief is what the campaign did about that.  Local offices were begging to go out and do “get out the vote” efforts anyway.  Instead, they were apparently told to wait until the system came up again.  Yes, theoretically, unfocused efforts could have done more harm than good.  Practically, however, it was overwhelmingly likely that “feet on the street” would have instead achieved slightly higher Romney turnout. And not only national but local campaign coordinators failed to realize that.

The problem, it appears, was that no one had autonomy, and all were focused on the Big Data part of the “get out the vote” strategy.  By contrast, the Obama campaign – partly due to an existing Internet-enabled strategy – granted local offices the ability to act proactively, and the Big Data “customer of one” focus was only part of a broader effort in battleground states to get out key Obama demographics, which were already well understood pre-Big Data.

Again, it is worth noting that this made little difference “on the day.” It possibly made a difference to several House seats, but not enough to overcome the effects of the 2010 election and its resulting Republican-dominated redistricting.  Note that the gap between the national House vote (Democrats +1%) and the resulting seat allocation (Republicans +4%) was apparently the largest ever recorded.

Implications for Organizational Big Data Use

We have seen, above, how the Romney organization used Big Data to shoot itself in the foot – and yet its strategies were superficially reasonable and well aligned with practices in many businesses.  The immediate recommendations for IT and the enterprise are relatively straightforward:

1. Treat Big Data as a strategy, not as a “hot topic.”  Understand that for it to be successful, it must provide greater depth and a more accurate view of the customer and of one’s own organization, and those insights need to be translated into fine-tuning of strategies sooner rather than later.

2. Focus on Big Data’s ability to understand customers not only more deeply as global “types” but in finer-grained groups, and ensure that the organization accepts a more realistic view of the customer.  Bluntly, the Romney campaign started believing its own propaganda; where have we seen that before?  And one reason was that Big Data was not telling it anything different.

3. Adopt an agile marketing strategy that does not hinge on command-and-control, top-down implementation. Agile marketing has a great respect for in-depth data.  However, it also has a great respect for the way that data reflects – or fails to reflect – actual customers, and customers who are constantly changing.

And that brings us to our final point.  As I’ve noted, the success or failure of Big Data efforts turned out to matter surprisingly little to the outcome of this particular election.  However, Republican post-mortem efforts show that their failure to understand its implications has drastically delayed – and we are talking more than a decade here – their adaptation to changing American demographics.  They have created a party culture that makes it extremely difficult for them to avoid permanent minority status nationally, because it carefully widened the divide with groups such as Hispanics in the name of getting out the older white male vote.

The point I am trying to make is that if Big Data cements an organization’s existing un-agile strategies in place, it is doing just as much harm as good – even if, for now, things seem to be going better than ever.  The real value of Big Data, done well, is that it not only enables you to understand your customer better, but also enables your organization to fit itself better to the customer – bottom-up as much as top-down – and enables both to understand how the customer is changing, not just what the customer is like now.

So what is it going to be?  Are you going to use Big Data to shoot yourselves in the foot, or to deliver better strategies, better implemented?  Inquiring Republicans want to know.