Wednesday, October 31, 2012

Happy Halloween

Just finished the candy-giving bit.

If I were to dress up as a monster, it would be as the Earth melting in an acid rain.

That's the most horrible thing I fear.

Trick or not!

Tuesday, October 16, 2012

Brief Thoughts on Data Virtualization and Master Data Management


It was nice to see, in a recent book I have been reading, some recognition of the usefulness of master data management (MDM), and of how the functions included in data virtualization solutions give users a flexibility in architecture design that’s worth its weight in gold (IBM and other vendors, take note). What I have perhaps not sufficiently appreciated in the past is data virtualization’s usefulness in speeding MDM implementation, as the book duly notes.

I think that this is because I have assumed that users will inevitably reinvent the wheel and replicate the functions of data virtualization themselves, in order to gain a spectrum of choices between putting all master data in a central database (and only there) and leaving the existing master data right where it is.  It now appears that they have been slow to do so.  And that, in turn, means that the “cache” of a data virtualization solution can act as that master-data central repository while preserving the far-flung local data that composes it. Or, the data virtualization server can provide discovery of the components of a customer master-data record, give users a sandbox in which to define a master data record, alert them to new data types that will require changes to the master record, and enforce consistency – all key functions of such a flexible MDM solution.
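To make this concrete, here is a minimal Python sketch of the idea (the names and structures are entirely hypothetical, not any vendor's API): master data stays in the source systems, a DV-style layer composes a virtual master record on demand, and the layer's cache doubles as the optional central repository.

from dataclasses import dataclass, field

# Two "source systems" left where they are -- here just dicts keyed by customer id.
crm_source = {"C1": {"name": "Acme Corp", "email": "ops@acme.example"}}
erp_source = {"C1": {"name": "ACME Corporation", "credit_limit": 50000}}

@dataclass
class VirtualMasterRecord:
    customer_id: str
    fields: dict = field(default_factory=dict)
    conflicts: list = field(default_factory=list)

class DVMasterDataLayer:
    """Composes a master record on demand and keeps a central cache of the result."""

    def __init__(self, sources):
        self.sources = sources   # the far-flung local data, untouched
        self.cache = {}          # acts as the optional central repository

    def get_master(self, customer_id):
        record = VirtualMasterRecord(customer_id)
        for source_name, source in self.sources.items():
            for key, value in source.get(customer_id, {}).items():
                if key in record.fields and record.fields[key] != value:
                    # Consistency enforcement: flag the discrepancy, don't silently overwrite.
                    record.conflicts.append((key, record.fields[key], value, source_name))
                else:
                    record.fields[key] = value
        self.cache[customer_id] = record   # central copy, derived rather than migrated
        return record

layer = DVMasterDataLayer({"crm": crm_source, "erp": erp_source})
master = layer.get_master("C1")
print(master.fields)      # the merged master record
print(master.conflicts)   # e.g. the two spellings of "name" to be reconciled

The point of the sketch is only the division of labor: the sources keep their data, and the central copy is a byproduct of composition rather than a migration project.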

But the major value-add is the speedup of implementing the MDM solution in the first place: it speeds the definition of master data and the writing of application-interface code on top of it, and it allows rapid but safe testing and upgrades. As the book says, abstraction gives these benefits.

Therefore, it continues to be worth it for both existing and new MDM implementations to seriously consider data virtualization.

Arctic Update


Today, it was confirmed that the Arctic sea ice area is farther below normal than ever before since record-keeping began. We are probably poised to say the same about the total global sea ice area (including the Antarctic). Meanwhile, those media who are saying anything are talking about a meaningless blip in Antarctic sea ice.

And, of course, there's the other litany of related news. Record temperatures for this date in parts of Greenland. September tied for the warmest global temperature since record-keeping began.  That kind of thing. 

Nothing to see here. This isn’t the human extinction event you aren’t looking for. You can move along.

Monday, October 15, 2012

Does Data Virtualization Cause a Performance Hit? Maybe The Opposite

In the interest of “truthiness”, consultants at Composite Software’s Data Virtualization Day last Wednesday said that the one likely tradeoff for all the virtues of a data virtualization (DV) server was a decrease in performance.  It was inevitable, they said, because DV inserts an extra layer of software between a SQL query and the engine performing that query, and that extra layer’s tasks necessarily increase response time. And, after all, these are consultants who have seen the effects of data-virtualization implementation in the real world. Moreover, it has always been one of the tricks of my analyst trade to realize that emulation (in many ways, a software technique similar to DV) must inevitably involve a performance hit – due to the additional layer of software.

And yet, I would assert that three factors difficult to discern in the heat of implementation may make data virtualization actually better-performing than the system as it was pre-implementation.  These are:
·         Query optimization built into the data virtualization server

·         The increasingly prevalent option of cross-data-source queries

·         Data virtualization’s ability to coordinate across multiple instances of the same database.
Let’s take these one at a time.

DV Per-Database Optimization


I remember that shortly after IBM released its DV product, around 2005, it did a study (if I recall correctly) in which it asked a bunch of its expert programmers to write a set of queries against an instance of IBM DB2, and then compared its product’s performance against the programmers’.  Astonishingly, the DV product won – and yet, seemingly, the deck was entirely stacked against it.  This was new programmer code optimized for the latest release of DB2, based on in-depth experience, and the DV product had that extra layer of software. What happened?
According to IBM, it was simply the fact that no single programmer could put together the kind of sophisticated optimization that was in the DV product. Among other things, this DV optimization considered not only the needs of the individual querying program, but also its context as part of a set of programs accessing the same database.  Now consider that in the typical implementation, the deck is not as stacked against DV: the programs being superseded may have been optimized for a previous release and never adequately upgraded, or the programmers who wrote them or kept them current with the latest release may have been inexperienced.  All in all, there is a significant chance (I wouldn't be surprised if it was better than 50%) that DV will perform better than the status quo for existing single-database-using apps “out of the box.”
Moreover, that chance increases steadily over time – and so an apparent performance hit on initial DV implementation will inevitably turn into a performance advantage 1-2 years down the line. Not only does the percentage of “older” SQL-involving code increase over time, but the need for periodic database upgrades (upgrading DB2 every two years, for example, should pay off in spades, according to my recent analyses) means that these DV performance advantages widen – and, if emulation is any guide, the performance cost from the extra layer never gets worse than 10-20%.
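To make the “workload context” point concrete, here is a deliberately toy Python sketch (my own illustration, and nothing like IBM's actual optimizer): if the optimizer can see a whole batch of programs' queries at once, a scan they all repeat can be done one time and shared.

from collections import Counter

# Each query is reduced here to the set of (table, predicate) scans it needs.
workload = [
    {("orders", "year = 2012"), ("customers", "region = 'EMEA'")},
    {("orders", "year = 2012"), ("products", "category = 'storage'")},
    {("orders", "year = 2012")},
]

def shared_scans(queries, min_uses=2):
    """Scans used by at least `min_uses` queries: candidates to run once and share."""
    counts = Counter(scan for q in queries for scan in q)
    return {scan for scan, n in counts.items() if n >= min_uses}

def plan(queries):
    shared = shared_scans(queries)
    scans_naive = sum(len(q) for q in queries)
    scans_shared = len(shared) + sum(len(q - shared) for q in queries)
    return shared, scans_naive, scans_shared

shared, naive, optimized = plan(workload)
print(shared)                  # the one scan every program repeats
print(naive, "->", optimized)  # 5 scans done separately vs. 3 with the shared scan done once

No single programmer optimizing one program at a time gets to make that trade; an optimizer that sees the whole set of programs does.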

The Cross-Data-Source Querying Option


Suppose you had to merge two data warehouses or customer-facing apps as part of a takeover or merger.  If you used DV to do so, you might see (as in the previous section) the initial queries to either app or data warehouse be slower.  However, it seems to me that’s not the appropriate comparison.  You have to merge the two somehow.  The alternative is to physically merge the data stores and maybe the databases accessing those data stores.  If so, the comparison is with a merged data store for which neither set of querying code is optimized, and a database for which one of the two sets of querying code, at the least, is not optimized. In that case, DV should have an actual performance advantage, since it provides an ability to tap into the optimizations of both databases instead of sub-optimizing one or both.
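As a concrete (and deliberately simplified) Python illustration of that comparison: the two companies' databases stay separate, each runs the piece of the query it is optimized for, and only the trimmed results are joined in the middle. Here sqlite3 merely stands in for the two real databases, and all the names are invented.

import sqlite3

# "Company A" keeps its customer database...
db_a = sqlite3.connect(":memory:")
db_a.execute("CREATE TABLE customers (id INTEGER, name TEXT, region TEXT)")
db_a.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, "Acme", "EMEA"), (2, "Globex", "APAC")])

# ...and "Company B" keeps its order database.
db_b = sqlite3.connect(":memory:")
db_b.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL, year INTEGER)")
db_b.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 1200.0, 2012), (1, 300.0, 2011), (2, 900.0, 2012)])

def emea_revenue_2012():
    # Each predicate is pushed down to the source that owns the data,
    # so each database optimizes its own piece with its own statistics.
    emea_customers = db_a.execute(
        "SELECT id, name FROM customers WHERE region = 'EMEA'").fetchall()
    ids = [row[0] for row in emea_customers]
    placeholders = ",".join("?" * len(ids))
    totals = db_b.execute(
        f"SELECT customer_id, SUM(amount) FROM orders "
        f"WHERE year = 2012 AND customer_id IN ({placeholders}) "
        f"GROUP BY customer_id", ids).fetchall()
    # Only the two reduced result sets are joined in the federation layer.
    names = dict(emea_customers)
    return [(names[cid], total) for cid, total in totals]

print(emea_revenue_2012())   # [('Acme', 1200.0)]

Contrast that with physically merging the two schemas first, where at least one set of queries ends up running against a database it was never tuned for.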
And we haven’t even considered the physical effort and time of merging two data stores and two databases (and, counting the operational databases, very possibly many more than that).  DV has always sold itself on its major advantages in rapid implementation of mergers – and has consistently proved its case.  It is no exaggeration to say that a year saved in merger time is a year of database performance improvement gained.
Again, as noted above, this is not obvious in the first DV implementation.  However, for those who care to look, it is definitely a real performance advantage a year down the line.
But the key point about this performance advantage of DV solutions is that coordinating multiple databases/data stores, instead of combining them into one or even feeding copies into one central data warehouse, is becoming a major use case and strategic direction in large-enterprise shops. It was clear from DV Day that major IT shops have finally accepted that not all data can be funneled into a data warehouse, and that the trend is indeed in the opposite direction.  Thus, an increasing proportion (I would venture to say, in many cases approaching 50%) of corporate in-house data is going to involve cross-data-source querying, as in the merger case. And there, as we have seen, the performance advantages are probably on the DV side, compared to physical merging.

DV Multiple-Instance Optimization


This is perhaps a consideration better suited to businesses abroad and to medium-sized businesses, where per-state or regional databases and/or data marts must be coordinated. However, it may well be a future direction for data warehouse and app performance optimization – see my thoughts on the Olympic database in a previous post. The idea is that these databases have multiple distributed copies of data. These copies have “grown like Topsy”, on an ad-hoc, as-needed basis.  There is no overall mechanism for deciding how many copies to create in which instances, or how to load balance across copies.
That’s what a data virtualization server can provide.  It automagically decides how to optimize given today’s incidence of copies, and ensures in a distributed environment that the processing is “pushed down” to the right database instance. In other words, it is very likely that data virtualization provides a central processing software layer rather than local ones – so no performance hit in most cases – plus load balancing and visibility into the distribution of copies, which allows database administrators to achieve further optimization by changing that copy distribution. And this means that DV should, effectively implemented, deliver better performance than existing solutions in most if not all cases.
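A bare-bones Python sketch of that copy-aware routing (hypothetical structures; a real server tracks far richer statistics): the layer knows which instances hold which tables, including the ad-hoc copies, pushes the query down to one of them, and load-balances among the copies.

import itertools

# Which regional instances hold a copy of which tables -- grown "like Topsy".
copy_map = {
    "customers": ["db_east", "db_west"],
    "orders":    ["db_east"],
    "inventory": ["db_west", "db_central"],
}

# Round-robin state per table, as a crude stand-in for real load balancing.
_next_copy = {table: itertools.cycle(instances)
              for table, instances in copy_map.items()}

def route(table, predicate):
    """Pick an instance holding the table and 'push down' the work to it."""
    if table not in copy_map:
        raise LookupError(f"no instance holds a copy of {table}")
    instance = next(_next_copy[table])
    # A real server would dispatch SQL to the chosen instance;
    # here we just report the routing decision.
    return f"SELECT ... FROM {table} WHERE {predicate}  -- sent to {instance}"

print(route("customers", "region = 'EMEA'"))  # alternates between db_east and db_west
print(route("customers", "region = 'EMEA'"))
print(route("orders", "year = 2012"))         # only db_east holds orders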
Where databases are used both operationally (including for master data management) and for a data warehouse, the same considerations may apply – even though we are now in cases where the types of operation (e.g., updates vs. querying) and the types of data (e.g., customer vs. financial) may be somewhat different. One-way replication with its attendant ETL-style data cleansing is only one way to coordinate the overall performance of multiple instances, not to mention queries spanning them.  DV’s added flexibility gives users the ability to optimize better in many cases across the entire set of use cases.
Again, this advantage may not have been perceived (a) because not many implementers are focused on the multiple-copy case and (b) because DV implementation is probably compared against the performance of each individual instance, instead of (or as well as) against the multiple-instance database as a whole. Nevertheless, at least theoretically, this performance advantage should appear – especially because, in this case, the “extra software layer” should not typically add any DV performance cost.

The User Bottom Line:  Where’s The Pain?


It seems that, theoretically at least, we might expect DV implementation to deliver actual performance gains over “business as usual” within the next 1-2 years in the majority of use cases, and that this proportion should increase, both over time after implementation and as corporations’ information architectures grow more elaborate. The key to detecting these advantages now, if the IT shop is willing to do it, is more sophisticated metrics about just what constitutes a performance hit or a performance improvement, as described above.
So maybe there isn’t such a performance tradeoff for all the undoubted benefits of DV, after all.  Or maybe there is.  After all, there is a direct analogy here with agile software development, which seems to lose by traditional metrics of cost efficiency and attention to quality, and yet winds up lower-cost and higher-quality after all.  The key secret ingredient in both is the ability to react or proact rapidly in response to a changing environment, and better metrics reveal that overall advantage. But the “tradeoff” for both DV and agile practices may well be the pain of embracing change instead of reducing risk.  Except that practitioners of agile software development report that embracing change is actually a lot more fun.  Could it be that DV offers a kind of support for organizational “information agility” that has the same eventual effect: gain without pain?
Impossible. Gain without pain. Perish the very thought.  How will we know we are being organizationally virtuous without the pains that accompany that virtue?  How could we possibly improve without sacrifice? 
Well, I don’t know the answer to that one.  However, I do suggest that maybe, rather than the onus being on DV to prove it won’t torch performance, the onus should perhaps be on those advocating the status quo, to prove it will.  Because it seems to me that there are plausible reasons to anticipate improvements, not decreases, in real-world performance from DV.

Wednesday, October 10, 2012

Data Virtualization: Users Grok What I Said 10 Years Ago


Ten years ago I put out the first EII (now data virtualization) report.  In it I said:
·         The value of DV is both in its being a database veneer across disparate databases, and in its discovery and storage of enterprise-wide global metadata
·         DV can be used for querying and updates
·         DV is of potential value to users and developers and administrators. Users see a far wider array of data, and new data sources are added more quickly/semi-automatically. Developers have “one database API to choke.” Administrators can use it to manage multiple databases/data stores at a time, for cost savings
·         DV can be great for mergers, metadata standardization, and as a complement to existing enterprise information architectures (obviously, including data warehousing)
·         DV is a “Swiss army knife” that can be used in any database-related IT project
·         DV is strategic, with effects not just on IT costs but also on corporate flexibility and speed to implement new products (I didn’t have the concept of business agility then)
·         DV can give you better access to information outside the business.
·         DV can serve as the “glue” of an enterprise information architecture.
I’m at DV Day 2012 (Composite Software) in NYC. Today, for the first time, I have heard not just vendors talking about implementing these things, but users actually doing them – global metadata, user self-service access to the full range of corporate data, development using SQL access, use for updating in operational databases, use for administration of at least the metadata management of multiple databases as well as administrative tasks such as archiving, use in mergers, use in “data standardization”, use as an equal partner with data warehousing, use in just about every database-related IT project, selling it as strategic to the business with citations of impacts on the bottom line through strategic products as well as “business agility”, use to access extra-enterprise Big Data across multiple clouds, and use as the framework of an enterprise information architecture.

I just wanted to say, on a personal note, that this is what makes being an analyst worthwhile.  Not that I was right 10 years ago. That, for once, by the efforts of everyone pulling together to make the point and elaborate the details, we have managed to get the world to implement a wonderful idea. I firmly believed that, at least in some small way, the world would be better off if we implemented DV. Now, it’s clear that that’s true, and that DV is unstoppable.

And so, I just wanted to take a moment to celebrate.

Monday, October 8, 2012

A Naive Idea About the Future of Data Virtualization Technology


On a weekend, bleak and dreary, as I was pondering, weak and weary, on the present state and future of data virtualization technology, I was struck by a sudden software design thought.  I have no idea whether it’s of worth; but I put it out there for discussion.

The immediate cause was an assertion that one future use for data virtualization servers was as an appliance – in its present meaning, hardware on which, pretty much, everything was designed for maximum performance of a particular piece of software, such as an application, a database, or, in this case, a data virtualization solution.  That I question:  by its very nature, most keys to data virtualization performance lie on the servers of the databases and file management tools data virtualization servers invoke, and it seems to me likely that having dedicated data-virtualization hardware will make the architecture more complicated (thereby adding administrative and other costs) to achieve a minimal gain in overall performance.  However, it did lead to my thought, and I call it the “Olympic database.”

The Olympic Database

Terrible name, you say.  This guy will never be a marketer, you say.  That’s true. In fact, when, as a programmer at Prime Computer, I was asked to suggest a name for graphical-user-interface software, I suggested Primal Screens.  For some reason, no one’s asked me to name something since then.

Anyway, the idea runs as follows.  Assemble the usual array of databases (and Hadoop, yada).  Each will specialize in handling particular types of data.  One can imagine splitting relational data between that suited to columnar storage and that not so suited, and then applying a columnar database to the one and a traditional relational database to the other, as Oracle Exadata appears to do. But here’s the twist:  each database will also contain a relatively small subset of the data in at least one other database – maybe of a different type.  In other words, up to 10%, say, of each database will be a duplicate of another database – typically, the data that cross-database queries will most often want, or the data that, in the past, a database would have incorporated just to save the time of switching between databases.  In effect, each database will have a cache of data in which it does not specialize, with its own interface to it, SQL or other.

On top of that, we place a data virtualization server.  Only this server’s primary purpose is not necessarily to handle data of varying types that a particular database can’t handle.  Rather, the server’s purpose is to carry out load balancing and query optimization across the entire set of databases.  It does this by choosing the correct database for a particular type of data – any multiplexer can do that – but also by picking the right database among two or several options when all the data is found in two or more databases, as well as the right combination of databases when no one database has all the data needed.  It is, in effect, a very flexible method of sacrificing some disk space for duplicate data for the purpose of query optimization – just as the original relational databases sacrificed pure 3NF and found that duplicating data in a star or snowflake schema yielded major performance improvements in large-scale querying.
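Since the Olympic database is my own invention, the following Python sketch is purely illustrative of the routing rule just described: each engine holds its specialty plus a small cached slice of a neighbor's data, and the virtualization layer picks a single engine when one covers the whole query, or else the cheapest covering combination.

from itertools import combinations

# What each engine holds: its specialty plus a small cached overlap (the "ring" intersections).
engines = {
    "columnar":  {"sales_facts", "customer_dim"},   # customer_dim is the cached slice
    "row_store": {"customer_dim", "order_detail"},
    "hadoop":    {"clickstream", "order_detail"},   # order_detail is the cached slice
}

# Crude per-engine cost weights standing in for real optimizer statistics.
cost = {"columnar": 1.0, "row_store": 1.5, "hadoop": 3.0}

def route(query_tables):
    needed = set(query_tables)
    # Prefer a single engine whose data (specialty plus cache) covers the query.
    covering = [name for name, tables in engines.items() if needed <= tables]
    if covering:
        return [min(covering, key=cost.get)]
    # Otherwise pick the cheapest combination of engines that covers it.
    for size in range(2, len(engines) + 1):
        options = [combo for combo in combinations(engines, size)
                   if needed <= set().union(*(engines[e] for e in combo))]
        if options:
            return list(min(options, key=lambda combo: sum(cost[e] for e in combo)))
    raise LookupError("no combination of engines covers the query")

print(route(["sales_facts", "customer_dim"]))   # ['columnar']: the cached copy avoids a cross-engine join
print(route(["customer_dim", "clickstream"]))   # ['columnar', 'hadoop']: the cheapest covering pair

The interesting knob here is the contents of the cached slices, which, unlike a vendor's fixed choice of data types, can be changed as the workload changes.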

Now picture this architecture in your mind, with the data stores as rings.  Each ring will intersect in a small way with at least one other data-store “ring” of data.  Kind of like the Olympic rings.  Even if it’s a terrible name, there’s a reason I called it the Olympic database.

It seems to me that such an Olympic database would have three advantages over anything out there:  specialization in multiple-data-type processing in an era in which that’s becoming more and more common, a big jump in performance from increased ability to load balance and optimize across databases, and a big jump in the ability to change the caches and hence the load to balance dynamically – not just every time the database vendor adds a new data type.

Why Use a Data Virtualization Server?

Well, because most of the technology is already there – and that’s certainly not true for other databases or file management systems.  To optimize queries, the “super-database” has to know just which combination of specialization and non-specialization will yield better performance – say, columnar or Hadoop “delayed consistency”.  That’s definitely something a data virtualization solution and supplier knows in general, and no one else does. We can argue forever about whether incorporating XML data in relational databases is better than two specialized databases – but the answer really is, it depends; and only data virtualization servers know just how it depends.

The price for such a use of a data virtualization server would be that data virtualization would need to go pretty much whole hog in being a “database veneer”:  full admin tools, etc., just like a regular database. But here’s the thing:  we wouldn’t get rid of the old data virtualization server. It’s just as useful as it ever was, for the endless new cases of new data types that no database has yet combined with its own specialization.  All the use cases of the old data virtualization server will still be there.  And an evolution of the data virtualization server would accept a fixed number of databases to support, with a fixed number of data types, in exchange for doing better than any of the old databases could under those conditions.

Summary

Ta da! The Olympic database! So what do you think?  

Agile Marketing and the Concept of Customer Debt


One of the fascinating things about the agile marketing movement is its identification of leverageable similarities with agile development, as well as the necessary differences.  Recently, the Boston branch of the movement identified another possible point of similarity:  an analogy with the agile-development concept called “technical debt.” I promised myself I’d put down some thoughts on the idea of “customer debt”, so here they are.

What Is Technical Debt?


Over the last few years, there has been increasing “buzz” in agile development circles and elsewhere about the concept of “technical debt.” Very briefly, as it stands now, technical debt appears to represent the idea of calculating and assigning the costs of future repair to deferred software maintenance and bug fixes, especially those tasks set aside in the rush to finish the current release. In fact, “technical debt” can also be stretched to include those parts of legacy applications that, in the past, were never adequately taken care of. Thus, the technical debt of a given piece of software can include:

1.       Inadequacies in documentation that will make future repairs or upgrades more difficult if not impossible.

2.       Poor structuring of the software (typically because it has not been “refactored” into a more easily changeable form) that makes future changes or additions to the software far more difficult.

3.       Bugs or flaws that are minor enough that the software is fully operational without them, but that, when combined with similar bugs and flaws such as those that have escaped detection or those that are likewise scanted in future releases, will make the software less and less useful over time, and bug repairs more and more likely to introduce new, equally serious bugs. It should be noted here that “the cure is worse than the disease” is a frequently reported characteristic of legacy applications from the 1970s and before – and it is also beginning to show up even in applications written in the 2000s.

For both traditional and agile development, the rubber meets the road when release time nears. At that point, the hard decisions, about what functionality goes into the release and what doesn’t, and what gets fixed and what doesn’t, must finally be made. This is the point at which technical debt is most likely to be incurred – and, therefore, presenting “technical debt” costs to the project manager and corporate to influence their decisions at this point is a praiseworthy attempt to inject added reality into those decisions. This is especially true for agile development, where corporate is desperately clinging to outdated metrics for project costs and benefits that conflict with the very culture and business case of agile development, and yet the circle must be squared. Technical debt metrics simply say at this point:  Debt must be paid. Pay me now, or pay me more later.
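“Pay me now, or pay me more later” can be put in toy numerical terms. The figures in the Python sketch below are mine and purely illustrative, chosen only to show the shape of the argument, not any standard costing formula: each deferred fix costs a bit more every release it is put off.

def deferred_fix_cost(cost_now, interest_per_release, releases_deferred):
    """Estimated cost of making a fix after deferring it for N releases."""
    return cost_now * (1 + interest_per_release) ** releases_deferred

cost_now = 10_000   # hypothetical: what it would cost to fix in this release
for releases in (0, 2, 6, 12):
    later = deferred_fix_cost(cost_now, interest_per_release=0.15, releases_deferred=releases)
    print(f"deferred {releases:2d} releases: ${later:,.0f}")
# deferred  0 releases: $10,000
# deferred  2 releases: $13,225
# deferred  6 releases: $23,131
# deferred 12 releases: $53,503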

I have several concerns about technical debt as it is typically used today.  However, imho, the basic concept is very sound. Can we find a similar concept in marketing?

What’s My Idea of Customer Debt?


Here I focus on product marketing – although I am assured that there are similar concepts in branding and corporate messaging, as well. Product marketing can be thought of as an endless iteration of introduction of new solutions to satisfy a target audience:  the customer (customer base, market). However, as I argue in a previous discussion of continuous delivery, there is some sense in which each solution introduction falls behind the full needs of the customer:  because it was designed for a previous time when customer needs were a subset of what they are now, or because hard decisions had to be made when release time neared about what stayed and what was “deferred.” Customer debt I see as basically both of those things: lagging behind the evolution of customer needs, and failing to live up to all of one’s “promises.” 

More abstractly, in an ongoing relationship between customer and vendor, the vendor seeks to ensure good customers’ loyalty by staying as close as possible to the customer, including satisfying the customer’s needs as they change, as well as it is able. Yes, the market may take a turn like the iPhone that is very hard to react swiftly to; but within those constraints, the agile marketer should be able, in both solution design and messaging, to capture both customer needs and the predictable evolution of those needs.  “We just don’t have the budget” for communicating the latest and greatest to a customer effectively enough, “but we’ll do it when the product succeeds,” and “sorry, development or engineering just can’t fit it into this release, so we have to make some hard choices, but it’ll be in the next release,” are examples of this kind of customer debt. 

The problem with this kind of customer debt is that it is not as easy to measure as technical debt – and technical debt isn’t that easy to measure. However, I believe that it should be possible to find some suggestive metrics in many agile marketing efforts.  The project backlog is an obvious source of these.  There will always be stuff that doesn’t get done, no matter how many sprints you can do before the grand announcement.  There should be a way (using analytics, of course) to project the effect of not doing them on immediate sales.  However, as in the case of technical debt, costs (e.g., of lost customer loyalty and branding opportunities) should rise sharply the longer you fail to clear up the backlog. I don’t yet see a clear way of identifying the rate of growth of such costs.
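Here is a first cut at that kind of metric, as a Python sketch; every number and backlog item below is hypothetical, and the growth rates are exactly the part I said I don’t yet see a clear way to pin down.

deferred_backlog = [
    # (item, estimated immediate sales impact per quarter, assumed quarterly growth of that cost)
    ("launch webinar for the new feature",    15_000, 0.10),
    ("refresh competitive battle cards",       8_000, 0.20),
    ("customer-needs analytics deep dive",    20_000, 0.25),
]

def customer_debt(backlog, quarters_deferred):
    """Very rough projected cost of leaving the whole backlog undone for N quarters."""
    total = 0.0
    for _, base_cost, growth in backlog:
        for q in range(quarters_deferred):
            total += base_cost * (1 + growth) ** q
    return total

for quarters in (1, 2, 4):
    print(f"deferred {quarters} quarter(s): ~${customer_debt(deferred_backlog, quarters):,.0f}")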

The Value of the Concept of Customer Debt


To me, the key value of this concept is that up to now, in my experience, neither corporate hearing marketing’s plans nor, to some extent, marketing itself has really assessed the damage done by delaying marketing efforts, and therefore has tended to assume there is none – or, at least, none that every other vendor isn’t dealing with. I think it is very possible that once we take a closer look at customer debt, as in the case of technical debt, we will find that it causes far more damage than we assumed.  And, again as in the case of technical debt, we will have numbers to put behind that damage. 

I am reminded of my colleague David Hill’s story about the Sunday school class asked if each would like to go to Heaven. All agreed except little Johnny.  What’s the matter, Johnny, asked the teacher, don’t you want to go to Heaven?  Oh, yes, Johnny replied, but I thought you meant right now.  In the same way, those who would abstractly agree that deferring marketing dollars might hurt a customer relationship but not really mean it might have a very different reaction when confronted with figures showing that it was hurting sales right now – not to mention even worse in the immediate future.

The Marketing Bottom Line


Lest you continue to think that losses from piling up customer debt are not likely to be substantial, let me draw an example from technical debt in projects in which features are delayed, or are not added to meet new customer needs arriving over the course of the project.  It turns out that these losses are not confined to the next release, even if we fix the problems there. On the contrary – and this is an absolutely key point – by deciding not to spend on this kind of technical debt, the organization is establishing a business rule, and that means that in every other release beyond this one, the organization will be assumed to apply the same business rule – which means, dollars to donuts, that in all releases involving this software over, say, the next two years, the organization will not fix this problem either.

Wait, bleat the managers, of course we’ll fix it at some point. Sorry, you say, that is not the correct investment assessment methodology. That is wrong accounting. In every other case in your books, a business rule stays a business rule, by assumption, unless it is explicitly changed. Anything else is shoddy accounting. And you can tell them I said so. 

(If you want to go into the fine details, by the way, the reason this is so is that the organization may fix this problem in the next release, but it will defer another problem in its place. So, instead of one problem extended over two years, you have 24 monthly problems, if you release monthly. Same effect, no matter the release frequency.)

So what does this mean? It means that, according to a very crude experience-suggested guess, over those two years, perhaps an average of 2-3 added major features will be delayed by about a month. Moreover, an average of 2-3 other major new features will have been built that depend on this software, and so, whether or not you do fix the original problem, these will be delayed by up to a month. It is no exaggeration to say that major “technical debt” in today’s terms equates to moving back the date of delivery for all releases of this particular product by a month. That is in strict accounting terms, and it is conservatively estimated (remember, we didn’t even try to figure out the costs of software maintenance, or the opportunity costs/revenue losses of product releases beyond that).

So, to sum up, my version of technical debt is an estimation of the costs from a decrease in business agility. From my surveys, agile software new product development (NPD) appears to deliver perhaps 25% gross margin improvements, year after year, over the long term, compared to traditional NPD. 1/24th of that is still a decrease of more than 1% in gross margin – not to mention the effects on customer satisfaction (although, today, you’re probably still way ahead of the game there). Let’s see, do this on 25 large parts of a large piece of software and – oops, I guess we don’t seem to be very agile any more. Wonder what happened? 
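For those who like the arithmetic spelled out, here it is in a few lines of Python; the 25% figure is the survey-suggested agile NPD gross-margin advantage mentioned above, and the rest follows from it.

agile_margin_advantage = 0.25      # year-over-year gross margin improvement vs. traditional NPD
monthly_releases_over_two_years = 24

# One month of delay, applied across all releases of the product for two years,
# forfeits roughly 1/24th of that advantage.
margin_lost_per_major_debt_item = agile_margin_advantage / monthly_releases_over_two_years
print(f"{margin_lost_per_major_debt_item:.2%}")        # about 1.04% of gross margin

# Now do this on 25 large parts of a large piece of software...
print(f"{25 * margin_lost_per_major_debt_item:.2%}")   # ...and the whole 25% advantage is gone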

Now, translate that to marketing terms:  an advertising venue unexploited, a product feature delayed, a misidentification of the customer from too-low analytics spending. Again, we are talking about an integral part of new product development – the marketing end. And we are talking just as directly about customer satisfaction.  Why shouldn’t the effects of customer debt be comparable to the effects of technical debt? 

So my initial thoughts on customer debt are:

·         It’s a valuable concept.

·         It can potentially be a valuable addition to agile marketing project metrics.

·         Agile marketers should think about it.

Sunday, October 7, 2012

Tipping Points Considered Dangerous


A while back, discussions of Arctic sea ice, methane, and other related matters seemed dominated by the idea that there was a “tipping point” involved, a point before which we could return to the halcyon equilibria of yore, and after which we were irrevocably committed to a new, unspecified, but clearly disastrous equilibrium.  Surprisingly, this idea was recently revived as the overriding theme of a British Government report assessing trends in Arctic sea ice and their likely effects on the UK itself.  It is cast as a debate between Profs. Slingo of the Met Office and Wadhams, and the report comes out in indirect but clear support of Wadhams’ position that these trends are in no sense “business as usual”.  However, it casts this conclusion as the idea that there are “tipping points” in methane emissions, carbon emissions, and sea ice extent, that these are in danger of being crossed, and that once they are crossed the consequences are inevitable and dire – an idea that seems prevalent in national discussions of an emissions “target” of no more than enough tonnage to cause no more than 2 degrees Centigrade of warming by 2100.

Here, I’ll pause for a bit of personal reminiscence. My late father-in-law, who was a destroyer captain in WW II, told me that once during the early days of the US’ involvement, during a storm in the North Atlantic, the destroyer heeled over by 35 degrees. Had it heeled over by 1 or 2 more degrees, it would have turned turtle and probably all lives aboard would have been lost. As it was, it righted itself with no casualties. 
That, to me, is a real “tipping point”. The idea is that up to a certain amount of deviation, the tendency is to return to the equilibrium point; beyond that, a new equilibrium results. 35 degrees or less, the ship tends to return to an upright position; beyond that, it tends to go to an upside down position, and stay there.

So what’s wrong with applying the idea of such a tipping point to what’s going on in climate change?  Superficially, at least, it’s a great way to communicate urgency, via the idea that even if it’s not obvious to all that there’s a problem, we are rapidly approaching a point of no return.

Problem One: It Ain’t True

More specifically, if there ever was a “tipping point” in Arctic sea ice, carbon emissions, and methane emissions, we are long past it.  The correct measure of Arctic sea ice trends, now validated by Cryosat, is volume. That has been on an accelerating downward trend just about since estimates began in 1979, clearly driven by global warming, which in turn is clearly driven by human-caused carbon emissions. The growth of carbon in the atmosphere has itself accelerated, from about 1 ppm/year at the start of measurements to about 2.1-2.5 ppm/year today. Methane emissions from natural sources (a follow-on to carbon emissions’ effect on rising global temperature) were not clearly a factor until very recently, but it is becoming clear that they have risen a minimum of 20-30% over the last decade, and are accelerating. By way of context, these methane emissions are accompanied by additional carbon emissions beyond those in present models, with the methane emissions being about 3% and the carbon emissions being about 97% of added emissions from such sources as permafrost, but with the methane being 20 to 70 times as potent, for a net effect that is double or triple that of the added carbon emissions alone – an effect that adds (in a far too optimistic forecast) around 0.5 to 1 degree Celsius to previous warming forecasts by 2100.
 
In other words, it is extremely likely that the idea of keeping global warming to 2 degrees Celsius is toast, even if atmospheric carbon dioxide levels off at around 450 ppm.

Problem Two: We Need To Understand It Can Always Get Worse

Yet the problem with trying to combat global warming deniers, or to make things plain to folks reasonably preoccupied with their own problems, by saying “we’re on a slippery slope, we’re getting close to a disaster,” is that such a message is all too easily obfuscated or denied, and the sayer labeled as one who “cries wolf.” Rather, we need to communicate the idea of a steadily increasing problem in which doing nothing is bad and doing the wrong thing (in this case, adapting to climate change by using more energy for air conditioning and therefore drilling for more oil and natural gas, increasing emissions) is even worse. This idea is one that all too many voters in democracies find hard to understand, as they vote to “throw the bums out” when the economy turns bad without being clear about whether the alternative proposal is better.  How’s that working out for you, UK?

The sad fact is that even when things are dreadful, they can always get worse – as Germany found out when it went from depression and a Communist scare to Hitler. Avoiding that requires that both politicians and voters somehow manage to find better solutions, not just different ones.  For example, in Greece today, it appears (yes, I may be uninformed) that one party that was briefly voted in may well have had a better solution, one that involved questioning austerity and renegotiating the terms of European support. The alternatives were two parties committed to doing nothing, and one far-right party committed to unspecified changes in government that probably threatened democracy. After failing to give the “good” party enough power in one election, the voters returned power to the two do-nothing parties, with the result that the situation continues to get worse. Now, more than a fifth of voters have gravitated to the far-right party, which would manage to make things yet worse.

And that is the message that climate change without tipping points is delivering: not that changing our ways is useless because we have failed to avoid a tipping point, but that doing the right thing is becoming more urgent, because if we do nothing, things will get worse in an accelerating fashion, and if we do the wrong thing, things will get even worse than that.  Tipping point?  One big effort, and it’ll be over one way or another.  Accelerating slide? You pay me now, or you pay me much more later.

Or Is That Au Revoir?

An old British comedy skit in the revue Beyond the Fringe, a take-off on WW II movies, has one character tell another:  “Perkins, we need a futile gesture at this stage. Pop over to France. Don’t come back.” The other responds: “Then goodbye, sir. Or perhaps it’s au revoir [until we meet again]?” The officer looks at him and simply says “No, Perkins.”

The idea of a tipping point in climate change is like that hope that somehow, some way, things just might return to the good old days. But there is no au revoir.  Say goodbye to tipping points. Say hello to “it can always get worse.”