Friday, May 27, 2016

Climate Change: More “Business As Unusual”

The extreme highs in atmospheric carbon continue.  This week is virtually certain to surpass 408 ppm, and therefore the month as a whole is virtually certain to surpass 407.6 ppm – a new historic high, and probably more than 4 ppm greater than last year.  If trends continue, the year-over-year increase will probably set another record, with the gain over the first five months of this year running well ahead of last year's average gain of 3.05 ppm (my off-the-cuff estimate so far is 3.5 ppm).  To repeat, this is a greater increase, in both absolute and percentage terms, than during the last comparable el Nino.  It again calls into question whether our present efforts (as opposed to those the Paris climate talks committed us to) are really doing anything significant to avoid "business as usual."

Meanwhile, Arctic sea ice extent is far below even the previous record low for this time of year.  The apparent cause is increased sea and air temperatures plus favorable melting conditions in both the North Atlantic and North Pacific, partially supplemented by unusually early “total melt” on land in both Canada/Alaska and Scandinavia/Russia.  Unless present weather trends sharply change over the next 3 months, new record Arctic ice lows are as likely as not, and “ice-free” (less than 1 million sq kilometers) conditions are for the first time a (remote) possibility.

Finally, here are the climate change thoughts of the Republican candidate for President of the United States, with commentary (which I echo) from David Roberts:
[The excerpt starts with footage from a press conference, followed by an introduction, followed by Trump’s speech]
This is ... not an energy speech.
Trump now arguing that coal mining is a delightful job that people love to do.
Trump wants the Keystone pipeline, but he wants a better deal -- 'a piece of the profits for Americans.'
Facepalm forever.
'We're the highest taxed nation, by far.' That is flatly false, not that anyone cares.
75% of federal regulations are 'terrible for the country.' Sigh.
'I know a lot about solar.' Followed by several grossly inaccurate assertions about solar.
I can't believe I just got suckered into watching a Trump press conference.
And now I'm listening to bad heavy metal as I wait for the real speech. This day has become hallucinatory.
Speech finally getting under way. Oil baron Harold Hamm here to introduce Trump.
Oh, good, Hamm explained energy to Trump in 30 minutes. We're all set.
Here's ND's [North Dakota] own Kevin Cramer, representing the oil & gas industry. Thankfully, he's gonna keep it short.
What even is this music?
Trump loves farmers. 'Now you can fall asleep while we talk about energy.' Wait what.
Trump now repeating Clinton coal 'gaffe,' a story the media cooked up & served to him. Awesome, media.
Honestly, Trump sounds like he's reading this speech for the first time. Like he's reacting to it, in asides, as he reads it.
Trump promises 'complete American energy independence -- COMPLETE.' That is utter nonsense.
This is amazing. He is literally reading oil & gas talking points, reacting to them in real time. We're all discovering this together.
Clinton will "unleash" EPA to "control every aspect of our lives." Presumably also our precious bodily fluids.
Trump is having obvious difficulty staying focused on reading his speech. He so badly wants to just do his freestyle-nonsense thing.
Can't stop laughing. He's reading this sh** off the teleprompter & then expressing surprise & astonishment at it. Reading for the 1st time!
He can't resist. Wandering off into a tangent about terrorism. Stay focused, man!
Oh, good, renewable energy gets a tiny shoutout. But not to the exclusion of other energies that are "working much better."
We're gonna solve REAL environmental problems. Not the phony ones. Go ahead, man, say it ...
(1) Rescind all Obama executive actions. (2) Save the coal industry (doesn't say how). (3) Ask TransCanada to renew Keystone proposal.
(4) Lift all restrictions on fossil exploration on public land. (5) Cancel Paris climate agreement. (6) Stop US payments to UN climate fund.
(7) Eliminate all the bad regulations. (8) Devolve power to local & state level. (9) Ensure all regs are good for US workers.
(10) Gonna protect environment -- "clean air & clean water" -- but, uh, not with regulations!
(11) Lifting all restrictions on fossil fuel export will, according to right-wing hack factory, create ALL THE BILLIONS OF US MONIES.
Let us pause to note: this is indistinguishable from standard GOP energy policy, dating all the way back to Reagan. Rhetoric unchanged.
"ISIS has the oil from Libya."
Notable: not a single mention of climate change, positive or negative. Just one passing reference to "phony" environmental problems.
And this attitude, echoed in the Senate and House of Representatives, is what keeps the US from far more effective action on climate change.  To slightly alter George Lucas in the Star Wars series: this is how humanity ends most of itself – to thunderous applause.
That is all.

Tuesday, May 17, 2016

Climate Change and Breaking the World Economy By 2050

According to an article at www.climateprogress.com, a recent World Bank report finds that the worldwide cost of disasters increased 10-fold between 1980 and 2010 (using ten-year averages with 1980 and 2010 as the midpoints), to $140 billion per year.  Meanwhile, the world economy is at about $78 trillion ("in nominal terms").  Projecting both trends forward, in 2040 the cost of disasters will be about $1.4 trillion, and (assuming a 2% growth rate per year) that will be roughly 1% of the world economy.  By 2050, with the same trends, the cost of disasters will be about $3 trillion, and the world economy will in fact be shrinking.

Up to now, the world's economy has in effect been able to shrug off rising disaster costs by growing, so that we now have far more investment resources to apply to any problem than we did in 1980.  Sometime between 2045 and 2050, that process goes into reverse:  we may still be growing, but each year's disaster costs will swallow up the increase.  So if we have not put ourselves on a better disaster path by 2045, by handling climate change better than we have so far, it will be far more difficult to avoid the collapse of the global economy after that – stagnation, followed by spotty and then pervasive downward spirals in national economies.
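For the curious, here is a rough back-of-envelope version of that projection in Python (my reconstruction, using the round numbers quoted above, not the World Bank's own model).  It just compounds the two trends and reports the year in which annual disaster costs first exceed the year's growth increment in world output:

```python
# Rough reconstruction of the projection above (assumptions mine, not the
# World Bank's): disaster costs keep growing tenfold every 30 years from
# $0.14 trillion in 2010, while the world economy grows 2% per year from
# roughly $78 trillion in 2016.
disaster_growth = 10 ** (1 / 30)   # tenfold per 30 years, i.e. ~8% per year
disasters = 0.14                   # $ trillions per year, as of 2010
economy = 78.0                     # $ trillions, as of 2016

for year in range(2011, 2071):
    disasters *= disaster_growth
    if year > 2016:
        economy *= 1.02
    yearly_gain = economy * 0.02   # roughly next year's growth increment
    if disasters >= yearly_gain:
        print(year, round(disasters, 2), round(economy, 1))
        break
# With these rounded inputs the crossover lands right around 2050: annual
# disaster costs (~$3.3 trillion) swallow the entire ~$3.1 trillion yearly
# increase in world output.
```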

How do we do better than we have?  Three keys:

1. Mitigation before adaptation.  Delaying the rise of atmospheric carbon gives us more time to adapt, and less to adapt to.
2. Long-term adaptation rather than short-term.  Our flawed forecasting, shown repeatedly to be too optimistic, has led us to rebuild for 2030 at worst and 2050 at best – meaning that we will have to spend far more in 2050 to prepare for 2100.
3. Moderate rather than modest added spending.  Studies have shown that taking the steps in preliminary climate plans adds little cost.  That means we should be planning to do more than the "minimal" plan.


We assume that our "self-regulating" world economy can handle anything without slipping permanently into reverse.  Unless we do something more, it can't – as the last 35 years of rising disaster costs have shown.  And not just our grandchildren but many of our children, children of the rich included, are going to pay a serious price if we fail to act.

Friday, May 6, 2016

IBM's Quantum Computing Announcement: The Wedding of Two Wonderful Weirdnesses

Yesterday, IBM announced that the science of quantum computing had advanced far enough that IBM could now throw open to researchers a quantum computer containing 5 quantum bits, or “qubits.”  This announcement was also paired with striking claims that “[w]ith Moore’s Law running out of steam [true, imho], quantum computing will be among the technologies that we believe will usher in a new era of innovation across industries.”  Before I launch into the fun stuff, I have to note that based on my understanding of quantum physics and computing’s algorithmic theory, quantum computing appears pretty unlikely to spawn major business innovations in the next 5 years, and there is no certainty that it will do so beyond that point. 
 
Also, I suppose I had better state my credentials for talking about the subject with any degree of credibility.  I have strange reading habits, and for the last 30 years I have enjoyed picking up the odd popularization of quantum physics and trying to determine what it all meant.  A lot of the most recent physics news has related to quantum physics, and the latest presentations do a much better job of fitting quantum physics into an overall physical theory that has some hope of making sense of its weirdnesses.  As far as the theory of algorithms goes, I have a much stronger background, thanks to my luck in taking courses from Profs. Hopcroft and Hartmanis, two of the true greats of computer science and applied mathematics in their insights and their summarizations of the field of "theory of algorithms and computing."  And, of course, I have tried to keep an eye on developments in this field since.
So where's the fun?  Well, it is the wedding of the two areas listed above – quantum physics and the theory of algorithms – that produces some mind-bending ways of looking at the world.  Here, I'll only touch on the parts relevant to quantum computing.

The Quantum Physics of Quantum Computing

I think it’s safe to say that most if not all of us grew up with the nice, tidy world of the atom with its electrons, protons, and neutrons, and its little solar system in which clouds of electrons at various distances orbited a central nucleus.  Later decomposition of the atom all the way to quarks didn’t fundamentally challenge this view – but quantum physics did and does.
Quantum physics says that at low energy levels the unit is the "quantum," and that a quantum is not so much a discrete particle as a collection of probabilities over that particle's possible "states," summing to one.  As we combine more and more quanta and reach higher levels of energy in the real world, the average behavior of these quanta comes to dominate, and so we wind up in the world that we perceive, with seemingly stable solar-system atoms, because the odds of the entire atom flickering out of existence or changing fundamentally are so small.
Now, some quanta change constantly, and some don't.  For those that don't, it is useful for quantum computing purposes to think of their collection of probabilities as a superposition of "states."  One qubit, for example, can be thought of as a superposition of two possible spin states of an electron – effectively, "0" and "1."  And now things get really weird:  two quanta can become "entangled," either by direct physical interaction with other quanta or (according to Wikipedia, at least conceptually) by the act of our measuring what the state of the qubit really is.  To put it another way, the moment we measure the qubit, it tells us what its value really is – 0 or 1 – and it is no longer a collection of probabilities.  [Please bear in mind that this is a model of how quanta behave, not their actual behavior – nobody yet knows what really happens.]
It's quantum entanglement by direct physical interaction before measurement that really matters to quantum computing – and here things get even weirder.  Say you have two qubits entangled with each other.  No matter how far apart the two qubits are, when you measure them the discovered value of one is some function of the discovered value of the other (for example, the two will always yield the same value) – what Einstein called "spooky action at a distance."  Physicists have already shown that if there is any communication going on between the two at all, it happens at far more than the speed of light.
The meaning of this for quantum computing is that you can create a computer with two qubits, and that quantum computer will have four states (actually, in some cases as many as 16), each with its own probability.  You can then perform operations that act on all of those states at once, and get an output (effectively, by measuring the qubits).  That output will have a certain probability of being right.  Recreate the qubits and do it again; after several iterations, you very probably have the right answer.
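To make that concrete, here is a tiny classical simulation of a two-qubit register in Python (my illustration only – this is not how a real quantum device is programmed).  The register is just four amplitudes; a gate transforms all four at once; and "measurement" is a random draw from the resulting probabilities:

```python
import numpy as np

# A 2-qubit register is a vector of 4 complex amplitudes, one per basis
# state |00>, |01>, |10>, |11>; the squared magnitudes are the probabilities.
state = np.array([1, 0, 0, 0], dtype=complex)   # start in |00>

# A single-qubit gate (here the Hadamard) applied to each qubit becomes one
# 4x4 matrix via the tensor product, and it acts on all four amplitudes at
# once -- the "parallelism" described above.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.kron(H, H) @ state                   # now an equal superposition

probs = np.abs(state) ** 2                      # [0.25, 0.25, 0.25, 0.25]

# "Measuring" collapses the register to a single basis state, drawn with
# those probabilities; re-preparing the register and re-running gives fresh
# samples, which is the "do it again" iteration described above.
rng = np.random.default_rng(0)
print(probs, rng.choice(["00", "01", "10", "11"], size=8, p=probs))
```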
One final important fact about qubits:  After a while, it is extraordinarily likely that they will decohere – that is, they will lose their original states.  One obvious reason for this is that in the real world, quanta are wandering around constantly, and so when they wander by a qubit during a computation they change that qubit’s stored state.  This is one of the things that make creating an effective quantum computer so difficult – you have to isolate your qubits carefully to avoid them decohering before you have finished your computation.

The Algorithmic Theory of Quantum Computing

 So now, hopefully, you are wondering how this strange quantum world of probabilities at a distance translates to faster computing.  To figure this out, we enter the equally strange world of the theory of algorithms.
The theory of algorithms begins with Godel's Incompleteness Theorem.  Kurt Godel is an unsung giant of mathematics – one Ph.D. student's oral exam consisted in its entirety of the request "Name five theorems of Godel," the point being that each theorem opened up an entirely new branch of mathematics.  In terms of the theory of algorithms and computing, what Godel's Incompleteness Theorem says, more or less, is that there are certain classes of problems (involving infinity) for which one can never create an algorithm (solution) that will deliver the right answer in all cases.  Because there is a one-to-one mapping between algorithms and mathematical solutions, we say such problems are unsolvable.  The mathematician David Hilbert, at the start of the 20th century, posed 23 hard problems for mathematicians to solve; several of them have since been proven to be unsolvable, and hence uncomputable.
By the 1960s, people had used similar methods to establish several more classifications of problem "hardness."  The most important one was the class of problems where the minimum time for any algorithm is exponential (some constant times 2**n, where n is [more or less] the size of the problem's input) rather than polynomial (some constant times n ** q for some fixed exponent q – in practice usually a small one, like 2 or 3).  In the typical case, once n got above 30 or 40, exponential-minimum-time problems like Presburger arithmetic were effectively unsolvable in any time short of the ending of the universe.
It has proved extraordinarily hard to figure out exactly where the boundary between polynomial and exponential lies.  One insight has been that for some problems, if one could pursue all possible candidate solutions at once on an imagined computer offering unbounded parallelism (a "nondeterministic" machine), they could be solved in polynomial time – the "NP" class of problems.  Certain real-world problems like the traveling salesman problem (figure out the minimum-cost sequencing of n destinations for a traveling salesperson) are indeed NP-solvable.  But no one has figured out a way to determine whether those problems can be solved in polynomial time without such parallelism – the famous P=NP question.  In any case, it would be a major boon if some computing approach allowed polynomial-time solution of at least some of those problems.
But even if we can’t find such an approach via quantum computing, it would still be very worthwhile for all classifications of algorithms (except unsolvable, of course!) to find a way to get to an “approximate” solution faster.  Thus, for example, if we can come to a solution to the traveling salesman problem that 99 times out of 100 is within a small distance of the “real” best solution, and do so in polynomial time on average, then we will be happy with that.  And thus, despite its probabilistic nature – or possibly because of it – quantum computing may help with both the weird world of “presently effectively unsolvable” exponential-time problems and with the more normal world of speeding up solvable ones.
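As a purely classical illustration of why exhaustive search blows up, here is a brute-force traveling-salesman sketch in Python; the six cities and the distances between them are made up for the example:

```python
import itertools, math

# Toy symmetric distance matrix for 6 hypothetical cities (made-up numbers).
dist = [
    [0, 12, 19, 8, 25, 14],
    [12, 0, 9, 17, 21, 11],
    [19, 9, 0, 13, 16, 23],
    [8, 17, 13, 0, 10, 18],
    [25, 21, 16, 10, 0, 7],
    [14, 11, 23, 18, 7, 0],
]
n = len(dist)

def tour_length(order):
    """Length of a round trip that starts and ends at city 0."""
    route = (0,) + order + (0,)
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

# Exhaustive search: try every ordering of the other n-1 cities.
best = min(itertools.permutations(range(1, n)), key=tour_length)
print(best, tour_length(best))

# The catch: the number of orderings is (n-1)!, which grows even faster
# than 2**n.  Fine for 6 cities, hopeless for 60.
print(math.factorial(n - 1))           # 120 routes for 6 cities
print(math.factorial(59) > 2 ** 100)   # True -- astronomically many for 60
```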

How All This Applies To Today’s Quantum Computing


The world of qubits as it exists today appears to have two advantages over “classical” computers in attempting to speed up computation:
1. Greater parallelism.  That is, you don't sequence the computations and output; instead, an operation acts on all of the qubits' superposed states at pretty much the same time.

2. The ability to get "close enough" probabilistically.  As noted above, you can choose how much likelihood of being right you want, and as long as it isn't absolute certainty, the number of iterations needed to get to a "good enough" answer should, on average, be less than the number of computations in classical computing.
However, there are two significant disadvantages, as well:
(a) The "sweet spot" of quantum computing is the case where the number of inputs (my interpretation:  the number of needed "states" in the quantum computer) is equal to the number of outputs needed to reach a "good enough" answer.  Outside of that "sweet spot," quantum computing must use all sorts of costly contortions – e.g., repeating the entire computation many times over when the number of states does not cover all the possible outcomes you are examining to find the right answer.
(b) When you need a very high probability or certainty of getting the right answer, or, failing that, you need to know how likely you are to be very close to the correct answer, you need a lot more runs of your quantum computer.
To see how this might work out in practice, let's take the "searching the database" problem and use (more or less) Grover's Algorithm (no Sesame Street jokes, please).  We have 1023 different values in 1023 different slots in our database, and we want to find out if the number "123456" is in the database, and if so, where (sorry, it's a small database, but no bit-mapped indexing allowed).  That gives 1024 equally likely outcomes – the 1023 possible locations, plus "not present" – so there is a 1/1024 chance of each.  We use 10 qubits to encode each such possibility as an equally probable "state."

The classical computing algorithm will, in the worst case, find-and-compare all 1023 values before coming to a conclusion.  Grover's Algorithm says that the quantum computer will need on the order of sqrt(1024) iterations – a few dozen, rather than roughly a thousand comparisons – before it has very probably found the right answer/location.  Score one for quantum computing, right?
Well, leave aside for a moment the flaws of my example and the questions it raises (e.g., why can't a classical computer do the same sort of probabilistic search?).  Instead, let's note that in this example the disadvantages of quantum computing are minimized.  I configured the problem to be in quantum computing's algorithmic "sweet spot," and effectively only asked that there be a greater than 50% probability of having the right answer.  If all you want to know is where the data is located, then you will indeed take only a few dozen iterations to find that out.  If you want to be sure that the number is or isn't in the database, then your best solution is the classical one – 1023 find-and-compares, without the added baggage of restarting that quantum computing brings with it.
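For readers who like to see the arithmetic, here is a small classical simulation of the Grover-style search just described, in Python (my sketch – a real quantum computer obviously doesn't work by storing all 1,024 amplitudes in memory).  Roughly (pi/4)*sqrt(1024), about 25, oracle-plus-diffusion rounds push the probability of measuring the marked slot close to 1:

```python
import numpy as np

N = 2 ** 10                 # 1,024 possible outcomes, as in the example above
marked = 123                # hypothetical slot where "123456" actually sits

# Start with every outcome equally likely: the uniform superposition.
state = np.full(N, 1 / np.sqrt(N))

iterations = int(np.pi / 4 * np.sqrt(N))   # ~25 rounds for N = 1,024
for _ in range(iterations):
    state[marked] *= -1                    # "oracle": flip the marked amplitude's sign
    state = 2 * state.mean() - state       # "diffusion": reflect every amplitude about the mean

print(iterations, round(state[marked] ** 2, 4))
# -> 25 rounds, after which the probability of measuring the marked slot is
#    ~0.999, versus the ~1,000 find-and-compares of brute-force classical search.
```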
And I haven’t even mentioned that so far, we’ve only scaled up to 5 qubits available for researchers, and that as we scale further and our computations take longer, decoherence becomes even more of a problem. 
So that’s why I dropped that downer of a conclusion above.  It’s early days yet; but there’s a lot of difficult work still to come in scaling quantum computing, with no guarantee of success, and the range of long-term use cases still appears less than broad – and possibly very narrow.

Oh, Well, Fun Take-Aways


So while we wait and wait, here are some wonderful weirdnesses of the wedding of quantum physics and theory of algorithms to think about:
- You can't be in two places at once.  Not.  Simply create two quantum copies of yourself in your present "state."  Keeping them isolated (details, details), move one of them halfway around the world.  Measure both simultaneously.  They will both be identical at that moment in time.

- Things are either true or false.  Not.  Things are either true, or they are false, or whether they are true or false is undecidable.

- Explanations of quantum computing are often incoherent, and whether they convey anything of value is undecidable.

Thursday, May 5, 2016

IBM and Blockchain: Limited But Real Value


Last week, IBM announced an expansion of its blockchain capabilities to fully support secure networks for whatever uses its customers find for blockchain technology.  Key features include support on IBM Cloud for developers, management of the environment aimed at rapid deployment via IBM Bluemix, and enhancements to the Linux Foundation's open-source HyperLedger Project code for those seeking to implement blockchain use cases at least partly outside the IBM software/services umbrella.  My initial take is that IBM is laying a solid foundation for blockchain use.  More important, with help like this, customers may well find that blockchain technology delivers real value-add in limited but nevertheless concrete ways.  That is, blockchain implementations built on foundations such as IBM's will deliver much less than the most enthusiastic proponents claim, but can deliver it more reliably and immediately than, say, data lakes could at the same stage in their lifecycle.

It Begins With the Blockchain Technology


At present, the word “blockchain” is surrounded by clouds of hype and techie jargon that make its nature unusually difficult to understand.  However, reading between the lines, it appears that blockchain technology has two key new approaches:

1. A "blockchain" data unit that lends itself flexibly to use in medium-scale applications involving many-to-many business-to-business processes, such as "smart contract" management or business-to-business asset exchange involving "clearing" and the like.

2. A fully-replicated distributed database that allows each participant a full view of the "state" of the "blockchain" data and automates some of the security involved – e.g., the ability to block a single "bad actor" from altering the data by requiring "consensus" among all copies of the replicated data as to the validity of a new transaction.
More specifically, a typical "blockchain" implementation creates a chain of data together with the transactions on it:  each link in the chain is a "block" containing a transaction and the resulting data, plus a "hash" of the previous block – in effect, a backward pointer to the block immediately previous in time/sequence.  Thus, each blockchain carries its own readily accessible "audit trail," copied to every node and hence to every user.  A toy sketch of the idea follows.
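Here is that chain-of-blocks idea in toy Python form (mine, purely for illustration – it is not the HyperLedger or Bitcoin block format):  each block stores a transaction, the resulting data, and the hash of the block before it, which is what gives every copy of the chain its built-in audit trail:

```python
import hashlib, json, time

def make_block(transaction, data, prev_hash):
    """A minimal, illustrative 'block': a transaction, the resulting data,
    and the hash of the previous block (the backward pointer)."""
    block = {
        "timestamp": time.time(),
        "transaction": transaction,
        "data": data,
        "prev_hash": prev_hash,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

# Build a tiny chain: each block points back to the one before it.
genesis = make_block("create account", {"balance": 0}, prev_hash="0" * 64)
b1 = make_block("deposit 100", {"balance": 100}, prev_hash=genesis["hash"])
b2 = make_block("withdraw 30", {"balance": 70}, prev_hash=b1["hash"])

# The backward pointers are the audit trail: tampering with b1 changes its
# hash and visibly breaks b2's prev_hash link in every copy of the chain.
assert b2["prev_hash"] == b1["hash"]
```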
On the networking side, as noted, the distributed database uses pure peer-to-peer networking with no central manager or "master record."  Thus, more or less (following the description in Wikipedia), if a transaction arrives at one node saying the resulting block should be one thing and another node disagrees, all nodes look at what all the other nodes are saying (step 1) and determine what the "consensus" is (step 2).  They can then all independently create the same new block, because they will all come to the same conclusion.  The result is that if a "bad actor" tries to corrupt the data, the consensus will reject that attempt every time, no matter what alias the bad actor uses, how many times the bad actor tries, or which node the bad actor comes in through.
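And here is an equally toy sketch of the consensus step (again my simplification – real blockchain consensus protocols are considerably more involved):  every node proposes the hash of the block it thinks comes next, and whatever a majority agrees on wins, which is why a lone bad actor gets outvoted no matter which node it uses:

```python
from collections import Counter

def consensus(proposals):
    """Accept whichever proposed next-block hash a majority of nodes agrees
    on; return None if there is no majority."""
    winner, count = Counter(proposals.values()).most_common(1)[0]
    return winner if count > len(proposals) / 2 else None

honest_hash = "ab12..."   # hypothetical hash of the legitimate next block
forged_hash = "ff00..."   # hypothetical hash pushed by a bad actor

proposals = {"node1": honest_hash, "node2": honest_hash,
             "node3": honest_hash, "node4": forged_hash}
print(consensus(proposals))   # -> the honest hash; the forgery is rejected
```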
The limitations of blockchain technology also come from these two new approaches.  For one thing, the “blockchain” data unit’s use of a “chain” approach to data access means that the database is simply not as scalable in most if not all cases.  There is a reason that most databases optimize direct access to each data unit:  the time taken to go backwards through the whole chain as a series of “reads” is much greater than the time for going directly to the data unit at the end of the chain.  IBM clearly signals this limitation on scalability by noting its ability to scale to “thousands of users” – laughably small for a consumer application, but quite appropriate for many business-to-business applications.
On the networking side, the requirement of “consensus” can significantly slow the process of committing a transaction – after all, one of the participants may not be responsive at one time or another.  Therefore, blockchain technology is less appropriate where the fastest possible transaction performance is needed, and more appropriate in automating existing approaches such as “clearing” (netting out transactions between businesses and then having each participant pay/receive the differences), which can take up to 3 days in some banking apps.

What Users May Wish to Watch Out For


As noted above, blockchain technology can deliver real benefits, particularly in security, speed-to-perform, and flexibility/ease of implementation and upgrade, for business-to-business many-to-many applications – but it probably will require extra care in implementation.  One area that I believe users should consider carefully is the long-term scalability of the blockchain technology.  It should be possible to get around the limitations of a blockchain data unit by writing to a “blockchain” veneer that looks like a blockchain but acts like a regular database (in fact, this is what a relational database does at the storage level).  However, this means that the implementation must use such a veneer religiously, and avoid third-party code that “hard-codes” blockchain access.
A second implementation detail that I recommend is pre-defining the blockchain technology’s links to existing systems.  In many cases blockchain data will need to be kept in sync with parallel updates going on in IT’s existing databases.  It’s not too early to begin to think how you’re going to do that.  Moreover, the blockchain approach is not a security cure-all, and therefore requires careful integration with existing security systems.
A third implementation "key decision" is whether to use some version of Bitcoin in implementing a blockchain solution.  I would recommend against this.  First, digital currencies modeled after Bitcoin have been shown in the real world to be vulnerable to "gaming the system"; in economic terms, unlike a good paper currency, they do not serve as a good "store of value."  Second, Bitcoin has scaled far beyond the "thousands of users" appropriate for the business-to-business apps discussed here – but, in order to do so, it now consumes a very significant amount of network and computing resources.  If you use Bitcoin, you must connect to that very large Bitcoin implementation, which means that your network performance may well be adversely impacted by constant demands for consensus from across the Internet.

A Different Viewpoint


Interestingly, Steve Wilson of Constellation Research has recently come out with a strongly worded case against business use of blockchain technology, resting on two key arguments:  (1) it actually adds nothing to corporate security; and (2) it does nothing that a business' existing database can't do. 
While I find myself in agreement with many of his technical viewpoints, I find that he overstates the effectiveness of present corporate security and of existing business databases in the use case I have described above.  Over the years, such functions as contract management and "episode management" in health care have been relatively ill-served by being lumped in with the rest of corporate security and implemented, typically, in relational databases.  The blockchain model is more like an "accounting ledger," and the ability to use such a model in business-to-business interactions, as well as for security inside and outside each participating business, is a key long-term source of usefulness for today's businesses.  To put it another way, the trend in business is away from the "one size fits all" database and toward "databases for specific purposes" that can add "domain" security and ease of development.
All in all, as long as businesses are savvy about implementation, there does appear to be an important use case for blockchain technology.  In which case, support like IBM’s is right on the mark.

Monday, May 2, 2016

And the CO2 Keeps Rising Faster

It appears pretty evident from the Mauna Loa weekly data that the April CO2 level will average around 407.5 ppm.  That's more than 4 ppm above April 2015.  Moreover, with March having come in below 405 ppm, this will be the first time on record that we have passed the 405, 406, and 407 ppm marks on a monthly basis.  Based on past behavior, it seems likely that May 2016 will be above 408 ppm.  If memory serves me right, that will be less than 1 ½ years since the monthly average was last below 400 ppm.  And finally, given that the underlying CO2 "rate of rise" is much greater than in 1998, there is a strong possibility that we will cross 410 ppm on a monthly basis next April or May – less than 2 ½ years since we were under 400 ppm.
Guesstimating and extrapolating the new "underlying rate of rise" farther out into the future, at a conservative 2.5 ppm/year to a still-realistic 3.3 ppm/year, we reach 420 ppm in 2020 or 2021.  Why is this significant?  It marks the point at which atmospheric carbon reaches 50% above its pre-industrial value of about 280 ppm.  Most estimates of the warming effect of atmospheric carbon are based on a doubling, with somewhere between 2 and 2.8 degrees C of warming as a direct effect of the carbon, and between 4 and 5 degrees C (delayed) once related increases in other GHGs (greenhouse gases) and additional warming from changes in the Earth's albedo are included.  The best estimates are based on previous episodes of large global warming 55 and 250 million years ago, which according to Hansen et al. eventually produced about 4 degrees C of increase in global temperature per doubling of atmospheric carbon.  [Note that this may underestimate the effect of today's atmospheric carbon increases, which are happening over a period hundreds of times shorter than those of previous global warming episodes.]  If we follow Hansen et al., then, we will have 2.67 degrees C of warming already "baked in" as of 2021.  Somewhere between 1.33 and 1.75 degrees C will happen pretty quickly, apparently, by 2050; and while we don't know how fast the rest will happen, much of it will happen in the period 2050-2100. 
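As a quick sanity check on that extrapolation, here is the arithmetic in Python (my rough reconstruction, using the post's own assumed rates and a spring-2016 level of about 407.5 ppm):

```python
# When does the annual average cross 420 ppm, i.e. 1.5x the ~280 ppm
# pre-industrial level, under the two assumed "underlying rates of rise"?
start_year, start_ppm = 2016, 407.5
target = 280 * 1.5                 # 420 ppm

for rate in (2.5, 3.3):            # assumed ppm/year
    years_needed = (target - start_ppm) / rate
    print(rate, round(start_year + years_needed, 1))
# -> about 2021 at 2.5 ppm/year, and about 2020 at 3.3 ppm/year,
#    matching the 2020-2021 estimate above.
```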
We may also note a recent Hansen et al. article that revises the estimated sea level rise due to this warming to between 5 and 9 feet by 2100 – and, again, the model that projects this, as noted by the authors, may still underestimate the effects of “business as usual”.   Finally, scientists have now noted several new catastrophes that would happen even with a 2 degrees C rise.
We continue to underestimate what the planet will do to us on our present path.
That is all.

Tuesday, April 19, 2016

And Now For Something Pretty Different ...


Over the last few years, for various reasons, I have had a chance to read some of the old classics that I had not managed to find time for earlier in life – Milton’s Paradise Lost, for example, or 1 Maccabees in the Bible.  I was expecting that my more mature skepticism would make them a bit annoying to read, or that I would find the stylistic genius of some of them captivating.  What I did not at all expect was that I would find them screamingly funny.  The reasons why, I suppose, might be the topic of another blog post one day.

Anyway, the latest example came when I saw the British TV Series Wolf Hall, about Thomas Cromwell, that underreported figure in the court of Henry VIII, whose reign has been otherwise mostly covered ad nauseam in books and on the big and little screen.  I realized that I was curious about just how he related to Oliver Cromwell, another well-execrated figure a century later in British history.  And so, with the aid of Wikipedia, I set myself to figuring out just how the Cromwell name happened to descend to Oliver, and what happened to it after Oliver’s death.
The first funny fact I discovered was that while Thomas Cromwell was indeed a commoner, of no noble descent, he was in fact related by marriage to Henry VIII.  This happened because Thomas Cromwell's sister Katherine married a Welsh brewer, a fellow named Williams, who also lived in Putney and had dealings with Thomas' father.  It turns out that Master Williams was actually related to Jasper Tudor – of the family of that risqué Welshman, Owen Tudor, who married a previous English King's widow, thereby eventually making his grandson Henry VII eligible for the throne.  In effect, Thomas Cromwell was both a common brewer's son from Putney and the King's in-law.  Only in England …
Now the picture gets even sillier.  Once Thomas Cromwell started prospering, the Williams family took the Cromwell name – so Williams' son became Richard Cromwell.  The criterion for the lower nobility was a certain amount of income per year, and Richard Cromwell became a useful employee of Thomas, reaping a fair number of estates yielding the necessary cash.  Thus, when Thomas was executed, Richard kept his status in the nobility, as well as the estates he had claimed – not to mention the Cromwell name.  Thomas' direct descendants did all right as well, but ceased to be noble (iirc) over the course of the next century.  In any case, Oliver did not descend from Thomas, but rather from Richard.  Thus, Oliver the Commoner was in fact an in-law both of the Tudor/Stuart kings and of Thomas Cromwell – not to mention being descended from Richard's nobility.  Only in England …
Now, England in those days practiced primogeniture, which meant that Richard’s estates when he died did not pass to Oliver’s ancestor but to a first-born Cromwell son.  As a result, when Oliver’s father died, Oliver did not inherit enough to qualify him for the nobility.  Later, of course, the death of a relative in the first-son line gave him enough money to qualify at the lowest level of nobility – that is, he was eligible for the House of Commons but not for the House of Lords.  Yes, to be some sort of noble in England in those days all you needed was money.  Only in England …
Anyway, by the time of Oliver’s death, he had accumulated a surprising number of sons and daughters.  His direct heir was Richard (his first son had died during the Civil War), who was briefly Lord Protector after Cromwell’s death but was widely viewed as a total incompetent after he failed to solve the split between Army and Parliament and left the door wide open for Charles II to walk into the throne.  Thus, nothing bad happened to him after Charles II arrived.  Other Oliver children, especially the daughters, married into high nobility and their descendants did just fine.  But it was Oliver’s last child, Katherine (iirc), who really made out like a bandit.  She married into high nobility and her direct descendants were high nobility all the way into the 1800s.  Really, the high nobility didn’t care that they were related to Oliver Cromwell.  But by the end of Oliver’s life, he had potloads of income, and the high nobility wanted some of that income.  And you can bet that the monarchy treated Oliver/Katherine’s descendants as high nobility, too.  Only in England …
Except that it wasn’t quite only in England.  I once read a biography of King William Rufus, that strange Norman figure who succeeded William the Conqueror, and that biography went into great detail about how some of the terms of today’s English government came about:  They were Norman posts of trust assigned to the nobility attending the King, corresponding to places in the King’s dwelling that they were supposed to take care of.  For example, the Lord High Chamberlain was supposed to take care of the King’s bedroom.  The Lord High Chancellor was supposed to take care of the King’s prayer room, the chancel.  And then there was the extremely important noble responsible for the King’s kitchens and production of beer, possibly to prevent the King being poisoned.  That is, he was the guardian of the stews, the Stew-Ward or Steward.
Somewhere along the line, these customs got translated to Scotland, and when the line of Kings ended, they were succeeded by the line of Stuarts – Scots for Steward.  And so, when Elizabeth Tudor died without issue, the Stuart kings of Scotland became heirs to the English throne.
Now think about that.  Thomas Cromwell was the son of a common English brewmaster, and an in-law to an English king.  Oliver was an in-law to Thomas, and opposed his in-law King Charles I, who was descended from a Scottish King’s brewmaster.  You gotta admit, that’s pretty funny.

CO2 and Climate Change: Our Partial Data Promises Hope, Our Best Measure Insists Alarm

For the last five years or so, I have been watching the measurements of atmospheric carbon taken at the peak of Mauna Loa in Hawaii (where contaminating influences on measurements are minimal).  I have become accustomed to monthly reports in which CO2 has increased a little more than 2 ppm over the same month the previous year.  This year, things have changed drastically.
On Wed. Apr. 13, for example, I found that (a) monthly CO2 last month had increased by 3.6 ppm year to year; (b) the previous three days had been recorded as about 409.4, 409, and 408.5 ppm, of which the first measurement was about 5 ppm above the same day last year and about 10 ppm above the monthly average 1 ½ years ago, and (c) last year’s average increase had been confirmed as 3.05 ppm, breaking the record set in 1998 for greatest CO2 increase (the data from Mauna Loa go back to 1959).
[UPDATE:  3 weeks ago was the first weekly average above 405 ppm.  Last week was the first weekly average above 406 ppm.  This week was the first weekly average above 408 ppm – 4.59 ppm above the same week last year]
Meanwhile, figures gathered by the IEA suggest that fossil-fuel carbon emissions were essentially flat over the 2014-15 time period.  Other studies, by CarbonTracker, suggest that the other net sources and sinks of atmospheric carbon (uptake of carbon from the atmosphere by the land and the oceans, and "fire" [e.g., forest fires on land] acting as a source of atmospheric carbon) have been essentially flat for 15 years or more. 
What is going on?  Why the large apparent rise in the Mauna Loa CO2 growth rate over 2014-16, and the apparently flat fossil-fuel emissions in 2014-15 according to the IEA's data as supplemented by CarbonTracker's?  Far more importantly, why does the first seem to signal an alarming development in global warming, while the second seems to promise that our sustainability efforts and global compacts to combat global warming are beginning to bear fruit?  And finally, which is right?
[Note:  Because of time constraints, I won’t be able to discuss things in depth.  However, I feel it is important that readers understand the general reasons for my conclusions]

Implications of Hope and Alarm

Let’s start with the second question:  Why does the IEA data seem to offer hope of significant progress in combating global warming, while the Mauna Loa CO2 data seems to signal alarm about the progress of global warming?
If the IEA is right, then over the last 2 years we have begun to make significant progress on reducing fossil-fuel pollution – despite a global economy that grew significantly in 2014 and 2015.  The result is that the rate of growth of CO2 has been flat the last two years.  Because of the recent global climate agreements, we can expect future years to slow the rate of growth, and in the not-too-distant future to begin to actually decrease atmospheric CO2.  Optimistically, we can hope to reach atmospheric stasis at 500 ppm, which will not keep global warming below 2 degrees C, according to Hansen and others, but should keep it below 4 degrees C, where the consequences become much worse.
If the ML CO2 data is right, then we are not only making no significant progress in combating global warming, we may very well be at the start of more rapid warming.  For the last 15 years or so, the rate of growth of atmospheric CO2 has been slightly more than 2 ppm per year (the previous 15 years saw a growth rate slightly more than 1.5 ppm per year).  A bit more than 2 ppm per year for 15 years translates into about 1 degree Fahrenheit of average global warming.  If we now shift to a bit more than 2.5 ppm per year, that should translate into about 1.3 degrees Fahrenheit of warming over the next 15 years.  It will also mean that we "bake in" 2.67 degrees C of warming since 1850, most of it from the last 60 years, and will be well on the way to about 4 degrees C of warming (blowing past 500 ppm easily) by the 2050-2060 time frame, if we continue with present efforts rather than increasing them.

Which Is Right?

Now let’s tackle questions 1 and 3:  what are the characteristics of the IEA and ML data that lead them to apparently different conclusions, and which of the two is more likely to be right?
Start with the IEA data on fossil-fuel emissions per year.  IEA depends primarily on self-reporting by nations of their use of electricity and heating, supplemented by extrapolation based on prior experience of the amount of CO2 released by coal/gas/oil-based heating and electricity generation. 
There are several reasons to view this data as likely to underestimate actual fossil-fuel emissions, and to have that underestimate increase over time.  First, the IEA data does not cover emissions during fossil-fuel production and refining.  The upsurge in fracking would show up in the IEA data as a net decrease in pollution (switching from use of oil to natural gas), while independent studies of wellhead-to-shipment emissions suggest that natural gas' total use-plus-production emissions are almost equivalent to those of oil.  Second, there has been an ongoing shift in business' fossil-fuel use from Europe/the US to developing countries like China and India.  Not only are these countries less able to report the full amount of their fossil-fuel emissions, they are probably less efficient in using fossil fuels to heat and generate electricity in comparable facilities – and the IEA appears to apply the same efficiency standards to comparable facilities in both places.
Now let's turn to the Mauna Loa CO2 data.  Long experience has determined that any additional CO2 detected from nearby sources is transitory and washes out in the monthly average.  Likewise, on average, Mauna Loa tracks near the center of the Northern Hemisphere's response to the primary sources of fossil-fuel emissions, and it can be cross-checked against a global CO2 measure that is one month behind in reporting but has been showing a comparable upsurge (3.4 ppm in February).  [Note:  The IEA shows a 0% increase in fossil-fuel emissions in 2015, while the global and Mauna Loa data show about a 0.75% increase in total atmospheric CO2.]
What are possible causes of the discrepancy aside from underestimated emissions data?  There could be increased “oceanic upgassing” as melting of sea ice allows release of carbon from plants in the newly exposed ocean – not likely, as it is an effect not clearly detected before.  There could be decreased “uptake” from the oceans as they reach their capacity to absorb carbon from the air – but such an uptake slowdown should happen more gradually.  And then there is the el Nino effect.
I haven't found a good explanation yet of just how el Nino pushes CO2 up, much less of how the push scales with the strength of the el Nino.  However, the largest recorded el Nino before this one happened in 1998, data indicate that the 2015-16 el Nino is about as strong, and the 1998 el Nino produced approximately the same outsized jump in CO2 relative to the previous year as the 2015-2016 one has, according to the helpful folks at Arctic Sea Ice (neven1.typepad.com).  So the recent Mauna Loa data could be explained as (constant underlying CO2 growth rate) plus (2015-2016 el Nino effect). 
However, aside from all the reasons cited above for being mistrustful of the IEA data, there is another reason to feel that fossil-fuel emissions have actually been going up as well: 1996-99 were years of big economic growth almost certainly leading to large increases in fossil-fuel emissions and thus the underlying CO2 growth rate.  In other words, to be truly compatible with 1998, the equation more likely should be (constant underlying CO2 growth rate) plus (significant 2013-16 increase in fossil-fuel emissions and hence CO2 growth rate) plus (2015-16 el Nino effect).
All this reminds me of the scene in Alice Through the Looking-Glass where Alice starts walking towards her destination and finds herself further away than before.  You have to run much faster to stay in the same place, another character tells her, and much faster than that to get anywhere.  I view it as more likely than not that we are continuing to increase our fossil-fuel contribution, and that we will have to run much harder to keep the CO2 growth rate flat, much harder than that to start the growth rate decreasing, and much harder even than that to begin to decrease overall CO2 any time in the near future.

Facts and the Habit of Despair

In one of Stephen Donaldson's first books, he imagines a world beset by evil whose only hope is a leper from Earth.  In those days, leprosy had no treatment, and the only way for lepers to survive was to constantly survey themselves to detect the injuries that dead, leprous nerves failed to warn them of, every waking second of every day – to constantly face the facts.  In the new world, the leper tells the leader of the fight against evil "You're going to lose! Face facts!"  The leader, well aware that only the leper can save his world, says very carefully, "You have a great respect for facts."  "I hate facts," is the response.  "They're all I've got."
I have concluded above that it is more likely than not that fossil-fuel pollution and therefore the underlying CO2 growth rate continues to grow – and that appears to be the fact delivered by the best data we have, the Mauna Loa and global CO2 measurements.  I hate this fact; but it seems to be what we have. 
And yet, Donaldson’s same series delivers an additional message.  At its improbable world-saving end, the leper asks the Deity responsible for his being picked why he was chosen.  Because, says the Deity, as a leper you had learned that it is not despair, but the habit of despair, that damns [a leper or a world].  To put it in global warming terms:  Yes, the facts are not good.  But getting into the habit of giving up and walking away in despair whenever you are hit by these facts is what will truly damn our world.  Because the horrible consequences of today’s facts are only a fraction of the horrible consequences of giving up permanently because of today’s facts.
To misquote Beckett:  I hate these facts.  I can’t go on.
I’ll go on.