Friday, May 27, 2016

Climate Change: More “Business As Unusual”

The extreme highs in atmospheric carbon continue.  This week is virtually certain to surpass 408 ppm, and therefore the month as a whole is virtually certain to surpass 407.6 ppm, a new historic high and probably more than 4 ppm above last year.  If trends continue, the year-over-year increase will probably set another record, with the gain over the first five months of this year running well above last year’s average gain of 3.05 ppm (my off-the-cuff estimate so far is 3.5 ppm).  To repeat, this is a greater increase, both absolute and percentage, than during the last comparable El Niño.  This again calls into question whether our present efforts (as opposed to those committed to at the Paris climate talks) are really doing anything significant to avoid “business as usual.”

Meanwhile, Arctic sea ice extent is far below even the previous record low for this time of year.  The apparent cause is increased sea and air temperatures plus favorable melting conditions in both the North Atlantic and North Pacific, partially supplemented by unusually early “total melt” on land in both Canada/Alaska and Scandinavia/Russia.  Unless present weather trends sharply change over the next 3 months, new record Arctic ice lows are as likely as not, and “ice-free” (less than 1 million sq kilometers) conditions are for the first time a (remote) possibility.

Finally, here are the climate change thoughts of the Republican candidate for President of the United States, with commentary (which I echo) from David Roberts:
[The excerpt starts with footage from a press conference, followed by an introduction, followed by Trump’s speech]
This is ... not an energy speech.
Trump now arguing that coal mining is a delightful job that people love to do.
Trump wants the Keystone pipeline, but he wants a better deal -- 'a piece of the profits for Americans.'
Facepalm forever.
'We're the highest taxed nation, by far.' That is flatly false, not that anyone cares.
75% of federal regulations are 'terrible for the country.' Sigh.
'I know a lot about solar.' Followed by several grossly inaccurate assertions about solar.
I can't believe I just got suckered into watching a Trump press conference.
And now I'm listening to bad heavy metal as I wait for the real speech. This day has become hallucinatory.
Speech finally getting under way. Oil baron Harold Hamm here to introduce Trump.
Oh, good, Hamm explained energy to Trump in 30 minutes. We're all set.
Here's ND's [North Dakota] own Kevin Cramer, representing the oil & gas industry. Thankfully, he's gonna keep it short.
What even is this music?
Trump loves farmers. 'Now you can fall asleep while we talk about energy.' Wait what.
Trump now repeating Clinton coal 'gaffe,' a story the media cooked up & served to him. Awesome, media.
Honestly, Trump sounds like he's reading this speech for the first time. Like he's reacting to it, in asides, as he reads it.
Trump promises 'complete American energy independence -- COMPLETE.' That is utter nonsense.
This is amazing. He is literally reading oil & gas talking points, reacting to them in real time. We're all discovering this together.
Clinton will "unleash" EPA to "control every aspect of our lives." Presumably also our precious bodily fluids.
Trump is having obvious difficulty staying focused on reading his speech. He so badly wants to just do his freestyle-nonsense thing.
Can't stop laughing. He's reading this sh** off the teleprompter & then expressing surprise & astonishment at it. Reading for the 1st time!
He can't resist. Wandering off into a tangent about terrorism. Stay focused, man!
Oh, good, renewable energy gets a tiny shoutout. But not to the exclusion of other energies that are "working much better."
We're gonna solve REAL environmental problems. Not the phony ones. Go ahead, man, say it ...
(1) Rescind all Obama executive actions. (2) Save the coal industry (doesn't say how). (3) Ask TransCanada to renew Keystone proposal.
(4) Lift all restrictions on fossil exploration on public land. (5) Cancel Paris climate agreement. (6) Stop US payments to UN climate fund.
(7) Eliminate all the bad regulations. (8) Devolve power to local & state level. (9) Ensure all regs are good for US workers.
(10) Gonna protect environment -- "clean air & clean water" -- but, uh, not with regulations!
(11) Lifting all restrictions on fossil fuel export will, according to right-wing hack factory, create ALL THE BILLIONS OF US MONIES.
Let us pause to note: this is indistinguishable from standard GOP energy policy, dating all the way back to Reagan. Rhetoric unchanged.
"ISIS has the oil from Libya."
Notable: not a single mention of climate change, positive or negative. Just one passing reference to "phony" environmental problems.
And this, in the Senate and House of Representatives, is what keeps the US from far more effective action on climate change.  To slightly alter George Lucas in the Star Wars series, this is how humanity ends most of itself --- to thunderous applause.
That is all.

Tuesday, May 17, 2016

Climate Change and Breaking the World Economy By 2050

According to an article at www.climateprogress.com, a recent World Bank report finds that the worldwide cost of disasters increased 10-fold between 1980 and 2010 (using ten-year averages centered on 1980 and 2010), to $140 billion per year.  Meanwhile, the world economy stands at about $78 trillion (“in nominal terms”).  Projecting both trends forward, in 2040 the cost of disasters will be about $1.4 trillion, and (assuming 2% economic growth per year) this will be roughly 1% of the world’s economy.  By 2050, with the same trends, the cost of disasters will be about $3 trillion, and the world’s economy, net of disaster losses, will in effect be shrinking.

Up to now, the world’s economy has in effect been able to shrug off rising disaster costs by growing, so that we now have far more investment resources to apply to any problem than we did in 1980.  Sometime between 2045 and 2050, that process goes into reverse:  we may still be growing, but each year’s disaster costs will swallow up the increase.  So if we have not put ourselves on a better disaster path before 2045, by handling climate change better than we have so far, it will be far more difficult to avoid the collapse of the global economy after that – stagnation, followed by spotty and then pervasive downward spirals in national economies.
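To make the arithmetic behind that crossover concrete, here is a rough back-of-the-envelope sketch in Python. It is my own extrapolation, using the report's $140 billion (around 2010) disaster baseline, the $78 trillion 2016 economy, a 10-fold-per-30-years disaster trend, and 2% annual growth; the exact crossover year depends on those assumptions.

```python
# Rough reconstruction of the projection above: disaster costs rising 10-fold
# every 30 years (from $0.14T/yr around 2010) versus a ~$78T world economy
# growing 2% per year.  Assumptions are the post's; the extrapolation is mine.

def disaster_cost(year):
    """Annual disaster cost in $T, growing 10x per 30 years from $0.14T in 2010."""
    return 0.14 * 10 ** ((year - 2010) / 30)

def gdp(year):
    """World GDP in $T, growing 2% per year from $78T in 2016."""
    return 78 * 1.02 ** (year - 2016)

for year in range(2016, 2061, 5):
    growth = gdp(year) * 0.02          # that year's GDP increase
    print(f"{year}: disasters ${disaster_cost(year):5.2f}T, "
          f"GDP growth ${growth:5.2f}T")

# On these assumptions, the annual disaster bill catches up with the annual
# GDP increase around 2050, close to the post's 2045-2050 window.
```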

How do we do better than we have?  Three keys:

1. Mitigation before adaptation.  Delaying the rise of atmospheric carbon gives us more time to adapt, and less to adapt to.
2. Long-term adaptation rather than short-term.  Our flawed forecasting, shown repeatedly to be too optimistic, has led us to rebuild for 2030 at worst and 2050 at best – meaning that we will have to spend far more in 2050 to prepare for 2100.
3. Moderate rather than modest added spending.  Studies have shown that building steps to deal with climate change into preliminary plans adds little to their cost.  That means we should be planning to do more than the “minimal” plan.


We assume that our “self-regulating” world economy can handle anything without slipping permanently into reverse.  Unless we do something more, the last 35 years of rising disaster costs suggest that it can’t.  And not just our grandchildren but many of our children, children of the rich included, are going to pay a serious price if we fail to act.

Friday, May 6, 2016

IBM's Quantum Computing Announcement: The Wedding of Two Wonderful Weirdnesses

Yesterday, IBM announced that the science of quantum computing had advanced far enough that IBM could now throw open to researchers a quantum computer containing 5 quantum bits, or “qubits.”  This announcement was also paired with striking claims that “[w]ith Moore’s Law running out of steam [true, imho], quantum computing will be among the technologies that we believe will usher in a new era of innovation across industries.”  Before I launch into the fun stuff, I have to note that based on my understanding of quantum physics and computing’s algorithmic theory, quantum computing appears pretty unlikely to spawn major business innovations in the next 5 years, and there is no certainty that it will do so beyond that point. 
 
Also, I suppose I had better state my credentials for talking about the subject with any degree of credibility.  I have strange reading habits, and for the last 30 years I have enjoyed picking up the odd popularization of quantum physics and trying to determine what it all meant.  In fact, much of the most recent physics news has related to quantum physics, and the latest presentations are therefore far better able to fit quantum physics into an overall physics theory that has some hope of making sense of its weirdnesses.  As far as the theory of algorithms goes, I have a much stronger background, thanks to my luck in taking courses from Profs. Hopcroft and Hartmanis, two of the true greats of computer science and applied mathematics in their insights and their summarizations of the field of “theory of algorithms and computing.”  And, of course, I have tried to keep an eye on developments in the field since.
So where’s the fun?  Well, it is the wedding of the two areas listed above – quantum physics and the theory of algorithms – that produces some mind-bending ways of looking at the world.  Here, I’ll touch only on the parts relevant to quantum computing. 

The Quantum Physics of Quantum Computing

I think it’s safe to say that most if not all of us grew up with the nice, tidy world of the atom with its electrons, protons, and neutrons, and its little solar system in which clouds of electrons at various distances orbited a central nucleus.  Later decomposition of the atom all the way to quarks didn’t fundamentally challenge this view – but quantum physics did and does.
Quantum physics says that at low energy levels the unit is the “quantum,” and that this quantum is not so much a discrete unit as a collection of probabilities of the particle’s possible “states,” summing to one.  As we combine more and more quanta and reach higher energy levels in the real world, the average behavior of these quanta comes to dominate, and so we wind up in the world we perceive, with seemingly stable solar-system atoms, because the odds of an entire atom flickering out of existence or changing fundamentally are so small.
Now, some quanta change constantly, and some don’t.  For those that don’t, it is useful for quantum computing purposes to think of their collection of probabilities as a superposition of “states”.  One qubit, for example, can be thought of as a superposition of two possible spin states of an electron – effectively, “0” and “1”.  And now things get really weird:  two quanta can become “entangled”, either by direct physical interaction with other quanta or (according to Wikipedia), at least conceptually, through the act of measuring what the state of the qubit really is.  To put it another way, the moment we measure the qubit, the qubit tells us what its value really is – 0 or 1 – and it is no longer a collection of probabilities.  [Please bear in mind that this is a model of how quanta behave, not their actual behavior – nobody yet knows what really happens.]
It’s quantum entanglement by direct physical interaction pre-measurement that really matters to quantum computing – and here things get even weirder.  Say you have two qubits entangled with each other.  Then quantum entanglement means that no matter how far apart the two qubits are, when you measure them simultaneously the discovered value of one is some function of the discovered value of the other (for example, the two will always yield the same value) – what Einstein called “spooky action at a distance.”  Physicists have already shown that if there is any communication going on between the two at all, it happens at many times – at least hundreds of times – the speed of light. 
The meaning of this for quantum computing is that you can create a computer with two qubits, and that quantum computer will be in a superposition of four states (two per qubit, multiplied together), each with its own probability.  You can then perform various operations in parallel on the qubits, changing the probabilities of those states accordingly, and get an output (effectively, by measuring the qubits).  That output will have a certain probability of being right.  Recreate the qubits and do it again; after several iterations, you very probably have the right answer.
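Here is a minimal simulation sketch of that picture, my own illustration in Python (not IBM's hardware or any real quantum API): two entangled qubits are represented as a vector of four amplitudes, and a "measurement" samples one of the four outcomes with the corresponding probability.

```python
# A minimal sketch of "two qubits = four probabilistic states": the joint state
# is a vector of 4 amplitudes; measuring samples one outcome with probability
# |amplitude|^2.  Re-preparing and re-measuring plays the role of iteration.
import numpy as np

rng = np.random.default_rng(0)

# An entangled two-qubit state: equal parts |00> and |11>, nothing else.
state = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

probs = np.abs(state) ** 2                    # [0.5, 0, 0, 0.5]
outcomes = ["00", "01", "10", "11"]

# "Run" the computer many times: each run re-prepares and measures the state.
counts = {o: 0 for o in outcomes}
for _ in range(1000):
    counts[rng.choice(outcomes, p=probs)] += 1
print(counts)   # roughly 500 "00" and 500 "11" -- the two qubits always agree
```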
One final important fact about qubits:  After a while, it is extraordinarily likely that they will decohere – that is, they will lose their original states.  One obvious reason for this is that in the real world, quanta are wandering around constantly, and so when they wander by a qubit during a computation they change that qubit’s stored state.  This is one of the things that make creating an effective quantum computer so difficult – you have to isolate your qubits carefully to avoid them decohering before you have finished your computation.

The Algorithmic Theory of Quantum Computing

 So now, hopefully, you are wondering how this strange quantum world of probabilities at a distance translates to faster computing.  To figure this out, we enter the equally strange world of the theory of algorithms.
The theory of algorithms begins with Godel’s Incompleteness Theorem.  Kurt Godel is an unsung giant of mathematics – one Ph.D. student’s oral exam consisted in its entirety of the request “Name five theorems of Godel”, the point being that each theorem opened up an entirely new branch of mathematics.  In terms of the theory of algorithms and computing, what Godel’s Incompleteness Theorem says, more or less, is that there are certain classes of problems (involving infinity) for which one can never create an algorithm (solution) that will deliver the right answer in all cases.  Because there is a one-to-one mapping between algorithms and mathematical solutions, we say such problems are unsolvable.  The mathematician David Hilbert, at the start of the 20th century, posed 23 hard problems for mathematicians to solve; several of them have since been proven to be unsolvable, and hence uncomputable.
By the 1960s, people had used similar methods to establish several more classifications of problem “hardness”.  The most important distinction was between problems whose minimum solution time is exponential (some constant times 2**n, where n is, more or less, the size of the problem input) and those whose minimum time is polynomial (some constant times n**q, for some fixed exponent q).  In the typical case, once n got large – a few dozen is already enough for doubly exponential problems such as deciding Presburger arithmetic – such problems were effectively unsolvable in any time short of the end of the universe.
It has proved extraordinarily hard to figure out where the boundary between polynomial and exponential algorithms lies.  One insight has been that if one pursues all the possible solutions to some problems on some imagined computer offering infinite parallelism, they can be solved in polynomial time – the “NP” class of problems.  Certain real-world problems like the traveling salesman problem (figure out the minimum-cost travel sequencing to n destinations for any traveling salesperson) are indeed NP-solvable.  But no one can figure out a way to determine if those problems are solvable in polynomial time without such parallelism – whether P=NP.  In any case, it would be a major boon if some computing approach allowed P-time solution of at least some of those problems.
But even if we can’t find such an approach via quantum computing, it would still be very worthwhile for all classifications of algorithms (except unsolvable, of course!) to find a way to get to an “approximate” solution faster.  Thus, for example, if we can come to a solution to the traveling salesman problem that 99 times out of 100 is within a small distance of the “real” best solution, and do so in polynomial time on average, then we will be happy with that.  And thus, despite its probabilistic nature – or possibly because of it – quantum computing may help with both the weird world of “presently effectively unsolvable” exponential-time problems and with the more normal world of speeding up solvable ones.

How All This Applies To Today’s Quantum Computing


The world of qubits as it exists today appears to have two advantages over “classical” computers in attempting to speed up computation:
1. Greater parallelism.  That is, you don’t sequence the computations and their outputs; instead, each operation acts on all of the qubits’ superposed states at essentially the same time.

2. The ability to get “close enough” probabilistically.  As noted above, you can choose how much likelihood of being right you want, and as long as it isn’t absolute certainty, the number of iterations needed to get a “good enough” answer should on average be less than the number of computations required in classical computing.
However, there are two significant disadvantages, as well:
(a) The “sweet spot” of quantum computing is the case where the number of inputs (my interpretation:  the number of needed “states” in the quantum computer) is equal to the number of outputs needed to reach a “good enough” answer.  Outside of that “sweet spot”, quantum computing must use all sorts of costly contortions, e.g., repetition of the entire computation the same number of times when the number of states does not cover all the possible outcomes you are examining to find the right answer.
(b) When you need a very high probability or certainty of getting the right answer, or, failing that, you need to know how likely you are to be very close to the correct answer, you need a lot more runs of your quantum computer.
To see how this might work out in practice, let’s take the “searching the database” problem and use (more or less) Grover’s Algorithm (no Sesame Street jokes, please).  We have 1,024 different values in 1,024 different slots in our database, and we want to find out if the number “123456” is in the database, and if so, where (sorry, it’s a small database, but no bit-mapped indexing allowed).  We declare each possibility equally likely, so there is a 1/1024 chance of each outcome.  We use 10 qubits to encode each such possibility as an equally probable “state”.

The classical computing algorithm may have to find-and-compare up to all 1,024 values before coming to a conclusion.  Grover’s Algorithm says that the quantum computer will take on the order of sqrt(1024), or about 32, iterations before (on average) it has found the right answer/location.  Score one for quantum computing, right?
Well, leave aside for a moment the flaws of my example and the questions it raises (e.g., why can’t a classical computer do the same sort of probabilistic search?).  Instead, let’s note that in this example the disadvantages of quantum computing are minimized.  I configured the problem to sit in quantum computing’s algorithmic “sweet spot”, and effectively asked only that there be a greater than 50% probability of having the right answer.  If all you want to know is where the data is located, then you will indeed take about 32 iterations, on average, to find that out.  If you want to be sure that the number is or isn’t in the database, then your best solution is the classical one – 1,024 find-and-compares, without the added baggage of restarting that quantum computing brings with it.
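For the curious, here is a small numerical sketch of the ideal Grover iteration on this example. It is my own illustration, with an arbitrarily chosen target slot; note that the textbook iteration count is about (pi/4)*sqrt(N), roughly 25 for N = 1024, the same order of magnitude as the sqrt(N) figure of about 32 used above.

```python
# Numerical sketch of Grover-style amplitude amplification for the 1,024-slot
# search example.  The target index is an assumption for illustration; this
# simulates the ideal algorithm on a state vector, not real hardware.
import numpy as np

N = 1024                 # 10 qubits -> 1,024 equally probable "slots"
target = 123             # index of the slot holding the value we seek (assumed)

# Start in the uniform superposition: every slot has amplitude 1/sqrt(N).
state = np.full(N, 1.0 / np.sqrt(N))

optimal = int(round(np.pi / 4 * np.sqrt(N)))   # ~25 iterations for N = 1024
for _ in range(optimal):
    state[target] *= -1.0                      # oracle: flip the target's sign
    state = 2 * state.mean() - state           # diffusion: invert about the mean

print(f"after {optimal} iterations, P(measure target) = {state[target]**2:.3f}")
# Typically prints ~0.999 -- versus ~512 compares on average for a classical
# linear scan, in line with the "order of sqrt(N)" point above.
```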
And I haven’t even mentioned that so far, we’ve only scaled up to 5 qubits available for researchers, and that as we scale further and our computations take longer, decoherence becomes even more of a problem. 
So that’s why I dropped that downer of a conclusion above.  It’s early days yet; but there’s a lot of difficult work still to come in scaling quantum computing, with no guarantee of success, and the range of long-term use cases still appears less than broad – and possibly very narrow.

Oh, Well, Fun Take-Aways


So while we wait and wait, here are some wonderful weirdnesses of the wedding of quantum physics and theory of algorithms to think about:
· You can’t be in two places at once.  Not.  Simply create two quantum copies of yourself in your present “state.”  Keeping them isolated (details, details), move one of them halfway around the world.  Measure both simultaneously.  They will both be identical at that moment in time.

· Things are either true or false.  Not.  Things are either true, or they are false, or whether they are true or false is undecidable.

· Explanations of quantum computing are often incoherent, and whether they convey anything of value is undecidable.

Thursday, May 5, 2016

IBM and Blockchain: Limited But Real Value


Last week, IBM announced an expansion of its blockchain capabilities to fully support secure networks for whatever uses its customers find for blockchain technology.  Key features include support on IBM Cloud for developers, management of the environment aimed at rapid deployment via IBM Bluemix, and enhancements to the Linux Foundation’s open-source HyperLedger Project code for those seeking to implement blockchain use cases at least partly outside the IBM software/services umbrella.  My initial take is that IBM is laying a solid foundation for blockchain use.  More important, with help like this, customers of blockchain technology may well find that it delivers real value-add in limited but nevertheless concrete ways.  That is, blockchain implementations built on foundations such as IBM’s will deliver much less than the technology’s most enthusiastic proponents claim, but can deliver it more reliably and immediately than, say, data lakes can at a comparable stage in their lifecycle.

It Begins With the Blockchain Technology


At present, the word “blockchain” is surrounded by clouds of hype and techie jargon that make its nature unusually difficult to understand.  However, reading between the lines, it appears that blockchain technology has two key new approaches:

1. A “blockchain” data unit that lends itself flexibly to use in medium-scale applications involving many-to-many business-to-business processes, such as “smart contract” management or business-to-business asset exchange involving “clearing” and the like.

2. A fully-replicated distributed database that allows each participant a full view of the “state” of the “blockchain” data and automates some of the security involved, e.g., the ability to block a single “bad actor” from altering the data by requiring “consensus” among all copies of the replicated data as to the validity of a new transaction.
More specifically, a typical “blockchain” implementation creates a string of data together with the transactions on it, each part of the string being a “block” containing a transaction and the resulting data, and each block containing a “hash” of the previous block, in effect a backward pointer to the transaction/data block previous in time/sequence to this one.  Thus, each blockchain contains its own readily accessible “audit trail”, copied to every node and hence every user.
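As an illustration of that hash-linked structure, here is a generic sketch of the idea in Python; it shows the general principle, not IBM's or HyperLedger's actual block format.

```python
# Minimal sketch of a hash-linked chain: each block stores a transaction, the
# resulting data, and a hash of the previous block, so the chain doubles as
# its own audit trail.  Field names here are illustrative assumptions.
import hashlib
import json

def make_block(transaction, resulting_data, prev_block):
    prev_hash = hashlib.sha256(
        json.dumps(prev_block, sort_keys=True).encode()
    ).hexdigest() if prev_block else "0" * 64
    return {"tx": transaction, "data": resulting_data, "prev_hash": prev_hash}

# Build a tiny chain: each block points back at the one before it.
genesis = make_block("create asset A", {"owner": "Alice"}, None)
b1 = make_block("transfer A to Bob", {"owner": "Bob"}, genesis)
b2 = make_block("transfer A to Carol", {"owner": "Carol"}, b1)

# Tampering with an earlier block breaks every later prev_hash -- that is the
# "readily accessible audit trail" property described above.
genesis["data"]["owner"] = "Mallory"
recomputed = hashlib.sha256(json.dumps(genesis, sort_keys=True).encode()).hexdigest()
print(recomputed == b1["prev_hash"])   # False: the tampering is detectable
```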
On the networking side, as noted, the distributed database uses pure peer-to-peer networking with no central manager or “master record”.  Thus, more or less, according to the description in Wikipedia, if one transaction comes in on one node saying the resulting block should be as follows and another disagrees, all nodes need to look at what all the other nodes are saying (step 1) and determine what the “consensus” is (step 2).  They can then all independently create a new block, because they will all come to the same conclusion.  The result is that if one “bad actor” comes in and tries to corrupt the data, the consensus will reject that attempt every time it happens, no matter what the alias of the bad actor or how many times the bad actor tries, or even if the bad actor keeps switching the node of access.
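And here is a toy sketch of the consensus principle, again an illustration of the general idea rather than the actual HyperLedger protocol, showing why a lone bad actor's version of a block never wins.

```python
# Toy "consensus" step: every node proposes what it thinks the next block
# should be, and each node independently adopts the majority view, so a single
# bad actor's version is rejected no matter which node it connects through.
from collections import Counter

def consensus(proposals):
    """Return the proposed block a majority of nodes agree on, else None."""
    winner, votes = Counter(proposals).most_common(1)[0]
    return winner if votes > len(proposals) / 2 else None   # no majority -> no commit

honest_block = "transfer A to Bob"
forged_block = "transfer A to Mallory"

# Four honest nodes and one bad actor: the forged version never wins.
proposals = [honest_block, honest_block, honest_block, honest_block, forged_block]
print(consensus(proposals))   # "transfer A to Bob"
```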
The limitations of blockchain technology also come from these two new approaches.  For one thing, the “blockchain” data unit’s use of a “chain” approach to data access means that the database is simply not as scalable in most if not all cases.  There is a reason that most databases optimize direct access to each data unit:  the time taken to go backwards through the whole chain as a series of “reads” is much greater than the time for going directly to the data unit at the end of the chain.  IBM clearly signals this limitation on scalability by noting its ability to scale to “thousands of users” – laughably small for a consumer application, but quite appropriate for many business-to-business applications.
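A tiny sketch of why that matters, purely illustrative: finding a record by walking back along a hash-linked chain takes time proportional to the chain's length, while the direct, indexed access most databases optimize for is effectively constant-time.

```python
# Chain walk (O(chain length)) versus direct indexed lookup (effectively O(1)).
# The toy chain below is an assumption purely for timing illustration.
import time

chain = [{"id": i, "prev": i - 1} for i in range(200_000)]   # toy chain
index = {block["id"]: block for block in chain}              # toy direct index

t0 = time.perf_counter()
found = next(b for b in reversed(chain) if b["id"] == 5)     # walk back the chain
t1 = time.perf_counter()
_ = index[5]                                                 # direct lookup
t2 = time.perf_counter()
print(f"chain walk: {t1 - t0:.4f}s, direct lookup: {t2 - t1:.6f}s")
```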
On the networking side, the requirement of “consensus” can significantly slow the process of committing a transaction – after all, one of the participants may not be responsive at one time or another.  Therefore, blockchain technology is less appropriate where the fastest possible transaction performance is needed, and more appropriate in automating existing approaches such as “clearing” (netting out transactions between businesses and then having each participant pay/receive the differences), which can take up to 3 days in some banking apps.

What Users May Wish to Watch Out For


As noted above, blockchain technology can deliver real benefits, particularly in security, speed-to-perform, and flexibility/ease of implementation and upgrade, for business-to-business many-to-many applications – but it probably will require extra care in implementation.  One area that I believe users should consider carefully is the long-term scalability of the blockchain technology.  It should be possible to get around the limitations of a blockchain data unit by writing to a “blockchain” veneer that looks like a blockchain but acts like a regular database (in fact, this is what a relational database does at the storage level).  However, this means that the implementation must use such a veneer religiously, and avoid third-party code that “hard-codes” blockchain access.
A second implementation detail that I recommend is pre-defining the blockchain technology’s links to existing systems.  In many cases blockchain data will need to be kept in sync with parallel updates going on in IT’s existing databases.  It’s not too early to begin to think how you’re going to do that.  Moreover, the blockchain approach is not a security cure-all, and therefore requires careful integration with existing security systems.
A third implementation “key decision” is whether to use some version of Bitcoin in implementing a blockchain solution.  I would recommend against this.  First, digital currencies modeled after Bitcoin have been shown in the real world to be vulnerable to “gaming the system”; in economic terms, unlike a good paper currency, they do not serve as a good “store of value”.  Second, Bitcoin has scaled well beyond the “thousands of users” consensus needed for the business-to-business apps discussed here – but, in order to do so, it now consumes a very significant amount of network and computing resources across the Internet.  If you use Bitcoin, you must connect to that very large Bitcoin network, which means that your performance may well be adversely impacted by the constant demands of Internet-wide consensus.

A Different Viewpoint


Interestingly, Steve Wilson of Constellation Research has recently come out with a strongly worded argument against business use of blockchain technology, with two key arguments:  (1) It actually adds nothing to corporate security; and (2) It does nothing that the business’ existing database can’t do. 
While I find myself in agreement with many of his technical viewpoints, I find that he overstates the effectiveness in the use case I have described above of present corporate security and of existing business databases.  Over the years, such functions as contract management and “episode management” in health care have been relatively ill-served by being lumped in with the rest of corporate security and implemented typically in relational databases.  The blockchain model is more like an “accounting ledger”, and the ability to use such a model in business-to-business interactions as well as for security inside and outside each participating business is a key long-term source of usefulness for today’s businesses.  To put it another way, the trend in business is away from the “one size fits all” database and towards “databases for specific purposes” that can add additional “domain” security and “ease of development”.
All in all, as long as businesses are savvy about implementation, there does appear to be an important use case for blockchain technology.  In which case, support like IBM’s is right on the mark.

Monday, May 2, 2016

And the CO2 Keeps Rising Faster

It appears pretty evident from the Mauna Loa weekly data that the April CO2 level will average around 407.5 ppm.  That’s more than 4 ppm above April 2015.  Moreover, with March having come in below 405 ppm, this will be the first time on record that we have passed the 405, 406, and 407 ppm marks in a single month.  Based on past behavior, it seems likely that May 2016 will be above 408 ppm.  If memory serves me right, that will be less than 1½ years since the monthly average was last below 400 ppm.  And finally, given that the underlying CO2 “rate of rise” is much greater than in 1998, there is a strong possibility that we will cross 410 ppm on a monthly basis next April or May – less than 2½ years after we were under 400 ppm.
Guesstimating and extrapolating the new “underlying rate of rise” farther out into the future, at a conservative 2.5 ppm/year to a still realistic 3.3 ppm/year, we reach 420 ppm in 2020 or 2021.  Why is this significant?  It marks the point at which atmospheric carbon reaches 50% above its pre-industrial value of about 280 ppm.  Most estimates of the warming effect of atmospheric carbon base themselves on a doubling, with somewhere between 2 and 2.8 degrees C of warming as a direct effect of the carbon, and between 4 and 5 degrees C (delayed) due to related increases in other GHGs (greenhouse gases) and additional warming from changes in the Earth’s albedo.  The best estimates are based on previous episodes of large global warming 55 and 250 million years ago, which according to Hansen et al. eventually produced about 4 degrees C of global temperature increase per doubling of atmospheric carbon.  [Note that this may underestimate the effect of today’s atmospheric carbon increases, which are occurring over a period hundreds of times shorter than those previous global warming episodes.]  If we follow Hansen et al., then, we will have 2.67 degrees C of warming already “baked in” as of 2021.  Somewhere between 1.33 and 1.75 degrees C will happen pretty quickly, apparently by 2050; and while we don’t know how fast the rest will happen, much of it will happen in the period 2050-2100. 
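The arithmetic behind that 2020-or-2021 crossing is easy to check. Here is a quick sketch, assuming the roughly 407.5 ppm April 2016 level and the 2.5 to 3.3 ppm/year range of underlying rise used above.

```python
# Quick check of the 420 ppm crossing year, using the post's starting point
# (~407.5 ppm in 2016) and its assumed 2.5-3.3 ppm/yr underlying rate of rise.
import math

start_year, start_ppm, target_ppm = 2016, 407.5, 420.0

for rate in (2.5, 3.3):                       # ppm per year
    years_needed = (target_ppm - start_ppm) / rate
    crossing = start_year + math.ceil(years_needed)
    print(f"at {rate} ppm/yr: cross 420 ppm around {crossing}")
# Prints ~2021 at 2.5 ppm/yr and ~2020 at 3.3 ppm/yr, matching the text.
```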
We may also note a recent Hansen et al. article that revises the estimated sea level rise due to this warming to between 5 and 9 feet by 2100 – and, again, the model that projects this, as noted by the authors, may still underestimate the effects of “business as usual”.   Finally, scientists have now noted several new catastrophes that would happen even with a 2 degrees C rise.
We continue to underestimate what the planet will do to us on our present path.
That is all.