Wednesday, September 21, 2016

August 16th, 2070: Rising Waters Flood Harvard Cambridge Campus, Harvard Calls For More Study of Problem

A freak nor’easter hit the Boston area yesterday, causing a 15-foot storm surge that overtopped the already precarious walls near the Larz Anderson Bridge.  Along with minor ancillary effects such as the destruction of much of Back Bay, the Boston Financial District, and Cambridgeport, the rampaging flood destroyed the now mostly-unused eastern Harvard Business School and Medical School, as well as the eastern Eliot and Lowell Houses.  The indigent students now housed in Mather House can reach only the dorms on floors 5 and above, and only by boat, although according to Harvard President Mitt Romney III the weakened structural supports should last at least until the end of the next school year.
A petition was hastily assembled by student protest groups last night, and delivered to President Romney at the western Harvard campus at Mt. Wachusett around midnight, asking for immediate mobilization of school resources to fight climate change, full conversion to solar, and full divestment from fossil-fuel companies. President Romney, whose salary was recently raised to $20 million due to his success in increasing the Harvard endowment by 10% over the last two years, immediately issued a press release stating that he would gather “the best and brightest” among the faculty and administration to do an in-depth study and five-year plan for responding to these developments.  Speaking from the David Koch Memorial Administrative Center, he cautioned that human-caused climate change remained controversial, especially among alumni.  He was seconded by the head of the Medical School, speaking from the David Koch Center for the Study of Migrating Tropical Diseases, as well as the head of the Business School, speaking from the David Koch Free-Market Economics Center. 
President Romney also noted that too controversial a stance might alienate big donors such as the Koch heirs, whose donations in turn allow indigent students to afford the $100,000 yearly tuition and fees.  He pointed out that as a result of recent necessary tuition increases, and the decrease in the number of students from China and India able to afford them due to the global economic downturn, the endowment was likely to be under stress already, and that any further alienation of alumni might mean a further decrease in the number of non-paying students.  Finally, he noted the temporary difficulties caused by payment of a $50 million severance package for departing President Fiorina.
Asked for a comment early this morning, David Koch professor of environmental science Andrew Wanker said, “Reconfiguring the campus to use solar rather than oil shale is likely to be a slow process, and according to figures released by the Winifred Koch Company, which supplies our present heating and cooling systems, we have a minimal impact on overall CO2 emissions and conversion will be extremely expensive.”  David Koch professor of climate science Jennifer Clinton said, “It’s all too controversial to even try to tackle.  It’s such a relief to consider predictions of further sea rise, instead.  There, I am proud to say, we have clearly established that the rest of the eastern Harvard campus will be underwater sometime within the next 100 years.”
In an unrelated story, US President for Life Donald Trump III, asked to comment on the flooding of the Boston area, stated “Who really cares?  I mean, these universities are full of immigrant terrorists anyway, am I right?  We’ll just deport them, as soon as I bother to get out of bed.”

Monday, September 19, 2016

HTAP: An Important And Useful New Acronym

Earlier this year, participants at the second In-Memory Summit frequently referred to a new marketing term for data processing in the new architectures:  HTAP, or Hybrid Transactional-Analytical Processing.  That is, “transactional” (typically update-heavy) and “analytical” (typically read-heavy) handling of user requests are thought of as loosely coupled, with each database engine somewhat optimized for cross-node, networked operations. 

Now, in the past I have been extremely skeptical of such marketing-driven “new acronym coinage,” as it has typically had underappreciated negative consequences.  There was, for example, the change from “database management system” to “database”, which has caused unending confusion about when one is referring to the system that manages and gives access to the data, and when one is referring to the store of data being accessed.  Likewise, the PC notion of “desktop” has meant that most end users assume that information stored on a PC is just a bunch of files scattered across the top of a desk – even “file cabinet” would be better at getting end users to organize their personal data.  So what do I think about this latest distortion of the previous meaning of “transactional” and “analytical”?

Actually, I’m for it.

Using an Acronym to Drive Database Technology

I like the term for two reasons:

1.       It frees us from confusing and outdated terminology, and

2.       It points us in the direction that database technology should be heading in the near future.

Let’s take the term “transactional”.  Originally, most database operations were heavy on the updates and corresponded to a business transaction that changed the “state” of the business:  a product sale, for example, reflected in the general ledger of business accounting. However, in the early 1990s, pioneers such as Red Brick Warehouse realized that there was a place for databases that specialized in “read” operations, and that functional area corresponded to “rolling up” and publishing financials, or “reporting”.  In the late 1990s, analyzing that reporting data and detecting problems were added to the functions of this separate “read-only” area, resulting in Business Intelligence, or BI (similar to military intelligence) suites with a read-only database at the bottom.  Finally, in the early 2000s, the whole function of digging into the data for insights – “analytics” – expanded in importance to form a separate area that soon came to dominate the “reporting” side of BI. 

So now let’s review the terminology before HTAP.  “Transaction” still meant “an operation on a database,” whether its aim was to record a business transaction, report on business financials, or dig into the data for insights – even though the latter two had little to do with business transactions.  “Analytical”, likewise, referred not to monthly reports but to data-architect data mining – even though those who read quarterly reports were effectively doing an analytical process.  In other words, the old words had pretty much ceased to describe what data processing is really doing these days.

But where the old terminology really falls down is in talking about sensor-driven data processing, such as in the Internet of Things.  There, large quantities of data must be ingested via updates in “almost real time”, and this is a very separate function from the “quick analytics” that must then be performed to figure out what to do about the car in the next lane that is veering toward one, as well as the deeper, less hurried analytics that allows the IoT to do better next time or adapt to changes in traffic patterns.

In HTAP, transactional means “update-heavy”, in the sense of both a business transaction and a sensor feed.  Analytical means not only “read-heavy” but also gaining insight into the data quickly as well as over the long term.  Analytical and transactional, in their new meanings, correspond to both the way data processing is operating right now and the way it will need to operate as Fast Data continues to gain tasks in connection to the IoT.

But there is also the word “hybrid” – and here is a valuable way of thinking about moving IT data processing forward to meet the needs of Fast Data and the IoT.  Present transactional systems operating as a “periodic dump” to a conceptually very separate data warehouse are simply too disconnected from analytical ones.  To deliver rapid analytics for rapid response, users also need “edge analytics” done by a database engine that coordinates with the “edge” transactional system.  At the same time, transactional and analytical systems cannot operate in lockstep as part of one engine, because each technological advance on the transactional side cannot be held up waiting for a new revision of the analytical side, or vice versa.  HTAP tells us that we are aiming for a hybrid system, because only that has the flexibility and functionality to handle both Big Data and Fast Data.
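To make the “hybrid” idea concrete, here is a minimal Python sketch of the pattern I have in mind: one live store whose update-heavy ingest path and read-heavy “edge analytics” path share the same data, with no periodic dump to a separate warehouse in between.  The class and method names are mine for illustration only, not any vendor’s actual HTAP engine.

```python
import threading
from collections import defaultdict, deque

class HybridStore:
    """Toy HTAP-style store: one structure takes update-heavy writes
    (the 'transactional' side) and answers read-heavy aggregate queries
    (the 'analytical' side), with no periodic dump in between."""

    def __init__(self, window=1000):
        self._lock = threading.Lock()
        self._recent = deque(maxlen=window)   # recent raw events for edge analytics
        self._totals = defaultdict(float)     # incrementally maintained aggregates

    def ingest(self, sensor_id, value):
        # Transactional path: cheap, per-event, update-heavy.
        with self._lock:
            self._recent.append((sensor_id, value))
            self._totals[sensor_id] += value

    def rolling_average(self, sensor_id):
        # Analytical path: "edge analytics" over recent events,
        # answered from the live store rather than a separate warehouse.
        with self._lock:
            vals = [v for s, v in self._recent if s == sensor_id]
        return sum(vals) / len(vals) if vals else None

    def running_total(self, sensor_id):
        # Longer-term analytics can read the maintained aggregate directly.
        with self._lock:
            return self._totals[sensor_id]

store = HybridStore()
for i in range(50):
    store.ingest("lane_sensor_7", 0.1 * i)    # hypothetical sensor feed
print(store.rolling_average("lane_sensor_7"), store.running_total("lane_sensor_7"))
```

In a real deployment the two paths would be separate but coordinated engines rather than one class; the point is only that neither side waits on a batch dump from the other.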

The Bottom Line

I would suggest that IT shops looking to take next steps in IoT or Fast Data try adopting the HTAP mindset.  This would involve asking oneself:

·         To what degree does my IT support both transactional and analytical processing by the new definition, and how clearly separable are they?

·         Does my system for IoT involve separate analytics and operational functions, or loosely-coupled ones (rarely today does it involve “one database fits all”)?

·         How well does my IT presently support “rapid analytics” to complement my sensor-driven analytical system?

If your answer to all three questions puts you in sync with HTAP, congratulations:  you are ahead of the curve.  If, as I expect, in most cases the answers reveal areas for improvement, those improvements should be a part of IoT efforts, rather than attempts to patch the old system a little to meet today’s IoT needs.  Think HTAP, and recognize the road ahead.

Saturday, September 17, 2016

The Climate-Change Dog That Did Bark In The Night: CO2 Continues Its Unprecedented 6-Month Streak

In one of Conan Doyle’s Sherlock Holmes mysteries, an apparent theft of a racehorse is solved when Holmes notes “the curious incident of the dog in the night-time” – the point being that the guard dog for the stable did not bark, showing that the only visitor was known and trusted.  In some sense, CO2 is a “guard dog” for oncoming climate change, signaling future global warming when its increases overwhelm the natural Milankovitch and other climate cycles.  It is therefore distressing to note that in the last 6 months the dog has barked very loudly indeed:  CO2 in the atmosphere has increased at an unprecedented rate.

And this is occurring in the “nighttime”, i.e., at a time when, by all our measures, CO2 emissions growth should be flat or slowing down.  As noted in previous posts, efforts to cut emissions, notably in the EU and China, plus the surge of the solar industry, have seemed to lend credibility to metrics of carbon emissions from various sources that suggest more or less flat global emissions in 2014 and 2015 despite significant global economic and population growth.

What is going on?  I have already noted the possibility that a major el Nino event, such as occurred in 1998, can cause a temporary surge in CO2 growth.  In 1998, indeed, CO2 growth set a record that was not beaten until last year, but in the two years after 1998, CO2 atmospheric ppm growth fell back sharply to nearly the previous level.  By our measures, the el Nino occurring in the first 5 months or so of 2016 was about equal in magnitude to the one in 1998, so one would expect to see a similar short surge.  However, we are almost 4 months past the end of this el Nino, and there is very little sign of any major decrease in growth rate.  It already appears certain that we cannot dismiss the CO2 surge as a short-term blip.

Recent Developments in Mauna Loa CO2

In the last few days, I was privileged to watch the video of Prof. John Sterman of MIT, talking about the “what-if” tool he had developed and made available, in which climate models project CO2 growth and warming depending on how aggressive the national targets for emissions reduction are.  He was blunt in saying that even the commitments coming out of the Paris meeting are grossly inadequate, but he did show how much more aggressive targets could indeed keep total warming to 2 degrees C.  In fact, he was so forthright and well-informed that I could finally hope that MIT’s climate-change legacy would not be the government-crippling misinformation of that narcissistic hack Prof. Lindzen.

However, two of his statements – somewhat true in 2015 but clearly not true at this point in 2016 (the lecture, I believe, was given in the spring of 2016) – stick in my head.  First, he said that we are beginning to approach 1.5 degrees C growth in global land temperature.  According to the latest figures cited by Joe Romm, the most likely global land temperature rise for 2016 will be approximately 1.5 degrees C.  Second, he said that CO2 (average per year) had reached the 400 ppm level – a statement true at this time last year.  As of April-July 2016, however, the average per year has reached between 404 and 405 ppm.

CO2 as measured at the Mauna Loa observatory tends to follow a seasonal cycle, with the peak occurring in April and May, and the trough in September.  In the last few years, at all times of the year, growth year-to-year (measured monthly) averaged slightly more than 2 ppm.  Note that this was true for both 2014 and most of 2015.  Then, around the time that the el Nino arrived, it rose to 3 ppm.  But it didn’t stop there:  In April, a breathtakingly sharp rise of 4.1 ppm took CO2 up to 408 ppm.  And it didn’t stop there:  May and June were likewise near 4 ppm, and the resulting total average rise through August has been almost 3.6 ppm.  September so far continues to be in the 3.3-3.5 range. 
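For readers who want to track this themselves, the arithmetic is simple: compare each month’s Mauna Loa average with the same month a year earlier.  A minimal sketch in Python, using placeholder numbers in the general range discussed above rather than NOAA’s published figures:

```python
# Year-over-year CO2 growth as described above: compare each month's
# Mauna Loa monthly average with the same month a year earlier.
# These values are illustrative placeholders, not NOAA's published data.
ppm_2015 = {"Apr": 403.5, "May": 403.9, "Jun": 402.8}   # hypothetical
ppm_2016 = {"Apr": 407.6, "May": 407.7, "Jun": 406.8}   # hypothetical

for month in ppm_2015:
    growth = ppm_2016[month] - ppm_2015[month]
    print(f"{month}: +{growth:.1f} ppm year over year")

avg = sum(ppm_2016[m] - ppm_2015[m] for m in ppm_2015) / len(ppm_2015)
print(f"average year-over-year growth across these months: {avg:.1f} ppm")
```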

CO2, el Nino, Global Land Temperature:  What Causes What?

Let’s add another factoid.  Over the last 16 or so months, each month’s global land temperature has set a new record.  In fact, July and August (July is typically the hottest month) tied for absolute heat record ever recorded, well ahead of the records set last year.

So here we have three factors:  CO2, el Nino, and variations in global land temperature.  Clearly, in this heat wave “surge”, the land temperature started spiking first, the full force of el Nino arrived second, and the full surge in CO2 arrived third.  On the other hand, we know that atmospheric CO2 is not only a “guard dog”, but also, in James Hansen’s phrase, a “control knob”:  in the long term, for large enough variations in CO2 (which can be 10 ppm in some cases), the global land temperature will eventually follow CO2 by rising or falling in proportion.  Moreover, it seems likely that el Nino’s short-term effect on CO2 must be primarily by raising the land temperature, which does things like expose black carbon on melting ice for release to the atmosphere, or increase the imbalance between carbon-absorbing forest growth and carbon-emitting forest fires by increasing the incidence of forest fires.

But I think we also have to ask whether the effect of increasing CO2 on global temperatures (land plus sea, this time) begins over a shorter time frame than we thought.  The shortest time frame for a CO2 effect suggested by conservative science is perhaps 2,000 years, based on an episode less than a million years ago in which a spike in CO2 was followed by Arctic melting indicative of global warming.  Hansen and others, as well, have identified 360 ppm of atmospheric CO2 as the level at which Arctic sea ice melts out, and we only passed that level about 20 years ago – a paper by a Harvard professor projects that Arctic sea ice will melt out at minimum somewhere between 2032 and 2053.  In other words, we at least have some indication that CO2 can affect global temperature in 50-100 years or so.

And finally, we have scientific work showing that global land temperature increases that melt Arctic sea and land ice affect albedo (e.g., turn white ice into blue water), which in turn increases sea and land heat absorption and hence temperatures, and these changes “ripple down” to the average temperatures of temperate and subtropical zones.  So part of global land temperature increase is caused by – global land temperature increase.  It is for these reasons that many scientists feel that climate models underestimate the net effect of a doubling of CO2, and these scientists estimate that rather than 500 ppm leading to a 2 degree C temperature increase, it will lead to a 4 degree C increase.

I would summarize by saying that while it seems we don’t know enough about the relationship between CO2, el Nino, and global land temperature, it does seem likely that today’s CO2 increase is much more than can be explained by el Nino plus land temperature rise, and that the effects of this CO2 spike will be felt sooner than we think.


If the “guard dog” of CO2 in the atmosphere is now barking so loudly, why did we not anticipate this?  I still cannot see an adequate explanation that does not include the likelihood that our metrics of our carbon emissions are not capturing an increasing proportion of what we put into the air.  That certainly needs looking into.

At the same time, I believe that we need to recognize the possibility that this is not a “developing nations get hit hard, developed ones get by” or “the rich escape the worst effects” story.  If things are happening faster than we expect and may result in temperature changes higher than we expect, then it is reasonable to assume that the “trickle-up” effects of climate change may become a flood in the next few decades, as rapid ecosystem, food, and water degradation starts affecting the livability of developed nations and their ability to feed their own reasonably-well-off citizens. 

The guard dog barks in the night-time, while we sleep.  We have a visitor that is not usual, and should not be trusted or ignored.  I urge that we start paying attention. 

Wednesday, September 14, 2016

In Which We Return to Arctic Sea Ice Decline and Find It Was As Bad As We Thought Seven Years Ago

For the last 3 years or so, I have rarely blogged about Arctic sea ice, because my model of how it worked seemed flawed and yet replacement models did not satisfy. 

Until 2013, measures of Arctic sea ice volume trended downward, exponentially rather than linearly, at minimum (usually in early September).  Then, in 2013, 2014, and 2015, volume, area, and extent measures seemed to rebound above 2012, almost to the levels of 2007, the first “alarm” year.  My model, which was of a cube of ice floating in water and slowly moving from one side of a glass to the other, with seasonal heat increasing yearly applied at the top, bottom, and sides, simply did not seem to reflect what was going on.

And now comes 2016 (the year’s melting is effectively over), and it seems clear that the underlying trends remain, and even that a modified version of my model loosely fits what’s happening.  This has been an unprecedented Arctic sea ice melting year in many ways – and the strangest thing of all may be this glimpse of the familiar.

Nothing of It But Doth Change, Into Something Strange

The line is adapted from Ariel’s song early in Shakespeare’s The Tempest, in which Ariel tells a character of his father’s supposed drowning:  “Full fathom five thy father lies/Of his bones are coral made/ … /Nothing of him that doth fade/But doth suffer a sea-change/Into something rich and strange.”  And, indeed, the changes in this year’s Arctic sea ice saga deserve the title “sea-change.”  Here’s a list:
  1. Lowest sea-ice maximum, by a significant amount, back in late March/April.
  2. Lowest 12-month average.
  3. Unprecedented amount of major storm activity in August.
  4. Possibly greatest amount of melt during July for years when generally cloudy conditions hinder melting.
  5. First sighting of a large “lake” of open water at the North Pole on Aug. 28, when two icebreakers parked next to an ice floe and one took a picture.  Santa wept.
  6. First time since I have been monitoring Arctic sea ice melt when the Beaufort Sea (north of Canada and Alaska) was almost completely devoid of ice to within 5 degrees of the pole.

These events actually describe a coherent story, as I understand it.  It begins in early winter, when unusual ocean heat shows up at the edges of the ice pack, especially around Norway.  The ocean (especially the North Atlantic) has been slowly heating over time, but this time it seemed to cross a threshold:  parts of the Atlantic near Norway stayed ice free all the way through the year, leading to the lowest-ever Arctic sea ice maximum.

In May and early June, the sun was above the horizon but temperatures were still significantly below freezing in the Arctic, so relatively little melting got done. Then, when cloudy conditions descended and stayed late in June or thereabouts, the relative ocean heat counteracted some of the loss of melting energy from the sun.  But another factor emerged:  the thinness of the ice.  Over 2013, 2014, and 2015, despite the lack of summer melt, multi-year ice remained a small fraction of the whole.  So when a certain amount of melt occurred in July and August, water “punched through” in many places, so that melting was occurring not just on the top (air temperature) and bottom (ocean heat) but also the sides (ocean heat) of ice floes.  And this Swiss cheese effect was happening not just at the periphery, but all over the central Arctic – hence the open water at the Pole.

Then came the storms of August.  In previous years, at any time of the year, the cloudiness caused by storms counteracted their heat energy.  This year, the thin, broken ice was driven and battered by waves that packed some of the storms’ energy, melting it further.  And, of course, one of the side effects was to drive sea ice completely out of the Beaufort Sea.

Implications For The Future

It is really hard to find good news in this year’s Arctic sea ice melting season.  Years 2013-2015 were years of false hope, in which it seemed that although Arctic sea ice must eventually reach zero at minimum (defined as less than 1% of the Arctic Ocean covered with ice), we had reached a period of flat sea ice volume, which only a major disturbance such as abundant sunshine in July and August could tip into a new period of decline.

However, the fact of a 2016 volume decrease in such unpromising weather conditions has pretty much put paid to those hopes.  It is hard to see what can stop continued volume decreases, since neither clouds nor storms will apparently do so any longer.  One can argue that the recent el Nino artificially boosted ocean temperatures, although it is not clear how it could have such a strong relative effect; in any case, there is no sign that ocean heat will return to 2013-2015 levels now that the el Nino is over.

Instead, the best we can apparently hope for is a year or two of flatness at the present volume levels if such an el Nino effect exists, not a return to 2013-2015 levels.  My original off-the-cuff projection “if this [volume decreases 2000-2012] goes on” was for zero Arctic sea ice around 2018, and while I agree that the 2013-2015 “pause” makes 2018 very unlikely, 2016 also seems to make any “zero date” after 2030 less likely than zero Arctic sea ice at some point in the 2020s. 

A second conclusion I draw is that my old model, while far overestimating the effects of “bottom heating” pre-2016, now works much better in the “fragmented ice” state of today’s Arctic sea ice in July and August.  In this model, as ice volume approaches zero at minimum, volume flattens out, while extent decreases rapidly and area more rapidly still (unless the ice is compacted by storms, as occurred this year).  This effect will be unclear to some extent, as present measurement instruments can’t distinguish between “melt ponds” caused by the sun’s heat and actual open water.

Finally, the Arctic sea “ice plug” that slowed Greenland glacier melt by pushing back against glacier sea outlets continues to appear less and less powerful.  This year, almost the entire west coast of Greenland was clear of ice by early to mid June – a situation I cannot recall ever happening while I’ve been watching.  Since this speeds glacier flow and therefore melting at its terminus entering the sea, it appears that this decade, like the 1990s and 2000s, will show a doubling of Greenland snow/ice melt.  James Hansen’s model, which assumes this will continue for at least another couple of decades, projects 6-10 feet of sea rise by 2100.  And even this may be optimistic – as I hope to discuss in a follow-on post on 2016’s sudden CO2 rise.

Most sobering of all, in one sense we are already as near to zero Arctic sea ice as makes no difference.  Think of my model of an ice cube floating in water in a glass for a minute, and imagine that instead of a cube you see lots of thin splinters of ice.  You know that it will take very little for that ice to vanish, whereas if the same volume of ice were still concentrated in one cube it will take much more.  By what I hear of on-the-ground reports, much of the remaining ice in the Arctic right now is those thin chunks of ice floating in water. 
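To illustrate the point, here is a toy numerical version of that model: the same total ice volume, receiving the same seasonal melt, survives for years as one thick block but vanishes within a few years as thin splinters, because thin floes reach zero thickness sooner and lose proportionally more area from their sides.  The rates and units below are arbitrary illustrations of mine, not calibrated to real Arctic data.

```python
# Toy version of the floating-ice analogy: same total volume, same melt
# per season, compared as one thick slab versus many thin fragments.

def melt_season(floes, thin_by=0.12, edge_by=0.05):
    """floes: list of (area, thickness) pairs.  Each season removes
    `thin_by` of thickness (top + bottom melt) and shrinks area in
    proportion to each floe's perimeter (side melt)."""
    survivors = []
    for area, thickness in floes:
        thickness -= thin_by
        area -= edge_by * 4 * area ** 0.5   # perimeter-proportional side melt
        if thickness > 0 and area > 0:
            survivors.append((area, thickness))
    return survivors

def volume(floes):
    return sum(a * t for a, t in floes)

block = [(100.0, 2.0)]                          # one thick slab, volume 200
splinters = [(0.5, 1.0) for _ in range(400)]    # same total volume, thin fragments

for year in range(1, 11):
    block = melt_season(block)
    splinters = melt_season(splinters)
    print(f"year {year:2d}: block volume {volume(block):6.1f}, "
          f"splinter volume {volume(splinters):6.1f}")
```

The splinters are gone within a handful of “seasons” while the block barely notices – which is the sense in which we are already nearly at zero.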

“Because I do not hope to turn again/Because I do not hope/ … May the judgment not be too heavy on us/ … Teach us to care and not to care.”  T.S. Eliot, Ash Wednesday

Friday, May 27, 2016

Climate Change: More “Business As Unusual”

The extreme highs in atmospheric carbon continue to occur.  This week is virtually certain to surpass 408 ppm, and therefore the month as a whole is virtually certain to surpass 407.6 ppm, a new historic high, and probably more than 4 ppm greater than last year.  If trends continue, the increase in ppm will probably set another new record, being much greater during the first five months of this year than last year’s average gain of 3.05 ppm (my off-the-cuff estimate so far is 3.5 ppm).  To repeat, this is a greater increase and percentage increase than that during the last comparable el Nino.  This calls into question, I repeat, whether our present efforts (as opposed to those to which Paris climate talks committed) are really doing anything significant to avoid “business as usual.”

Meanwhile, Arctic sea ice extent is far below even the previous record low for this time of year.  The apparent cause is increased sea and air temperatures plus favorable melting conditions in both the North Atlantic and North Pacific, partially supplemented by unusually early “total melt” on land in both Canada/Alaska and Scandinavia/Russia.  Unless present weather trends sharply change over the next 3 months, new record Arctic ice lows are as likely as not, and “ice-free” (less than 1 million sq kilometers) conditions are for the first time a (remote) possibility.

Finally, here are the climate change thoughts of the Republican candidate for next President of the United States, with commentary (which I echo) from David Roberts:
[The excerpt starts with footage from a press conference, followed by an introduction, followed by Trump’s speech]
This is ... not an energy speech.
Trump now arguing that coal mining is a delightful job that people love to do.
Trump wants the Keystone pipeline, but he wants a better deal -- 'a piece of the profits for Americans.'
Facepalm forever.
'We're the highest taxed nation, by far.' That is flatly false, not that anyone cares.
75% of federal regulations are 'terrible for the country.' Sigh.
'I know a lot about solar.' Followed by several grossly inaccurate assertions about solar.
I can't believe I just got suckered into watching a Trump press conference.
And now I'm listening to bad heavy metal as I wait for the real speech. This day has become hallucinatory.
Speech finally getting under way. Oil baron Harold Hamm here to introduce Trump.
Oh, good, Hamm explained energy to Trump in 30 minutes. We're all set.
Here's ND's [North Dakota] own Kevin Cramer, representing the oil & gas industry. Thankfully, he's gonna keep it short.
What even is this music?
Trump loves farmers. 'Now you can fall asleep while we talk about energy.' Wait what.
Trump now repeating Clinton coal 'gaffe,' a story the media cooked up & served to him. Awesome, media.
Honestly, Trump sounds like he's reading this speech for the first time. Like he's reacting to it, in asides, as he reads it.
Trump promises 'complete American energy independence -- COMPLETE.' That is utter nonsense.
This is amazing. He is literally reading oil & gas talking points, reacting to them in real time. We're all discovering this together.
Clinton will "unleash" EPA to "control every aspect of our lives." Presumably also our precious bodily fluids.
Trump is having obvious difficulty staying focused on reading his speech. He so badly wants to just do his freestyle-nonsense thing.
Can't stop laughing. He's reading this sh** off the teleprompter & then expressing surprise & astonishment at it. Reading for the 1st time!
He can't resist. Wandering off into a tangent about terrorism. Stay focused, man!
Oh, good, renewable energy gets a tiny shoutout. But not to the exclusion of other energies that are "working much better."
We're gonna solve REAL environmental problems. Not the phony ones. Go ahead, man, say it ...
(1) Rescind all Obama executive actions. (2) Save the coal industry (doesn't say how). (3) Ask TransCanada to renew Keystone proposal.
(4) Lift all restrictions on fossil exploration on public land. (5) Cancel Paris climate agreement. (6) Stop US payments to UN climate fund.
(7) Eliminate all the bad regulations. (8) Devolve power to local & state level. (9) Ensure all regs are good for US workers.
(10) Gonna protect environment -- "clean air & clean water" -- but, uh, not with regulations!
(11) Lifting all restrictions on fossil fuel export will, according to right-wing hack factory, create ALL THE BILLIONS OF US MONIES.
Let us pause to note: this is indistinguishable from standard GOP energy policy, dating all the way back to Reagan. Rhetoric unchanged.
"ISIS has the oil from Libya."
Notable: not a single mention of climate change, positive or negative. Just one passing reference to "phony" environmental problems.
And this, in the Senate and House of Representatives, is what keeps the US from far more effective action on climate change.  To slightly alter George Lucas in the Star Wars series, this is how humanity ends most of itself --- to thunderous applause.
That is all.

Tuesday, May 17, 2016

Climate Change and Breaking the World Economy By 2050

According to a recent article, a World Bank report finds that the worldwide cost of disasters increased 10-fold between 1980 and 2010 (using ten-year averages with 1980 and 2010 as the midpoints), to $140 billion per year.  Meanwhile, the world economy is at about $78 trillion (“in nominal terms”).  Projecting both trends forward, in 2040 the cost of disasters will be about $1.4 trillion, and (assuming a 2% growth rate per year for the world economy) this will be roughly 1% of the world’s economy.  By 2050, with the same trends, the cost of disasters will be about $3 trillion, and the world’s economy, net of disaster costs, will in fact be shrinking.
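Here is a minimal sketch of that arithmetic in Python, with the base years and the 2% growth rate treated as assumptions of mine:

```python
# Back-of-the-envelope projection: disaster costs grew ~10x over the 30
# years ending in 2010, reaching ~$140B/year, while the world economy is
# ~$78T and assumed to grow 2% per year from 2016.
DISASTER_2010 = 0.14e12          # $140 billion per year
GDP_2016 = 78e12                 # ~$78 trillion
DISASTER_GROWTH = 10 ** (1 / 30) # 10-fold per 30 years (~8% per year)
GDP_GROWTH = 1.02                # assumed 2% per year

for year in (2040, 2045, 2050):
    disasters = DISASTER_2010 * DISASTER_GROWTH ** (year - 2010)
    gdp = GDP_2016 * GDP_GROWTH ** (year - 2016)
    annual_gain = gdp * (GDP_GROWTH - 1)
    print(f"{year}: disasters ${disasters/1e12:.1f}T, GDP ${gdp/1e12:.0f}T, "
          f"annual GDP gain ${annual_gain/1e12:.1f}T, "
          f"disasters/GDP = {disasters/gdp:.1%}")
```

Under these assumptions the yearly disaster bill catches up with the yearly growth increment somewhere between 2045 and 2050 – the crossover discussed below.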

Up to now, the world’s economy has in effect been able to shrug off disaster cost increases by growing, so that we now have far more in the way of investment resources to apply to any problems than we did in 1980.  Sometime between 2045 and 2050, that process will go in reverse:  we may be growing, but next year’s disaster costs will swallow up the increase.  So if we have not put ourselves on a better disaster path by handling climate change better than we have before 2045, it will be far more difficult to avoid the collapse of the global economy after that – stagnation, followed by spotty and then pervasive national-economy downward spirals.

How do we do better than we have?  Three keys:

1.       Mitigation before adaptation.  Delaying the rise of atmospheric carbon gives us more time to adapt, and less to adapt to.
2.       Long-term adaptation rather than short-term.  Our flawed forecasting, shown repeatedly to be too optimistic, has led us to rebuilding for 2030 at worst and 2050 at best – meaning that we will have to spend far more at 2050 to prepare for 2100.
3.       Moderate rather than modest added spending.  Studies have shown that taking steps to deal with climate change as per preliminary plans costs little in addition.  That means that we should be planning to do more than the “minimal” plan.

We assume that our “self-regulating” world economy can handle anything without slipping permanently into reverse.  Unless we do something more, it can’t, as it has shown over the last 35 years.  And not just our grandchildren but many of our children, children of the rich included, are going to pay a serious price if we fail to do so.

Friday, May 6, 2016

IBM's Quantum Computing Announcement: The Wedding of Two Wonderful Weirdnesses

Yesterday, IBM announced that the science of quantum computing had advanced far enough that IBM could now throw open to researchers a quantum computer containing 5 quantum bits, or “qubits.”  This announcement was also paired with striking claims that “[w]ith Moore’s Law running out of steam [true, imho], quantum computing will be among the technologies that we believe will usher in a new era of innovation across industries.”  Before I launch into the fun stuff, I have to note that based on my understanding of quantum physics and computing’s algorithmic theory, quantum computing appears pretty unlikely to spawn major business innovations in the next 5 years, and there is no certainty that it will do so beyond that point. 
Also, I suppose I had better state my credentials for even talking about the subject with some degree of credibility at all.  I have strange reading habits, and for the last 30 years I have enjoyed picking up the odd popularization of quantum physics and trying to determine what it all meant.  In fact, a lot of the most recent physics news has related to quantum physics, and this has meant that the latest presentations are able to a much greater degree to fit quantum physics into an overall physics theory that has some hope of making some sense of quantum physics’ weirdnesses.  As far as theory of algorithms goes, I have a much greater background, due to my luck in taking courses from Profs. Hopcroft and Hartmanis, two of the true greats of computing science and applied mathematics in their insights and their summarizations of the field of “theory of algorithms and computing.”  And, of course, I have tried to keep an eye on developments in this field since.
So where’s the fun?  Well, it is in the wedding of the two areas listed above – quantum physics and theory of algorithms – that produces some mind-bending ways of looking at the world.  Here, I’ll only touch on the stuff relevant to quantum computing. 

The Quantum Physics of Quantum Computing

I think it’s safe to say that most if not all of us grew up with the nice, tidy world of the atom with its electrons, protons, and neutrons, and its little solar system in which clouds of electrons at various distances orbited a central nucleus.  Later decomposition of the atom all the way to quarks didn’t fundamentally challenge this view – but quantum physics did and does.
Quantum physics says that at low energy levels, the unit is the “quantum,” and that this quantum unit behaves not so much like a discrete particle as like a collection of probabilities of that particle’s possible “states”, summing to one.  As we combine more and more quanta and reach higher levels of energy in the real world, the average behavior of these quanta comes to dominate, and so we wind up in the world that we perceive, with seemingly stable solar-system atoms, because the odds of the entire atom flickering out of existence or changing fundamentally are so small.
Now, some quanta change constantly, and some don’t.  For those that don’t, it is useful for quantum computing purposes to think of their collection of probabilities as a superposition of “states”.  One qubit, for example, can be thought of as a superposition of two possible states of spin of an electron – effectively, “0” and “1”.  And now things get really weird:  Two quanta can become “entangled”, either by direct physical interaction with other quanta or (according to Wikipedia), at least conceptually, by the act of our measuring what really is the state of the qubit.  To put it another way, the moment we measure the qubit, the qubit tells us what its value really is – 0 or 1 – and it no longer is a collection of probabilities.  [ Please bear in mind that this is a model of how quanta behave, not their actual behavior -- nobody knows yet what really happens]
It’s quantum entanglement by direct physical interaction pre-measurement that really matters to quantum computing – and here things get even weirder.  Say you have two qubits entangled with each other.  Then quantum entanglement means that no matter how far apart the two qubits are, when you measure them simultaneously the discovered value of one is some function of the discovered value of the other (for example, the two will yield the same value) – what Einstein called “spooky action at a distance.”  Physicists have already shown that if there is any communication going on at all between two qubits, it happens at hundreds of times the speed of light. 
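A minimal classical (numpy) sketch of what that correlation looks like for the simplest entangled pair – the Bell state (|00> + |11>)/√2 – is below.  This simulates the probabilities on an ordinary computer; it is not IBM’s hardware or API.

```python
import numpy as np

# Two entangled qubits in the Bell state (|00> + |11>)/sqrt(2):
# measuring both always yields the same value, however far apart they are.
rng = np.random.default_rng(0)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # amplitudes for |00>, |01>, |10>, |11>
probs = np.abs(bell) ** 2                    # 50% |00>, 50% |11>

outcomes = rng.choice(["00", "01", "10", "11"], size=10, p=probs)
for o in outcomes:
    qubit_a, qubit_b = o[0], o[1]
    print(f"qubit A = {qubit_a}, qubit B = {qubit_b}  (always equal)")
```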
The meaning of this for quantum computing is that you can create a computer with two qubits, and that quantum computer will have four states (actually, in some cases as many as 16), each with its own probability.  You can then perform various operations in parallel on each qubit, changing each state accordingly, and get an output (effectively, measuring the qubits).  That output will have a certain probability of being right.  Recreate the qubits and do it again; after several iterations, you very probably have the right answer.
One final important fact about qubits:  After a while, it is extraordinarily likely that they will decohere – that is, they will lose their original states.  One obvious reason for this is that in the real world, quanta are wandering around constantly, and so when they wander by a qubit during a computation they change that qubit’s stored state.  This is one of the things that make creating an effective quantum computer so difficult – you have to isolate your qubits carefully to avoid them decohering before you have finished your computation.

The Algorithmic Theory of Quantum Computing

 So now, hopefully, you are wondering how this strange quantum world of probabilities at a distance translates to faster computing.  To figure this out, we enter the equally strange world of the theory of algorithms.
The theory of algorithms begins with Godel’s Incompleteness Theorem.  Kurt Godel is an unsung giant of mathematics – one Ph.D. student’s oral exam consisted in its entirety of the request “Name five theorems of Godel”, the point being that each theorem opened up an entirely new branch of mathematics.  In terms of the theory of algorithms and computing, what Godel’s Incompleteness Theorem says, more or less, is that there are certain classes of problems (involving infinity) for which one can never create an algorithm (solution) that will deliver the right answer in all cases.  Because there is a one-to-one mapping between algorithms and mathematical solutions, we say such problems are unsolvable.  The mathematician David Hilbert, at the start of the 20th century, posed 23 hard problems for mathematicians to solve; several of them have been proven to be unsolvable, and hence uncomputable.
By the 1960s, people had used similar methods to establish several more classifications of problem “hardness”.  The most important one was where the minimum time for an algorithm was exponential (some constant times 2**n, where n is [more or less] the size of the problem input) rather than polynomial (some constant times n ** q, for some fixed exponent q).  In the typical case, once n got above 30 or 40, exponential-minimum-time problems like Presburger arithmetic were effectively unsolvable in any time short of the ending of the universe.
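To see why, here is the back-of-the-envelope arithmetic, assuming a (generous) billion operations per second:

```python
# Compare a typical polynomial running time (n^3) with an exponential
# one (2^n), assuming 10^9 operations per second.
OPS_PER_SEC = 1e9
SECONDS_PER_YEAR = 3.15e7

for n in (20, 40, 60, 80):
    poly = n ** 3
    expo = 2 ** n
    print(f"n={n:3d}: n^3 takes {poly / OPS_PER_SEC:.6f} s, "
          f"2^n takes {expo / OPS_PER_SEC / SECONDS_PER_YEAR:.2e} years")
```

By n = 60 the exponential algorithm needs decades; by n = 80, tens of millions of years.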
It has proved extraordinarily hard to figure out where the boundary between polynomial and exponential algorithms lies.  One insight has been that if one pursues all the possible solutions to some problems on some imagined computer offering infinite parallelism, they can be solved in polynomial time – the “NP” class of problems.  Certain real-world problems like the traveling salesman problem (figure out the minimum-cost travel sequencing to n destinations for any traveling salesperson) are indeed NP-solvable.  But no one can figure out a way to determine if those problems are solvable in polynomial time without such parallelism – whether P=NP.  In any case, it would be a major boon if some computing approach allowed P-time solution of at least some of those problems.
But even if we can’t find such an approach via quantum computing, it would still be very worthwhile for all classifications of algorithms (except unsolvable, of course!) to find a way to get to an “approximate” solution faster.  Thus, for example, if we can come to a solution to the traveling salesman problem that 99 times out of 100 is within a small distance of the “real” best solution, and do so in polynomial time on average, then we will be happy with that.  And thus, despite its probabilistic nature – or possibly because of it – quantum computing may help with both the weird world of “presently effectively unsolvable” exponential-time problems and with the more normal world of speeding up solvable ones.

How All This Applies To Today’s Quantum Computing

The world of qubits as it exists today appears to have two advantages over “classical” computers in attempting to speed up computation:
1.       Greater parallelism.  That is, you don’t sequence the computations and output; instead, the operations are applied across all of the qubits’ states at effectively the same time.

2.       The ability to get “close enough” probabilistically.  As noted above, you can choose how much likelihood of being right you want, and as long as it isn’t absolute certainty, the number of iterations needed to get to a “good enough” answer on average should be less than the number of computations in classical computing.
However, there are two significant disadvantages, as well:
(a) The “sweet spot” of quantum computing is the case where the number of inputs (my interpretation:  the number of needed “states” in the quantum computer) is equal to the number of outputs needed to reach a “good enough” answer.  Outside of that “sweet spot”, quantum computing must use all sorts of costly contortions, e.g., repetition of the entire computation the same number of times when the number of states does not cover all the possible outcomes you are examining to find the right answer.
(b) When you need a very high probability or certainty of getting the right answer, or, failing that, you need to know how likely you are to be very close to the correct answer, you need a lot more runs of your quantum computer.
To see how this might work out in practice, let’s take the “searching the database” problem and use (more or less) Grover’s Algorithm (no Sesame Street jokes, please).  We have 1023 different values in 1023 different slots in our database, and we want to find out if the number “123456” is in the database, and if so, where (sorry, it’s a small database, but no bit-mapped indexing allowed).  We declare that each possibility is equal, so there is a 1/1024 chance of each outcome.  We use 10 qubits to encode each such possibility as an equally probable “state”.

The classical computing algorithm will find-and-compare all 1023 values before coming to a conclusion.  Grover’s Algorithm states that the quantum computer will take on the order of sqrt(1024), or about 32 iterations, before (on average) it will have found the right answer/location.  Score one for quantum computing, right?
Well, leave aside for a moment the flaws of my example and the questions it raises (e.g., why can’t a classical computer do the same sort of probabilistic search?).  Instead, let’s note that in this example the disadvantages of quantum computing are minimized.  I configured the problem to be in quantum computing’s algorithmic “sweet spot”, and effectively only asked that there be a greater than 50% probability of having the right answer.  If all you want to know is where the data is located, then you will indeed on average take about 32 iterations to find that out.  If you want to be sure that the number is or isn’t in the database, then your best solution is the classical one – 1023 find-and-compares, without the added baggage of restarting that quantum computing brings with it.
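To make the iteration count concrete, here is a small classical simulation of Grover-style amplitude amplification for N = 1024, using numpy arrays rather than real qubits; the marked slot index is a placeholder of mine.  The standard analysis calls for roughly (π/4)·√N ≈ 25 amplification steps to reach near-certainty – the same order of magnitude as the √1024 ≈ 32 figure above.

```python
import numpy as np

N = 1024                  # 10 qubits' worth of equally likely "slots"
marked = 123              # hypothetical slot where "123456" happens to live

state = np.full(N, 1 / np.sqrt(N))    # uniform superposition over all slots

def oracle(v):
    w = v.copy()
    w[marked] *= -1                   # flip the phase of the marked slot
    return w

def diffusion(v):
    return 2 * v.mean() - v           # reflect all amplitudes about their mean

iterations = int(np.pi / 4 * np.sqrt(N))   # ~25 for N = 1024
for _ in range(iterations):
    state = diffusion(oracle(state))

print(f"iterations: {iterations}")
print(f"probability of measuring the marked slot: {state[marked] ** 2:.4f}")
```

After those ~25 steps the marked slot is measured with probability above 99.9%; a classical scan of the same 1024 slots needs, on average, hundreds of comparisons.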
And I haven’t even mentioned that so far, we’ve only scaled up to 5 qubits available for researchers, and that as we scale further and our computations take longer, decoherence becomes even more of a problem. 
So that’s why I dropped that downer of a conclusion above.  It’s early days yet; but there’s a lot of difficult work still to come in scaling quantum computing, with no guarantee of success, and the range of long-term use cases still appears less than broad – and possibly very narrow.

Oh, Well, Fun Take-Aways

So while we wait and wait, here are some wonderful weirdnesses of the wedding of quantum physics and theory of algorithms to think about:
·         You can’t be in two places at once.  Not.  Simply create two quantum copies of yourself in your present “state.”  Keeping them isolated (details, details), move one of them halfway around the world.  Measure both simultaneously.  They will both be identical at that moment in time.

·         Things are either true or false.  Not.  Things are either true, or they are false, or whether they are true or false is undecidable.

·         Explanations of quantum computing are often incoherent, and whether they convey anything of value is undecidable.