Saturday, December 31, 2016

A Short Look Back at 2016


I have found very little in the last month and a half to add to previous posts.  CO2 continues its alarming rise, to what will in all likelihood be more than 406 ppm (yearly average) by end of year, and while global temperatures have ended their string of monthly records, we are still on course for a 1.2 degree C rise since 1850, about 0.3 degrees of it in the last 2 ½ years.  The big news in the Arctic (and Antarctic) is an unprecedented low in sea ice extent/area, plus record high temps in the Arctic in December – but that continues a smaller trend evident in most of the first half of 2016. 

Meanwhile, on the computing side, relatively little real-world innovation happened this year.  In-memory computing continued its steady rise in both performance and applicability, with smaller companies taking more of a lead as compared to 2015.  While blockchain technology and quantum computing made a big news splash early in the year, careful reading of “use cases” shows that real-world implementations of blockchain are thin on the ground or non-existent, as most companies try to figure out how best to make it work, while quantum computing is clearly far from real-world usefulness as of yet.

The big news in both areas, alas, is therefore the election of Donald Trump as President.  In the climate change area, as I predicted, he is proving to be an absolute disaster, with nominees for at least six posts who are climate change deniers, with every incentive to make the American government a hindrance rather than a help in efforts to change “business as usual.” 

In the computing area, we see the spectacle of some large computing firms offering their services in public to Trump, an unprecedented move based on the calculation that while being seen as cooperative may not bring any benefits, failure to act in this way may cause serious problems for the firm.  Thus, we see Silicon Valley execs whose workforces are not at all enthused about Trump acting in meetings with him as if he offers new business opportunities, and IBM’s CEO announcing ways in which IBM technology can aid in achieving his presumed goals. 

It may seem odd to give such prominence to the personality of the President in assessing either climate change or the computing industry.  The fact is, however, that all of the moves I have cited are unprecedented, and derive from Trump’s personality.  To fail to consider this in assessing the likely long-run effects of the “new abnormal” in both the sustainability field and the computing industry is, imho, a failure to be an effective computer industry analyst.  And while no one likes a perpetually downbeat analyst, one that continually predicts rosy outcomes in this type of situation is simply not worth listening to.

I look back on 2016, and I see little that is permanent to celebrate – although the willingness of the media to begin to report on and accept climate change is, however temporary, worth noting.  I wish I could say that there is hope for better things in 2017; but as far as I can see, there isn’t.

Tuesday, November 15, 2016

Climate Change: The News Is Sadder Than You Think


Hopefully, any readers are aware of the likely climate-change implications of the election of Donald Trump to the US Presidency.  These include disengagement from international efforts to slow climate change, making government-driven change much more difficult; deregulation of “carbon pollution”, with predictable effects on the ability of wind and solar to replace rather than supplement oil; and effective barriers to any use of incentives (“carbon tax” or “carbon market”) to drive carbon-emissions reduction.  The effect on climate-change efforts is that China (hopefully) and Europe will have to drive them; which is a bit like trying to pedal a tricycle with one wheel gone.

And that doesn’t even include the likely effects of the new administration’s cuts in funding to NOAA, on whose metrics much of the world depends.  We have seen this before, in miniature, during the HW Bush years. 

But there is more sad news – much more – some of which I have learned very recently.  It concerns – well, let’s just go into the details.

Sea-Level Rise By Century’s End:  3 Feet and Rising


One recent factoid published, iirc, in thinkprogress.org/tagged/climate, is that in the period from late 2014 to early 2016, oceanic water levels rose by 15 mm, i.e., at an annual rate of 10 mm – up from 3 mm per year before that.  Very little of that rise is due to el Nino, which began around Dec.-Jan. of this year and ended around April-May.  Instead, there is a clear connection to the sharp rise in global temperature which began in 2014. 

So let’s put this number (10 mm/year) in perspective.  If we guesstimate the first 14 years of the 21st century at 3 mm/year, then 42 mm are already banked.  To get to 914 mm (3 feet) at 10 mm/year will therefore require about 87 more years.  There are 86 years from 2015 to 2100.  So at today’s rates, we will reach 3 feet of sea level rise some time around 2102.
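For anyone who wants to check or vary the arithmetic, here is a minimal back-of-the-envelope sketch in Python.  The 42 mm already banked and the 10 mm/year rate are the guesstimates discussed above; the rest is just division.

    # Back-of-the-envelope sea-level projection; inputs are the guesstimates above.
    banked_mm = 42.0          # rise assumed already banked over 2001-2014 at ~3 mm/year
    rate_mm_per_year = 10.0   # assumed current rate of rise
    target_mm = 914.4         # 3 feet in millimeters
    years_needed = (target_mm - banked_mm) / rate_mm_per_year
    print("Years from 2015 to reach 3 feet: %.1f (i.e., around %d)"
          % (years_needed, 2015 + round(years_needed)))
    # Prints about 87.2 years, i.e., around 2102 at today's rate.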

In other words, 3 feet of sea level rise by 2100 – the prediction widely disseminated as late as a year ago – is already “baked in”.  And what reason do we have for thinking that matters will stop here?  There is no reason to expect global temperatures to decrease over the medium term, and good reason (atmospheric carbon increases) to expect them to continue to increase.  So we are talking sea level rise of over a foot by 2050, at the very least, and we are wondering if the new “more realistic” estimate of 6-9 feet of sea level rise may be too optimistic.

The Faster Rise of Atmospheric CO2 Isn’t Going Away – and Other Greenhouse Gases Are Following


Let’s start with the CO2 rise that I have been following since early this year.  The “baked in” (yearly average) amount of CO2 has reached, effectively, 405 ppm as of October’s results, about 1 ½ years after it passed 400 ppm permanently.  More alarmingly, the surge caused at least partly by el Nino is not going away.  I have been following CO2 measurements from Mauna Loa for about 5 years, and always before this year the 10-year average rate of increase has been a little more than 2.0 ppm per year.  The el Nino and its follow-on have pushed this year’s increase to almost 3 ppm, lifting that 10-year average to almost 2.1 ppm per year.  And the new rate of increase shows little sign of stopping, so that we can project a 10-year average of 2.4-2.5 ppm within the next 5 years -- a roughly 20 percent increase.
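As a rough sanity check on those rates, here is a minimal sketch in Python.  The numbers are illustrative round figures, not the actual Mauna Loa record.

    # Illustrative round numbers only, not the actual Mauna Loa record.
    ordinary_years = [2.0] * 9      # assumed typical annual CO2 growth, ppm/year
    el_nino_year = [3.0]            # assumed el-Nino-boosted year, ppm/year
    last_ten = ordinary_years + el_nino_year
    print("10-year average now: %.2f ppm/year" % (sum(last_ten) / len(last_ten)))

    # If growth near 3 ppm/year persists for the next five years:
    next_ten = [2.0] * 5 + [3.0] * 5
    print("Projected 10-year average: %.2f ppm/year" % (sum(next_ten) / len(next_ten)))
    # Prints 2.10 and 2.50 -- roughly the 20 percent jump discussed above.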

Global Warming Appears to Follow – Right Now


A second measurement of greenhouse gases in the atmosphere – this one including all greenhouse gases, e.g., methane – has recently been discussed in neven1.typepad.com.  It now stands at over 483 ppm.  Some caution must be used in assessing this figure, since we have no figure for 1850 and the years following.  However, we can make a rough guesstimate, based on the facts that natural methane production would then have been lower than now, man-made methane production almost non-existent, and as of the early 2010s human methane production was approximately equivalent to natural methane production. 

This suggests that at least 2/3 of the gap between 400-405 ppm of carbon and 483 ppm of greenhouse gases as a whole is due to increases in human-caused non-CO2 greenhouse gas emissions.  To put it another way, the human activities that have caused the 125-odd ppm increase in CO2 since 1850 have also caused at least 2/3 of the 80-odd ppm difference between today’s CO2 atmospheric ppm and today’s total greenhouse gas atmospheric ppm.
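Here is a minimal sketch of that arithmetic in Python.  The 1850 CO2 baseline of roughly 280 ppm is a standard figure I have filled in; the 2/3 share is the guesstimate discussed above, not a measurement.

    # Rough attribution arithmetic; the 2/3 share is a guesstimate, not a measurement.
    co2_now_ppm = 404.0             # approximate current CO2 (yearly average)
    co2_1850_ppm = 280.0            # approximate pre-industrial level
    all_ghg_co2e_ppm = 483.0        # cited CO2-equivalent of all greenhouse gases

    co2_increase = co2_now_ppm - co2_1850_ppm         # roughly 125 ppm
    non_co2_gap = all_ghg_co2e_ppm - co2_now_ppm      # roughly 80 ppm
    human_share = 2.0 / 3.0

    print("CO2 increase since 1850: ~%.0f ppm" % co2_increase)
    print("Non-CO2 greenhouse gap today: ~%.0f ppm CO2e" % non_co2_gap)
    print("Guesstimated human-caused share of that gap: ~%.0f ppm CO2e"
          % (human_share * non_co2_gap))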

This is consistent with Hansen’s thesis that a doubling of carbon ppm in the atmosphere would lead to 4 degrees C global warming, not 2 degrees – the additional warming comes partly from the aforementioned non-CO2 greenhouse gases, and partly from the additional “stored heat” at the Earth’s surface due to changed albedo as snow melts.  The reason for the difference between the two estimates, afaik, is that no one knew just how fast this additional warming would occur, since there is (and was, 55 million and 250 million years ago) clearly a lag time between large atmospheric carbon rises and increased non-CO2 greenhouse gases and “stored heat”. 

In summary, not only is the rise of atmospheric CO2 speeding up, but the effect appears to have a much shorter lag time than we thought.  And so, there is real reason to fear that the global warming we are afraid of (presently estimated at 1.2-1.3 degrees C above 1850 already) will in the near and intermediate term increase faster than we thought.

The Arctic Sea Melt Resumes


As late as a month ago, it was possible to argue that the ongoing melting away of Arctic sea ice was still on the “pause” it had been on since 2012.  And then, surprisingly, the usual refreezing occurring during October stopped.  It stopped for several weeks.  As a result, the average Arctic sea ice volume year-round reached a record low, and kept going.  And going.  As of the beginning of November, the average is now far below all previous years.

What caused this sudden stoppage?  Apparently, primarily unprecedented October oceanic and atmospheric heat in the areas where refreeze typically occurs in late October.  This is apparently much the same reason that minimum Arctic volume by some measures reached a new low in late September, despite a melting season with weather that in all previous years had resulted in much less melt than usual.

And what will prevent this from happening next year, and the year after?  Nothing, apparently (note that the 2016 el Nino had little or no effect on Arctic sea ice melt).  It now appears that we are still facing a September “melt-out” by the mid-2030s, at best.  I am happy that the direst predictions (2018 and thereabouts) are almost certainly not going to happen; and yet the scientific consensus has gone from “melt-out at 2100 only if we continue business as usual” to “melt-out around 2035 with much less chance of avoidance” in the last 5 years, and I hope against hope, given everything else that’s happening, that the forecast doesn’t slip again.

The Bottom Line:   Agility, Not Flexibility


I find, at the end, that I need to re-emphasize my concern, not just about the first 4 degrees C of temperature rise, but even more so the next, and the next.  And the next after that, the final rise.

I find that a lot of people discount scientific warnings about loss of food production and so on, reasoning that the global market can, as it has done in the past, adapt to these problems at little cost, by using less water-intensive methods of farming, shifting farming production allocations north (and south) as the temperature increases, and handling disaster costs as usual while doing quick fixes on existing facilities to adapt.  In other words, our system is highly flexible; flexible enough to get us through the next 40-50 years, I think, while providing food for the developed world and probably the large majority of humanity. 

But, as a systems analyst will tell you, such a flexible system tends to make the inevitable crash far worse.  Patching existing processes (in climate change terms, adaptation) rather than fixing the problem at its root (in climate change terms, mitigation) causes over-investment in the present system that makes changing to a new, needed one far more costly – and therefore, far more likely to deep-six the company, or, in this case, the global market.

At a certain point, the average of national markets going south passes the average of global markets still growing, and then the cycle starts running in reverse:  smaller and smaller markets that can be serviced at higher and higher costs, with food scarcer and scarcer.  The only way to avoid total collapse into a system with inefficient production of the 1/10 of the food necessary for the survival of 9 billion people is mitigation; but the cost to do that is 10 times, 100 times what it was.  The result is brutal military dictatorships where the commander is the main rich person, as has happened so often throughout history.  Today’s rich will suffer less, because they make accommodations with the military; but they will on average suffer severely, by famine and disease. 

An agile system (here I am speaking about the ideal, not today’s usually far-from-agile companies) anticipates this eventuality, and moves far more rapidly towards fundamental change.  It can be done.  But, as of now, we are headed in the opposite direction.

Tuesday, October 25, 2016

The Cult of the Algorithm: Not So Fast, Folks

Sometimes I feel like Emily Litella in the old Saturday Night Live skit, huffing and puffing in offense while everyone wonders what I’m talking about.  That’s particularly true in the case of new uses of the word “algorithm.”  I find this in an interview by NPR of Cathy O’Neil, author of “Weapons of Math Destruction”, where part of the conversation is “We have these algorithms … we don’t know what they are under the hood … They don’t say, oh, I wonder why this algorithm is excluding women”.  I find this in the Fall 2016 Sloan Management Review, where one commenter says “developer ‘managers’ provide feedback to the workers in the form of tweaks to their programs or algorithms … the algorithms themselves are sometimes the managers of human workers.” 
As a once-upon-a-time computer scientist, I object.  I not only object, I assert that this is fuzzy thinking that will lead us to ignore the elephant in the living room of problems in the modeling of work/management/etc. to focus on the gnat on the porch of developer creation of software.
But how can I possibly say that a simple misuse of one computer term can have such large effects?  Well, let’s start by understanding (iirc) what an algorithm is.

The Art of the Algorithm

As I was taught it at Cornell back in the ‘70s (and I majored in Theory of Algorithms and Computing), an algorithm is an abstraction of a particular computing task or function that allows us to identify the best (i.e., usually, the fastest) way of carrying out that task/function, on average, in the generality of cases.  The typical example of an algorithm is one for carrying out a “sort”, whether that means sorting numbers from lowest to highest or sorting words alphabetically (theoretically, they are much the same thing), or any other variant.  In order to create an algorithm, one breaks down the sort into unitary abstract computing operations (e.g., add, multiply, compare), assigns costs to each, and then specifies the steps (do this, then do this).  Usually it turns out that one operation costs more than the others, and so sort algorithms can be compared by considering the overall number of compares as a function of the number of items n, as n increases from one to infinity.
Now consider a particular algorithm for sorting.  It runs like this:  Suppose I have 100 numbers to sort.  Take the first number in line, compare it to all the others, determine that it is the 23rd lowest.  Do the same for the second, third, … 100th number.  At the end, for any n, I will have an ordered, sorted list of numbers, no matter how jumbled the numbers handed to me are.
This is a perfectly valid algorithm.  It is also a bad algorithm.  For every 100 numbers, it requires about 100 squared comparisons, and for any n, it requires about n squared comparisons.  We say that this is an “order of n squared” or O(n**2) algorithm.  But now we know what to look for, so we take a different approach.
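To make this concrete, here is a minimal sketch of that naive “rank everything against everything” sort in Python.  It is illustrative only, and assumes all values are distinct to keep it short.

    def rank_sort(items):
        # For each item, count how many items are smaller; that count is its
        # position in the sorted output.  Roughly n*n comparisons: O(n**2).
        # Assumes distinct values, purely for brevity of illustration.
        result = [None] * len(items)
        for x in items:
            rank = sum(1 for y in items if y < x)
            result[rank] = x
        return result

    print(rank_sort([31, 4, 15, 9, 2, 65]))   # [2, 4, 9, 15, 31, 65]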
Here it is:  We pick one number from the list (call it the “pivot”) and go through the list from both ends, moving numbers smaller than the pivot toward the low end and numbers larger than the pivot toward the high end, stopping when the two scans meet.  We have then partitioned the list into two buckets, one containing the part of the list up to the pivot (all of whose items are guaranteed to be no greater than the pivot), and one containing the part after the pivot (all of whose items are guaranteed to be no less than the pivot).  We repeat the process on each bucket until we have reached buckets containing one number.  On average, there will be O(logarithm to the base two, or “log”, of n) such splits, and each level of splits performs O(n) comparisons.  So the average number of comparisons in this sorting algorithm is O(n times log n), called O(n log n) for short, which is way better than O(n**2) and explains why the algorithm is now called Quicksort.
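Here is a compact sketch of the same divide-and-recurse idea in Python.  For clarity it partitions by building new lists rather than by the in-place two-ended scan described above; a production version would partition in place and choose its pivots more carefully.

    def quicksort(items):
        # Partition around a pivot, then recurse on the two buckets.
        # On average O(n log n) comparisons; this copy-heavy version is for clarity.
        if len(items) <= 1:
            return items
        pivot = items[len(items) // 2]
        lower = [x for x in items if x < pivot]
        equal = [x for x in items if x == pivot]
        higher = [x for x in items if x > pivot]
        return quicksort(lower) + equal + quicksort(higher)

    print(quicksort([31, 4, 15, 9, 2, 65]))   # [2, 4, 9, 15, 31, 65]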
Notice one thing about this:  finding a good algorithm for a function or task says absolutely nothing about whether that function or task makes sense in the real world.  What does the heavy lifting in creating a useful new program is more along the lines of a “model”, implicit in the mind of the company or person driving development, or made explicit in the software actually carrying out the model.   An algorithm doesn’t say “do this”; it says, “if you want to do this, here’s the fastest way to do it.”

Algorithms and the Real World

So why do algorithms matter in the real world?  After all, any newbie programmer can write a program using the Quicksort algorithm, and there is a huge mass of algorithms available for public study in computer-science journals and the like.  The answer, I believe, lies in copyright and patent law.  Here, again, I know somewhat of the subject, because my father was a professor of copyright law and I held some conversations with him as he grappled with how copyright law should deal with computer software, and also because in the ‘70s I did a little research into the possibility of getting a patent on one of my ideas (it was later partially realized by Thinking Machines).
To understand how copyright and patent law can make algorithms matter, imagine that you are Google, 15 or so years ago.  You have a potential competitive advantage in your programs that embody your search engine, but what you would really like is to turn that temporary competitive advantage into a more permanent one, by patenting some of the code (not to mention copyrighting it to prevent disgruntled employees from using it in their next job).  However, patent law requires that this be a significant innovation.  Moreover, if someone just looks at what the program does and figures out how to mimic it with another search engine set of programs (a process called “reverse engineering”), then that does not violate your patent.
However, suppose you come up with a new algorithm?  In that case, you have a much stronger case for the program embodying that algorithm being a significant innovation (because your program is faster and [usually] therefore can handle many more petabytes or thousands of users), and the job of reverse engineering the program becomes much harder, because the new algorithm is your “secret sauce”. 
That means, if you are Google, that your new algorithm becomes the biggest secret of all, the piece of code you are least likely to share with the outside world – outsiders can’t figure out what is going on.  And all the programs written using the new algorithm likewise become much more “impenetrable”, even to many of the developers writing them.  It’s not just a matter of complexity; it’s a matter of preserving some company’s critical success factor.  Meanwhile, you (Google) are seeing if this new algorithm leads to another new algorithm – and that compounds the advantage and secrecy.
Now, let me pause here to note that I really believe that much of this problem is due to the way patent and copyright law adapted to the advent of software.  In the case of patent law, the assumption used to be that patents were on physical objects, and even if it was the idea that was new, the important thing was that the inventor could offer a physical machine or tool to allow people to use the invention.  However, software is “virtual” or “meta” – it can be used to guide many sorts of machines or tools, in many situations; at its best, it is in fact a sort of “Swiss Army knife”.  Patent law has acted as if each program was physical, and therefore what mattered was the things the program did that hadn’t been done before – whereas if the idea was what mattered, as it does in software, then a new algorithm or new model should be what is patentable, not “the luck of tackling a new case”.
Likewise, in copyright law, matters were set up so that composers, writers, and the companies that used them had a right to be paid for any use of material that was original – it’s plagiarism that matters.  In software, it’s extremely easy to write a piece of a program that is effectively identical to what someone else has written, and that’s a Good Thing.  By granting copyright to programs that just happened to be the first time someone had written code in that particular way, and punishing those who (even if they steal code from their employer) could very easily have written that code on their own, copyright law can fail to focus on truly original, creative work, which typically is associated with new algorithms.
[For those who care, I can give an example from my own experience.  At Computer Corp. of America, I wrote a program that incorporated an afaik new algorithm that let me take a page’s worth of form fields and turn it into a good imitation of a character-at-a-time form update.  Was that patentable?  Probably, and it should have been.  Then I wrote a development tool that allowed users to drive development by user-facing screens, program data, or the functions to be coded in the same general way – “have it your way” programming.  Was that patentable? Probably.  Should it have been?  Probably not:  the basic idea was already out there, I just happened to be the first to do it.]

It's About the Model, Folks

Now let’s take another look at the two examples I cited at the beginning of this post.  In the NPR interview, O’Neil is really complaining that she can’t get a sense of what the program actually does.  But why does she need to see inside a program or an “algorithm” to do that?  Why can’t she simply have access to an abstraction of the program that tells her what the program does in particular cases?
In point of fact, there are plenty of such tools.  They are software design tools, and they are perfectly capable of spitting out a data model that includes outputs for any given input.  So why can’t Ms. O’Neil use one of those? 
The answer, I submit, is that companies developing the software she looks at typically don’t use those design tools, explicitly or implicitly, to create programs.  A partial exception to this is in the case of agile development.  Really good agile development is based on an ongoing conversation with users leading to ongoing refinement of code – not just execs in the developing company and execs in the company you’re selling the software to, but ultimate end users.  In the case of the hiring software O’Neil describes, one of the things that a good human resources department and the interviewee want to know is exactly what the criteria are for hiring, and why they are valid.  In other words, they want a model of the program that tells them what they want to know, not dense thickets of code or even of code abstractions (including algorithms).
My other citation seems to go to the opposite extreme:  to assume that automation of a part of the management task using algorithms reflects best management practices automagically, as old hacker jargon would put it.  But we need to verify this, and the best way, again, is to offer a design model, in this case of the business process involved.  Why doesn’t the author realize this?  My guess is that he or she assumes that the developer will somehow look at the program or algorithm and figure this out.  And my guess is that he/she would be wrong, because often the program involves code written by another programmer, about which this programmer knows only the correct inputs to supply, and the algorithms are also often Deep Dark Secrets.
Notice how a probably wrong conception of what an algorithm is has led to attaching great importance to the algorithm involved, and little to the model embodied by the program in which the algorithm occurs.  As a result, O’Neil appears to be pointing the finger of blame at some ongoing complexity that has grown like Topsy, rather than at the company supplying the software for failing to practice good agile development.  Likewise, the other cite’s belief in the magical power of the algorithm has led him/her to ignore the need to focus on the management-process model in order to verify the assumed benefits.  As I said in the beginning, they are focusing on the gnat of the algorithm and ignoring the elephant of the model embodied in the software.

Action Items

So here’s how such misapprehensions play out for vendors on a grand scale (quote from an Isabella Kaminska article excerpted in Prof. deLong’s blog, delong.typepad.com):  “You will have heard the narrative.... Automation, algorithms and robotics... means developed countries will soon be able to reshore all production, leading to a productivity boom which leads to only one major downside: the associated loss of millions of middle class jobs as algos and robots displace not just blue collar workers but the middle management and intellectual jobs as well. Except... there’s no quantifiable evidence anything like that is happening yet.”  And why should there be?  In the real world, a new algorithm usually automates nothing (it’s the program using it that does the heavy lifting) and the average algorithm does little except give one software vendor a competitive advantage over others.
Vendors of, and customers for, this type of new software product therefore have an extra burden:  ensuring that these products deliver, as far as possible, only verifiable benefits for the ultimate end user.  This is especially true of products that can have a major negative impact on these end users, such as hiring/firing software and self-driving cars.  In these cases, it appears that there may be legal risks, as well:  A vendor defense of “It’s too complex to explain” may very well not fly when there are some relatively low-cost ways of providing the needed information to the customer or end user, and corporate IT customers are likewise probably not shielded from end user lawsuits by a “We didn’t ask” defense.
Here are some action items that have in the past shown some usefulness in similar cases:
·         Research software design tools, and if possible use them to implement a corporate IT standard of providing documentation at the “user API” level, specifying in comprehensible terms the outputs for each class of input and why (see the sketch after this list).
·         Adopt agile development practices that include consideration and documentation of the interests of the ultimate end users.
·         Create an “open user-facing API” for the top level of end-user-critical programs, that allows outside developers to (as an intermediary) understand what’s going on, and as a side-benefit to propose and vet extensions to these programs.  Note that in the case of business-critical algorithms, this trades a slight increase in the risk of reverse engineering for a probable larger increase in customer satisfaction and innovation speedup.
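As a rough illustration of what documentation at the “user API” level might look like, here is a hedged Python sketch.  Every name and number in it is hypothetical; the point is only that the outputs for each class of input, and the reasons for them, are stated where the end user or an advocate can read them.

    def score_applicant(years_experience, has_required_certification, assessment_score):
        """Hypothetical hiring-score function, documented at the user level.

        Inputs and their effect on the output:
          * years_experience:           each year up to 10 adds 2 points (capped,
                                        so long tenure alone cannot dominate).
          * has_required_certification: adds 20 points if True, because the role
                                        legally requires the certification.
          * assessment_score (0-100):   added directly; it is the job-relevant test.

        Output: a score from 0 to 140.  Scores of 80 or more are forwarded to a
        human reviewer; no applicant is rejected by this function alone.
        """
        experience_points = 2 * min(years_experience, 10)
        certification_points = 20 if has_required_certification else 0
        return experience_points + certification_points + assessment_score

    print(score_applicant(6, True, 72))   # 104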
Above all, stop deifying and misusing the word “algorithm.”  It’s a good word for understanding a part of the software development process, when properly used.  When improperly used – well, you’ve seen what I think the consequences are and will be.

Wednesday, September 21, 2016

August 16th, 2070: Rising Waters Flood Harvard Cambridge Campus, Harvard Calls For More Study of Problem


A freak nor’easter hit the Boston area yesterday, causing a 15-foot storm surge that overtopped the already precarious walls near the Larz Anderson Bridge.  Along with minor ancillary effects such as the destruction of much of Back Bay, the Boston Financial District, and Cambridgeport, the rampaging flood destroyed the now mostly-unused eastern Harvard Business School and Medical School, as well as the eastern Eliot and Lowell Houses.  The indigent students now housed in Mather House can only reach the dorm rooms on floors 5 and above, by boat, although according to Harvard President Mitt Romney III the weakened structural supports should last at least until the end of the next school year.
A petition was hastily assembled by student protest groups last night, and delivered to President Romney at the western Harvard campus at Mt. Wachusett around midnight, asking for immediate mobilization of school resources to fight climate change, full conversion to solar, and full divestment from fossil-fuel companies. President Romney, whose salary was recently raised to $20 million due to his success in increasing the Harvard endowment by 10% over the last two years, immediately issued a press release stating that he would gather “the best and brightest” among the faculty and administration to do an in-depth study and five-year plan for responding to these developments.  Speaking from the David Koch Memorial Administrative Center, he cautioned that human-caused climate change remained controversial, especially among alumni.  He was seconded by the head of the Medical School, speaking from the David Koch Center for the Study of Migrating Tropical Diseases, as well as the head of the Business School, speaking from the David Koch Free-Market Economics Center. 
President Romney also noted that too controversial a stance might alienate big donors such as the Koch heirs, whose gifts in turn allowed indigent students to afford the $100,000 yearly tuition and fees.  He pointed out that as a result of recent necessary tuition increases, and the decrease in the number of students from China and India able to afford them due to the global economic downturn, the endowment was likely to be under stress already, and that any further alienation of alumni might mean a further decrease in the number of non-paying students.  Finally, he noted the temporary difficulties caused by payment of a $50 million severance package for departing President Fiorina.
Asked for a comment early this morning, David Koch professor of environmental science Andrew Wanker said, “Reconfiguring the campus to use solar rather than oil shale is likely to be a slow process, and according to figures released by the Winifred Koch Company, which supplies our present heating and cooling systems, we have a minimal impact on overall CO2 emissions and conversion will be extremely expensive.”  David Koch professor of climate science Jennifer Clinton said, “It’s all too controversial to even try to tackle.  It’s such a relief to consider predictions of further sea rise, instead.  There, I am proud to say, we have clearly established that the rest of the eastern Harvard campus will be underwater sometime within the next 100 years.”
In an unrelated story, US President for Life Donald Trump III, asked to comment on the flooding of the Boston area, stated “Who really cares?  I mean, these universities are full of immigrant terrorists anyway, am I right?  We’ll just deport them, as soon as I bother to get out of bed.”

Monday, September 19, 2016

HTAP: An Important And Useful New Acronym


Earlier this year, participants at the second In-Memory Summit frequently referred to a new marketing term for data processing in the new architectures:  HTAP, or Hybrid Transactional-Analytical Processing.  That is, “transactional” (typically update-heavy) and “analytical” (typically read-heavy) handling of user requests are thought of as loosely coupled, with each database engine somewhat optimized for cross-node, networked operations. 

Now, in the past I have been extremely skeptical of such marketing-driven “new acronym coinage,” as it has typically had underappreciated negative consequences.  There was, for example, the change from “database management system” to “database”, which has caused unending confusion about when one is referring to the system that manages and gives access to the data, and when one is referring to the store of data being accessed.  Likewise, the PC notion of “desktop” has meant that most end users assume that information stored on a PC is just a bunch of files scattered across the top of a desk – even “file cabinet” would be better at getting end users to organize their personal data.  So what do I think about this latest distortion of the previous meaning of “transactional” and “analytical”?

Actually, I’m for it.

Using an Acronym to Drive Database Technology


I like the term for two reasons:

1.       It frees us from confusing and outdated terminology, and

2.       It points us in the direction that database technology should be heading in the near future.

Let’s take the term “transactional”.  Originally, most database operations were heavy on the updates and corresponded to a business transaction that changed the “state” of the business:  a product sale, for example, reflected in the general ledger of business accounting. However, in the early 1990s, pioneers such as Red Brick Warehouse realized that there was a place for databases that specialized in “read” operations, and that functional area corresponded to “rolling up” and publishing financials, or “reporting”.  In the late 1990s, analyzing that reporting data and detecting problems were added to the functions of this separate “read-only” area, resulting in Business Intelligence, or BI (similar to military intelligence) suites with a read-only database at the bottom.  Finally, in the early 2000s, the whole function of digging into the data for insights – “analytics” – expanded in importance to form a separate area that soon came to dominate the “reporting” side of BI. 

So now let’s review the terminology before HTAP.  “Transaction” still meant “an operation on a database,” whether its aim was to record a business transaction, report on business financials, or dig into the data for insights – even though the latter two had little to do with business transactions.  “Analytical”, likewise, referred not to monthly reports but to data-architect data mining – even though those who read quarterly reports were effectively doing an analytical process.  In other words, the old words had pretty much ceased to describe what data processing is really doing these days.

But where the old terminology really falls down is in talking about sensor-driven data processing, such as in the Internet of Things.  There, large quantities of data must be ingested via updates in “almost real time”, and this is a very separate function from the “quick analytics” that must then be performed to figure out what to do about the car in the next lane that is veering toward one, as well as the deeper, less hurried analytics that allows the IoT to do better next time or adapt to changes in traffic patterns.

In HTAP, transactional means “update-heavy”, in the sense of both a business transaction and a sensor feed.  Analytical means not only “read-heavy” but also gaining insight into the data quickly as well as over the long term.  Analytical and transactional, in their new meanings, correspond to both the way data processing is operating right now and the way it will need to operate as Fast Data continues to gain tasks in connection to the IoT.

But there is also the word “hybrid” – and here is a valuable way of thinking about moving IT data processing forward to meet the needs of Fast Data and the IoT.  Present transactional systems operating as a “periodic dump” to a conceptually very separate data warehouse are simply too disconnected from analytical ones.  To deliver rapid analytics for rapid response, users also need “edge analytics” done by a database engine that coordinates with the “edge” transactional system.  At the same time, transactional and analytical systems cannot operate in lockstep as part of one engine, because we cannot afford to have each technological advance on the transactional side wait for a new revision of the analytical side, or vice versa.  HTAP tells us that we are aiming for a hybrid system, because only that has the flexibility and functionality to handle both Big Data and Fast Data.
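As a very rough sketch of the “hybrid” shape being described (not any particular vendor’s product), here is a toy Python example of a loosely coupled pair: an update-heavy ingest path and a read-heavy “edge analytics” path sharing an in-memory store, each free to evolve on its own.

    from collections import deque
    from statistics import mean

    class HybridStore:
        """Toy in-memory HTAP-style store: transactional ingest plus quick analytics.

        Illustrative only; a real engine would add persistence, concurrency
        control, and a far richer query layer.
        """
        def __init__(self, window=1000):
            self.recent = deque(maxlen=window)   # hot window for quick analytics
            self.archive = []                    # longer-term store for deep analytics

        def ingest(self, reading):
            # "Transactional" side: update-heavy, append-only, near real time.
            self.recent.append(reading)
            self.archive.append(reading)

        def quick_average(self):
            # "Edge analytics" side: read-heavy, answers against the hot window.
            return mean(self.recent) if self.recent else None

    store = HybridStore()
    for value in (21.0, 21.4, 29.8, 22.1):      # e.g., sensor temperature readings
        store.ingest(value)
    print(store.quick_average())                # 23.575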

The Bottom Line


I would suggest that IT shops looking to take next steps in IoT or Fast Data try adopting the HTAP mindset.  This would involve asking oneself:

·         To what degree does my IT support both transactional and analytical processing by the new definition, and how clearly separable are they?

·         Does my system for IoT involve separate analytics and operational functions, or loosely-coupled ones (rarely today does it involve “one database fits all”)?

·         How well does my IT presently support “rapid analytics” to complement my sensor-driven transactional system?

If your answer to all three questions puts you in sync with HTAP, congratulations:  you are ahead of the curve.  If, as I expect, in most cases the answers reveal areas for improvement, those improvements should be a part of IoT efforts, rather than trying to patch the old system a little to meet today’s IoT needs.  Think HTAP, and recognize the road ahead.

Saturday, September 17, 2016

The Climate-Change Dog That Did Bark In The Night: CO2 Continues Its Unprecedented 6-Month Streak


In one of Conan Doyle’s Sherlock Holmes mysteries, an apparent theft of a racehorse is solved when Holmes notes “the curious incident of the dog in the night-time” – the point being that the guard dog for the stable did not bark, showing that the only visitor was known and trusted.  In some sense, CO2 is a “guard dog” for oncoming climate change, signaling future global warming when its increases overwhelm the natural Milankovitch and other climate cycles.  It is therefore distressing to note that in the last 6 months, the dog has barked very loudly indeed:  CO2 in the atmosphere has increased at an unprecedented rate. 

And this is occurring in the “nighttime”, i.e., at a time when, by all our measures, CO2 emissions growth should be flat or slowing down.  As noted in previous posts, efforts to cut emissions, notably in the EU and China, plus the surge of the solar industry, have seemed to lend credibility to metrics of carbon emissions from various sources that suggest more or less flat global emissions in 2014 and 2015 despite significant global economic and population growth.

What is going on?  I have already noted the possibility that a major el Nino event, such as occurred in 1998, can cause a temporary surge in CO2 growth.  In 1998, indeed, CO2 growth set a record that was not beaten until last year, but in the two years after 1998, CO2 atmospheric ppm growth fell back sharply to nearly the previous level.  By our measures, the el Nino occurring in the first 5 months or so of 2016 was about equal in magnitude to the one in 1998, so one would expect to see a similar short surge.  However, we are almost 4 months past the end of this el Nino, and there is very little sign of any major decrease in growth rate.  It already appears certain that we cannot dismiss the CO2 surge as a short-term blip.

Recent Developments in CO2 Mauna Loa


In the last few days, I was privileged to watch the video of Prof. John Sterman of MIT, talking about the “what-if” tool he had developed and made available, in which climate models project atmospheric CO2 and temperature depending on how aggressive the national targets are for emissions reduction.  He was blunt in saying that even the commitments coming out of the Paris meeting are grossly inadequate, but he did show how much more aggressive targets could indeed keep total warming to 2 degrees C.  In fact, he was so forthright and well-informed that I could finally hope that MIT’s climate-change legacy would not be the government-crippling misinformation of that narcissistic hack Prof. Lindzen.

However, two of his statements – somewhat true in 2015 but clearly not true at this point in 2016 (the lecture, I believe, was given in the spring of 2016) – stick in my head.  First, he said that we are beginning to approach 1.5 degrees C growth in global land temperature.  According to the latest figures cited by Joe Romm, the most likely global land temperature for 2016 will be approximately 1.5 degrees C above preindustrial levels.  Second, he said that CO2 (average per year) had reached the 400 ppm level – a statement true at this time last year.  As of April-July 2016, however, the average per year has reached between 404 and 405 ppm.

CO2 as measured at the Mauna Loa observatory tends to follow a seasonal cycle, with the peak occurring in April and May, and the trough in September.  In the last few years, at all times of the year, growth year-to-year (measured monthly) averaged slightly more than 2 ppm.  Note that this was true for both 2014 and most of 2015.  Then, around the time that the el Nino arrived, it rose to 3 ppm.  But it didn’t stop there:  In April, a breathtakingly sharp rise of 4.1 ppm took CO2 up to 408 ppm.  And it didn’t stop there:  May and June were likewise near 4 ppm, and the resulting total average rise through August has been almost 3.6 ppm.  September so far continues to be in the 3.3-3.5 range. 
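To be clear about what “growth year-to-year (measured monthly)” means here, a minimal sketch; the numbers below are illustrative placeholders, not the actual observatory record.

    # Year-over-year growth per month: subtract the same month a year earlier.
    # Illustrative placeholder values, not the actual Mauna Loa record.
    monthly_2015 = {"Apr": 403.5, "May": 403.9, "Jun": 402.8}
    monthly_2016 = {"Apr": 407.6, "May": 407.7, "Jun": 406.8}

    for month in ("Apr", "May", "Jun"):
        growth = monthly_2016[month] - monthly_2015[month]
        print("%s: +%.1f ppm year-over-year" % (month, growth))
    # The 2016 surge shows up as growth near 4 ppm instead of the usual ~2 ppm.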

CO2, el Nino, Global Land Temperature:  What Causes What?


Let’s add another factoid.  Over the last 16 or so months, each month’s global land temperature has set a new record.  In fact, July and August (July is typically the hottest month) tied for absolute heat record ever recorded, well ahead of the records set last year.

So here we have three factors:  CO2, el Nino, and variations in global land temperature.  Clearly, in this heat wave “surge”, the land temperature started spiking first, the full force of el Nino arrived second, and the full surge in CO2 arrived third.  On the other hand, we know that atmospheric CO2 is not only a “guard dog”, but also, in James Hansen’s phrase, a “control knob”:  in the long term, for large enough variations in CO2 (which can be 10 ppm in some cases), the global land temperature will eventually follow CO2 by rising or falling in proportion.  Moreover, it seems likely that el Nino’s short-term effect on CO2 must be primarily by raising the land temperature, which does things like expose black carbon on melting ice for release to the atmosphere, or increase the imbalance between carbon-absorbing forest growth and carbon-emitting forest fires by increasing the incidence of forest fires.

But I think we also have to ask whether the effect of increasing CO2 on global temperatures (land plus sea, this time) begins over a shorter time frame than we thought.  The shortest time frame for a CO2 effect suggested by conservative science is perhaps 2000 years, when a spike in CO2 caused Arctic melting indicative of global warming less than a million years ago.  Hansen and others, as well, have identified 360 ppm of atmospheric CO2 as the level at which Arctic sea ice melts out, and we only passed that level about 20 years ago – a paper by a Harvard professor projects that Arctic sea ice will melt out at minimum somewhere between 2032 and 2053.  In other words, we at least have some indication that CO2 can affect global temperature in 50-100 years or so.

And finally, we have scientific work showing that global land temperature increases that melt Arctic sea and land ice affect albedo (e.g., turn white ice into blue water), which in turn increases sea and land heat absorption and hence temperatures, and these changes “ripple down” to the average temperatures of temperate and subtropical zones.  So part of global land temperature increase is caused by – global land temperature increase.  It is for these reasons that many scientists feel that climate models underestimate the net effect of a doubling of CO2, and these scientists estimate that rather than 500 ppm leading to a 2 degree C temperature increase, it will lead to a 4 degree C increase.

I would summarize by saying that while it seems we don’t know enough about the relationship between CO2, el Nino, and global land temperature, it does seem likely that today’s CO2 increase is much more than can be explained by el Nino plus land temperature rise, and that the effects of this CO2 spike will be felt sooner than we think.

Implications


If the “guard dog” of CO2 in the atmosphere is now barking so loudly, why did we not anticipate this?  I still cannot see an adequate explanation that does not include the likelihood that our metrics of our carbon emissions are not capturing an increasing proportion of what we put into the air.  That certainly needs looking into.

At the same time, I believe that we need to recognize the possibility that this is not a “developing nations get hit hard, developed ones get by” or “the rich escape the worst effects” story.  If things are happening faster than we expect and may result in temperature changes higher than we expect, then it is reasonable to assume that the “trickle-up” effects of climate change may become a flood in the next few decades, as rapid ecosystem, food, and water degradation starts affecting the livability of developed nations and their ability to feed their own reasonably-well-off citizens. 

The guard dog barks in the night-time, while we sleep.  We have a visitor that is not usual, and should not be trusted or ignored.  I urge that we start paying attention. 

Wednesday, September 14, 2016

In Which We Return to Arctic Sea Ice Decline and Find It Was As Bad As We Thought Seven Years Ago

For the last 3 years or so, I have rarely blogged about Arctic sea ice, because my model of how it worked seemed flawed and yet replacement models did not satisfy. 

Until 2013, measures of Arctic sea ice volume trended downward, exponentially rather than linearly, at minimum (usually in early September).  Then, in 2013, 2014, and 2015, volume, area and extent measures seemed to rebound above 2012 almost to the levels of 2007, the first “alarm” year.  My model, which was of a cube of ice floating in water and slowly moving from one side of a glass to the other, with seasonal heat, increasing yearly, applied at the top, bottom, and sides, simply did not seem to reflect what was going on. 

And now comes 2016 (the year’s melting is effectively over), and it seems clear that the underlying trends remain, and even that a modified version of my model loosely fits what’s happening.  This has been an unprecedented Arctic sea ice melting year in many ways – and the strangest thing of all may be this glimpse of the familiar.

Nothing of It But Doth Change, Into Something Strange

The line is adapted from Ariel’s song in Shakespeare’s The Tempest, in which a character is told of his father’s drowning:  “Full fathom five thy father lies/Of his bones are coral made/ … /Nothing of him that doth fade/But doth suffer a sea-change/Into something rich, and strange.”  And, indeed, the changes in this year’s Arctic sea ice saga deserve the title “sea-change.”  Here’s a list:
  1.       Lowest sea-ice maximum, by a significant amount, back in late March/April.
  2.       Lowest 12-month average.
  3.       Unprecedented amount of major storm activity in August.
  4.       Possibly the greatest amount of melt during July of any year in which generally cloudy conditions hindered melting.
  5.       First sighting of a large “lake” of open water at the North Pole on Aug. 28, when two icebreakers parked next to an ice floe and one took a picture.  Santa wept.
  6.       First time since I have been monitoring Arctic sea ice melt that the Beaufort Sea (north of Canada and Alaska) was almost completely devoid of ice to within 5 degrees of the pole.


These events actually describe a coherent story, as I understand it.  It begins in early winter, when unusual ocean heat shows up at the edges of the ice pack, especially around Norway.  The ocean temperature (especially in the North Atlantic) has been slowly rising over time, but this time it seemed to cross a threshold:  parts of the Atlantic near Norway stayed ice free all the way through the year, leading to the lowest-ever Arctic sea-ice maximum.

In May and early June, the sun was above the horizon but temperatures were still significantly below freezing in the Arctic, so relatively little melting got done. Then, when cloudy conditions descended and stayed late in June or thereabouts, the relative ocean heat counteracted some of the loss of melting energy from the sun.  But another factor emerged:  the thinness of the ice.  Over 2013, 2014, and 2015, despite the lack of summer melt, multi-year ice remained a small fraction of the whole.  So when a certain amount of melt occurred in July and August, water “punched through” in many places, so that melting was occurring not just on the top (air temperature) and bottom (ocean heat) but also the sides (ocean heat) of ice floes.  And this Swiss cheese effect was happening not just at the periphery, but all over the central Arctic – hence the open water at the Pole.

Then came the storms of August.  In previous years, at any time of the year, the cloudiness caused by storms counteracted their heat energy.  This year, the thin, broken ice was driven by waves that packed some of the storms’ energy, melting it further.  And, of course, one of the side effects was to drive sea ice completely out of the Beaufort Sea.

Implications For The Future

It is really hard to find good news in this year’s Arctic sea ice melting season.  Years 2013-2015 were years of false hope, in which it seemed that although eventually Arctic sea ice must reach zero at minimum (defined as less than 1% of the Arctic Ocean covered with ice), we had reached a period of flat sea ice volume, which only a major disturbance such as abundant sunshine in July and August could tip into a new period of decline. 

However, the fact of 2016 volume decrease in such unpromising weather conditions has pretty much put paid to those hopes.  It is hard to see what can stop continued volume decreases, since neither clouds nor storms will apparently do so any longer.  One can argue that the recent el Nino artificially boosted ocean temperatures, although it is not clear how it could have such a strong relative effect; but there is no sign that ocean heat will return to 2013-2015 levels now that the el Nino is over.  

Instead, the best we can apparently hope for is a year or two of flatness at the present volume levels if such an el Nino effect exists, not a return to 2013-2015 levels.  My original off-the-cuff projection “if this [volume decreases 2000-2012] goes on” was for zero Arctic sea ice around 2018, and while I agree that the 2013-2015 “pause” makes 2018 very unlikely, 2016 also seems to make any “zero date” after 2030 less likely than zero Arctic sea ice at some point in the 2020s. 

A second conclusion I draw is that my old model, while far overestimating the effects of “bottom heating” pre-2016, now works much better in the “fragmented ice” state of today’s Arctic sea ice in July and August.  In this model, as ice volume approaches zero at minimum, volume flattens out, while extent decreases rapidly and area more rapidly still (unless the ice is compacted by storms, as occurred this year).  This effect will be unclear to some extent, as present measurement instruments can’t distinguish between “melt ponds” caused by the sun’s heat and actual open water.

Finally, the Arctic sea “ice plug” that slowed Greenland glacier melt by pushing back against glacier sea outlets continues to appear less and less powerful.  This year, almost the entire west coast of Greenland was clear of ice by early to mid June – a situation I cannot recall ever happening while I’ve been watching.  Since this speeds glacier flow and therefore melting at its terminus entering the sea, it appears that this decade, like the 1990s and 2000s, will show a doubling of Greenland snow/ice melt.  James Hansen’s model, which assumes this will continue for at least another couple of decades, projects 6-10 feet of sea rise by 2100.  And even this may be optimistic – as I hope to discuss in a follow-on post on 2016’s sudden CO2 rise.

Most sobering of all, in one sense we are already as near to zero Arctic sea ice as makes no difference.  Think of my model of an ice cube floating in water in a glass for a minute, and imagine that instead of a cube you see lots of thin splinters of ice.  You know that it will take very little for that ice to vanish, whereas if the same volume of ice were still concentrated in one cube it will take much more.  By what I hear of on-the-ground reports, much of the remaining ice in the Arctic right now is those thin chunks of ice floating in water. 
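The surface-area arithmetic behind that intuition, as a minimal sketch with idealized cubes (real floe geometry is of course nothing this regular):

    # Same volume of ice in two shapes: one big cube vs. many small "splinters".
    # Idealized geometry only; real floes are far messier.
    def cube_surface_area(side):
        return 6 * side * side

    big_side = 10.0                                  # one 10 x 10 x 10 cube
    big_volume = big_side ** 3                       # 1000 units of ice
    big_surface = cube_surface_area(big_side)        # 600

    small_side = 1.0                                 # 1000 splinters of 1 x 1 x 1
    count = int(big_volume / small_side ** 3)        # same total volume of ice
    small_surface = count * cube_surface_area(small_side)

    print("Surface exposed to heat: %.0f vs. %.0f" % (big_surface, small_surface))
    # Ten times the surface area for the same volume, hence far faster melting.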

“Because I do not hope to turn again/Because I do not hope/ … May the judgment not be too heavy on us/ … Teach us to care and not to care.”  T.S. Eliot, Ash Wednesday

Friday, May 27, 2016

Climate Change: More “Business As Unusual”

The extreme highs in atmospheric carbon continue to occur.  This week is virtually certain to surpass 408 ppm, and therefore the month as a whole is virtually certain to surpass 407.6 ppm, a new historic high, and probably more than 4 ppm greater than last year.  If trends continue, the increase in ppm will probably set another new record, being much greater during the first five months of this year than last year’s average gain of 3.05 ppm (my off-the-cuff estimate so far is 3.5 ppm).  To repeat, this is a greater increase and percentage increase than that during the last comparable el Nino.  This calls into question, I repeat, whether our present efforts (as opposed to those to which Paris climate talks committed) are really doing anything significant to avoid “business as usual.”

Meanwhile, Arctic sea ice extent is far below even the previous record low for this time of year.  The apparent cause is increased sea and air temperatures plus favorable melting conditions in both the North Atlantic and North Pacific, partially supplemented by unusually early “total melt” on land in both Canada/Alaska and Scandinavia/Russia.  Unless present weather trends sharply change over the next 3 months, new record Arctic ice lows are as likely as not, and “ice-free” (less than 1 million sq kilometers) conditions are for the first time a (remote) possibility.

Finally, here are the climate change thoughts of the Republican candidate for next President of the United States, with commentary (which I echo) from David Roberts:
[The excerpt starts with footage from a press conference, followed by an introduction, followed by Trump’s speech]
This is ... not an energy speech.
Trump now arguing that coal mining is a delightful job that people love to do.
Trump wants the Keystone pipeline, but he wants a better deal -- 'a piece of the profits for Americans.'
Facepalm forever.
'We're the highest taxed nation, by far.' That is flatly false, not that anyone cares.
75% of federal regulations are 'terrible for the country.' Sigh.
'I know a lot about solar.' Followed by several grossly inaccurate assertions about solar.
I can't believe I just got suckered into watching a Trump press conference.
And now I'm listening to bad heavy metal as I wait for the real speech. This day has become hallucinatory.
Speech finally getting under way. Oil baron Harold Hamm here to introduce Trump.
Oh, good, Hamm explained energy to Trump in 30 minutes. We're all set.
Here's ND's [North Dakota] own Kevin Cramer, representing the oil & gas industry. Thankfully, he's gonna keep it short.
What even is this music?
Trump loves farmers. 'Now you can fall asleep while we talk about energy.' Wait what.
Trump now repeating Clinton coal 'gaffe,' a story the media cooked up & served to him. Awesome, media.
Honestly, Trump sounds like he's reading this speech for the first time. Like he's reacting to it, in asides, as he reads it.
Trump promises 'complete American energy independence -- COMPLETE.' That is utter nonsense.
This is amazing. He is literally reading oil & gas talking points, reacting to them in real time. We're all discovering this together.
Clinton will "unleash" EPA to "control every aspect of our lives." Presumably also our precious bodily fluids.
Trump is having obvious difficulty staying focused on reading his speech. He so badly wants to just do his freestyle-nonsense thing.
Can't stop laughing. He's reading this sh** off the teleprompter & then expressing surprise & astonishment at it. Reading for the 1st time!
He can't resist. Wandering off into a tangent about terrorism. Stay focused, man!
Oh, good, renewable energy gets a tiny shoutout. But not to the exclusion of other energies that are "working much better."
We're gonna solve REAL environmental problems. Not the phony ones. Go ahead, man, say it ...
(1) Rescind all Obama executive actions. (2) Save the coal industry (doesn't say how). (3) Ask TransCanada to renew Keystone proposal.
(4) Lift all restrictions on fossil exploration on public land. (5) Cancel Paris climate agreement. (6) Stop US payments to UN climate fund.
(7) Eliminate all the bad regulations. (8) Devolve power to local & state level. (9) Ensure all regs are good for US workers.
(10) Gonna protect environment -- "clean air & clean water" -- but, uh, not with regulations!
(11) Lifting all restrictions on fossil fuel export will, according to right-wing hack factory, create ALL THE BILLIONS OF US MONIES.
Let us pause to note: this is indistinguishable from standard GOP energy policy, dating all the way back to Reagan. Rhetoric unchanged.
"ISIS has the oil from Libya."
Notable: not a single mention of climate change, positive or negative. Just one passing reference to "phony" environmental problems.
And this, in the Senate and House of Representatives, is what keeps the US from far more effective action on climate change.  To slightly alter George Lucas in the Star Wars series, this is how humanity ends most of itself --- to thunderous applause.
That is all.