Tuesday, April 19, 2016

CO2 and Climate Change: Our Partial Data Promises Hope, Our Best Measure Insists Alarm

For the last five years or so, I have been watching atmospheric CO2 as measured at the peak of Mauna Loa in Hawaii (where contaminating influences on measurements are minimal).  I have become accustomed to monthly reports in which CO2 has increased a little more than 2 ppm over the same month of the previous year.  This year, things have changed drastically.
On Wed. Apr. 13, for example, I found that (a) monthly CO2 last month had increased by 3.6 ppm year to year; (b) the previous three days had been recorded as about 409.4, 409, and 408.5 ppm, of which the first measurement was about 5 ppm above the same day last year and about 10 ppm above the monthly average 1 ½ years ago, and (c) last year’s average increase had been confirmed as 3.05 ppm, breaking the record set in 1998 for greatest CO2 increase (the data from Mauna Loa go back to 1959).
[UPDATE:  3 weeks ago was the first weekly average above 405 ppm.  Last week was the first weekly average above 406 ppm.  This week was the first weekly average above 408 ppm – 4.59 ppm above the same week last year]
Meanwhile, figures gathered by the IEA suggest that fossil-fuel carbon emissions were essentially flat over the 2014-15 time period.  Studies using CarbonTracker suggest that the other net sources and sinks of atmospheric carbon (uptake by the land and the oceans, which removes carbon from the atmosphere, and “fire” [e.g., forest fires on land], which acts as a source) have been essentially flat for 15 years or more. 
What is going on?  Why the apparent large rise in Mauna Loa CO2 growth rates over 2014-16 and the apparent flat rate of growth in atmospheric carbon in 2014-15 according to IEA’s data supplemented by CarbonTracker?   Far more importantly, why does the first seem to signal an alarming development in global warming, while the second seems to promise that our efforts in sustainability and global compacts to combat global warming are beginning to bear fruit? And finally, which is right?
[Note:  Because of time constraints, I won’t be able to discuss things in depth.  However, I feel it is important that readers understand the general reasons for my conclusions]

Implications of Hope and Alarm

Let’s start with the second question:  Why does the IEA data seem to offer hope of significant progress in combating global warming, while the Mauna Loa CO2 data seems to signal alarm about the progress of global warming?
If the IEA is right, then over the last 2 years we have begun to make significant progress on reducing fossil-fuel pollution – despite a global economy that grew significantly in 2014 and 2015.  The result is that the rate of growth of CO2 has been flat for the last two years.  Because of the recent global climate agreements, we can expect future years to slow the rate of growth further, and in the not-too-distant future to begin to actually decrease atmospheric CO2.  Optimistically, we can hope to reach atmospheric stasis at 500 ppm, which will not keep global warming below 2 degrees C, according to Hansen and others, but should keep it below 4 degrees C, beyond which the consequences become much worse.
If the ML CO2 data is right, then we are not only making no significant progress in combating global warming, we may very well be at the start of more rapid warming.  For the last 15 years or so, the rate of growth of atmospheric CO2 has been slightly more than 2 ppm (the previous 15 years saw a growth rate slightly more than 1.5 ppm per year).  A bit more than 2 ppm per year for 15 years translates into about 1 degree Fahrenheit of average global warming.  If we now shift to a bit more than 2.5 ppm per year, that should translate into about 1.3 degrees Fahrenheit of warming over the next 15 years.  It will also mean that we “bake in” 2.67 degrees C of warming since 1850, most of it in the last 60 years, and will be well on the way to about 4 degrees C of warming (blowing past 500 ppm easily) by the 2050-2060 time frame, if we continue with present efforts rather than increasing them.
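As a rough sanity check on the arithmetic above, here is a back-of-the-envelope sketch (a toy calculation, not a climate model; the 4 degrees C per doubling sensitivity is the figure used later in these posts, and the logarithmic forcing law is the standard textbook assumption):

```python
import math

def warming_f(c_start_ppm, c_end_ppm, sensitivity_per_doubling_c=4.0):
    """Equilibrium warming (deg F) from a rise in CO2, assuming the usual
    logarithmic forcing law: delta_T = S * log2(C_end / C_start)."""
    delta_c = sensitivity_per_doubling_c * math.log2(c_end_ppm / c_start_ppm)
    return delta_c * 9.0 / 5.0  # convert deg C to deg F

# A bit more than 2 ppm/year for 15 years: roughly 370 -> 400 ppm
print(round(warming_f(370, 400), 2))   # on the order of 1 deg F
# A bit more than 2.5 ppm/year for 15 years: roughly 400 -> 440 ppm
print(round(warming_f(400, 440), 2))
```

The toy numbers land within a few tenths of a degree of the rough figures in the text; assuming a linear rather than logarithmic response gives somewhat larger values for the later period.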

Which Is Right?

Now let’s tackle questions 1 and 3:  what are the characteristics of the IEA and ML data that lead them to apparently different conclusions, and which of the two is more likely to be right?
Start with the IEA data on fossil-fuel emissions per year.  IEA depends primarily on self-reporting by nations of their use of electricity and heating, supplemented by extrapolation based on prior experience of the amount of CO2 released by coal/gas/oil-based heating and electricity generation. 
There are several reasons to view this data as likely to underestimate actual fossil-fuel emissions, and likely to have that underestimate grow over time.  First, the IEA data does not cover emissions during fossil-fuel production and refining.  The upsurge in fracking would show up in the IEA data as a net decrease in pollution (switching from use of oil to natural gas), while independent studies of wellhead-to-shipment emissions suggest that natural gas’s total use-plus-production emissions are almost equivalent to those of oil.  Second, there has been an ongoing shift in business’ fossil-fuel use from Europe and the US to developing countries like China and India.  Not only are these countries less able to report the full amount of their fossil-fuel emissions, they are probably also less efficient in using fossil fuels to heat and generate electricity in comparable facilities – and the IEA appears to apply the same efficiency standards to comparable facilities in both places.
Now let’s turn to the Mauna Loa CO2 data.  Long experience has determined that any contamination of the measurements by nearby sources is transitory and washes out in the monthly average.  Likewise, on average, Mauna Loa tracks near the center of the Northern Hemisphere’s response to the primary sources of fossil-fuel emissions, and it can be cross-checked against a global CO2 measure which is one month behind in reporting but has been showing a comparable upsurge (3.4 ppm in February). [Note:  The IEA shows a 0% increase in fossil-fuel emissions in 2015, while the global and Mauna Loa data show a 0.75% increase in total atmospheric CO2]
What are possible causes of the discrepancy aside from underestimated emissions data?  There could be increased “oceanic upgassing” as melting of sea ice allows release of carbon from plants in the newly exposed ocean – not likely, since no such effect has been clearly detected before.  There could be decreased “uptake” by the oceans as they reach their capacity to absorb carbon from the air – but such an uptake slowdown should happen more gradually.  And then there is the El Niño effect.
I haven’t found a good explanation yet of just how El Niño increases CO2, much less how the increase scales with the strength of the effect.  However, the largest recorded El Niño up to this time happened in 1998, data indicate that the 2015-16 El Niño is about as strong, and the 1998 El Niño produced approximately the same outsized jump in CO2 relative to the previous year as the 2015-16 one has, according to the helpful folks at Arctic Sea Ice (neven1.typepad.com).  So the recent Mauna Loa data could be explained as (constant underlying CO2 growth rate) plus (2015-16 El Niño effect). 
However, aside from all the reasons cited above for being mistrustful of the IEA data, there is another reason to believe that fossil-fuel emissions have actually been going up as well: 1996-99 were years of big economic growth, almost certainly leading to large increases in fossil-fuel emissions and thus in the underlying CO2 growth rate.  In other words, to be truly comparable with 1998, the equation more likely should be (constant underlying CO2 growth rate) plus (significant 2013-16 increase in fossil-fuel emissions and hence in the CO2 growth rate) plus (2015-16 El Niño effect).
All this reminds me of the scene in Alice Through the Looking-Glass where Alice starts walking towards her destination and finds herself further away than before.  You have to run much faster to stay in the same place, another character tells her, and much faster than that to get anywhere.  I view it as more likely than not that we are continuing to increase our fossil-fuel contribution, and that we will have to run much harder to keep the CO2 growth rate flat, much harder than that to start the growth rate decreasing, and much harder even than that to begin to decrease overall CO2 any time in the near future.

Facts and the Habit of Despair

In one of Stephen Donaldson’s first books, he imagines a world beset by evil whose only hope is a leper from Earth.  In those days, leprosy had no treatment, and the only way for lepers to survive was to constantly survey themselves for injuries that dead, leprous nerves failed to warn of, every waking second of every day – to constantly face their facts.  In the new world, the leper tells the leader of the fight against evil, “You’re going to lose! Face facts!”  The leader, well aware that only the leper can save his world, says very carefully, “You have a great respect for facts.”  “I hate facts,” is the response.  “They’re all I’ve got.”
I have concluded above that it is more likely than not that fossil-fuel pollution and therefore the underlying CO2 growth rate continues to grow – and that appears to be the fact delivered by the best data we have, the Mauna Loa and global CO2 measurements.  I hate this fact; but it seems to be what we have. 
And yet, Donaldson’s same series delivers an additional message.  At its improbable world-saving end, the leper asks the Deity responsible for his being picked why he was chosen.  Because, says the Deity, as a leper you had learned that it is not despair, but the habit of despair, that damns [a leper or a world].  To put it in global warming terms:  Yes, the facts are not good.  But getting into the habit of giving up and walking away in despair whenever you are hit by these facts is what will truly damn our world.  Because the horrible consequences of today’s facts are only a fraction of the horrible consequences of giving up permanently because of today’s facts.
To misquote Beckett:  I hate these facts.  I can’t go on.
I’ll go on.

Monday, January 11, 2016

The Climate Science Global Warming Model, Part II: Implications for Action


To repeat:  In this series of blog posts, I attempt to give an overall view of the physics/chemistry-based climate science dealing with climate change and today’s global warming.  I do so because I can’t find an overall summary such as the one I’m about to try to create.  My hope is that readers will understand why this science makes me so alarmed and seemingly so pessimistic.  As always, misunderstandings and misstatements are my fault and do not reflect on the science itself.

Implications from Climate Science for Action on Global Warming

I don’t want to stray too far from the science in discussions of implications for actions in fighting today’s global warming.  However, it seems to me that there are very clear links between the science and certain strategies that are worth discussing.  Here they are:

1.       Mitigation is far more important than adaptation.  Here, mitigation means action directly aimed at ceasing carbon emissions, e.g., converting a house to solar heating/cooling, while adaptation means changing our environment to handle present and future changes in it due to global warming – for instance, changing the mix of heating and cooling devices to emphasize greater efficiency at cooling a house, or moving away from ocean shorelines.  The point that climate science makes is that when we are choosing between $1000 of spending on mitigation and $1000 of spending on adaptation, in the long run we are better off spending on mitigation.  Remember, many warming-related damages increase exponentially, and that increase is happening extremely fast; therefore the increment of comfort gained from adaptation now, even if it suffices to handle the changes arriving 40 years from now, is not equivalent to the damages averted by doing mitigation now, even with a reasonable net present value (NPV) discount factored in.  This is particularly true when we assume, as seems likely, that our behavior replicated over a major proportion of the global population will have a major impact on the effectiveness of mitigation.

2.       Certain key complementary strategies are also vital to success.  In particular, we may cite reductions in emissions of black carbon; strategies aimed at keeping a large proportion of reserves of oil, natural gas, oil shale/tar sands, and above all coal, in the ground for the next 2000 years; water conservation and better soil-erosion handling to allow the maximum of future arable land; cessation of fertilizer and garbage runoff into the oceans, to delay or mitigate killing of ocean life/food; and investment in the lands that we are likely to move to and their future ecosystems, so that we may eventually move to them at low cost and live in them with low carbon emissions.

3.       There should be investment in moving to low-carbon-emissions technologies, products, and services at a much greater and/or more rapid rate than we are doing now.  Climate science – in particular, the records of increases in atmospheric carbon at Mauna Loa, Hawaii – tells us that our present efforts are not yet having a really detectable effect on the exponentially increasing amount of atmospheric carbon.  We need to do much more, as soon as possible. 
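The mitigation-versus-adaptation tradeoff in point 1 can be made concrete with a toy net-present-value comparison (all the numbers here – the discount rate, the damage growth rate, the $20 of first-year averted damages – are made-up illustrations, not estimates):

```python
def present_value(cashflows, discount_rate):
    """Net present value of a list of yearly amounts, year 0 first."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cashflows))

YEARS = 40
DISCOUNT = 0.03        # assumed discount rate
DAMAGE_GROWTH = 0.07   # assumed exponential growth rate of climate damages

# Adaptation: $1000 of comfort, enjoyed once, now.
adaptation_value = 1000.0

# Mitigation: $1000 now averts a small but exponentially growing slice of
# future damages (assume $20 averted in year 0, growing 7% per year).
averted = [20.0 * (1 + DAMAGE_GROWTH) ** t for t in range(YEARS)]
mitigation_value = present_value(averted, DISCOUNT)

print(mitigation_value > adaptation_value)  # True under these assumptions
```

The key design point is that whenever damages grow faster than the discount rate, the averted-damage stream compounds in present-value terms, so even a modest first-year benefit eventually outweighs a one-time gain in comfort.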

Conclusions

I first began to read and write about climate science – peripherally – about 6 years ago.  At that time, the core climate science as I have laid it out here was already in existence.  Everything that has happened in the last 6 years has added greater detail, firmed up the data behind some of the conclusions, and fleshed out the model with particular cases such as the effects of a lower Arctic-northern America temperature gradient.  As science, it is more solid than ever; but I knew that then.

Almost all the climate science news since then that I did not anticipate but recognized was possible has been negative.  The visible effects of climate change have showed up in dramatic form 15 years earlier than I expected – Hurricane Sandy, 60 degrees in Boston on Christmas Day.  The ancillary effects have moved the consensus estimate for effects associated with a doubling of atmospheric carbon from 2-3 degrees C to 4 degrees C.  Permafrost has begun to melt earlier than hoped, and the Antarctic land ice at a greater rate than first estimated.  At least the research on what it would take to turn Earth into Venus has given us 900 million years of breathing room.

As a model explaining the workings of the world, I find climate science breathtakingly elegant and beautiful.  As a new way of seeing the world I live in, it is a constant revelation.  As a pointer to the implications of today’s global warming, I find it hard not to feel sick.  How would you feel, if you thought there was a significant likelihood that the vanishing of 90% of arable land would mean the death of at least ½ of all of today’s great-great-grandchildren (and yes, that includes those of the rich)?  And that that likelihood turned into a probability if we continued to do pretty much what we have been doing for another 60-100 years or so?

I am reminded of Lincoln’s tale of the little boy who stubbed his toe.  He was too old to cry, he said, but it hurt too much to laugh.  I appreciate climate science very much; but its implications hurt too much for me to revel in it.

Happy New Year.

The Climate Science Global Warming Model, Part I: Today's Human-Caused Global Warming


To repeat:  In this series of blog posts, I attempt to give an overall view of the physics/chemistry-based climate science dealing with climate change and today’s global warming.  I do so because I can’t find an overall summary such as the one I’m about to try to create.  My hope is that readers will understand why this science makes me so alarmed and seemingly so pessimistic.  As always, misunderstandings and misstatements are my fault and do not reflect on the science itself.
In this post, we take a look at what climate science has to say about today’s human-caused global warming.  This episode of global warming beyond what would be expected from the Milankovitch cycle and underwater volcanism (that is, from a “Goldilocks” steady state that would soon begin to slowly decrease towards an Ice Age) has already taken us to global temperatures not seen for the last 1-5 million years. An additional 1.5 degrees C is already “baked in” – that is, the temperatures have not yet caught up to the atmospheric carbon level, but even with zero carbon emissions from now on, and without some technology that we do not presently have, we will still see this additional increase to a new “semi-steady state”.
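The “baked in” figure can be illustrated with the logarithmic relationship between CO2 and equilibrium temperature (a rough sketch; the 280 ppm pre-industrial baseline, the 4 degrees C per doubling sensitivity, and the ~1 degree C of warming observed so far are all assumptions of the sketch, not claims of the post):

```python
import math

def equilibrium_warming_c(c_ppm, c_preindustrial=280.0, sensitivity=4.0):
    """Eventual warming (deg C) once temperature catches up to a given
    CO2 level, assuming a logarithmic forcing law."""
    return sensitivity * math.log2(c_ppm / c_preindustrial)

observed_so_far = 1.0                      # assumed warming to date, deg C
eventual = equilibrium_warming_c(400.0)    # roughly today's CO2 level
print(round(eventual - observed_so_far, 1))  # warming still "in the pipeline"
```

With these assumptions the pipeline comes out at a little over 1 degree C; a somewhat higher sensitivity or CO2 level pushes it toward the 1.5 degrees C cited above.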
The key difference between all previous episodes of global warming and this human-caused one is that for the first time ever, carbon stored in animal and vegetative matter beneath the Earth (including the shallow parts of the oceans) is being brought above ground and burned, injecting into the atmosphere much of the carbon so stored over the last 100 million to 1 billion years of life on Earth.  Specifically, this means oil, natural gas, and coal, as well as the tar sands and oil shale that also contain “fossils”.  If all of the rest of this “energy reserve” were to be used in the same way over the next 100 years, then atmospheric carbon would probably reach more than 2000 ppm, and possibly as high as 4000 ppm.
As should be clear from my last blog post, the main difference in climate behavior from previous episodes of global warming is that atmospheric carbon injection is happening far faster:  about 100 times faster.  Thus, the parts of the climate process that slow global warming or return the climate to a previous “steady state”, which operate much more slowly, have very little impact on this global warming.  More specifically, “weathering” has very little impact, and the oceanic ice-creation mechanisms that normally keep the Arctic filled with sea ice also cannot keep up.
Let’s be a little more specific about the last point.  The oceans on today’s Earth operate in a great surface/underwater loop or “conveyor belt”.  The Gulf Stream’s warm water moves up to a point near Greenland, cooling as it goes.  When it reaches that point, it dives down below the surface and begins a journey south to the Antarctic Ocean, and from thence up the Pacific to a point near Alaska and Siberia, where it resurfaces and completes the loop back to the Gulf Stream.  This is called a “conveyor belt” because it transmits surface ocean temperature changes to the deep ocean over a period of about 100 years. 
When global warming causes sea ice to melt more at the “dive-down” point, the meltwater is fresher, and therefore less dense and more buoyant than the surrounding seawater (salt prevents seawater from freezing until it reaches about 29 degrees F, at which point the salt is expelled from the sea ice).  That fresher, more buoyant water slows or even stops the Gulf Stream from diving down at the dive-down point.  That, in turn, slows down or stops the rest of the Gulf Stream if it goes on too long.  If it doesn’t, the fresher water, which freezes more easily, refreezes near the dive point, and the “diving down” resumes.  In effect, up to a point, this circulation cycle acts to retard or reverse the loss-of-sea-ice albedo effects of global warming.  However, the faster the global warming, the less effect this mechanism has.

Climate Effects of Today’s Global Warming

Today’s global warming has and will have (remember, further temperature increases are “baked in”) effects on global climate especially in four ways:

1.       Temperatures in the Arctic and Antarctic warm perhaps 5-10 times as much as those nearer the Equator.  Also, winter temperatures warm more than summer ones, and night temperatures warm more than day ones.  The key effect from this on the Northern Hemisphere is that the temperature differential between Northern Eurasia and Northern America and the Arctic decreases, while weather patterns on both sides of that divide have more energy to stay on the “wrong side”.  The initial result is that weather systems on both sides linger longer, and are more extreme.  Boston saw almost-record cold last January for long periods of time, as Arctic weather came down and lingered; the Arctic saw several unprecedented warm spells from the south over this New Year, including a 22-hour period where the temperature was above freezing.  In the long run, however, the warmth from global warming will dominate the cold from the Arctic: a few million years ago, 360 ppm atmospheric carbon saw a subtropical Arctic.

2.       The atmosphere holds more energy (heat) and more moisture, making for stronger, wetter, and windier extreme-weather events.  It appears that the increased incidence of tornadoes and the increased top winds in hurricanes are to some extent caused by these temperature increases.  Likewise, heavier rain/snow when it occurs, and greater wind-driven oceanic “storm surges” onto land, are definitely occurring and are effectively caused by global warming.

3.       Climate “zones” move poleward, i.e., northward/southward.  In particular, subtropical zones with high heat and massive droughts extend northward/southward.  In 2050-2070, America/southern Canada (except New England/NY), pretty much all of Europe, the Middle East, India, and 2/3 of China, as well as Australia and Tasmania, are projected to be suffering Dust-Bowl-like droughts, if things go on as they have.  This may or may not remain true if very strong action is taken that drastically reduces human contributions to atmospheric carbon.  Because of the unusual speed of global warming, the majority of ecosystems in these areas cannot move poleward effectively and are faced with extinction in the next 100 years.  Also, decreased availability of water for farming, due to loss of snowpacks and overuse of aquifers combined with the drought conditions, places perhaps ½ of all of today’s arable land under threat.

4.       Land ice and snow melt exceptionally rapidly.  Melting of sea ice causes only a slight ocean rise, since floating ice already displaces its own mass of water; thermal expansion of the warming oceans (slow to happen, as noted) contributes less than 10% of the rise.  Melting of land ice and snow, however, translates directly into sea-level rise.  The key areas here are Greenland, West Antarctica, and East Antarctica.  A domino effect in the Arctic, which has already started, removes the sea-ice barriers to increased flow of Greenland’s glaciers into the oceans surrounding it, leading to a doubling of net land-ice loss there every 10 years.  West Antarctica has already begun the same process, especially in the Antarctic Peninsula, and it appears that East Antarctica is doing so as well.  Greenland may contribute as much as 10 feet of sea-level rise this century, and West Antarctica may well contribute that amount also.  In the longer term (again, this may be “baked in” by a 360-ppm atmospheric carbon level), sea-level rise may reach a maximum of 220-240 feet.  This would submerge 1/3 of the world’s present arable land in salt water.  If we add 30-foot-higher storm surges that “salt” water supplies inland, most of today’s coasts to 50-100 miles inland will be uninhabitable.  Thus, for example, in the US, Florida, Louisiana, Alabama, and Mississippi will effectively cease to exist.
There are also indirect “feedback loops” related to climate change that make the temperature increase per atmospheric-carbon doubling 4 degrees C rather than 2.5 degrees C.  The three key effects there are decreased albedo as ice and snow are replaced by darker water, rock, and earth; melting of permafrost (which has already begun to occur), which releases permafrost carbon into the air and in turn increases temperature; and the emergence of “black carbon” from the layers of melted ice, which “dirties” the remaining snow/ice and thereby decreases its albedo, as well as adding carbon to the atmosphere.

Finally, we should note some of the things that do not affect today’s climate change in the medium to long term:

1.       Above-water volcanism and “nuclear winter”.  These create winter-like conditions in which the temperature drops drastically.  In both cases, the particles that create these conditions linger for 1-7 years, but the net effect on underlying climate change is exactly zero:  if you double atmospheric carbon during the 1-7-year period after an eruption or nuclear-winter episode, at the end of 7 years a temperature increase of 4 degrees C will be “baked in.”

2.       Changes in the amount of cloud cover caused by climate change.  Studies have shown that, if anything, these increase temperatures slightly. 

3.       Aerosol pollution.  This is most frequently a side-effect of coal burning, and results in increased “smog”-related airborne particles that decrease temperatures significantly.  However, as China is now finding out, aerosol pollution can only counteract increased atmospheric carbon up to a point, after which the people suffering the aerosol pollution begin to die from it.  Thus, it now appears extremely likely that overall aerosol pollution will decrease over the next century, “uncovering” the temperature increases that the pollution has been masking.

4.       Methane emissions will increase sharply, but it now appears that their overall effect will be relatively small.

Summary

What worries climate scientists the most about today’s global warming is its unprecedented speed and extent, both of these being the result of humans rapidly and massively extracting and injecting into the atmosphere hundreds of millions of years of sequestered carbon primarily from dead animals and plants. The resulting feedback loops and processes, less checked than ever before by stabilization processes, enhance the immediate temperature effects of the new atmospheric carbon and have reached the point of oceanic saturation where much of the carbon now in the atmosphere will not be “cycled out” for thousands of years.

But there is another key implication of today’s process of global warming.  At every stage, the climate effects of doubling atmospheric carbon are 2-10 times greater – including the negative ones.  To take storm surges as an example: once we enter the 500-1000 ppm atmospheric carbon stage, the frequency of 100-mph storms equals that of 80- or 90-mph storms in the previous stage; and the damage from 100-mph winds is 10 times that of 80-90-mph winds.  Failing to prevent 500 ppm atmospheric carbon is an argument for redoubling our efforts to prevent 1000 ppm atmospheric carbon, not for giving up, because the consequences of failure to prevent each stage get progressively worse.

What is the worst that can happen, and what does it require?  According to a recent study by Hansen et al, the “can’t get any worse” scenario happens when we burn a certain amount of today’s fossil fuel reserves within the next 100-200 years:
If we burn all the coal plus a very minor amount of everything else, we reach “worst consequences.”
If we burn all of the oil, 33% of the coal, and none of the gas/tar sands/oil shale we reach “worst consequences”.
If we burn 17% of the coal, 50% of the natural gas, and all the tar sands/oil shale/oil, we reach “worst consequences”.
What are the worst consequences?  Here’s a brief summary of those:
In Hansen’s worst-consequence world, humans could survive below the mountains during the day outside only for short periods of time during the summer, and there would be few if any places to grow grains. 
In my version of Hansen's worst-consequence world, we would try to survive on less than 10 % of today's farmland, less than 10 % of the animal and vegetable species (with disrupted ecosystems), and practically zero edible ocean species, in methane-heavy mosquito-infested peat swamps that must be developed before they can be farmed, in dangerous weather, for thousands of years.
My final blog post on this subject will examine certain implications of climate science on effective ways to combat today’s global warming.

Climate Science's Climate Change Model


In this series of blog posts, I attempt to give an overall view of the physics/chemistry-based climate science dealing with climate change and today’s global warming.  I do so because I can’t find an overall summary such as the one I’m about to try to create.  My hope is that readers will understand why this science makes me so alarmed and seemingly so pessimistic.  As always, misunderstandings and misstatements are my fault and do not reflect on the science itself.
Let’s begin with sunlight.  The sun’s light is always accompanied by warmth/energy.  Even on barren Mars, with almost no atmosphere to “contain” that warmth, temperatures at the surface average about 100 degrees F below zero, and only a small part of that warmth is due to the planet’s internal heat.  The rest comes from sunlight striking the surface during daylight, whose heat is absorbed and re-radiated by surface materials.
The actual amount of absorption and heating depends on the “color” of the surface (or, in scientific jargon, the “albedo” of materials).  More specifically, think of the color spectrum you were taught:  white reflects all (except perhaps infrared) radiation, and absorbs no or very little heat.  Black absorbs all (except perhaps ultraviolet) radiation, and therefore absorbs a lot of heat.  On the Earth, water is blue and absorbs a moderate amount of heat; ice and snow are white or off-white and absorb very little heat; and soil and rock tend to be brown and black and absorb lots of heat.  Likewise, land covered by vegetation tends to be light to dark green and absorbs much more heat than the sand that would otherwise predominate on land – and that doesn’t even consider the effects of photosynthesis.
Finally, the atmosphere of Earth absorbs part of the heat radiated upward from the surface and re-radiates it back down, adding yet more warmth.  The amount returned depends on the concentration of certain gases in the atmosphere.
The warmth of Earth therefore consists of three layers, added over time:
1.       Light striking the surface of the planet, heating it to perhaps -100 degrees F;

2.       The atmosphere that both reflects some light back to the surface and prevents water from evaporating into space – resulting in oceans that absorb more light/heat, raising the temperature to perhaps -30 degrees F; and

3.       Animal and vegetative (plant) matter, both dead and alive, that absorb still more light/heat and raise the global average temperature to 56 degrees F – in a “steady state” climate.
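The first two layers can be illustrated with the textbook zero-dimensional energy-balance calculation (a standard physics exercise added here for illustration; the solar constant and average albedo are standard approximate values): absorbed sunlight must equal emitted infrared, which yields an “effective” temperature well below freezing, and the gap between that and the actual average surface temperature is the warmth the atmosphere and biosphere add.

```python
# Balance absorbed sunlight against emitted infrared:
#   sigma * T^4 = S * (1 - albedo) / 4  =>  T = (S * (1 - albedo) / (4 * sigma)) ** 0.25

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
SOLAR = 1361.0     # solar constant at Earth's distance, W/m^2
ALBEDO = 0.3       # Earth's average albedo (fraction of sunlight reflected)

def effective_temp_k(solar=SOLAR, albedo=ALBEDO):
    return (solar * (1 - albedo) / (4 * SIGMA)) ** 0.25

t_k = effective_temp_k()
t_f = (t_k - 273.15) * 9 / 5 + 32
print(round(t_f))  # around 0 deg F without the atmosphere's added warmth
```

Against the 56 degrees F steady-state average cited above, the atmosphere, oceans, and living matter together account for roughly 57 degrees F of added warmth in this toy picture.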

Carbon Dioxide and Other So-Called “Greenhouse Gases”

“Greenhouse” is a misleading term for what these gases do, which is to absorb and re-radiate heat in parts of the spectrum that are not handled by nitrogen and oxygen (the main components of the atmosphere).   However, compared to nitrogen and oxygen, these gases can vary much more over long periods of time.  In the case of carbon, a “steady-state” value is about 250 ppm, but historically carbon has been above 1000 ppm and below 200 ppm at various times.   

Carbon is one of six elements in the periodic table that is constantly cycling between the atmosphere and the surface of the planet.  Life forms are carbon-based, so animal and plant life (in the sea as well as on land) constantly add carbon that is either buried as part of the animal/vegetable fossil or released to the air via breathing or burning (in the case of forests).  However, in order for the amounts in the atmosphere and at the surface of the Earth to roughly balance, carbon must also be absorbed by “weathering”: wind/rain erosion of rocks that exposes new rock with which carbon can combine.  These carbonates are usually eventually washed down to the oceans, which equalize with the atmosphere through agitation that propels carbon back into the air.  In essence, then, over the long run this cycling stabilizes carbon in the atmosphere at about 250 ppm.   The “half-life” of carbon in the atmosphere is about 100 years, so even if no cycling upwards were going on, it would still take several centuries to largely drain the atmosphere of carbon – each century removes only half of what remains.
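A quick check on what a 100-year half-life implies (a toy exponential-decay calculation, using only the half-life figure above):

```python
def remaining_fraction(years, half_life=100.0):
    """Fraction of an initial pulse of atmospheric carbon still airborne
    after `years`, assuming simple exponential decay."""
    return 0.5 ** (years / half_life)

for t in (100, 200, 500):
    print(t, round(remaining_fraction(t), 3))
# After 200 years a quarter of the pulse is still airborne; even after
# 500 years about 3% remains.
```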

Probably the only other “greenhouse gas” relevant to climate is methane (CH4).  Without going into a long discussion, you should know that it, too, has seen a massive upsurge in emissions over the last 165 years due to human activity.  However, the half-life of methane is about 9 years, and the amount of emitted methane needed to have an impact on global temperature comparable to carbon dioxide is about 4-10 times what is presently being emitted per year, while the amount unlockable from shallow waters or permafrost in any given 9 years, even at our accelerated pace of warming, is probably less than that.  Rather, because methane contains carbon, the long-term worry is the possibility that the methane in permafrost will be converted primarily to carbon dioxide, which would add about 0.6 degrees C (a guesstimate) to hundreds-of-years global warming.

Plate Tectonics and the Milankovitch Cycle

There are two main ways that atmospheric carbon can deviate significantly from the norm, absent human intervention.  One is predictable, cyclic variation over a period of 100,000 to 200,000 years:  the Milankovitch cycle.  The other is underwater volcanism that ejects carbon, typically while moving one of the Earth’s plates.

The Milankovitch cycle results from three changes in the Earth’s orbit around the Sun:

1.        The Earth wobbles around its axis of spin;

2.       The Earth’s orbit at some times of the year takes it closer to the Sun, at others further away;

3.       Like a rubber band, the Earth’s yearly orbit sometimes becomes more like a circle, sometimes more like an ellipse.
Visualize these effects together.  At about the point where it is winter in the Northern Hemisphere (where most of the land is) and the other two effects place the Earth farthest from the Sun, an ice age is kick-started.  The temperature descends gradually as ice encroaches southwards from the Arctic Ocean over land, changing the albedo in the areas affected.  Less carbon is exposed, and so less carbon is emitted into the atmosphere.  This continues until the three effects place the Earth closest to the Sun in the Northern Hemisphere’s summer, when a fairly rapid rise in temperature and carbon occurs, reaching a steady state that lasts for about 50,000-100,000 years.

The underwater volcanism effect is much less frequent but can be far more powerful in its effect on atmospheric carbon and global warming.  In the most recent example, about 55 million years ago, continuous underwater eruptions around the plate then near the South Pole sent it steadily northwards to crash into the Eurasian plate, yielding India plus the Himalayas at the point of impact.  During this period, which is unlikely to have lasted less than 20,000 years, steady output of carbon into the atmosphere kept the atmospheric level above 1000 ppm.  The results were high temperatures (about 7 degrees C above the present), mass extinctions of land and sea flora and fauna, and very high sea levels – so-called “hell and high water.”  Mass extinctions, high temperatures, and high sea levels have also been confirmed for the previous such atmospheric-carbon rise (about 155 million years ago).

Also noteworthy is what happened after the eruptions came to an end.  Atmospheric carbon decreased back to its “steady-state” level, but only slowly.  In the case of the most recent such episode, atmospheric carbon took 50-54 million years to return to “steady-state”, reaching it only 1-5 million years ago.  The reason is that the oceans were in effect saturated with carbon:  most of any decrease in atmospheric carbon was offset by fresh contributions from the ocean, while “weathering” that returned the carbon to the planet’s interior worked only slowly to end that saturation.

And another factor worthy of note is the sheer volume of the carbon dioxide.  With so much more CO2 put into the air during one of these extraordinary periods, more of it dissolves in water as carbonic acid, and therefore the atmosphere and rain are both more acidic than in “steady-state” periods.

Summary

The Earth’s climate, then, operates to keep itself relatively stable in both the short (10,000s of years) and long (billions of years) term, but too large a deviation from atmospheric carbon stability has the opposite effect:  it drives and maintains further deviation to a new “semi-steady state” that lingers for a while even after the main impetus for deviation is gone.  To cite one example:  we are presently in the late stages of the “high-temperature” phase of the Milankovitch cycle – what some scientists call the “Goldilocks” climate (neither too hot nor too cold for human purposes).  Absent human-caused carbon emissions, we would have expected a slow descent into an Ice Age to begin in less than 10,000 years.

The critical factor in creating both Milankovitch and plate-tectonic deviations from a “Goldilocks” steady state is atmospheric carbon.  In the case of the Milankovitch cycle, increased/decreased sunlight is the initial cause of warming/cooling, followed by a feedback loop between sunlight absorption and carbon emissions.  In the case of underwater volcanism, the atmospheric carbon itself is the initial cause of warming, followed by a feedback loop between sunlight absorption and carbon emissions as well as further volcanic carbon injections.

A minor note:  above a certain point, atmospheric carbon would become so prevalent as to drive global temperatures above the boiling point of water.  The oceans would then evaporate, and from then on global surface temperatures would be such that acid (carbonic) rain would simply evaporate before reaching the surface, and apparently no life could survive.  The planet Venus operates in just such a way today.  Luckily, even if all fossil-fuel reserves were used, we cannot presently reach that level of atmospheric carbon.  However, the Sun’s output increases enough to raise Earth’s temperature by about 1 degree C every billion years, and therefore, at the earliest, Earth could turn into Venus some 900 million years from now.

Friday, January 8, 2016

Climate Change Bulletin 2016 #2: CO2

The initial estimate for the 2015 yearly increase in CO2 as measured from Mauna Loa, Hawaii, is now available.  It set a new record for amount of increase:  3.17 ppm.  This was the first yearly increase above 3 ppm, and was 0.25 ppm above the second-biggest increase (1998).  It also represented the first time the increase had surpassed 2 ppm for three (arguably four) consecutive years.  Increases in 1959-1964 averaged about 0.6 ppm per year, so in the last 50 years the yearly increase has more than quintupled – a doubling approximately every 20 years. 

Projecting forward 40 years, the increase in 2055 would therefore be in the area of 12 ppm, and the total carbon in the atmosphere would be in the 600 ppm range, more than double the pre-industrial value.  Projecting forward another 40 years, the yearly increase would be in the area of 45 ppm, and the total would be about 1040 ppm, roughly four times the pre-industrial value.  According to the estimate of James Hansen et al. that each doubling of CO2 in the atmosphere historically corresponds in the long run to a 4 degrees C (7.2 degrees F) increase in global land temperature, and twice that in the far north/south, we are therefore talking about an increase of 8 degrees C (15 degrees F) on average from 1850 to 2100 “baked in” (that is, not achieved in 2100 but very likely to be reached 100-200 years later), and 16 degrees C (29 degrees F) in the far north/south. 
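
The projection above can be sketched as a simple compounding loop (a rough model only: the 2015 level of about 401 ppm and 3 ppm starting increase are assumed round numbers, and the doubling-every-20-years trend is this post’s extrapolation):

```python
def project_co2(start_year=2015, end_year=2055, level=401.0, increase=3.0):
    """Compound a yearly CO2 increase that itself doubles every 20
    years; returns (projected ppm level, projected yearly increase)."""
    growth = 2 ** (1 / 20)  # yearly growth factor of the increase itself
    for _ in range(start_year, end_year):
        level += increase
        increase *= growth
    return level, increase
```

Run with the defaults, this lands on a yearly increase of 12 ppm and a total around 660 ppm in 2055 – in the same ballpark as the “600 ppm range” figure above.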

Already 1/2 of arable land has been lost to soil erosion, according to a recent study.  1/3 of the remaining land would be under threat from rising salt-water seas and storm surges due to the increased heat – by some estimates as much as 30 feet of sea-level rise by the end of the century.  Much of the rest would be under threat from massive droughts affecting all of the US except New England/NY/PA, most of Europe, much of the Middle East, India, most of Africa, Australia, and most of China.

That is all.

Thursday, January 7, 2016

Climate Change Bulletin 2016 #1

In the last week of December 2015, a storm from the northern United States surged northwards, reaching a near-record low barometer reading and, for a period of about 22 hours, heating the air on a line running along the eastern coast of Greenland to the North Pole to above freezing – or more than 50 degrees F above the average temperature at this time of year.  It was the deepest into winter that above-freezing temperatures had been recorded by almost a month, and it was accompanied by rain whose warm moisture caused serious melting of sea and land (Greenland) ice.

The consensus of scientific observers has been that such an occurrence at this time of year will not by itself significantly reduce the sea ice at maximum or minimum.  However, it will give a significant acceleration to melting of Greenland’s glaciers into the sea, thus keeping the “doubling every decade” trend of Greenland land ice melt going.  This is not a one-off event, but rather related to the “wavy jet stream” pattern of weather in the winter that will not only export colder air south but also import much warmer air north into the Arctic.  In other words, this event suggests that the pessimistic forecast of sea rise of 15-30 feet this century is more in line with what is happening than more optimistic forecasts of 1-15 feet.

That is all. 

Tuesday, December 15, 2015

Two Humble Suggestions For Basic Research in Computer Science

In a somewhat odd dream the other night, I imagined myself giving potloads of targeted money to some major university a la Bill Gates, and I was choosing to spend it on basic research in computing.  What, I asked myself, was worthy of basic research that had not been probed to death already?

Obvious candidates relevant to the frontiers of the computing market these days include artificial intelligence and speech recognition.  These, however, imho are well-plowed fields already, having received major basic-research attention at least since Minsky in the 1970s (AI) and the government’s Cold War-era need for translations of foreign documents in the 1960s (natural-language parsing).  The result of 40+ years of basic research has been an achingly slow translation of these into something useful (Watson, Siri, and their ilk), so that most of the action now is in applied rather than basic research.  So, I said in my dream, I’m not going down that rathole.

Now, I am not really up to date on what universities are doing in basic computer research, but I do get some gleanings from the prizes awarded to Comp Sci professors at a couple of universities.  So I would like to suggest two new areas where I wonder if basic research could really make a difference in the medium term, and allow computing products to do much better.  Here they are:
1.       Recursive computing; and
2.       Causality in analytics.

Recursive Computing

Back in the early 1970s, “theory of algorithms and computing” provided some very valuable insights into the computation times of many key computer tasks, such as sorting and solving a matrix.  One hot topic of ongoing research was figuring out whether tasks that can be done in parallel (non-deterministically) in polynomial time – n**k [n to some fixed power k] x some constant, where n is the number of data points used by the task – can also be done sequentially in polynomial time.  In math jargon, this was known as the P(olynomial) = N(on-deterministic)P Problem.  Note that at the time, a task that must take exponential time was effectively undoable for all but the smallest cases.

It turned out that several useful tasks fit in the category of those that might possibly be solvable in polynomial time.  For example, the traveling salesman problem seeks the route between n points that minimizes total travel time.  If and only if P=NP, the traveling salesman problem could be solved exactly for any case in polynomial time.  The last time I checked, in the 1980s, the P=NP problem had not been solved, but “good enough” approximations to answers had been identified that got close enough to the right solution to be somewhat satisfactory.
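
To make the exponential blowup concrete, here is a brute-force traveling-salesman solver (an illustrative sketch of my own, not from any of the research mentioned): it tries all (n-1)! routes, which is exactly the kind of cost a polynomial-time algorithm – i.e., P=NP – would eliminate.

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact traveling salesman by trying every route -- O(n!) time.
    dist is an n x n matrix of travel costs between points."""
    n = len(dist)
    best_cost, best_route = float("inf"), None
    # Fix point 0 as the start so each cycle is counted once.
    for perm in permutations(range(1, n)):
        route = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(route, route[1:]))
        if cost < best_cost:
            best_cost, best_route = cost, route
    return best_cost, best_route
```

Even at a billion routes per second, n = 20 points already means tens of years of computation – which is why the “good enough” approximation algorithms mattered.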

Recursion is, briefly, the solution of a problem of size n by combining the solutions of the same problem at smaller sizes, say, n-1 or n/2.  For example, one can solve a sorting problem of size n by sorting two lists of size n/2 and then merging list 1 and list 2, piece by piece.  Each list can, in turn, be sorted as 4 lists of size n/4, and so on down to lists of size 2.  If all of this is done sequentially, the time is O(n log n).  If it is done in parallel, however, with n processors, the time is O(log n).  That’s a big speedup through parallelism – but it’s not parallelism as P=NP means it.  In practical terms, you simply can’t pack the processors in a tree structure next to each other without the time to talk from one processor to another growing longer and longer.  I estimated the crossover point, where parallelism becomes no longer of use, at about 10**6 (a million) to 10**9 (a billion) processors.
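
The recursive sorting scheme just described is, of course, merge sort; a minimal sketch:

```python
def merge_sort(xs):
    """Recursion: solve size n by solving two size-n/2 subproblems
    and combining -- O(n log n) sequentially."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    # Merge the two sorted halves piece by piece: O(n) work per
    # level of recursion, with O(log n) levels in all.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

In the parallel version, the two recursive calls at each level would run on separate processors, collapsing the log n levels into the O(log n) wall-clock time mentioned above.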

In the years since the 1980s, this kind of analysis seemed irrelevant to speeding up computing.  Other techniques, such as pipelining and massively parallel (but not recursive) arrays of PCs seemed to offer better ways to gain performance.  But two recent developments suggest to me that it may be time to dust off and modernize recursion research:
1.       Fast Data depends on Apache Spark, and the model of Apache Spark is of one processor per piece of the data stream applied to a humongous flat-memory data storage architecture (a cluster of PCs).  In other words, we can achieve real-time transaction processing and initial analytics by completely parallel application of thousands to millions of local PCs followed by recombination of the results.  There seems a good case to be made that “divide and conquer” here will yield higher performance than mindless pipelining. 
2.       Quantum computing has apparently proved its ability to handle computer operations and solve problems.  As I understand it, quantum computing data storage (via qubits) is completely parallel, and is not bounded by distance (what Einstein apparently referred to as “spooky [simultaneous] action [by two entangled quantum objects] at a distance [that apparently could be quite large]”).  Or, to put it another way – loosely speaking – in quantum computing, P=NP.

Whether this means recursion will be useful again, I don’t know.  But it seems to me worth the effort.

Causality In Analytics

One of the more embarrassing failures of statistics in recent years was in handling the tobacco controversy.  It seemed plain from the data that tobacco smoke was causing cancer, both first-hand and second-hand, but the best statistics could apparently do was to establish a correlation – which could mean that tobacco smoke caused cancer, or that a genetic tendency to cancer caused one to smoke, or that lung cancer and tobacco use increased steadily because of other factors entirely.  It was only when the biological effects of nicotine on the lungs were traced that a clear causal path could be projected.  In effect, statistics could say nothing useful about causality without a separate scientific explanation.
A recent jaunt through Wikipedia in search of “causality” confirmed my concerns about the present state of statistics’ ability to identify causality.  There were plenty of philosophical attempts to say what causality was, but there was no clear statistical method mentioned that allowed early identification of causality.  Moreover, there seemed to be no clear way of establishing anything between correlation/linear regression and pure causality. 

If any modern area would seem to offer a promise of something better, it would be business analytics.  After all, the name of the game today in most cases is understanding the customer, in aggregate and individually.  That understanding also seeks to foster a long-term relationship with key customers.  And therefore, distinguishing between the customer that spends a lot but, once dissatisfied, leaves and the customer who spends less but is more likely to be a long-term sell (as a recent Sloan Management Review article pointed out, iirc) can be mission-critical.

The reason basic research would seem likely to yield new insights into causality is that one key component of doing better is “domain knowledge”.  Thus, Amazon recently noted that I was interested in climate change, and then proceeded to recommend not one but two books by climate change deniers.  Had the analytics been able to use something like IBM’s Watson, they might have deduced that I was interested in climate change because I was highly concerned about it, not because I was a paranoid conspiracy theorist who thought climate change was a plot to rob me of my hard-earned money.  And basic research that could establish causal models better should also be able to enhance domain knowledge in order to provide the ability to establish the degree of confidence in causality that is appropriate in a particular case, and avoid data-mining bias (the fact that trying out too many models will increase the chances of choosing an untrue one).
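Data-mining bias in particular is easy to demonstrate with a toy simulation (all parameters here are made up for illustration): correlate enough random “features” against a random outcome and some will look predictive purely by chance.

```python
import random

def spurious_hits(n_features=200, n_samples=10, threshold=0.5, seed=1):
    """Count random features whose sample correlation with pure noise
    exceeds a threshold -- every 'discovery' here is false."""
    rng = random.Random(seed)
    y = [rng.gauss(0, 1) for _ in range(n_samples)]

    def corr(a, b):
        # Pearson correlation of two equal-length samples.
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        cov = sum((x - ma) * (z - mb) for x, z in zip(a, b))
        sa = sum((x - ma) ** 2 for x in a) ** 0.5
        sb = sum((z - mb) ** 2 for z in b) ** 0.5
        return cov / (sa * sb)

    return sum(
        1
        for _ in range(n_features)
        if abs(corr([rng.gauss(0, 1) for _ in range(n_samples)], y)) > threshold
    )
```

With 200 random features and only 10 samples, dozens of features typically clear the 0.5 correlation bar despite having no causal connection to the outcome at all – the model-shopping trap the paragraph above describes.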

Envoi


I expect that neither of these basic research topics will actually ever be plumbed.  Still, that’s my holiday wish list.  And now I can stop worrying about that dream.