Sunday, August 19, 2018

Climate Change Mid-2018: The Relatively Good Bad News


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and anything in it should not be relied on.

As I have argued before, the metrics humans use to gauge how well we are coping with climate change can be highly misleading, usually erring on the side of false optimism.  Two metrics that are clearly not thus biased are:

1.       Measurements of atmospheric CO2 at Mauna Loa in Hawaii, which have been recorded since 1959;

2.       Estimates of Arctic sea ice volume (with extent serving as a loose approximation), especially at minimum in September, which have been carried out since 1979, the start of the satellite record.

Over the past few years, I have covered the drumbeat of bad news from those two metrics, indicating that we are in a “business as usual” scenario that is accelerating climate change.  In the first half of 2018, what has happened in both cases is that the metrics are not following a “worst possible case” path – hence the “relatively good” part of the title.  At the same time, there is no clearly apparent indication that we are deviating from our “business as usual” scenario – mitigation is not clearly having any effect.  It is possible, however, that we are seeing the beginnings of an effect; it’s just not possible to detect it in the statistical “noise.”  And given that scientists are now talking about a “tipping point” in the near future, in which not only a frightening 2 degrees C of warming by 2100 is locked in, but also follow-on feedbacks (like permafrost melt) that eventually take the temperature rise to a far more disastrous 3-4 degrees C – well, that’s the underlying, ongoing bad news.
Of course, this summer’s everlasting heat waves in the US, Europe, and the Middle East – heat waves clearly caused primarily by human-generated CO2 emissions and the resulting climate change – make the “new abnormal” obvious to those of us who are not willfully blind.  But for anyone following the subject with an open mind, the heat waves are not a surprise.  
So let’s take a look at each metric.

The El Nino Effect Recedes


From late 2016 to around June of 2017, the El Nino effect crested, and, as it has done in the past (e.g., 1998), drove both temperatures and the rate of CO2 rise skyward.  Where 2013-2015 saw an unprecedented streak of 3 straight years of greater-than-2-ppm atmospheric CO2 growth, 2016 and 2017 both saw record-breaking growth of around 3 ppm (hiding a brief spurt to almost 4 ppm).  1998 (2.86 ppm) was followed by a year or two of growth around 1 ppm – in fact, slower than 1996-7.  But the percentage rate of rise has also been rising over the years (it reached almost 1% in early 2017: 4 ppm over 404 ppm).  Therefore, it seemed a real possibility that 2018 would see 2.5 ppm growth.  Indeed, we saw 2.5 ppm growth as late as the first month or two of 2018.
Now, however, weekly and monthly growth has settled back to a 1.5-2 ppm rate, consistently since early 2018.  Even a 2 ppm rate gives hope that El Nino did not mean a permanent uptick in the rate of rise.  A 1.5 ppm rate would seem to indicate that 2018 is following the 1999 script – a dip in the rate of rise, possibly because of the follow-on La Nina.  It might even indicate a slight – very slight – decrease in the underlying rate of rise (i.e., the rate of rise with no El Nino or La Nina going on).  And that, as I noted above, is the first indication I have seen that things might possibly be diverging from “business as usual”.  
Of course, there’s always the background of bad news.  In this case, it lies in the fact that ever since I started following CO2 at Mauna Loa 6 or 7 years ago, CO2 levels in year 201x have been about 10 ppm greater than in year 200x (10 years before) – but right now, CO2 levels are about 13.5 ppm greater than they were in 2008.  So, even if the El Nino effect has ended, the underlying amount of rise may still be increasing.  
The best indicator that our efforts are making a difference would be two years of 1 ppm rise or less (CO2 Mauna Loa measures the yearly amount of rise by averaging the Nov.-Feb. monthly rises).  Alas, no such trend has shown up in the data yet.
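To make that measurement concrete, here is a minimal Python sketch of the yearly-rise calculation just described – the average of the November-through-February year-over-year monthly rises.  The monthly values are made up for illustration; the real figures are in NOAA’s Mauna Loa record.

this_winter = {"Nov": 406.0, "Dec": 407.1, "Jan": 408.0, "Feb": 408.6}  # hypothetical ppm
last_winter = {"Nov": 404.1, "Dec": 405.2, "Jan": 406.2, "Feb": 406.7}  # one year earlier

# Year-over-year rise for each winter month, then the average of the four
rises = [this_winter[m] - last_winter[m] for m in this_winter]
annual_rise = sum(rises) / len(rises)
print(round(annual_rise, 2))  # 1.88 ppm/year: back in the 1.5-2 ppm range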

Arctic Sea Ice:  Not In Stasis, Not in Free Fall


Over the last 2 years, the “new normal” in Arctic sea ice advance and retreat has become apparent.  It involves both unprecedented heat in winter, leading to new record-low maximum extents, and a cloudy and stormy July and August (key melt months), apparently negating the effects of the winter heat.  However, volume continues to follow a downward overall trend (if far more linear and closer to flat-line than the apparently exponential “free fall” up to 2012, which had some predicting “ice-free in 2018”).
As Neven’s Arctic Sea Ice blog (neven1.typepad.com) continues to show, however, “ice-free in September” still appears only a matter of time (at a best guess, according to some statisticians, in the early 2030s).  Sea surface temperatures (SSTs) in key parts of the Arctic, like those north of Norway and in the Bering Sea, continue to rise and impact sea ice formation in those areas.  As the ice inherited from winter thins, we are beginning to see storms that actually break the weaker ice into pieces, encouraging increased export of ice to the south via the Fram Strait.  The ice is so thin that a few days ago an icebreaker carrying scientists had to go effectively all the way to the North Pole to find ice thick enough to support their instruments for any length of time.
So the relatively good news is that it appears highly unlikely that this year will see a new low extent, much less an ice-free moment.  The underlying, ongoing bad news is that eventually the rise in SSTs will inevitably overcome the counteracting cloudiness in July and August (and that assumes that the cloudiness will persist).  Since 1980, extent at maximum has shrunk perhaps 12%, while extent at minimum has shrunk perhaps 45% (volume shows sharper decreases).  And in this, unlike CO2 Mauna Loa, there is no trace of a hint that the process is slowing down or reversing due to CO2 emissions reductions.  Nor would we expect there to be such an indication, given that we have only gotten globally serious about emissions reduction in the last 3 years (yes, I recognize that Europe is an exception). 

The Challenge


The question the above analysis raises is:  What will it take to really make a significant impact on our carbon emissions – much less the dramatic reductions scientists have been calling for?  I see no precise answer at the moment.  What I do know is that what we are already doing needs to be done faster and far more extensively – because the last few years have also seen a great increase in understanding of the details of the needed changes, as I have tried to show in some of my Reading New Thoughts posts.  The ways are increasingly there; the will is not.  And that, I think, along with countering the disgustingly murderous role of President Trump in particular in climate change (I am thinking of Hurricane Maria and Puerto Rico as an obvious example), should be the main task of the rest of 2018. 

Saturday, August 4, 2018

Reading New Thoughts: Two Books On the Nasty Details of Cutting Carbon Emissions


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and anything in it should not be relied on.
I have finally gotten around to talking about two books I recently read, tomes that have greatly expanded my knowledge of the details and difficulties of reducing carbon emissions drastically.  These books are Peter Kalmus’ “Being the Change” and David Owen’s “Where the Water Goes”, and I’d like to discuss, very briefly, the new thoughts I believe they give rise to.

Kalmus and the Difficulties of Individual Efforts to Cut Carbon Emissions


“Being the Change” is a bit of an odd duck; it’s the personal musings of a physicist who deals professionally with climate at the planetary scale, on his personal efforts to reduce his own greenhouse-gas emissions.  Imho, its major value is that it gives perhaps the best explanation I have read of the science of climate change.  However, as promised, it also presents Kalmus’ careful dissection of his own and his family’s lifestyle in terms of carbon emissions, and his efforts to reduce those emissions as much as possible.
At the start we find out that Kalmus has succeeded in reducing his emissions by 90% over the course of a few years, so that they are now only 10% of what they were at the start of the effort.  This is significant because many scientists’ recommendations for what is needed to avoid “worst cases” call for reductions of 80-90% in a time frame of less than 25 years.  In other words, it seems at first glance that a world of individual efforts, if not hindered as they are now by business interests or outdated government regulations, might take us all the way to a carbon-reduced world.
When we look at the details of Kalmus’ techniques, however, it becomes apparent that a major portion of them are not easily reproducible.  In particular, a significant chunk of savings comes from not flying any more; but he was flying more than most, as a scientist attending conferences, so his techniques extended worldwide are more likely to achieve 50-70% emissions reductions, not 80-90%.  Then we add his growing his own food while using “human manure” as fertilizer; and that is something far more difficult to reproduce worldwide, given that perhaps 50% of humanity now lives in cities and that scavenging human manure is a very time-consuming activity (not to mention borderline illegal in some jurisdictions).  So we lose another 10-20%, for a net reduction of 30-60%, according to my SWAG (look it up; the arithmetic is sketched below).  
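For the pedantic, here’s the back-of-envelope arithmetic behind that SWAG, in Python – every number in it is my guess, not Kalmus’:

worldwide = (50, 70)    # my guess at reductions achievable worldwide, once flying is discounted (percent)
food_loss = (10, 20)    # my guess at the further loss from non-reproducible food/manure practices

# Worst case pairs the low end of one range with the high end of the other
net = (worldwide[0] - food_loss[1], worldwide[1] - food_loss[0])
print(net)  # (30, 60): the 30-60% net reduction cited above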
The net of it, to me, is that using many of Kalmus’ techniques universally, if it can be done, is very much worth doing; but changing business practices and adopting government policies and global efforts is also necessary, whether we make our individual efforts or not, to achieve the needed drastic reductions in carbon emissions, over a short or a long time period.  There are two pieces of good news here.  First, Kalmus notes that he could have achieved further significant personal reductions if he’d been able to afford a solar-powered home; and that’s something that governments (and businesses) can indeed take a shot at implementing worldwide.  Second, I heard recently that my old junior high’s grade school was now teaching kids about individual carbon footprints ("pawprints") and what to do about them.  Yes, the recommendations were weak tea; but it’s a good start at spreading individual carbon-emissions reduction efforts across society. 

Owen and the Resistance of Infrastructure, Politics, and Law to Emissions Reductions and Sustainability


Nominally, “Where the Water Goes” is about the Colorado River watershed:  how its water is allocated, and the changes due to the evolution of the economies of neighboring states plus the pressures of increasing climate-change water scarcity, increased usage from population growth, and the need for sustainability and carbon-emissions reductions.  What stands out about Owen’s account, however, is the weird and unexpected permutations of watershed management involved.  Here are a few:

·         The Colorado originally did not provide enough water for mining unless it was reserved in large chunks for individuals.  As a result, a “Law of the River” set of water-use rights has grown up in place of the usual “best fair use”:  the older your claim to a certain amount of the water, the more other users of scarce water you pre-empt. 

·         An elaborate system of aqueducts and reservoirs that feed water to cities from Los Angeles to Denver.

·         Rural economies entirely dependent on tourism from carbon-guzzling RVs and jetskis used on man-made lakes.

·         Agriculture that is better in the desert than in fertile areas – because the weather is more predictably good.

·         A food-production system in which supermarket chains and the like now drive agriculture, to the point of demanding that individual farmers deliver produce of very specific types, weight ranges, and quality – or else.

·         A mandated cut in water use can lead to a real-world water use increase – because now users must draw more water in low-water-use periods to avoid the risk of running out of their “claimed amount” in a high-use period.

Owen’s take is that it is possible, if people on all sides of the water-scarcity issue (e.g., environmentalists and business) sit down and work things out, to “muddle through” and preserve this strange world by incremental adaptation in a world of increased water scarcity due to climate change – and that crude efforts at quick fixes risk the catastrophic breakdown of the entire system.  My reaction is quite different:  changing a carbon-based system like the Colorado River’s is going to take fundamental rethinking, because not only does the “sunk cost” infrastructure of aqueducts, reservoirs, and irrigation-fed agriculture, plus rural-industry and state-city politics, reinforce the status quo, but the legal system itself – the legal precedents flowing into real-world practices – metastasizes and elaborates the carbon excesses of the system. 
For this particular system, and probably in a lot of cases, I conjecture that the key actors in bringing about carbon reductions are the farmers and the “tourism” industries.  The farmers are key because they in fact use far more water for irrigation than the cities use, and therefore carbon-reduction/sustainability policies that impact them (such as reductions in pesticides, less meat production, or less nitrogen in fertilizers), on top of water restrictions, make their job that much harder.  It is hard to see how anything but money (correctly targeted supports and incentives), plus water-use strategies focused on farmers, can overcome both supermarket control over farmers and these constraints to achieve major carbon-use reductions.  
Meanwhile, the “tourism industries” are key because, like flying as discussed above, they represent an easier target than cities for major reductions in energy use and carbon emissions.  On the other hand, these rural economies are much more fragile, being dependent on low-cost transport/homes in the RV case, and on feeding the carbon-related whims of the rich and semi-rich few in the jetski case.  In the RV case, as in the farmer case, money for less fossil-fuel-consuming RVs and recreation methods will probably avoid major economic catastrophe.
However, I repeat, what is likely to happen if this sort of rethinking does not permeate throughout infrastructure, politics, and the law, is the very major catastrophe that was supposed to be avoided by incrementalism, only in the medium term rather than in the short term, and therefore with greater negative effects.  The tourism industries will be inevitable victims of faster-than-expected, greater-than-expected water shortages and weather destruction.  The farmers will be victims of greater-than-expected, faster-than-expected water evaporation from heat and weather destruction.  The cities will be the victims of resulting higher food prices and shortages.  
What Owen’s book does is highlight just how tough some of the resistance “built into the system” to carbon-emissions reductions is.  What it does not do is show that therefore incrementalism is preferable.  On the contrary.

Wednesday, July 25, 2018

Reading New Thoughts: Haskel and Westlake’s Capitalism Without Capital, and the Distorting Rise of Intangible Assets


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and anything in it should not be relied on.
In my view, Haskel/Westlake’s “Capitalism Without Capital” is not so much an argument that the increasing importance of “intangible assets” constitutes a new and different “intangible economy”, as strong evidence that the ever-increasing impact of software means that a fundamental idea of economics – that everything can be modeled as a mass of single-product manufacturers and industries – is farther and farther from the real world.  As a result, I would argue, measures of the economy and our well-being based on those assumptions are increasingly distorted.  And we need to do more than tweak our accounting to reflect this “brave new world”.  
First, my own brief “summary” of what Haskel/Westlake say.  They start by asserting that present-day accounting does not count intangible company investments:  software during development, innovation property such as patents and R&D, and “economic competencies” such as training, branding, and business-process research.  In the case of software development, for example, instead of inventory that is capitalized and whose value is carried at cost until sold, we typically have expenses but no capitalized value right up until the software is released and sold.  A software company, therefore, with its continual development cycle, appears to have zero return on investment on a lot of its product portfolio.
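A toy illustration of that distortion (my own hypothetical numbers, not Haskel/Westlake’s), in Python:

# Hypothetical software firm: $10M spent this year developing unreleased software,
# $1M operating income from already-shipped products, $5M of conventional assets.
dev_spend = 10_000_000
operating_income = 1_000_000
other_assets = 5_000_000

# Current practice: development is expensed; no asset appears on the books.
expensed_return = (operating_income - dev_spend) / other_assets
print(expensed_return)       # -1.8: a deeply negative apparent return

# If work-in-progress software were capitalized like inventory:
capitalized_return = operating_income / (other_assets + dev_spend)
print(capitalized_return)    # ~0.067: a modest but positive return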
Haskel/Westlake go on to argue that a lot of newer companies are more and more like software companies, in that they predominantly depend on these “intangible assets.”  The new breed of company, they say, has key new features:

1.       “sunk costs” – that is, you can’t resell development, research, or branding to get at its monetary value to you.

2.       “spillovers” – it is exceptionally easy for others to use your development-process insights and research.

3.       “scalability” – it requires relatively little effort and cost to scale usage of these intangible assets from a thousand to a million to a billion end users.

4.       “synergies” – research in various areas, software infrastructure, and business-process skills complement each other, so that the whole is more than the sum of the value-added of the parts.

5.       “uncertainty” – compared to, say, a steel manufacturing firm, the software company has far more potential upside from its investments, and often far more potential downside.

6.       “contestedness” – such a company faces much greater competition for control of its assets, particularly since they are so easy to use by others.

Finally, Haskel/Westlake say that, given their assumption that “intangible companies” make up a significant and growing part of the global economy, they already have significant impacts on that economy in particular areas:

·         “Secular stagnation” over the last decade is partially ascribed to the increasing undervaluing of these companies’ “intangible assets”.

·         Intangible companies increase income inequality, because they function best when their specialized managers can communicate face to face – i.e., in cities.

·         Intangible companies are under-funded, because banks are not well suited to investing where there is no physical capital to repossess and resell.  Haskel/Westlake suggest that greater use of equity rather than loans is required, and that it might come from institutional investors and from funding collaborating universities.

·         Avoiding too much failure will require new business practices as well as new government encouragement – e.g., better support for in-business control of key intangible assets (clear intangible-asset ownership rules), or support for the new methods of financing the “intangible companies.”

It’s the Software, Sirs


Let’s look at it a different way.  Today’s theories of economics grew out of a time (1750-1850) when large-scale manufacturing was on the rise, and their microeconomics reflects that, as does the fact that data on economic performance (e.g., income) comes from surveys of businesses, which are then “adjusted” to try to include non-business data (trade, and reconciling with personal income reports).  From 1750 to about 1960, manufacturing continued to increase as a percentage of overall economic activity and employment, at the expense of farming.  From 1960 or so, “services” (ranging from hospitals to concierges) began to carve into that dominance, but all those services, in terms of jobs, could still be cast in the mold of “corporation that is mostly workers producing for/dealing with customers, plus physical infrastructure/capital”. 
Now consider today’s typical large software-driven company.  Gone is the distinction between line and staff.  Manufacturing has shrunk dramatically as a share of economic activity, both within the corporation and overall.  Bricks and mortar are shrinking.  Jobs are much more things like developer (development is not manufacturing nor engineering, but applied math), marketer/brander, data scientist (in my mind, a kind of developer), help desk, and Internet presence support.  The increased popularity of “business agility” goes along with shorter careers at a particular company, outsourcing, and intra-business “services” that are primarily software (“platform as a service”, Salesforce.com).  Success is defined as control over an Internet/smartphone software-related bottleneck like goods-ordering (Amazon), advertising (Google), or “apps” (Apple). 
Now consider what people are buying from these software-driven firms.  I would argue that it differs in two fundamental ways from “manufacturing” and old-style “services”:

1.       What is bought is more abstract, and therefore applicable to a much wider range of products.  You don’t just buy a restaurant meal; you buy a restaurant-finding app.  You don’t just browse for books; you browse across media.  What you are selling is not a widget or a sweater, as in economics textbooks, but information or an app.

2.       You can divide up the purposes of buying into (a) Do (to get something done); (b)  Socialize/Communicate, as in Facebook and Pinterest; and (c) Learn/Create, as in video gaming and blog monetization.  The last two of these are really unlike the old manufacturing/services model, and their share of business output has already increased to a significant level.  Of course, most if not all software-driven companies derive their revenues from a mix of all three.

The result of all this, in economic terms, is complexity, superficially masked by the increased efficiency.  Complexity for the customer, who narrows his or her gaze to fewer companies after a while.  Complexity for the business, whose path to success is no longer as clear as cutting costs amid stable strategies – so the company typically goes on cutting costs and hiring and firing faster or outsourcing more in default of an alternative.  Complexity for the regulator, whose ability to predict and control what is going on is undercut by such fast-arriving devices as “shadow banking”, information monopolies, and patent trolling.
In other words, what I am arguing for is a possible rethinking of macroeconomics in terms of different microeconomic foundations – not necessarily those of behavioral economics, but rather foundations that start from the viewpoint “what is really going on inside the typical software-driven corporation.”  We can then ask how such a changed internal world reflects back on the overall economy, how macroeconomic data can capture what is going on, and how one can use the new data to regulate and anticipate future problems better.
John le Carré once said that the key problem post-Cold-War was how to handle the “wrecking infant” – he was referencing an amoral businessman creating Third-World havoc, although you can translate that to the situation in the US right now.  The software-driven business, in terms of awareness of what to do, is a bit of a wrecking infant.  If it isn’t helped to grow up, the fault will lie not solely in our inability to anticipate its side-effect, the distorting rise of intangible assets, but also in our failure to deal adequately with its abstraction, its new forms of organization and revenue, and its complexity.

Wednesday, July 4, 2018

There’s Something Wrong Here: July 4, 2018


For the first time in polling history (since 2000), less than half of Americans say they are “extremely proud” of being American.  Before Donald J. Trump became the Republican candidate for the Presidency, that figure (“extremely proud”) had never gone below 54%; it is now 47%.  The figures show equivalent declines among Democrats and “independents”, but a slight uptick among Republicans.
The conclusion is that President Trump has made a net 7% of the country more ashamed to be American.  I cannot find another President of whom historians say that he made a large proportion of Americans more ashamed of their nationality. 

Monday, June 11, 2018

Reading New Thoughts: Hessen’s Many Lives of Carbon and the Two Irresistible Forces of Climate Change


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and anything in it should not be relied on.
Dag Hessen’s “The Many Lives of Carbon” is the best book I have been able to find on the chemical details of the carbon cycle and CO2-driven climate change (despite some odd idioms that make it difficult to understand him at times).  In particular, it goes deeply into the “positive feedbacks” that exacerbate departures from atmospheric-CO2 equilibrium and the “negative feedbacks” that operate to revert to equilibrium.  In the long run, negative feedbacks win; but the long run can be millions of years.
More precisely, even if humans weren’t involved, there would be medium-term and long-term “global warmings” and coolings.  The medium-term warming/cooling is usually thought to result from the Milankovitch cycles, in which the Earth orbiting unusually far from the Sun during Northern-Hemisphere summer (the “rubber-banding” of its elliptical orbit), a more severe tilt of the Earth’s axis as it oscillates periodically, and “precession” (the wobbling of the Earth back and forth on its axis) combine to yield unusually low Northern-Hemisphere summer heat from the Sun, so that winter snow fails to melt.  That kickstarts glaciation, which acts as a positive feedback for global cooling, along with atmospheric CO2 loss.  This cooling operates slowly over most of the next 100,000 years, descending into an Ice Age; but then the opposite effect (maximum heat from the Sun) kicks in and brings a sharp rise in temperatures back to the original point. 
Long-term global warming is much rarer – there are instances 55 million years ago and 250 million years ago, and indeed some indications that most of the 5 largest mass extinctions on Earth were associated with this type of global warming.  The apparent cause is massive volcanism, especially underwater volcanism (the clouds from on-land volcanism actually reduce temperatures significantly over several years, but then temperatures revert to what they were before the eruption).  It also seems clear that the main if not only cause of the warming is CO2 and methane (CH4) released into the atmosphere by these eruptions – carbon infusions so persistent that the level of atmospheric CO2 did not revert to equilibrium for 50 million years or so after the last such warming.
For both medium-term and long-term global warming/cooling, “weathering” acts as a negative feedback – but only over hundreds of thousands of years.  Weathering involves water and wind wearing away at rock, exposing silicates that bind atmospheric carbon and are then carried to the ocean.  There, among other outcomes, they are taken up by creatures that die, forming ocean-floor limestone that “captures” the carbon and thus withdraws it from the cycle circulating carbon into and out of the atmosphere.
Human-caused global warming takes the slow negative feedback of animal/plant fossils being turned into coal, oil, and natural gas below the Earth’s surface and makes it into an unprecedentedly fast direct cause of global warming, mostly via our burning the result for fuel (hence “fossil fuels”).   The net direct effect of CO2 doubling in the atmosphere is 2-2.8 degrees Centigrade of land-temperature warming, but it is accompanied by positive feedbacks (including increased cloud cover, increased atmospheric methane, and “black carbon”/soot decreases in albedo [reflectivity of the sun’s heat]) that, more slowly, may take the net global warming associated with CO2 doubling up to 4 degrees C.  Some of this is offset for a while by the ocean, which absorbs much of the added heat only slowly, and whose “sink” takes some of the added carbon out of the atmosphere; but at some point soon the ocean will be in balance with the atmosphere, and much of what we emit in carbon pollution will effectively stay aloft for thousands of years. 
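For a rough sense of the magnitudes involved, here is a minimal Python sketch using a standard simplified radiative-forcing formula (Myhre et al. 1998) and a commonly quoted mid-range sensitivity parameter – my illustration, not Hessen’s numbers:

import math

def warming_from_co2(ratio, sensitivity=0.8):
    # forcing = 5.35 * ln(C/C0) W/m^2 (Myhre et al. 1998); a sensitivity of
    # 0.8 C per W/m^2 is a commonly quoted mid-range value including fast feedbacks
    forcing = 5.35 * math.log(ratio)
    return sensitivity * forcing

print(warming_from_co2(2.0))  # doubling: ~3.7 W/m^2 of forcing -> ~3.0 C, inside the 2-4 C range above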
My point in running through the implications of Hessen’s analysis is that there is a reason for scientists to fear for all of our lives:  the global warming that, in the extreme, severely disrupts agriculture and food webs, and makes it unsafe for most of us to work outside for more than a short period of time during most of the year, except in the high mountains, is a CO2-driven “irresistible force”.  There is therefore a good case to be made that “mitigation” – reducing emissions of carbon (and methane and black carbon and possibly nitrous oxide) – is by far the best way to avoid the kind of global warming that literally threatens humanity’s survival.  And this is a point that I have argued vociferously in past blog posts.

The Other Irresistible Force:  The Business/Legal Bureaucracy


And yet, based on my reading, I would argue that there is an apparently equally irresistible force operating on its own momentum in today’s world:  what I am calling the Business/Legal Bureaucracy.  Here, I am using the word “bureaucracy” in a particular sense:  that of an organization operating on its own momentum.  In the case of businesses, that means a large-business bureaucracy operating always under a plan to make more profit this year than last.  In the case of the law, that means a court system and mindset that always builds on precedents and supports property rights.  
To see what I am talking about, consider David Owen’s “Where the Water Goes”, about what happens to all the water in the Colorado River watershed.  Today, most of that water is shipped east to fuel the farms and businesses of Colorado and its vicinity, or west, to meet the water needs of Las Vegas and Los Angeles and the like.  The rest is allocated to farmers or other property owners along the way under a peculiar legal doctrine called “prior appropriation” – instead of the more typical equal sharing among bordering property owners, it limits each portion to a single “prior claimant”, since in the old mining days there wasn’t enough water in each portion for more than one miner to be able to “wash” the gold effectively.  Each new demand for water therefore is fitted legally within this framework, ensuring that the agricultural business bureaucracy will push for more water from the same watershed, while the legal bureaucracy will accommodate this within the array of prior claimants.
To see what creates such an impression of an irresistible force about this, consider the need to cut back on water use as climate change inevitably decreases the snowpack feeding the Colorado.  As Owen points out, there is no clear way to do this.  Business-wise, no one wants to be the one to sacrifice water.  Legally, the simple solution of limiting per-owner water use can actually result in more water use from each owner, as seasonal and annual variations in water needs mean that many owners will now need to draw down and store water in off-seasons “just in case”.   The result is a system that not only resists “sustainability” but in fact can also be less profit-producing in the long run – the ecosystems shafted by this approach, such as the now-dry Mexican terminus of the Colorado, may well be those that might have been most arable in the global-warming future.  And the examples of this multiply quickly:  the wildfire book I reviewed in an earlier blog post noted the extreme difficulties in fighting wildfires with their exacerbating effect on global warming, difficulties caused by the encroachment of mining and real-estate development on northern forests, driven by Legal/Business Bureaucracy.
The primary focus of the Legal/Business Bureaucracy with respect to climate change and global warming, therefore, is after-the-fact, incremental adaptation.  It is for that reason, as well as the dangerous trajectory we are now on, that I view calling the most disastrous future climate-change scenarios “business as usual” as entirely appropriate.

Force, Meet Force


It also seems to me that reactions to the most dire predictions of climate change fall into two camps (sometimes in the same person!): 

1.       We need drastic change or few of us will survive, because no society or business system can possibly resist the upcoming “business as usual” changes:  extreme weather, loss of water for agriculture, loss/movement of arable land, large increases in the area ripe for debilitating/killing tropical diseases, extreme heat, loss of ocean food, loss of food-supporting ecosystems, loss of seacoast living areas, possible toxic emissions from the ocean. 

2.       Our business-economy-associated present system will somehow automagically “muddle through”, as it always seems to have done.  After all, there is, it seems, plenty of “slack” in the water efficiency of agriculture, plenty of ideas about how to adapt businesses and governments to encourage better adaptation and mitigation “at the margin” (e.g., solar now dominates the new-energy market in the United States, although it does not seem to have made much of a dent in the existing uses of fossil fuels), and plenty of new business-associated technologies to apply to problems (e.g., business sustainability metrics). 

An interesting example of how both beliefs can apparently coexist in the same person is Steven Pinker’s “Enlightenment Now”, an argument that the “reason reinforced by science” approach to the world of the late 18th century, known as the Enlightenment, is primarily responsible for a dramatic, continuing improvement in the overall human situation, and should be chosen as the primary impetus for further improvements.  My overall comment on this book is that Pinker has impressed me with the breadth of his reading and the general fairness of his assessments of that reading.  However, Pinker appears to have one frustrating flaw:  a belief that liberals and other proponents of change in such areas as equality, environmentalism, and climate change – and likewise their proponents within the government – are so ideology-driven that they are actually harmful to Enlightenment-type improvements.  Any reasonable reading of Jeffrey Sachs’ “The Age of Sustainable Development”, which I hope to discuss in a later post, puts the lie to that one. 
In any case, Pinker has an extensive section on climate change, in which he both notes the “existential threat” (i.e., threat to human existence) posed by human-caused climate change, and asserts his belief that it can be overcome by a combination of Business/Legal-Complex adaptation to the findings of Enlightenment scientists and geoengineering.  One particular assertion is that if regulations could be altered, new technologies in nuclear power can meet all our energy needs in short order, with little risk to people and little need of waste storage.  I should note that James Hansen appears to agree with him (although Hansen is far more pessimistic that regulation alteration will happen or that nuclear businesses will choose to change their models) and Joe Romm very definitely does not.  
One also often sees the second camp among experts on Business/Legal-Complex matters such as the allocation of water in California in reaction to climate-change-driven droughts.  These reactions assume that what is an exponential global-warming process will have only linear effects on water availability, note that under these assumptions there is plenty of room for less water use by existing agriculture, and conclude that everyone should “stop panicking.” 
So what do I think will happen when force meets force?

Flexibility Is the Enemy of Agility


One of the things that I noticed (and noted in my blog) in my past profession is that product and organizational flexibility – the ability to easily repurpose for other needs and uses – is a good thing in the short run, but often a bad thing in the long run.  How can this be?  Because the long run will often require fundamental changes to products and/or organizations, and the more the existing product or system is “patched” to meet new needs, the greater the cultural, organizational, and experiential investment in the present system – not to mention the greater the gap between that system and what’s going on elsewhere, a gap that will eventually force a fundamental change.
There is one emerging business strategy that reduces the need for such major, business-threatening transitions:  business agility.  Here the idea is to build both products and organizations with the capacity for much more radical change, plus better antennae to the outside, to stay as close as possible to changes in consumer needs.  But flexibility is in fact the enemy of agility:  patches to the existing product or organization add to its “hard-coded” portions, making fundamental changes less likely and the huge costs of an entire do-over more inevitable.
As you can guess, I think that today’s global economy is extraordinarily flexible, and that is what camp 2 counts on to save the day.  But this very flexibility is the enemy of the agility that I believe we will find ourselves needing, if we are to avoid this kind of collapse of the economy and our food production.
But it’s a global economy – that’s part of its global Business/Legal-Bureaucracy flexibility.  So local or even regional collapses aren’t enough to do it.  Rather, if we fail to mitigate strongly (i.e., in an agile fashion) and create a sustainable global economy over the next 20 years, then some time over the 20 years after that, the average costs of increasing stresses from climate change will put the global economy in permanent negative territory; and, unless fundamental change happens immediately after that, the virtuous circle of today’s economy turns into a vicious, accelerating circle of economic collapse.  The smaller our economies become, the less we wind up spending on combating the ever-increasing effects of climate change.  At the end, we are left with less than half (and maybe one-tenth) of our food sources and arable land, much of it in new extreme-northern areas with soil of lower quality, with food being produced under much more brutal weather conditions.  Or, we could by then have collapsed governmentally to such a point that we can’t even manage that.  It’s an existential threat.

Initial Thoughts On What to Do


Succinctly, I would put my thoughts under three headings:

1.       Governmental.  Governments must drive mitigation well beyond what they have done up to now in their Paris-agreement commitments.  We, personally, should place climate change at the top of our priority lists (where possible), reward commitments and implementation politically, and demand much more.

2.       Cultural.  Businesses, other organizations, and the media should be held accountable for promoting, or failing to promote, mitigation-targeted governmental, organizational, and individual efforts.  In other words, we need to create a new lifestyle.  There’s an interesting take on what a lifestyle like that would look like in Peter Kalmus’ “Being the Change”, which I hope to write about soon.

3.       Personal.  Simply as a matter of not being hypocrites, we should start to change our carbon “consumption” for the better.  Again, Kalmus’ book offers some suggestions.

Or, as Yoda would put it, “Do or do not.  There is no try.”  Because both Forces will be against you.

Thursday, May 17, 2018

Reading New Thoughts: Dolnick’s Seeds Of Life and the Science of Our Disgust


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and anything in it should not be relied on.
Edward Dolnick’s “The Seeds of Life” is a fascinating look at how scientists figured out exactly how human reproduction works.  In particular, it shows that until 1875 we still were not sure what fertilized what, and that until the 1950s (with the discovery of DNA’s structure) we did not know how that fertilization led to a fully formed human being.  Here, however, I’d like to consider what Dolnick says about beliefs from before scientists began/finished their quest, and those beliefs’ influence on our own thinking.
In a very brief summary, Dolnick says that most cultures guessed that the man’s seminal fluid was “seed”, while the woman was a “field” to be sown (obviously, most if not all of those recording these cultural beliefs were male).  Thus, for example, the Old Testament talks about “seed” several times.  An interesting variant was the idea that shortly after fertilization, seminal fluid and menstrual blood combined to “curdle” the embryo, like milk curdled into cheese – e.g., in the Talmud and in Hindu writings, seminal fluid, being milky, supplied the white parts of the next generation, like bone, while menstrual blood, being red, supplied the red parts, like blood.  In any case, the seed/field belief in some ways casts the woman as inferior – thus, for example, infertility is always the woman’s fault, since the seed is fine but the field may be “barren” – another term from the Bible.
In Western culture, early Christianity superimposed its own additional concerns, which affected our beliefs not just about how procreation worked and the relative inferiority or superiority of the sexes, but also our notion of “perfection”, both morally and with regard to sex.  The Church and some of its Protestant successors viewed sex as a serious sin when “wanting a purpose” [i.e., when not for the purpose of producing babies], including sex within marriage.  Moreover, the Church, with its heritage in Platonism, viewed certain things as less disgusting than others, and I would suggest that many implicit assumptions of our culture derive from these.  The circle is perfect, hence female breasts are beautiful; smooth surfaces and straight lines are beautiful, while body hair, the asymmetrical male member, and the crooked labia are not.  Menstrual blood is messy, “unclean,” and disgusting, as is sticky, messy seminal fluid.

Effects on Sexism


It seems to me that much more can be said about differing cultural attitudes towards men and women based on the seed/field belief.  For one thing, the seed/field metaphor applies in all agricultural societies – and until the early 1900s, most societies were almost completely agricultural rather than hunter/pastoral or industrial.  Thus, this way of viewing women as inferior was dangerously plausible not only to men, but also to women.  In fact, Dolnick records examples in Turkey and Egypt of modern-day women believing in the seed/field theory and therefore women’s inferiority in a key function of life.
Another implication of the seed/field theory is that the “nature” of the resulting children is primarily determined by the male, just as different types of seed yield different plants.  While this is somewhat counteracted by the obvious fact that, physically, children tend to favor each parent more or less equally, there is some sense in literature such as the Bible that some seed is “special” – Abraham’s seed will pass down the generations and single out his Jewish descendants for special attention from God.  And that, in turn, can lead naturally to the idea that mingling other men’s seed with yours can interfere with that specialness; hence wives are to be kept away from other men – and that kind of control over the wife leads inevitably to the idea of women as at least partially property.  And finally, the idea of the woman as passive receptacle of the seed can lead to men viewing women actively desiring sex, or a woman’s orgasm, as indications of mentally-unbalanced “wantonness”, further reinforcing the (male) impression of women’s inferiority.  
I find it not entirely coincidental that the first major movements toward feminism occurred soon after Darwin’s take on evolution (implicitly even-handed between the sexes) and the notion of the sperm and the egg were established as scientifically superior alternatives to Biblical and cultural beliefs.  And I think it is important to realize that, with genetic inheritance via DNA being still in the process of examination and major change, the role of culture rather than “inherence” in male and female is still in the ascendant – as one geneticist put it, we now know that sex is a spectrum, not either-or.  So the ideas of both similarity and “equality” between the sexes are now very much science-based.
But there’s one other possible effect of the seed/field metaphor that I’d like to consider.  Is it possible that the ancients decided that there was only so much seed that a man had, for a lifetime?  And would this explain to some extent the abhorrence of both male masturbation and homosexuality that we see in cultures worldwide?  Think of Onan in the Bible, and his sin of wasting his seed on the barren ground …

Rethinking Disgust


“Girl, Wash Your Face” (by Rachel Hollis, one of the latest examples of the new breed that live their lives in public) is, I think, one of the best self-help books I have seen, although it is aimed very clearly not at men – because many of the things she suggests are perfectly doable and sensible, unlike the many self-help books in which in a competitive world only a few can achieve financial success.  What I also find fascinating about it is the way in which “norms” of sexual roles have changed since the 1950s.  Not only is the author running a successful women’s-lifestyle website with herself as overworking boss, but her marriage is what she views as her vision of Christianity, complete with a positive view of sex primarily on her terms.
What I find particularly interesting is how she faced the age-old question of negotiating sex within marriage.  What she decided was that she was going to learn how to want to have sex as a norm, rather than being passively “don’t care” or disgusted by it.  I view this as an entirely positive approach – it means that both sides in a marriage are on the same page (more or less) with the reassurance that “not now” doesn’t mean “not for a long time” or “no, I don’t like you”.  But the main significance of this is that it means a specific way of overcoming a culture of disgust, about sex among other things.
I believe that the way it works is captured best by Alexander Pope’s poetry:  “Vice is a monster of so frightful mien/As to be hated needs but to be seen/Yet seen too oft, familiar with her face/We first endure, then pity, then embrace.”  The point is that it is often not vice that causes the disgust, but rather disgust that causes us to call it vice – as I have suggested above.  And the cure for that disgust is to see it oft and become “familiar” with it, knowing that we will eventually move from “enduring” it, to having it be normal, to “embracing” it. 
Remember, afaik, we can’t help feeling that everything about us is normal or nice, including excrement odor, body odor, messiness, maybe fat, maybe blotches or speech problems – and yet, culturally and perhaps viscerally, the same things about other people disgust us (or the culture tells us they should disgust us).  And therefore, logically, we should be disgusted about ourselves as well – as we often are.  Moreover, in the case of sex, the disgusting can also seem forbidden and hence exciting.  The result, for both sexes, can be a tangled knot of life-long neuroses.  
The path of moving beyond disgust, therefore, can lie simply with learning to view the disgusting as, in a sense, ours:  the partner’s body odor as our body odor, their fat as our love handles, etc.  But it is “ours” not in the sense of possession, but in the sense of being an integral part of an overall person who is now a vital part of your world and, yes, beautiful to you in an every-day sort of way, just as you can’t help thinking of yourself as beautiful.  This doesn’t happen overnight; but, just as the ability to ignore itches during Zen meditation inevitably happens, so this will happen in its own good time, while you’re not paying attention.
The role of science in general, not just in the case of how babies are made or sex, has typically been to undercut the rationale for our disgust, for our prejudices and for many of our notions of what vice is.  And thus, rethinking our beliefs in the light of science allows us to feel comfort that when we overcome our disgust about something, it is in a good cause; it is not succumbing to a vice.   And maybe, just maybe, we can start to overcome the greatest prejudice of all:  against crooked lines, imperfect circles, and asymmetry.  

Thursday, May 3, 2018

Perhaps Humans Are One Giant Kluge (Genetically Speaking)

Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and anything in it should not be relied on.
Over the past year, among other pastimes, I have read several books on the latest developments of genetics and the theory of evolution.  The more I read, the more I feel that my background in programming offers a fresh perspective on these developments – a somewhat different way of looking at evolution. 
Specifically, the way in which we appear to have evolved to become various flavors of the species Homo sapiens sapiens suggests to me that we ascribe far too much purpose to our evolution of various genes.  Much if not most of our genetic material appears to have come from a process similar to what, in The New Hacker’s Dictionary and in more generally used computer slang, is called a kluge.
Here’s a brief summary of the definition of kluge in The New Hacker’s Dictionary (Eric Raymond), still my gold standard for wonderful hacker jargon.  Kluge (pronounced kloodj) is “1.  A Rube Goldberg device in hardware/software, … 3.  Something that works for the wrong reason, … 5. A feature that is implemented in a ‘rude’ manner.”  I would add that a kluge is just good enough to handle a particular case, but may include side effects, unnecessary code, bugs in other cases, and/or huge inefficiencies. 

Our Genes as a Computer Program

(Note: most of the genetics assertions in this post are plundered from Adam Rutherford’s “A Brief History of Everyone Who Ever Lived”)
The genes within our genome can be thought of as a computer program in a very peculiar programming language.  The primitives of that language are the nucleotide bases abbreviated A, G, C, and T.  The statements of the language, effectively, are of the form IF (state in the cell surrounding this gene is x AND gene has value y) THEN (trigger sequence of chemical reactions z, which may not change the state within the cell but does change the state of the overall organism).  Two peculiarities:
1.       All these gene “statements” operate in parallel (the same state can trigger several genes).
2.       The program is more or less “firmware” – that is, it can be changed, but over short periods of time it isn’t. 
Obviously, given evolution, the human genome “program” has changed – quite a lot.  The mechanism for this is mutation:  changes in the “state” outside an instance of DNA that physically change an A, G, C, or T, delete or add genes, or change the order of the genes on one side of the chromosome or the other.  Some of these mutations occur within the lifetime of an individual, during the time when cells carry out their programmed imperatives to perform tasks and subdivide into new cells.  Thus, one type of cancer (we now know) is caused when mutation deletes some genes on one side of the DNA pairing, resulting in deletion of the statement (“once the cell has finished this task, do not subdivide the cell”).  It turns out that some individuals are much less susceptible to this cancer because they have longer chains of “spare genes” on that side of the DNA, so that it takes much longer for a steady, statistically-random stream of deletions to result in statement deletion.
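To make the “program” metaphor concrete, here is a toy Python sketch of genes as parallel IF-statements, including the “spare genes” protection just described – purely illustrative pseudocode, not real genomics:

import random

# Each "gene" is a (trigger, action) pair; every gene whose trigger matches the
# cell state fires, in parallel.  Three spare copies of the stop-dividing gene.
genome = [("task_done", "stop_dividing")] * 3 + [("nutrient", "divide")]

def fire(state, genome):
    return [action for trigger, action in genome if trigger in state]

def random_deletion(genome):
    # a statistically-random mutation deletes one gene
    del genome[random.randrange(len(genome))]

random_deletion(genome)
# "stop_dividing" still fires: one deletion cannot remove all the spare copies
print(fire({"task_done"}, genome))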

Evolution as an Endless Series Of Kluges

Evolution, in our computer-program model, is new (i.e., not already present somewhere in the population of the species) mutations.  The accepted theory of the constraints that determine what new mutations prosper over the long run is natural selection. 
Natural selection has been approximated as “survival of the fittest” – more precisely, survival of genes and gene variants because they are the best adapted to their physical environment, including competitors, predators, mates, and climate, and therefore are most likely to survive long enough to reproduce and out-compete alternative mates.  The sequencing of the human genome (and that of other species) has given us a much better picture of evolution in action as well as human evolution in the recent past.  Applied to the definition of natural selection, it suggests somewhat different conclusions:
·         The typical successful mutation is not the best for the environment, but simply one that is “good enough”.  An ability to distinguish ultraviolet and infrared light, as the mantis shrimp does, is clearly best suited to most environments.  Most other species, including humans, wound up with an inability to see outside the “visible spectrum.”  Likewise, light entering the eye is interpreted at the back of the eye, whereas the front of the eye would be a better idea.
·         Just because a mutation is harmful in a new environment, that does not mean that it will go away entirely.  The gene variant causing sickle-cell anemia is present in 30-50% of the population in much of Africa, the Middle East, the Philippines, and Greece.   Its apparent effect is to allow those who would otherwise die early in life from malaria to survive through most of the period when reproduction can happen.  However, indications are that the mutation is not disappearing fast, if at all, in offspring living in areas not affected by malaria.  In other words, the relative lack of reproductive success for those afflicted by sickle-cell anemia in the new environment is not enough to eradicate it from the population (see the toy selection sketch after this list).  In the new environment, the sickle-cell anemia variant is a “bug”; but it’s not enough of a bug for natural selection to operate.
·         The appendix serves no useful purpose in our present environment – it’s just unnecessary code, with appendicitis a potential “side effect”.  There is no indication that the appendix is going away. Nor, despite recent sensationalizing, is red hair, which may be a potential side effect of genes in northern climes having less need for eumelanin to protect against the damaging effects of direct sunlight.
·         Most human traits and diseases, we are finding, are not determined by one mutation in one gene, but rather are the “side effects” of many genes.  For example, to the extent that autism is heritable (and remembering that autism is a spectrum of symptoms and therefore may be multiple diseases), no one gene has been shown to explain more than a fraction of the heritable part.
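Here is that toy Python sketch of why such a “bug” lingers (my illustration, not Rutherford’s):  textbook single-locus selection against a recessive homozygote slows to a crawl once the variant is rare, because it then hides mostly in unaffected carriers.

def next_freq(q, s):
    # one generation of selection: genotype fitnesses AA = 1, Aa = 1, aa = 1 - s
    return q * (1 - s * q) / (1 - s * q * q)

q = 0.2                      # starting frequency of the harmful recessive variant
for generation in range(100):
    q = next_freq(q, 0.3)    # s = 0.3: a stiff fitness cost, for illustration
print(round(q, 3))           # ~0.03: still around 3% after 100 generations (~2,500 years)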
In other words, evolution seems more like a series of kluges:
·         It has resulted in a highly complex set of code, in which it is very hard to determine which gene-variant “statement” is responsible for what;
·         Compared to a set of genes designed from the start to result in the same traits, it is a “rude” implementation (inefficient and with lots of side-effects), much like a program consisting mostly of patches;
·          It appears to involve a lot of bugs.  For example, one estimate is that there have been at least 160,000 new human mutations in the last 5,000 years, and about 18% of these appear to be increases in inefficiency or potentially harmful – but not, it seems, harmful enough to trigger natural selection.

Variations in Human Intelligence and the Genetic Kluge

The notion of evolution as a series of kluges resulting in one giant kluge – us – has, I believe, an interesting application to debates about the effect of genes vs. culture (nature vs. nurture) on “intelligence” as measured imperfectly by IQ tests. 
Tests on nature vs. nurture have not yet shown the percentage of each involved in intelligence variation (Rutherford says only that variations from gene variance “are significant”).  A 2013 survey of experts at a conference shows that the majority think 0-40% of intelligence variation is caused by gene variation, the rest by “culture”.  However, the question that has caused debate is how much of that gene variance is variance between individuals in the overall human population and how much is variance between groups – typically, so-called “races” – each with its own different “average intelligence.” 
I am not going to touch on the sordid history of race profiling at this point, although I am convinced it is what makes proponents of the “race” theory blind to recent evidence to the contrary.  Rather, I’m going to conservatively follow up the chain of logic that suggests group gene variance is more important than individual variance. 
We have apparently done some testing of gene variance between groups.  The second-largest variance is apparently between Africans (not African-Americans) and everyone else – but the striking feature is how very little difference (compared to overall gene variation in humans) that distinction involves.  The same process has been carried out to isolate even smaller amounts of variance, and East Asians and Europeans/Middle East show up in the top 6, but Jews, Hispanics, and Native Americans don’t show up in the top 7. 
What this means is that, unless intelligence is affected by one or only a few genes falling in those “group variance” categories, most of the genetic variance is overwhelmingly likely to be individual.  And, I would argue, there’s a very strong case that intelligence is affected by lots of genes, as a side-effect of kluges, just like autism.
First, over most of human history until the last 250 years, the great bulk of African or non-African humans have been hunters or farmers, with no reading, writing, or test-taking skills, and with natural selection for particular environments apparently focused on the physical (e.g., lactose tolerance arising with European use of cows) rather than the intelligence-related (e.g., larger brains/new brain capabilities).  That is, there is little evidence for natural selection targeted at intelligence, but lots for natural selection targeted at other things. 
Second, as I’ve noted, it appears that in general human traits and diseases usually involve large numbers of genes.  Why should “intelligence” (which, at a first approximation, applies mostly to humans) be different?  Statistically, it shouldn’t.  And as of yet, no one has even been able to find one gene significantly connected to intelligence – which again suggests lots of small-effect genes.
So let’s imagine a particular case.  100 genes affect intelligence variation, in equal amounts (1% plus or minus).  Group A and Group B share all but 10 genes.  To cook the books further, Group A has 5 unique plus genes, and Group B 5 unique minus genes (statistically, they both should have equal amounts plus and minus on average).  In addition, gene variance as a whole is 50% of overall variation.  Then 10/95 of the genetic variation in intelligence (about 10.5%) is explained by whether an individual is in Group A or B. This translates to 5.2% of the overall variation being due to the genetics of the group, 44.8% being due to individual genetic variation, and 50% being due to nurture.
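A quick Python check of that toy arithmetic (all numbers are from the made-up scenario above, not real data):

differing = 10          # genes not shared between Group A and Group B (out of 100 total)
per_group = 95          # genes any one individual carries (90 shared + 5 unique)
genetic_share = 0.5     # assumed fraction of overall variation that is genetic at all

group_fraction = differing / per_group                 # ~0.105 of the genetic variation
print(round(group_fraction * genetic_share, 3))        # 0.053: group genetics, share of overall variation
print(round(genetic_share * (1 - group_fraction), 3))  # 0.447: individual genetic variation
print(1 - genetic_share)                               # 0.5: nurture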
Still, someone might argue, those nature vs. nurture survey participants have got it wrong:  gene variation explains all or almost all of intelligence variation.  Well, even then, individual genetic variation would have about 8.5 times the effect that belonging to a group does (89.5% vs. 10.5% of the genetic variation, in my scenario).  Moreover, under the kluge model, the wider the variation between Race A and Race B, between, say, Jewish-American and African-American, THE MORE LIKELY IT IS THAT NURTURE PLAYS A LARGE ROLE.  First of all, “races” do not correspond at all well to the genetic groups I described earlier, and so are more likely to have identical intelligence on average than the groups I cited.  Second, because group variation is so much smaller than individual variation, group genetics is capable of much less variation than nurture (0-10.5% of overall variation, vs. 0-100% for nurture).  And I haven’t even bothered to discuss the Flynn Effect, which is increasing measured intelligence over time, equally between groups, far more rapidly than natural selection can operate – a clear indication that nurture is involved.
Variation in human intelligence, I say, isn’t survival of the smartest.  It’s a randomly distributed side effect of lots of genetic kluges, plus the luck of the draw in the culture and family you grow up in.