Monday, June 11, 2018

Reading New Thoughts: Hessen’s Many Lives of Carbon and the Two Irresistible Forces of Climate Change


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and nothing in it should be relied on.
Dag Hessen’s “The Many Lives of Carbon” is the best book I have been able to find on the chemical details of the carbon cycle and CO2-driven climate change (despite some odd idioms that make it difficult to understand him at times).  In particular, it goes deeply into “positive feedbacks” that exacerbate departures from atmospheric-CO2 equilibrium and “negative feedbacks” that operate to restore that equilibrium.  In the long run, negative feedbacks win; but the long run can be millions of years.
More precisely, if humans weren’t involved, there would still be medium-term and long-term “global warmings” and coolings.  The medium-term warming/cooling is usually thought to result from the Milankovitch Cycle, in which the Earth’s orbit carrying it unusually far from the Sun during winter (“rubber-banding” of its elliptical orbit), a more severe tilt of the Earth’s axis as it oscillates periodically, and “precession” (wobbling back and forth on its axis) combine to yield unusually low Northern-Hemisphere winter heat from the Sun, which kickstarts glaciation, which in turn acts as a positive feedback for global cooling along with atmospheric CO2 loss.  This cooling operates slowly over most of the next 100,000 years, descending into an Ice Age, but then the opposite effect (maximum heat from the Sun) kicks in and brings temperatures sharply back up to their original point. 
Long-term global warming is much rarer – there are instances 55 million years ago and 250 million years ago, and indeed some indications that most of the previous 5 largest mass extinctions on Earth were associated with this type of global warming.  The apparent cause is massive volcanism, especially underwater volcanism (the clouds from on-land volcanism actually reduce temperatures significantly over several years, but then the temperatures revert to what they were before the eruption).  It also seems clear that the main if not only cause of the warming is CO2 and methane (CH4) released into the atmosphere by these eruptions – carbon infusions so persistent that the level of atmospheric CO2 did not revert to equilibrium for 50 million years or so after the last such warming.
For both medium-term and long-term global warming/cooling, “weathering” acts as a negative feedback – but only over hundreds of thousands of years.  Weathering involves water and wind wearing away at rock, exposing silicate minerals that bind carbon, the products of which are then carried to the ocean.  There, among other outcomes, they are taken up by creatures that die, forming ocean-floor limestone that “captures” the carbon and thus withdraws it from the carbon cycle that circulates carbon into and out of the atmosphere.
Human-caused global warming takes the slow negative feedback of animal/plant fossils being turned into coal, oil, and natural gas below the Earth’s surface and converts it into an unprecedentedly fast direct cause of global warming, mostly through our burning those deposits for fuel (hence “fossil fuels”).   The immediate effect of CO2 doubling in the atmosphere is 2-2.8 degrees Centigrade of land-temperature warming, but it is also accompanied by positive feedbacks (including increased cloud cover, increased atmospheric methane, and decreases in albedo [reflectivity of the Sun’s heat] from “black carbon”/soot) that, acting more slowly, may take the net global warming associated with CO2 doubling up to 4 degrees C.  Some of this can be offset by the negative feedback of the ocean’s slower absorption of the increased heat, as the ocean “sink” can take more carbon out of the atmosphere, but at some point soon the ocean will be in balance with the atmosphere and much of the carbon pollution we emit will effectively stay aloft for thousands of years. 
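As a sanity check on those numbers, here is a standard back-of-envelope calculation (my own illustration, not Hessen’s): the radiative forcing from a change in CO2 is commonly approximated as F = 5.35·ln(C/C0) watts per square meter, and the equilibrium warming as that forcing times a climate-sensitivity parameter.  The sensitivity values below are illustrative assumptions, chosen to span the no-feedback case through strong net positive feedbacks; they roughly bracket the range cited above.

```python
# Back-of-envelope sketch (not from Hessen): equilibrium warming from a CO2 doubling.
import math

def co2_forcing(c_ratio):
    """Approximate radiative forcing (W/m^2) for a CO2 concentration ratio C/C0."""
    return 5.35 * math.log(c_ratio)

doubling_forcing = co2_forcing(2.0)   # about 3.7 W/m^2

# Climate-sensitivity parameter in K per (W/m^2): ~0.3 with no feedbacks (Planck
# response only); the higher values are illustrative stand-ins for net positive
# feedbacks (water vapor, clouds, reduced albedo).
for label, lam in [("no feedbacks", 0.3),
                   ("moderate feedbacks", 0.6),
                   ("strong feedbacks", 1.0)]:
    print(f"{label:>18}: ~{lam * doubling_forcing:.1f} deg C per CO2 doubling")
```

Running this prints roughly 1.1, 2.2, and 3.7 degrees C per doubling, which is one simple way to see how the size of the feedbacks, not the CO2 forcing itself, drives the difference between the lower and upper warming estimates.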
My point in running through the implications of Hessen’s analysis is that there is a reason for scientists to fear for all of our lives:  the global warming that, in the extreme, severely disrupts agriculture and food webs and makes it unsafe for most of us to operate outside for more than a short period of time most of the year, except in the high mountains, is a CO2-driven “irresistible force”.  There is therefore a good case to be made that “mitigation” – reducing emissions of carbon (and methane and black carbon and possibly nitrous oxide) – is by far the best way to avoid the kind of global warming that literally threatens humanity’s survival.  And this is a point that I have argued vociferously in past blog posts.

The Other Irresistible Force:  The Business/Legal Bureaucracy


And yet, based on my reading, I would argue that there is an apparently equally irresistible force operating on its own momentum in today’s world:  what I am calling the Business/Legal Bureaucracy.  Here, I am using the word “bureaucracy” in a particular sense:  that of an organization operating on its own momentum.  In the case of businesses, that means a large-business bureaucracy always operating under a plan to make more profit this year than last.  In the case of the law, that means a court system and a mindset that always build on precedents and support property rights.  
To see what I am talking about, consider David Owen’s “Where the Water Goes”, about what happens to all the water in the Colorado River watershed.  Today, most of that water is shipped east to fuel the farms and businesses of Colorado and its vicinity, or west, to meet the water needs of Las Vegas and Los Angeles and the like.  The rest is allocated to farmers or other property owners along the way under a peculiar legal doctrine called “prior appropriation” – instead of the more typical equal sharing among bordering property owners, it limits each portion to a single “prior claimant”, since in the old mining days there wasn’t enough water in each portion for more than one miner to be able to “wash” the gold effectively.  Each new demand for water therefore is fitted legally within this framework, ensuring that the agricultural business bureaucracy will push for more water from the same watershed, while the legal bureaucracy will accommodate this within the array of prior claimants.
To see what creates such an impression of an irresistible force about this, consider the need to cut back on water use as climate change inevitably decreases the snowpack feeding the Colorado.  As Owen points out, there is no clear way to do this.  Business-wise, no one wants to be the one to sacrifice water.  Legally, the simple solution of limiting per-owner water use can actually result in more water use from each owner, as seasonal and annual variations in water needs mean that many owners will now need to draw down and store water in off-seasons “just in case”.   The result is a system that not only resists “sustainability” but in fact can also be less profit-producing in the long run – the ecosystems shafted by this approach, such as the now-dry Mexican terminus of the Colorado, may well be those that might have been most arable in the global-warming future.  And the examples of this multiply quickly:  the wildfire book I reviewed in an earlier blog post noted the extreme difficulties in fighting wildfires (with their exacerbating effect on global warming), difficulties caused by the encroachment of mining and real-estate development on northern forests, driven in turn by the Legal/Business Bureaucracy.
The primary focus of the Legal/Business Bureaucracy with respect to climate change and global warming, therefore, is after-the-fact, incremental adaptation.  It is for that reason, as well as the dangerous trajectory we are now on, that I view calling the most disastrous future climate-change scenarios “business as usual” as entirely appropriate.

Force, Meet Force


It also seems to me that reactions to the most dire predictions of climate change fall into two camps (sometimes in the same person!): 

1.       We need drastic change or few of us will survive, because no society or business system can possibly resist the upcoming “business as usual” changes:  extreme weather, loss of water for agriculture, loss/movement of arable land, large increases in the area ripe for debilitating/killing tropical diseases, extreme heat, loss of ocean food, loss of food-supporting ecosystems, loss of seacoast living areas, possible toxic emissions from the ocean. 

2.       Our business-economy-associated present system will somehow automagically “muddle through”, as it always seems to have done.  After all, there is, it seems, plenty of “slack” in the water efficiency of agriculture, plenty of ideas about how to adapt businesses and governments to encourage better adaptation and mitigation “at the margin” (e.g., solar now dominates the new-energy market in the United States, although it does not seem to have made much of a dent in the existing uses of fossil fuels), and plenty of new business-associated technologies to apply to problems (e.g., business sustainability metrics). 

An interesting example of how both beliefs can apparently coexist in the same person is Steven Pinker’s “Enlightenment Now”, an argument that the “reason reinforced by science” approach to the world of the late 18th century known as the Enlightenment is primarily responsible for a dramatic, continuing improvement in the overall human situation, and should be chosen as the primary impetus for further improvements.  My overall comment on this book is that Pinker has impressed me with the breadth of his reading and the general fairness of his assessments of that reading.  However, Pinker appears to have one frustrating flaw:  a belief that liberals and other proponents of change in such areas as equality, environmentalism, and climate change are so ideology-driven that they are actually harmful to Enlightenment-type improvements, as are their proponents within government.  Any reasonable reading of Jeffrey Sachs’ “The Age of Sustainable Development”, which I hope to discuss in a later post, puts the lie to that one. 
In any case, Pinker has an extensive section on climate change, in which he both notes the “existential threat” (i.e., threat to human existence) posed by human-caused climate change, and asserts his belief that it can be overcome by a combination of Business/Legal-Complex adaptation to the findings of Enlightenment scientists and geoengineering.  One particular assertion is that if regulations could be altered, new technologies in nuclear power can meet all our energy needs in short order, with little risk to people and little need of waste storage.  I should note that James Hansen appears to agree with him (although Hansen is far more pessimistic that regulation alteration will happen or that nuclear businesses will choose to change their models) and Joe Romm very definitely does not.  
One also often sees the second camp among experts on Business/Legal-Complex matters such as allocation of water in California in reaction to climate-change-driven droughts.  These reactions assume that what is an exponential global-warming process will have only linear effects on water availability, and note that under these assumptions there is plenty of room for less water use by existing agriculture, and that everyone should “stop panicking.” 
So what do I think will happen when force meets force?

Flexibility Is the Enemy of Agility


One of the things that I noticed (and noted in my blog) in my past profession is that product and organizational flexibility – the ability to easily repurpose for other needs and uses – is a good thing in the short run, but often a bad thing in the long run.  How can this be?  Because the long run will often require fundamental changes to products and/or organizations, and the more that the existing product or system is “patched” to meet new needs, the greater the cultural, organizational, and experiential investment in the present system – not to mention the greater the gap between that and what’s going on elsewhere that will eventually lead to the need for a fundamental change.
There is one emerging business strategy that reduces the need for such major, business-threatening transitions:  business agility.  Here the idea is to build both products and organizations with the capacity for much more radical change, plus better antennae to the outside to stay as close as possible to changes in consumer needs.  But flexibility is in fact the enemy of agility:  patches in the existing product or organization entrench its “hard-coded” portions, making fundamental changes more unlikely and the huge costs of an entire do-over more inevitable.
As you can guess, I think that today’s global economy is extraordinarily flexible, and that is what camp 2 counts on to save the day.  But this very flexibility is the enemy of the agility that I believe we will find ourselves needing, if we are to avoid this kind of collapse of the economy and our food production.
But it’s a global economy – that’s part of its global Business/Legal-Bureaucracy flexibility.  So local or even regional collapses aren’t enough to do it.  Rather, if we fail to mitigate strongly, e.g., in an agile fashion, and to create a sustainable global economy over the next 20 years, then some time over the 20 years after that the average costs of increasing stresses from climate change will put the global economy in permanent negative territory, and, unless fundamental change happens immediately after that, the virtuous circle of today’s economy will turn into a vicious, accelerating circle of economic collapse.  And the smaller our economies become, the less we wind up spending on combating the ever-increasing effects of climate change.  At the end, we are left with less than half (and maybe 1/10th) the food sources and arable land, much of it in new extreme-northern areas with soil of lower quality, with food being produced under much more brutal weather conditions.  Or, we could by then have collapsed governmentally to such a point that we can’t even manage that.  It’s an existential threat.

Initial Thoughts On What to Do


Succinctly, I would put my thoughts under three headings:

1.       Governmental.  Governments must drive mitigation well beyond what they have done up to now in their Paris-agreement commitments.  We, personally, should place climate change at the top of our priority lists (where possible), reward commitments and implementation politically, and demand much more.

2.       Cultural.  Businesses, other organizations, and the media should be held accountable for promoting, or failing to promote, mitigation-targeted governmental, organizational, and individual efforts.  In other words, we need to create a new lifestyle.  There’s an interesting take on what a lifestyle like that would look like in Peter Kalmus’ “Being the Change”, which I hope to write about soon.

3.       Personal.  Simply as a matter of not being hypocrites, we should start to change our carbon “consumption” for the better.  Again, Kalmus’ book offers some suggestions.

Or, as Yoda would put it, “Do or do not.  There is no try.”  Because both Forces will be against you.

Thursday, May 17, 2018

Reading New Thoughts: Dolnick’s Seeds Of Life and the Science of Our Disgust


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and nothing in it should be relied on.
Edward Dolnick’s “The Seeds of Life” is a fascinating look at how scientists figured out exactly how human reproduction works.  In particular, it shows that until 1875 we still were not sure what fertilized what, and that until the 1950s (with the discovery of DNA’s structure) we did not know how that fertilization led to a fully formed human being.  Here, however, I’d like to consider what Dolnick says about beliefs before scientists began/finished their quest, and those beliefs’ influence on our own thinking.
In a very brief summary, Dolnick says that most cultures guessed that the man’s seminal fluid was “seed”, while the woman was a “field” to be sown (obviously, most if not all of those recording these cultural beliefs were male).  Thus, for example, the Old Testament talks about “seed” several times.  An interesting variant was the idea that shortly after fertilization, seminal fluid and menstrual blood combined to “curdle” the embryo like milk curdled into cheese – e.g., in the Talmud and Hindu writings, seminal fluid, being milky, supplied the white parts of the next generation, like bone, while menstrual blood, being red, supplied the red parts, like blood.  In any case, the seed/field belief in some ways casts the woman as inferior – thus, for example, infertility is always the woman’s fault, since the seeds are fine but the field may be “barren” – another term in the Bible.
In Western culture, early Christianity superimposed its own additional concerns, which affected our beliefs not just about how procreation worked and the relative inferiority or superiority of the sexes, but also our notion of “perfection”, both morally and with regard to sex.  The Church and some Protestant successors viewed sex as a serious sin “wanting a purpose [i.e., if not for the purpose of producing babies]”, including sex within marriage.  Moreover, the Church with its heritage in Platonism viewed certain things as less disgusting than others, and I would suggest that many implicit assumptions of our culture derive from these.  The circle is perfect, hence female breasts are beautiful; smooth surfaces and straight lines are beautiful, while body hair, the asymmetrical male member, and the crooked labia are not.  Menstrual blood is messy, “unclean,” and disgusting, as is sticky, messy seminal fluid.

Effects on Sexism


It seems to me that much more can be said about differing cultural attitudes towards men and women based on the seed/field belief.  For one thing, the seed/field metaphor applies in all agricultural societies – and until the early 1900s, most societies were almost completely agricultural rather than hunter/pastoral or industrial.  Thus, this way of viewing women as inferior was dangerously plausible not only to men, but also to women.  In fact, Dolnick records examples in Turkey and Egypt of modern-day women believing in the seed/field theory and therefore women’s inferiority in a key function of life.
Another implication of the seed/field theory is that the “nature” of the resulting children is primarily determined by the male, just as different types of seed yield different plants.  While this is somewhat counteracted by the obvious fact that, physically, children tend to favor each parent more or less equally, there is some sense in literature such as the Bible that some seed is “special” – Abraham’s seed will pass down the generations and single out his Jewish descendants for special attention from God.  And that, in turn, can lead naturally to the idea that mingling other men’s seed with yours can interfere with that specialness, hence wives are to be kept away from other men – and that kind of control over the wife leads inevitably to the idea of women as at least partially property.  And finally, the idea of the woman as passive receptacle of the seed can lead to men viewing women actively desiring sex, or a woman’s orgasm, as indications of mentally-unbalanced “wantonness”, further reinforcing the (male) impression of women’s inferiority.  
I find it not entirely coincidental that the first major movements toward feminism occurred soon after Darwin’s take on evolution (implicitly even-handed between the sexes) and the notions of the sperm and the egg were established as scientifically superior alternatives to Biblical and cultural beliefs.  And I think it is important to realize that, with genetic inheritance via DNA still being examined and revised, the role of culture rather than “inherence” in male and female is still in the ascendant – as one geneticist put it, we now know that sex is a spectrum, not either-or.  So the ideas of both similarity and “equality” between the sexes are now very much science-based.
But there’s one other possible effect of the seed/field metaphor that I’d like to consider.  Is it possible that the ancients decided that there was only so much seed that a man had, for a lifetime?  And would this explain to some extent the abhorrence of both male masturbation and homosexuality that we see in cultures worldwide?  Think of Onan in the Bible, and his sin of wasting his seed on the barren ground …

Rethinking Disgust


“Girl, Wash Your Face” (by Rachel Hollis, one of the latest examples of the new breed who live their lives in public) is, I think, one of the best self-help books I have seen, although it is aimed very clearly not at men – because many of the things she suggests are perfectly doable and sensible, unlike those in the many self-help books whose advice, in a competitive world, can bring financial success only to a few.  What I also find fascinating about it is the way in which “norms” of sexual roles have changed since the 1950s.  Not only is the author running a successful women’s-lifestyle website with herself as overworking boss, but her marriage is what she views as her vision of Christianity, complete with a positive view of sex primarily on her terms.
What I find particularly interesting is how she faced the age-old question of negotiating sex within marriage.  What she decided was that she was going to learn how to want to have sex as a norm, rather than being passively “don’t care” or disgusted by it.  I view this as an entirely positive approach – it means that both sides in a marriage are on the same page (more or less) with the reassurance that “not now” doesn’t mean “not for a long time” or “no, I don’t like you”.  But the main significance of this is that it means a specific way of overcoming a culture of disgust, about sex among other things.
I believe that the way it works is captured best by Alexander Pope’s lines:  “Vice is a monster of so frightful a mien/As, to be hated, needs but to be seen/Yet, seen oft, familiar with her face/We first endure; then pity; then embrace.”  The point is that it is often not vice that causes the disgust, but rather disgust that causes us to call it vice – as I have suggested above.  And the cure for that disgust is to “see it oft” and become “familiar” with it, knowing that we will eventually move from “enduring” it, to having it be normal, to “embracing” it. 
Remember, afaik, we can’t help feeling that everything about us is normal or nice, including excrement odor, body odor, messiness, maybe fat, maybe blotches or speech problems – and yet, culturally and perhaps viscerally, the same things about other people disgust us (or the culture tells us they should disgust us).  And therefore, logically, we should be disgusted about ourselves as well – as we often are.  Moreover, in the case of sex, the disgusting can also seem forbidden and hence exciting.  The result, for both sexes, can be a tangled knot of life-long neuroses.  
The path of moving beyond disgust, therefore, can lie simply with learning to view the disgusting as in a sense ours:  the partner’s body odor as our body odor, their fat as our love handles, etc.  But it is “ours” not in the sense of possession, but in the sense of being an integral part of an overall person who is now a vital part of your world and, yes, beautiful to you in an every-day sort of way, just as you can’t help thinking of yourself as beautiful.  This doesn’t happen overnight, but, just as the ability to ignore itches during Zen meditation inevitably comes, so this will happen in its own good time, while you’re not paying attention.
The role of science in general, not just in the case of how babies are made or sex, has typically been to undercut the rationale for our disgust, for our prejudices and for many of our notions of what vice is.  And thus, rethinking our beliefs in the light of science allows us to feel comfort that when we overcome our disgust about something, it is in a good cause; it is not succumbing to a vice.   And maybe, just maybe, we can start to overcome the greatest prejudice of all:  against crooked lines, imperfect circles, and asymmetry.  

Thursday, May 3, 2018

Perhaps Humans Are One Giant Kluge (Genetically Speaking)

Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and nothing in it should be relied on.
Over the past year, among other pastimes, I have read several books on the latest developments of genetics and the theory of evolution.  The more I read, the more I feel that my background in programming offers a fresh perspective on these developments – a somewhat different way of looking at evolution. 
Specifically, the way in which we appear to have evolved into various flavors of the species Homo sapiens sapiens suggests to me that we ascribe far too much purpose to our evolution of various genes.  Much if not most of our genetic material appears to have come from a process similar to what, in the New Hacker’s Dictionary and in more general computer slang, is called a kluge.
Here’s a brief summary of the definition of kluge in The New Hacker’s Dictionary (Eric Raymond), still my gold standard for wonderful hacker jargon.  Kluge (pronounced kloodj) is “1.  A Rube Goldberg device in hardware/software, … 3.  Something that works for the wrong reason, … 5. A feature that is implemented in a ‘rude’ manner.”  I would add that a kluge is just good enough to handle a particular case, but may include side effects, unnecessary code, bugs in other cases, and/or huge inefficiencies. 

Our Genes as a Computer Program

(Note: most of the genetics assertions in this post are plundered from Adam Rutherford’s “A Brief History of Everyone Who Ever Lived”)
The genes within our genome can be thought of as a computer program in a very peculiar programming language.  The primitives of that language are the four nucleotide bases abbreviated A, G, C, and T.  The statements of the language, effectively, are of the form IF (the state in the cell surrounding this gene is x AND the gene has value y) THEN (trigger a sequence of chemical reactions z, which may not change the state within the cell but does change the state of the overall organism).  Two peculiarities:
1.       All these gene “statements” operate in parallel (the same state can trigger several genes).
2.       The program is more or less “firmware” – that is, it can be changed, but over short periods of time it isn’t. 
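To make that model concrete, here is a toy sketch (mine, not Rutherford’s – the gene names, cell states, and effects are invented) in which each gene is a rule that fires whenever its condition matches the surrounding cell state, with all rules evaluated “in parallel”:

```python
# A toy version (my sketch) of the "genes as parallel IF statements" model above.
from dataclasses import dataclass

@dataclass
class GeneRule:
    name: str
    required_state: str    # the surrounding-cell state x
    required_variant: str  # the gene value y
    effect: str            # the triggered chemical cascade z

def express(rules, cell_state, genome):
    """Evaluate all 'statements' in parallel: every rule whose condition matches
    the current cell state and genome fires."""
    return [r.effect for r in rules
            if r.required_state == cell_state
            and genome.get(r.name) == r.required_variant]

# Hypothetical genome and cell state: the same state can trigger several genes at once.
rules = [
    GeneRule("gene_a", "low_oxygen", "variant_1", "make more red blood cells"),
    GeneRule("gene_b", "low_oxygen", "variant_2", "dilate blood vessels"),
    GeneRule("gene_c", "high_sugar", "variant_1", "release insulin"),
]
genome = {"gene_a": "variant_1", "gene_b": "variant_2", "gene_c": "variant_1"}
print(express(rules, "low_oxygen", genome))   # both gene_a and gene_b fire
```

Note that nothing in the “program” runs in sequence; the cell state simply selects whichever rules happen to match, which is part of what makes the eventual accumulation of patches so kluge-like.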
Obviously, given evolution, the human genome “program” has changed – quite a lot.  The mechanism for this is mutation:  changes in the “state” outside an instance of DNA that physically change an A, G, C, or T, delete or add genes, or change the order of the genes on one side of the chromosome or the other.  Some of these mutations occur within the lifetime of an individual, during the time when cells carry out their programmed imperatives to perform tasks and subdivide into new cells.  Thus, one type of cancer (we now know) is caused when mutation deletes some genes on one side of the DNA pairing, resulting in deletion of the statement (“once the cell has finished this task, do not subdivide the cell”).  It turns out that some individuals are much less susceptible to this cancer because they have longer chains of “spare genes” on that side of the DNA, so that it takes much longer for a steady, statistically-random stream of deletions to result in statement deletion.

Evolution as an Endless Series Of Kluges

Evolution, in our computer-program model, consists of new mutations (i.e., ones not already present somewhere in the population of the species).  The accepted theory of the constraints that determine which new mutations prosper over the long run is natural selection. 
Natural selection has been approximated as “survival of the fittest” – more precisely, survival of genes and gene variants because they are the best adapted to their physical environment, including competitors, predators, mates, and climate, and therefore are most likely to survive long enough to reproduce and to out-compete rivals for mates.  The sequencing of the human genome (and that of other species) has given us a much better picture of evolution in action, as well as of human evolution in the recent past.  Applied to the definition of natural selection, it suggests somewhat different conclusions:
·         The typical successful mutation is not the best for the environment, but simply one that is “good enough”.  An ability to distinguish ultraviolet and infrared light, as the mantis shrimp does, is clearly best suited to most environments.  Most other species, including humans, wound up with an inability to see outside the “visible spectrum.”  Likewise, light entering the eye is interpreted at the back of the eye, whereas the front of the eye would be a better idea.
·         Just because a mutation is harmful in a new environment, that does not mean that it will go away entirely.  The gene variant causing sickle-cell anemia is present in 30-50% of the population in much of Africa, the Middle East, the Philippines, and Greece.   Its apparent effect is to allow those who would die early in life from malaria to survive through most of the period when reproduction can happen.  However, indications are that the mutation is not disappearing fast if at all in offspring living in areas not affected by malaria.  In other words, the relative lack of reproductive success for those afflicted by sickle-cell anemia in the new environment is not enough to eradicate it from the population.  In the new environment, the sickle-cell anemia variant is a “bug”; but it’s not enough of a bug for natural selection to operate.
·         The appendix serves no useful purpose in our present environment – it’s just unnecessary code, with appendicitis a potential “side effect”.  There is no indication that the appendix is going away. Nor, despite recent sensationalizing, is red hair, which may be a potential side effect of genes in northern climes having less need for eumelanin to protect against the damaging effects of direct sunlight.
·         Most human traits and diseases, we are finding, are not determined by one mutation in one gene, but rather are the “side effects” of many genes.  For example, to the extent that autism is heritable (and remembering that autism is a spectrum of symptoms and therefore may be multiple diseases), no one gene has been shown to explain more than a fraction of the heritable part.
In other words, evolution seems more like a series of kluges:
·         It has resulted in a highly complex set of code, in which it is very hard to determine which gene-variant “statement” is responsible for what;
·         Compared to a set of genes designed from the start to result in the same traits, it is a “rude” implementation (inefficient and with lots of side-effects), much like a program consisting mostly of patches;
·          It appears to involve a lot of bugs.  For example, one estimate is that there have been at least 160,000 new human mutations in the last 5,000 years, and about 18% of these appear to be increases in inefficiency or potentially harmful – but not, it seems, harmful enough to trigger natural selection.

Variations in Human Intelligence and the Genetic Kluge

The notion of evolution as a series of kluges resulting in one giant kluge – us – has, I believe, an interesting application to debates about the effect of genes vs. culture (nature vs. nurture) on “intelligence” as measured imperfectly by IQ tests. 
Tests on nature vs. nurture have not yet shown the percentage of each involved in intelligence variation (Rutherford says only that variations from gene variance “are significant”).  A 2013 survey of experts at a conference shows that the majority think 0-40% of intelligence variation is caused by gene variation, the rest by “culture”.  However, the question that has caused debate is how much of that gene variance is variance between individuals in the overall human population and how much is variance between groups – typically, so-called “races” – each with its own different “average intelligence.” 
I am not going to touch on the sordid history of race profiling at this point, although I am convinced it is what makes proponents of the “race” theory blind to recent evidence to the contrary.  Rather, I’m going to conservatively follow up the chain of logic that suggests group gene variance is more important than individual variance. 
We have apparently done some testing of gene variance between groups.  The second-largest variance is apparently between Africans (not African-Americans) and everyone else – but the striking feature is how very little difference (compared to overall gene variation in humans) that distinction involves.  The same process has been carried out to isolate even smaller amounts of variance, and East Asians and Europeans/Middle East show up in the top 6, but Jews, Hispanics, and Native Americans don’t show up in the top 7. 
What this means is that, unless intelligence is affected by one or only a few genes falling in those “group variance” categories, most of the genetic variance is overwhelmingly likely to be individual.  And, I would argue, there’s a very strong case that intelligence is affected by lots of genes, as a side-effect of kluges, just like autism.
First, over most of human history until the last 250 years, the great bulk of African or non-African humans have been hunters or farmers, with no reading, writing, or test-taking skills, and with natural selection for particular environments apparently focused on the physical (e.g., lactose tolerance associated with European dairying) rather than the intelligence-related (e.g., larger brains/new brain capabilities).  That is, there is little evidence for natural selection targeted at intelligence but lots for natural selection targeted at other things. 
Second, as I’ve noted, it appears that human traits and diseases in general usually involve large numbers of genes.  Why should “intelligence” (which, at a first approximation, applies mostly to humans) be different?  Statistically, it shouldn’t.  And as of yet, no one has even been able to find one gene significantly connected to intelligence – which again suggests lots of small-effect genes.
So let’s imagine a particular case.  100 genes affect intelligence variation, in equal amounts (1% plus or minus).  Group A and Group B share all but 10 genes.  To cook the books further, Group A has 5 unique plus genes, and Group B 5 unique minus genes (statistically, they both should have equal amounts plus and minus on average).  In addition, gene variance as a whole is 50% of overall variation.  Then 10/95 of the genetic variation in intelligence (about 10.5%) is explained by whether an individual is in Group A or B. This translates to 5.2% of the overall variation being due to the genetics of the group, 44.8% being due to individual genetic variation, and 50% being due to nurture.
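For what it’s worth, here is a quick check of that arithmetic (my sketch; reading the “95” as the 90 shared genes plus the 5 unique to each group is my interpretation, and all the numbers are the hypothetical ones just stated):

```python
# Reproducing the hypothetical arithmetic above (a sketch under the stated assumptions).
genes_per_group = 95        # 90 shared + 5 unique to each group (my reading of "95")
group_specific = 10         # genes that differ between Group A and Group B
genetic_share = 0.50        # fraction of overall intelligence variation that is genetic

group_fraction_of_genetic = group_specific / genes_per_group      # ~10.5%
group_share_overall = group_fraction_of_genetic * genetic_share   # ~5.2-5.3% (rounding)
individual_share_overall = genetic_share - group_share_overall    # ~44.7-44.8%

print(f"genetic variation explained by group membership: {group_fraction_of_genetic:.1%}")
print(f"overall variation due to group genetics:         {group_share_overall:.1%}")
print(f"overall variation due to individual genetics:    {individual_share_overall:.1%}")
print(f"overall variation due to nurture:                {1 - genetic_share:.0%}")
```

Up to rounding, this reproduces the 10.5%, 5.2%, 44.8%, and 50% figures in the example.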
Still, someone might argue, those nature vs. nurture survey participants have got it wrong:  gene variation explains all or almost all of intelligence variation.  Well, that still means that nurture has 5 times the effect that belonging to a group does.  Moreover, under the kluge model, the wider the variation between Race A and Race B, between, say, Jewish-American and African-American, THE MORE LIKELY IT IS THAT NURTURE PLAYS A LARGE ROLE.  First of all, “races” do not correspond at all well to the genetic groups I described earlier, and so are more likely to have identical intelligence on average than the groups I cited.  Second, because group variation is so much smaller than individual variation, group genetics is capable of much less variation than nurture (0-10.5% of overall variation, vs. 0-100% for nurture).  And I haven’t even bothered to discuss the Flynn Effect – measured intelligence rising over time, equally across groups, far more rapidly than natural selection can operate – a clear indication that nurture is involved.
Variation in human intelligence, I say, isn’t survival of the smartest.  It’s a randomly distributed side effect of lots of genetic kluges, plus the luck of the draw in the culture and family you grow up in.

Thursday, April 26, 2018

Reading New Thoughts: Two Different Thoughts About American History


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and nothing in it should be relied on.

In my retirement, I have, without great enthusiasm, looked at various books on American History.  My lack of interest stems from my impression that most of these, in the past, (a) were overly impressed by certain personalities (Jefferson and Lee spring to mind) and (b) had no ability to take the point of view of foreign actors or even African and Native Americans. 

As it turns out, that is changing, although not universally, and so there are, imho, some fascinating re-takes on some of the historical narratives such as those I found growing up in my American History textbooks.  I’d just like to call attention here to two “new thoughts” about that history that I recently ran across.  I will state them first in abbreviated and provocative form:

1.       President John Tyler may have played the largest role in making the Civil War inevitable.

2.       The most dangerous part of the American Revolution for the American cause was AFTER Valley Forge.

Now, let’s do a deeper dive into these.

John Tyler Did It


The time from the War of 1812 to the Mexican-American War has always been a bit of a blank spot in my knowledge of American History.  The one event I remember that seems to mean much is Jackson’s termination of the Bank of the United States, which subjected the U.S. to severe rather than mild recessions for a century – including, of course, the effect on the Great Depression.

A recent biography of John Quincy Adams adds a great deal to my understanding of this period, even if it is (necessarily) overly focused on Adams himself.  As it turns out, Adams was involved both in American diplomacy abroad from the Revolution to 1830, and in its politics from 1805 to 1848, when he died.  In that span, he saw three key developments in American politics:

1.       The change in slavery’s prominence in politics from muted to front and center.  We tend to see the Compromises leading up to the Civil War as springing out of the ground at the end of the Mexican War; on the contrary, it appears that the key event is the formation of a pro-slavery Republic of Texas in the early 1830s, along with (and related to) a new, much more uncompromising anti-slavery movement arriving around 1830.  That, in turn, was a reaction to a much more expansionist slave-state politics after the War of 1812. 

2.       A change in the Presidential election system from “election by elites” (electors were much less bound to particular parties) to a much more widespread participation in elections as new states arrived (although, of course, the electorate still remained male, white, and Northern European). 

3.       A related change in the overall political system from “one-person” parties to a real two-party system.  For the value of that, see modern Israel, whose descent from a two-party system to one in which most if not all parties are centered around one person and mutate or vanish whenever that person leaves the scene has brought dysfunction, short-term and selfish thinking, and paralysis to policy-making.

The way in which these changes occurred, however, had a great deal to do with the ultimate shape of the Civil War.  Here’s my take at a quick synopsis:  From 1812 to 1830, the old style of politics, in which each Democratic-Republican President effectively “crowned” his successor by making him Secretary of State, was dominant.  During that time, also, there was a kind of “pause” in westward expansion during which the rest of the continent east of the Mississippi became states.  And Adams, during his time as Secretary of State, arranged with Spain the acquisition of claims to the territory that later became the Dakotas, Wyoming, Idaho, and (most importantly) Oregon and Washington. 

When Adams was elected, Jackson was the closest competitor in a four-person race (Jackson actually won pluralities of both the electoral and the popular vote, but not majorities, throwing the election to the House).  Inevitably, in the next election, as the power of the new Western states made itself felt, Jackson was a clear victor – but parties were still one-person things.  Jackson as a slave-holder and anti-Native-American bigot certainly did African and Native Americans no favors personally, but did not strengthen the slave states’ political power significantly.  Moreover, his successor, Martin Van Buren, is credited as the true founder of the two-party system, very much along today’s lines:  a party of Federal-government spending on “improvements” to supplement state and local spending (the Whigs) vs. a party along Jefferson’s lines of small government (the Democrats) – and these things cut across slavery and anti-slavery positions. 

Inevitably, the Whigs quickly won an election vs. the Democrats, ushering in William Henry Harrison, a governor from the Northwest who promised to rein in the political power that the slave states exercised through the Democrats (for example, they had imposed a “gag rule” to prevent Adams, who was now in Congress, from speaking out against slavery).  And this was important, because between the 1820s and the early 1840s, settlers had created a “slave-state Republic” in Texas with uncertain boundaries, and the slave states were clamoring for Texas to be accepted as a state, which would effectively upset the balance between slave and free states. 

Unfortunately, Harrison died shortly after taking office, and John Tyler, a slave-state “balanced ticket” politician, took over.  He had no interest in the Whig party – rather, he sought to carve out a position close to the Democratic one, in order to create his own political party and win election in his own right.  Thus, he splintered his own party and sectionalized it as well.  What followed, as Polk and the Democrats accepted Texas and acquired the Southwest during the Mexican-American War, simply made inevitable and immediate the “irrepressible conflict.”

 But let’s imagine Harrison hadn’t died, and had instead served two terms.  It is very possible that the slave states would have benefited from “improvements”, and slavery expansion would have become less a matter of absolute necessity in their politics.  It is then possible that the notion of secession would not have been so universally accepted, nor the excesses of slaveholding immigrants in Missouri as easily excused, nor the pressure to make the North conform as extreme.  Thus, Kentucky and Tennessee would not have been as up for grabs as the Civil War started, the Confederacy would have been weaker, and the result quicker and less bloody.

All this is speculative, I know.  But to me, the biggest what-if about the Civil War is now:  What if John Tyler had never gotten his mitts on the Presidency?

It’s Not About Valley Forge


Up until recently, my view of the Revolutionary War, militarily speaking, was that Washington simply hung on despite financial difficulties, defeats, and near-victories squandered by subordinates, until he surmounted his troops’ starvation at Valley Forge and the British, frustrated, shifted their focus to the South.  Then, when Cornwallis in the South failed to conquer there and desperately marched North, Washington nipped down, wiped out his troops at Yorktown, and the war was effectively over, 2 years before the peace treaty got signed. 

However, a new book called “The Strategy of Victory” (SoV) presents a different picture.  It does so by (a) more completely presenting the British point of view, and (b) probing deeper into the roles of militia and regular army in the fighting.

SoV suggests that the overriding strategy of Washington was (a) keeping a trained regular army in existence so that the British in venturing south beyond New York could be defeated on its flank and in detail where possible, and (b) combining that regular army with militia who would, more or less, take the role of long-distance sharpshooters at the beginning of a battle.  At first, that strategy was highly successful, and led to the “crossing the Delaware” victories, followed by a British conquest of Philadelphia that backfired because Washington sat on the communications and supply lines between New York and Philadelphia.  So the British went back to New York.

However, Clinton, the British general in New York, next devised a strategy (and this is after Valley Forge) to catch Washington out of his well-defended “fortress area”, by landing his troops in mid-New Jersey and heading inland.  In fact, he came surprisingly close to succeeding, and only a desperate last stand by a small portion of the militia and army allowed Washington to slip away again.  That was Closest Shave Number One.

Attention then shifted to the South, where a nasty British officer named Tarleton basically moved faster than any colonial resistance and steadily wiped out militia resistance in Georgia, then South Carolina, then into North Carolina.  Had he succeeded in North Carolina, it is very possible that he would have succeeded in Virginia, and then Washington would really have been caught between two fires.  But a magnificent mousetrap by Daniel Morgan that again combined regulars and militia to first tempt Tarleton into battle, and then ensure he lost it, saved the day:  Closest Shave Number Two.  Without Tarleton’s regulars to maintain it, the British pressure on the inland Carolinas and Georgia collapsed, and Cornwallis’ move into Virginia had no effect beyond where his army moved, making the Virginia invasion pointless and allowing Washington to trap him.  Meanwhile, General Greene moved into the southern vacuum, his main concern being to preserve his regulars while doing so (again, Washington’s strategy), and was able to take over most of the Carolinas and Georgia again – as SoV puts it, he “took over all of Georgia while losing every battle.”   

However, Yorktown was not the end of the fight for the North.  It was always possible that the British would again sally from New York – if Washington were no longer there.  And so the next two years were a colossal bluff, in which a regular army of a couple of thousand was kept together with spit, baling wire, and monetary promises just to keep the British afraid to venture again out of New York.  Washington’s last address to his troops was not, says SoV, his thanks for faithful service; it was his apology for the lies he told them in order to hold the army together, meant to keep them from revolting rather than disbanding. 

One other point of this narrative applies to the historical American “cult of the militia” that still hangs over us in today’s venal legal interpretations of the Second Amendment.  SoV makes it very clear, as the other American History books through to the Civil War also do, that America could not have survived without a regular army, and that militia without a regular army are not sufficient to maintain a free society – whereas, in the Civil War, a regular army without militia made us more free.

Thursday, April 12, 2018

Reading New Thoughts: Grinspoon’s Earth in Human Hands and Facing the Climate-Change-Driven Anthropocene Bottleneck


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and nothing in it should be relied on.

In my mind, David Grinspoon’s “Earth In Human Hands” raises two issues of import as we try to take a long-term view of what to do about climate change:

1.        How best do we approach making the necessary political and cultural changes to tackle mitigation – what mix of business/market, national/international governmental, and individual strategies?

2.       For the even longer term, how do we tackle a “sustainable” economy and human-designed set of ecosystems?  Grinspoon claims that there are two opposing views on this – that of “eco-modernists”, who say that we should design ecosystems based on our present setup, ameliorated to achieve sustainability, and that of “environmental purists”, who advocate removal of humans from ecosystems completely.

First, a bit of context.  Grinspoon’s book is a broad summary of what “planetary science” (how planets work and evolve on a basic level, with our example leavened by those of Venus and Mars) has to say about Earth’s history and future.  His summary, more or less, is this:  (a) For the first time in Earth’s history, the whole planet is being altered consciously – by us – and therefore we have in the last few hundred years entered a whole new geological era called the Anthropocene; (b) That new era brings with it human-caused global warming and climate change, which form a threat to human existence, and therefore to the existence of Anthropocene-type “self-conscious” life forms on this planet, a threat that he calls the “Anthropocene bottleneck”; (c) it is likely that any other such planets with self-conscious life forms in the universe face the same Anthropocene bottleneck, and other possible threats to us pale in comparison, so that surviving the Anthropocene bottleneck is a good sign that humanity will survive for quite a while.

Tackling Mitigation


Grinspoon is very forceful in arguing that probably the only way to survive the Anthropocene bottleneck is through coordinated, pervasive global efforts:  in particular, new global institutions.  Translated, that means not “loose cannon” business/market nor individual nor even conflicting national efforts, but science-driven global governance, plus changes in cultural norms towards international cooperation.  Implicitly, I think, his model is that of the physics community he is familiar with:  one where key information, shared and tested by scientific means, informs strategies and reins in individual and institutional conflicts. 

If there is anything that history teaches us, it is that there is enormous resistance to the idea of global enforcement of anything.  I myself tend to believe that it represents one side of a two-sided conflict that plays out in any society – between those more inclined toward “hope” and those inclined toward “fear”, which in ordinary times plays out as battles between “liberals” and “conservatives.” 

Be that as it may, Grinspoon does not deny that there is resistance to global enforcement.  He says, however, that global coordination, including global enforcement, is a prerequisite for surviving the Anthropocene bottleneck.  We cooperate and thereby effectively mitigate climate change, or we die.  And the rest of this century will likely be the acid test.

I don’t disagree with Grinspoon;  I just don’t think we know what degree of cooperation will be needed to deal with climate change in order to avoid facing the ultimate in global warming.   What he describes would be ideal; but we are very far from it now, as anyone watching CO2 rise over at Mauna Loa is well aware.   Rather, I think we can take his idea of scientifically-driven global mitigation as a metric and an “ideal” model, to identify key areas where we are falling down now by failing to react – quickly, globally, and as part of a coherent strategy – to scientific findings on the state of climate change and the means of mitigation. 

Designing Sustainability


Sustainability and fighting climate change are not identical.  One of the concerns about fighting climate change is that while most steps toward sustainability are in line with the quickest path to the greatest mitigation, practically speaking, some are not.  For example, farming almonds in California with less water than typical almond farming does indeed reduce the impact of climate-change-related water shortages, but it also encourages consumption of water-greedy almonds.  I would argue, in that case, that the more direct path towards climate-change mitigation (discouraging almond growing while reducing water consumption in general) is better than the quicker path towards sustainability (focusing on the water shortage).

This may seem arcane; but Grinspoon’s account of the fight between environmental traditionalists and eco-modernists suggests that the difference between climate-change-mitigation-first and sustainability-first is at least a major part of the disagreement between the two sides.  To put it another way, the traditionalists according to Grinspoon are advocating “no more people” ecosystems which effectively minimize carbon pollution, while the eco-modernists are advocating tinkering incrementally with the human-driven ecosystems that exist in order, apparently, to achieve long-term sustainability – thereby effectively putting sustainability at a higher priority than mitigation.

It may sound as if I am on the side of the traditionalists.  However, I am in fact on the side of whatever gets us to mitigation fastest – and here Grinspoon glaringly fails to mention a third alternative:  reverting to ecosystems with low-footprint human societies.  That can mean Native American, Lapp, Mongolian, or aborigine cultures, for example.  But it also means removing as far as possible the human impact exclusive of these cultures.

Let’s take a recent example:  the acquisition by Native Americans, with conservationist aid, of land around the Columbia River to enable restoration and sustainability (as far as can be managed with increasing temperatures) of salmon spawning.  This is a tradeoff:  in return for removal of almost all things exacerbating climate change, possibly including dams, the Native Americans get to restore their traditional culture as far as possible in toto.  They will be, as in their conceptions of themselves, stewards of the land. 

And this is not an isolated example (again, a recent book limns efforts all over the world to pursue the same strategy).  An examination of case studies shows that even in an impure form, it provides clear evidence of an OVERALL reduction in carbon pollution, while still providing “good enough” sustainability.  Nor do I see reflexive opposition from traditionalists on this one.

The real problem is that this approach is only going to be applicable to a minority of human habitats.  However, it does provide a track record, a constituency, and innovations useful for a far more aggressive approach towards mitigation than the one the eco-modernists appear to be reflexively taking.  In other words, it offers hope for an approach that uses human technology to design sustainable ecosystems, even in the face of climate changes, with the focus on the ecosystem first and the humans second.  The human technology can make the humans fit the ecosystem, with accommodation for present human practice, not the other way around.

In summary, I would say that Grinspoon’s idea of casting how to deal with mitigation and sustainability as a debate between traditionalists and modernists misses the point.   With all the cooperation in the world, we still must push the envelope of mitigation now in order to have a better chance for sustainability in the long term.  The strategy should be pushed as far as possible toward mitigation-driven changes of today’s human ecosystems, but can be pushed toward what worked with humans in the past rather than positing an either/or humans/no-humans choice.

Monday, March 5, 2018

Reading New Thoughts: Harper’s Fate of Rome and Climate Change’s Effect On Empires


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and nothing in it should be relied on.
Note:  My focus in these “reading new thoughts” posts is on new ways of thinking about a topic, not on a review of the books themselves.
Kyle Harper’s “The Fate of Rome” provides a new climate-change/disease take on that perennial hot topic, “reasons for the fall of Rome.”  Its implications for our day seem to me not a reason to assume inevitable catastrophe, but a caution that today’s seemingly resilient global economic structures are not infinitely flexible.  I believe that the fate of Rome does indeed raise further questions about our ability to cope with climate change. 
Climate Change’s Effect on the Roman Empire
As I understand it, “Fate of Rome” is presented as a drama in 5 acts:
1.      A “Roman Climatic Optimum” or “Goldilocks” climate in the Mediterranean allows the development of a state and economic system, centered on Rome and supporting a strong defensive military, that pushes Malthusian boundaries in the period up to about 150 AD.
2.      A transitional climatic period arrives and runs for 200 years.  For several decades, the Antonine Plague (probably smallpox) rages and some regions tip into drought, leading to 10-20% population losses and massive invasions against a weakened military.  Order is then restored at a slightly lower population level than that of 150 AD.
3.      At about 250 AD, another plague, the Plague of Cyprian (probably a viral hemorrhagic fever), arrives, accompanied by widespread drought in most key food-supplying regions (the Levant, North Africa, and above all Egypt).  Northern and eastern borders collapse as the supply of soldiers and provisions dries up.  Again, recovery takes decades, and a new political order is built up, breaking the power of Roman senators and creating a new city “center” at Constantinople.
4.      At around 350 AD, a Little Ice Age arrives.  Climate change on the steppe, stretching from Manchuria to southern Russia, drives the Xiongnu, or Huns, westward, pushing the Gothic societies on Rome’s northern border into Roman territory.  Rome’s western empire collapses as this pressure plus localized droughts leads to Gothic and Vandal conquests of Gaul, Spain, North Africa, and Italy.  Rome itself collapses in population without grain shipments from abroad, but the economic and cultural structure of the western Roman state is preserved by the Goths.  In the early 500s, as the Eastern Empire recovers much of its population and economic strength, Justinian reconquers North Africa and much of Italy, again briefly and partially reconstituting the old Roman Empire.
5.      At 540 AD or thereabouts, bubonic plague, driven by changes in climate for its animal hosts in central Asia, arrives from the East.  The details of the illness are horrific, and it is every bit as devastating as the medieval Black Death (also bubonic plague) – 50-60% of the population dead, rich and poor, city and countryside affected equally, with recurrences over many decades.  Only Gaul and the north, now oriented to a different economic and social network, are spared.  Villas, forums, and Roman roads vanish.  The Eastern frontier collapses, again because military recruits and supplies are decimated, and a prolonged deathbed struggle with Persia ends with the conquest of most of both empires by Islam in the mid-600s.  The only thing remaining of the old Roman state and its artifacts is a “rump” state in Anatolia and Greece. 
Implications for Today
As Harper appears to view it, the Roman Empire was a construct in some ways surprisingly modern, and in some ways very dissimilar to our own “tribe”-spanning clusters of nation-states.  It is similar to today in that it was a well-knit economic and cultural system with an effective central military and tax collection, able to strengthen itself by trade in an intercontinental network.  It is dissimilar in that the economic system (until near the end) funneled most trade and government through a single massive city (Rome) that required huge food supplies from all over the Mediterranean; in that for most of its existence the entire system rested on the ability of the center to satisfy the demands of regional “elites”, thus impoverishing the non-elite; and in that it had none of our modern knowledge of public health and medicine, and thus could not combat disease effectively. 
What does this mean for climate change affecting today’s global society?  There is a tendency to assume, as I have noted, that it is infinitely resilient:  once disaster abates in a particular area, outside trade and assistance complement remaining internal structures, the area recovers completely, and it then resumes an upward economic path.  Moreover, internal public health, medicine, and disaster relief, plus better advance warnings, typically minimize the extent of the disaster.  The recovery of utterly devastated Japan after WW II is an example.
However, the climate-change story of Rome suggests that one of these two “pillars of resilience” is not as sturdy as we think.  Each time climate-change-driven disasters occurred, the Roman Empire had to “rob Peter to pay Paul”, outside military pressures being what they were and trade networks being insufficient for disaster recovery.  This, in turn, made recovery from disaster far more difficult, and eventually impossible. 
Thus, recovery from an ongoing string of future climate-change-driven disasters may not be achievable through internal resources alone – and then the question comes down to whether all regional systems will face, simultaneously, ongoing disasters they cannot handle internally.  Granted, the facts that our system does not depend on regional elites and does not funnel resources to a single central city are signs of internal resilience beyond Rome’s.  But is the amount of additional internal resilience significant?  This does not seem clear to me.
What remains in my mind is the picture of climate-change-driven bubonic plague in Rome’s interconnected world.  People die in agony, in wracking fevers or with bloody eyeballs and bloody spit, or they simply drop dead where they are, in twos and threes.  Death is already almost inevitable when the first symptoms show, and there is no obvious escape.  If by some miracle you live through the first bout, you walk in a world of the stench of unburied bodies, alone where two weeks ago you walked with family, with friends, with communities.  All over your world, this is happening.  And then, a few years later, when you have begun to pick up the pieces and move back into a world of many people, it happens again.  And again.
If something like that happens today, our world is not infinitely resilient.  Not at all.   

Friday, March 2, 2018

The Transition From Agile Development to the Agile Organization Is Beginning to Happen


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and anything in it should not be relied on.
Recently, new publications from CA Technologies via Techtarget arrived in my in-box.  The surveys mentioned in them confirmed to me that the agile organization or agile business is beginning to be a Real Thing, not just vendor hype. 
Five years ago, I wrote but did not publish a book on the evidence of agile development’s benefits and its implications for creating a truly agile business or other organization.  Now, it appears not only that the theoretical foundation has been laid for implementing at least agile processes in every major department of the typical business, but also that a significant subset of businesses now think that they have implemented agile according to that foundation across pretty much all of the enterprise – yes, apparently in a few cases including legal (what does legal-department agility mean?  I have no idea, yet).
So what are the details of this evidence?  And what benefits of an agile organization seem to be proving themselves?

The Solid Foundation of Agile-Development Benefits

It is now approaching a truism that agile development delivers benefits compared to traditional software-development approaches.  My own survey during my brief re-up at Aberdeen Group 9 years ago suggested improvements in the 20-30% range for project cost and speed, product quality, and customer satisfaction (with the obvious implication that it also decreased product-development risk by around that amount, as Standish Group studies also showed).  One striking finding was that agile achieved comparable gains even when compared against traditional approaches specifically focused on cost and/or quality.
One CA pub (“Discover the Benefits of Agile:  The Business Case for a New Way to Work”) extends these findings to agile development plus agile project management.  It says that a “summary of research” finds that agile delivered 29% improvements in cost, 50% in quality, 97% in “productivity” (something like my “speed”), 400% (!) in customer satisfaction, and 470% (!!) in ROI (a proxy for revenue and profit), compared to the “least effective” traditional approaches. 
While this may sound like cherry-picking, my research showed that the most effective traditional approaches were not that much better than the least effective ones.  So I view this CA-cited result as the “practice effect”:  experience with agile development has actually increased its advantage over all traditional approaches – in the case of customer satisfaction and profitability, by really large amounts.
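To make these percentages a bit more concrete, here is a minimal arithmetic sketch in Python.  The baseline project figures are purely hypothetical; only the percentage improvements come from the CA-cited numbers above.

# Illustrative arithmetic only: baseline figures are hypothetical;
# the percentage gains are the CA-cited improvements discussed above.
baseline_cost = 1_000_000        # hypothetical traditional project cost, in dollars
baseline_defects = 200           # hypothetical defects per release
baseline_throughput = 10.0       # hypothetical features delivered per quarter

cost_gain, quality_gain, productivity_gain = 0.29, 0.50, 0.97

agile_cost = baseline_cost * (1 - cost_gain)                      # 29% lower cost
agile_defects = baseline_defects * (1 - quality_gain)             # 50% fewer defects
agile_throughput = baseline_throughput * (1 + productivity_gain)  # 97% more output

print(f"Cost:       ${baseline_cost:,.0f} -> ${agile_cost:,.0f}")
print(f"Defects:    {baseline_defects} -> {agile_defects:.0f}")
print(f"Throughput: {baseline_throughput:.0f} -> {agile_throughput:.1f} features per quarter")

Even read skeptically, the gap between gains of this size and the 20-30% range I measured is what I mean by the “practice effect.”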

The Theoretical Case For Business Agility

Note, as I did back when I did my survey, that agile development often delivers benefits that show up in the top and bottom line of the success-critical-software-developing business, even before the rest of the organization attempts to go agile.  So why would it be important to go the rest of the way and make most or all of the organization agile?
The CA pub “The State of Business Agility 2017” plus my own work suggest that potential benefits of “agile beyond software development” fall into three areas: 
1.      Hard-nosed top and bottom line benefits:  That is, effects on revenue, cost, and margin (profit).  For example, better “innovation management” via agile project management goes to the top line and eventually to the bottom line, and in some cases can be measured.
2.      “Fuzzy” corporate benefits, including competitive advantage, quality, customer satisfaction, speed to act and change strategies, and reduction in negative risks (e.g., project or IT-infrastructure failures) and “fire drills”.
3.      “Synergy” benefits stemming from most of the corporation being on the same “agile page” with coordinated and communicating agile processes, including better collaboration, better/faster employee strategy buy-in, better employee satisfaction through better corporate information, and better “alignment between strategy and execution.”
The results of the CA business-agility pub survey suggest that most respondents understand many but not all of these potential benefits before they take the first step towards the agile business.  I would guess, in particular, that they don’t realize the possible positive effects on combating “existential risks”, such as security breaches or physical IT-infrastructure destruction, as well as the effects on employee satisfaction and better strategy-execution alignment.

The Extent of Agile Organizations and Their Realized Benefits

Before I begin, I should note two caveats about the CA-reported results.  The first is that respondents are in effect self-selected to be farther along in agile-organization implementation and more positive about it.  These are, if you will, among the “best and the brightest.”  So actual agile-organization implementations “on the ground” are certainly far less advanced than the survey suggests.
Second, CA’s definition of “agile” leaves out an important component.  Because of its project-management focus, part of CA’s definition of agility is holding a project’s time and cost constant while varying its scope.  What that really means is that CA de-emphasizes the ability to make major changes in project aims at any point in the project.  In the agile-organization survey, this means an entire lack of focus on the ability to incrementally (and bottom-up!) change a strategy rather than just roll out a whole new one every few years.  And yet, “more agile” strategy change is at the heart of agility’s meaning and is business agility’s largest long-term potential benefit.  A rough way to picture the distinction is sketched just below.
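The sketch below is my own hypothetical illustration in Python, not CA’s model or any real tool: in the first class only scope can change inside a fixed time/cost box, while the second also allows the project’s goal itself to change mid-stream.

# Hypothetical sketch of the two notions of "agile" discussed above.
from dataclasses import dataclass, field

@dataclass
class TriangleAgileProject:
    """CA-style agility: time and cost are held constant; only scope flexes."""
    deadline_weeks: int
    budget: float
    scope: list[str] = field(default_factory=list)

    def replan(self, new_scope: list[str]) -> None:
        # Time and budget stay fixed; we only swap which items fit in the box.
        self.scope = new_scope

@dataclass
class StrategyAgileProject(TriangleAgileProject):
    """Fuller agility: the project's aims themselves can change at any point."""
    goal: str = ""

    def pivot(self, new_goal: str, new_scope: list[str]) -> None:
        # Incremental, bottom-up change to what the project is for,
        # not just to which features fit inside a fixed time/cost box.
        self.goal = new_goal
        self.replan(new_scope)

On this sketch, the survey’s lack of attention to incremental strategy change amounts to measuring replan() while never asking about pivot().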
How far are we towards the agile organization?  By some CA-cited estimates, 83% of all businesses have the first agile-development step at least in their plans, and a majority of IT projects are now “agile-centric.”  Bearing in mind caveat (1) above, I note that 22% of CA-survey respondents say they are just focused on extending “agile” to IT as a whole, 17% are also working on a plan for full business agility, 19% have gotten as far as organizational meetings to determine agility-implementation goals, and 39% are “well underway” with rollout.  Ignoring the question about “momentum” in partial departmental implementations for a moment, I also note that 47 % say IT is agile, 36% that Project Management is, and marketing, R&D, operations/manufacturing (are they counting lean methodologies as agile?), and sales (!) are a few percentage points lower. 
Getting back to partial implementation, service/support seems to be the “new frontier.”  Surprisingly, corporate communications/PR is among the laggards even in implementation, along with accounting/finance, HR, and legal.  What I find interesting about this list is that accounting and legal are even in the conversation, indicating that people really are planning for some degree of “agile” in them.  And, of course, the CEO’s agility isn’t even in the survey – as I said 9 years ago, the CEO is likely to be the last person in the organization to “go agile.”  Long discussion, not relevant here.
How about benefits?  In the hard-nosed category, agile organizations increase revenue 37% faster and deliver 30% greater profit (according to an outside survey).  For the rest of the benefits, there is far less concrete evidence – the CA business-agility survey apparently did not ask what business benefits respondents had already seen from their efforts.  What we can deduce is that most of the 39% of respondents who said they were “well underway” believe that they are achieving the benefits they already understand, including most of the “fuzzy” and “synergy” benefits cited above.
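For the hard-nosed numbers, a small compounding sketch may help.  The 5% baseline growth rate is my own assumption for illustration; only the “37% faster revenue growth” figure comes from the outside survey cited above.

# Illustrative compounding only: the 5% baseline growth rate is an assumption;
# "37% faster revenue growth" is the outside-survey figure cited above.
baseline_growth = 0.05                  # assumed annual revenue growth, traditional firm
agile_growth = baseline_growth * 1.37   # "37% faster" revenue growth

revenue_traditional = revenue_agile = 100.0   # both start at the same indexed revenue
for year in range(10):
    revenue_traditional *= 1 + baseline_growth
    revenue_agile *= 1 + agile_growth

print(f"After 10 years: traditional {revenue_traditional:.0f}, agile {revenue_agile:.0f}")
# With these assumptions, a ~1.9-point edge in growth rate compounds into a
# revenue base roughly 19% larger after a decade.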

Implications:  The Cat Is In The Details

At this point, I would ordinarily say to you the reader that you should move towards implementing an agile business/organization, bearing in mind that “the devil is in the details.”  Specifically, the CA surveys note that the complexity of the business and cultural/political opposition are key (and the usual) problems in implementation.  And, indeed, this would be a useful thing to know.
However, I also want to emphasize that there is a “cat” in the details of implementation:  a kind of Schrödinger’s Cat.  In quantum physics, as I understand it, a particle’s possible states (e.g., decayed, not decayed) coexist in superposition until we measure them (e.g., by “opening the box”).  Schrödinger imagined that state tied to the fate of a cat sealed in a box, so that we wouldn’t know whether the cat was alive or dead until we opened the box.  In the same way, we not only don’t know what and how much in the way of benefits we get until we examine the implementation details; we won’t know just how agile we really are until we “open the box.”
Why does that matter?  Because, as I have noted above, the really big long-term potential benefit of business agility is, well, agility:  the ability to turn strategically, instantly, on a dime, and thereby ensure the business’s long-term existence, as well as its comparative success, much more effectively.  Just because some departments seem to be delivering greater benefits right now, that doesn’t mean you have built the basic infrastructure to turn on a dime.
And so, the cat is in the details.  Please open the box by testing whether your implementation details allow you to change overall business strategies fast, and then, if there is no cat there, what should you do?
Why, change your agile-implementation strategy, of course.  Preferably fast.