Disclaimer: I am now retired, and am therefore no longer an expert on anything. This blog post presents only my opinions, and nothing in it should be relied on.
Over the past year, among other pastimes, I have read
several books on the latest developments of genetics and the theory of
evolution. The more I read, the more I
feel that my background in programming offers a fresh perspective on these
developments – a somewhat different way of looking at evolution.
Specifically, the way in which we appear to have evolved into various flavors of the species Homo sapiens sapiens suggests to me that we ascribe far too much purpose to the evolution of our various genes. Much if not most of our genetic material appears to have come from a process similar to what, in the New Hacker’s Dictionary and in computer slang more generally, is called a kluge.
Here’s a brief summary of the definition of kluge in The New
Hacker’s Dictionary (Eric Raymond), still my gold standard for wonderful hacker
jargon. Kluge (pronounced kloodj) is
“1. A Rube Goldberg device in
hardware/software, … 3. Something that
works for the wrong reason, … 5. A feature that is implemented in a ‘rude’ manner.” I would add that a kluge is just good enough
to handle a particular case, but may include side effects, unnecessary code,
bugs in other cases, and/or huge inefficiencies.
Our Genes as a Computer Program
(Note: most of the
genetics assertions in this post are plundered from Adam Rutherford’s “A Brief
History of Everyone Who Ever Lived”)
The genes within our genome can be thought of as a computer program in a very peculiar programming language. The primitives of that language are the nucleotide bases abbreviated A, G, C, and T. The statements of the language, effectively, are of the form IF (the state in the cell surrounding this gene is x AND the gene has value y) THEN (trigger a sequence of chemical reactions z, which may not change the state within the cell but does change the state of the overall organism).
Two peculiarities:
1. All these gene “statements” operate in parallel (the same state can trigger several genes).
2. The program is more or less “firmware” – that is, it can be changed, but over short periods of time it isn’t.
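The model above can be sketched in code. This is purely illustrative – the names (Gene, step, the trigger states) are my inventions, not real bioinformatics APIs – but it shows the two peculiarities: every gene “statement” is checked against the same cell state, and several can fire at once.

```python
# Toy sketch of the genome-as-program model described above.
# All names here (Gene, step, "uv_exposure", etc.) are illustrative
# inventions, not real bioinformatics APIs.

from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Gene:
    variant: str                      # the gene's value ("y" in the text)
    trigger_state: str                # the surrounding cell state that activates it ("x")
    reaction: Callable[[dict], None]  # chemical reactions "z": change the organism's state

def step(genes, cell_state: str, organism: dict) -> None:
    """Evaluate every gene 'statement' against the same snapshot of cell state.

    Because all conditions are checked against the same snapshot, several
    genes can fire on one state -- peculiarity 1 above.
    """
    fired = [g for g in genes if g.trigger_state == cell_state]
    for g in fired:
        g.reaction(organism)  # may change the organism, not the cell state

# Example: two different genes triggered by the same cell state.
organism = {"melanin": 0, "growth": 0}
genes = [
    Gene("A", "uv_exposure", lambda o: o.update(melanin=o["melanin"] + 1)),
    Gene("G", "uv_exposure", lambda o: o.update(growth=o["growth"] + 1)),
]
step(genes, "uv_exposure", organism)
print(organism)  # -> {'melanin': 1, 'growth': 1}: both genes fired on one state
```

The “firmware” peculiarity corresponds to the gene list itself being effectively read-only during the run; only mutation (below) rewrites it.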
Obviously, given evolution, the human genome “program” has changed – quite a lot. The mechanism for this is mutation: changes in the “state” outside an instance of DNA that physically alter an A, G, C, or T, delete or add genes, or change the order of the genes on one side of the chromosome or the other. Some of these mutations occur within the lifetime of an individual, as cells carry out their programmed imperatives to perform tasks and subdivide into new cells. Thus, one type of cancer (we now know) arises when a mutation deletes some genes on one side of the DNA pairing, resulting in deletion of the statement (“once the cell has finished this task, do not subdivide the cell”).
It turns out that some individuals are much less susceptible to this
cancer because they have longer chains of “spare genes” on that side of the
DNA, so that it takes much longer for a steady statistically-random stream of
deletions to result in statement deletion.
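The “spare genes” observation can be illustrated with a toy simulation. The deletion rate and copy counts below are made-up numbers, not measured values; the point is only that the “stop subdividing” statement is lost only when every copy of it is gone, so more spare copies means a much longer wait under a steady random stream of deletions.

```python
# Toy model of the "spare genes" protection described above.
# The deletion probability and copy counts are illustrative, not measured.

import random

def generations_until_statement_lost(spare_copies: int,
                                     p_delete: float = 0.01,
                                     rng=None) -> int:
    """Count generations of random deletion until no copy of the
    'do not subdivide' statement survives."""
    rng = rng or random.Random(0)
    copies = spare_copies
    generations = 0
    while copies > 0:
        generations += 1
        if rng.random() < p_delete:  # a steady, statistically random deletion stream
            copies -= 1
    return generations

rng = random.Random(42)
short = generations_until_statement_lost(spare_copies=1, rng=rng)
long_ = generations_until_statement_lost(spare_copies=10, rng=rng)
print(short, long_)  # more spare copies -> the statement tends to survive far longer
```

Losing n copies takes at least n generations in this model, and on average about n/p_delete, which is the statistical cushion the “spare genes” provide.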
Evolution as an Endless Series Of Kluges
Evolution, in our computer-program model, is the arrival of new mutations – ones not already present somewhere in the population of the species. The accepted theory of the constraints that determine which new mutations prosper over the long run is natural selection.
Natural selection has been approximated as “survival of the fittest” – more precisely, survival of genes and gene variants because they are the best adapted to their physical environment (including competitors, predators, mates, and climate) and therefore are most likely to survive long enough to reproduce and out-compete rivals. The sequencing of the human genome (and those of other species) has given us a much better picture of evolution in action, as well as of human evolution in the recent past. Applied to the definition of natural selection, it suggests somewhat different conclusions:
· The typical successful mutation is not the best for the environment, but simply one that is “good enough”. An ability to distinguish ultraviolet and infrared light, as the mantis shrimp can, would clearly be best suited to most environments; yet most other species, including humans, wound up unable to see outside the “visible spectrum.” Likewise, light entering the eye is interpreted at the back of the eye, whereas the front of the eye would be a better design.
· Just because a mutation is harmful in a new environment, that does not mean it will go away entirely. The gene variant causing sickle-cell anemia is present in 30-50% of the population in much of Africa, the Middle East, the Philippines, and Greece. Its apparent effect is to allow those who would otherwise die early in life from malaria to survive through most of the period when reproduction can happen. However, indications are that the mutation is disappearing slowly, if at all, among descendants living in areas not affected by malaria. In other words, the relative lack of reproductive success for those afflicted by sickle-cell anemia in the new environment is not enough to eradicate it from the population. In the new environment, the sickle-cell anemia variant is a “bug”; but it’s not enough of a bug for natural selection to operate.
· The appendix serves no useful purpose in our present environment – it’s just unnecessary code, with appendicitis as a potential “side effect”. There is no indication that the appendix is going away. Nor, despite recent sensationalist claims, is red hair, which may itself be a side effect of genes in northern climes having less need for eumelanin to protect against the damaging effects of direct sunlight.
· Most human traits and diseases, we are finding, are not determined by one mutation in one gene, but rather are the “side effects” of many genes. For example, to the extent that autism is heritable (and remembering that autism is a spectrum of symptoms and therefore may be multiple diseases), no one gene has been shown to explain more than a fraction of the heritable part.
In other words, evolution seems more like a series of kluges:
· It has resulted in a highly complex set of code, in which it is very hard to determine which gene-variant “statement” is responsible for what;
· Compared to a set of genes designed from the start to result in the same traits, it is a “rude” implementation (inefficient and with lots of side effects), much like a program consisting mostly of patches;
· It appears to involve a lot of bugs. For example, one estimate is that there have been at least 160,000 new human mutations in the last 5,000 years, and about 18% of these appear to increase inefficiency or to be potentially harmful – but not, it seems, harmful enough to trigger natural selection.
Variations in Human Intelligence and the Genetic Kluge
The notion of evolution as a series of kluges resulting in
one giant kluge – us – has, I believe, an interesting application to debates
about the effect of genes vs. culture (nature vs. nurture) on “intelligence” as
measured imperfectly by IQ tests.
Tests of nature vs. nurture have not yet established the percentage of each involved in intelligence variation (Rutherford says only that variations from gene variance “are significant”). A 2013 survey of experts at a conference shows that the majority think 0-40% of intelligence variation is caused by gene variation, the rest by “culture”. However, the question that has caused debate is how much of that gene variance is variance between individuals in the overall human population, and how much is variance between groups – typically, so-called “races” – each with its own different “average intelligence.”
I am not going to touch on the sordid history of racial profiling at this point, although I am convinced it is what makes proponents of the “race” theory blind to recent evidence to the contrary. Rather, I’m going to conservatively follow the chain of logic that suggests group gene variance is more important than individual variance.
We have apparently done some testing of gene variance between groups. The second-largest variance is between Africans (not African-Americans) and everyone else – but the striking feature is how very little difference (compared to overall gene variation in humans) that distinction involves. The same process has been carried out to isolate even smaller amounts of variance: East Asians and Europeans/Middle Easterners show up in the top 6, but Jews, Hispanics, and Native Americans don’t show up in the top 7.
What this means is that, unless intelligence is affected by
one or only a few genes falling in those “group variance” categories, most of
the genetic variance is overwhelmingly likely to be individual. And, I would argue, there’s a very strong
case that intelligence is affected by lots of genes, as a side-effect of
kluges, just like autism.
First, over most of human history until the last 250 years, the great bulk of humans, African or not, have been hunters or farmers, with no reading, writing, or test-taking skills, and with natural selection for particular environments apparently focused on the physical (e.g., lactose tolerance among cattle-herding Europeans) rather than the intelligence-related (e.g., larger brains or new brain capabilities). That is, there is little evidence for natural selection targeted at intelligence but lots for natural selection targeted at other things.
Second, as I’ve noted, it appears that human traits and diseases in general usually involve large numbers of genes. Why should “intelligence” (which, at a first approximation, applies mostly to humans) be different? Statistically, it shouldn’t. And as of yet, no one has even been able to find one gene significantly connected to intelligence – which again suggests lots of small-effect genes.
So let’s imagine a particular case. 100 genes affect intelligence variation, in equal amounts (1% each, plus or minus). Group A and Group B share all but 10 of these genes, so each group carries 95 of the 100 (90 shared plus 5 unique to it). To cook the books further, Group A’s 5 unique genes are all pluses, and Group B’s 5 are all minuses (statistically, each group should average equal amounts of plus and minus). In addition, gene variance as a whole is 50% of overall variation. Then 10/95 of the genetic variation in intelligence (about 10.5%) is explained by whether an individual is in Group A or B. This translates to about 5.2% of the overall variation being due to the genetics of the group, 44.8% being due to individual genetic variation, and 50% being due to nurture.
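The arithmetic of this thought experiment can be checked directly. All figures below are the hypothetical assumptions from the scenario above, not real heritability estimates.

```python
# Checking the thought experiment's arithmetic. All figures are the
# hypothetical assumptions from the text, not real heritability estimates.

genes_total = 100       # genes affecting intelligence, 1% each
genes_differing = 10    # genes not shared between Group A and Group B
genes_per_group = 95    # 90 shared + 5 unique to each group
genetic_share = 0.50    # assumed fraction of overall variation that is genetic

# Fraction of the *genetic* variation explained by group membership.
group_fraction_of_genetic = genes_differing / genes_per_group
print(f"{group_fraction_of_genetic:.1%}")  # about 10.5%

# Split of *overall* variation. Exact values are about 5.3% and 44.7%;
# the text rounds them to 5.2% and 44.8%.
group_overall = genetic_share * group_fraction_of_genetic
individual_overall = genetic_share - group_overall
nurture_overall = 1 - genetic_share
print(f"group: {group_overall:.1%}, individual: {individual_overall:.1%}, "
      f"nurture: {nurture_overall:.1%}")
```

Note that even in this deliberately rigged case, group genetics accounts for only about a twentieth of the overall variation.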
Still, someone might argue, those nature vs. nurture survey participants have it wrong: gene variation explains all or almost all of intelligence variation. Well, that still means that nurture has 5 times the effect that belonging to a group does. Moreover, under the kluge model, the wider the variation between Race A and Race B – between, say, Jewish-Americans and African-Americans – THE MORE LIKELY IT IS THAT NURTURE PLAYS A LARGE ROLE. First of all, “races” do not correspond at all well to the genetic groups I described earlier, and so are more likely to have identical average intelligence than the groups I cited. Second, because group variation is so much smaller than individual variation, group genetics is capable of much less variation than nurture (0-10.5% of overall variation, vs. 0-100% for nurture). And I haven’t even bothered to discuss the Flynn Effect – the increase in measured intelligence over time, equal across groups and far more rapid than natural selection can operate – a clear indication that nurture is involved.
Variation in human intelligence, I say, isn’t survival of
the smartest. It’s a randomly
distributed side effect of lots of genetic kluges, plus the luck of the draw in
the culture and family you grow up in.