From: Morgan Grey <cynical_prophet@yahoo.com>
To: <DebunkCreation@yahoogroups.com>
Reply-to: <DebunkCreation@yahoogroups.com>
Date: Sun Sep 16, 2001 8:36 pm
Message: 21453
Subject: ID-Commentary: "Explaining Specified Complexity"


This is the first of what I hope will be many weekly
posts, in which I present a document by one of the
leaders of the ID movement, together with my comments
on it. I hope this feature will spark increased
discussion in the group and, in particular, that any
lurking IDers will see it as an opportunity to argue
the validity of ID.

My first such commentary will be on Dembski's
"Explaining Specified Complexity", posted on Metanexus
at <http://www.metanexus.org/archives/message_fs.asp?&listtype=Magazine&ARCHIVEID=3066>,
and reproduced at <http://www.leaderu.com/offices/dembski/docs/bd-specified.html>
[also online here]. Although I have already made a few comments about this
article ([online here]), I feel that it still contains material worth
commenting on, especially since I have since read all of Dembski's TDI,
thereby clearing up some misunderstandings on my own
part about what Dembski is saying.

(Note: The Meta post is badly formatted and contains all
kinds of annoying "=20"'s, which I've removed. If
anyone doubts my integrity in reproducing the text,
they are welcome to refer back to the original.)

-------------------------------------------------------
Meta 139: Dembski on "Explaining Specified Complexity"

<grassie@VOICENET.COM> William Grassie
Meta 139. 1999/09/13. Approximately 1883 words.

BG> Below is a column entitled "Explaining Specified
BG> Complexity" by William Dembski at Baylor
BG> University in Texas. Dembski discusses whether
BG> evolutionary algorithms can generate
BG> actual "specified complexity" in nature, as
BG> opposed to merely the appearances thereof (i.e.,
BG> unspecified or randomly generated complexity).
BG> Dembski believes these problems in probability
BG> make plausible a concept of intelligence involved
BG> in evolution. Your comments are welcome on
BG> <reiterations@meta-list.org>.
BG>
BG> -- Billy Grassie
BG>
WAD> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=- EXPLAINING
WAD> SPECIFIED COMPLEXITY
WAD>
WAD> William A. Dembski
WAD>
WAD> Michael Polanyi Center Baylor University Waco,
WAD> Texas 76798

Dembski has a long record of claiming that
"[a]lgorithms and natural laws are in principle
incapable of explaining the origin of information"
(see for example "Intelligent Design", pp. 160), and
in a response to a review of "The Design Inference",
he directs readers to two Meta-posts, claiming that
these would show "that the Darwinian mechanism -and
indeed any non-telic mechanism- is incapable of
generating specified complexity" (see
<http://www.arn.org/docs/dembski/wd_responsetowiscu.htm>).
He identifies these two posts as "Explaining Specified
Complexity" and "Why Evolutionary Algorithms Cannot
Explain Specified Complexity" (the latter of which I'll
be commenting on next week), and it will be interesting
to see whether they really do contain the scathing
critique that Dembski claims they do.

WAD> In his recent book The Fifth Miracle, Paul Davies
WAD> suggests that any laws capable of explaining the
WAD> origin of life must be radically different from
WAD> scientific laws known to date. The problem, as he
WAD> sees it, with currently known scientific laws,
WAD> like the laws of chemistry and physics, is that
WAD> they are not up to explaining the key feature of
WAD> life that needs to be explained. That feature is
WAD> specified complexity. Life is both complex and
WAD> specified. The basic intuition here is
WAD> straightforward. A single letter of the alphabet
WAD> is specified without being complex (i.e., it
WAD> conforms to an independently given pattern but is
WAD> simple). A long sequence of random letters is
WAD> complex without being specified (i.e., it
WAD> requires a complicated instruction-set to
WAD> characterize but conforms to no independently
WAD> given pattern). A Shakespearean sonnet is both
WAD> complex and specified.

Notice Dembski's use of "specified complexity" as well
as his claim that "[l]ife is both complex and
specified" and that seeing this is "straightforward".

About a week ago, when I had started reading TDI, I
thought that the only difference between it and
Dembski's popular writings was that TDI couldn't
detect agency ([online here]), a thing I already knew from Wesley Elsberry's
excellent posts. However, after having read the rest
of TDI, as well as other critiques of it, I have
noticed several other inconsistencies in Dembski's
representation of his explanatory filter.

Take "specified complexity". While this term is
nowhere to be found in TDI, Dembski often claims that
the presence of this is a reliable indicator of design
(intelligent or not). And, as Dembski claims above,
"specified complexity" really *is* "straightforward"
to recognize. All one need to do is to check whether
the feature in question "conforms to an independently
given pattern" and if it "requires a complicated
instruction-set to characterize". Of course, given
this, few people would object that a DNA molecule
contains "specified complexity".

Contrast this with Dembski's treatment of design in
TDI. To "make a successful design inference" regarding
an event E, one needs to calculate the probability of
E with regard to *all* "relevant chance hypotheses"
(see pp. 50-1 in TDI), and to determine that E is
specified. A schematic version of that procedure is
sketched below.
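
To make clear what that actually demands, here is a
minimal sketch of the filter's decision logic as I read
TDI. The function, the hypothesis objects, and the
thresholds are my own stand-ins, not Dembski's notation;
the point is simply that *every* relevant chance
hypothesis has to be supplied and evaluated before
"design" can be returned.

# A schematic sketch (mine, not Dembski's own notation) of the design
# inference as described in TDI: "design" comes out only if the event is
# specified AND has small probability under *every* relevant chance
# hypothesis.  The "high" threshold is an arbitrary stand-in; "small" is
# set to roughly the 500-bit / 10^-150 universal bound discussed later.

def explanatory_filter(is_specified, chance_hypotheses, prob_given,
                       high=0.5, small=1e-150):
    """Classify an event E as 'regularity', 'chance', or 'design'."""
    probs = [prob_given(h) for h in chance_hypotheses]
    if any(p >= high for p in probs):
        return "regularity"   # some chance hypothesis makes E likely
    if is_specified and all(p < small for p in probs):
        return "design"       # specified, and improbable under *all* hypotheses
    return "chance"

Filling in those stand-ins, of course, is where all the
work hides.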

This is of course a huge operation, not at all as
simple as recognizing "specified complexity" as
advocated by Dembski above. Indeed, in all of TDI,
there is not a *single* example of Dembski applying
his explanatory filter to anything, let alone a DNA
molecule. Not even in his discussion of Caputo does
he consider any chance hypothesis other than "Caputo
using an urn model with two balls, one ball for each
party".

In TDI, Dembski implores us to "Do the probability
calculation!" Yet nowhere in any of his writings has
he actually *done* the calculation. Instead, in popular
writings, he points to the fact that the DNA molecule
contains "specified complexity" without ever
informing his readers that this doesn't mean his
explanatory filter would label it "design".

This is, IMHO, a far graver inconsistency than his
muddlement of design and agency.

[...]
WAD> I submit that the problem of explaining specified
WAD> complexity is even worse than Davies makes out in
WAD> The Fifth Miracle. Not only have we yet to
WAD> explain specified complexity at the origin of
WAD> life, but evolutionary algorithms fail to explain
WAD> it in the subsequent history of life as well.
WAD> Given the growing popularity of evolutionary
WAD> algorithms, such a claim may seem ill-conceived.
WAD> But consider a well known example by Richard
WAD> Dawkins (The Blind Watchmaker, pp. 47-48) in
WAD> which he purports to show how a cumulative
WAD> selection process acting on chance can generate
WAD> specified complexity.

Dembski has promised to show that natural selection
is *in principle* incapable of creating specified
complexity. Yet, when push comes to shove, he only
looks at a particular evolutionary algorithm,
namely Dawkins' famous "Weasel applet", which, in
Dawkins' own words, is "misleading in important
ways":
"Although the monkey/Shakespeare model is
useful for explaining the distinction between
single-step selection and cumulative
selection, it is misleading in important ways.
One of these is that, in each generation
of selective "breeding", the mutant "progeny"
phrases were judged according to the criterion
of resemblance to a *distant ideal* target,
the phrase METHINKS IT IS LIKE A WEASEL. Life
isn't like that. Evolution has no long-term
goal. There is no long-distance target, no
final perfection to serve as a criterion for
selection, although human vanity cherishes the
absurd notion that our species is the
final goal of evolution." (Dawkins, R., 1996, "The
Blind Watchmaker", pp. 50, original emphasis)
In other words, Dembski has simply chosen a
"misleading", albeit popular, version of natural
selection, instead of dealing with the *real* models
used by researchers in evolutionary algorithms. For
example, Dembski could have chosen to deal with how an
evolutionary algorithm solves the "500 city Traveling
Salesman Problem", as Wesley has asked him to do for
years ([online here]).
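
For anyone who has never seen such an algorithm in
action, below is a toy sketch of the idea: my own
illustration with random cities and arbitrary
parameters, not the 500-city benchmark Wesley refers
to. The relevant feature is that there is no distant
ideal target tour anywhere in the program; selection
just keeps whichever mutant tour is shorter.

import math
import random

# A toy evolutionary search for a short traveling-salesman tour.  This
# is my own illustrative sketch (random cities, arbitrary parameters),
# not the 500-city problem Wesley refers to.  There is no pre-specified
# target tour: fitness is simply tour length.

random.seed(1)
CITIES = [(random.random(), random.random()) for _ in range(30)]

def tour_length(tour):
    return sum(math.dist(CITIES[tour[i]], CITIES[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def mutate(tour):
    # Reverse a randomly chosen segment of the tour (a "2-opt"-style move).
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

best = list(range(len(CITIES)))
random.shuffle(best)
best_len = tour_length(best)
print("random tour length: ", round(best_len, 3))

for _ in range(20000):
    child = mutate(best)
    child_len = tour_length(child)
    if child_len <= best_len:          # selection: shorter tours survive
        best, best_len = child, child_len

print("evolved tour length:", round(best_len, 3))

The evolved tour comes out far shorter than the random
starting tour, even though nothing in the program
describes the answer in advance.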

WAD> He starts with the following target sequence, a
WAD> putative instance of specified complexity:
WAD>
WAD> METHINKS*IT*IS*LIKE*A*WEASEL

Dembski's use of "putative" indicates that he thinks
that METHINKS*IT*IS*LIKE*A*WEASEL really isn't
an instance of specified complexity. However, after
having followed Dembski's (often confusing) advice on
how to calculate specified complexity, I have reached
the conclusion that METHINKS*IT*IS*LIKE*A*WEASEL
really *is* an instance of "specified complexity".
Anyone interested in the calculations as well as the
reasons for them can consult the not-so-technical
appendix.

But since, as we have seen, "specified complexity" is
*not* the same as "design", it is still a possibility
that METHINKS*IT*IS*LIKE*A*WEASEL will not be labelled
"design" by the explanatory filter.

WAD> (he considers only capital Roman letters and
WAD> spaces, here represented by bullets-thus 27
WAD> possibilities at each location in a symbol
WAD> string).
WAD>
WAD> If we tried to attain this target sequence by
WAD> pure chance (for example, by randomly shaking out
WAD> scrabble pieces), the probability of getting it
WAD> on the first try would be around 10 to the -40,
WAD> and correspondingly it would take on average
WAD> about 10 to the 40 tries to stand a better than
WAD> even chance of getting it. Thus, if we depended
WAD> on pure chance to attain this target sequence, we
WAD> would in all likelihood be unsuccessful. As a
WAD> problem for pure chance, attaining Dawkins's
WAD> target sequence is an exercise in generating
WAD> specified complexity, and it becomes clear that
WAD> pure chance simply is not up to the task.

I consider it interesting that while the complexity
of the sequence falls considerably short of Dembski's
own bound of 500 bits (see the appendix), he still
thinks that "pure chance simply is not up to the task"
of generating it. This indicates that not even Dembski
takes his probability bound seriously.

WAD> But consider next Dawkins's reframing of the
WAD> problem. In place of pure chance, he considers
WAD> the following evolutionary algorithm: (i) Start
WAD> out with a randomly selected sequence of 28
WAD> capital Roman letters and spaces, e.g.,
WAD>
WAD> WDL*MNLT*DTJBKWIRZREZLMQCO*P
WAD>
WAD> (note that the length of Dawkins's target
WAD> sequence, METHINKS*IT*IS*LIKE*A*WEASEL, comprises
WAD> exactly 28 letters and spaces); (ii) randomly
WAD> alter all the letters and spaces in this initial
WAD> randomly-generated sequence; (iii) whenever an
WAD> alteration happens to match a corresponding
WAD> letter in the target sequence, leave it and
WAD> randomly alter only those remaining letters that
WAD> still differ from the target sequence.

This is a complete misrepresentation of Dawkins'
"Weasel applet". I refuse to believe that anyone who
has read "The Blind Watchmaker" can write the above.

The "Weasel applet" does *not* "randomly alter all the
letters and spaces", and since it has no way of
knowing which alterations "happens to match a
corresponding letter in the target sequence", there is
no way it can "leave it and randomly alter only those
remaining letters that still differ from the target
sequence."

Dawkins' "Weasel applet" "breeds" a number of
"progeny", in every case randomly changing *some* of
the letters, then seeing what "progeny" has the
closest *overall* match with the target sequence. As
though the "Weasel applet" wasn't misleading enough,
Dembski has made it even more so, making it in no way
a representive of the process (natural selection) he's
criticizing.
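
To make the difference concrete, here is my own
reconstruction of cumulative selection as Dawkins
describes it (he gives neither his population size nor
his mutation rate, so those numbers are guesses). Note
that no letter is ever "locked": every position in
every "progeny" is free to mutate in every generation,
and selection compares only whole phrases.

import random

# My reconstruction of Dawkins-style cumulative selection.  Population
# size and per-letter mutation rate are guesses; Dawkins gives neither.
# Crucially, correct letters are NOT frozen: any letter of any "progeny"
# may mutate in any generation.  Only whole-phrase resemblance to the
# target drives selection.

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ*"
TARGET = "METHINKS*IT*IS*LIKE*A*WEASEL"

def resemblance(phrase):
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in phrase)

current = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while current != TARGET and generation < 10000:
    progeny = [mutate(current) for _ in range(100)]
    current = max(progeny, key=resemblance)   # best *overall* match breeds
    generation += 1
    if generation % 10 == 0 or current == TARGET:
        print(generation, current)

Run it and the phrase converges on the target without
any letter ever being "left alone" the way Dembski
describes; all the work is done by breeding many
variants and keeping the best whole phrase.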

WAD> In very short order this algorithm converges to
WAD> Dawkins's target sequence. In The Blind
WAD> Watchmaker, Dawkins (p. 48) provides the
WAD> following computer simulation of this algorithm:
WAD>
WAD> (1)WDL*MNLT*DTJBKWIRZREZLMQCO*P
WAD>
WAD> (2)WDLTMNLT*DTJBSWIRZREZLMQCO*P
WAD> ...
WAD> (10) MDLDMNLS*ITJISWHRZREZ*MECS*P
WAD> ...
WAD> (20) MELDINLS*IT*ISWPRKE*Z*WECSEL
WAD> ...
WAD> (30) METHINGS*IT*ISWLIKE*B*WECSEL
WAD> ...
WAD> (40) METHINKS*IT*IS*LIKE*I*WEASEL
WAD> ...
WAD> (43) METHINKS*IT*IS*LIKE*A*WEASEL
WAD>
WAD> Thus, Dawkins's simulation converges on the
WAD> target sequence in 43 steps. In place of 10 to
WAD> the 40 tries on average for pure chance to
WAD> generate the target sequence, it now takes on
WAD> average only 40 tries to generate it via an
WAD> evolutionary algorithm.
WAD>
WAD> Although Dawkins uses this example to illustrate
WAD> the power of evolutionary algorithms, the example
WAD> in fact illustrates the inability of evolutionary
WAD> algorithms to generate specified complexity. We
WAD> can see this by posing the following question:
WAD> Given Dawkins's evolutionary algorithm, what
WAD> besides the target sequence can this algorithm
WAD> attain?

This is nothing but a red herring. Nowhere in TDI,
nor in any of his popular writings, has Dembski
identified "ability to attain anything besides the
target sequence" as a requirement for having specified
complexity.

And if this criterion were applied to human activities
as well, it is doubtful how much of what we do
could be considered to exhibit "specified complexity".
If by "target sequence" Dembski means "what one sets
one's mind to doing before doing it", then only
people with nerve or brain damage, unable to do
what they intended, could be considered producers of
specified complexity.

WAD> Think of it this way. Dawkins's evolutionary
WAD> algorithm is chugging along; what are the
WAD> possible terminal points of this algorithm?
WAD> Clearly, the algorithm is always going to
WAD> converge on the target sequence (with
WAD> probability 1 for that matter). An evolutionary
WAD> algorithm acts as a probability amplifier.
WAD> Whereas it would take pure chance on average 10
WAD> to the 40 tries to attain Dawkins's target
WAD> sequence, his evolutionary algorithm on average
WAD> gets it for you in the logarithm of that number,
WAD> that is, on average in only 40 tries (and with
WAD> virtual certainty in a few hundred tries).
WAD>
WAD> But a probability amplifier is also a complexity
WAD> attenuator. For something to be complex, there
WAD> must be many live possibilities that could take
WAD> its place.

Notice Dembski's terminological sleight-of-hand here.
First, he determines that the sequence indeed *does*
contain "specified complexity". Then, he conflates
"specified complexity" with "design" (which, as we
have seen, there is no basis for doing). And finally,
he submits it to the explanatory filter, which labels
it "regularity", since it is a high-probability event
(see Section 2.1 of TDI). This, of course, leads to a
contradiction, since specified complexity is *not*
"design". Therefore, Dembski must now forget what he
initially claimed, namely, that the sequence exhibits
specified complexity. How this is done will become
"apparent"...

[...]
WAD> It follows that Dawkins's evolutionary algorithm,
WAD> by vastly increasing the probability of getting
WAD> the target sequence, vastly decreases the
WAD> complexity inherent in that sequence. As the sole
WAD> possibility that Dawkins's evolutionary algorithm
WAD> can attain, the target sequence in fact has
WAD> minimal complexity (i.e., the probability is 1
WAD> and the complexity, as measured by the usual
WAD> information measure, is 0). In general, then,
WAD> evolutionary algorithms generate not true
WAD> complexity but only the appearance of complexity.
WAD> And since they cannot generate complexity, they
WAD> cannot generate specified complexity either.

Now we also have "apparent" and "actual specified
complexity", where "apparent specified complexity" is
anything produced by an evolutionary algorithm, while
"actual specified complexity" is that produced by
intelligent agents.

But since "specified complexity" is supposed to be
what divides the products of intelligence from those of
other processes, the only way we can determine whether
the sequence METHINKS*IT*IS*LIKE*A*WEASEL exhibits
apparent or actual specified complexity is to see...
whether it was made by an evolutionary algorithm.

WAD> This conclusion may seem counterintuitive,
WAD> especially given all the marvelous properties
WAD> that evolutionary algorithms do possess. But the
WAD> conclusion holds. What's more, it is consistent
WAD> with the "no free lunch" (NFL) theorems of David
WAD> Wolpert and William Macready, which place
WAD> significant restrictions on the range of problems
WAD> genetic algorithms can solve.

According to Wesley, Dembski has seriously distorted
Wolpert and Macready's "No Free Lunch" theorems:
"NFL says that when you average the
performance of an algorithm over all "cost
functions" of a problem, it performs no better
on average than blind search. That is for
*any* algorithm, not just evolutionary
computation (which Dembski likes to imply).
This goes to early claims that certain forms of
evolutionary computation could be considered
as general problem-solvers that could be
deployed without much domain knowledge of a
problem. NFL says that if you are concerned
about the relative efficiency of getting a
solution, you have to apply domain knowledge
of the problem and cost function to select an
algorithm with good performance on that
problem and cost function. NFL isn't about
essential capacity of an algorithm to produce
a solution; it is about comparative efficiency
of algorithms in producing solutions.

It's my opinion that Dembski misconstrues or
misunderstands what the NFL theorems say.
I've passed word along that Dembski's choice
of "No Free Lunch" for the title of a book
that is due out this fall sets him up for
embarrassment. That's still the title, so far
as I know. It will be interesting to see
how the reviews turn out. The introduction to the
book is online at ."
(Wesley R. Elsberry, [online here])
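
Wesley's point can be checked by brute force on a toy
search space. The sketch below is my own illustration,
not Wolpert and Macready's formalism: it enumerates
every cost function on a four-point space and shows
that two quite different non-revisiting search
strategies find equally good values on average, even
though on any particular function one may beat the
other.

from itertools import product

# A brute-force toy illustration of the "no free lunch" idea (mine, not
# Wolpert & Macready's proof): averaged over ALL cost functions on a
# search space, any strategy that never revisits a point does equally
# well.  Search space: 4 points; possible values: 0, 1, 2; each strategy
# examines exactly 2 distinct points and reports the best value it saw.

POINTS = range(4)
VALUES = range(3)

def fixed_order(f):
    # Always examine points 0 and 1, in that order.
    return max(f[0], f[1])

def adaptive(f):
    # Examine point 3 first, then choose the next point based on what
    # was seen there.
    first = f[3]
    second = f[0] if first == 0 else f[2]
    return max(first, second)

all_functions = list(product(VALUES, repeat=len(POINTS)))   # 3^4 = 81 functions

for strategy in (fixed_order, adaptive):
    average_best = sum(strategy(f) for f in all_functions) / len(all_functions)
    print(strategy.__name__, "average best value:", round(average_best, 4))

Both strategies average exactly the same best value
over all 81 functions. The only way to do better on a
*particular* problem is to exploit knowledge of that
problem, which is exactly Wesley's point about domain
knowledge and relative efficiency.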
WAD> The claim that evolutionary algorithms can only
WAD> generate the appearance of specified complexity
WAD> is reminiscent of a claim by Richard Dawkins. On
WAD> the opening page of his The Blind Watchmaker he
WAD> states, "Biology is the study of complicated
WAD> things that give the appearance of having been
WAD> designed for a purpose." Just as the Darwinian
WAD> mechanism does not generate actual design but
WAD> only its appearance, so too the Darwinian
WAD> mechanism does not generate actual specified
WAD> complexity but only its appearance.
WAD>
WAD> But this raises the obvious question, whether
WAD> there might not be a fundamental connection
WAD> between intelligence or design on the one hand
WAD> and specified complexity on the other. In fact
WAD> there is. There's only one known source for
WAD> producing actual specified complexity, and that's
WAD> intelligence.

Of course, if "actual specified complexity" is defined
as "that specified complexity that is produced by
intelligence", then the above is *obvious*. But then
the claim "life contains actual specified complexity"
becomes false, since we do *not* know that life was
produced by intelligence. Indeed, that is what
Dembski's filter is supposed to demonstrate in the
first place.

WAD> In every case where we know the causal history
WAD> responsible for an instance of specified
WAD> complexity, an intelligent agent was involved.

Of course, if you a priori eliminate all instances of
non-intelligent processes producing specified
complexity, then you would hardly be surprised to find
that "[i]n every case where we know the causal
history responsible for an instance of specified
complexity, an intelligent agent was involved."

[...]
WAD> Thus, to claim that laws, even radically new
WAD> ones, can produce specified complexity is in my
WAD> view to commit a category mistake. It is to
WAD> attribute to laws something they are
WAD> intrinsically incapable of delivering-indeed, all
WAD> our evidence points to intelligence as the sole
WAD> source for specified complexity. Even so, in
WAD> arguing that evolutionary algorithms cannot
WAD> generate specified complexity and in noting that
WAD> specified complexity is reliably correlated with
WAD> intelligence, I have not refuted Darwinism or
WAD> denied the capacity of evolutionary algorithms to
WAD> solve interesting problems. In the case of
WAD> Darwinism, what I have established is that the
WAD> Darwinian mechanism cannot generate actual
WAD> specified complexity. What I have not established
WAD> is that living things exhibit actual specified
WAD> complexity. That is a separate question.

Some much-needed honesty from Dembski! But unless
Dembski has an empirical way of distinguishing "actual
specified complexity" from "[apparent] specified
complexity" without knowing the causal story, the
oft-repeated claim from IDers that "organisms
demonstrate clear, empirically detectable marks of
being intelligently caused" [Dembski, 1999, "Intelligent
Design", pp. 110] is nothing but hot air.

WAD> Does Davies's original problem of finding
WAD> radically new laws to generate specified
WAD> complexity thus turn into the slightly modified
WAD> problem of finding radically new laws that
WAD> generate apparent-but not actual-specified
WAD> complexity in nature? If so, then the scientific
WAD> community faces a logically prior question,
WAD> namely, whether nature exhibits actual specified
WAD> complexity. Only after we have confirmed that
WAD> nature does not exhibit actual specified
WAD> complexity can it be safe to dispense with design
WAD> and focus all our attentions on natural laws and
WAD> how they might explain the appearance of
WAD> specified complexity in nature.
WAD> Does nature exhibit actual specified complexity?
WAD> This is the million dollar question. Michael
WAD> Behe's notion of irreducible complexity is
WAD> purported to be a case of actual specified
WAD> complexity and to be exhibited in real
WAD> biochemical systems (cf. his book Darwin's Black
WAD> Box). If such systems are, as Behe claims, highly
WAD> improbable and thus genuinely complex with
WAD> respect to the Darwinian mechanism of mutation
WAD> and natural selection and if they are specified
WAD> in virtue of their highly specific function (Behe
WAD> looks to such systems as the bacterial
WAD> flagellum), then a door is reopened for design in
WAD> science that has been closed for well over a
WAD> century.

Now Dembski shifts back to a simplified version of his
explanatory filter, demanding that the complexity of a
structure be calculated with respect to one chance
hypothesis (as opposed to *all* chance hypotheses, as
advocated in TDI), rather than by whether it "requires
a complicated instruction-set to characterize", as
Dembski defined complexity at the beginning of his post.

I, for one, am scratching my head, trying to figure
out how Dembski intends to calculate the probability
of the bacterial flagellum evolving, especially since
we have no idea what kinds of selection pressures or
changing environments the species in question has
experienced. And, of course, once that is done,
Dembski needs to calculate the same probability for
*all* "relevant chance hypotheses", as advocated in
TDI. If the ID movement is ever to overturn the
current paradigm in science, it has *a lot* of
calculating to do.

WAD> Does nature exhibit actual specified
WAD> complexity? The jury is still out.

What started out as an "in principle" refutation of the
specified-complexity-making powers of natural
selection seems to end as a plea for agnosticism about
the specified complexity of life. Dembski's motivation
for writing this piece seems puzzling.

WAD> William A. Dembski
WAD>
META> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
META> Footer information below last updated:
META> 1999/07/12.
META>
META> Meta is an edited and moderated listserver and
META> news service dedicated to promoting the
META> constructive engagement of science and religion.

META> Subscriptions are free. For more information,
META> including archives and submission guidelines, go
META> to .
[...]
META> Permission is granted to reproduce this e-mail
META> and distribute it without restriction with the
META> inclusion of the following credit line: This is
META> another posting from the Meta-List
META> . Copyright 1997,
META> 1998, 1999. William Grassie.
-------------------------------------------------------

-------------------------------------------------------
APPENDIX:

This is my attempt to calculate the specified
complexity of the sequence produced by Dawkins'
"Weasel applet". Throughout, remember that I'm
calculating specified complexity, *not* running it
through the explanatory filter.

"METHINKS*IT*IS*LIKE*A*WEASEL" is the event E to be
explained. It is specified with regard to <"E IS AN
English sentence from Hamlet",*> (see chp. 5 in TDI).
Whether Dembski's concept of specification is really
sound is questioned by Fitelson, Stephens & Sober in
<http://www.arn.org/docs/dembski/wd_wisconsinureview.htm>,
but I am here more concerned with Dembski's other
claims, and is willing to accept, for the moment, that
his specification criterion really *is* sound.

In addition to being specified, E is also of a certain
complexity. E is 28 symbols long, and if we assume the
only possible symbols are the 26 letters of the
English alphabet plus the asterisk (*), for a
total of 27 symbols, it is obvious that there are
27^28 sequences that E *could* have been.

Now for the complexity calculation. At the beginning
of his Meta-post, Dembski characterizes "complexity" as
requiring "a complicated instruction-set to
characterize". If we assume that no substantially
shorter instruction-set characterizes
"METHINKS*IT*IS*LIKE*A*WEASEL", we can now begin.

Following Dembski's advice in Section 6.1 of
"Intelligent Design", I will now calculate the
complexity of event E in terms of probability.
Although Dembski is anything but clear on whether I
should use one chance hypothesis or many, I will here
use only the chance hypothesis A
A: Each of the 28 symbols is randomly
selected from the 26 English letters
and the asterisk (27 symbols in all),
the total set of possibilities U (the 27^28 possible
sequences), and the desired set W
W: E is METHINKS*IT*IS*LIKE*A*WEASEL,
and then calculate P(E|A) (the probability of E given
A):
P(E|A) = |W|/|U| = 1/(27^28) = 8.4 x 10^-41
Taking the negative base-2 logarithm of this (as
advocated in Section 6.1 of "Intelligent Design") gets
us about 133 bits. While not as high as the bound of
500 bits proposed in Section 6.3 of "Intelligent
Design", it is still considerably higher than the puny
36 bits contained in Dembski's favourite example of
Caputo cheating with the ballot positions, and I
therefore feel justified in characterizing
METHINKS*IT*IS*LIKE*A*WEASEL as an instance of
specified complexity.
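
For anyone who wants to check the arithmetic, the
whole calculation fits in a few lines of Python (the
numbers are just the ones used above):

from math import log2

# 27 possible symbols (26 letters plus the asterisk) at each of the 28
# positions of METHINKS*IT*IS*LIKE*A*WEASEL.
symbols, positions = 27, 28

possibilities = symbols ** positions      # 27^28, about 1.2 x 10^40
p_event = 1 / possibilities               # P(E|A), about 8.4 x 10^-41
bits = -log2(p_event)                     # about 133 bits

print("possibilities:", f"{possibilities:.2e}")
print("P(E|A):       ", f"{p_event:.2e}")
print("complexity:   ", f"{bits:.1f} bits")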

Note again that I have *not* "done the calculation"
wrt the explanatory filter, but only wrt whether E
exhibits specified complexity or not.
-------------------------------------------------------



=====
Morgan

"Evolution is to the social sciences as statues are to
birds: a convenient platform upon which to deposit badly
digested ideas." (Steve Jones, 2000, "Darwin's Ghost", pp.
xxvii)


[More ID-Commentaries]
