From: Morgan Grey <cynical_prophet@yahoo.com>
To: <DebunkCreation@yahoogroups.com>
Reply-to: <DebunkCreation@yahoogroups.com>
Date: Sun Oct 7, 2001 10:29 pm
Message: 21866
Subject: ID-Commentary: "Another Way to Detect Design?"
This is my third commentary on ID-texts. This week,
I'm commenting on Dembski's "Another Way to Detect
Design?", a reply to Fitelson, Stephens & Sober's
(FS&S) "How Not to Detect Design" (online at
<http://www.arn.org/docs/dembski/wd_wisconsinureview.htm>),
which I recommend everyone read. It is a bit
technical in places, but it is a sound critique of
Dembski's filter.
Dembski's reply on Meta-views can be found at
<http://www.meta-list.org/archives/fulldetails.asp?listtype=Magazine&ARCHIVEID=3097>
(and is also online at
<http://www.geocities.com/evolutionsteori/IDC/3097.html>).
My last such post can be found at
<http://www.geocities.com/evolutionsteori/IDC/002.html>.
-------------------------------------------------------
167: Detecting Design? by William Dembski
Metaviews 167. 1999/12/29. Approximately 2593 words.
BR> Below is another column from William Dembski, now
BR> at Baylor University's Polanyi Center. In the
BR> piece below, Dembski responds to Elliott Sober's
BR> review in the Philosophy of Science, September
BR> 1999. Dembski concludes:
BR>
BR> "We are back, then, to needing some account of
BR> complexity and specification. Thus a likelihood
BR> analysis that pits competing design and chance
BR> hypotheses against each other must itself
BR> presuppose the legitimacy of specified complexity
BR> as a reliable empirical marker of intelligence.
BR> Consequently, if there is a way to detect design,
BR> specified complexity is it."
BR>
BR> -- Billy Grassie
BR>
WAD> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
WAD> =-=-= From: William_Dembski@baylor.edu (William
WAD> A. Dembski) Subject: Another Way to Detect
WAD> Design?
WAD>
WAD> In The Design Inference (Cambridge, 1998) I argue
WAD> that specified complexity is a reliable empirical
WAD> marker of intelligent design.
Dembski does no such thing. "Specified complexity" is
never even *mentioned* in "The Design Inference"
(hereafter TDI). Only in later writings does Dembski
claim that the concept has anything to do with his
1998 dissertation.
Furthermore, Dembski's claim that his filter can
somehow detect *intelligent* design is also confined
to later, popular writings. In TDI, he writes:
"Thus, even though a design inference is
frequently the first step toward identifying
an intelligent agent, design as inferred from
the design inference does not logically entail
an intelligent agent. The design that emerges
from the design inference must not be
conflated with intelligent agency. Though they
are frequently linked, the two are separate.
Whether an event conforms to a pattern is a
separate question from what caused an event to
conform to a pattern." (pp. 8-9)
And this is only the first of several such statements.
Other attempts to distinguish "design" from
intelligent agency can be found in TDI on pp. 19-20,
36, and 226-7.
WAD> A long sequence of random letters is complex
WAD> without being specified.
Here, Dembski is taking advantage of the fact that to
the general lay-reader, "complexity" suggests
something that consists of many parts, or something
that is difficult to understand:
-------------------------------------------------------
(<http://www.dictionary.com/cgi-bin/dict.pl?term=complex>)
[...]
complex \Com"plex\, a. [...] 1. Composed of two or
more parts; composite; not simple; as, a complex
being; a complex idea.
[...]
2. Involving many parts; complicated; intricate.
[...]
Source: Webster's Revised Unabridged Dictionary, ©
1996, 1998 MICRA, Inc.
-------------------------------------------------------
However, Dembski's notion of complexity is far from
that of the lay-reader. In chapter 5 of TDI, Dembski
argues that "[p]robability measures are disguised
complexity measures" (p. 114).
Or, as Dembski writes in "Intelligent Design":
"It follows, therefore, that how we measure
information needs to be independent of
whatever procedure we use to individuate the
possibilities under consideration. The way to
do this is not simply to count possibilities
but to assign probabilities to these
possibilities." (pp. 154)
and
"As a purely formal object, the information
measure described here is a complexity
measure." (pp. 158)
To illustrate the difference between the general
concept of complexity and the concept as proposed by
Dembski, consider an atom with nine electrons in its
outer shell. While the average observer would probably
refer to this system as "simple", Dembski's complexity
measure would label it "complex", due to its low
probability of occurring.
To those who spent their chemistry lessons drawing
naked women in their notebooks: Having eight electrons
in the outer shell is the "optimal" state of an atom,
and every atom "attempts" to reach this state (except
in the first shell, where the optimal number is two).
Finding an atom with *nine* electrons in its outer
shell would thus be as unlikely as finding two magnets
with their like poles stuck together.
But most of Dembski's audience doesn't know this. They
think that "[a] long sequence of random letters is
complex" because it is long, not because it is
improbable. And of course, Dembski never establishes
*which* "long sequence[s] of random letters" are
improbable, let alone informs his audience that this
needs to be determined, letting them stay in blissful
ignorance.
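To make the difference concrete, here is a minimal
sketch (in Python) of a probability-based complexity
measure of the sort the quotes above describe,
assuming complexity is measured as -log2 of the
probability (the information measure Dembski uses in
"Intelligent Design"); the probabilities plugged in
below are my own illustrative assumptions:
-------------------------------------------------------
import math

def complexity_bits(probability):
    # Probability-based "complexity" in bits: -log2(P).
    return math.log2(1 / probability)

# A 100-letter string drawn uniformly from a 27-letter
# alphabet is "complex" only because it is improbable:
p_random_string = (1 / 27) ** 100
print(complexity_bits(p_random_string))  # ~475 bits

# Length as such plays no role: any outcome assigned an
# equally small probability gets the same "complexity"
# (the probability below is an illustrative assumption).
print(complexity_bits(1e-143))           # ~475 bits
-------------------------------------------------------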
WAD> A short sequence of letters like "the," "so,"
WAD> or "a" is specified without being complex.
Again, Dembski very conveniently "forgets" to mention
that according to his own complexity measure,
complexity has nothing to do with the length of the
sequence.
WAD> A Shakespearean sonnet is both complex and
WAD> specified. Thus in general, given an event,
WAD> object, or structure, to convince ourselves that
WAD> it is designed we need to show that it is
WAD> improbable (i.e., complex) and suitably patterned
WAD> (i.e., specified).
It is only later (and even then, tucked away in
parentheses) that Dembski even hints at the connection
between probability and complexity.
WAD> Not everyone agrees. Elliott Sober, for instance,
WAD> holds that specified complexity is exactly the
WAD> wrong instrument for detecting design (see his
WAD> September 1999 review in Philosophy of Science
WAD> titled "How Not to Detect Design"). In this piece
WAD> I want to consider the main criticisms of
WAD> specified complexity as a reliable empirical
WAD> marker of intelligence, show how they fail, and
WAD> argue that not only does specified complexity
WAD> pinpoint how we detect design, but it is also our
WAD> sole means for detecting design.
Before evaluating whether Dembski succeeds in
"consider[ing] the main criticisms of specified
complexity", let us see what FS&S's "main criticisms"
are.
1) Dembski's filter is far more demanding than it
needs to be, since a design inference can be made by
comparing the likelihoods of the competing hypotheses:
"There is a straightforward reason for
thinking that the observed outcomes favor
Design over Chance. If Caputo had allowed his
political allegiance to guide his arrangement
of ballots, you'd expect Democrats to be
listed first on all or almost all of the
ballots. ... The key concept is *likelihood*.
The likelihood of a hypothesis is the
probability it confers on the observations; it
is not the probability that the observations
confer on the hypothesis. ... Chance and
Design can be evaluated by comparing their
likelihoods, relative to the same set of
observations." (original emphasis)
2) Dembski's terminology and examples are confusing
and contradictory:
"Dembski defines the Regularity hypothesis in
different ways. Sometimes it is said to assert
that the evidence E is noncontingent and is
reducible to law (39, 53); at other times it
is taken to claim that E is a deterministic
consequence of earlier conditions (65, 146n5);
and at still other times, it is supposed to
say that E was highly probable, given some
earlier state of the world (38)."
"Understanding what "regularity," "chance,"
and "design" mean in Dembski's framework is
made more difficult by some of his examples.
Dembski discusses a teacher who finds that the
essays submitted by two students are nearly
identical (46). One hypothesis is that the
students produced their work independently; a
second hypothesis asserts that there was
plagiarism. Dembski treats the hypothesis of
independent origination as a Chance hypothesis
and the plagiarism hypothesis as an instance
of Design. Yet, both describe the matching
papers as issuing from intelligent agency, as
Dembski points out (47)."
"The same sort of interpretive problem
attaches to Dembski's discussion of the Caputo
example. We think that all of the following
hypotheses appeal to intelligent agency: (i)
Caputo decided to spin a roulette wheel on
which 00 was labeled "Republican" and the
other numbers were labeled "Democrat;" (ii)
Caputo decided to toss a fair coin; (iii)
Caputo decided to favor his own party. Since
all three hypotheses describe the ballot
ordering as issuing from intelligent agency,
all, apparently, are instances of Design in
Dembski's sense. However, Dembski says that
they are examples, respectively, of
Regularity, Chance, and Design."
3) There is no reason why regularity needs to be
rejected before chance can be accepted, or why both
need to be rejected before design can be accepted:
"In the first example, Dembski (39) says that
Newton's hypothesis that the stability of the
solar system is due to God's intervention into
natural regularities is less parsimonious than
Laplace's hypothesis that the stability is due
solely to regularity. In the second, he
compares the hypothesis that a pair of dice is
fair with the hypothesis that each is heavily
weighted towards coming up 1. He claims that
the latter provides the more parsimonious
explanation of why snake-eyes occurred on a
single roll. We agree with Dembski's
simplicity ordering in the first example; the
example illustrates the idea that a hypothesis
that postulates two causes R and G is less
parsimonious than a hypothesis that postulates
R alone. However, this is not an example of
Regularity versus Design, but an example of
Regularity&Design versus Regularity alone; in
fact, it is an example of two causes versus
one, and the parsimony ordering has nothing to
do with the fact that one of those causes
involves design. In Dembski's second example,
the hypotheses differ in likelihood, relative
to the data cited; however, if parsimony is
supposed to be a different consideration from
fit-to-data, it is questionable whether these
hypotheses differ in parsimony."
4) With respect to chance and regularity, Dembski's
filter is likely to make a wrong inference:
"The fact that the Filter allows you to accept
or reject Regularity without attending to what
specific Regularity hypotheses predict has
some peculiar consequences. Suppose you have
in mind just one specific regularity
hypothesis that is a candidate for explaining
E; you think that if E has a regularity-style
explanation, this has got to be it. If E is a
rare type of event, the Filter says to
conclude that E is not due to Regularity. This
can happen even if the specific hypothesis,
when conjoined with initial condition
statements, predicts E with perfect precision.
Symmetrically, if E is a common kind of event,
the Filter says not to reject Regularity, even
if your lone specific Regularity hypothesis
deductively entails that E is false."
5) Dembski's specification criterion is useless in
helping one formulate patterns that are specifications
(as opposed to mere fabrications). One of the
conditions offered by Dembski is far too lenient on
chance hypotheses, and the other two even allow
tautologies to be formulated as specifications:
"CINDE [one of the conditions] is too lenient
on Chance hypotheses -- it says that their
violating CINDE suffices for them to be
accepted (or not rejected). Suppose you want
to explain why Smith has lung cancer (E). It
is part of your background knowledge (I) that
he smoked cigarettes for thirty years, but you
are considering the hypothesis (H) that Smith
read the works of Ayn Rand and that this
helped bring about his illness. To investigate
this question, you do a statistical study and
discover that smokers who read Rand have the
same chance of lung cancer as smokers who do
not. This study allows you to draw a
conclusion about Smith -- that Pr(E | H&I) = Pr
(E | not-H &I). Surely this equality is
evidence against the claim that E is due to H.
However, the filter says that you can't reject
the causal claim, because CINDE is false -- Pr
(E | H&I) [is not equal to] Pr(E | H)."
"In fact, just writing down a tautology
satisfies TRACT and DELIM (165). On the
assumption that human beings are able to write
down tautologies, we conclude that these two
conditions are always satisfied and so play no
substantive role in the Filter."
6) Dembski is wrong in claiming that one should reject
specified events of small probability to avoid a
"probabilistic inconsistency":
"Suppose you know that an urn contains either
10% green balls or 1% green balls; perhaps you
saw the urn being filled from one of two
buckets (you don't know which), whose contents
you examined. Suppose you draw 10 balls from
the urn and find that 7 are green. From a
likelihood point of view, the evidence favors
the 10% hypothesis. However, Dembski would
point out that the 10% hypothesis predicted
that most of the balls in your sample would
fail to be green. Your observation contradicts
this prediction. Are you therefore forced to
reject the 10% hypothesis? If so, you are
forced to reject the 1% hypothesis on the same
grounds. But you know that one or the other
hypothesis is true. Dembski's talk of
a "probabilistic inconsistency" suggests that
he thinks that improbable events can't really
occur -- a true theory would never lead you to
make probabilistic predictions that fail to
come true."
I must admit that, on this particular point, I find
FS&S's reasoning a little strained.
First, I think everyone agrees that the chance
hypothesis that, say, a human being was assembled by
random atoms banging together is *far* too improbable,
and that another explanation for the existence of
humans is required. Pointing out that we sometimes
must accept hypotheses that confer low probabilities
on the observations doesn't justify accepting
hypotheses that confer *extremely* low probabilities.
Second, the example proposed above is only valid if we
are absolutely sure that the mentioned buckets are the
only possible sources. Given the above outcome, I
would consider the possibility that someone was
pulling my leg, and that the balls really came from a
bucket in which 70% of the balls were green.
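Incidentally, the likelihood comparison FS&S have in
mind is easy to carry out. Below is a minimal Python
sketch of their urn example, using the standard
binomial formula (treating the draws as independent is
my own simplifying assumption):
-------------------------------------------------------
from math import comb

def likelihood(k, n, p):
    # Pr(exactly k green balls in n draws | green
    # fraction p), treating the draws as independent.
    return comb(n, k) * p**k * (1 - p)**(n - k)

n_draws, n_green = 10, 7

for p in (0.10, 0.01):
    print(f"Pr(7 green of 10 | p = {p:.0%}) = "
          f"{likelihood(n_green, n_draws, p):.3e}")

# Both likelihoods are small, but the 10% hypothesis
# confers a probability on the observation millions of
# times higher than the 1% hypothesis does, which is
# why the evidence favors the 10% hypothesis.
-------------------------------------------------------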
7) Since the explanatory filter requires you to reject
*all* chance hypotheses before accepting design,
inferring design is nigh-impossible, unless you happen
to be omniscient:
"Here Dembski is *much* too hard on Design.
Paley reasonably concluded that the watch he
found is better explained by postulating a
watchmaker than by the hypothesis of random
physical processes. This conclusion makes
sense even if Paley admits his lack of
omniscience about possible Chance hypotheses,
but it does not make sense according to the
Filter. What Paley did was compare a specific
chance hypothesis and a specific design
hypothesis without pretending that he thereby
surveyed all possible chance hypotheses. For
this reason as well as for others we have
mentioned, friends of Design should shun the
Filter, not embrace it." (emphasis original)
8) Since it is impossible to eliminate all competing
theories, ID-theory can't establish itself merely by
eliminating Neo-Darwinistic evolution. Instead, it
must be able to make predictions that can be tested:
"To test evolutionary theory against the
hypothesis of intelligent design, you must
know what both hypotheses predict about
observables (Fitelson and Sober 1998, Sober
1999b). The searchlight therefore must be
focused on the design hypothesis itself. What
does it predict? If defenders of the design
hypothesis want their theory to be scientific,
they need to do the scientific work of
formulating and testing the predictions that
creationism makes (Kitcher 1984, Pennock
1999). Dembski's Explanatory Filter encourages
creationists to think that this responsibility
can be evaded. However, the fact of the matter
is that the responsibility must be faced."
It will be interesting to see if Dembski in fact *has*
succeeded in "consider[ing] the main criticisms of
specified complexity".
WAD> Consequently, specified complexity is not just
WAD> one of several ways for reinstating design in the
WAD> natural sciences-it is the only way.
There is no need to "reinstat[e] design in the natural
sciences": Several natural sciences, like archaeology
and anthropology, have already been detecting and
studying intelligently designed objects for decades.
Perhaps some of the archaeologists on this list
(Brian? Anne?) would like to comment on whether they
need Dembski's filter to infer that, say, a cave
painting is designed?
WAD> Specified complexity, as I explicate it in The
WAD> Design Inference, belongs to statistical decision
WAD> theory. Statistical decision theory attempts to
WAD> set the ground rules for how to draw inferences
WAD> for occurrences governed by probabilities. Now,
WAD> statistical decision theorists have their own
WAD> internal disputes about the proper definition of
WAD> probability and the proper logic for drawing
WAD> probabilistic inferences. It was therefore
WAD> unavoidable that specified complexity should come
WAD> in for certain technical criticisms simply
WAD> because the field of statistical decision theory
WAD> is itself so factionalized (cf. Bayesian vs.
WAD> frequentist approaches to probability).
Dembski is ignoring that most of "the main criticisms
of specified complexity" didn't deal with the
probabilistic part of his book, but with its
philosophical and logical assumptions, as well as his
inability to clearly define his terms. For Dembski to
claim that "the main criticisms of specified
complexity" are mostly due to the fact that "the field
of statistical decision theory is itself so
factionalized" is simply misleading.
WAD> The approach I take follows the common
WAD> statistical practice (popularized by Ronald
WAD> Fisher) of rejecting a chance hypothesis if a
WAD> sample appears in a prespecified rejection
WAD> region. What my complexity-specification
WAD> criterion does is extend this statistical
WAD> practice in two ways: First, it generalizes the
WAD> types of rejections regions by which chance is
WAD> eliminated, namely, to what I call
WAD> "specifications."
Another example of Dembski obfuscating the issue by
using vague and impressive-sounding terms to describe
even the simplest concepts.
Statistical theory operates with what is known as
"rejection regions". For example, before examining
data on people who have taken drug X against cancer,
the statistician might decide that if the observed
number of cures deviates so far from what pure luck
would produce that the probability of such a deviation
is 0.01 or less, the chance hypothesis (that drug X
has no effect) should be rejected.
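As an illustration only, here is a minimal Python
sketch of such a test; the drug, the background cure
rate, the sample size, and the 0.01 cutoff are all my
own invented assumptions, not figures from any real
study:
-------------------------------------------------------
from math import comb

def binomial_tail(k, n, p):
    # Pr(k or more successes in n trials | success
    # probability p under the chance hypothesis).
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

background_rate = 0.30  # assumed cure rate without drug X
n_patients = 50
alpha = 0.01            # rejection region fixed *before*
                        # the data are collected

observed_cures = 25
p_value = binomial_tail(observed_cures, n_patients,
                        background_rate)

if p_value <= alpha:
    print(f"p = {p_value:.4f}: the sample falls in the "
          "prespecified rejection region; the chance "
          "hypothesis is rejected.")
else:
    print(f"p = {p_value:.4f}: chance is not rejected.")
-------------------------------------------------------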
Dembski's "inovation" is to claim that this "rejection
region" can be formulated *after* the sample has been
taken, in which case it should be called a
"specification". So when Dembski says that his
"complexity-specification criterion" "generalizes the
types of rejections regions by which chance is
eliminated ... to ... specifications", he is just
saying that it can also eliminate chance *after* the
event is known.
This is the claim that many reviewers have objected
to, and which FS&S's review spends considerable time
questioning.
WAD> Second, it allows for the elimination of all
WAD> relevant chance hypotheses for an event, rather
WAD> than just a single one.
Dembski's filter *requires* "the elimination of all
relevant chance hypotheses", but he nowhere specifies
how this should be done.
Thus, to claim that his filter "allows for the
elimination of all relevant chance hypotheses" is just
like saying that driving a car *allows* one to have a
driver's license.
[...]
WAD> The worry with specified complexity centers not
WAD> with its precise technical formulation (though
WAD> that is important), but with the jump from
WAD> specified complexity to design. Here's the worry.
WAD> Specified complexity is a statistical notion.
WAD> Design, as generally understood, is a causal
WAD> notion. How, then, do the two connect? In The
WAD> Design Inference, and more explicitly in my
WAD> recently published Intelligent Design: The Bridge
WAD> Between Science & Theology, I argue the
WAD> connection as follows.
Dembski is here doing some serious re-writing of "The
Design Inference". In this, his "scholarly work", he
explicitly denies any necessary connection between
"design" and "intelligent agency":
"To attribute an event to design is to say
that regularity and chance have been ruled
out. Referring Caputo's ballot line selections
to design is therefore not identical with
referring it to agency." (TDI, pp. 19-20)
I find it hard to act surprised at this conflation of
"design" (as defined in TDI) with intelligent agency:
from reading Dembski's writings, I have learned that
when dealing with the explanatory filter, such
terminological muddle is the norm rather than the
exception.
WAD> First I offer an inductive argument, showing that
WAD> in all cases where we know the causal story and
WAD> specified complexity was involved, that an
WAD> intelligence was involved also.
Here, Dembski seems to be implying that we know for
certain that there in fact *are* "cases where ...
specified complexity was involved". If Dembski is of
the opinion that "specified complexity" is what is
found by using the explanatory filter as described in
TDI (which is a reasonable conclusion, considering the
very first sentence of his "Another Way to Detect
Design?"), he must know something which the rest of us
don't.
As should be clear by now, any attempt to use
Dembski's filter is a demanding affair, especially
since one needs to eliminate *all* chance hypotheses.
Indeed, in all of the examples that Dembski claims
should be labelled "design" in TDI, he always
entertains only *one* chance hypothesis.
So unless Dembski knows of any instance in which his
explanatory filter has been applied and has detected
"design", I am at a loss trying to understand his
"inductive argument".
WAD> The inductive generalization that follows is that
WAD> all cases of specified complexity involve
WAD> intelligence. Next I argue that choice is the
WAD> defining feature of intelligence and that
WAD> specified complexity is how in fact we identify
WAD> choice.
Contrary to Dembski's assertions, none of this is
found in TDI. Only in Dembski's later "Intelligent
Design" does he claim that "choice" is "the defining
feature of intelligence" (cf. Section 5.6).
In TDI, he claims that intelligent agency is
characterized by what he calls the
"Actualization-Exclusion-Specification triad".
Although this is out of scope for this commentary, I
wish to direct readers to Wesley's brief discussion of
it, found at
<http://www.geocities.com/evolutionsteori/199811261107.html>.
WAD> Although I regard these two arguments as utterly
WAD> convincing, others regard them as less so. The
WAD> problem--and Elliott Sober gives particularly apt
WAD> expression to it--is that specified complexity by
WAD> itself doesn't tell us anything about how an
WAD> intelligent designer might have produced an
WAD> object we observe.
I have been unable to find any place where FS&S
express any criticism of the fact that "specified
complexity by itself doesn't tell us anything about
how an intelligent designer might have produced an
object we observe." I wonder what has given Dembski
the impression that they did?
WAD> Sober regards this as a defect. I regard it as a
WAD> virtue. I'll come back to why I think it is a
WAD> virtue, but for the moment let's consider this
WAD> criticism on its own terms.
WAD>
WAD> According to this criticism it is not enough to
WAD> have a criterion that simply from certain
WAD> features of an object infers to an intelligence
WAD> responsible for those features. Rather, we must
WAD> also be able to tell a causal story about how
WAD> that intelligence produced those features.
[...]
WAD> Let us now examine this criticism. First, even
WAD> though specified complexity is established via an
WAD> eliminative argument, it is not fair to say that
WAD> it is established via a purely eliminative
WAD> argument.
Dembski certainly seems to be saying so in TDI:
"These two moves -ruling out regularity,
and then ruling out chance- constitute the
design inference. The conception of design
that emerges from the design inference is
therefore eliminative, asserting of an event
what it is not, not what it is." (TDI, pp. 19)
"The design inference is in the business of
eliminating hypotheses, not confirming
them. ... Because the design inference is
eliminative, there is no "design hypothesis"
against which the relevant chance hypothesis
compete, and which must then be compared
within a Bayesian confirmation scheme."
(TDI, pp. 68)
Unless Dembski can point to any passage in TDI where
he identifies any sort of *positive* argument as
being part of "the design inference" (or can refer to
a later re-definition of his work in which he argues
for such), I believe that it *is* "fair to say that it
[i.e. "specified complexity"] is established via a
purely eliminative argument."
WAD> If the argument were purely eliminative, one
WAD> might be justified in saying that the move from
WAD> specified complexity to a designing intelligence
WAD> constitutes an argument from ignorance. The fact
WAD> is, however, that it takes considerable knowledge
WAD> on our part to come up with the right patterns
WAD> (specifications) for eliminating chance and
WAD> inferring design.
This is just a non sequitur. Whether or not it "takes
considerable knowledge ... to come up with the right
patterns ... for eliminating chance" is completely
irrelevant to the question of whether "eliminating
chance" is enough to "infer... design" in the first
place.
WAD> Because these patterns qua specifications are
WAD> essential to identifying specified complexity,
WAD> the inference from specified complexity to a
WAD> designing intelligence is not purely eliminative
WAD> and may appropriately be called a "design
WAD> inference" since pattern, specification, and
WAD> design are, after all, related concepts.
This is the whole point in question! If Dembski wants
us to accept his conflation of "the set-theoretic
complement of the disjunction regularity-or-chance"
("design", as defined in TDI, pp. 36) and "intelligent
agency", he needs to do better than circular
reasoning.
WAD> But this raises the obvious question about what
WAD> is the connection between design as a statistical
WAD> notion (i.e., specified complexity) and design as
WAD> a causal notion (i.e., the action of a designing
WAD> intelligence). Now it's true that simply knowing
WAD> that an object is complex and specified tells us
WAD> nothing about its causal history. Even so, it's
WAD> not clear why this should be regarded as a defect
WAD> of the concept.
Again, Dembski has not established that FS&S regard
this "as a defect of the concept." Since Dembski seems
to be attacking a straw man, I have deleted his
discussion of this claim below as being irrelevant.
[...]
WAD> So where is the problem in connecting design as a
WAD> statistical notion (i.e., specified complexity)
WAD> to design as a causal notion (i.e., the action of
WAD> a designing intelligence), especially given the
WAD> close parallels between specified complexity and
WAD> choice as well as the absence of counterexamples
WAD> in generating specified complexity apart from
WAD> intelligence?
Whether evolutionary algorithms have succeeded in
producing "specified complexity" depends on which
definition of "specified complexity" one adopts.
If one defines "complexity" in terms of being
"complicated; intricate" (as defined by Webster's
Revised Unabridged Dictionary, and the definition
Dembski's audience most likely has in mind),
evolutionary algorithms indeed *have* produced
"specified complexity", as testified by the case of an
evolutionary algorithm's solution to the 500 city
Traveling Salesman Problem, mentioned by Wesley in
<http://www.geocities.com/evolutionsteori/5f5blk.html>.
If, on the other hand, one defines "complexity" in
terms of "being improbable" (the definition offered in
TDI, and -sometimes- advocated in Dembski's popular
writings), it is quite another matter. In that case,
any solution produced by an evolutionary algorithm
ceases to be an instance of "specified complexity",
simply because it *is* produced by an evolutionary
algorithm, and therefore has a probability of 1.
While this definition obviously refutes the claim that
evolutionary algorithms can produce "specified
complexity", it also calls into question whether life
is an instance of "specified complexity", and, indeed,
whether *anything* (including objects made by humans)
can be considered to exhibit "specified complexity".
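Spelled out with the same kind of probability-based
measure as in my earlier sketch (the numbers are again
my own illustrative assumptions), the point is simply
this:
-------------------------------------------------------
import math

def complexity_bits(probability):
    return math.log2(1 / probability)

# Under a "random guessing" chance hypothesis, a good
# 500-city tour is assigned a minuscule probability
# (the figure below is purely illustrative):
print(complexity_bits(1e-100))  # hundreds of bits

# Under the hypothesis "produced deterministically by
# algorithm X", the very same tour has probability 1,
# and hence zero bits of "complexity":
print(complexity_bits(1.0))     # 0.0
-------------------------------------------------------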
WAD> In fact, the absence of such counterexamples is
WAD> very much under dispute. Indeed, if the criticism
WAD> against specified complexity being a reliable
WAD> empirical marker of intelligence is to succeed,
WAD> it must be because specified complexity can be
WAD> purchased apart from intelligence, and thus
WAD> because there are counterexamples to specified
WAD> complexity being generated by intelligence.
WAD> Consider Sober's reframing of William Paley's
WAD> famous watchmaker argument in his text Philosophy
WAD> of Biology (Westview, 1993). Sober reframes it as
WAD> an inference to the best explanation:
What follows at this point in Dembski's post is a
quote from Sober's book, as well as Dembski's
discussion of it, which I have removed for brevity. I
will not comment on it, for the simple reason that it
is completely irrelevant to FS&S's review of TDI,
which is what Dembski was supposed to be commenting on.
Where FS&S took issue with the difficulty of
empirically detecting "specified complexity", Dembski
seems to be focusing on whether "specified complexity"
is a valid marker of intelligent agency. Indeed, since
FS&S never seem to raise this issue, I am baffled by
Dembski's concentrating on it.
However, I have discovered that Amazon has a copy of
Sober's "Philosophy of Biology", and as soon as my
finances allow it, I will order it. If Sober is saying
what Dembski portrays him as saying, I can only
express my disagreement with Sober. However, if it
turns out that Dembski has misconstrued what Sober is
saying (or has misunderstood him), I will deal with
Dembski's argument in greater detail.
[...]
WAD> What, then, is the problem with claiming that
WAD> specified complexity is a reliable empirical
WAD> marker of intelligence? The problem isn't that
WAD> establishing specified complexity assumes the
WAD> form of an eliminative argument. Nor is the
WAD> problem that specified complexity fails to
WAD> identify a causal story. Instead, the problem is
WAD> that specified complexity is supposed to miscarry
WAD> by counterexample. In particular, the Darwinian
WAD> mechanism is supposed to purchase specified
WAD> complexity apart from a designing intelligence.
WAD> But does it? In two of my recent posts to META I
WAD> argued that the Darwinian mechanism-and indeed
WAD> any non-telic mechanism-is incapable of
WAD> generating specified complexity.
For commentary on these two posts, see my last two
ID-commentaries, online at
<http://www.geocities.com/evolutionsteori/IDC.html>.
[...]
WAD> Although death by counterexample would certainly
WAD> be a legitimate way for specified complexity to
WAD> fail as a reliable empirical marker of
WAD> intelligence, Sober suggests that there is still
WAD> another way for it to fail. According to Sober
WAD> this criterion fails as a rational reconstruction
WAD> of how we detect design in common life. Instead,
WAD> Sober proposes a likelihood analysis in which one
WAD> compares competing hypotheses in terms of the
WAD> probability they confer (cf. the passage from
WAD> Sober quoted a few paragraphs back). Sober uses
WAD> this likelihood analysis to model inference to
WAD> the best explanation, a common mode of scientific
WAD> reasoning.
This corresponds to criticism #1 found above, and is
the only one of FS&S's issues that Dembski actually
manages to address! It seems that Dembski has failed
to "consider the main criticisms of specified
complexity". I will comment on Dembski's answer to
this criticism below.
WAD> To be sure, this likelihood analysis is useful as
WAD> a way of thinking about scientific explanation.
WAD> But it hardly gets at the root of how we infer
WAD> design. In particular, it doesn't come to terms
WAD> with specification, complexity, and their joint
WAD> role in eliminating chance.
WAD>
WAD> Take an event E that is the product of
WAD> intelligent design, but for which we haven't yet
WAD> seen the relevant pattern that makes its design
WAD> clear to us (take the SETI example where a long
WAD> sequence of prime numbers reaches us from outer
WAD> space, but suppose we haven't yet seen that it is
WAD> a sequence of prime numbers). Without that
WAD> pattern we won't be able to distinguish between P
WAD> (E takes the form it does | E is the result of
WAD> chance) and P(E takes the form it does | E is the
WAD> result of design), and thus we won't be able to
WAD> infer design for E. Only once we see the pattern
WAD> will we, on a likelihood analysis, be able to see
WAD> that the latter probability is greater than the
WAD> former. But what are the right sorts of patterns
WAD> that allow us to see that? Not all patterns
WAD> indicate design.
Dembski seems to have forgotten everything that he was
previously saying about "specified complexity" not
"being established via" "a purely eliminative
argument", when he here equates "elimination of
chance" with "detection of design".
Furthermore, listening to radio signals containing
binary numbers is hardly a representative example "of
how we detect design in common life."
I contend that in most of the cases where we detect
intelligent agency, we do so on the basis of the
explanatory power of assuming that the object in
question was brought about by an intelligent agent.
Consider, for example, the Rosetta Stone. Assuming
that the Rosetta Stone was produced by human beings
attempting to convey a certain meaning to whoever
could read it has great explanatory power. It explains
why the marks on it correspond to written languages
used at that time. It explains why they form sentences
with a certain, specific grammar, conveying a certain
message. And if one, by noting the similarity of the
texts written in hieroglyphic and in demotic
characters, respectively, advances the hypothesis that
the designer made the inscriptions to convey the same
message in three different scripts, one can test this
hypothesis by translating the Greek part as well,
noting that it indeed *does* convey the same message
as the other two. All this is possible because we can
(and are allowed to) speculate about the identity and
objectives of the designer(s) in question.
Of course, that is not possible within ID-theory,
where the designer and its motives are unknown:
"Another problem with the argument from
imperfection is that it critically depends on
a psychoanalysis of the unidentified designer.
Yet the reasons that a designer would or would
not do anything are virtually impossible to
know unless the designer tells you
specifically what those reasons are." (Behe,
M. J., 1998, "Darwin's Black Box", pp. 223)
And that is, from my point of view, the whole problem
with ID-theory: Since we don't know what went through
the intelligent designer's mind when it designed, we
are incapable of using the theory to explain anything.
The problem isn't that some things ("bad design")
contradict the theory, but that since the theory
doesn't predict anything, it cannot even be
*contradicted*. A walk through some of ID-theory's
most "impressive" pieces of evidence would go
something like this:
Q: "Why does all the major phyla appear abrubtly in
the Cambrian period?"
A: "Because the designer wanted it that way?"
Q:"Why does some bacterias have flagella?"
A: "I sure don't know. The designer haven't told me."
Q: "Why are the constants of the universe set in such
a way as to allow the naturalistic development of
life?"
A: "Beats me!"
Adding Dembski's "contribution" to this list gets us:
Q: "Why did the designer want to fill the genome of
living organisms with specified complexity?"
A: "..."
WAD> What's more, the pattern to which E conforms
WAD> needs to be complex or else E could readily be
WAD> referred to chance. We are back, then, to needing
WAD> some account of complexity and specification.
WAD> Thus a likelihood analysis that pits competing
WAD> design and chance hypotheses against each other
WAD> must itself presuppose the legitimacy of
WAD> specified complexity as a reliable empirical
WAD> marker of intelligence.
WAD>
WAD> Consequently, if there is a way to detect design,
WAD> specified complexity is it.
WAD>
WAD> -- William A. Dembski
WAD>
META> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Footer
META> information below last updated: 1999/12/10.
[...]
META> Copyright 1999, 2000 by William Grassie. Copies
META> of this internet posting may be made and
META> distributed in whole without further permission.
META> Credit: "This information was circulated on the
META> Meta Lists on Science and Religion
META> <http://www.meta-list.org>."
-------------------------------------------------------
=====
Morgan
"Evolution is to the social sciences as statues are to
birds: a convenient platform upon which to deposit badly
digested ideas." (Steve Jones, 2000, "Darwin's Ghost", pp.
xxvii)