Posted by Wesley R. Elsberry on March 25, 2004 07:58 AM

Whether one is attempting to apply William A. Dembski's "explanatory filter/design inference" (EF/DI) to an event to find rarefied design (see Wilkins and Elsberry 2001), or "specified anti-information" (SAI) to make an ordinary design inference (see Elsberry and Shallit 2003), one is likely to need a calculating aid that can handle both very large and very small numbers. I have such a tool available online, the Finite Improbability Calculator. In addition to pointing to this (IMO) valuable resource, I also want to take up a couple of issues from Dembski's new book, *The Design Revolution: Answering the Toughest Questions About Intelligent Design*, concerning question-begging and the proffered support for the claim that the "specified complexity" identified by use of Dembski's EF/DI is a "reliable" empirical marker of "intelligent design".
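To give a flavor of what such a calculator must cope with, here is a minimal Python sketch of exact small-probability arithmetic. The function names and the choice of exact rationals are my own illustration, not the actual implementation behind the Finite Improbability Calculator:

```python
from fractions import Fraction
from math import comb, log10

# Toy sketch of the arithmetic such a calculator must handle: exact rational
# probabilities far too small for ordinary floating point.  (Illustrative
# only -- this is not the code behind the online tool.)

def exact_binomial(heads: int, flips: int) -> Fraction:
    """Exact probability of exactly `heads` heads in `flips` fair-coin flips."""
    return Fraction(comb(flips, heads), 2 ** flips)

def log10_prob(p: Fraction) -> float:
    """Base-10 log of an exact rational, safe even when float(p) would underflow."""
    return log10(p.numerator) - log10(p.denominator)

# 40 successes out of 41 fair trials: small, but easily representable exactly.
print(log10_prob(exact_binomial(40, 41)))   # about -10.7
```

Working with exact rationals and taking logarithms only at the end is one standard way to avoid the underflow that kills naive floating-point calculations of, say, 10^-150-scale probabilities.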

Paul King made a perspicacious comment in response to my earlier post, "You Missed a Spot, Dr. Dembski". Consider Dembski's stance on biological examples of *specified complexity* being ruled out of order by critics:

Since the design of biological systems is precisely the question at issue, to argue that we have no experience observing the designs of an unembodied designer is mere question-begging. (From *TDR*, p. 284.)

It doesn't seem to dawn on Dembski that biologists see another form of question-begging in his work. Dembski offers an inductive argument that *specified complexity* is a reliable marker of *design*:

How can we see that specified complexity is a reliable criterion for detecting design? In other words, how can we see that the complexity-specification criterion successfully avoids false positives? The justification for this claim is a straightforward inductive generalization: in every instance where specified complexity is present and where the underlying causal story is known (i.e., where we are not just dealing with circumstantial evidence, but where, as it were, the video camera is running and any putative designer would be caught red-handed), it turns out design is present as well. This is true even where the person running the filter isn't privy to the firsthand information. That's a bold and fundamental claim, so I'll restate it: Where direct, empirical corroboration is possible, design actually is present whenever specified complexity is present. (From *TDR*, pp. 95-96.)

I agree with Dembski that this is a bold claim, but I will disagree on why. You see, it is easy to enumerate the cases that form the foundation of this particular "inductive generalization". That's because the EF/DI, which Dembski has for years urged his critics to use ("do the calculation"), has exactly four (4) published instances in which *any part* of the calculations Dembski outlines as necessary for a "design inference" in either "The Design Inference" or "No Free Lunch" is actually provided for inspection. These are:

1) The Caputo case

2) Dawkins's "weasel" program example

3) The *Contact* SETI primes sequence

4) The flagellum of *E. coli* bacteria

That's the complete list of published examples (see Elsberry and Shallit 2003). A panda doesn't need all the digits of one paw to count them. They are all Dembski's work. No one else has published an example of the full, "rigorous" application of the EF/DI to any event whatsoever.

Here's a relevant question: do any of the above examples fulfill Dembski's criteria for the kinds of design inferences that found his "inductive generalization"?

1) The Caputo case, though founded on a true story, is based upon a fictional representation of the pattern of picks of candidates. This example fails the "circumstantial evidence" criterion Dembski gives above. Further, while Dembski's analysis finds that the probability of the pattern occurring by chance is small and that the pattern is specified, the probability nonetheless does not fall below Dembski's "universal probability bound", and thus by his criteria given in TDR the case cannot be counted as an example of specified complexity.
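The arithmetic behind that point is easy to check. Using the figures usually cited for the Caputo case (Democrats drawn first in 40 of 41 ballot-position draws), a quick sketch (my back-of-the-envelope calculation, not Dembski's exact one) shows a probability that is small but vastly larger than the 10^-150 universal probability bound:

```python
from math import comb

DRAWS = 41          # ballot-position draws in the Caputo case
DEM_FIRST = 40      # draws in which the Democrat was listed first

# Probability of an outcome at least this extreme under fair (p = 1/2) draws:
# P(X >= 40) = [C(41, 40) + C(41, 41)] / 2**41
favorable = comb(DRAWS, DEM_FIRST) + comb(DRAWS, DRAWS)
p = favorable / 2 ** DRAWS

UNIVERSAL_BOUND = 1e-150   # Dembski's universal probability bound

print(f"P = {p:.3e}")       # roughly 2e-11
print(p < UNIVERSAL_BOUND)  # False: small, but nowhere near the bound
```

About one chance in fifty billion is striking, but it sits some 140 orders of magnitude above the universal probability bound.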

2) Dembski's analysis of Dawkins's "weasel" program from *The Blind Watchmaker* at least targets a real-world example that is not based on circumstantial evidence. Again, Dembski finds a small-probability, specified pattern, but one that does not fall below his "universal probability bound", and so by his own usage in TDR it can't be held as an example of specified complexity. (Of course, I disagree with Dembski ruling out examples of specified complexity based on the "universal probability bound". Dembski hasn't yet retracted the procedure for justifying a "local small probability" given in "The Design Inference", so that's still in play.)
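For readers unfamiliar with it, Dawkins's "weasel" demonstration is easy to reconstruct. Here is a minimal Python sketch; the population size and per-character mutation rate are my assumptions, since Dawkins did not publish his exact parameters:

```python
import random

# A minimal reconstruction of Dawkins's "weasel" demonstration of cumulative
# selection.  Population size and mutation rate are assumed values.

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s: str) -> int:
    """Count positions where `s` matches the target phrase."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.05) -> str:
    """Copy `s`, each character independently replaced with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def weasel(pop_size: int = 100) -> int:
    """Run cumulative selection until the target is reached; return generations."""
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        generation += 1
        # Keep the best of parent and offspring, so fitness never decreases.
        parent = max([parent] + [mutate(parent) for _ in range(pop_size)],
                     key=score)
    return generation

print(weasel())   # generation count; typically a few hundred at most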

3) The *Contact* primes sequence examined by Dembski fails spectacularly in several ways. First, it's a *fictional* scenario. Second, SETI researchers aren't even using the sort of analysis Dembski claims they are. Third, for the pattern of 1's and 0's that one finds actually printed in "No Free Lunch" that forms the basis of Dembski's calculation, Dembski provides a non-matching specification. (See Elsberry and Shallit 2003, pp. 21-25 for a complete explication.)
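For concreteness, the bit string at issue encodes successive primes in unary: 2, 3 and 5 become "110", "1110", "111110", and so on. The following sketch shows one such encoding convention (my reconstruction; the exact string printed in NFL should be checked against the book itself):

```python
def primes_up_to(n: int) -> list[int]:
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

def prime_unary_sequence(limit: int) -> str:
    """Each prime p contributes p ones followed by a single zero separator."""
    return "".join("1" * p + "0" for p in primes_up_to(limit))

print(prime_unary_sequence(5))   # 1101110111110
```

Any mismatch between the printed string and the specification offered for it matters, because the specification is what the probability calculation is supposed to be anchored to.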

4) The flagellum of *E. coli* bacteria. This case fails to provide a specification in the technical sense Dembski develops in TDI and NFL. It fails to evaluate *any* evolutionary hypothesis at all. And there is no way to claim it is known, with the "video camera" certainty Dembski specifies above, that the flagellum is due to a designer.

That's it. There are exactly zero (0, zilch, zip, nada) published examples that form the foundation of Dembski's "inductive generalization". Jokes can be made about doing linear regression on two data points, but even that is two data points more than Dembski has provided.

I, for one, find an "inductive generalization" with no basis relatively unconvincing. But Dembski doesn't seem to be troubled by this, nor does he seem to understand why biologists see his statements as more reasonably conforming to what is termed "question-begging". (Just to be clear, I think Dembski's "inductive generalization" argument is trash, no matter how many examples he might "calculate" in the future. See below.) Witness this passage:

What about the positive evidence for design? As it turns out, biology is chock-full of specified complexity. And since specified complexity is a reliable empirical marker of actual design, that means biology is chock-full of evidence for design. (From *TDR*, p. 283.)

I think that future editions of encyclopedias illustrating "begging the question" should seriously consider, as the premier case study, Dembski's conjoint claims: his account of how the reliability of "specified complexity" is established, paired with his pushing of biological examples (of which he has "calculated", incompletely, precisely one (1)) as "positive evidence for design".

Eager young design advocates might be waiting to pounce, though. "Wesley! What's your problem with inductive generalization? You urge an inductive approach to making ordinary design inferences yourself, you hypocrite!" Please, forbear. Dembski's usage is invalid for the simple reason given in my previous post: no number of examples of "known to be caused by design" events *can possibly* put Dembski's EF/DI procedure at risk. It will either classify these examples correctly as "designed" or yield a "false negative" of "not designed", and Dembski has already stipulated that the false-negative performance of the EF/DI is not an issue. The only class of events that Dembski should be concerned about collecting, *calculating*, and **publishing** comprises examples where Dembski is willing to stipulate, in advance, that the evidence for the event **not** being due to design is sufficient. Those are the only sorts of events that could test the "reliability" of the EF/DI. And they also happen to be the class of events that Dembski has studiously avoided going anywhere near. One wonders why.

This leaves me with one further observation: Dembski's EF/DI appears to be a procedure with what would be called an enormous "public burden". As noted above, in the almost eight years since Dembski was writing up his dissertation, he has published exactly four (4) attempts (each based on fiction, left incomplete, or failing in the end to yield an example of specified complexity) to actually carry out, in the manner he himself prescribes to others, the EF/DI's apparatus of inferring design. In that same time, he has authored books: TDI, ID, NFL, TDR. He has also edited at least five more volumes. An "inductive generalization" from this *relatively* copious data set leads to an ineluctable conclusion: it is **easier** to write or edit a book than to actually apply the full-blown EF/DI to any actual real-world phenomenon. Dembski said it best, I think: the EF/DI "resists detailed application to real-world problems". Of course, Dembski's comment was about Murray Gell-Mann's work rather than his own, but it seems apropos.