Richard B. Hoppe posted Entry 229 on May 25, 2004 04:10 AM.

Since I seem to be on a roll, I thought I’d take a look at a claim that’s made frequently by ID proponents, namely that archaeology uses a design detection procedure akin to that allegedly formalized by Dembski’s Explanatory Filter.  (Gary Hurd: Feel free to interject/comment/correct at will.)

Review of the Explanatory Filter

Recall that the Explanatory Filter classifies explanations into just three mutually exclusive kinds: regularity (or law), chance, and design.  It employs a decision tree to determine which explanation is appropriate for a given phenomenon, with the decisions determined at each node in the tree by a probability estimate.  The first decision to be made is whether a given phenomenon is best explained as a “regularity” — an inevitable consequence of natural law(s).  Such are high-probability events, and if the probability of the phenomenon is high, the decision is “regularity” and the decision process terminates.

The second decision is whether the phenomenon is due to chance.  One estimates the probability of occurrence of the phenomenon, where the events or parts comprising the phenomenon are assumed to be mutually independent, the phenomenon is assumed to be a discrete combinatorial object (in space for structures or sequence for processes), and the PDF is assumed to be uniform.  If the probability of spontaneous aggregation of the phenomenon is “intermediate,” which apparently means somewhere between 1 in 10^150 and the near vicinity of 1.0 (1.0 being the best-of-all-possible-worlds “regularity” probability and 1 in 10^150 being Dembski’s “Universal Probability Bound”), one attributes the phenomenon to “chance.”
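
To see what this estimate amounts to, here is a minimal sketch (mine, not Dembski’s) of the “spontaneous aggregation” calculation for a phenomenon of N mutually independent parts, each drawn uniformly from k alternatives. The log scale is needed only because the raw probabilities underflow ordinary floating point:

    import math

    def log10_aggregation_prob(n_parts: int, k_alternatives: int) -> float:
        """Return log10 of the probability of one specific arrangement of
        n_parts mutually independent parts, each chosen uniformly from
        k_alternatives -- the 'spontaneous aggregation' estimate above."""
        return -n_parts * math.log10(k_alternatives)

    # Illustrative numbers only: a 300-residue protein over 20 amino acids
    # scores about -390, far below log10 of the Universal Probability
    # Bound, which is -150.
    print(log10_aggregation_prob(300, 20))

Anything scoring below -150 on this scale falls under the Universal Probability Bound; anything near 0 counts as “high.”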

If the phenomenon is “specified” and if its probability of occurrence (estimated as above) is less than 1 in 10^150, we are to attribute it to design.  “Specified” means (roughly) that the phenomenon displays a pattern that we recognize.  For example, the specification of the eubacterial flagellum is an “outboard rotary motor” (No Free Lunch, p. 289).  I will leave (nearly) unremarked Dembski’s assertion that

Specification is never a problem.  The irreducibly complex systems we consider, particularly those in biology, always satisfy independently given functional requirements (see section 3.7). … This is not to say that for the biological function of a system to constitute a specification humans must have independently invented a system that performs the same function. … At any rate, no biologist I know questions whether the functional systems that arise in biology are specified. (p. 289)

I will here note only that in section 3.7, which is slightly less than 2 pages long, Dembski manages to use “specified” (or “specification”) in at least 7 different senses and settles on none of them.  No wonder no biologist would question it — one doesn’t know what “it” is!  Since for Dembski “specification” reduces to ‘this system has a known function,’ “specified complexity” reduces to ‘this system is very very very VERY unlikely to have spontaneously aggregated.’
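
Written down as code, the whole filter is a three-branch decision on two inputs. The following is a sketch of the procedure as described above, not Dembski’s own formalism; in particular the HIGH threshold is my placeholder, since Dembski never quantifies what counts as a “high” probability:

    UPB = 1e-150   # Dembski's Universal Probability Bound
    HIGH = 0.5     # placeholder: Dembski never quantifies "high"

    def explanatory_filter(p: float, specified: bool) -> str:
        """Classify an event from its estimated probability of spontaneous
        aggregation and whether it matches a recognized pattern."""
        if p > HIGH:       # node 1: high probability -> regularity (law)
            return "regularity"
        if p >= UPB:       # node 2: intermediate probability -> chance
            return "chance"
        if specified:      # node 3: below the UPB and specified -> design
            return "design"
        return "chance"    # below the UPB but unspecified: still chance

Notice that the filter consumes a probability and a boolean as givens; everything difficult (estimating the probability, deciding what counts as a specification) happens off stage.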

The IDist claim about archaeology

Dembski and other IDists claim that a number of “special” sciences like SETI, forensics, and archaeology employ an informal “pretheoretic” form of the Explanatory Filter to ascertain whether a given phenomenon is due to natural causes (regularity), chance, or (human) design.  For example, in Reason & Revelation we read

Intuitively, we know that design cannot be a concept that is foreign to science because there are disciplines of a scientific nature that seek to tease apart natural causes from intelligent agency. Dembski is not proposing a change in the way that these scientists work. All he has done is to formalize the process that they (and many of us) already use, and to show that the process can detect design in a reliable fashion.

One such discipline is forensic science. A typical task of forensic investigators is to determine the cause of death: Was it natural or suspicious? Should they be looking for a person—an agent—who caused this death? Another science is archaeology, which on a regular basis must distinguish genuine artifacts from stones, sticks, and other items that so often clutter excavation sites. One of the best-known examples of design detection is SETI (the Search for Extraterrestrial Intelligence). (Bolding added)

(I won’t remark on the groundlessness of the “reliability” claim here.)  From the Institute for Christian Teaching:

This [Dembski’s The Design Inference] is all very exciting to the “design crowd” because complexity and specification are “testable” and “quantifiable” characteristics, and Dembski has established an objective method for testing structures, processes, etc. to determine if they are the result of law, chance, or design.  No longer are the defenses, “You just need more imagination” or “You believe in the ‘God of the Gaps’”, tenable, because the invocation of design is no longer a response to our ignorance, or our aesthetic judgment, or our biases.  What we have now is a scientifically defensible basis for saying “With current knowledge, we have shown that unintelligent causes are unable to produce this artifact.”  This isn’t really “new” science, either.  It borrows from the same kind of work that goes on constantly in archaeology, forensic science, etc.  (Bolding added)

Finally, from Dembski himself:

There does in fact exist a rigorous criterion for discriminating intelligently caused from unintelligently caused objects.  Many special sciences (e.g., forensic science, artificial intelligence, cryptography, archaeology, and the Search for Extraterrestrial Intelligence) already use this criterion, though in pretheoretic form.  I call it the complexity-specification criterion. (No Free Lunch, p. 6; Bolding added)

and later in the same book

Intelligent design is a scientific research program that examines the role of specified complexity in nature.  Since many special sciences already employ specified complexity as a criterion for detecting design (e.g., SETI and archaeology) there can be no principled objection to teaching intelligent design within a science curriculum, and particularly whenever the origin and history of life comes up in grades K-12.  (NFL, p. 314; Bolding added)

What do archaeologists actually do?

I took an undergraduate degree in anthropology back in the dark ages and was required to take some ‘bones and stones’ courses along the way.  I deem myself to be an informed layman about such matters:  I can read the literature and interpret it with some small effort.  I don’t recall, from those courses or my subsequent reading, that a “pretheoretic” version of Dembski’s Explanatory Filter was in use then or is in use now.  My most vivid memory, in fact, is of a junior faculty member sitting on a table in front of the lecture hall knapping stone tools while lecturing about them.  That is, he demonstrated how the tools were manufactured; he didn’t estimate their probability of occurrence, considering them as spontaneously aggregated discrete combinatorial objects, against a uniform probability distribution.

Most interesting about his demonstrations were the several products.  Since he started from scratch (a nodule of flint), when he finished there was a nice projectile point, but there was also characteristic debris — the flint core off which he had hammered several large flakes that later became the points he knapped, some unsuitable flakes, and a slew of chips and pieces of varying sizes, ranging from largish chips produced by percussive blows to get the initial flake into some kind of rough shape, to the tiny chips produced by the final pressure flaking (using a deer antler) to finish the sharp edges of the point.  (Incidentally, I’m told that a percussive blow leaves a characteristic microscopic pattern on the struck edge of a chip, one that differs from the pattern left by pressure flaking.)

Now, one can obviously identify the final product, a projectile point, as “designed” by its shape, referring it to any number of instances of known human manufacture.  But since the manufacturing process left characteristic debris, one could also say something about how the point was produced — one is not restricted to the object itself.  A year ago I found a projectile point manufacturing site on a hillside at the edge of a newly plowed field.  I found no points or cores, but I found a concentrated area thick with the kinds of chips produced by the manufacture of points from flakes.  Did I do a probability analysis to infer that they were produced by a human?  Nope.  I compared my observation of the distribution of chips (yes, I picked up as many as I could find to sort and count them!) to the known distribution of debris produced by manufacturing projectile points from good Mercer flint, a straightforward distributional comparison process. 
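
The comparison I made is the sort of thing a standard goodness-of-fit test formalizes. Here is a hypothetical sketch; the debris categories, counts, and expected proportions are all invented, and the choice of a chi-square test is mine, for illustration only:

    # Hypothetical distributional comparison: observed debris-type counts
    # from a site versus proportions expected from experimental knapping.
    # All numbers are invented for illustration.
    from scipy.stats import chisquare

    observed = [112, 64, 23]             # pressure chips, percussion chips, shatter
    expected_props = [0.55, 0.32, 0.13]  # hypothetical experimental proportions

    total = sum(observed)
    expected = [p * total for p in expected_props]

    stat, pvalue = chisquare(observed, f_exp=expected)
    print(f"chi-square = {stat:.2f}, p = {pvalue:.3f}")
    # A large p-value means the site's debris is statistically consistent
    # with the experimentally derived manufacturing signature; no
    # probability-of-design calculation enters anywhere.

Note what the test compares: one empirical distribution against another. Nothing in it resembles eliminating “regularity” and “chance” to leave design as a residue.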

Consider the most primitive known tools manufactured by hominids, the Oldowan (sometimes Olduwan) pebble tool assemblages from the Olduvai Gorge explored for decades by the Leakeys.  There are hammerstones, cores, and flakes associated with the manufacturing of Oldowan assemblages, as well as choppers and scrapers.  That is, one finds the tools used to manufacture flake tools (hammerstones) and the two main components left by the manufacturing process, the flakes and the cores from which the flakes were struck.  There’s no probability calculation involved; human design is inferred from evidence left by the manufacturing process.

Experimental archaeologists have done a variety of studies that provide information on characteristic distributions of debris types, characteristic usage-wear patterns, and the relative utilities of tools for various functions, as for example testing unretouched flakes against bifacial hand axes as large-animal butchering tools.  Again, there’s no probability calculation to eliminate regularity and chance in order to infer design; there is systematic study of the artefacts and their properties and context in order to make inferences about early hominid tool manufacture.  Microscopic examination of tool edges yields information about use as well.  There is even a mathematical model that explains the consistency of the Pleistocene Levallois stone tool assemblage across a considerable geographic area in Eurasia and Africa in terms of the efficiency with which the manufacturing process uses stone raw materials.

So rather than looking at an artefact in isolation and calculating some cockamamie probability, archaeologists look at objects in context, and make strong efforts to ascertain the method of manufacture, seek evidence left by the manufacturing process (both on the object itself and on scrap material and tools in the vicinity), and analyze the marks left by functional use.  And not least, they look for direct evidence — fossils — or other indications, independent of the artefacts themselves, of the presence of the humans who made the tools, for example, fire pits and other traces of dwellings or camp sites.

Nothing I see in that research methodology would be even mildly helped by Dembski’s Explanatory Filter, to say nothing of embodying a primitive version of its alleged formalism.  It might fit what Dembski imagines archaeologists do, but his familiarity with archaeological research methodologies seems even more tenuous than his familiarity with evolutionary biology.  I therefore conclude that claims that archaeology uses a “pretheoretic” version of Dembski’s Explanatory Filter or his “complexity-specification criterion” are false.

RBH


Comment #2788

Posted by Frank Schmidt on May 25, 2004 8:22 AM (e)

A gedankenexperiment: Can the EF distinguish between the human faces on Mt. Rushmore and the (lamented) Old Man of the Mountain in New Hampshire? Obviously this has to be without reference to historical information - in other words, can it eliminate the hypothesis that the Old Man was merely a Mt. Rushmore that had weathered for some time?


Comment #2803

Posted by Gary Hurd on May 25, 2004 10:38 AM (e)

I have held off on this topic waiting for the new book edited by Matt Young and Taner Edis, Why Intelligent Design Fails: A Scientific Critique of the New Creationism.

But this is too sweet an opportunity to mention the book and a little project of mine.

Let me add to your observation about Dembski’s error concerning “specification.” You point out that he cannot use the idea consistently. Tools can only rarely be specified consistently either. Indeed, all the work of classifying an object as “natural” or “designed,” which Dembski asserts can be done by the Explanatory Filter (EF), is actually done by what he calls “side information.”

Rather than focus on objects like the space shuttle, or a tractor on the moon, or a watch on the heath, consider the problem of a stone hammer, or even something as “high tech” as a screwdriver.

A stone becomes a designed object, a hammer, in the instant that it is used to strike another object. It is detectable as such only if it was used in a way that left some physical trace. Dembski could not employ the EF to differentiate a hammer from a battered rock from a stream bed.

The screwdriver exposes Dembski’s scheme to the specification error. Make a list of all of a screwdriver’s uses. My partial list begins with twisting screws but goes on, and on: from a simple lever, to a punch (including several cases I know of where a screwdriver was used as a deadly weapon), to part of an electrical continuity checker. The screwdriver is a fine example of an object with many ‘evolved’ functions.

The screwdriver example came to me not long after the final drafts for Why Intelligent Design Fails were submitted. I am planning an extended piece on this that I call, “Dembski: Hammered and Screwed.”

Comment #2806

Posted by Johnnie C. on May 25, 2004 10:55 AM (e)

I am planning an extended piece on this that I call, “Dembski: Hammered and Screwed.”

Sounds great. I can’t wait to read that!

I do recall that Jerry Don “I am willing to testify about ID in court” Bauer asserted at some point that there were some designed objects which were “too simple” to be detected by Dembski’s filter. Does Dembski include such caveats in any of his comments?

Comment #2810

Posted by RBH on May 25, 2004 11:22 AM (e)

Gary,

Sorry to have anticipated you, but every time I read that claim my teeth start to grind, and I had to get it out of my system.

RBH

Comment #2811

Posted by Mark Perakh on May 25, 2004 11:41 AM (e)

Richard, I’d not comment on those parts of your post dealing with archaeology but only on some of your points regarding Dembski’s notorious explanatory filter. You (like many others, both among Dembski’s defenders and his detractors) seem to buy his thesis about how the first and the second nodes of his filter are supposed to be used. Indeed, you accept that in the first node we determine the event’s probability and, if it is found to be large, we attribute the event to regularity (necessity, law). This is impossible. To determine that the event’s probability is large, we have first to determine that it is due to necessity (law, regularity), not the other way around. Dembski’s prescription (which you seem to accept) requires us to “read off the event” its probability, which is impossible. Likewise, in the second node Dembski’s prescription requires reading probability “off the event” (contrary to your rendition, the estimation of probability assuming the chance origin of the event is employed by Dembski not in the second but only in the third node). The realistic procedure can only run in reverse order compared with Dembski’s senseless prescription. Therefore any discussion of the filter as a triad-like procedure makes no sense, since the first and the second nodes are fictional. The third node is the only part of the filter that can be discussed on its own merits rather than rejected out of hand (but it is where the senselessness of his specification criterion, and with it of his entire scheme, is revealed).

Comment #2814

Posted by RBH on May 25, 2004 1:25 PM (e)

Mark,

Thanks for your remarks. I must say that I myself don’t “accept” Dembski’s procedure. I don’t buy his thesis at all.

I was attempting to summarize what he and his followers, like those at the referenced URL, say it’s supposed to be, and hence what he and they claim archaeology’s research program supposedly does (in its sadly benighted pretheoretic way, of course!).

I do have your book on my shelf and I read it! :)

RBH

Comment #2877

Posted by Dene Bebbington on May 26, 2004 2:03 AM (e)

If Dembski thinks that specification is never a problem then it would have been easy for him to show how the specification of the flagellum meets his formal criteria. That he didn’t makes me think he can’t.

Comment #2884

Posted by Paul King on May 26, 2004 5:00 AM (e)

Specification isn’t a problem for any functional system - just describe the function. However, I think that there’s another reason why Dembski didn’t go into detail on the specification of the flagellum.

You see, for the explanatory filter to “work,” the probability to be calculated is the probability of meeting the specification. By avoiding the details of the specification, Dembski avoids dealing with that important element and can simply use his bogus calculation, which looks impressive but means nothing.

And this is the biggest fundamental problem with the explanatory filter and CSI. Actually calculating the relevant probability for real biological examples is too difficult. Even if the filter had no other problems that would kill it as a practical method for detecting design in biology.

Comment #2912

Posted by Dene Bebbington on May 26, 2004 2:06 PM (e)

Paul said: “Specification isn’t a problem for any functional system - just describe the function.” But merely describing the function is a fabrication in Dembski’s scheme because all you’d be doing is reading off the event.

Comment #2914

Posted by RBH on May 26, 2004 2:38 PM (e)

Dene wrote

But merely describing the function is a fabrication in Dembski’s scheme because all you’d be doing is reading off the event.

I provided Dembski’s claim in that respect in the main entry. From No Free Lunch:

Specification is never a problem. The irreducibly complex systems we consider, particularly those in biology, always satisfy independently given functional requirements (see section 3.7)…. This is not to say that for the biological function of a system to constitute a specification humans must have independently invented a system that performs the same function…. At any rate, no biologist I know questions whether the functional systems that arise in biology are specified. (p. 289; emphases added)

Sure it’s read off the event, and sure it’s a fabrication. But when did Dembski ever worry a whole lot about consistency?

RBH

Comment #2919

Posted by Paul King on May 26, 2004 3:38 PM (e)

Dembski is quite clear that it is legitimate to derive the specification from the event. It is, however, necessary that the specification could have been derived without that knowledge. I would say that the function of a system is a legitimate specification.

However, if Dembski uses anything other than the specification for calculating the probability, then he is using a fabrication.

(And Dembski fails to deal with the problem that the rejection region set up by using after-the-fact specifications consists of the spaces defined by all possible specifications meeting the complexity criterion. There is a missing factor in his UPB calculation.)

Comment #2953

Posted by Dene Bebbington on May 27, 2004 2:32 AM (e)

Paul said: “Dembski is quite clear that it is legitimate to derive the specficiation from the event. It is however necessary that the specification could be derived without that knowledge. I would say that the function of a system is a legitimate specification.”

But only if that function was known as side information before the thing under consideration came to be. It’s all very well for Dembski to say that the specification of the flagellum is an “outboard rotary motor”, but that side information comes from human creations and has nothing whatsoever to do with the origin of the flagellum - unless someone wants to argue that God created the flagellum after seeing how humans power boats!

This is a basic problem with Dembski’s scheme when it comes to biology: how can he show that a specification existed prior to the flagellum? A specification must come first; otherwise it has nothing to do with the supposed design of the flagellum (or anything else).

Comment #2976

Posted by Paul King on May 27, 2004 8:33 AM (e)

Dembski’s method is expressly designed to avoid having to frame any sort of design hypothesis. So according to his method he doesn’t need to consider where the specification came from. This is one place where Dembski’s method disagrees with the ways we often recognise design.

I believe that Dembski intends specification to simply indicate that the result is sufficiently “special” that it can be considered significant (unlike simply reading off whatever event occurs). I don’t believe that Dembski’s criteria are strong enough but the idea itself is not obviously wrong.