PvM posted Entry 2373 on June 18, 2006 06:59 PM.
Trackback URL: http://www.pandasthumb.org/cgi-bin/mt/mt-tb.fcgi/2368

The following posting is based on a response I provided to Allen MacNeill on his excellent blogsite. In addition to much-needed checking of grammar and spelling, I have also added content and revised the argument for clarity.

Avid readers of Pandasthumb may remember that Allen MacNeill is a Cornell professor who will be teaching an Intelligent Design course this summer. The course in question, BioEE 467/B&Soc 447/Hist 415/S&TS 447: Seminar in History of Biology, has a blogsite of its own. The first class will start June 27, 2006.

In the posting to which I responded, Allen shows the many problems in one of Salvador Cordova’s postings. Sal is an avid ID activist and defender of Dembski, and his postings can be ‘admired’ at Uncommon Descent. Sal stated that “There are many designed features in biology that make no sense in terms of natural selection but make complete sense in terms of design.”

As Allen shows, this is a deeply flawed statement. In my response I attempt to explain in straightforward terms why Intelligent Design’s approach is flawed and renders ID scientifically vacuous, or in other words, void of content.

PvM wrote:

Excellent points Allen. ID proponents seem to be quick to claim that science is using ID’s approach to detect design but on closer scrutiny these claims fall apart quickly.

ID is inherently a claim based on ignorance (elimination), and while it uses some ‘fancy-sounding’ terms like complex specified information, those terms are used in a manner which conflates ID’s terminology with how science uses such terms.

ID starts with an unfounded assertion that design is that which remains once natural processes of regularity and chance have been eliminated. Or in other words, ‘design’ is the set-theoretic complement of chance and regularity (Del Ratzsch).

Let’s stop here and consider the following:

Why should we accept this when the available empirical evidence and logic suggest that there is nothing necessarily supernatural about intelligence? In fact, intelligent behavior seems quite readily reducible to regularities and chance, as polling, profiling, advertising and many other arenas show. Intelligence is, in other words, predictable; and since intelligence has the ability to make choices given multiple options, there will be a certain level of variation or uncertainty present.

Amazon, for instance, uses this to recommend items of interest to its users based on their own past interests as well as on the interests of those who have bought similar items.

And as Dembski himself admits, sciences such as criminology, archaeology, cryptography and SETI all rely on the design inference. Since Dembski also argues that science as it exists right now rejects the design inference a priori, it seems clear that Dembski’s design is different from the design detected by the sciences.

But let’s for the moment accept ID’s Explanatory Filter approach. How is the EF applied in biology? Well, through the concepts of specification and complex information. Specification is trivial in biology, as it refers to function, and information refers to the negative base-2 logarithm of the probability. Now we get into some interesting territory. Dembski argues that if something can be explained as a regularity, its probability becomes close to 1 and its information goes to 0. But the same then applies to intelligent design: if something can be explained as intelligently designed, its amount of information is zero.
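The information measure referred to here is standard Shannon self-information, which is easy to sketch; the function name below is mine, not Dembski’s:

```python
import math

def information_bits(p):
    """Shannon self-information, in bits, of an event with probability p."""
    return -math.log2(p)

# A fair coin flip carries 1 bit; a one-in-a-million event about 19.93 bits.
print(information_bits(0.5))
print(round(information_bits(1e-6), 2))
# As an event's probability approaches 1 (a "regularity"), its information
# content drops to 0 -- and that holds for *any* explanation that makes the
# event near-certain, including "it was designed".
print(information_bits(1.0) == 0)
```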

So that does not really work well. Perhaps, then, we can define the amount of information as the likelihood that the item arose under uniform probability? Under that scenario, something is ‘designed’ if it has a function and if its pure-chance probability is too low. But then we still do not know whether designed means ‘designed by regularity/chance’ or ‘designed by an intelligence’ (remember, I am for the moment accepting the distinction between the two, and I am showing how, even accepting the distinction, the filter suggests that the two explanations are nothing different). So how does the filter work? It argues that if chance alone does not explain it, and if regularities cannot explain it (yet), then we have to accept ‘design’ as the default explanation. So ‘design’ includes anything from ‘intelligent designer’ to ‘an unknown regular process’. Once again ID fails to explain how to distinguish between actual and apparent design.
And now the best one. Even if we accept ‘design’, Dembski has shown that this does not necessarily need to involve an intelligent designer. Confused?… I bet… Ryan Nichols points out that:

Nichols wrote:

Before I proceed, however, I note that Dembski makes an important concession to his critics. He refuses to make the second assumption noted above. When the EF implies that certain systems are intelligently designed, Dembski does not think it follows that there is some intelligent designer or other. He says that “even though in practice inferring design is the first step in identifying an intelligent agent, taken by itself design does not require that such an agent be posited. The notion of design that emerges from the design inference must not be confused with intelligent agency” (TDI, 227, my emphasis).

Source: Ryan Nichols, “Scientific content, testability, and the vacuity of Intelligent Design theory”, The American Catholic Philosophical Quarterly, 2003, vol. 77, no. 4, pp. 591–611

As Elsberry has shown, given Dembski’s logic, natural selection matches his definition of an intelligent designer. Once again we notice how ID fails to distinguish between apparent and actual design.

And since ID refuses to propose positive hypotheses, it is thus doomed to be unable to deal with the issue of apparent versus actual design in any scientifically relevant manner.

And that is why Intelligent Design is scientifically vacuous.

In future postings I will address various concepts related to the Intelligent Design thesis, discussing such topics as ‘Complex Specified Information’, the ‘Explanatory Filter’, the ‘argument from ignorance’, the concept and impact of ‘false positives’, the ‘law of conservation of information’, the ‘displacement theorem’ and various other topics, to show not only why the foundation of Intelligent Design is fundamentally flawed but also that ID’s claims are outright incorrect. Rather than rejecting ID a priori, I am willing to entertain the concept of ID being scientifically relevant. As I have shown and will show, ID, by virtue of its flawed foundation, is doomed to remain scientifically vacuous, and so far ID’s contributions to science, or perhaps better stated its lack thereof, have validated my predictions.
In the tradition of Laudan and pragmatic thinking, I intend to show that ID is ‘bad’ science.

Commenters are responsible for the content of comments. The opinions expressed in articles, linked materials, and comments are not necessarily those of PandasThumb.org. See our full disclaimer.

Comment #106468

Posted by Registered User on June 18, 2006 8:12 PM (e)

I am willing to entertain the concept of ID being scientifically relevant.

Huh? You must mean that you are willing to pretend that ID is scientifically relevant. Surely you know by now that it is not. I mean, I’ve known since its inception that it is not scientifically relevant. Hasn’t that been the case with most scientists?

In future postings I will address various concepts related to the Intelligent Design thesis, discussing such topics as ‘Complex Specified Information’, the ‘Explanatory Filter’, the ‘argument from ignorance’, the concept and impact of ‘false positives’, the ‘law of conservation of information’, the ‘displacement theorem’ and various other topics to show not only why the foundation of Intelligent Design is fundamentally flawed but also that ID’s claims are outright incorrect.

Just some friendly advice, Pim: both you and Allen MacNeill tend to err on the side of the verbose in your analyses, offering unnecessarily large piles of dust for creationists to kick up in everyone’s faces. I think others have already addressed these topics succinctly and carefully. Why not just link to talkorigins? Why beat dead horses (and give the impression to newbies that the horse hasn’t been bludgeoned into unrecognizable highway jerky)?

In my opinion, a better use of PT bandwidth is to document the new lies that the Discovery Institute and other creationist propagandists are continually spewing forth in their press releases and other announcements.

Just my two cents.

Comment #106473

Posted by PvM on June 18, 2006 8:27 PM (e)

Registered User wrote:

In my opinion, a better use of PT bandwidth is to document the new lies that the Discovery Institute and other creationist propagandists are continually spewing forth in their press releases and other announcements.

What is preventing you from doing this?

And yes, you are right many of these issues have already been addressed by others, thus you will notice that I provide many relevant links. So why do this? Because there are many ID creationists who come to the argument convinced of the relevancy of the arguments presented to them by ID Evangelical Activists (:-)) and are unaware of the many problems with the design approach.
If you believe that there are better ways to use PT’s bandwidth, then perhaps you may consider contributing in such a manner?

Comment #106491

Posted by Henry J on June 18, 2006 9:43 PM (e)

Not to mention that a scientific hypothesis is supposed to start with a set of observations that show a consistent repeatable pattern, so that the hypothesis has something that it’s intended to explain.

Evolution explains nested hierarchy of species living at the same time, the tendency of species in the same taxa to be modified copies of the same earlier species, and the tendency of related species to be geographically near each other. (Just as every successful theory in science has its own list of observed patterns to be explained.)

Wake me up when somebody identifies the consistent repeating pattern that is “explained” by I.D.

Henry

Comment #106503

Posted by Shalini, BBWAD on June 18, 2006 11:14 PM (e)

[Wake me up when somebody identifies the consistent repeating pattern that is “explained” by I.D.]

Henry, be careful. You might NEVER wake up again.

;-)

Comment #106535

Posted by Pete Dunkelberg on June 19, 2006 6:48 AM (e)

Here is a rousing review of another intro to IDC:
http://www.nysun.com/article/34637

Comment #106541

Posted by pwe on June 19, 2006 7:49 AM (e)

Ooh, I won’t claim to be any expert on these matters; but the way I believe I have understood Dembski, the EF works like this:

(1) If the probability of an event occurring by chance (= not design) is high, conclude regularity.

(2) If the probability of the event occurring by chance is medium, conclude chance.

(3) If the probability of the event occurring by chance is low, but there is a simple (short) description of it, conclude design. Otherwise conclude chance.
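The three steps above can be sketched as a toy function. The thresholds `high` and `low` are arbitrary placeholders of my own; Dembski does not specify such values, and nothing in the function identifies a designer:

```python
def explanatory_filter(p_chance, has_short_description,
                       high=0.5, low=1e-50):
    """Toy sketch of the three-step eliminative filter described above."""
    if p_chance >= high:
        return "regularity"        # step (1)
    if p_chance > low:
        return "chance"            # step (2)
    # step (3): low probability counts as 'design' only if the event is
    # also specified, i.e. admits a simple (short) description
    return "design" if has_short_description else "chance"

print(explanatory_filter(0.9, False))     # "regularity"
print(explanatory_filter(0.01, True))     # "chance"
print(explanatory_filter(1e-120, True))   # "design"
print(explanatory_filter(1e-120, False))  # "chance"
```

Note that "design" here is reached purely by elimination, which is exactly the structure the post criticises.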

Dembski’s way of looking at it is as a competition between design and chance. Since - as he claims - design can mimic chance, the EF needs to be eliminative. If we started out with a design hypothesis, there would be no filter, because anything can be designed. E.g. any accidental death could be a murder.

So, there is the requirement of low probability for the event in question to occur by chance before anything with design enters into the picture. That’s the “complexity” part of “specified complexity”. The “specified” part is that a simple description is required.

Think about shuffling a deck of cards. There are 52! different possible shuffles, all in principle equally probable. But if somebody shows you a deck with the cards suit by suit and in ascending order within each suit, you are more likely to think that wasn’t the result of a random shuffle than if there is no recognizable pattern to the order. For most shuffles there is no short description; it’ll be an enumeration of each individual card, perhaps with an abbreviation somewhere such as “Two-Four of Clubs”, but rarely much better than that.
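The card example can be made concrete. Below, compressed size stands in as a rough proxy for "description length" (a common stand-in, not anything from Dembski); the deck encoding is my own:

```python
import math
import random
import zlib

# Information content of one particular shuffle, chosen uniformly
# from the 52! possibilities.
bits = math.log2(math.factorial(52))
print(f"one shuffle out of 52! carries {bits:.1f} bits")  # about 225.6

# The sorted deck has a short description; a typical shuffle does not.
deck = [f"{rank:02d}-{suit}" for suit in "CDHS" for rank in range(1, 14)]
sorted_size = len(zlib.compress(",".join(deck).encode(), 9))
random.seed(0)
random.shuffle(deck)
shuffled_size = len(zlib.compress(",".join(deck).encode(), 9))
print(sorted_size < shuffled_size)  # the ordered deck compresses better
```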

Comment #106620

Posted by Todd on June 19, 2006 11:34 AM (e)

pwe wrote:

Dembski’s way of looking at it is as a competition between design and chance. Since - as he claims - design can mimic chance, the EF needs to be eliminative. If we started out with a design hypothesis, there would be no filter, because anything can be designed. E.g. any accidental death could be a murder.

That only identifies design. It does not identify intelligent design. Evolution is just as capable of bringing about “designed” features (or seemingly so, depending on your point of view) as an intelligent designer. Just determining design still leaves open the question of whether something was designed by a supernatural intelligence or by natural evolutionary processes. In other words, it accomplishes nothing of significance. It could be said that it is a formalised way to determine whether something occurred by either chance or some sort of design-like process, but we don’t know the probabilities involved, so in a practical setting it is useless even for this. But it cannot accomplish Dembski’s goal of differentiating between a supernatural designer and design by evolution, because it does not at any point differentiate between the two. Nor will it ever, since the ID community steadfastly refuses to be pinned to a certain hypothesis or any detailed perspective, because they don’t want their conjecture to be tested and refuted.

Registered User wrote:

Huh? You must mean that you are willing to pretend that ID is scientifically relevant. Surely you know by now that it is not. I mean, I’ve known since its inception that it is not scientifically relevant. Hasn’t that been the case with most scientists?

You must work from the assumption that it is true in order to follow it through to its logical conclusion and thus reveal the inherent contradictions and logical inconsistencies present. You must operate under the assumption it is true in order to show that it is not.

Comment #106626

Posted by pwe on June 19, 2006 12:01 PM (e)

Todd wrote:

That only identifies design. It does not identify intelligent design. Evolution is just as capable of bringing about “designed” features (or seemingly so, depending on your point of view) as an intelligent designer.

Well, “design” and “intelligent design” are one and the same (and “intelligent design” and “stupid design” are as well). Design involves a designer. Translate “design” to “purpose” (and “designer” to “purposer”). That’s how I have had it explained.

You are right in that the phrase “nature’s design” was used in the 18th century, and “intelligent design” supposedly involves some kind of intelligence; but as indicated, Dembski always implies a designer by “design”.

Comment #106629

Posted by Bruce Thompson GQ on June 19, 2006 12:44 PM (e)

If you believe that there are better ways to use PT’s bandwidth, then perhaps you may consider contributing in such a manner?

Unfair shot. An open contribution policy would invite submissions from everyone and require an editor and reviewers. In short, PT would become an online journal and contributions would no longer be timely because of reviews and rewrites.

Delta Pi Gamma (Scientia et Fermentum)

Comment #106635

Posted by Tyrannosaurus on June 19, 2006 1:12 PM (e)

As shown by PvM, Nichols, and others, since the IDiots and their myriad explanations cannot discern between “natural” design and “intelligent” design, how much more irrelevant can their whole exercise be? There is only one possible reason for their continued push for this IDea, and that is religious proselytism, plain and simple. These religious Jihadists will stop at nothing and will bend any semblance of truth to reach their foregone conclusion of a Creator.

Comment #106655

Posted by Bill Gascoyne on June 19, 2006 2:59 PM (e)

If you believe that there are better ways to use PT’s bandwidth, then perhaps you may consider contributing in such a manner?

Unfair shot. An open contribution policy would invite submissions from everyone and require an editor and reviewers. In short, PT would become an online journal and contributions would no longer be timely because of reviews and rewrites.

How did we get from an invitation to R.U. to be a guest contributor to an open contribution policy?

Comment #106658

Posted by Torbjörn Larsson on June 19, 2006 3:15 PM (e)

pwe says:

“So, there is the requirement of low probability for the event in question to occur by chance before anything with design enters into the picture. That’s the “complexity” part of “specified complexity”. The “specified” part is that a simple description is required.”

You should have a look at http://scienceblogs.com/goodmath/2006/06/dembski… where computer scientist Mark Chu-Carroll explains what Dembski is nearly defining. Note “nearly” since Mark finds that Dembski hedges all definitions so they aren’t unchanging and explicit.

Dembski says “It follows that the collection of nonrandom sequences has small probability among the totality of sequences so that observing a nonrandom sequence is reason to look for explanations other than chance.”

This, according to Mark, “demonstrates a total lack of comprehension of what K-C theory is about, how it measures information, or what it says about anything”.

Dembski identifies the low probability events with complexity, and simple description with specification. Mark notes: “In information-theory terms, complexity is non-compressibility. But according to Dembski, in IT terms, specification is compressibility. Something that possesses “specified complexity” is therefore something which is simultaneously compressible and non-compressible.”

Note that SC already is a vacuous concept. But Dembski wants to make sure of it. “The only thing that saves Dembski is that he hedges everything that he says. He’s not saying that this is what specification means. He’s saying that this could be what specification means. But he also offers a half-dozen other alternative definitions - with similar problems. Anytime you point out what’s wrong with any of them, he can always say “No, that’s not specification. It’s one of the others.””
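Mark’s point about the tension can be illustrated with a compression proxy for Kolmogorov-Chaitin complexity (a rough stand-in, not the formal measure): no input scores as both highly compressible ("specified") and highly incompressible ("complex") at the same time.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size; lower = more compressible."""
    return len(zlib.compress(data, 9)) / len(data)

patterned = b"AB" * 5000        # 'specified': admits a one-line description
random_ish = os.urandom(10000)  # 'complex': essentially incompressible

print(compression_ratio(patterned))   # far below 1
print(compression_ratio(random_ish))  # about 1, or slightly above (overhead)
```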

Mark CC and PvM seem to be a nice tag team: Mark can explain why specific concepts like IC and SC are vacuous, PvM the whole IDiocy.

Comment #106668

Posted by Henry J on June 19, 2006 3:50 PM (e)

Re “Henry, be careful. You might NEVER wake up again.”

Oh. Yeah. Good point. Let’s then drop that thar paragraph from my previous reply. :)

Henry

Comment #106838

Posted by roophy on June 20, 2006 11:47 AM (e)

You must work from the assumption that it is true in order to follow it through to its logical conclusion and thus reveal the inherent contradictions and logical inconsistencies present. You must operate under the assumption it is true in order to show that it is not.

My best friend here in Spain is a brilliant but self-effacing lawyer. He tells me, to my surprise and indignation, that arguments based on “Reductio ad Absurdum” are not admitted in Spanish law. Yet I have a hard time finding any Cretinists here… :-)

BTW I am Rudy on [Debunk_Creation]

Comment #106843

Posted by pwe on June 20, 2006 12:02 PM (e)

Torbjörn Larsson wrote:

Dembski identifies the low probability events with complexity, and simple description with specification. Mark notes: “In information-theory terms, complexity is non-compressibility. But according to Dembski, in IT terms, specification is compressibility. Something that possesses “specified complexity” is therefore something which is simultaneously compressible and non-compressible.”

True; but that’s only to fool the evilutionists. By “complexity” Dembski means “improbability”. I have it from an IDist, so it’s true … but confusing :-)

Comment #106855

Posted by Bruce Thompson GQ on June 20, 2006 1:01 PM (e)

Source: Ryan Nichols, “Scientific content, testability, and the vacuity of Intelligent Design theory”, The American Catholic Philosophical Quarterly, 2003, vol. 77, no. 4, pp. 591–611 [emphasis added]

I’ve noticed authors are enamored with the word vacuous/vacuity. I wondered about the fascination with this word, whether it was something related to English as a second language or a love of big words. At first I was curious about this, but after some research I find its usage appropriate.

With the decreasing quantities of clean fresh water, conservation measures were instituted by the federal government. These standards went into effect in 1994 and mandated 1.6-gallon ultra low flush (ULF) toilets in new construction in the U.S. Many of the early ULF toilets were poorly designed and required several flushes to remove solid waste products. This led to dissatisfaction and was followed by several attempts to repeal the legislation. Since then a number of advancements have been made that have increased the efficiency of ULF toilets. One of these toilets, the Vacuity by Briggs, has been recommended by Consumer Reports.

One of the claims of the ID movement is its utility in exploiting biological features of organisms. When it comes to ULF toilets like the Vacuity, the question arises: what organism inspired the Vacuity toilet? I suggest it was the sea squirt. The sea squirt has several characteristics in common with this technological marvel of the bathroom.

Like the sea squirt, which contains a large branchial sac within its body for collecting water, the Vacuity is designed with a “tank within a tank”. In the sea squirt, the end of the gut joins the main tube where water is expelled. As water leaves the branchial sac and passes over this outlet, an area of low pressure is created, pulling water and waste material out of the simple intestinal tract. When flushing the Vacuity toilet, a “unique vacuum action forcefully pulls water and waste out of the bowl”.

So remember when you sit down on your new throne for a few minutes of relaxation that you have ID to thank for that quiet single flush at the end.

Delta Pi Gamma (Scientia et Fermentum)

Comment #106932

Posted by Moses on June 20, 2006 4:39 PM (e)

I can explain that ID in one word:

Poof!

Comment #106944

Posted by Torbjörn Larsson on June 20, 2006 6:19 PM (e)

pwe,
I am sympathetic to the view that complexity is contingent. For example, in neuroscience Tononi proposes a measure of neural complexity based on the mutual information between subsystems. It makes neural complexity maximal between completely random and completely regular systems, which is consistent with what he observes in his application. ( http://www.striz.org/docs/tononi-complexity.pdf )

But Mark makes the valid point that if Dembski uses K-C theory to specify specification, he must also use it to define complexity. Otherwise he is inconsistent there too.

Comment #107080

Posted by pwe on June 21, 2006 8:31 AM (e)


Torbjörn Larsson wrote:

pwe,
I am sympathetic to the view that complexity is contingent. For example, in neuroscience Tononi proposes a measure of neural complexity based on the mutual information between subsystems. It makes neural complexity maximal between completely random and completely regular systems, which is consistent with what he observes in his application. ( http://www.striz.org/docs/tononi-complexity.pdf )

But Mark makes the valid point that if Dembski uses K-C theory to specify specification, he must also use it to define complexity. Otherwise he is inconsistent there too.

Ok, I see your point - and I was of the same opinion as you (and Mark CC), until I gave up that idea :-)

The way it appears to work is like this: there are 2^N strings of length exactly N, and there are 2^(k+1)-1 strings of length at most k. Since 2^(k+1)-1 < 2^N when k < N, we cannot compress all strings of length N to strings of length at most k. This is standard K-C stuff. The K-C complexity of a string is the length of the shortest program (including input) that can produce that string (on your TM of choice), which can be translated to compressibility (more precisely: a self-unpacking compressed version of a string). The higher the K-C complexity, the larger a value of k we will need. The larger the value of k, the more strings can be compressed into strings of length at most k. Assuming compressible strings to be chosen at “random” (with equal probability), this in turn means that each compressible string has a lower probability of being chosen. So increasing K-C complexity translates into decreasing probability. So far, so good.

The magic trick then is as follows: we have a string of length N that can be compressed into a string of length at most k with k < N; BUT HOW LOW CAN WE GO? That’s the “specified” thingy. The complexity thingy gives an upper limit, and the specified thingy gives a lower limit. The more specified the string is (the lower the lower limit), the less random the string. The less random the string is, the more likely the string was chosen by deliberate design.
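The counting step above can be checked directly: there are 2^N binary strings of length N but only 2^(k+1)-1 of length at most k, so for k < N most strings cannot be compressed. A small numeric sketch:

```python
# Count binary strings of length at most k:
# 2^0 + 2^1 + ... + 2^k = 2^(k+1) - 1 (including the empty string).
def strings_up_to(k):
    return 2 ** (k + 1) - 1

N, k = 100, 90
fraction = strings_up_to(k) / 2 ** N
# Even allowing descriptions only 10 bits shorter than the string itself,
# fewer than 1 in 500 strings of length N can have one.
print(fraction)
```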

Yes, it does require a few logical jumps to get there; but it’s how it works :-)

Bruce Thompson GQ wrote:

So remember when you sit down on your new throne for a few minutes of relaxation that you have ID to thank for that quiet single flush at the end.

LOL! Don’t underestimate the power of ID!

Comment #107312

Posted by peter on June 21, 2006 8:38 PM (e)

“Intelligent” design isn’t science, it’s magic. That’s your lesson, no further instruction necessary.

Comment #107458

Posted by Salvador T. Cordova on June 22, 2006 2:33 PM (e)

PvM,

In case you weren’t aware, one of my posts yesterday was held up in the moderation queue.

Salvador

Comment #107504

Posted by 'Rev Dr' Lenny Flank on June 22, 2006 5:56 PM (e)

Wow, first Donald, now Sal. FL can’t be far behind. Maybe even Heddle will pop in, and make it a grand reunion.

But hey, Sal, the last hundred or so times that you were here, you tucked tail and ran without answering a few simple questions. I have 31 simple questions for you, but I’m happy to take them two or three at a time.

Let’s start with:

What, precisely, about “evolution” is any more “materialistic” than, say, weather forecasting or accident investigation or medicine? Please be as specific as possible.

I have never, in all my life, ever heard any weather forecaster mention “god” or “divine will” or any “supernatural” anything, at all. Ever. Does this mean, in your view, that weather forecasting is atheistic (oops, I mean, “materialistic” and “naturalistic” —- we don’t want any judges to think ID’s railing against “materialism” has any RELIGIOUS purpose, do we)?

I have yet, in all my 44 years of living, to ever hear any accident investigator declare solemnly at the scene of an airplane crash, “We can’t explain how it happened, so an Unknown Intelligent Being must have dunnit.” I have never yet heard an accident investigator say that “this crash has no materialistic causes — it must have been the Will of Allah”. Does this mean, in your view, that accident investigation is atheistic (oops, sorry, I meant to say “materialistic” and “naturalistic” — we don’t want any judges to know that it is “atheism” we are actually waging a religious crusade against, do we)?

How about medicine? When you get sick, do you ask your doctor to abandon his “materialistic biases” and to investigate possible “supernatural” or “non-materialistic” causes for your disease? Or do you ask your doctor to cure your naturalistic materialistic diseases by using naturalistic materialistic antibiotics to kill your naturalistic materialistic germs?

Since it seems to me as if weather forecasting, accident investigation, and medicine are every bit, in every sense, just as utterly completely totally absolutely one-thousand-percent “materialistic” as evolutionary biology is, why, specifically, is it just evolutionary biology that gets your panties all in a bunch? Why aren’t you and your fellow Wedge-ites out there fighting the good fight against godless materialistic naturalistic weather forecasting, or medicine, or accident investigation?

Or does that all come LATER, as part of, uh, “renewing our culture” … . . ?

Oh, and hey Sal, why is it that all of DI’s funding comes from fundamentalist Christian political groups and Reconstructionist nutjobs? Why is it that the Templeton Foundation, which focuses on issues of science and religion (right up ID’s alley, eh?) won’t fund DI?

Time to tuck tail and run again, Sal.

Comment #107538

Posted by Torbjörn Larsson on June 22, 2006 9:43 PM (e)

pwe says:

“The magic trick then is as follows: we have a string of length N that can be compressed into a string of length at most k with k < N; BUT HOW LOW CAN WE GO? That’s the “specified” thingy. The complexity thingy gives an upper limit, and the specified thingy gives a lower limit. The more specified the string is (the lower the lower limit), the less random the string. The less random the string is, the more likely the string was chosen by deliberate design.”

Ok, I see your point too. You are saying that we could decode this as a minmax solution.

But Dembski isn’t doing this - he hasn’t given any condition for selecting the minmax. Mark’s analysis suggests he believes he is asking for a low-probability compressible string without realising that this could be a minmax problem.

And if he realises that this is what he asks for - is it what he wants to have? And how should the minmax be selected - do different designers have different requirements? How do the designers design? (As if Dembski would want to answer that!)

As it is, Mark’s analysis of this being inconsistent stands.

Comment #107540

Posted by Torbjörn Larsson on June 22, 2006 9:49 PM (e)

Maybe I should point out that for Dembski to correct this, he must not only specify how the designer works, he must also realise that he has chosen the wrong definitions for information and complexity. If he does that, the result would be that he is both falsifying the designer idea and confirming instead of rejecting evolution.

But yes, he would go from inconsistent ID to consistent scientific theory. As if!

Comment #107589

Posted by pwe on June 23, 2006 7:18 AM (e)

Torbjörn Larsson wrote:

Ok, I see your point too. You are saying that we could decode this as a minmax solution.

Yep, that’s what I’m saying.

But Dembski isn’t doing this - he hasn’t given any condition for selecting the minmax. Mark’s analysis suggests he believes he is asking for a low-probability compressible string without realising that this could be a minmax problem.

True, Dembski appears not to have actually understood what he is writing. But since nobody else has understood what Dembski means by “specified complexity”, that simply makes him appear almost human.

And if he realises that this is what he asks for - is it what he wants to have?

Good question! But I cannot answer it - to me Dembski’s articles are too fuzzy to gauge what he really wants. It appears as if he thinks he has already got what he wants - but unfortunately he is unable to communicate what it is he has got.

And how should the minmax be selected - do different designers have different requirements? How do the designers design? (As if Dembski would want to answer that!)

LOL! No, Dembski doesn’t want to get involved in discussions about the designer - not even if there’s one or many. He is quite careful to keep the designer’s (or designers’) identity, capabilities and methods in the dark. Maybe he doesn’t want to scare his UFO-believing fans away? Crop circles exhibit specified complexity, you know.

As it is, Mark’s analysis of this being inconsistent stands.

Well, at least Dembski is either unclear or confused - that much I’ll agree with.

Maybe I should point out that for Dembski to correct this, he must not only specify how the designer works, he must also realise that he has chosen the wrong definitions for information and complexity. If he does that, the result would be that he is both falsifying the designer idea and confirming instead of rejecting evolution.

I am unsure about what exactly you mean here. Please elaborate.

But yes, he would go from inconsistent ID to consistent scientific theory. As if!

And strangely enough, that’s what he doesn’t want to do! As Dembski sees it, specified complexity is positive evidence (and not an argument from ignorance) and therefore ID is falsifiable: you only have to prove that something both has specified complexity and is verifiably not designed. He doesn’t want any verification/falsification of the designer.

Comment #109612

Posted by Torbjörn Larsson on July 1, 2006 11:48 AM (e)

The midsummer holiday was long and eventful, apparently so was this thread.

“that simply makes him appear almost human.”

It also makes him wrong.

“”Maybe I should point out that for Dembski to correct this, he must not only specify how the designer works, he must also realise that he has chosen the wrong definitions for information and complexity. If he does that, the result would be that he is both falsifying the designer idea and confirming instead of rejecting evolution.”

I am unsure about what exactly you mean here. Please elaborate.”

If Dembski corrected his definitions to align with, instead of conflict with, normally used definitions, they might be useful. He would for example see that his law of information conservation would be aligned with entropy and directly point to the need for environmental input of information instead of designer input. They would also support evolution, since evolution already describes, or purports to describe, what he purports to describe.

“He doesn’t want any verification/falsification of the designer.”

Agreed.