Richard B. Hoppe posted Entry 1529 on September 30, 2005 01:35 PM.
Trackback URL: http://www.pandasthumb.org/cgi-bin/mt/mt-tb.fcgi/1524

Writing for the Discovery Institute, Casey Luskin has dissed evolutionary research performed using the Avida research platform. (Luskin is a new “program officer” for the DI.) As I wrote last year, computer models employing evolutionary mechanisms are a thorn (or maybe a dagger?) in the side of ID creationists. The models allow testing evolutionary hypotheses whose tests in “real” life would take decades to run or would be impractical in the wet lab or field. They also allow close control of relevant variables – mutation rates, kinds of mutations, the topography of the fitness landscape, and a number of others – enabling parametric studies of the effects of those variables on evolutionary dynamics. A number of publications using Avida (see also here) have established that it is a valuable complement to wet lab and field studies in doing research on evolutionary processes.

In his testimony at the Dover trial on September 28, Rob Pennock described a study that has particularly irritated ID creationists, The evolutionary origin of complex features, published in Nature in 2003. In that paper Lenski, Ofria, Pennock and Adami showed that there are circumstances under which structures that meet Behe’s operational criterion for irreducible complexity (IC) – loss of function due to knockout – can evolve by random mutations and selection. Since IC is the core negative argument of ID – IC structures and processes allegedly cannot evolve by incremental “Darwinian” processes – the demonstration that they can evolve by Darwinian processes knocks out IC as a marker of intelligent design. And since IC is a special case of Dembski’s Specified Complexity, it also weakens Dembski’s core argument.

Various ID creationists have criticized the Lenski, et al., study on a variety of specious grounds, and I’ve discussed those critiques in several places, including an extended discussion here. Luskin’s critique is shallower and less informed than some I’ve read. I’ll hit a few low points in his critique.

Luskin wrote

Pennock and his other co-authors claim the paper “demonstrate[s] the validity of the hypothesis, first articulated by Darwin and supported today by comparative and experimental evidence, that complex features generally evolve by modifying existing structures and functions” (internal citations removed). Today in court, Pennock discussed the paper today asserting that it was a “direct refutation” of irreducible complexity and a “general test” of Darwinian theory.

I do not have access to Pennock’s testimony at the moment, but that’s about what I’d have said. Co-option and modification of existing structures is a ubiquitous phenomenon in evolution at levels ranging from molecular mechanisms to high-level structures like wings. And in the Lenski, et al., study, sure enough, those same phenomena were observed occurring under the Darwinian mechanisms – reproduction, heritable variation, competition, and mutation.

Luskin spent a good deal of space exploring the conjecture that Pennock’s co-authorship of the Lenski, et al., paper was a conspiracy to get an expert on Behe’s irreducible complexity involved without directly citing Behe. He wrote

I can think of no reason why a philosopher, who otherwise never authors technical papers in scientific journals, whose career specializes in rebutting ID, should be a co-author a [sic] technical research paper in a top technical science journal on the evolutionary origin of biological complexity, a claim which ID challenges, unless that paper somehow required some expertise on ID. Indeed, this paper now appears strategically arranged: is it mere coincidence that this paper appeared as a primary exhibit in the first trial against teaching ID? The reality is that Avida study, in which Pennock was third author, has much to do with strategically rebutting ID.

Looks more to me like it empirically rebuts irreducible complexity.

In his conspiracy theorizing Luskin neglected to mention that in addition to his appointment in philosophy, Pennock is also a Member of the Digital Life Laboratory at Michigan State, along with two of the other authors, Charles Ofria and Richard Lenski. Chris Adami is also associated with the Devolab as a collaborator. The work published in the Lenski, et al., paper is well within Pennock’s professional purview: it’s by four colleagues associated with the same lab. It wouldn’t surprise me at all to find that Pennock suggested the study’s main outlines to his co-authors, since IDists use the notion of irreducible complexity as their primary weapon in their culture war against evolutionary theory and Pennock is interested in that effort. Knowing something about Avida, I have no problem imagining that Pennock saw the Avida platform, a main tool in the Devolab, as an excellent tool to do some research on the question of the evolvability of IC structures.

That they didn’t mention intelligent design isn’t amazing. As far back as Darwin the question of how those kinds of structures could evolve has been raised, so Behe contributed no new issue to address. Lenski, et al., anchored their paper directly to Darwin in the first paragraph. Since the ID creationists have not published anything in the professional literature of biology to which to refer in the context of the Lenski, et al., paper, it seems strange to complain that they didn’t reference ID. If ID had some actual professional literature to cite one might sympathize, but it doesn’t. Luskin’s conspiracy theory is more than a little incongruous coming from the socio-political movement that didn’t bat an eyelash when the ID-sympathetic editor of an obscure taxonomic journal slid around the publishing society’s editorial guidelines to get Meyer’s Hopeless Monster published. Do I detect projection here?

Then Luskin repeated a common ID creationist criticism by writing

Pennock asserted on the witness stand that this study accurately modeled biological reality. Well, if biological reality was pre-programmed by an intelligence to evolve certain simple logic functions, then he’s right. Avida programmers knew that EQU was easily evolvable from the proper combination of only 5 primitive logic operations before the simulation even began. This is called “evolution by intelligent design,” because the environment seems literally pre-programmed to evolve the desired phenotype.

In fact, Avida programmers had no idea whether digital critters capable of performing input-output mappings corresponding to EQU could evolve in Avida. That human programmers could write an Avida instruction string that performed the EQU mapping is irrelevant to the question of whether it could evolve via Darwinian mechanisms. Programmers writing code is ID’s position, not evolution’s. (Incidentally, Luskin misrepresents what actually evolved in the Lenski, et al., experiments. “EQU” didn’t evolve any more than flight evolved in animals like birds and bats. Morphological structures that enable flight evolved; flight didn’t. Similarly, assembly language programs that performed the input-output mapping corresponding to EQU evolved, not EQU itself. That’s not a trivial distinction. A given function can be performed by a number of different structures.)

Further, while human programmers could write an Avidian critter that performs the input-output mapping corresponding to EQU using 5 “primitive” logic functions, the 23 lineages that evolved to perform the mapping in the main condition of the Lenski, et al., study were all different, and in addition to EQU they performed 17 different combinations of the “primitive” functions, ranging from four to eight of them. None of them evolved the ‘EQU’ program the human programmers wrote. That phenomenon extends to other aspects of the Avida critters. For example, running Avida with no fitness landscape, so that selection is on replication efficiency alone, evolves critters that perform self-replication in fewer instructions than any human-written program. Recall Leslie Orgel’s Second Rule: Evolution is smarter than [programmers] are.
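(For concreteness, here is the sense in which “5 primitive logic operations” suffice: EQU, bitwise equality, is just NOT-XOR, and the standard five-gate XNOR construction builds it from five two-input NANDs. The Python sketch below is my own illustration of the input-output mapping the evolved critters perform; it is not Avida assembly.)

```python
# EQU (bitwise equality) composed from five NAND operations -- the
# standard five-gate XNOR construction. Illustrates the input-output
# mapping only; this is not Avida code.
import random

MASK = 0xFFFFFFFF  # 32-bit words, for concreteness

def nand(a, b):
    return ~(a & b) & MASK

def equ_from_nands(a, b):
    n1 = nand(a, b)      # gate 1
    n2 = nand(a, n1)     # gate 2
    n3 = nand(b, n1)     # gate 3
    x = nand(n2, n3)     # gate 4: x == XOR(a, b)
    return nand(x, x)    # gate 5: NOT x == EQU(a, b)

# Sanity check against the direct definition EQU = NOT (a XOR b)
for _ in range(1000):
    a, b = random.getrandbits(32), random.getrandbits(32)
    assert equ_from_nands(a, b) == ~(a ^ b) & MASK
```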

Luskin then wrote a particularly error-filled paragraph.

Pennock seemed impressed that the digital organisms “invented” many creative ways of performing EQU. But the flaw of the simulation lies therein: EQU was destined to evolve because the addition of each logic function greatly improved the fitness of the digital organism. Pre-programmed into the Avida environment were functionally advantageous digital mutations which were guaranteed to keep the digital organisms alive and doing interesting things. Thus, if one assumes that anything more than extremely minor cases of irreducible complexity never exist, then Pennock’s program show evolution can work.

There are four main problems with Luskin’s representation in those four sentences. First, “functionally advantageous mutations” were not “pre-programmed” into the Avida environment. Random mutations occurred, some of which were deleterious (in the sense of decreasing reproductive fitness) or even lethal, some were neutral, and some were advantageous. Gee. That’s just what a slew of biological research teaches us: mutations come in three basic flavors.
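(A toy sketch, my own and not Avida: once fitness depends on only part of the genome, random point mutations sort themselves into exactly those three flavors without anyone pre-programming them.)

```python
# Toy model: fitness counts the 1-bits in the first 20 of 50 positions,
# so a random bit flip can lower fitness (deleterious), leave it alone
# (neutral), or raise it (advantageous).
import random

random.seed(1)
GENOME_LEN, CODING_LEN = 50, 20
genome = [random.randint(0, 1) for _ in range(GENOME_LEN)]
fitness = lambda g: sum(g[:CODING_LEN])

counts = {"deleterious": 0, "neutral": 0, "advantageous": 0}
for _ in range(10000):
    mutant = list(genome)
    mutant[random.randrange(GENOME_LEN)] ^= 1   # one random point mutation
    d = fitness(mutant) - fitness(genome)
    kind = "deleterious" if d < 0 else "neutral" if d == 0 else "advantageous"
    counts[kind] += 1
print(counts)   # all three flavors appear
```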

Second, Luskin claimed that those mutations were “… guaranteed to keep the digital organisms alive …”. That’s flatly false. Tens of thousands of digital organisms die in the course of an Avida run under the conditions Lenski, et al., used. Some die because they fail to replicate due to lethal mutations in their replication code, and some die because they’re killed – over-written – by a reproducing neighbor regardless of their advantageous mutations. Thousands of species emerge, flourish for a while, and then go extinct, and thousands of lineages go extinct. There are no guarantees at all.

Third, it is not necessary that “…the addition of each logic function greatly improve[s] the fitness …”. While the fitness landscape defined by the values assigned to the various logic functions in the Lenski, et al., study was fairly steep from simple to complex logic functions, a number of Avida runs I have done with a flatter topography produce lineages that also perform the most complex logic functions. It just takes longer because the dynamics of lineages evolving on flatter landscapes are slower. So long as there is at least some net selective advantage for performing a more complex function, more complex functions can evolve.
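(To make the “it just takes longer” point quantitative, here is a toy replicator-dynamics sketch, my own illustration rather than an Avida run: a variant with relative advantage s spreads for arbitrarily small positive s; a flatter landscape only slows it down.)

```python
# Standard haploid selection recursion: p' = p(1 + s) / (1 + p*s),
# where p is the variant's frequency and s its selective advantage.
def generations_to_fixation(s, p0=0.001, threshold=0.99):
    p, gens = p0, 0
    while p < threshold:
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens

for s in (0.5, 0.1, 0.01):
    print(f"s = {s:<4} -> ~{generations_to_fixation(s)} generations")
# Steep landscape (s = 0.5): a few dozen generations.
# Much flatter landscape (s = 0.01): on the order of a thousand.
```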

Finally, do we now have a distinction between micro-IC and macro-IC? Luskin’s reference to “extremely minor cases of irreducible complexity” suggests that we have to make that distinction, but where the boundary might be is not clear. Do I hear echoes of “microevolution is fine, but not macroevolution”?

I’ll note here that what one might reasonably believe to be cases of irreducible complexity, like a three-legged stool which cannot function if any of the three legs or the seat is ‘knocked out’, are no longer IC. William Dembski has recently added two new operational tests for ICness. In addition to Behe’s original knockout operational criterion, now Dembski tells us we must also determine (1) that a simpler structure cannot perform the same function, and (2) after a successful knockout one must show that no adaptation and/or rearrangement of the remaining parts can perform the original function. As Dembski tells us, that means that a three-legged stool is not IC since a solid block of wood can keep one’s butt off the ground. I have argued elsewhere that Dembski’s additional operational criteria mean the Death of Irreducible Complexity (see also here and Mark Perakh’s post here). On Dembski’s new and improved definition, not even Behe’s mousetrap is IC.

Finally, Luskin claimed that a control condition in the Lenski, et al., paper that employed a fitness landscape that was flat across logic functions except for EQU showed that

… when there is no selective advantage until you get the final function, the final function never evolved! This accurate modeling of irreducible complexity, for there is no functional advantage until you get some to some minimal level of complexity—that’s IC in a nutshell. Yet the study found that the evolution of such a structure was impossible.

Of course they claim this is what they “expected,” but without using the words “irreducible complexity,” they just demonstrated that high levels of irreducible complexity are unevolvable.

I’ll be darned. Luskin is back to “If it can evolve incrementally by indirect routes involving intermediates that are themselves functional, it ain’t IC. If it can’t evolve, it is IC, and by the way, IC shows that it can’t evolve.” In other words, we’re back to the “We don’t know how it could have, so it couldn’t have, and therefore ID” style of argument. And there’s that “high levels of irreducible complexity” phrase – we’ve got to deal with micro-IC and macro-IC again, too. Stuff that’s got just a little bit of IC can evolve, but stuff that’s got a whole lot of IC can’t. When did irreducible complexity become a scalar variable? Luskin must be resurrecting Behe’s even more question-begging “evolutionary” definition of irreducible complexity. I guess it’s handy to have a series of alternative definitions of a core concept ready to hand.

In a way I feel sorry for Luskin. It can’t be easy writing about genuine research when you have no clue what it did and what it means. On the other hand, he has plenty of role models for that behavior at the Discovery Institute, and I have no doubt that he’ll learn fast.

RBH

Commenters are responsible for the content of comments. The opinions expressed in articles, linked materials, and comments are not necessarily those of PandasThumb.org. See our full disclaimer.

Comment #50314

Posted by SteveF on September 30, 2005 12:43 PM (e)

Finally, do we now have a distinction between micro-IC and macro-IC? Luskin’s reference to “extremely minor cases of irreducible complexity” suggests that we have to make that distinction, but where the boundary might be is not clear.

Surely something is IC or it isn’t? Isn’t that the whole point of IC?

Comment #50317

Posted by Adam Ierymenko on September 30, 2005 12:53 PM (e)

I met some of the MSU/CalTech Digital Life Lab folks at ALife9 in Boston. They’re a nice bunch of folks.

Pennock is working on a version of Avida called Avida-ED which is designed for educational use.

http://www.msu.edu/~pennock5/research/Avida-ED.html

If he or one of the others is reading this, let me suggest the following:

Build the Avida-ED lesson plan specifically to refute, in sequence, each claim of ID. Don’t bother pointing this out explicitly… just write the lessons using ID’s antievolution tracts as a guide. It wouldn’t be hard, since all of ID’s basic claims (irreducible complexity, no free lunch, conservation of information, etc.) are easily refuted with a system like this.

It would also result in an excellent lesson about evolution, since many of ID’s claims rely on exploiting common misunderstandings. So, by debunking them, you would be sure to hit each major misconception.

Here’s another even simpler system that refutes a lot of Dembski’s nonsense about “conservation of information”:

http://www.lecb.ncifcrf.gov/~toms/paper/ev/

Comment #50322

Posted by Adam Ierymenko on September 30, 2005 1:22 PM (e)

Luskin wrote:

Pennock asserted on the witness stand that this study accurately modeled biological reality. Well, if biological reality was pre-programmed by an intelligence to evolve certain simple logic functions, then he’s right. Avida programmers knew that EQU was easily evolvable from the proper combination of only 5 primitive logic operations before the simulation even began. This is called “evolution by intelligent design,” because the environment seems literally pre-programmed to evolve the desired phenotype.

This is going to be the major challenge to things like Avida. In some ways, it’s valid: Avida is not a real biological system. However, to levy this as a criticism of Avida, the IDers have to radically weaken their own arguments.

Behe and Dembski both make *strong* claims regarding the *inability* of evolutionary processes to generate certain results. Behe claims that evolutionary processes cannot generate structures that exhibit what he calls irreducible complexity. Dembski claims a kind of “law of conservation of information.” Trying to pin down what Dembski actually means is like nailing jelly to the wall as he constantly changes his definitions, but my understanding is that Dembski is claiming that no system can show an increase in ordered information without an external conscious designer manually adding information to the system. More specifically, Dembski defines something called complex specified information as information that lies beyond the universal probability bound. He claims that the presence of functional CSI implies design as the only explanation.

Both of these constitute claims that there exists a scientific law forbidding the evolutionary origin of X. Dembski’s is actually the strongest claim: he is essentially claiming a law of conservation. A conservation law is a *very* strong claim about the nature of the universe… perhaps one of the strongest types of claims that science can make.

When you propose a universal scientific law restricting X, what you are saying is that absolutely no system, natural or synthetic, can do X. Example: the law of conservation of energy. Neither nature nor man can produce a perpetual motion machine.

So if there is a law of conservation of information or a law prohibiting the evolution of irreducibly complex causal structures, then no system, natural or synthetic, should be able to show information increase and/or the evolution of irreducible structures.

Artificial life systems (and even some simple genetic algorithms) have demonstrated both. This flatly disproves both of these as universal laws. Case closed. It doesn’t matter that the systems are synthetic. Natural laws apply to all systems.
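(For instance, here is a toy genetic algorithm in the spirit of Tom Schneider’s ev, a sketch of the idea rather than his actual code: the summed per-position Shannon information of a population of random binary strings rises from roughly zero toward its maximum under nothing but mutation and selection.)

```python
# Information gain under mutation + selection alone (toy sketch in the
# spirit of Schneider's ev; not his code).
import math, random

random.seed(0)
L, N, MU = 32, 100, 0.01          # string length, population size, mutation rate
TARGET = [random.randint(0, 1) for _ in range(L)]

def info_bits(pop):
    # Binary alphabet: information at a site is 1 - H(f) bits, where f is
    # the frequency of 1s at that site; summed over all sites.
    total = 0.0
    for i in range(L):
        f = sum(g[i] for g in pop) / len(pop)
        h = 0.0 if f in (0.0, 1.0) else -(f * math.log2(f) + (1 - f) * math.log2(1 - f))
        total += 1.0 - h
    return total

fit = lambda g: sum(a == b for a, b in zip(g, TARGET))
pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
for gen in range(201):
    if gen % 50 == 0:
        print(f"gen {gen:3d}: {info_bits(pop):5.1f} bits (max {L})")
    parents = sorted(pop, key=fit, reverse=True)[: N // 2]   # selection
    pop = [[b ^ (random.random() < MU) for b in random.choice(parents)]
           for _ in range(N)]                                # mutation
```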

In order to criticize artificial life as inapplicable to ID, the IDers have to weaken their arguments to say “oh, well, I guess some evolutionary systems can show these effects, but natural systems are highly unlikely to do so if they have not been designed.” This weakens their arguments from proposals of fundamental scientific law to arguments from incredulity. It also puts a strict time limit on their arguments: from now until someone successfully shows what artificial life has shown in real chemistry. Tick, tick, tick… (I expect to see this within 10 years, if it hasn’t happened already and I don’t know about it.)

Or, it puts them in the theistic evolution camp, since one of the claims of theistic evolution is “yes, evolution might be responsible for life, but it could not have done so without a guiding intelligence.” Theistic evolution is one of the things that the Discovery Institute was contracted by Howard Ahmanson et al. to specifically refute, so their financiers are not going to be happy if they end up as theistic evolutionists.

By the way: Avida is actually only the most well-publicized piece of artificial life work that is devastating to ID – in reality there is artificial life work going back to the early 90s that demolishes ID.

Comment #50325

Posted by shiva on September 30, 2005 1:38 PM (e)

How about dissing this one? Casey Luskin might want to pick his battles carefully. The volume of scientific research coming out of even one ongoing graduate research program is enough to swamp all the junk churned out by the DI since its inception.

Scientists Uncover Rules that Govern the Rate of Protein Evolution
http://pr.caltech.edu/media/Press_Releases/PR12737.html

Comment #50339

Posted by JY on September 30, 2005 2:48 PM (e)

For William Dembski’s (and Luskin’s) claims to have any merit whatsoever, they MUST be able to submit the claims to a test like this:

Given a data set (i.e. a long string) that represents a computer program (such as Avida), and the inputs to that (a population of digital organisms and a fitness landscape), measure (algorithmically) the ‘CSI’ (or whatever he asserts is ‘conserved’) in that data set.

Given a 2nd data set that also represents a computer program (Avida), and any inputs to that program (a different population of organisms, and the same fitness landscape), measure (algorithmically) the ‘CSI’ in that data set.

For any two such data sets, if the measure of ‘CSI’ in the first data set is less than the measure of ‘CSI’ in the 2nd, then that implies that the 2nd data set can’t have been produced by merely running the first data set (as a program with inputs), since that would violate the ‘law of conservation’. If Dembski’s algorithm can always distinguish the evolutionary descendants from the ancestors (because their ‘CSI’ is always less), then, hey, he’s got something. Otherwise his claims are groundless. There’s no reason, other than CSI being an empty concept that is therefore impossible to measure, that Dembski shouldn’t meet this challenge.
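(The whole test reduces to one comparison. In the sketch below, measure_csi is a deliberate placeholder: supplying a working, algorithmic version of it is precisely the unmet challenge.)

```python
# Skeleton of the proposed test. measure_csi is hypothetical by
# construction -- Dembski would have to supply it.
def conservation_violated(measure_csi, ancestral_data, evolved_data):
    """True if evolution produced more 'CSI' than it started with, i.e.
    the claimed conservation law failed on data of known provenance."""
    return measure_csi(evolved_data) > measure_csi(ancestral_data)
```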

Comment #50344

Posted by RBH on September 30, 2005 3:32 PM (e)

Adam Ierymenko wrote

In order to criticize artificial life as inapplicable to ID, the IDers have to weaken their arguments to say “oh, well, I guess some evolutionary systems can show these effects, but natural systems are highly unlikely to do so if they have not been designed.”

Another way to put that is that the various evolutionary simulators, Avida, Tom Schneider’s ev and others, have demonstrated that the purely mechanical processes invoked by evolutionary theory (random mutations, selection, etc.) can generate all the phenomena that IDists claim demonstrate intelligent design. That pushes them to the “Well, what designed those mechanisms?” question. That’s the theistic evolution move.

But that move is anathema to the core IDists (and to their straight creationist brethren) for several reasons. First, of course, there are stochastic contingencies associated with the evolutionary process that make the goal-directedness so beloved of IDists problematic. Evolution incorporating stochastic contingencies may come up with similar-functioning outcomes (e.g., the 23 different outcomes in the Lenski, et al., study), but can’t guarantee that one particular outcome will occur. Since IDists and their creationist brethren very badly want that one outcome – human beings – to be intended by the intelligent designer, a process incorporating stochastic contingencies can’t guarantee that a desired goal will occur and is therefore suspect for them.

Second, the principal IDists, with perhaps the exception of Behe, are committed to an interventionist intelligent designer, one that intermittently pokes a finger into the process to alter its course. Pushing the source of the “information” (whatever meaning that term has for IDists this week) in the biome out into the selective environment removes the interventions from (biological) view. You can’t look at biology and “see” evidence of interventions, you have to do physics or something to find evidence for interventions. That’s a chancier move, though the cosmological IDists (the “Privileged Planeteers”) make that move. But it still suffers from the stochastic contingency problem in biology: tinkering with the physical environment does not guarantee that human beings will result: the divine tinkering might have produced an intelligent T. rex in the image of … well, of what?

========================

JY wrote

Given a data set (i.e. a long string) that represents a computer program (such as Avida), and the inputs to that (a population of digital organisms and a fitness landscape), measure (algorithmically) the ‘CSI’ (or whatever he asserts is ‘conserved’) in that data set.

Given a 2nd data set that also represents a computer program (Avida), and any inputs to that program (a different population of organisms, and the same fitness landscape), measure (algorithmically) the ‘CSI’ in that data set.

I posed a similar sort of challenge to the IDists on ARN a few months ago. Not surprisingly, none took it up (including Salvador, who didn’t find it “interesting”). I find it very telling that IDists who advertise a methodology for detecting design (IC, SC, CSI, or whatever it’s called) decline the invitation to validate and calibrate their methodology on phenomena of known provenance. It almost looks like they don’t care whether it’s reliable and valid. Does that surprise anyone? Luskin was identified in a DI press release as a “scientist”. Perhaps that is a worthy extramural project he could do as a new employee to get in good with his new bosses. Or maybe not.

RBH

Comment #50354

Posted by jeffw on September 30, 2005 4:23 PM (e)

Maybe I’m missing something, but what does CSI say about information on a higher level, such as human creativity? For example, if I suddenly had an inspiration and wrote a symphony this weekend, where did that symphony come from? Does Dembski claim that it was pre-specified in the environment that produced me, by some kind of designer? Or was it planted in my brain by some kind of supernatural, (perhaps undetectable “quantum”) process? If so, then how can there be Free Will, a concept which, paradoxically, most creationists would say is god-given?

Comment #50355

Posted by Adam Ierymenko on September 30, 2005 4:30 PM (e)

RBH wrote:

I posed a similar sort of challenge to the IDists on ARN a few months ago. Not surprisingly, none took it up (including Salvador, who didn’t find it “interesting”). I find it very telling that IDists who advertise a methodology for detecting design (IC, SC, CSI, or whatever it’s called) decline the invitation to validate and calibrate their methodology on phenomena of known provenance. It almost looks like they don’t care whether it’s reliable and valid. Does that surprise anyone? Luskin was identified in a DI press release as a “scientist”. Perhaps that is a worthy extramural project he could do as a new employee to get in good with his new bosses. Or maybe not.

I work quite a bit on computer-based artificial life systems and contribute to a blog about that and related topics (click my name :).

I’m quite certain based on what I’ve seen that they have absolutely no interest in doing this… not because they wouldn’t want to test ID, but because at least a few of them understand things well enough to know that the result would… well… not be the kind of result that they would want. Doing an experiment that shows CSI *increasing* as a result of an externally unguided evolutionary process for several possible candidate ways of measuring CSI is not going to get you a lot of praise as an “intelligent design scientist.”

Another silly thing that they say repeatedly about things like Avida is that they can’t possibly work cause they violate the second law of thermodynamics. This is very silly. As far as I know, though I cannot personally verify this, the computers that ran Lenski et al.’s experiments were consuming energy from the power grid. I would guess this to be the case, as my own artificial life work has always required something to be plugged into the wall.

Comment #50358

Posted by RBH on September 30, 2005 4:39 PM (e)

Interesting blog. Thanks for the link. See my brief bio.

RBH

Comment #50360

Posted by CJ O'Brien on September 30, 2005 4:44 PM (e)

Another silly thing that they say repeatedly about things like Avida is that they can’t possibly work cause they violate the second law of thermodynamics.

The ones who say that haven’t been informed that they’re supposed to be using code-names these days.

What I mean is that, as you pointed out in another comment above, what they need desperately for any of these arguments to hold water is some “law of conservation.” 2LoT used to be their law of choice in the good ol’ days.

IC, CSI, etc. are just place-holders for a long-ago demolished argument.

Comment #50372

Posted by darwinfinch on September 30, 2005 5:09 PM (e)

It is both heartening and astounding to me that some of us (especially the real scientists among us - I don’t ever wish to pretend to be more than an interested layperson here) - can discuss the POSSIBILITY that the ID crowd would ever allow their claims to be challenged by ANY sort of experiment, even one done by very sympathetic, but also rigorous and ethical, scientists outside of their control.
We are all now agreed on one opinion, I will daringly venture to say: the people in the DI have motives that have nothing to do with any general definition of “science” and are motivated (putting aside the $$$) all-but-entirely (I would now say, entirely, myself) by their “faith.” Science as now practised is their openly avowed enemy; one that they have shown may be attacked without regard to fact or truth by any means necessary (well, no violence is being done or called for, of course, change that to “almost” perhaps).
It is heartening that, very often, regular posters at PT and elsewhere can actually suggest that the DI and other IDers could ever, under any circumstances, allow the fair testing of the fai.. scientific claims. That we can credit them with even the smallest reasonableness speaks well for our own position and goals.
It is astounding because we know fully what these people are, and yet still postulate their being somehow within the grasp of reason, or honest debate.
I still recall my own sincere (naive) postings years ago, which often echoed this sort of confusion. I really thought (not a young man any longer) that, given honestly gathered and tested science against nothing but their own fixed ideas, non-loony Creationists would perhaps still be firm in their faith but acknowledge, as the earlier scientists, deeply Christian, who faced Darwin’s challenge mostly did, the current set of unrefutable facts. That no old timer bothered to respond, “You must be new ‘round here!” still amazes me.

Comment #50374

Posted by RBH on September 30, 2005 5:16 PM (e)

darwinfinch,

The intended audience is not the presuppositionalist DI/ID stalwarts whose faith defends them from evidence. It is the wider group of people who hear the DI claim to be doing “science” and wonder, but who will read and comprehend the fact that what the IDists do bears only the most superficial resemblance to what genuine scientists do.

Apologists like Luskin will not be convinced by evidence. The only reason to spend time and effort rebutting their ill-informed misrepresentations is that wider audience.

RBH

Comment #50378

Posted by darwinfinch on September 30, 2005 5:35 PM (e)

Yeah, I’ve heard that. But the limits have really been reached, haven’t they? For myself, they have.

Simply linking to any of many well-written refutations of these people, with a gentle aside or update, would do the job far better, I believe. The people at talkorigins (and elsewhere) do exactly that very successfully.
If anyone is truly convinced by the DI position, they are not lurking here at PT because they wish to engage the questions raised here, but to root for their side. As the worst sort of true fans, “fanatics,” they will never admire the efforts of the opposition, nor bother to read them, much less understand them.
I certainly don’t wish to dissuade others from taking on the idiocies of the full-time ID trolls who pop up here, whether for the reason you suggest, to test their own grasp of the topics, to polish their style, or to simply entertain themselves.
My comments were intended as frustrated (perhaps very frustrated) praise of such people’s efforts, which they, being at least as intelligent as myself and probably often much more sensible, must realize.

Comment #50388

Posted by Hiya'll on September 30, 2005 6:46 PM (e)

Jeff, an IDist would say that the symphony had come from an intelligent designer, in this case you.

For any two such data sets, if the measure of ‘CSI’ in the first data set is less than the measure of ‘CSI’ in the 2nd, then that implies that the 2nd data set can’t have been produced by merely running the first data set (as a program with inputs), since that would violate the ‘law of conservation’. If Dembski’s algorithm can always distinguish the evolutionary descendants from the ancestors (because their ‘CSI’ is always less), then, hey, he’s got something. Otherwise his claims are groundless. There’s no reason, other than CSI being an empty concept that is therefore impossible to measure, that Dembski shouldn’t meet this challenge.

You’ve slightly misunderstood the law proposed by Dembski: he doesn’t claim no information is produced, but that any information would never, in the history of the universe, exceed – well, it was 150 something, I forget what.

Comment #50391

Posted by jeffw on September 30, 2005 7:15 PM (e)

Hiya’ll,

Thanks for the explanation. But what criteria are used to establish the boundaries for these “data sets”? Whatever they are, they are apparently able to make an “objective” distinction (if that’s possible) between “intelligence” and “non-intelligence”. In other words, as a designer, I’m not a subset of the data sets - I’m outside the system. And I could also write an evolutionary computer program and call it “intelligent”, excluding it from the “data sets”, but not the data it generates. Sounds arbitrary to me.

Comment #50397

Posted by Norman Doering on September 30, 2005 7:42 PM (e)

Hey, Adam Ierymenko,

Would you (or anyone else here) be willing to take over an argument I really botched up here:
http://www.uncommondescent.com/index.php/archives/353

I only have a dab of experience with L-systems and some general reading on genetic algorithms and evolutionary programming and I started making mistakes.

Comment #50399

Posted by Hiya'll on September 30, 2005 7:55 PM (e)

The stuff about the data sets wasn’t written by me; it was written by someone else on this board, whom I was trying to quote and then correct with the third paragraph. I forgot to put quote marks around it; the comment is Comment #50339 (I suggest you read it before you read the rest of this post, otherwise the post won’t make sense). I think the boundaries of the data sets referred to would be determined by the fact that they were different programs (the author I was quoting was talking about a simulation). In terms of deciding which data set was which in real life (distinguishing the designer from the designed), the boundaries between the two would be set in the same way we determine when two rocks aren’t the same rock, i.e., chronological and spatial dissimilarity, as determined by a rational agent (the problem of coming up with a precise method for deciding when two objects are two different objects, and not part of one object, is a venerable problem of philosophy).

As a designer, in the original quotation, you would be the first data set (I know, it’s creepy to think of yourself as a set of data). In this sense I suppose that the intelligent designer of ID might also be a data set, albeit a transfinite one if he is the Christian God, as most IDists believe.

The computer program you talked about writing would only count as intelligent (in the narrow ID sense) if it were over a certain level of complexity. A better way to describe ID than intelligent design is actually CD (complex designer): the designer of ID need not be intelligent in the sense of everyday parlance, only complex.

Comment #50402

Posted by Hiya'll on September 30, 2005 8:01 PM (e)

Norman Doering

I really sympathise with you. The whole ID debate gets so petty; every time an IDist or a Darwinist sees his or her opponent making a simple mistake, the whole thread degenerates into an argument over that single little factual mistake. It’s petty; there are far more important issues at stake in this debate than proving that we’re more erudite than our opponents.

By the way, that last post was meant to be addressed to Jeffw, but I forgot to put that in. Sorry, it’s just that I assumed there would be no more posts before I had finished writing mine. How wrong I was, how very, very wrong.

Comment #50406

Posted by jeffw on September 30, 2005 9:07 PM (e)

Ok, here’s a little thought experiment which I hope will help me understand this CSI “law of conservation” of complexity (or whatever it’s called).

First you write a program that generates all possible bit combinations in 4GB of memory (or whatever memory your computer has). Fairly simple algorithm, probably less than ten lines of code (although it would generate enormous output). Call it “program A”.

Then you write a program that interprets those strings as Intel 80x86 instructions. Not a trivial program, but not too difficult (I’m sure Intel uses quite a few simulators like this to test their chip logic). Call this “program B”.

Define CSI “data set” A as the union of programs A and B.

Define CSI “data set” B as the output of all data generated by passing program A’s output to program B and executing it. Note that this is the output of running all possible programs that fit in 4GB of memory on an Intel machine (without user input).

Now would Dembski’s conservation law tell me that data set B can’t be any more complex than data set A? Data set B seems vastly more complex to me.
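(A scaled-down sketch of the construction; “program B”, the x86 interpreter, is stubbed out, since writing a CPU emulator is beside the point.)

```python
# "Program A" at toy scale: enumerate every bit pattern of N_BITS bits
# and hand each one to "program B".
N_BITS = 16   # 4GB of memory admits 2**(2**35) patterns; 16 bits shows the loop

def interpret(code: bytes):
    """Stub for program B: 'execute' code as machine instructions and
    return its output. A real version would need a CPU emulator plus a
    timeout, since most random bit patterns crash or never halt."""
    return None

for value in range(2 ** N_BITS):                   # program A
    interpret(value.to_bytes(N_BITS // 8, "big"))  # feed each pattern to B
```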

Comment #50413

Posted by Norman Doering on September 30, 2005 9:24 PM (e)

jeffw wrote: “… write a program that generates all possible bit combinations in 4GB of memory …”

Not practically possible due to limitations of data storage in our universe. There are more bit combinations possible than there are electrons in the universe.

Remember the mathematical argument against 6 monkeys typing for a billion years ever writing Hamlet. (Or find it on the net if you don’t remember it.)

Comment #50415

Posted by jeffw on September 30, 2005 9:31 PM (e)

“jeffw wrote: “… write a program that generates all possible bit combinations in 4GB of memory …”
Not practically possible due to limitations of data storage in our universe. There are more bit combinations possible than there are electrons in the universe.”

Well, this is a theoretical thought experiment so we don’t consider practicality, but the storage objection doesn’t even apply: you just write some nested loops which generate each string combination, and then you pass it to program B before generating the next one, so nothing ever has to be stored. It would certainly take an impractical amount of time, but again, this is a thought experiment. Practicality has nothing to do with it.

Comment #50430

Posted by Norman Doering on September 30, 2005 10:55 PM (e)

jeffw wrote: “… … we don’t consider practicality,… this is a thought experiment. Practicality has nothing to do with it.”

I don’t like that sort of angels-dancing-on-a-pin argument.

However, I think judging the complexity of programs A and B might be achieved with Kolmogorov’s ideas about “algorithmic information theory” and “algorithmic complexity.”

http://en.wikipedia.org/wiki/Algorithmic_information_theory
http://en.wikipedia.org/wiki/Andrey_Nikolaevich_Kolmogorov
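(Kolmogorov complexity itself is uncomputable, but compressed length is the standard computable stand-in for an upper bound on it; e.g., with zlib playing the role of the compressor:)

```python
# Compressed length as a computable upper-bound proxy for Kolmogorov
# complexity. zlib stands in for the ideal compressor.
import os, zlib

def k_proxy(data: bytes) -> int:
    return len(zlib.compress(data, 9))

print(k_proxy(b"AB" * 5000))       # highly regular: compresses to a few dozen bytes
print(k_proxy(os.urandom(10000)))  # random: stays near 10000 bytes
```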

Comment #50434

Posted by Hiya'll on September 30, 2005 11:19 PM (e)

Norman, counting the number of angels on the head of a pin is very, very important.

Comment #50436

Posted by jeffw on September 30, 2005 11:39 PM (e)

“jeffw wrote: “… … we don’t consider practicality,… this is a thought experiment. Practicality has nothing to do with it.”
I don’t like that sort of angels-dancing-on-a-pin argument.”

Then scale it down to a level that you feel comfortable with. Instead of 4GB, use 1Meg, 1K, or 128 bytes.

The main question posed by my example is this: Is a Turing machine + the “description” of a set of all possible programs for that machine (not the set itself), more or less “complex” than the output generated by the set of all the described programs?

I’m not sure that Kolmogorov complexity is useful here. All you have to do is change the description of the set of programs from 1K to 4GB, and you will increase the functional complexity of CSI Set “B” enormously (factorially?), while the length of CSI Set A increases by only a few bytes.

Comment #50437

Posted by Norman Doering on October 1, 2005 12:18 AM (e)

jeffw wrote: “… Then scale it down to a level that you feel comfortable with. Instead of 4GB, use 1Meg, 1K, or 128 bytes.”

That changes everything. You asked: “Is a Turing machine + the “description” of a set of all possible programs for that machine (not the set itself), more or less ‘complex’ than the output generated by the set of all the described programs?”

I haven’t got a clue. I need a way to actually measure complexity, and I can’t really figure out how to do that. That’s why I hate these kinds of angels-on-pins arguments.

What follows is my last post on Dembski’s site; I had the same problem with coming up with real answers – so I just reject the argument and find a different one:
———

Gumpngreen wrote: “Dembski considers your primary objection to be what he calls a ‘gatekeeper’ objection.”

I suppose it is. I don’t think, based on my limited reading, that ID qualifies as science. I don’t care about Karl Popper or other philosophical arguments because I have my own intuitive “science detector.” It works this way: Real science engages the real world whenever it can. Miller and Urey engage the chemicals of life, fossil hunters engage fossils, programmers write genetic algorithms…

Dembski’s ideas might be used to engage the real world in other ways, but they are not being used to do so.

For example, if Dembski can really detect specified complexity then he should get some people to go out into the real world and actually measure the amount of specified complexity in, perhaps, animal communications. There is a controversy about whether dolphins have a language:

Engage that controversy: do dolphins have a language? Shouldn’t Dembski’s concepts have a value there?
http://www.dauphinlibre.be/langintro.htm

Compare the specified complexity of dolphin language, bird songs, whales, octopi, etc. Make it at least a real scalar value (if not a multidimensional one) by testing the concept against the real world. If you don’t, you’re arguing about how many angels can dance on the head of a pin.

Once you do that the real world will challenge your ideas with its reality, just like a good theory about mitochondrial DNA can get shot down by one little fact.

Gumpngreen wrote: “…objections are made in attempts to find fault with design because of the threat that design is claimed to pose to ’science’…”

It is a threat to science if done the way it currently seems to be done, by fighting court battles because of religious motivations.

Gumpngreen wrote: “… philosophies improperly equated with being science.”

My philosophy is simple: engage the real world and stop sounding like you’re arguing about angels dancing on pins.

Gumpngreen wrote: “These objections are not made because the theoretical or empirical case for design is scientificially substandard.”

A lot of scientists say it is substandard and I’m inclined to take their word for it because it agrees with my own subjective evaluation.

Gumpngreen wrote: “I suggest you try reading Dembski’s books before you attempt further critiques.”

I should, you’re right. But I’m not that motivated to. In the end my opinion will not matter.

Before I take more interest in ID than I do now, I have to see it engage the real world. I have to see scientists using it on something other than a negative argument against evolution.

Comment #50448

Posted by Hiya'll on October 1, 2005 4:05 AM (e)

“Before I take more interest in ID than I do now, I have to see it engage the real world. I have to see scientists using it on something other than a negative argument against evolution.”

My theory of ID has it as neither a negative argument nor a theory. Basically I think it is the consequence of one theory and one observation. The theory is that all complex objects have a designer; the observation is that the universe, and life, is a complex thing. This leads to the syllogism

1- All complex things are designed (theory)
2- The universe is a complex thing (observation)
3- Therefore the universe is designed

The main points at which my idea can be criticised are as follows

1- We have no evidence for (1) on a cosmic scale. (Hume’s arguments against the teleological argument are relevant at this point.)
2- Darwinian evolution (and/or the anthropic principle) makes an exception to (1).

I would argue against (1) that we have no reason not to expand the inference, and clearly we would accept it in many expanded forms (i.e., the SETI analogy), and I would say to (2), time will tell. That’s what’s so exciting about evolutionary simulations: the whole thing’s going to be settled beyond reasonable doubt one way or another soon. Once the first extremely complex AI lifeforms have evolved, under conditions which close off the possibility that information was frontloaded into the system, then selectionist evolution wins; alternately, if they don’t evolve…

Comment #50451

Posted by Norman Doering on October 1, 2005 5:21 AM (e)

Hiya’ll wrote: “… This leads to the syllogism… 1- All complex things are designed (theory)…”

I don’t buy that first item as it is worded. A rock you find in a forest is complex (at least in the sense of having lots of different atoms and molecules and crystal structures - maybe even a fossil); it even contains a lot of “specified” information about its history if you know how to interpret it. But does anyone think it’s designed? – I suppose an IDer might. Is a rock designed to give us information about the past?

Comment #50454

Posted by Hiya'll on October 1, 2005 6:05 AM (e)

Norman Doering

The rock isn’t complex enough to qualify under (1). How do we define “complex enough”? I don’t know, but we have to assume there’s a definition somewhere because the idea of “complex enough” works so well in practice (i.e., archaeology, the Search for Extraterrestrial Intelligence, to use the usual analogies). Maybe Dembski’s filter is applicable; I don’t know whether it works in practice or not. My interests are philosophy and psychology; I don’t know a lot about maths. I am under the impression that a bunch of prestigious maths guys think that, at least in certain circumstances, it’s a good idea (Dembski’s design inference book was published by Cambridge, after all). But I really don’t have a clue whether or not you could apply it to biology.

Comment #50455

Posted by Edin Najetovic on October 1, 2005 6:11 AM (e)

Quoth Norman Doering: “A rock you find in a forest is complex (at least in the sense of having lots of different atoms and molecules and crystal structures - maybe even a fossil); it even contains a lot of “specified” information about its history if you know how to interpret it. But does anyone think it’s designed? — I suppose an IDer might.”
(Emphasis mine)

But that’s sort of the point isn’t it? The ID’er doesn’t, but should. Then again I never really thought the complexity argument helped creatio… I mean ID much.

Comment #50457

Posted by Norman Doering on October 1, 2005 6:56 AM (e)

Edin Najetovic wrote: “But that’s sort of the point isn’t it?”

Yes, but go here:
http://www.uncommondescent.com/index.php/archives/353

And make that point and see how far you get. There is an unbridgeable chasm between the conceptions of the world.

Comment #50460

Posted by W. Kevin Vicklund on October 1, 2005 8:08 AM (e)

Hiya'll wrote:

I am under the impression that a bunch of prestigious maths guys think that, at least in certain circumstances, it’s a good idea (Dembski’s design inference book was published by Cambridge, after all).

Actually, the “prestigious maths guys” - that is, information theorists that have published important, peer-reviewed papers - that have bothered to review Dembski’s work have been very critical (in the negative connotation) of it. Especially those whose work Dembski references; one of the authors of the No Free Lunch theorem, David Wolpert, is one such person.

Comment #50481

Posted by jeffw on October 1, 2005 11:56 AM (e)

.Compare the specified complexity of dolphin language, bird songs, whales, octopi, etc.. Make it at least a real scalar value, (if not a multidimensional one), by testing the concept against the real world. If you don’t, you’re arguing about how many angels can dance on the head of a pin.

Once you do that the real world will challenge your ideas with its reality, just like a good theory about mitochondrial DNA can get shot down by one little fact.

Yes, real (repeatable) scientific observation trumps everything in the end, but you can’t just ignore theoretical implications. If well-conceived, they often make useful predictions or define limits in the real world. Einstein’s theories are good example.

If logically ill-conceived (as I believe Dembski’s are), they should be shot down and torn apart on their own grounds. However, complaining about “angels dancing on the head of a pin” doesn’t solve anything. That argument won’t get you anywhere.

Comment #50514

Posted by Hiya'll on October 1, 2005 6:05 PM (e)

“Actually, the “prestigious maths guys” - that is, information theorists that have published important, peer-reviewed papers - that have bothered to review Dembski’s work have been very critical (in the negative connotation) of it. Especially those whose work Dembski references; one of the authors of the No Free Lunch theorem, David Wolpert, is one such person.”

Kevin, I am not saying that all prestigious maths guys (and gals) support it, only that a small subset who peer-review monographs at Cambridge think it might have some merit, and I am not talking about his NFL work, just his design inference work (I know, I know, the two are linked).

Comment #50516

Posted by Norman Doering on October 1, 2005 6:16 PM (e)

jeffw wrote: “…complaining about ‘angels dancing on the head of a pin’ doesn’t solve anything. That argument won’t get you anywhere.”

It gets me out of the argument. I’m simply not going to bother with “angel” arguments. I leave that to you. Feel free to sign in here:

http://www.uncommondescent.com/index.php/archives/353

And take over.

The guy who is arguing against my position is now saying:
———-

Very well then…forensics, criminology, SETI, cryptography. That is just a couple examples of where design arguments are used in the real world. In his books Dembski states he would like to see ID used in a variety of scientific disciplines.

Now the design arguments in actual use are usually not as rigorously defined compared to Dembski’s work from what I’ve seen. For example, I was watching a science program where Japanese/Indonesian scientists claimed to have found an ancient temple that is under the water along an island coastline. The problem was that this discovery conflicted with current historical narratives. Though the structure contained large blocks with right angles, several other scientists who investigated later thought the “temple” was the result of natural processes (geology, wave motion). The original scientists used a design argument and several pieces of evidence (small, internal rock cuts comprised of right angles) in their defense. Since their design arguments were weaker than Dembski’s methods the final result was pretty much inconclusive, with no clear “winner” as defined by the program. When the program ended I was left thinking that the temple would make an interesting test case for ID (hey, Bill, like scuba diving?).

Your other objections are covered in depth in Dembski’s books, and this isn’t the place to rewrite them. Though…

“It is a threat to science if done the way it currently seems to be done, by fighting court battles because of religious motivations.”

You do realize that in the court cases like Dover the ID side is the defendant? It’s not like they WANT to be dragged into court. And you’re right, the prosecution apparently does have religious/philosophical motivations…
—————–

Have at him if you want him; I’m no longer learning anything by bothering with him.

Comment #50521

Posted by Russell on October 1, 2005 6:45 PM (e)

Hiya'll wrote:

I am not saying that all prestigious maths guys (and gals) support it, only that a small subset who peer-review monographs at Cambridge …

What do we know about the review process at Cambridge University Press? Do we know that any “prestigious maths guys”? Why is it we can’t find a single qualified reviewer willing to reveal his or her name who has vouched for Dembski’s books?

Comment #50522

Posted by Russell on October 1, 2005 6:47 PM (e)

oops. make that

Do we know that any “prestigious maths guys” reviewed Dembski’s books?

Comment #50555

Posted by Hiya'll on October 1, 2005 11:55 PM (e)

“I suppose it is. I don’t think, based on my limited reading, that ID qualifies as science. I don’t care about Karl Popper or other philosophical arguments because I have my own intuitive “science detector.” It works this way: Real science engages the real world whenever it can. Miller and Urey engage the chemicals of life, fossil hunters engage fossils, programmers write genetic algorithms…”

Some genuinely scientific ideas can’t be used to engage the real world at all. Imagine that the existence of a particle were predicted by a well-verified theory, yet it was known that we could never observe this particle, or any of its effects; would it be unscientific to believe in it? Surely not. I am arguing that ID is like that, the unobservable consequence of a set of well-verified theories; hence, despite the fact that we cannot observe it, it is reasonable to believe in it.

Interestingly, though, ID could be turned into an actual hypothesis (rather than the consequence of a hypothesis) if auxiliary hypotheses were added; i.e., we could add the auxiliary hypothesis that the intelligent designer had designed life so that it would have an organ-harvesting bank, in which every animal had a heart, to feed its great brood of heart-eating children, to create the (false) prediction that every animal has a heart. Interestingly, in this model we can regard creationism as a set of auxiliary hypotheses added to intelligent design; hence intelligent design isn’t creationism in a cheap tuxedo, creationism is intelligent design in a cheap tuxedo.

Comment #50562

Posted by qetzal on October 2, 2005 1:51 AM (e)

Hiya'll wrote:

I am arguing that ID is like that, the unobservable consequence of a set of well-verified theories; hence, despite the fact that we cannot observe it, it is reasonable to believe in it.

What are these well verified theories that predict ID?

Your hypothetical of a particle that can never be observed and has no effects is nonsensical. There would be no reason to predict such a particle based on any theory, unless the particle was required to explain some observable fact. But if the particle has no effects, there’s nothing to explain, and no reason to predict its existence.

Comment #50566

Posted by Hiya'll on October 2, 2005 2:05 AM (e)

Your hypothetical of a particle that can never be observed and has no effects is nonsensical. There would be no reason to predict such a particle based on any theory, unless the particle was required to explain some observable fact. But if the particle has no effects, there’s nothing to explain, and no reason to predict its existence.

As I quite clearly outlined in my post, there could indeed be a reason to believe in this particle: if a theory that had been verified predicted it, that would be a reason to believe it. To use another example, say there were a theory of everything, and this theory predicted a number of observable phenomena, but in addition it predicted that there were parallel universes which could never be observed from the standpoint of this universe. Would it not be reasonable to believe in these parallel universes because of the evidence for the theory provided by the observation of the other phenomena?

The theory and the empirical observation which, I feel, in conjunction predict ID were outlined by me earlier in this thread; they are reproduced below:

“My theory of ID has it as neither a negative argument nor a theory. Basically I think it is the consequence of one theory and one observation. The theory is that all complex objects have a designer; the observation is that the universe, and life, is a complex thing. This leads to the syllogism

1- All complex things are designed (theory)
2- The universe is a complex thing (observation)
3- Therefore the universe is designed

The main points at which my idea can be criticised are as follows

1- We have no evidence for (1) on a cosmic scale. (Hume’s arguments against the teleological argument are relevant at this point.)
2- Darwinian evolution (and/or the anthropic principle) makes an exception to (1).”

Comment #50607

Posted by qetzal on October 2, 2005 12:32 PM (e)

Hiya’ll,

You’re saying IF a well-verified scientific theory predicted something, THEN it would be scientific to believe in its existence, EVEN IF it was intrinsically unobservable and had no testable consequences.

I’m saying that a well-verified scientific theory COULD NEVER predict something that is intrinsically unobservable and has no testable consequences. I think that’s a logical contradiction. If I’m correct, it’s meaningless to discuss whether belief in such a prediction would be scientific, because such a prediction is impossible.

Regarding ID, I understand your syllogism, but you originally claimed that ID is the consequence of well-verified theories (emphasis added). “All complex things are designed” is not a well-verified theory.

Comment #50616

Posted by Wesley R. Elsberry on October 2, 2005 1:22 PM (e)

What do we know about the review process at Cambridge University Press?

Not a lot. There’s this bit from Dembski:

William A. Dembski wrote:

The Design Inference had to pass peer-review with three anonymous referees before Brian Skyrms, who heads the academic review board for this Cambridge series, would recommend it for publication to the Cambridge University Press editors in New York. Brian Skyrms is on the faculty of the University of California at Irvine as well as a member of the National Academic of Sciences. It is easy enough to confirm what I’m saying here by contacting him. Scott either got her facts wrong or never bothered to check them in the first place.

http://www.discovery.org/scripts/viewDB/index.php?command=view&program=CSC%20Responses&id=1621

I did contact Skyrms, and he was unwilling to confirm much of anything. He asserted that Dembski’s book received “normal” review, but would not detail what sort of process was “normal”. One interesting note: Skyrms asserted that if I had read TDI, I would know that it didn’t even mention the evolution/creation controversy. Section 2.3 of TDI is titled, “Case Study: The Creation-Evolution Controversy”.

One can then move on to Jeff Shallit’s expert report rebutting Dembski’s expert report, where he says this about review at CUP (p.5 of the PDF):

In his Disclosure, page 42, Dembski claims that his book The Design Inference was “peer-reviewed”. As the author of a book published by the same publisher (Cambridge University Press), I know that book manuscripts typically do not receive the same sort of scrutiny that research articles do. For example, it is not uncommon for a 10-page paper to receive 5 pages or more of comments, whereas a book manuscript of two hundred pages often receives about the same number of comments.

(You may wish to read more of Jeff’s report.)

I have Dembski’s dissertation that was the basis for TDI. At some point, I’ll see about comparing the dissertation text to the book to see just how much changed in between.

Comment #50641

Posted by jeffw on October 2, 2005 4:27 PM (e)

One can then move on to Jeff Shallit’s expert report rebutting Dembski’s expert report,

Thanks, good report. I won’t waste much more time on this.

That changes everything. You asked: “Is a Turing machine + the “description” of a set of all possible programs for that machine (not the set itself), more or less ‘complex’ than the output generated by the set of all the described programs?”

I haven’t got a clue. I need a way to actually measure complexity, and I can’t really figure out how to do that. That’s why I hate these kinds of angels-on-pins arguments.

Actually, now that I think about it, the output dataset has to be much more complex, since one of the possible outputs would be the input dataset itself. So from input dataset {A, B}, you get output dataset {A, B, C, D, and zillions more}.

It would seem that once you reach a certain threshold of complexity, additional information & complexity can come from nowhere (contrary to Dembski’s conservation claims).
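Here’s a toy illustration of what I mean, in Python; the two-operation machine is my own invented stand-in, not a real universal machine, but it shows how a tiny description fans out into a huge output set:

    from itertools import product

    # Toy stand-in for the thought experiment above: "programs" are strings
    # over two operations applied to 0; enumerate every program up to a given
    # length and collect the set of distinct outputs.
    OPS = {"i": lambda x: x + 1, "d": lambda x: 2 * x}

    def run(program, x=0):
        for op in program:
            x = OPS[op](x)
        return x

    for max_len in range(1, 13):
        outputs = {run(p) for n in range(1, max_len + 1)
                   for p in product("id", repeat=n)}
        print(max_len, len(outputs))

    # The description "every program up to length L" stays a few characters
    # long, while the set of distinct outputs grows roughly exponentially in L.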

Comment #50660

Posted by Hiya'll on October 2, 2005 7:45 PM (e)

“Regarding ID, I understand your syllogism, but you originally claimed that ID is the consequence of well-verified theories (emphasis added). “All complex things are designed” is not a well-verified theory.”

Maybe it is, maybe it isn’t; that’s what the more cerebral aspects of the ID debate are about. You obviously believe that the chance that proposition 1 is true is very, very close to zero, while I think there’s a whole-number-percentage chance that it’s true. By outlining the whole thing in syllogism form, the probability becomes easier to work out.

Comment #50663

Posted by qetzal on October 2, 2005 8:09 PM (e)

There’s no maybe about it. “All complex things are designed” is not a well-verified scientific theory. It’s just a claim.

If you don’t see this, I respectfully suggest you don’t understand what a scientific theory is.

Comment #50668

Posted by Norman Doering on October 2, 2005 9:09 PM (e)

jeffw wrote: “…now that I think about it, the output dataset has to be much more complex…”

Based on what definition of complexity? If you use “algorithmic complexity,” it’s not, because the algorithm that generates it is simple.

jeffw wrote: “… once you reach a certain threshold of complexity, additional information & complexity can come from nowhere (contrary to Dembski’s conservation claims).”

If you equate complexity with chaos, then the second-law-of-thermodynamics arguments would say the complexity of the universe increases, not decreases.

But Dembski isn’t talking about just any complexity… What is “specified complexity”?? How do you measure that in your example?

So, doesn’t this sound like arguing about angels dancing on the heads of pins?

Comment #50672

Posted by 'Rev Dr' Lenny Flank on October 2, 2005 10:26 PM (e)

What is “specified complexity”??

Indeed. Specified when? Where? By whom? IDers seem awfully reluctant to answer that.

I suspect it’s because IDers are just giving us the same tired old “Texas Marksman” routine — they blaze away at a barn door, walk over and paint bullseyes around all the bullet holes, then exclaim how utterly remarkable and improbable it is that *all the bullets hit bullseyes*.

Of course, if all the bullets had hit two feet to the left of where they did, the IDers would then be crowing just as fervently about how wonderful and improbable it was that THOSE bullets all hit THOSE bullseyes. (shrug)

Comment #50677

Posted by jeffw on October 2, 2005 10:42 PM (e)

jeffw wrote: “…now that I think about it, the output dataset has to be much more complex…”

Based on what definition of complexity? If you use “algorithmic complexity,” it’s not, because the algorithm that generates it is simple.

Based on common sense. Anyone can see that dataset A={a,b} is much simpler than dataset B={a,b} + {all-the-programs-you-can-possibly-think-of + those-you-can’t}.

The “algorithm” that generates B from A is “A” itself, and relatively simple. Yes, A is the Kolmogorov string that describes B, but what can B then describe? And what can the string that B describes then describe? It becomes too mind-boggling to grasp. The point is that, contrary to Dembski’s claims, there is no “conservation” or “deterioration” of information. On the contrary, there is a mind-boggling exponential (or factorial?) expansion of information & complexity from a simple starting point (A).
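A quick sanity check of that Kolmogorov point, as a Python sketch (the generator string is a toy of my own choosing):

    import zlib

    # The algorithmic complexity of an output is bounded by the size of the
    # shortest program that generates it, no matter how big or incompressible
    # the output itself looks.
    generator = "print(str(7**100000))"   # about 21 characters of program text
    output = str(7**100000)               # about 84,500 characters of output

    print(len(generator))                       # tiny
    print(len(output))                          # huge
    print(len(zlib.compress(output.encode())))  # still tens of thousands:
    # a generic compressor can't find the pattern, but the 21-character
    # generator proves the true algorithmic complexity is small anyway.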

jeffw wrote: “… once you reach a certain threshold of complexity, additional information & complexity can come from nowhere (contrary to Dembski’s conservation claims).”

If you equate complexity with chaos, then the second-law-of-thermodynamics arguments would say the complexity of the universe increases, not decreases.

But Dembski isn’t talking about just any complexity… What is “specified complexity”?? How do you measure that in your example?

To be honest, even after reading a lot about “specified complexity” this weekend, I still don’t really know what the hell he’s talking about. My take on it is that an “intelligent designer” supposedly specifies something “complex” that would be “improbable” without him, and that “specification” can do nothing but deteriorate down the line - the universe is “front-loaded” with a fixed amount of complexity and information, and it either decays or doesn’t change until perhaps an intelligent designer intervenes.

In my example, I guess the “specified complexity” would be dataset A. However, as I’ve shown, it sure doesn’t deteriorate, the complexity is not fixed, and it appears to increase vastly each time a new dataset is generated.

Comment #50685

Posted by Norman Doering on October 3, 2005 12:34 AM (e)

jeffw wrote: “…even after reading a lot about ‘specified complexity’ this weekend, I still don’t really know what the hell he’s talking about.”

That’s what I’m trying to get at with my “angels dancing on pins” metaphor. How many angels can dance on a pin depends on what an angel is, and there’s no agreed-upon definition.

Now, I wouldn’t give up just yet on “specified complexity” if it could be used to give me comparative measures of specified complexity in various animal communications – a ratio of complexity for bird songs, dolphin language, etc.

Look at this:
http://www.dauphinlibre.be/markovhtm.pdf

There is a kind of implied “design inference” going on in their analysis of why they think dolphins have a language. Aren’t they measuring something you could call specified complexity?
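For instance, here’s a minimal Python sketch of the kind of quantity such a Markov analysis rests on; the function and the toy sequences are mine, not taken from the paper:

    import math
    from collections import Counter

    def markov_entropy_rate(seq):
        # First-order Markov entropy rate, in bits per symbol, estimated
        # from the observed bigrams of seq.
        pair_counts = Counter(zip(seq, seq[1:]))
        ctx_counts = Counter(seq[:-1])
        n = len(seq) - 1
        h = 0.0
        for (a, b), count in pair_counts.items():
            p_pair = count / n               # P(a, b)
            p_cond = count / ctx_counts[a]   # P(b | a)
            h -= p_pair * math.log2(p_cond)
        return h

    # Structured sequences score far below the random-string maximum:
    print(markov_entropy_rate("abababababababab"))  # 0.0: each symbol fixed by the last
    print(markov_entropy_rate("abbabaabbaababba"))  # closer to 1 bit per symbol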

Comment #50699

Posted by Ron Okimoto on October 3, 2005 6:42 AM (e)

This is just for my own curiosity, but what would a “program officer” like Luskin do for a legitimate organization? Is he at the Dover trial on his own nickel or is the Discovery Institute sending him there for some reason? What would it have to do with his duties for a legitimate organization?

Comment #50722

Posted by jeffw on October 3, 2005 9:38 AM (e)

Now, I wouldn’t give up just yet on “specified complexity” if it could be used to give me comparative measures of specified complexity in various animal communications — a ratio of complexity for bird songs, dolphin language, etc.

Look at this:
http://www.dauphinlibre.be/markovhtm.pdf

There is a kind of implied “design inference” going on in their analysis of why they think dolphins have a language. Aren’t they measuring something you could call specified complexity?

Yes, design inference can be an interesting, and potentially useful, endeavor. SETI is another example that the Discovery Institute seems to bring up often (although I doubt SETI wants or needs any publicity from them).

But my own interests lie more in the evolutionary computing domain. I’m working on a new computer language based on evolutionary computation & genetic programming, aimed at making this type of programming simpler and more practical for the masses. I’ve been working on it for 8 years now, and it’s in an advanced stage of development. I was looking into Dembski to see if there was anything theoretical I should be concerned about. At this point, I’m not too worried.
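For the curious, the core loop of this kind of system fits in a screenful. A bare-bones genetic-algorithm sketch in Python (illustrative only; it has nothing to do with my language’s internals):

    import random

    # Minimal genetic algorithm: evolve bit-strings toward the all-ones
    # genome by mutation and truncation selection.
    GENOME_LEN, POP_SIZE, MUT_RATE = 32, 50, 0.02

    def fitness(genome):
        return sum(genome)  # number of 1 bits

    def mutate(genome):
        # Flip each bit independently with probability MUT_RATE.
        return [bit ^ (random.random() < MUT_RATE) for bit in genome]

    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for generation in range(500):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == GENOME_LEN:
            break
        survivors = pop[: POP_SIZE // 2]  # selection: the fitter half reproduces
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    print(generation, fitness(pop[0]))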

Comment #50744

Posted by RBH on October 3, 2005 10:34 AM (e)

Remember that “complexity” in the sense of Dembski’s “specified complexity” means no more than improbability. It has no other meaning than that. One major difficulty with that meaning is deciding on the probability density function (PDF) to be used in estimating improbability. One cannot blithely decide that a uniform PDF is appropriate, particularly where the physics, chemistry, and biology of systems very clearly tell us that the probabilities of all possible alternatives are not equal. One has to justify the choice of PDF, not merely assume a uniform PDF because it’s mathematically tractable.
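A toy calculation in Python makes the point; the numbers are invented purely for illustration:

    import math

    # The "improbability" of the same 100-residue sequence under two PDFs.
    L = 100

    p_uniform = (1 / 20) ** L  # uniform PDF over 20 amino acids
    p_biased = (1 / 2) ** L    # suppose chemistry strongly favors the observed
                               # residue at each position (invented number)

    print(-math.log2(p_uniform))  # ~432 bits of "complexity"
    print(-math.log2(p_biased))   # 100 bits -- same sequence, different PDF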

RBH

Comment #50752

Posted by Russell on October 3, 2005 10:54 AM (e)

jeffw wrote:

SETI is another example that the Discovery Institute seems to bring up often (although I doubt SETI wants or needs any publicity from them).

Indeed they don’t. See here.

Comment #50798

Posted by jeffw on October 3, 2005 2:34 PM (e)

Remember that “complexity” in the sense of Dembski’s “specified complexity” means no more than improbability. It has no other meaning than that.

Are you sure? That would mean that any extremely rare event (a proton decaying, for example) would have high specified complexity. Wouldn’t there have to be some additional measure of information content, such as Kolmogorov complexity, Shannon uncertainty, or whatever?
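Shannon uncertainty, at least, is easy to compute. A minimal Python sketch, with toy strings of my own:

    import math
    from collections import Counter

    def shannon_entropy(seq):
        # Shannon entropy, in bits per symbol, of the observed symbol
        # frequencies in seq.
        counts = Counter(seq)
        n = len(seq)
        return sum((c / n) * math.log2(n / c) for c in counts.values())

    print(shannon_entropy("AAAAAAAA"))  # 0.0 -- no uncertainty at all
    print(shannon_entropy("ACGTACGT"))  # 2.0 -- four symbols, uniformly used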

One major difficulty with that meaning is deciding on the probability density function (PDF) to be used in estimating improbability. One cannot blithely decide that a uniform PDF is appropriate, particularly where the physics, chemistry, and biology of systems very clearly tell us that the probabilities of all possible alternatives are not equal.

Yes, this is something else I don’t understand. I assume he uses some kind of non-trivial Bayesian calculation to infer design - a divide & conquer strategy of some sort. How are the variables & parameters determined? Wouldn’t they be unique and highly subjective for each design-inference case? It just seems too arbitrary to be useful.

But it’s this whole “conservation of information & complexity” idea that bothers me the most. If you have an initial threshold of complexity, some time, and a small amount of energy (power for your computer and others that communicate with it), complexity and information can grow exponentially. Same thing with biological life - some initial complexity (DNA/RNA), some time, and a little energy from the sun, and complexity & information will build on themselves.

Comment #50808

Posted by RBH on October 3, 2005 3:39 PM (e)

Yes, I’m sure. In Dembski’s scheme “complexity” equals “improbability”. “Specified” means (very briefly) “matches a pre-specified independently given pattern”.

Sure, proton decay is “complex” in Dembski’s terms. Whether it has specified complexity is up for grabs. “Specified” can mean ‘matches some known structural pattern’ or ‘performs some discernible function’. (Both are too brief, of course.) But at bottom, “specified” means ‘this thing resembles (structurally and/or functionally), to some degree or other, something I recognize’.

RBH

Comment #50824

Posted by RBH on October 3, 2005 6:43 PM (e)

I should add that “specified” can also mean a post hoc pattern match, if the pattern being matched (say, an outboard motor as a pattern for the bacterial flagellum) is somehow “detached” from the perception of the pattern in the object under analysis. Of course, Dembski also says a structure’s function can be a specification too, so “bacterial motility device” is as valid a specification as “outboard motor” for the flagellum. So “specification” is in the eye (and level of analysis) of the beholder, not an objective, intersubjectively reliable marker.

Another problem for all this is that while the flagellum is specified in terms of an outboard motor, with motor and propeller and universal joint, Dembski calculates its probability according to the number of proteins composing it. It’s as though one said the major constituents of an outboard motor are the motor, propeller, and U-joint, yet we have to multiply the probabilities associated with all the nuts, bolts, rods, pistons, seals, etc., etc., to get the probability of “chance” formation of it. Bizarre.
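To put toy numbers on it (mine, purely illustrative): hold the per-part probability fixed and just vary the level at which the object is carved up.

    import math

    # Same per-part probability, applied at two different levels of
    # description of the same object (numbers invented for illustration).
    p_part = 1 / 100

    p_coarse = p_part ** 3   # motor + propeller + U-joint
    p_fine = p_part ** 42    # every individual protein

    print(-math.log2(p_coarse))  # ~20 bits
    print(-math.log2(p_fine))    # ~279 bits -- same flagellum, different carving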

RBH

Comment #50851

Posted by jeffw on October 3, 2005 8:40 PM (e)

Another problem for all this is that while the flagellum is specified in terms of an outboard motor, with motor and propeller and universal joint, Dembski calculates its probability according to the number of proteins composing it. It’s as though one said the major constituents of an outboard motor are the motor, propeller, and U-joint, yet we have to multiply the probabilities associated with all the nuts, bolts, rods, pistons, seals, etc., etc., to get the probability of “chance” formation of it. Bizarre.

Well, I guess the DNA molecule can’t be very complex then, since it only uses 4 or 5 nucleotides. And how can computers be capable of any complexity at all, with only two possible values for a bit?

Comment #51223

Posted by RBH on October 6, 2005 1:11 AM (e)

Jonathan Wells has joined the chorus of creationist critiques of Pennock’s work with Avida, repeating a piece he originally wrote in response to Discover Magazine’s story on it last year. Wells’s critique is no more coherent than Luskin’s. For example, Wells wrote

Frustrated by the stubborn refusal of real organisms to obey Darwin’s dictates, researchers at Michigan State University have turned to computer code. Using a software program called Avida, they have now succeeded in proving that if a computer is instructed to generate a program capable of doing basic arithmetic it can eventually… do basic arithmetic!

Wells slides past how the computer came to do “basic arithmetic” (nine logic functions, actually): it evolved programs to do so. It was not “instructed to generate a program”; it was instructed to enable evolution by random mutations and selection, to ascertain whether programs capable of doing that would evolve. And they did. Evolution works.

And I’m wondering who are those scientists who are allegedly “frustrated by the stubborn refusal of real organisms to obey Darwin’s dictates”.

Wells goes on

Someone might naively object that Darwin’s theory is supposed to be about the evolution of living things, and neither computers nor computer programs are alive. But Darwin’s followers have cleverly overcome this objection by re-defining “life” to mean “that which evolves by mutation and selection.” Reporting on the Michigan State research in Discover magazine, science writer Carl Zimmer writes: “After more than a decade of development, Avida’s digital organisms are now getting close to fulfilling the definition of biological life.”

“Darwin’s theory”, as modified, elaborated, and developed over 150 subsequent years, is about evolution. Evolution is a process that occurs when one has a population of replicators, with heritable variation, that compete for reproductive resources in an environment where those resources are limited. Those replicators may be organic – bacteria or bulldogs – or they may be programs in a computer. The same general evolutionary principles operate in both sorts of systems, and the latter can serve as a research platform for exploring questions about evolution in the former.
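Those ingredients are few enough to fit in a short Python sketch. Dawkins’s familiar “weasel” toy is the simplest instance; note that, unlike Avida, it imposes an explicit target, so it illustrates only the bare replication-variation-selection loop:

    import random
    import string

    # Replicators (strings), heritable variation (copying with mutation),
    # and limited resources (only one offspring per generation survives).
    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = string.ascii_uppercase + " "
    BROOD, MUT_RATE = 100, 0.04

    def fitness(s):
        return sum(a == b for a, b in zip(s, TARGET))

    def reproduce(s):
        return "".join(c if random.random() > MUT_RATE
                       else random.choice(ALPHABET) for c in s)

    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while fitness(parent) < len(TARGET):
        generation += 1
        parent = max((reproduce(parent) for _ in range(BROOD)), key=fitness)
    print(generation, parent)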

And Wells wrote

Another [author of the Lenski, et al., study described in the Discover article] is microbiologist Richard Lenski, who has been trying for decades to produce new species of bacteria through intense selection. Having failed at that, Lenski is now tempted to close his laboratory and turn to Avida: “In an hour I can gather more information than we had been able to gather in years of working on bacteria.”

That’s a misrepresentation of Lenski’s long-term study of bacterial cultures. The main study in that research program has been to maintain 12 cultures of E. coli, all originally cloned from the same ancestor so they started identical, under identical selective conditions. If I were “trying for decades to produce new species of bacteria”, I sure wouldn’t maintain identical selective conditions! Note that Wells provides no reference for his “intense selection” phrase because he can’t: that’s not the case. And as for Wells’s remark that Lenski is now tempted to close his laboratory, the lab sure seems to be going strong more than two years after the Nature paper was published. Maybe Wells was thinking of his own … erhm. Hm. Sorry. I got carried away there. Wells has never had a research laboratory to think about closing.

RBH

Comment #51230

Posted by jeffw on October 6, 2005 2:32 AM (e)

Wells wrote:

Frustrated by the stubborn refusal of real organisms to obey Darwin’s dictates, researchers at Michigan State University have turned to computer code. Using a software program called Avida, they have now succeeded in proving that if a computer is instructed to generate a program capable of doing basic arithmetic it can eventually… do basic arithmetic!

That’s like saying, “if biochemistry is instructed to evolve life then it can eventually… evolve life!” Maybe Wells is catching on.

Creationists/IDers will always dismiss any computer simulation as not being the real thing, and therefore irrelevant to “real evolution”. That’s like saying weather simulations are irrelevant to predicting the weather.

I predict that within a few years, we’ll see some amazing things evolving on computer platforms. Computers have the advantage of speed and convenience over biological life. They are also not restricted to traditional evolutionary mechanisms or the confines of biochemistry, and can explore all kinds of new evolutionary paradigms inaccessible to biology. Even things like Lamarckism.

Comment #51298

Posted by Henry J on October 6, 2005 12:40 PM (e)

Re “That’s like saying weather simulations are irrelevant to predicting the weather.”

Sometimes they are. ;) LOL

Comment #51308

Posted by jeffw on October 6, 2005 2:16 PM (e)

Re “That’s like saying weather simulations are irrelevant to predicting the weather.”
Sometimes they are. ;) LOL

They’re good enough to keep the Weather Channel in business ;)

Comment #51310

Posted by Flint on October 6, 2005 2:36 PM (e)

Sometimes they are. ;) LOL

You are confusing “relevant” with “accurate”.

Comment #51334

Posted by 'Rev Dr' Lenny Flank on October 6, 2005 6:03 PM (e)

Re “That’s like saying weather simulations are irrelevant to predicting the weather.”

Sometimes they are. ;) LOL

But I notice that when the computer models predict a hurricane is going to approach them, people move out of the way. Quickly. ;>