Dave Thomas posted Entry 2391 on July 5, 2006 12:31 PM.

Genetic Algorithms are simplified simulations of evolution that often produce surprising and useful answers in their own right. Creationists and Intelligent Design proponents often criticize such algorithms for not generating true novelty, and claim that these mathematical recipes always sneak the “answer” into the program via the algorithm’s fitness testing functions.

4steiner.gif

There’s a little problem with this claim, however. While some Genetic Algorithms, such as Richard Dawkins’s “Weasel” simulation, or the “Hello World” genetic algorithm discussed a few days ago on the Thumb, do indeed include a precise description of the intended “Target” during “fitness testing” of the numerical organisms being bred by the programmer, such precise specifications are normally used only for tutorial demonstrations, not for the generation of true novelty.

In this post, I will present my research on a Genetic Algorithm I developed a few years ago for the specific purpose of addressing the question: Can Genetic Algorithms Succeed Without Precise “Targets”? For this investigation, I picked a math problem for which there is a single, specific answer, yet one for which several interesting “quasi-answers” - multiple “targets” - also exist.

PT readers, you are about to enter the Strange and Curious world of “The MacGyvers.” Buckle up your seat belts, folks - our ride through Fitness Landscapes could get a little bumpy.

BACKGROUND

When Phillip Johnson spoke in Albuquerque some five years back, I was tapped to give a follow-up speech at UNM, and decided to present a genetic algorithm that, unlike the Weasel or “Hello World” programs, did not require any information on the specific details of “target” solutions. While I’ve shown this work at several talks, and even discussed it with William Dembski himself (after his November 13th, 2001 debate with Stuart Kauffman at UNM), I’ve been remiss in getting it properly documented. That is, until now.

In addition to lacking any “target specifications,” I wanted an algorithm that was easily visualized, and one that had a physical analog. I wanted to build a playground on the very edge of complexity. And that’s why I decided to use what is called “Steiner’s problem” as my inspiration. And while I’ve oft been surprised at the outputs of my computer programs, I have never been as astonished as I was in this case.

DAWKINS’S WEASEL ALGORITHM

The “Weasel” genetic algorithm discussed by Richard Dawkins in his 1986 book The Blind Watchmaker is a prime example of a “Targeted” Genetic Algorithm. In the book, Dawkins shows an evolutionary algorithm that breeds strings of characters, and uses mutations and selection, until the target Shakespearean phrase “METHINKS IT IS LIKE A WEASEL” appears, typically in only a few dozen generations.

Dawkins himself emphasized that this was a tutorial exercise more than a rigorous simulation of biology:

Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective ‘breeding’, the mutant ‘progeny’ phrases were judged according to the criterion of resemblance to a distant ideal target, the phrase METHINKS IT IS LIKE A WEASEL. Life isn’t like that. Evolution has no long-term goal. There is no long-distance target, no final perfection to serve as a criterion for selection, although human vanity cherishes the absurd notion that our species is the final goal of evolution. In real life, the criterion for selection is always short-term, either simple survival or, more generally, reproductive success.

Despite this disclaimer, creationists of all varieties have latched on to “Weasel” as an easy target - a convenient straw version of “evolution” that is easy to poke holes in. Indeed, we shall see that the ID community is united in cavalierly dismissing all genetic algorithms as suffering from the same shortcomings as “Weasel.”

Here is Royal Truman of Answers in Genesis:

Prof. Dawkins’ experiment is nothing more sophisticated than this…. the outcome is rigged. You have a target outcome and cannot fail to reach it through the process used.

Dembski himself has also dismissed Weasel, here:

Given Dawkins’s evolutionary algorithm, what besides the target sequence can this algorithm attain? Think of it this way. Dawkins’s evolutionary algorithm is chugging along; what are the possible terminal points of this algorithm? Clearly, the algorithm is always going to converge on the target sequence (with probability 1 for that matter). An evolutionary algorithm acts as a probability amplifier.

and here:

And nevertheless, it remains the case that no genetic algorithm or evolutionary computation has designed a complex, multipart, functionally integrated, irreducibly complex system without stacking the deck by incorporating the very solution that was supposed to be attained from scratch (Dawkins 1986 and Schneider 2000 are among the worst offenders here).

And programs like “Weasel” even play a prominent role in Discovery’s Stephen Meyer’s “peer-reviewed ID paper,” “The Origin of Biological Information and the Higher Taxonomic Categories”, which appeared in the Proceedings of the Biological Society of Washington (volume 117, no. 2, pp. 213-239). Meyer states:

Genetic algorithms are programs that allegedly simulate the creative power of mutation and selection. Dawkins and Kuppers, for example, have developed computer programs that putatively simulate the production of genetic information by mutation and natural selection (Dawkins 1986:47-49, Kuppers 1987:355-369). Nevertheless, as shown elsewhere (Meyer 1998:127-128, 2003:247-248), these programs only succeed by the illicit expedient of providing the computer with a “target sequence” and then treating relatively greater proximity to future function (i.e., the target sequence), not actual present function, as a selection criterion. As Berlinski (2000) has argued, genetic algorithms need something akin to a “forward looking memory” in order to succeed. Yet such foresighted selection has no analogue in nature. In biology, where differential survival depends upon maintaining function, selection cannot occur before new functional sequences arise. Natural selection lacks foresight.

“Weasel” is not the only simulation in which a “target” is indeed specified beforehand. Consider the “Hello World” program (HWP) simulation, wherein the authors note

Two fitness phases, with related criteria and scores, are employed for the HWP. They are:

• Phase 1: This fitness scoring is strictly based upon the correctness of the output string.

• Phase 2: In so long as the ML agent is outputting the correct string, the fitness score is augmented by a value related to the shortness of the agent.

In the “Hello World” program, as in “Weasel,” the desired “Target” is available during the entire simulation, as part of the “Fitness Test” specification itself. But are these the only Genetic Algorithms out there? Of course not!

Dembski has studiously avoided genetic algorithms more general and powerful than “Weasel” for almost a decade. PT’s own Wesley Elsberry discussed Dembski’s refusal to consider genetic algorithms used for difficult problems with no easy “answers” in 1999:

In my draft article, I utilize the same test case as I proffered to Dembski in 1997: Where does the “infusion” of information occur in the operation of a genetic algorithm that solves a 100-city tour of the “Traveling Salesman Problem”? At the time, and in my draft report, I considered and eliminated each potential source of “infusion”. Now, it appears that Dembski has ignored the test case I offered in 1997, and instead takes as an archetype the WEASEL program described by Richard Dawkins in “The Blind Watchmaker”.

To see a Genetic Algorithm actually solve a Traveling Salesman Problem of your very own specification, in just seconds, click here.

STEINER’S PROBLEM

For my investigation of Evolutionary Algorithms, I picked “Steiner’s Problem” as a candidate for a genetic algorithm workspace. Steiner’s problem is: given a two-dimensional set of points (which could represent cities), what is the most compact network of straight-line segments that connects the points? The segments represent “roads”, and the segments can meet at junctions, which can exist independently (outside) of the “cities.”

Here is the Steiner Solution for a four-node system. The four nodes are connected by five line segments, which meet at two “variable” nodes in the interior of the figure. The 120-degree angles between segments are typical of Steiner networks.

4steiner.gif

I first became interested in Steiner networks because of their connection to minimal surfaces, and to physical analogs like soap films. These are useful in some minimization problems because surface tension in the soap films acts to minimize the total area of film. This property allows Steiner network problems to be solved directly with soap films. First, two parallel clear plates are connected by posts which represent the nodes or “cities” of the problem. The assembly is then dipped into a solution of soapy water, and carefully withdrawn to produce the Steiner Solution (one hopes).

See R. Courant and H. Robbins, What is Mathematics?, Oxford University Press 1941, pages 385-397 for a marvelous discussion of Steiner’s Problem and minimal surfaces.

Here is a pair of plates set up to demonstrate the four-node system shown above with soap films. The Steiner Solution appears in animation.

4soap.gif

Because I wanted something a little more challenging than the simple four-node system, I chose five nodes arranged as shown (the Steiner Solution appears in animation).

5steiner.gif

Here is a soap-film realization of the five-node system. Here, seven segments are joined with three variable nodes to make the compact network shown - the proper Steiner Solution for the 5-node system. Again, the segments meet at 120-degree angles.

5soap.gif

Besides being visually complex, Steiner Solutions are irreducibly complex - if any segment is removed or re-routed, connectivity between nodes can disappear completely. And Steiner Solutions are Complex Specified Information. For the four- and five-node systems shown, Steiner Solutions are “complex” because generation of a proper Steiner Solution by making random guesses has a very low probability of success. And they are “Specified” because there is but one proper Steiner Solution for each system.

BASICS OF THE GENETIC ALGORITHM

I wanted my genetic algorithm to be able to develop structures like those shown here - not as complicated as real organisms, to be sure, but just complicated enough to be interesting. I set up a structure with five fixed points representing the permanent nodes to be connected. These are red in the diagram below. I chose a maximum of four “variable” nodes, shown in green in the figure. The “variable” nodes can have arbitrary locations within the overall problem space (1000 by 1000 units). The five fixed nodes sit comfortably inside the 1000-by-1000 region, ranging from 200 to 800 horizontally, and 300 to 733 vertically.

5nodewpt.gif

For the Genetic Algorithm approach I was developing, I wanted the competing digital “organisms” to be described by expressing strings of “DNA.” A string with 62 elements was established as the “DNA” template. An element can be either a base-10 digit (0-9) or a binary bit, F (or 0, False) or T (or 1, True).

02420381349550575404627243FFFFTFFFFTFTFFTFFFTFTFFFFFFTTTTFFFFF
This 62-element string represents the “DNA” of a single “organism.”

The first two digits of the string are “expressed” (by the subroutine that is used to “transcribe DNA”) as the number of extra (“variable”) nodes for this particular organism. These two digits are limited to 00, 01, 02, 03, and 04, and can be mutated to any of those five values.

02420381349550575404627243FFFFTFFFFTFTFFTFFFTFTFFFFFFTTTTFFFFF

#nodes
02

Since all digital organisms contain the same five fixed points for the five-node Steiner experiment, these were not included in the representation of DNA. Because these fixed-node locations are not subject to changes like mutations or sex, it was easier to set up the fixed node coordinates as non-DNA data available to all organisms equally. Thus, “DNA” was used to represent the variable parts of solutions only.

After the first two digits (used to specify the number of “active” variable nodes), the next 24 digits represent four pairs of three-digit numbers, which are read as the (X,Y) coordinates of the four possible variable nodes. If only two nodes are activated, then the last two pairs of variable node coordinates are not actually expressed, and act a little like “junk DNA” in the simulation. This information can be carried along, and affected by mutations and sex, all without affecting selection in the least. Later mutations can “re-activate” such junk DNA, enabling significant changes in genetic expression.

02420381349550575404627243FFFFTFFFFTFTFFTFFFTFTFFFFFFTTTTFFFFF

Node 6 Node 7 Node 8 Node 9
(420,381) (349,550) (575,404) (627,243)

Following the 26 (=2+24) digits specifying the number of active nodes and the locations of all variable nodes, there are 36 bits describing the connections between nodes. The five fixed nodes are numbered 1-5, and the four possible variable nodes 6-9. The map is a compact representation of the “9 objects 2 at a time” (9C2) = 36 possible connections between 9 nodes. The first 8 bits represent connections (T=True) or lack of connections (F=False) between fixed node 1 and nodes 2-9; the next 7 bits show connections between node 2 and nodes 3-9, and so on, as shown in the diagram. Depending on how many variable nodes are activated, many of the 36 T/F connection bits can be “junk DNA” as well.

02420381349550575404627243FFFFTFFFFTFTFFTFFFTFTFFFFFFTTTTFFFFF

nodemap.gif

The complete “expression” of the digital organism
02420381349550575404627243FFFFTFFFFTFTFFTFFFTFTFFFFFFTTTTFFFFF
is shown below. Because only two of the variable nodes are activated, nodes 8 and 9 (and any possible connections) are not actually “expressed.” In the diagram, the fixed nodes (common to all organisms) are in black, the activated variable nodes in red, and inactive variable nodes and connections in green. The organism’s “fitness” is simply the sum of the lengths of the active (black) line segments.

dnaexamp.gif
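To make the encoding concrete, here is a minimal sketch in Python of how such a 62-character genome might be decoded. This is my own illustrative paraphrase, not the actual program (the listing linked near the end of this post appears to be Fortran, judging from its comment markers); all names here are mine.

def decode(dna):
    # First two digits: number of active variable nodes (00-04).
    n_var = int(dna[0:2])
    # Next 24 digits: four (X,Y) pairs of three-digit coordinates for
    # the possible variable nodes (nodes 6-9).
    coords = [(int(dna[2 + 6*i : 5 + 6*i]), int(dna[5 + 6*i : 8 + 6*i]))
              for i in range(4)]
    # Last 36 characters: T/F bits for the 9C2 = 36 possible connections,
    # listed as node 1 vs. nodes 2-9, node 2 vs. nodes 3-9, and so on.
    bits = iter(dna[26:62])
    edges = [(a, b) for a in range(1, 10) for b in range(a + 1, 10)
             if next(bits) == 'T']
    return n_var, coords, edges

n, coords, edges = decode(
    "02420381349550575404627243FFFFTFFFFTFTFFTFFFTFTFFFFFFTTTTFFFFF")
print(n, coords[:n])  # prints: 2 [(420, 381), (349, 550)]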

In the Genetic Algorithm itself, the DNA strings for a population of around 2000 random solutions are tested, mutated, and bred over a few hundred generations. Mutation is easily performed, and requires no knowledge of actual coordinates or connections. If a mutation is to affect one of the first 26 digits of an organism’s DNA, that digit is replaced by a random digit from 0 to 9, ensuring the mutated DNA can still be “read.” (The first two digits are limited to just 00-04.) Likewise, a mutation in the 36-bit connection-map part of the DNA string replaces the affected bit with a random new bit (T or F). Sex is also easy to simulate: for two organisms to be mated, a crossover position is selected, and corresponding sections of DNA are exchanged to form two new organisms. (So, if AB mates with CD, the offspring are AD and CB, with sections A and C having equal lengths, and likewise sections B and D.) Both offspring inherit “genes” from both parents. As is common in Genetic Algorithms, some “elite” members of the population are usually retained as-is (i.e. asexual reproduction instead of sexual reproduction). A sketch of these two operators appears below.
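Under the same caveat - this is an illustrative sketch with my own names, not the original code - mutation and crossover might look like this:

import random

def mutate(dna, rate=0.01):
    # Point mutations chosen so the mutated string can always still be "read."
    out = list(dna)
    for i in range(62):
        if random.random() < rate:
            if i < 2:
                # The node-count field as a whole is limited to 00-04.
                out[0], out[1] = '0', random.choice("01234")
            elif i < 26:
                out[i] = random.choice("0123456789")  # coordinate digits
            else:
                out[i] = random.choice("TF")          # connection bits
    return "".join(out)

def crossover(mom, dad):
    # Single-point crossover: if AB mates with CD, the offspring are AD
    # and CB, with corresponding sections of equal length.
    cut = random.randrange(1, 62)
    return mom[:cut] + dad[cut:], dad[:cut] + mom[cut:]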

All that remains is getting the whole thing started, and implementing a “Fitness Test” to see which organisms have a better chance of contributing to the next generation’s gene pool. The first generation of 2000 strings is generated randomly, under the constraints that the first two digits can only be 00-04, the next 24 digits can be any digits 0-9, and the last 36 bits T or F. Each generation is tested for “fitness,” and the individuals with higher fitness are assigned a higher probability of making it into the next generation. While mating is biased toward organisms with better fitness, because it is a stochastic process, even low-fitness individuals can overcome the odds and breed once in a while. (By analogy to human society, “even nerds and wallflowers can get lucky occasionally.”) A sketch of the random initialization follows.
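Generating such a first generation under those constraints is straightforward; again, a minimal sketch of my own, not the original code:

import random

def random_dna():
    head = "0" + random.choice("01234")  # node count, constrained to 00-04
    digits = "".join(random.choice("0123456789") for _ in range(24))
    bits = "".join(random.choice("TF") for _ in range(36))
    return head + digits + bits

population = [random_dna() for _ in range(2000)]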

THE FITNESS TEST

The first generation, being generated randomly, is typically a sorry-looking bunch of recruits for Steiner Solution Candidates. Three such initial solutions are shown below. In this figure, the first and third “organisms” connect all the nodes, while the second has a fatal flaw: the top node is not connected to any other node. I defined the “fitness” of an organism simply as the net length of all activated segments, or 100,000 if any fixed node is unconnected. It’s important to note that the “fitness” thus defined does not depend on the exact number of active variable nodes, or the angles between connected segments, or upon anything other than the total length of active segments. While both the first and third solutions at least connect the fixed nodes, they are both far different from the proper Steiner Solution for the five-node system. The Fitness Test knows nothing of this solution, however; all it tells us is that the solution on the right is a little shorter, and therefore “fitter,” than the solution on the left. Because the middle solution misses a node, it is “unfit.”

gen-0.gif
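In code, the fitness test just described might look something like the sketch below, using the decode helper from earlier. The post does not list the five fixed-node coordinates, so they are left as a parameter; the disconnection check follows the wording above, flagging any fixed node that no active segment touches.

import math

PENALTY = 100_000  # assigned if any fixed node is left unconnected

def fitness(dna, fixed):
    # 'fixed' is the list of the five fixed-node (X, Y) coordinates.
    n_var, coords, edges = decode(dna)
    points = {i + 1: xy for i, xy in enumerate(fixed)}       # nodes 1-5
    points.update({j + 6: coords[j] for j in range(n_var)})  # active variable nodes
    # Only segments whose endpoints are both expressed are "active."
    active = [(a, b) for (a, b) in edges if a in points and b in points]
    touched = {node for edge in active for node in edge}
    if any(f not in touched for f in range(1, 6)):
        return PENALTY
    # Fitness is simply the total length of the active segments.
    return sum(math.dist(points[a], points[b]) for a, b in active)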

The genetic algorithm proceeds by simulating heredity, mutations, and selection. Each population (of 2000 or so solutions) is graded, and those solutions that do better (shorter networks) are given a better shot at contributing to the next generation than ones that do worse. During the process, occasional mutations and sex (via crossovers) are employed in every generation. One full generation, using the helpers sketched above, might look like the code below.
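Again as a sketch only - the elite size and rates here are illustrative choices of mine, not the values used in the actual runs:

import random

def next_generation(population, fixed, elite=20, mut_rate=0.01):
    # Grade the population: shorter networks are fitter (lower score).
    graded = sorted(population, key=lambda dna: fitness(dna, fixed))
    new_pop = graded[:elite]  # "elite" members pass to the next generation as-is
    # Fitter organisms get better odds, but even the least fit can breed.
    weights = [1.0 / (rank + 1) for rank in range(len(graded))]
    while len(new_pop) < len(population):
        mom, dad = random.choices(graded, weights=weights, k=2)
        for child in crossover(mom, dad):
            new_pop.append(mutate(child, mut_rate))
    return new_pop[:len(population)]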

RESULTS

After 100-200 generations, the algorithm inevitably converges on answers that represent very compact, efficient networks. By “converge,” I mean that the most fit members of the population represent the same “adapted” configuration. After hundreds of simulations, I found that there were only a dozen or so viable configurations that evolved. Eliminating left-right symmetrical solutions, the six most common adaptations appear below. The structure on top, and its left-right mirror image, occurs most frequently (62% of the runs), and others less often (see figure). In about one out of 200 runs, or 0.5% of the time, the solution happens to be the elegant, symmetrical shape shown on the bottom of the figure. This happens to be the actual “Steiner’s Solution” for this five-node network. And while the ideal Steiner Solution was indeed a goal of my experiment, it was not a pre-specified “target.” And the other five common adaptations were not specified as “targets” either, yet they evolved time and again. I call the non-Steiner solutions “MacGyvers,” in honor of the MacGyver television show, whose star was famed for finding novel uses for objects at hand, such as using the graphite in a pencil to short out an electric circuit, in lieu of a wire or metal conductor. The “MacGyver” solutions are not as elegant and pretty as the formal Steiner solutions, but they get the job done, and often quite efficiently.

ga-sols.GIF

Why does the formal Steiner solution appear relatively rarely? One reason is that I was testing total length only, and not “shortest connectivity between all pairs of fixed nodes,” which is a hallmark of formal Steiner solutions. Another is that, while the genetic distance between various “MacGyver” solutions and the single Steiner solution is quite large, the actual difference in total length units is small. For example, the third “MacGyver” solution, which occurs 9% of the time, and which has length 1217, is only five length units longer than the proper Steiner solution (1212 units), or just 0.4% longer. Given the population size and number of generations, that’s not much of a fitness difference for Selection to act on, and because the 1217-length solution has simpler geometry (only two variable nodes required, in contrast to the Steiner’s three), it evolves more often. It’s worth noting that Selection over many generations is optimizing not only the connection maps between nodes, but the actual locations of variable nodes as well. That’s why the Genetic Algorithm produces solutions in which segments often meet at 120-degree angles, much as soap films do.

It’s also worth noting that, while the proper Steiner solution requires three “variable nodes,” the “MacGyvers” are simpler, with two, one, or even no “variable nodes.” In fact, the second-most-common solution (26%) has no “variable” nodes at all, making this network a viable solution to the “Traveling Salesman Problem” for this 5-node system.

Now, I could have implemented a “Fitness Function” that was designed to produce the formal Steiner solution only. For example, I could have taken steps to favor “shortest connectivity between all pairs of fixed nodes,” or perhaps “junctions between three segments meeting at 120-degree angles.” Or, I could have simply made the fitness test measure the deviation from the geometry of the proper Steiner solution, with less deviation equal to higher “fitness.” Alternatively, I could have calculated the contents of the DNA string for the Steiner solution, and tested for proximity to that specific string, just as in “WEASEL.” But that would have required “knowledge of the target” at each and every step, and that was something I purposely wanted to avoid.

ANALYSIS

The exercise was a splendid success. Simply given the environment “shorter is better, connectivity critical,” a suite of digital organisms with solutions comparable to formal Steiner systems was evolved. The “MacGyver” solutions were found to be “irreducibly complex” - move or remove any segment, and the system could fall apart completely. And in about one of 200 simulations, the “irreducibly complex” and “Complex Specified” shape of the formal Steiner solution emerged. This is of interest because this is precisely the type of innovation that is supposedly precluded by Behe’s “Irreducible Complexity” or Dembski’s “Complex Specified Information.” That it happens at all should be the end of claims by creationists that physics or math themselves preclude evolution.

When I discussed this with William Dembski in 2001, he said I was simply “front-loading” the specific Steiner solution into the mix. Clearly, I was not doing so. My fitness function only does two things: rewards structures for having shorter length, and for connecting all of the cities. Nowhere am I specifying any actual details of the solution. All I am doing is applying a test to the population, and defining “winners,” which then receive only a better shot - not even a sure thing - at continuing their line.

It wasn’t until I started investigating whether some of the “MacGyver” solutions could also be realized with soap films that things really got interesting. I quickly found that several of the configurations that evolved from the genetic algorithm could also be obtained with soap films, simply by pulling the parallel plates out of the soap solution at angles other than horizontal. A soap-film incarnation of one of the “MacGyver” shapes appears below.

macguyvr.gif

ALGORITHM RESULTS WITH NO CORRESPONDING PHYSICAL ANALOGS

Not all of the “MacGyvers” could be obtained with soap films, however. The shape below, which I named the “Face Plant,” features four segments meeting at a common point. While this presented no problem for DNA representations of solutions, it is almost impossible in real soap films, as the junction of four films is invariably a very unstable equilibrium. In soap films, such junctions of four segments will quickly resolve into a bow-tie shape as typified in the solution to the four-node Steiner System discussed above. The “Face Plant” turned out to be a MacGyver solution that could easily exist in the genetic algorithm, but could not be realized with minimal-surface soap films.

join4.gif

PHYSICAL ANALOGS WITH NO CORRESPONDING ALGORITHM RESULTS

If that wasn’t strange enough, soon I stumbled on “The Doggie” - a stable soap-film configuration that never appeared during the genetic algorithm simulations. Even the formal (but topologically tricky) Steiner Solution popped out in one of 200 runs on average - why did the Doggie, or related structures like the Dubya, never appear?

doggiean.gif

After several frustrated attempts at “doggie” evolution, I decided to go ahead and do what Dembski implies I am doing for all such shapes - deliberately perform some “genetic engineering” to “front-load” the system with a specified solution.

Accordingly, I deduced the DNA configuration for a typical “Doggie,” and forced this particular organism to be present as one individual of the very first generation of a simulation.

“the Doggie” length = 1403
01540600667530350405390474FFFFTFFFFTFFFFFFTFFFFFTFFFTFFFFFFFFF

doggie.gif

Sure enough, the “Doggie” was much more fit than most members of the initial (random) population, and persisted for several generations. However, at 150 to 200 units longer than all of the “MacGyver” solutions, it was quickly out-competed and forced to extinction by such solutions. After a dozen generations or so, “The Doggie” was simply wiped out by the competition.

Had I actually been feeding the proper Steiner Solution into the algorithm - “front-loading” in Dembski’s parlance - it would have triumphed every time, and I would never have found the bizarre and wonderful world of MacGyver also-rans. The same boring result would also have been obtained had I defined “fitness” as deviation from a single, specific “target” - the proper Steiner solution itself. Either way, I wouldn’t have found that some (but not all) of these new structures could be realized with soap films, and I wouldn’t have found that some stable soap-film configurations are far longer than the minimum possible, and are not retained in evolutionary algorithms. As I said, I have never been as astonished at the unexpected output of one of my digital programs.

ID’S “MESA” PROGRAM

Have “Intelligent Design Theorists” done any of their own work on Genetic Algorithms? Well, “sort of.” ISCID has released the MESA program (Monotonic Evolutionary Simulation Algorithm), by Micah Sparacio, John Bracht, and William Dembski.

MESA models evolutionary searches that employ monotonic smooth fitness gradients. It presupposes a fitness landscape that converges gradually to a single optimum (peak or valley) and asks how quickly evolution can locate the optimum when fitness is randomly perturbed and/or when variables are coupled.

On the MESA About Page, the link to the “Summary Page” is broken, despite the management having been informed of that almost two years ago! (Click to the bottom of the “Discuss” page for the sordid details.) The Summary Page is actually here, and says

The (very) basic algorithm:

1. Randomly generate initial population

2. Apply fitness function in which couplings are only given a beneficial fitness value if each bit in the coupling matches the target. Then apply the fitness perturbation if it is being used.

3. Re-populate: if elitism is on, keep the most fit half of population the same, and reproduce new half with mutation from the most fit half. If elitism is off, take most fit half of population, and reproduce each one twice with mutation.

4. If crossover is on, do crossovers based on crossover rate and randomly generated crossover pairs

5. Do steps 2-4 until you have reached the target organism

(emphasis added)

In other words, MESA, like “Weasel,” is based on matching a very specific “target.”

If all genetic algorithms were like “Weasel,” then the MESA program might have had some utility. But, as it stands, MESA only serves to refute strawman versions of such algorithms. Now that the ID folks have latched on to “Weasel” as the Prime “Target,” it’s as if they never heard of anything more advanced. But that’s to be expected, since the aim of ID/creationism is simply to discount the science of evolution in any way possible. Dawkins’ tutorial demonstration has been deemed vulnerable - and they certainly don’t want the rubes to learn about the more interesting algorithms out there.

Although MESA is limited to “specified-target” simulations, Dembski pitches it as though it were much more general. Writing in commentary on a Textbook Hearing in Austin, Texas on September 10, 2003, Dembski states

Design theorists explore the relationship between these two types of evolutionary computation as well as any design intrinsic to them. One aspect of this research is writing and running computer simulations that investigate the scope and limits of evolutionary computation. One such simulation is the MESA program (Monotonic Evolutionary Simulation Algorithm) due to Micah Sparacio, John Bracht, and William Dembski. It is available online at www.iscid.org/mesa

Clearly, Dembski et al. would have us believe that ALL evolutionary computation requires answers beforehand, like “Weasel,” and that MESA therefore has something to say about the “scope and limits of evolutionary computation.”

ID RESPONSE TO THE MACGYVERS

Although this post is the first hard-copy discussion of my work on Steiner’s Problem and genetic algorithms, it caught the attention of John Bracht, then a student at New Mexico Tech, when I discussed it there in 2001. Bracht, who went on to help Dembski produce MESA, had this to say about the topic:

While an undergraduate student at New Mexico Tech I attended a presentation that demonstrated another key feature of genetic algorithms: the crucial role of the fitness function. The public lecture was given on February 21, 2001, and the speaker was Dave Thomas, president of the local skeptics group New Mexicans for Science and Reason, and alumnus of New Mexico Tech in mathematics. In his lecture, Thomas presented a genetic algorithm that was designed to solve the Steiner problem. The problem entails finding the network that connects five pre-given points with minimal path-length. In true Darwinian fashion the program begins with a set of random networks, and with rounds of mutation and selection it converges on a small set of minimal networks. Occasionally, the program even finds the universal optimum Steiner solution. Most of the time the program gets stuck in local optima with very short networks that are not quite as good as the Steiner solution. After the demonstration I had an email exchange with Thomas (personal communication), and I pointed out that the program created no real novelty and no information besides the information originally contained within the fitness function itself. My logic was as follows: the desired solution has (1) all five points connected, and (2) the shortest path-length. The program selected for networks that (1) connect all five points, and (2) have shortest path-lengths. It is no wonder that the program converges regularly upon short, optimum networks; it has been told precisely what to do by explicit instruction in the fitness function. Furthermore, the encoding of the program is also a key part of the problem solving process. The encoding of the program was carefully selected to fit the problem to be solved—the program was given five pre-existing fixed points, the possibility of adding floating points, and some way of interpreting its “genome” as line segments. All this encoding places the program in the hypervolume of networks and the Steiner problem. Furthermore, the fitness function explicitly targets the Steiner solution within that hypervolume—and the program simply follows the fitness function to find the answer. This holds with perfect generality; in any and every evolutionary algorithm it is possible to pinpoint precisely those parameters that have been set by an intelligent agent, parameters that must be carefully coordinated to allow the program can do the design work the programmer has in mind.

This is flat wrong. The “hypervolume” of possible solutions that can exist in my genetic algorithm includes a huge number of possible solutions, each characterized by five fixed nodes (points), zero to four possible additional nodes at arbitrary (variable) locations, and up to 36 possible connections between the nine possible nodes. For the connection map alone, there are 2^36 ≈ 7*10^10 possible combinations; then, for each variable node, there are 1000^2 = 10^6 possible locations. Thus, there are a million ways to pick the first node, another million ways to pick the second node, and so on, for a total of 10^(4*6) = 10^24 possibilities. All together, there are about 7*10^(10+24) = 7*10^34 possible organisms in the “hypervolume” - a large number indeed.
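For anyone who wants to verify the arithmetic, it takes one line of Python:

print(2**36 * (1000**2)**4)  # 68719476736 * 10^24, i.e. about 7*10^34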

It is simply false that “the fitness function explicitly targets the Steiner solution within that hypervolume,” as I have explained. What kind of “explicit” specification obtains the desired response only once out of 200 trials?

For the Brachts and Dembskis of the world, here is yet another physical analogy to make this crucial point clear. If the connections in the Genetic Algorithm are physically represented by wooden sticks of various lengths, and the nodes are physically represented by bolts that can be used to connect the sticks at their ends, then the “hypervolume” of possible solutions includes any and all arrangements of sticks, provided that the bolts used to connect the sticks have one of the five fixed positions, or any allowed “variable” position.

Here are two physical realizations of such networks - just two members of untold billions from the “hypervolume” of possible solutions. The first network here connects all fixed points, as required; it consists of 11 sticks joining the five fixed and four variable nodes (bolts), and has a (scaled) length of 2874 units. This is representative of the typical “organism on the street,” selected randomly from the vast hypervolume of solutions.

2874.jpg

The second network also connects the fixed points, but with only seven sticks, with a total length of 1212 units. This “house-like” structure represents the formal Steiner Solution for the five-node system.

1212.jpg

When Bracht or Dembski say that Steiner Solutions are somehow implicitly embedded in the fitness function, that’s like saying the specific design of a house is implicitly embedded in the concepts of nails, lumber and economy. No, it’s not.

Other creationist/ID critics have written me to say that the desired answers were already present in the initial population, and that all my program was doing was removing “redundant” connections to reveal the best minimal networks. Sorry, but that doesn’t wash either. I tried to reduce the 2874-unit-long “monster” above by removing redundant connections. My first attempt got the length down to 1473 units (or 21% longer than the Steiner’s 1212).

1473.jpg

I tried again, this time achieving a length of 1368 units (or about 13% longer than the Steiner).

1368.jpg

So, evolving solutions like the Steiner or the MacGyvers involves more than just knocking out some redundant segments of random networks. It really requires selection, reproduction, and mutation - just like real evolution.

Given a big pile of nails and lumber, it’s obvious that many different “houses” could be built by someone using the available hardware. The shape - the design - of the house is NOT spelled out in a loose assemblage of lumber and nails, not even if the person or “intelligence” who is to actually build the house understands that less lumber means lower cost.

CONCLUSION

The bottom line is that this simulation shows, once again, “how possibly” the basic processes of selection, mutation and sex, occurring over hundreds of generations, can result in some very striking innovations. That’s one reason Genetic Algorithms are increasingly being used in industry.

In my algorithm, the Fitness Test is easy to apply: calculate the length of active segments. Shorter connected systems are more “fit.” In real life, the “fitness test” is likewise simple to apply: if an organism lives long enough to have offspring, it “passes”; otherwise it “fails.”

Despite the ID crowd’s claims to the contrary, Evolutionary Algorithms can produce striking novelty, without any pre-specification of a “target.” While the Weasel and Hello World simulations are interesting in their own right, they are not intended as serious analyses of the real thing. Some simulations that study evolution seriously, like the Traveling Salesman Problem, are easily visualized. Others, like Avida, are more difficult to grasp on an intuitive level.

However, “Weasel” is not representative of all Genetic Algorithms, and there is simply no excuse for ID’s continuing “Bait and Switch” tactics as regards such algorithms.

And that’s why I’ve decided to release my Steiner MacGyvers on an unsuspecting world.

Will Dembski ever change his tune? Not likely, especially considering these comments from page 221 of No Free Lunch, regarding a genetic algorithm that was used to develop antennas given a fitness function that simply favored uniform gain:

A particularly striking example is the “crooked wire genetic antennas” of Edward Altshuler and Derek Linden. The problem these researchers solved with evolutionary (or genetic) algorithms was to find an antenna that radiates equally well in all directions over a hemisphere situated above a ground plane of infinite extent. Contrary to expectations, no wire with a neat symmetric geometric shape solves this problem. Instead, the best solutions to this problem look like zigzagging tangles. What’s more, evolutionary algorithms find their way through all the various zigzagging tangles–most of which don’t work–to one that actually does. This is remarkable. Even so, the fitness function that prescribes optimal antenna performance is well-defined and readily supplies the complex specified information that an optimal crooked wire genetic antenna seems to acquire for free.

Again, giving an engineer a pile of wire segments and instructions on “uniform gain” does not inherently specify the design of an optimal antenna, any more than the design of a house (or a Steiner network) is inherent in a pile of lumber and nails, even with instructions to “hold down lumber costs.”

As PT’s own Alan Gishlick, Nick Matzke, and Wesley R. Elsberry stated in Meyer’s Hopeless Monster,

While some genetic algorithm simulations for pedagogy do incorporate a “target sequence”, it is utterly false to say that all genetic algorithms do so. Meyer was in attendance at the NTSE in 1997 when one of us [WRE] brought up a genetic algorithm to solve the Traveling Salesman Problem, which was an example where no “target sequence” was available. Whole fields of evolutionary computation are completely overlooked by Meyer. Two citations relevant to Meyer’s claims are Chellapilla and Fogel (2001) and Stanley and Miikkulainen (2002). (That Meyer overlooks Chelapilla and Fogel 2001 is even more baffling given that Dembski 2002 discussed the work.) Bibliographies for the entirely neglected fields of artificial life and genetic programming are available at these sites:

http://users.ox.ac.uk/~econec/alife.html
http://www.cs.bham.ac.uk/~wbl/biblio/gp-bibliogr….

A bibliography of genetic algorithms and artificial neural networks is available here.

I’ve shown how one genetic algorithm, in the absence of a specific intended “target,” can produce multiple innovations for given network problems. But the ID/creationist crowd is still stuck on criticizing Dawkins’s “Weasel.”

Will Dembski/Berlinski et al. ever take on, say, the Traveling Salesman Problem, or the present work? Don’t hold your breath!

POSTSCRIPT: A BRIEF COMMENT ON DEMBSKI’S “NO FREE LUNCH” ARGUMENTS

Dembski’s claims regarding the “No Free Lunch” theorem have been thoroughly fisked at the Thumb, here and here for example. His version of this theorem basically states that, when averaged over ALL POSSIBLE fitness functions, genetic algorithms do no better than chance. But that much is obvious. For example, if I had specified “longer is better” in my Genetic Algorithm - or not considered length at all - I would not have ended up with compact structures. (I know - I did the experiments!) Of course, thousands upon thousands of different “fitness tests” could be conceived. Averaging over all of these is irrelevant to the question at hand - what can be done with one simple fitness function? If you placed a large population of brown bears in a polar environment, and could wait long enough, you might see the bears evolve into white-furred animals like Polar Bears. But, if you averaged over all possible environments - polar, desert, mountaintop, marine, tundra, rainforest, etc. - you would not see the emergence of white-furred bears, not even after many millions of years. Evolution responds to the environment at hand, not fictional families of all possible environs. Dembski’s “No Free Lunch” claims are on the same “sound” mathematical foothold as his factoring of 59, or his understanding of normalization. (And Dembski’s been messing up so much lately, folks are piling on.)

POSTSCRIPT: SERIOUS USE OF ‘TARGETED’ SELECTION

Sometimes the concept of a “target” is used in real studies of the evolution of specific characters or traits, or combinations of those features. For example, D. Waxman and J.J. Welch (“Fisher’s Microscope and Haldane’s Ellipse”, American Naturalist, 166:447-457, 2005) describe the following type of fitness functions:

“In Fisher’s and Kimura’s analyses, it was assumed that all traits are under stabilizing selection of identical intensity. In particular, it was assumed that the fitness of a phenotype is a monotonically decreasing function of its Euclidean distance from the optimal phenotype. Geometrically, this corresponds to a “fitness landscape” that is spherically symmetric. “Surfaces” of constant fitness are hyperspheres (i.e., circles when n = 2, spheres when n = 3, …) that are centered on the optimal phenotype. If we choose to measure each trait in such a way that its optimal value is 0, then the optimal phenotype will lie at the coordinate origin, z = 0 = (0,0,…,0). Fitness is then a function of ||z|| = (z_1^2+z_2^2+…+z_n^2)^{1/2}.”

The point is not that “optimal phenotypes” or “targets” have no utility in any discussions of evolution. They certainly do, in the proper context. Rather, the point is that some computer analyses, and evolution itself, do not require any sneaking in of complex information beforehand.

A CHALLENGE TO ID ADVOCATES/CREATIONISTS

I have placed the complete listing of the Genetic Algorithm that generated the numerous MacGyvers and the Steiner solution at the NMSR site.

If you contend that this algorithm works only by sneaking the answer (the Steiner shape) into the fitness test, please identify the precise code snippet where this frontloading is being performed. The listing includes documentation on how the “Doggie” was genetically engineered into certain experiments. For the main set of runs (no engineering), the “Doggie” codes were commented out (C or !).

FURTHER READING

Ian Musgrave’s Weasel Page:
http://www.health.adelaide.edu.au/Pharm/Musgrave…

“Touched by nature: Putting evolution to work on the assembly line” by C. W. Petit (U.S. News & World Report)
http://www.genetic-programming.com/published/usn…

“36 Human-Competitive Results Produced by Genetic Programming”
http://www.genetic-programming.com/humancompetit…

There are now 36 instances where genetic programming has produced a human-competitive result…. These human-competitive results include 15 instances where genetic programming has created an entity that either infringes or duplicates the functionality of a previously patented 20th-century invention, 6 instances where genetic programming has done the same with respect to a 21st-century invention, and 2 instances where genetic programming has created a patentable new invention.

“Publications using Avida as a research platform”
http://dllab.caltech.edu/pubs/


Comment #110179

Posted by DragonScholar on July 5, 2006 1:11 PM

Excellent job. Good reading, very informative.

This has me thinking that a book ON Genetic Programming like this, written for an average audience, would be an excellent publication.

Comment #110181

Posted by steve s on July 5, 2006 1:22 PM

To see a Genetic Algorithm actually solve a Traveling Salesman Problem of your very own specification, in just seconds, click here.

That’s a really neat applet. It took me 5 tries before I could come up with a layout whose solution still occasionally changed after 1000 generations.

Comment #110185

Posted by Timothy Chase on July 5, 2006 2:56 PM

I remember reading not too long ago that it had been predicted that optimal solutions could be discovered much more rapidly through sexual recombination than through simple point mutation. However, results obtained by one investigator via a genetic algorithm suggested that sexual recombination wasn’t any more efficient. Then his work was reviewed, and the problem discovered: the population size he was using was too small. To make use of the power of sexual recombination, you need larger population sizes – and this achieves a combinatorial power that is absent in point mutation approaches.

Alternatively, given Ohta’s near-neutralist approach (the successor to Kimura’s neutralism and what may be viewed as “splitting the difference” between neutralism and selectionism), smaller populations can tolerate more experimentation since slightly deleterious mutations (which in the real world would include gene duplication, segmental duplication, chromosomal rearrangements and polyploidy) may remain largely invisible to selection. This would be encouraged by bottlenecks. Of course, population size will rebound, but the (frequency) effective population size rebounds much more slowly as it is roughly the harmonic mean of successive population sizes. As such, the effects of selection will increase much more slowly, giving time for compensating mutations, and as such, would suggest that genomic complexity will increase with a series of appropriately spaced bottlenecks, much in line with the thoughts of Ohno. In essence, you have a complexity ratchet which relies upon periods of relaxation (greater tolerance for slightly deleterious mutations) and tightening.

Incidentally, the first demo uses a form of soft selection along the lines (first?) suggested by Bruce Wallace. But I assume this is common. Likewise, it is typically correct.

Comment #110187

Posted by steve s on July 5, 2006 3:05 PM

Dembski doesn’t say that the information is snuck in through the fitness functions because he really believes it, he says it to satisfy his followers. They never ask him to calculate how much information the functions sneak in, and he never volunteers it.

Comment #110188

Posted by David B. Benson on July 5, 2006 3:09 PM

Excellent! Thank you!

Comment #110189

Posted by ivy privy on July 5, 2006 3:11 PM

There’s a little problem with this claim, however. While some Genetic Algorithms, such as Richard Dawkin’s “Weasel” simulation, or the “Hello World” genetic algorithm discussed a few days ago on the Thumb, indeed include a precise description of the intended “Target” during “fitness testing” on of the numerical organisms being bred by the programmer, such precise specifications are normally only used for tutorial demonstrations rather than generation of true novelty.

These have also recently been discussed in Allen MacNeill’s course at Cornell, where they have recently read from Dawkins’ book The Blind Watchmaker, and where Creationist kiddy Scott informs the discussion:

If there is NO TARGET, there is no way to compare the probabilities associated with single-step selection as compared to cumulative selection nor the number of steps it requires to reach the target if we apply cumulative selection, and thus no basis upon which to make the assertion that cumulative selection is any more “powerful” than single-step selection.

That’s the whole point of my argument and it shows Dawkins’ argument to be completely misguided and misleading.

If you disagree, then show how “cumulative selection” is more powerful than single-step selection without introducing a target.

You cannot do it. It’s logically impossible. Go ahead, try.

Comment #110200

Posted by Coin on July 5, 2006 4:36 PM

Excellent article.

It’s funny how quickly ID proponents run and hide behind the “fine-tuning” objection that by specifying a problem to solve (in this case, connect five dots using short lines) you’re somehow sneaking in the “information” that comprises the solution. I would never have guessed the word short contains so much “information”.

If you think about it, the “fitness functions are information” cop-out seems to presage a point where the IDer expects he’ll no longer be able to claim enough gaps in the theory of Evolution to comfortably hide God in, and will have to retreat to the idea of the designer “sneaking in” information to the universe by defining the laws of physics themselves. Okay, so maybe the designer didn’t create the flagellum by hand, but it created the information in that flagellum, because the designer created the fitness functions (from which all information flows) that exist all around us in the world… Hm, “Information” is starting to sound like the Force at this point or something.

Anyway, I did want to note a slight problem with one of your postscripts:

His version of [the NFL] theorem basically states that, when averaged over ALL POSSIBLE fitness functions, genetic algorithms do no better than chance. But that much is obvious. For example, if I had specified “longer is better” in my Genetic Algorithm - or not considered length at all - I would have not ended up with compact structures.

You have identified the correct flaw in Dembski’s argument, but I am not sure your representation of what Dembski is saying is exactly correct. Your example about polar bears works quite well as an analogy, but I think it somewhat misrepresents what is actually happening with the NFL theorems.

My understanding is that the NFL theorems say that genetic algorithms and chance (as well as all other possible algorithms that fit the NFL theorems’ limitations) do equally well when averaged over the set of all possible fitness functions, and they “do well” with respect to the fitness function being used at that exact moment.

In other words, if your fitness function had been “long paths are better” rather than “short paths are better”, then you would not have found compact structures. But you still would have found fit structures– you would have found structures with optimally long paths, and this is exactly what the fitness function wanted.

Where NFL comes in would be if your fitness function, instead of being something reasonable like “reward structures with short paths” or “reward structures with long paths”, were something like “select a fitness value at random for every structure we test”. Your evolutionary algorithm could optimize its little heart out trying to find structures that make the fitness function happy, but the 10,000th generation wouldn’t be any closer to finding a fit set of paths than the first generation, because the fitness is always random.

This is where Dembski’s arguments come in– because while there are some problems, like “find the shortest path” or “find the longest path”, where evolutionary algorithms work well, these advantages disappear when we average them against the gigantic multitude of possible perverse fitness functions whose outputs are essentially random. Dembski jumps on this to claim that, because of this, evolutionary algorithms don’t work, period. But this is also where Dembski’s argument falls apart entirely, because– as you said to begin with– we don’t care about all possible fitness functions, we only care about a certain specific set of fitness functions, specifically those that arise as a result of natural law as it exists on the planet earth! And your paper here has given a specific example of an evolutionary algorithm performing fantastically at solving a specific problem. (And, since it’s effectively the same as the surface optimization problem that soap bubbles solve, it’s even an example of a problem that arises naturally as a result of physical laws on Earth.)

The difference between this way of looking at the NFL theorems and the way you presented them in your postscript above is quite minor, but I think it is important to be exact because there is always the risk of encountering the kind of person who will jump on a tiny mistake in a postscript and use it as an excuse to dismiss the actual substance of the paper.

If I am mistaken about anything above please let me know.

Comment #110209

Posted by Dave Thomas on July 5, 2006 5:33 PM

Interesting comments re NFL, Coin!

They seem quite reasonable to me. I will ponderize further.

Cheers, Dave

Comment #110216

Posted by Bruce Thompson GQ on July 5, 2006 5:55 PM

I find references to 3 observed soap film solutions, the Doggie, Dubya, and Penguin, which had no simulation analogues. So why do you see the “Doggie” in the real soap film experiments and not in the simulations?
Possible explanations.

1. The designer likes real doggies and not simulated doggies.
2. Fantasy does not beat reality.
3. The occurrence of Soap Doggies is the result of the chemistry.

Is something in the soap, water, or plastic stabilizing these forms? Have you repeated your experiment with different soaps to test how different additives may affect the frequency of each of these forms? Do different additives preferentially stabilize different forms?

I would also like to point out that Richard Dean Anderson has gone on since MacGyver to fight space aliens.

Delta Pi Gamma (Scientia et Fermentum)

Comment #110218

Posted by BlackGriffen on July 5, 2006 5:58 PM

Side Note: most of your so called MacGyvers are really Steiner solutions to different problems that just happen to be “close enough”. This becomes more readily apparent if you mentally remove any fixed nodes that are connected only to their nearest neighbor fixed node directly. The reason for the removal is that the node is participating as the Steiner solution to the 2 node problem. So what you’ve found, for most of these, are close solutions that are built up by considering the big problem as a collection of smaller Steiner problems instead of one whole.

Still interesting, nonetheless.

P.S. You should be able to physically realize the one with four lines at a movable intersection if you promote that intersection to a fixed post. This looks like an artifact of considering the fixed and movable posts as indistinguishable when measuring fitness, aside from the requirement that all fixed posts must be connected.

Comment #110219

Posted by jeffw on July 5, 2006 6:02 PM

Nicely done and explained! It would be interesting to see what would happen though, if you were to alter the fitness function slightly during execution. For example, let the fixed nodes move around a bit, or favor certain lengths temporarily over others. This might perturb the fitness function enough to generate more novelty, and is perhaps closer to what happens in nature, since many fitness functions in nature change dynamically over generations.

Comment #110222

Posted by BlackGriffen on July 5, 2006 6:16 PM

Another point of interest: part of the reason for the infrequency of the best solution may come about because the limits you have placed on your parameters have required you to approach the solution from the bottom. That is, the mean number of nodes in the initial population is roughly 1.5 ± .05 or so (error bars from assuming poisson statistics without actually checking). Since the ideal solution has three nodes, you’re virtually guaranteed to need to navigate from the bottom up. If you were to vary the number of possible nodes, you might find that you’re hitting the correct solution more often. 8 possibilities (0 through 7) would seem to be the tipping point. Though that would require tweaking the rest of the program, it would still be interesting to see how the results vary as you tweak the number of possible nodes, perhaps even approaching the solution from above.

Another interesting possible tweak: see how often and to what extent deliberately seeding the optimal Steiner solution, or something near it, can bias the results. How often will it be accidentally selected against and removed from the population entirely?

Last, an observation: you’re treating this as one single organism. In considering the MacGyver solutions as collections of close Steiners, it occurs to me that what you have might be analogous to multiple organisms that fill the role of one. Each organism, in this case a set of fixed nodes connected to each other by only movable nodes, is perfectly optimized for its niche, even if the overall system is not perfectly organized. It might be going a little far to think of them as analogous to multi-cellular organisms/solutions, but it seems to be an apt comparison.

Comment #110223

Posted by Ichneumon on July 5, 2006 6:20 PM (e)

Will Dembski ever change his tune? Note likely, especially considering these comments from page 221 of No Free Lunch, […]

I believe you have a typo here: “Note” should be “Not”, it seems.

Comment #110224

Posted by Dave Thomas on July 5, 2006 6:29 PM (e)

Note=>Not.

Thanks, Dave

Comment #110226

Posted by Coin on July 5, 2006 6:55 PM (e)

BlackGriffen wrote:

Side Note: most of your so-called MacGyvers are really Steiner solutions to different problems that just happen to be “close enough”. This becomes more readily apparent if you mentally remove any fixed nodes that are connected only to their nearest neighbor fixed node directly. The reason for the removal is that the node is participating as the Steiner solution to the 2-node problem. So what you’ve found, for most of these, are close solutions that are built up by considering the big problem as a collection of smaller Steiner problems instead of one whole.

Very interesting. This answers Bruce Thompson’s question, then:

So why do you see the “Doggie” in the real soap film experiments and not in the simulations?

We see the “Doggie” in the real soap film experiments and not in the simulations because, unlike the MacGyvers, there is no subgraph of the “doggie” which is itself a Steiner solution. The MacGyvers (so BlackGriffen hypothesizes) occur because the program finds a Steiner solution for some subset of three or four of the nodes, and then adds the additional nodes to that sub-solution in a jury-rigged fashion. However, it is not possible to reach the “doggie” by this method.

In other words, we’re basically seeing here a demonstration of evolutionary exaptation (or maybe co-option? What’s the difference between those two again?). The evolutionary algorithm prefers to produce MacGyvers because when evolutionary processes produce “irreducibly complex” structures such as these Steiner solutions, they are most likely to do so by co-opting older structures to serve new purposes (i.e. “solve the 5-node Steiner problem” rather than “solve the 4-node Steiner problem”).

Here’s an interesting one, though: the actual optimal solution does evolve, even though it too has the property that no subgraph is itself a Steiner solution. Why does the optimal solution evolve but not the doggie?

Comment #110233

Posted by RBH on July 5, 2006 7:50 PM (e)

BlackGriffen wrote

Side Note: most of your so-called MacGyvers are really Steiner solutions to different problems that just happen to be “close enough”. This becomes more readily apparent if you mentally remove any fixed nodes that are connected only to their nearest neighbor fixed node directly. The reason for the removal is that the node is participating as the Steiner solution to the 2-node problem. So what you’ve found, for most of these, are close solutions that are built up by considering the big problem as a collection of smaller Steiner problems instead of one whole.

and then later

Last, an observation: you’re treating this as one single organism. In considering the MacGyver solutions as collections of close Steiners, it occurs to me that what you have might be analogous to multiple organisms that fill the role of one. Each organism, in this case a set of fixed nodes connected to each other by only movable nodes, is perfectly optimized for its niche, even if the overall system is not perfectly organized. It might be going a little far to think of them as analogous to multi-cellular organisms/solutions, but it seems to be an apt comparison.

Put another way, the ‘MacGyvers’ are modular. Gee. Guess what’s hot in evo-devo these days?

RBH

Comment #110234

Posted by steve s on July 5, 2006 7:54 PM (e)

Put another way, the ‘MacGyvers’ are modular. Gee. Guess what’s hot in evo-devo these days?

One of the sucky things about being an ID creationist would be missing cool connections like this.

Comment #110237

Posted by RBH on July 5, 2006 8:21 PM (e)

For more on modularity and cooption see PZ’s post on hormones today.

RBH

Comment #110238

Posted by steve_h on July 5, 2006 8:24 PM (e)

Thanks for the article, I found it quite fascinating, although I’m in two minds about certain aspects. I’m satisfied that you haven’t coded the solution in your fitness function but I foresee a line of attack from the bad guys.

The fact that this seemingly complex problem can be solved by simple soap films suggests it’s really not such a difficult problem, and creation/ID-ists could wave the problem away as trivial.

On the other hand, if you hadn’t told me that such a problem could be solved by soap, I’d have estimated the problem to be much more difficult to solve than it is. I’d have estimated the difficulty of arriving at a solution based upon my personal ignorance (as all of ID does) and come up with an I-don’t-understand-it factor of 10^makesomethingupandthendoubleit.

Comment #110239

Posted by Don Baccus on July 5, 2006 8:32 PM (e)

Well, one person over at Alan MacNeill’s class blog claims that Dave Thomas is encoding the target into the program by virtue of the fitness function.

I don’t know if Dave wants to waste his time engaging the hardcore ID members of Alan’s class or not, but thought I’d post the link …

Comment #110240

Posted by Sam Lewis on July 5, 2006 8:34 PM (e)

If you were really “front loading” wouldn’t it be obvious from just reading the code?

Comment #110241

Posted by steve s on July 5, 2006 8:40 PM (e)

You’d think that, wouldn’t you, Sam? But no. IDiots will say anything to excuse the results. They have even said that any simulation code is written by an intelligent programmer whose influence contaminates the results. It’s all hand-waving nonsense.

Comment #110242

Posted by Michael J on July 5, 2006 9:09 PM (e)

Fascinating, but I think that you can never win with the IDiots. While this is more sophisticated than the Weasel program, the IDiots will still say that the solution was preloaded by the fitness function, even though we know this objection proves too much, because you can say exactly the same about physical evolution.

Even if you had a very lifelike model (sharks and fish, etc.) they might grant you micro-evolution but still say that macro-evolution is impossible. I don’t think that you will get any concession from them until somebody can model the entire physical world starting from the big bang.

Michael

Comment #110251

Posted by steve s on July 5, 2006 10:07 PM (e)

Fortunately, you don’t have to win with the IDiots. Dave’s post is fascinating, and scientists go on studying the connections between these programs, and math, and evolution, and they go on publishing papers, and figuring things out, and the world goes on spinning, &c &c….

A million pouting engineers and lawyers populating a thousand Uncommonly Dense blogs can’t unpublish a paper, can’t undecode a genome, can’t unevolve the Steiner solutions.

Comment #110252

Posted by Caledonian on July 5, 2006 10:10 PM (e)

I recognize the sociopolitical reasons why choosing a simulation in which the fitness space optima weren’t explicitly defined, but really, does it make any difference? A fitness space is a fitness space, whether it was generated around a defined optimum or selected randomly. Sure, it’s nifty to be able to point out that the IDists objections are bunk, but they’re bunk for inherent logical reasons. There’s nothing wrong with defining a fitness space in terms of a particular solution, and that point really needs to be driven home.

Very nice work, by the way. Thank you for presenting it to us.

Comment #110254

Posted by Patrick on July 5, 2006 10:22 PM (e)

Wow, that is some really interesting stuff. Thanks for sharing! :) The discussions here in the comments are good too, especially the stuff on how it’s solving Steiner sub-problems.

Comment #110260

Posted by Caledonian on July 5, 2006 11:01 PM (e)

That first sentence should include “is desirable”. Drat my exhaustion-feebled brain.

I once wrote a computer program to determine the optimum angle at which three lines connecting three points should meet, and it was indeed 120 degrees. I never considered writing an evolutionary algorithm to solve the problem, though.

Hmm… that gives me an idea.

Comment #110267

Posted by Bob O'H on July 6, 2006 12:42 AM (e)

Thanks for the article. One question springs to mind, after reading Coin’s comments and staring at the doggie for too long: how did you pull the plates out? Is it done in such a way that the sub-graph is the first out of the soap, so that forms, and then the Steiner solution can’t be formed after that?

Aha! Because the sub-graph you pull out first is at a local optimum, the soap would have to go downhill along its fitness function to reach the Steiner solution: the fitness landscape is a bit rugged. But of course the GA can jump around the phenotype space, which is why it can find the solution.

I’m not a mathematician, so I’ll leave it to someone else to prove why the Face Plant doesn’t appear from the soap (or how to make it appear!).

Bob

Comment #110272

Posted by Marek 14 on July 6, 2006 2:27 AM (e)

A suggestion: how about running the simulation again, this time with a more complex set of points, where the Steiner solution is neither as symmetrical nor actually known in advance? In that case, front-loading the solution is not just a silly idea - it’s actually impossible.

Comment #110311

Posted by Emerson José Silveira da Costa on July 6, 2006 7:11 AM (e)

I think it would be more effective if someone came up with a (I know, much more complex) simulation where the organisms are made up of small “mechanical” (or otherwise “interactive”) “parts”. The genetic code would define the set of parts and its interconnections, and the “mechanical” organism thus assembled would try to survive in competition with others in a number of environments. Such a simulation would give rise to novelties and eventually irreducibly complex structures in a way that is much more close to the way we see biological structures: as tiny (chemical) machines. The results of such a simulation would be, I guess, harder for the ID folk to dismiss.

Comment #110313

Posted by Caledonian on July 6, 2006 7:32 AM (e)

I think it would be more effective if someone came up with a (I know, much more complex) simulation where the organisms are made up of small “mechanical” (or otherwise “interactive”) “parts”.

It’s been done. Search for an episode segment of Scientific American Frontiers called ‘Robot Independence’ from 12/19/2006. There’s a video online.

Comment #110320

Posted by Ichneumon on July 6, 2006 10:05 AM (e)

Well, one person over at Alan MacNeill’s class blog claims that Dave Thomas is encoding the target into the program by virtue of the fitness function.

The funny part about that argument is that using exactly the same “logic” (that the simple constraints of the fitness function implicitly encode any/all workable solutions), one can claim that the fitness function of natural selection implicitly encodes the existence of birds, whales, butterflies, and humans, too. Oops! I just love it when the ID arguments shoot themselves in the foot.

Comment #110322

Posted by Steve Watson on July 6, 2006 10:23 AM (e)

Pah, these guys (the IDists) are incorrigible. I dabbled in stochastic optimization (simulated annealing) over a decade ago, and even then SA, GA and other SO techniques were well-known as tools for solving problems both in pure math and the real world of engineering. There was plenty of literature on the subject, and none of it was about aiming at a pre-specified target. To go on yammering about “Weasel” is disingenuous, and to claim that the fitness function completely specifies the final result is classic goalpost-moving – does anyone really believe that the physical layout of a circuit is mysteriously “hidden” in the instruction “Make the interconnect as short as possible”? Even an engineer (see the Salem Hypothesis) should be able to see through that one!

These people disgust me.

Comment #110341

Posted by Scott Walters on July 6, 2006 12:07 PM (e)

ID debunking aside, this is a great tutorial on genetic algorithms themselves, and I really enjoyed it for that reason – absolutely fascinating. As far as the ID goop, it’s a cop out to say something can’t do something without some formalization or proof. Otherwise, it’s just O.J. and his glove… oh, look, he can’t put his hand in there, it must not fit.

Thanks again for a great article!
-scott

Comment #110345

Posted by steve s on July 6, 2006 12:35 PM (e)

Comment #110311

Posted by Emerson José Silveira da Costa on July 6, 2006 07:11 AM

The results of such a simulation would be, I guess, harder for the ID folk to dismiss.

Reworking your arguments until the creationists have no response is futile. They’d deny 2+2=4 if addition got in the way of jesus. These experiments are for the edification of non-fanatics.

Comment #110320

Posted by Ichneumon on July 6, 2006 10:05 AM

The funny part about that argument is that using exactly the same “logic” (that the simple constraints of the fitness function implicitly encode any/all workable solutions), one can claim that the fitness function of natural selection implicitly encodes the existence of birds, whales, butterflies, and humans, too. Oops! I just love it when the ID arguments shoot themselves in the foot.

I’m glad someone pointed that out. What the IDiot is saying is, ‘if there’s a fitness function, evolution can work.’

Creationism is one big, long retreat. From ‘evolution is impossible’, to ‘macroevolution is impossible’, to ‘macroevolution is only possible because it was front-loaded’…

Comment #110346

Posted by gordonsowner on July 6, 2006 12:53 PM (e)

So, I’ve seen that information theory gets called out by IDers now and then. They use it to claim that information cannot increase. So, in the Travelling Salesman problem, you can use the same algorithm for 2 points as well as 100. Doesn’t the 100-node solution have more information in it than the 2-node solution? If this is true, then there is no upper bound on the information provided by the GA (information increases as the number of nodes increases, both without bound), yet the information in the algorithm must be a constant. Isn’t that a simple illustration of why, in this problem, all of the information cannot be frontloaded? I haven’t done the grungy math, but could someone speak to that as a logical refutation (or a fallacious line of reasoning)?
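
One way to make that concrete: under the usual counting convention, singling out one closed tour among the (n-1)!/2 candidates takes log2((n-1)!/2) bits. A back-of-envelope sketch of that arithmetic (the convention, not any particular GA, is doing the work here):

    import math

    def tour_bits(n):
        # Bits needed to specify one closed tour of n cities:
        # log2((n-1)!) - 1, since each tour is counted in both directions.
        return math.lgamma(n) / math.log(2) - 1

    for n in (3, 10, 100):
        print(n, round(tour_bits(n), 1))
    # prints roughly 0.0, 17.5, and 517 - the information needed to name
    # the solution grows without bound, while the GA's source code stays
    # the same size.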

Comment #110348

Posted by GuyeFaux on July 6, 2006 1:38 PM (e)

Excellent article. How well does the process scale, i.e., to 6 or more fixed posts?

Re. so-called “front-loading” in the fitness function, e.g.

Ichneumon wrote:

The funny part about that argument is that using exactly the same “logic” (that the simple constraints of the fitness function implicitly encode any/all workable solutions), one can claim that the fitness function of natural selection implicitly encodes the existence of birds, whales, butterflies, and humans, too. Oops! I just love it when the ID arguments shoot themselves in the foot.

This is a pretty big shot in the foot. Yes, information is “sneaked in” through the fitness function. Who can disagree with that? The environment (i.e. the fitness function) determines the genome. I thought this was what ID-ists claim can’t happen.

I forget who said (Dawkins?) that a critter’s DNA contains information about how to survive in the various environments that occurred throughout its evolutionary history. It’s like every genome is a survival manual for all the times/places the creature has survived.

Comment #110394

Posted by W. Kevin Vicklund on July 6, 2006 6:15 PM (e)

Suggestion for future experiments with this model (some already suggested by others):

Increase the number of variable nodes

Increase the number of fixed nodes

Create niches - an example of a niche could be the number of active variable nodes. To do this, set the sub-population limit to 400 (assuming we stay with the 2000 organism limit) and run the selection algorithm on the sub-populations, rather than the whole population. This may permit the evolution of a Doggie. (A sketch of this idea appears after this list.)

Add infrequent catastrophic events (bottlenecking). Make those events random. Compare convergence times.

Add a fitness variable for angle of intersection at various nodes. This could preclude the evolution of the Faceplant. Play with the relative weighting of angle v. length. Remove the length fitness requirement.

Increase the number of genes.

Implement start codons and allow strings to be variable length. This would require a major overhaul, though.
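
The niche item above is easy to sketch, assuming each organism carries a fitness value and a count of active variable nodes; the Org record and the toy numbers are invented for illustration:

    from collections import defaultdict, namedtuple

    Org = namedtuple("Org", "fitness active_nodes")

    def select_within_niches(population, survivors_per_niche=400):
        # Group organisms by active-node count and keep the fittest of
        # each group, so a many-node lineage (a would-be Doggie) never
        # competes head-on with the leaner near-optimal organisms.
        niches = defaultdict(list)
        for org in population:
            niches[org.active_nodes].append(org)
        next_gen = []
        for members in niches.values():
            members.sort(key=lambda o: o.fitness)   # shorter length = fitter
            next_gen.extend(members[:survivors_per_niche])
        return next_gen

    pop = [Org(1600.0, 3), Org(1500.0, 3), Org(1900.0, 5)]
    print(select_within_niches(pop, survivors_per_niche=1))
    # keeps the best 3-node organism and also the lone 5-node one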

Comment #110400

Posted by RBH on July 6, 2006 6:43 PM (e)

GuyeFaux remarked

I forget who said (Dawkins?) that a critter’s DNA contains information about how to survive in the various environments that occurred throughout its evolutionary history. It’s like every genome is a survival manual for all the times/places the creature has survived.

One such is from Dawkins:

“The information that … experience packs away is information about ancestral environments and how to survive them.”

Unweaving the Rainbow

Someone – perhaps Dawkins, perhaps Sagan – referred to the genome as a palimpsest of ancestral selective environments.

RBH

Comment #110401

Posted by 'Rev Dr' Lenny Flank on July 6, 2006 7:08 PM (e)

I forget who said (Dawkins?) that a critter’s DNA contains information about how to survive in the various environments that occurred throughout its evolutionary history. It’s like every genome is a survival manual for all the times/places the creature has survived.

Indeed, there seem to be three distinct steps in the evolution of life: (1) DNA, which stores what a SPECIES has learned about survival. (2) memory, which stores what an INDIVIDUAL has learned about survival. And (3) culture, which allows an individual to learn what OTHER INDIVIDUALS have learned about survival.

I suspect the latter two steps are far more ancient than we prideful humans would like to believe….

Comment #110402

Posted by Dave Thomas on July 6, 2006 7:14 PM (e)

Kevin Vicklund’s suggestions are cool. I’m planning on upgrading the old program into a new C++ environment over the next few months, and may be able to incorporate some of those suggestions.

In the meantime, I’m not the only one of the 6 billion inhabitants of the planet to play around with a Genetic Algorithm for Steiner’s problem.

Here are some obscure references - I don’t have these, and they look pre-web, but might be found on Google Scholar or something.

[1] Bryant A. Julstrom, “A genetic algorithm for the Rectilinear Steiner problem”, Proc. of the 5th Intl. Conf. on Genetic Algorithms, Morgan-Kaufmann, 1993.

[2] Hesser, Männer & Stucky, “Optimization of Steiner trees using Genetic algorithms”, Proc. of the 3rd Intl. Conf. on Genetic Algorithms, Morgan-Kaufmann, 1989.

[3] Kapsalis, Rayward-Smith & Smith, “Solving the Graphical Steiner tree problem using Genetic Algorithms”, Journal of the Operational Research Society 44 (4), 1993.

This just scratches the surface - click here for more.

Cheers, Dave

Comment #110412

Posted by Bruce Thompson GQ on July 6, 2006 7:36 PM (e)

I’m not the only one of the 6 billion inhabitants of the planet to play around with a Genetic Algorithm for Steiner’s problem.

While I see a lot of papers solving Steiner problems using GAs, I don’t see papers testing GA solutions in such a clean, tidy way, especially one that could be translated into a high school science class.

Delta Pi Gamma (Scientia et Fermentum)

Comment #110430

Posted by Adam Lee on July 6, 2006 10:13 PM (e)

There are a lot more references to the scientific and commercial uses of genetic algorithms in this Talk.Origins article:

Genetic Algorithms and Evolutionary Computation

A great point to bring up is that many of the cited instances are ones where the researchers running the algorithm did not know the answer in advance - which is why they used a genetic algorithm in the first place: they were trying to find out what it was - thus rendering utterly implausible any creationist claim that the algorithms were “front-loaded” or otherwise modified to enhance their chances of success.

Comment #110458

Posted by fontor on July 7, 2006 12:21 AM (e)

What problem could creationists have with Weasel? So what if it has a ‘target’ end state?

Aren’t they the ones who always say “Evolution is false because it’s way too unlikely that things could have randomly evolved into this precise configuration that we have today“?

Creationism has the targets; not evolution. At least Dawkins took pains to explain that real evolution is atelic.

Comment #110465

Posted by Dilettante on July 7, 2006 2:07 AM (e)

Two statements in your thesis are:

1. Simply given the environment “shorter is better, connectivity critical,” a suite of digital organisms with solutions comparable to formal Steiner systems were evolved

2. In my algorithm, the Fitness Test is easy to apply: calculate the length of active segments. Shorter connected systems are more “fit.”

Query: Was there a purposeful agency which created the environment “shorter is better, connectivity critical”? Who set the fuzzy “fitness” criteria?

Comment #110471

Posted by Ben on July 7, 2006 3:15 AM (e)

Very interesting article. I think you do an excellent job of exposing the ‘front-loading’ lie for what it is. I think it’s also important that you *leave out* things like targeting 120-degree joins, as that *is* front-loading information.

One thing I think could improve your simulation is different mutation and combination operators. My impression is that the system is a bit too ‘jittery’, and that may also be why it doesn’t mimic soap films that closely. For example:

1) introduce new intermediate points *by copying existing ones*. This has analogs in biology, and should get rid of the unstable ‘planes crossing’ solutions that currently occur. You could mix this with the ‘recessive’ method you currently have.

2) have distance mutations work on a delta system (e.g. +/- 0..20), rather than jumping. (Again you could mix this with true ‘point’ mutations)

3) when cross-breeding, (sometimes) average, rather than just doing a cut.

None of these introduces new information; they just make the navigation through the search space smoother.

As a final thought, did you ever think about separating your info into chromosomes, i.e. positions get crossed, and connections/# intermediates as well?
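
Operators 2) and 3) might look something like this, assuming integer coordinates on a grid; the step size follows the +/- 0..20 suggestion above, and the 50-50 mix of averaging versus straight inheritance is invented:

    import random

    def delta_mutate(coord, max_step=20):
        # Nudge a coordinate by a small delta instead of jumping to a
        # completely new random value.
        return coord + random.randint(-max_step, max_step)

    def blend_crossover(parent_a, parent_b):
        # Sometimes average the parents' coordinates, sometimes inherit
        # one parent's value outright.
        return [(a + b) // 2 if random.random() < 0.5 else random.choice((a, b))
                for a, b in zip(parent_a, parent_b)]

    print(delta_mutate(350))
    print(blend_crossover([100, 200, 300], [140, 180, 260]))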

Comment #110493

Posted by DrFrank on July 7, 2006 6:37 AM (e)

I’ve done a lot of work with genetic algorithms and other population-based evolutionary techniques, and they generally are amongst the best optimisation methods across a wide variety of problems.

The main mistake that Dembski seems to make is the assumption that defining an objective function means that you know what the optimum is, and thus optimising it is pointless. In the case of Weasel, this is true, but then Weasel is simply a demonstration and no one in the global optimisation community would even classify it as a problem.

Someone should hand Dembski a 4x4 Rubik’s Cube at a talk and then, when he can’t immediately solve it, ask him why: the objective function’s been defined so where’s the problem, Billy?

If evolutionary optimisation methods (or any optimisation methods) didn’t yield new information about good solutions on a given problem then no one would use them. They don’t exist purely as a prop to evolutionary biology, you know.

Comment #110506

Posted by arensb on July 7, 2006 8:01 AM (e)

When the creationists accuse you of embedding the solution in the problem, they are really confusing the specification of the problem with its solution. That’s like confusing “Take me to the nearest hospital” with “Take me to County General Hospital at 10025 Main St.”

The clause “the nearest hospital” uniquely specifies a building, but actually finding it is an entirely different matter.

Comment #110512

Posted by RBH on July 7, 2006 11:25 AM (e)

arensb wrote

When the creationists accuse you of embedding the solution in the problem, they are really confusing the specification of the problem with its solution. That’s like confusing “Take me to the nearest hospital” with “Take me to County General Hospital at 10025 Main St.”

The clause “the nearest hospital” uniquely specifies a building, but actually finding it is an entirely different matter.

That’s a very nice distinction. One could phrase it slightly differently – “Take me to a nearby hospital” – and capture the “good enough” property of evolved systems. After all, in evolution one doesn’t need the nearest hospital, just one near enough to survive. :)

RBH

Comment #110517

Posted by Henry J on July 7, 2006 12:12 PM (e)

Re “The clause “the nearest hospital” uniquely specifies a building, “

Unless one happens to be exactly half-way between the nearest two. ;)

Henry

Comment #110520

Posted by steve s on July 7, 2006 12:30 PM (e)

This is just a reminder that there is a creationist troll going unanswered:

Comment #110465

Posted by Dilettante on July 7, 2006 02:07 AM (e)

Two statements in your thesis are:

1. Simply given the environment “shorter is better, connectivity critical,” a suite of digital organisms with solutions comparable to formal Steiner systems were evolved

2. In my algorithm, the Fitness Test is easy to apply: calculate the length of active segments. Shorter connected systems are more “fit.”

Query: Was there a purposeful agency which created the environment “shorter is better, connectivity critical”? Who set the fuzzy “fitness” criteria?

Considering that he’s setting up for the dumbest possible objection to an evolutionary simulation, my amusement has an interest in seeing him engaged.

Comment #110525

Posted by RBH on July 7, 2006 12:49 PM (e)

Dilettante asked

Query: Was there a purposeful agency which created the environment “shorter is better, connectivity critical”? Who set the fuzzy “fitness” criteria?

The experimenter did so, in setting up experimental conditions to study the behavior of the system. It’s sort of like an experimenter in a bacteriology lab setting out Petri dishes (themselves intelligently designed) and varying a nutrient medium to ascertain the effects of the variation on the growth of E. coli colonies. Gee. Experiments are intelligently designed. What a breakthrough!

And what was learned in this particular research? Why, that given a fitness function that selectively rewards (via differential probability of reproduction) shorter paths, an array of irreducibly complex outcomes evolves. What does that tell us? It tells us that good old random mutations and selection can produce astonishing results, results that give the lie to ID creationists’ blathering about the inability of evolution to produce novel complex structures - achieved here without specifying a target state, merely by manipulating local fitness calculations according to a simple figure of merit. And guess what ‘natural’ environments do, environments that vary in any number of physical dimensions. Why, they set implicit fitness criteria for survival and reproduction in subranges of those physical variables (known as “niches”) such that differential reproduction, as a function of relative adaptation to the niches, produces populations of novel ‘solutions’ to the demands of the environments. Gosh. Evolution works just like that research platform. Amazing. The difference is that in the one it’s easier to manipulate selective environments (fitness functions) to study the process.

Now, if Dilettante wants to argue that the selective environment in nature was itself designed, let him go argue with the physicists and chemists and geologists, and leave biologists the hell alone.

RBH

Comment #110553

Posted by Torville on July 7, 2006 2:52 PM (e)

Excellent, interesting article. I had no idea that the Travelling Salesman Problem would converge so quickly (as in, “don’t blink”).

I’m not sure that there is a significant difference between “targeted” and “non-targeted” algorithms, in that the code that is running the simulation doesn’t have access to how the fitness function derives its responses in either case. All it gets is “warmer, warmer, cold, so cold” feedback. It’s not like the targeted algorithm is passing hints to the code, or that the code can look over the wall and form a canny guess about what the fitness function wants.

Comment #110579

Posted by Emerson José Silveira da Costa on July 7, 2006 4:44 PM (e)

Caledonian wrote:

Emerson J. S. da Costa wrote:

I think it would be more effective if someone came up with a (I know, much more complex) simulation where the organisms are made up of small “mechanical” (or otherwise “interactive”) “parts”.

It’s been done. Search for an episode segment of Scientific American Frontiers called ‘Robot Independence’ from 12/19/2006. There’s a video online.

Thanks! I found it at http://www.pbs.org/saf/1103/segments/1103-3.htm

And the webpage of Karl Sims is here: http://www.genarts.com/karl/

Comment #110581

Posted by Scott on July 7, 2006 4:46 PM (e)

Absolutely fascinating stuff. To address any possible “front loading” claim, consider this. Organisms don’t evolve in isolation, or in a fixed environment. Would it be possible to somehow set up the experiment so that the fitness function *itself* evolved? That is, add another set of elements whose sole purpose was to evolve a fitness function for the existing elements <nodes, connections>, in response to some feedback behaviour of the existing elements. One possible fitness function might be “shortest length”. Another might be “longest length”. The “fitness function” organism could “eat” the Steiner organisms that weren’t “fit” enough. This might end up with some truly bizarre, yet stable systems!

Comment #110591

Posted by GuyeFaux on July 7, 2006 5:11 PM (e)

Scott wrote:

Would it be possible to somehow set up the experiment so that the fitness function *itself* evolved?

You speak of co-evolution. This has been done. The results were incredible, but of course this was lost on the creationists.

Comment #110601

Posted by W. Kevin Vicklund on July 7, 2006 6:14 PM (e)

Thanks, Dave.

Another suggestion, or rather a refinement of an earlier one. Rather than have the fitness function evaluate how much the angles deviate from 120 degrees, which frontloads for 3 lines from a variable node, evaluate how much the angles deviate from their average, which would permit any number of lines radiating from a variable node while still selecting for stability.
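
That refinement is a one-function change. A sketch, assuming the edges radiating from a variable node are available as (dx, dy) direction vectors; how heavily to weight this penalty against the length term is left open:

    import math

    def angle_penalty(vectors):
        # Sum of |gap - mean gap| over the angular gaps between adjacent
        # edges at a node: zero whenever the edges are evenly spread,
        # whatever their number - no 120-degree target is hard-coded.
        headings = sorted(math.atan2(dy, dx) for dx, dy in vectors)
        gaps = [b - a for a, b in zip(headings, headings[1:])]
        gaps.append(2 * math.pi - sum(gaps))    # wrap-around gap
        mean = 2 * math.pi / len(gaps)
        return sum(abs(g - mean) for g in gaps)

    # A perfect Steiner node (three edges 120 degrees apart) scores ~0:
    print(angle_penalty([(1.0, 0.0), (-0.5, math.sqrt(3)/2), (-0.5, -math.sqrt(3)/2)]))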

Comment #110610

Posted by Dave Thomas on July 7, 2006 6:57 PM (e)

There is finally reaction from the ID Blogosphere.

Enjoy!

Dave

Comment #110621

Posted by Coin on July 7, 2006 7:57 PM (e)

Darwiniana wrote:

…Here, at least, we get an admission that Dawkins’ weasel program was inadequate…

So the only part of the article “Darwiniana” read, apparently, was the part explaining that the weasel program is inadequate– in the process skipping even the part noting that the explanation of why the weasel program is inadequate is taken from the original Dawkins work which proposed the Weasel program.

Wow. That is a degree of selective vision impressive even for a creationist blog.

However, I’m not sure if this guy really counts as the “ID Blogosphere”, exactly, as the guy’s front page offers several boasts along the lines of:

BTW, this blog is unpopular both with Darwinists and the ID gang.

He doesn’t explain why he thinks the ID crowd doesn’t like him, but it apparently has something to do with the “eonic effect”.

Comment #110777

Posted by jeffw on July 8, 2006 5:05 PM (e)

Would it be possible to somehow set up the experiment so that the fitness function *itself* evolved?

In a very real sense, this is implicit in evolution itself. Since the fitness function describes the environment, and the environment is composed of competing members of the same (and other) species, and these competing members are also subject to mutation, crossover, selection, etc., the fitness function is itself also evolving dynamically in a darwinistic sense. This applies to environments where other life makes up a significant component of the fitness function, which today includes just about all life. Early in the earth’s history it may have been different. But at a certain point, life began to “design” life.

Comment #110812

Posted by Henry J on July 8, 2006 8:48 PM (e)

Re “But at a certain point, life began to “design” life.”

The “arms race” analogy.

Henry

Comment #110849

Posted by jeff on July 9, 2006 2:34 AM (e)

Re “But at a certain point, life began to “design” life.”

The “arms race” analogy.

Certainly one of the better examples. Could also be sexual selection, symbiosis, parasitism, or other more complex inter- or intra-species relationships.

Comment #110982

Posted by Roger Rabbitt on July 9, 2006 6:04 PM (e)

Besides being visually complex, Steiner Solutions are irreducibly complex - if any segment is removed or re-routed, connectivity between nodes can disappear completely.

I think your IC test fails for several reasons. Let me list just two.

First, the knockout test is a necessary but not sufficient characteristic to define IC.

Second, one can approach the Steiner solution in a step-by-step fashion, because the “basic function” being selected for is “shorter”. So, precursors can be selected for if there are other candidates that are “longer”.

Comment #110983

Posted by Andrew McClure on July 9, 2006 6:16 PM (e)

Mr. Rabbitt, could you please define exactly what it is you think “irreducibly complex” means? Please be specific.

Comment #110987

Posted by Roger Rabbitt on July 9, 2006 6:29 PM (e)

A single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning. (Darwin’s Black Box p39)

Comment #110988

Posted by Andrew McClure on July 9, 2006 6:42 PM (e)

A single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning. (Darwin’s Black Box p39)

Okay. Then in what way do the Steiner graphs not fully fit this definition you provide here? The graphs in this article provide systems of graph nodes and edges which interact to serve a purpose, that of connecting a certain set of graph nodes to one another. (They optimally serve this purpose when the sum of the edge lengths is minimized.) The components are well-matched, as each of the edges and each of the nodes are defined in the same way. In most of these graphs, there is a single edge or node which can be removed which causes the system to effectively cease functioning– that is, the system no longer serves its purpose of connecting nodes, because the edges which connect the nodes have been severed. In fact, in all of the “solution” graphs, the removal of any of the non-fixed nodes or edges in the system would cause the system to stop serving its purpose.

You offer two reasons why “the IC test fails”, but neither objection seems to actually have any substance. You say “the knockout test is… not a sufficient characteristic”, but don’t say what characteristic is missing. Your second objection seems to have no connection to the definition you provided from “Darwin’s Black Box” at all.

In what way do the Steiner graphs not qualify for the definition of “irreducible complexity” you have just provided?
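
The knockout half of Behe’s definition, at least, is mechanically checkable for these graphs. A sketch, assuming a graph given as an edge list over node ids; the helper names and the toy example are invented:

    def connected(targets, edges):
        # Walk the graph from one target node and check that all the
        # other targets are reached.
        if not targets:
            return True
        start = next(iter(targets))
        seen, frontier = {start}, [start]
        while frontier:
            here = frontier.pop()
            for a, b in edges:
                nxt = b if a == here else a if b == here else None
                if nxt is not None and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return targets <= seen

    def knockout_irreducible(fixed, edges):
        # True if removing any single segment disconnects the fixed nodes.
        return all(not connected(fixed, edges[:i] + edges[i + 1:])
                   for i in range(len(edges)))

    # Three fixed posts joined through one movable Steiner point:
    print(knockout_irreducible({"A", "B", "C"},
                               [("A", "M"), ("B", "M"), ("C", "M")]))  # True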

Comment #110989

Posted by Roger Rabbitt on July 9, 2006 6:43 PM (e)

I have placed the complete listing of the Genetic Algorithm that generated the numerous MacGyvers and the Steiner solution, at the NMSR site.

If you contend that this algorithm works only by sneaking in the answer (the Steiner shape) into the fitness test, please identify the precise code snippet where this frontloading is being performed.

Well, the complaint isn’t that it “sneaks in the answer”. But it does direct the “organisms” toward the answer. As you yourself point out:

For example, if I had specified “longer is better” in my Genetic Algorithm - or not considered length at all - I would have not ended up with compact structures. (I know - I did the experiments!)

Why doesn’t the “longer is better” fitness function work to find the Steiner’s solution?

Comment #110990

Posted by Andrew McClure on July 9, 2006 6:55 PM (e)

But it does direct the “organisms” toward the answer… Why doesn’t the “longer is better” fitness function work to find the Steiner’s solution?

“Shorter is better” is the definition of the Steiner’s solution.

What if instead of calling it the “Steiner’s solution”, we just pretend Steiner never existed, and refer to the solution as the graph which provides the optimal value for the fitness function “minimize total edge length, all fixed nodes must be connected”? How would this be different?

“Steiner’s solution” here is just a name that has been given to a certain problem, and this problem can be defined by the “shorter is better” fitness function. The fitness function “longer is better” defines a different problem, and this problem unsurprisingly has different solutions. What exactly is the issue here?
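
For the record, the criterion “minimize total edge length, all fixed nodes must be connected” really is only a few lines. This sketches the criterion itself, not Dave’s actual program; the coordinates and names are invented:

    import math

    def fitness(points, edges, fixed):
        # Lower is fitter: total length of active segments, with any
        # graph that fails to connect the fixed nodes scored as hopeless.
        component = {next(iter(fixed))}
        grew = True
        while grew:
            grew = False
            for a, b in edges:
                if (a in component) != (b in component):
                    component |= {a, b}
                    grew = True
        if not fixed <= component:
            return float("inf")
        return sum(math.dist(points[a], points[b]) for a, b in edges)

    pts = {"A": (0, 0), "B": (400, 0), "M": (200, 80)}
    print(fitness(pts, [("A", "M"), ("M", "B")], {"A", "B"}))

Note that nothing in those lines mentions any particular shape; the “answer” appears nowhere.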

Comment #110992

Posted by Roger Rabbitt on July 9, 2006 7:02 PM (e)

“Shorter is better” is the definition of the Steiner’s solution.

Exactly. So when Dave issues his challenge about showing how he is “sneaking in the answer”, that’s it. Look for “shorter with all permanent nodes connected”.

It’s not really sneaking. It’s done in broad daylight.

Comment #110993

Posted by Andrew McClure on July 9, 2006 7:08 PM (e)

Exactly. So when Dave issues his challenge about showing how he is “sneaking in the answer”, that’s it. Look for “shorter with all permanent nodes connected”.

But that is not an answer. That is a question.

If you are actually trying to say that a fitness function is all that is necessary for evolution to work, doesn’t this concede that evolution works everywhere in nature that fitness functions exist? After all, surely fitness functions arise naturally in nature. Have you actually thought about what you’re trying to say here?

Comment #110994

Posted by Roger Rabbitt on July 9, 2006 7:17 PM (e)

The components are well-matched, as each of the edges and each of the nodes are defined in the same way.

I’ll just pick a couple of the ways it fails. First, the Steiner solution is not a system that performs a function. And although I’m not saying that one couldn’t model IC with anything less, I think the modeler assumes responsibility for trying to make sure the mapping captures the essential features. Avida’s EQU program suffered from the same problem. To talk about “basic function” quickly leads the analogy to fall apart.

As to “well-matched”, that is absurd. The edges are represented by bit-flags, which have a near infinite number of applications. They aren’t well-matched to the specific issues in the program. Not to mention that having only two states, it is never far between what’s needed, and what is intolerable.

Can one imagine an “organism”, if allowed to live another generation, having an amazingly large “length” one generation, and hitting Steiner’s solution the next? Of course, if the “length” was due to a single edge flag.

Comment #110995

Posted by Roger Rabbitt on July 9, 2006 7:32 PM (e)

But that is not an answer. That is a question.

I’m sorry, I literally don’t understand what you are saying. Are you saying the definition is a question?

As to your broader point, it depends on what you mean by evolution. To use Dawkins’ definition, no. Even he admits that the WEASEL program and its fitness function do not demonstrate “evolution” as he uses the term, because the fitness function is not one that Natural Selection can use.

But maybe you disagree with Dawkins about WEASEL.

Comment #111010

Posted by Andrew McClure on July 9, 2006 8:51 PM (e)

I’m sorry, I literally don’t understand what you are saying. Are you saying the definition is a question?

I am saying that the definition of the Steiner graphs is a problem, not a solution. Defining a problem is hardly the same thing as providing a solution, just like defining a question is not the same thing as providing an answer.

First, the Steiner solution is not a system that performs a function.

Why not? What exactly do you consider a “function” to be? If “connect all the nodes in the graph in a minimal way” is not a “function”, then what is it? Do roads perform the “function” of connecting cities? Do soap bubble film configurations perform the “function” of minimizing surface area?

Is there any way a two-dimensional, static graph could “perform a function”, the way you are defining “function”?

It sounds to me like the way you are trying to demarcate what is and isn’t a “function” is more like a constraint designed to make it difficult to mathematically model IC systems, than a pragmatic constraint which exists for some good reason.

And although I’m not saying that one couldn’t model IC with anything less, I think the modeler assumes responsibility for trying to make sure the mapping captures the essential features. Avida’s EQU program suffered from the same problem.

“Captures the essential features” of what exactly? IC? If so, it looks to me like it captures the essential features of IC quite precisely.

To talk about “basic function” quickly leads the analogy to fall apart.

Perhaps this is because “function” is an unnecessarily subjective term? Because that certainly seems to be a big problem here to me. “Function” is something perceived by humans; it isn’t inherent to an undirected process such as evolution.

This is one of the big differences between normal science and things like Intelligent Design. Scientific disciplines, like evolutionary biology, can be defined in objective terms, in terms of things like allele frequency and regulatory genes and point mutations. Intelligent Design must resort to analogy and subjective, near-philosophical concepts like “function”, which means few substantive discussions can occur without quickly getting bogged down in word games. (Evolutionary biologists also sometimes talk about things like “function”, of course, but would be more likely to do so when doing analysis– you don’t require words like that in order to form the basic definitions of the science.)

As to “well-matched”, that is absurd. The edges are represented by bit-flags, which have a near infinite number of applications.

Oh, I see. I misunderstood; I thought “well-matched” meant that the parts are similar to one another. That is, they match each other well. Whereas you seem to be saying “well-matched” should mean the parts match the problem well.

Actually, looking around, I kind of think I had it right to begin with. “Well-matched” in the dictionary means “(of a couple) existing together harmoniously”; googling for “Well-matched behe” I find mostly sources that seem to be saying “well-matched” refers to how well the parts match one another, including one source which quotes Behe as saying “An irreducibly complex system is one that requires several closely matched parts in order to function and where removal of one of the components effectively causes the system to cease functioning”. Surely “closely matched” implies the match is between the parts, not between the parts and the problem at hand?

And how would it even make sense to say that the “well matching” criterion means that the parts match the problem? Who gets to decide what “matches” the problem? Surely if the parts solve the problem, that must mean they were well-matched to it, right?

Do you have anything that might make explicit exactly what “well-matched” means in Behe’s example?

Anyhow, I don’t see how the use of the bit flags is any less well “matched” to the Steiner graph problem than anything else would be. The bit flags are just an encoding– they’re a representation for switches that describe parts that are and aren’t present. There’s lots of other things that ones and zeros could encode, but they aren’t important here.

They aren’t well-matched to the specific issues in the program. Not to mention that having only two states, it is never far between what’s needed, and what is intolerable.

Oh, but that’s quite wrong. A bit string is an extremely logical, natural, and “well-matched” way of representing the edges in this system. CS people will with extreme frequency represent a graph in terms of which edges are and aren’t present. They wouldn’t be terribly likely to do this with a bit string in specific, because those don’t scale very well. But a more common method of representing the graph would be with an edge list (this works well when, as is often the case, there are not many more edges than there are nodes), which is functionally quite similar to the bitmask– it contains one entry for each 1 in the bitstring, and the 0s are assumed to be wherever the 1s aren’t. The point is, the natural way of encoding a graph relationship is by somehow enumerating the presence or absence of each potential edge. When the number of nodes is both fixed and small, a bit string is as good a way to do that as any. (Mr. Thomas’ simulation could of course be easily modified to use edge lists, but doing so would make the example needlessly complicated.)

Meanwhile, yes, the presence or absence of one individual part (one edge, which is another way of saying one line) is represented by only two states, presence or absence. But if you look at the state of the system, the matter of which parts are and aren’t present comprise two to the power of thirty-six (that’s a very large number) different states.

Can one imagine an “organism”, if allowed to live another generation, having an amazingly large “length” one generation, and hitting Steiner’s solution the next? Of course, if the “length” was due to a single edge flag.

It would be possible. I am not exactly sure it would be the most probable thing, though.

Anyhow, the fact that a single bit change can sometimes cause the entire system to stop or start functioning can hardly be considered a problem when we are asking whether the system is irreducibly complex. After all, the potential for a single removal to cause the system to stop functioning is, for all practical purposes, a requirement of irreducible complexity in the first place.
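
The equivalence of the two encodings is easy to exhibit for a toy four-node graph (six potential edges); the particular bitstring below is arbitrary:

    from itertools import combinations

    nodes = ["A", "B", "C", "D"]
    potential = list(combinations(nodes, 2))   # all 6 potential edges
    bitstring = [1, 0, 1, 0, 0, 1]             # one flag per potential edge

    # bitmask -> edge list: keep each edge whose flag is set...
    edge_list = [e for e, bit in zip(potential, bitstring) if bit]
    print(edge_list)    # [('A', 'B'), ('A', 'D'), ('C', 'D')]

    # ...and back again, with nothing lost in either direction.
    assert [1 if e in edge_list else 0 for e in potential] == bitstring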

Comment #111094

Posted by Roger Rabbitt on July 10, 2006 8:22 AM (e)

I am saying that the definition of the Steiner graphs is a problem, not a solution. Defining a problem is hardly the same thing as providing a solution, just like defining a question is not the same thing as providing an answer.

That’s one on which maybe we’ll have to agree to disagree. All those definitional issues (problem, solution, question, answer, definition) can lead to a lot of posts back and forth that will illuminate nothing useful. From my perspective, the definition of SS, partially encoded in the FF, makes the exercise relatively trivial and not one that produces any significant information, nor does it demonstrate that a blind watchmaker can easily reproduce such results unless it is properly “guided”. I understand that you may think otherwise.

Is there any way a two-dimensional, static graph could “perform a function”, the way you are defining “function”?

It sounds to me like the way you are trying to demarcate what is and isn’t a “function” is more like a constraint designed to make it difficult to mathematically model IC systems, than a pragmatic constraint which exists for some good reason.

To answer your first question, I don’t know. But even if not, that isn’t necessarily fatal. One might be able to analogize in a manner that satisfies the relevant criteria. But I think we differ on who carries the onus. Dave claims IC for his program. It would seem to me he, along with anybody else defending his claim, bears the burden to demonstrate that is the case. Again, you are free to disagree.

As to your second point, I think it reveals an unhealthy suspicion. IC came about from Behe’s observation of certain biological systems. I think “observation” is a good reason for the characteristics Behe describes. As I said, I don’t know whether this is an absolute “constraint”, but merely leave it to those who claim to satisfy IC to make the non-trivial mapping.

Perhaps this is because “function” is an unnecessarily subjective term? Because that certainly seems to be a big problem here to me. “Function” is something perceived by humans; it isn’t inherent to an undirected process such as evolution.

I’ll just say that I, along with many Darwinists, would disagree with that last statement. The function is indeed something that NS can act on. For example, a pair of mammals, one with a functioning circulatory system and one with a non-functioning circulatory system, can be differentially selected for by NS.

Oh, I see. I misunderstood; I thought “well-matched” meant that the parts are similar to one another. That is, they match each other well. Whereas you seem to be saying “well-matched” should mean the parts match the problem well.

“Well-matched, interacting parts” means parts that are well-suited to the function and to the other parts. It doesn’t mean they are similar, although it doesn’t absolutely rule out that some may be similar.

As for attempting to look up definitions or google a word or phrase to resolve the issue of what Behe meant by this phrase, it seems a tad bizarre, but maybe quite common in internet discussions. As to your inquiry of “Do you have anything that might make explicit exactly what “well-matched” means in Behe’s example?”, you might try reading his book Darwin’s Black Box, especially the sections dealing with his famous examples of the mousetrap and the bacterial flagellum. I’m sorry if I assumed incorrectly that you would have been familiar with those writings.

Oh, but that’s quite wrong. A bit string is an extremely logical, natural, and “well-matched” way of representing the edges in this system.

Again, I think that’s one on which we’ll have to disagree. But note that even if I accept that as true for the sake of argument that the bit-string represents “well-matched interacting parts”, the bit-string didn’t come about as a result of the GA. It was created by the intelligent agent we call a programmer as part of the GA itself. The only thing that “evolved” was the values, which have odds of 50-50 (and that’s BEFORE considering the selective pressure). If that models in any practical manner the real biological world, then get out of the way buddy, cause I’m gonna climb on the Darwinian bandwagon.

Comment #111151

Posted by Salvador T. Cordova on July 10, 2006 10:42 AM (e)

This offering by Dave Thomas can be described by a fictional story:

One kid goes up to another with a paintball gun, shoots him, and says,

“Don’t get mad, I wasn’t aiming at you, I was aiming at the shirt you were wearing.”

I can have a computer add 1000 numbers. The answer is the target. Did I specify the answer by the way I designed and framed the solution strategy????? If the answer is yes, then by way of extension Dave’s example is an example of design.

What Dave might have done is describe the space of successful strategies versus non-successful ones; that would have been a better metric of feasibility. Even an estimate would have been better than nothing, because that would have told us how narrow the true target space (the space of successful search strategies) was. Did he do that? Heck no. A cursory look, however, suggests that the ratio of successful to unsuccessful strategies is probably beyond the UPB.

He gave 3 strategies that would work, but how many will not work? He thus assumes at the outset he’ll model a strategy that will work. Did he mentally select that out of a vast pool of non-working strategies? In effect he did so….

Further, Dave’s illustration asserts a tautology: “if it is possible to solve something via evolution, it is possible to solve it via evolution”. It does not actually address whether it is feasible in biological reality, or what the likelihood is for a successful search strategy to exist in the first place. Using his mind, he merely takes one he suspects or knows will work and uses it…

The PT consumers walk away with the impression such a thing must be true in biological reality, and the more fundamental and important question gets glossed over by mathematical and computational theatrics.

Salvador

Comment #111156

Posted by steve s on July 10, 2006 10:51 AM (e)

Salvador, before you gang up on us, why don’t you Uncommon Descent guys get your story straight.

Here’s you:

Comment #9119

Posted by Salvador T. Cordova on October 26, 2004 12:32 PM (e)

6,000 years is a Young Earth Creationist (YEC) account of the time involved. It is very young compared to present geological models.

However, even if one can not accept that the Earth is that young on philosophical grounds, one is confronted with some purely empirical challenges to Old Earth.

With the rates of erosion as they are, we would expect the continents to have been recycled every several million years. Kind of trumps all the paleontological data.

Also there is an upheaval in the geological community which the creationists foresaw:

www.mantleplumes.org

Many an Old Earth theory is about to go down, and that website just warms my heart. To add insult to injury, the YECs of Loma Linda/GRI made the cover of the peer reviewed journal Geology in February 2004. :-)

We are finding unracemized and even viable bio-polymers in supposedly 90 to 500 million year old fossils. That does not bode well for trustworthiness of our dating methods.

Further, if the speed of light has decayed (and there is evidential and theoretical support for this), radiometric dates will indicate the Earth is Young.

The Big Bang is as much a POOF scenario as is creationism. Therefore empirically speaking it’s a matter of which POOF is more consistent with the observed data. Young Earth has the edge however because celestial dynamics is not consistent with Solar System Evolution but with instantaneous formation. Essentially, a ready made solar system.

So yes, under the hypothesis of a Young Earth, let us celebrate.

Salvador

http://www.pandasthumb.org/pt-archives/000573.ht…

(emphasis added by me)

and here’s your buddy Davescot:

The point is that no one can *imagine* a universe where any kind of life could exist with different physical constants.

http://www.uncommondescent.com/index.php/archive…

Which is it?

Comment #111187

Posted by Dave Thomas on July 10, 2006 12:08 PM (e)

Comment #110987
Posted by Roger Rabbitt on July 9, 2006 06:29 PM (e)

A single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning. (Darwin’s Black Box p39)

Roger, simply identify which one of the Steiner solution’s seven (7) segments is not essential to its basic function. Which segment, upon being removed, does not cause the system to effectively cease functioning?

If you can not identify such a segment, then the Steiner solution is irreducibly complex according to Behe’s definition.

If you can identify such a segment … oh, sorry, that can’t be done.

Re Salvador,

It does not actually address whether it is feasible in biological reality.

Um, the purpose of this post was to discuss creationist claims that evolution, in light of physics and mathematics, is inherently impossible. Rather than engaging some real math (as was presented here), I suggest you slink back to Dembski/Behe et al., read up on “irreducible complexity” and “complex specified information” again, and perhaps re-read “No Free Lunch.” Again.

Remember, the whole point of Behe’s “irreducible complexity” was to point out that there would be no way for natural selection to choose any possible precursors to the system, because any such precursor would necessarily be unviable - “the system would effectively cease functioning.”

This is a mathematical claim as much as it is a biological claim. As a physicist/mathematician, I explored the issue from that perspective.

Summary: ID folks say evolution is impossible for purely physical and mathematical reasons. Dave Thomas posts a lengthy essay explaining why this is wrong. Salvador complains Thomas’s approach doesn’t apply to Biology.

Moving goalposts, anyone?

Andrew, you at least get a nice fresh carrot for trying to explain things to Mr. Rabbit.

Cheers, Dave

Comment #111210

Posted by Dilettante on July 10, 2006 1:02 PM (e)

RBH answered me on July 7, 2006 12:49 PM:

“Why, they set implicit fitness criteria for survival and reproduction in subranges of those physical variables (known as “niches”) such that differential reproduction as a function of relative adaptation to the niches produces populations of novel ‘solutions’ to the demands of the environments”

So, the “environments” set up “demands”. They set up implicit criteria for survival. Does it not mean they have goals? The method of doing it may be mutations, random selections, elimination and favouring of certain characteristics. In other words, evolution. But that is just a process, a trial-and-error method. Is there not a purpose behind evolution itself?

I think G.B. Shaw writes about the emergence of organization from chaos in one of his plays - the evolution of the eye for the purpose of seeing, the ear for the purpose of hearing, and so on. Sometimes outsiders have more imagination than insiders.

Comment #111212

Posted by Roger Rabbitt on July 10, 2006 1:08 PM (e)

Dave Thomas says:

Roger, simply identify which one of the Steiner solution’s seven (7) segments is not essential to its basic function. Which segment, upon being removed, does not cause the system to effectively cease functioning?

You’ve ignored my previous postings. I’m depending on you to explain to me what the “single system”, “basic function” and “functioning” are in your example. Since it is your claim that this example models IC, you should be able to tell me how those concepts are represented in your model.

If you can not identify such a segment, then the Steiner solution is irreducibly complex according to Behe’s definition.

Again putting aside your mistaken assumption that the knockout test is all there is to IC, I can’t begin to identify the problems in your model without knowing what you perceive the mappings to be. I certainly could offer some off the top of my head, but those lead me to conclude that it doesn’t meet the definition of IC. Rather than me offering that, and you accusing me of being deliberately disingenuous in my mappings, it makes more sense logically, and in the interest of keeping the dialogue somewhat productive, for you to explain those various concepts and how they map to your model.

Comment #111213

Posted by secondclass on July 10, 2006 1:13 PM (e)

Salvador wrote:

A cursory look, however, suggests that the ratio of successful strategies to unsuccessful strategies is probably beyond the UPB.

Salvador, you’re conflating solution probability with solution/problem space ratio. Unless a solution is selected from a uniform distribution, the two numbers are not the same.

The number of successful strategies is infinite, as is the problem space. As both counts grow without bound, their ratio approaches zero, since the problem space grows far faster than the solution space. So if Dave had selected his strategy from a uniform distribution, the probability of selecting a successful strategy would be effectively zero.
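(To picture the limit, purely as an illustration with made-up growth rates: if at some resolution $n$ the number of successful strategies grows like $a^n$ while the number of all strategies grows like $b^n$, with $b > a > 1$, then

\[
\frac{a^{n}}{b^{n}} = \left(\frac{a}{b}\right)^{n} \to 0 \quad \text{as } n \to \infty,
\]

even though the count of successful strategies, $a^n$, itself goes to infinity.)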

But he didn’t select his strategy from a uniform distribution. He was biased toward dumb strategies with both deterministic and random elements, since that is what we find in nature.

Your quarrel, then, is with nature. You want to know why natural laws are simple and predictable, yet partially random. Maybe those laws are designed, or maybe we’re part of a large multiverse, or maybe the explanation is something that we haven’t thought of. Whatever the explanation is, it will also require an explanation, and following that infinite explanatory regress takes us out of the purview of science.

Comment #111225

Posted by Dave Thomas on July 10, 2006 1:49 PM (e)

Roger Rabbitt said

Dave Thomas says:

Roger, simply identify which one of the Steiner solution’s seven (7) segments is not essential to its basic function. Which segment, upon being removed, does not cause the system to effectively cease functioning?

You’ve ignored my previous postings. I’m depending on you to explain to me what the “single system”, “basic function” and “functioning” are in your example. Since it is your claim that this example models IC, you should be able to tell me how those concepts are represented in your model.

Behe said

A single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning. (Darwin’s Black Box p39)

The Single System in this case is a collection of line segments, which serve to connect given fixed points (nodes) to each other, as well as to additional variable points.

The Basic Function of the system is to connect the given fixed points with line segments.

A Functioning System is one in which every fixed point is connected to every other fixed point.

The Parts of the system are the line segments which join points (nodes), and the points/nodes themselves.

According to Behe’s own definition, the Steiner Solution in this case is irreducibly complex.
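(To make the mapping concrete, here is a minimal sketch of the “functioning system” test in Python - illustrative names and representation only, not the GA’s actual code:)

    # A "system" is a list of segments joining points; it "functions"
    # when every fixed point can reach every other fixed point.
    def functions(fixed_points, segments):
        """True if the segments connect all the fixed points together."""
        adj = {}
        for a, b in segments:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
        if not fixed_points:
            return True
        seen, stack = {fixed_points[0]}, [fixed_points[0]]
        while stack:                          # simple depth-first search
            p = stack.pop()
            for q in adj.get(p, ()):
                if q not in seen:
                    seen.add(q)
                    stack.append(q)
        return all(p in seen for p in fixed_points)

    square = ['A', 'B', 'C', 'D']
    print(functions(square, [('A','B'), ('B','C'), ('C','D')]))  # True
    print(functions(square, [('A','B'), ('C','D')]))             # False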

Was that so hard?

Dave

Comment #111237

Posted by GuyeFaux on July 10, 2006 2:22 PM (e)

Was that so hard?

Let me just point out that arguments over Behe’s semantics will not be fruitful. Behe has been afforded ample opportunity to clarify the various important terms in this definition and has mostly declined to do so.

Dave’s mapping is as clear as anything Behe has said.

Comment #111262

Posted by Roger Rabbitt on July 10, 2006 3:31 PM (e)

According to Behe’s own definition, the Steiner Solution in this case is irreducibly complex.

Was that so hard?

I don’t know. You tell me. But let me help you with whether it meets the definition of IC.

Let me first say that your mappings are what I will call “definitional”, and that such mappings are going to be problematic for your position, but I will proceed with what you gave me.

The most obvious problem is that one doesn’t need the Steiner solution to meet your definition for the “basic function”. Indeed, two of the three initially generated random organisms meet that definition, yet are not Steiner solutions. And yet they are subject to further selection that can possibly bring about a Steiner solution. Hence, according to your mappings, the Steiner solution isn’t IC.

That’s a pretty major flaw. In addition, the one initial example that doesn’t meet your definition of “basic function” only needs a single point mutation to enable the “basic function” (and has several positions at which that mutation can occur). So, that is a pretty clear direct Darwinian pathway to the “IC basic function”. According to “A Classification of Possible Routes of Darwinian Evolution” by Richard H. Thornhill and David W. Ussery:

2.1. SERIAL DIRECT DARWINIAN EVOLUTION

This means change along a single axis. Although it can generate complicated structures, it cannot generate irreducibly complex structures.

These fellows are neither IDers, nor apologists for such, so I might stand aside while you hash this out with them.

Is there a need to proceed, or do you wish to modify your modeling at this time?

Comment #111266

Posted by Dilettante on July 10, 2006 3:38 PM (e)

RBH on July 7, 2006 12:49 PM

“The experimenter did so in setting up experimental conditions to study the behavior of the system. It’s sort of like an experimenter in a bacteriology lab setting out Petri dishes (themselves intelligently designed) and varying a nutrient medium to ascertain the effects of the variation on the growth of E. coli colonies.”

1. The experiment is intelligently designed
2. Equipment (Petri dish and nutrient media) are designed/selected/added.
3. The nutrient media are varied to ascertain the effects of variation on the growth of E. coli colonies.

Suppose Nature is the experimenter?

1. Environments are set up for the creation and sustenance of life forms
2. Sources of life are created (amino acids, oxygen etc)
3. Single-celled organisms evolve.
4. They invent cell division. Multi-celled organisms evolve.
5. Various forms of life evolve and fit various niches
6. They influence each other and their habitats
7. Nature varies the environmental variables to favour certain characteristics and discourage other characteristics.
8. Macgyver and Steiner solutions emerge.

Such a system is meta-intelligent, since it creates environments where intelligent adaptation occurs.

Far-fetched?

A dog hears a whistle. It then smells food. Runs towards the plate with the food and eats it. It does not see anyone providing the food.

It hears a drum. Smells food again. Runs towards it, touches it. Gets a shock. After some trial and error, it knows when to eat, when to avoid. It never sees the creator of these criteria.

Does the invisible food provider have something in mind?

Comment #111270

Posted by Matt Peterson on July 10, 2006 3:44 PM (e)

The most obvious problem is that one doesn’t need the Steiner solution to meet your definition for the “basic function”. Indeed, two of the three initially generated random organisms meet that definition, yet are not Steiner solutions. And yet they are subject to further selection that can possibly bring about a Steiner solution. Hence, according to your mappings, the Steiner solution isn’t IC.

The Steiner is one of several possible solutions to the problem, many of which are IC. The fact that multiple solutions are possible doesn’t disqualify any of them as being IC.

Comment #111279

Posted by Dave Thomas on July 10, 2006 4:09 PM (e)

Mr. Rabbitt wrote

The most obvious problem is that one doesn’t need the Steiner solution to meet your definition for the “basic function”. Indeed, two of the three initially generated random organisms meet that definition, yet are not Steiner solutions. And yet they are subject to further selection that can possibly bring about a Steiner solution. Hence, according to your mappings, the Steiner solution isn’t IC.

Ahem. You are confusing the randomly-generated members of the initial population, which may or may not be irreducibly complex, with the actual answer to the posed math problem - Steiner’s solution - which clearly is irreducibly complex. (As are the other ‘MacGyver’ solutions.)

Look again at the section with the pictures of wood-and-bolts physical models, and read again about how I knocked off un-needed segments to try to reduce the length in that example. Clearly, the longish “monster” was not irreducibly complex, or I could not have made it shorter and yet still “viable.”

To convince me that the Steiner Solution itself is not irreducibly complex, please identify the segment of that solution which, upon being removed, does not cause the system to effectively cease functioning. What could be simpler??

Now regarding your claims of the work in “A Classification of Possible Routes of Darwinian Evolution” by Richard H. Thornhill and David W. Ussery, I noticed you didn’t supply a LINK.

Is that because you wanted people to think that

2.1. SERIAL DIRECT DARWINIAN EVOLUTION

is the only mechanism that Thornhill and Ussery discussed?

Mr. Rabbitt, why didn’t you also mention

2.3. ELIMINATION OF FUNCTIONAL REDUNDANCY … Redundancy elimination can generate irreducibly complex structures of functionally indivisible components, and a Darwinian evolutionary route of this type has been suggested for biochemical cascades, such as the blood-clotting system (Robison, 1996).

That mechanism is much more related to the present work than the ‘serial direct’ mechanism.

Did you see where the authors said it can produce irreducible complexity, unlike the ‘serial direct’ mechanism?

These fellows are neither IDers, nor apologists for such, so I might stand aside while you hash this out with them.

Having actually read their interesting paper, I have no quarrel with the authors.

Is there a need to proceed, or do you wish to modify your modeling at this time?

I see no need to revise my model in light of your comments. I recommend, however, that you consider taking a remedial reading course, or, at least, a course on how to quote out of context in such a way that you don’t look totally foolish upon inspection of the original source.

Dave

Comment #111297

Posted by AC on July 10, 2006 5:21 PM (e)

Dilettante wrote:

[I]s there not a purpose behind evolution itself?

There is no evidence of one. Do you have such evidence?

1. Environments are set up for the creation and sustenance of life forms…
Does the invisible food provider have something in mind?

Do you have evidence that the universe (or some subset of it) is an environment set up for the creation and sustenance of life forms, much less that this was done by a cosmic “invisible food provider”?

Comment #111300

Posted by Roger Rabbitt on July 10, 2006 5:30 PM (e)

Ahem. You are confusing the randomly-generated members of the initial population, which may or may not be irreducibly complex, with the actual answer to the posed math problem - Steiner’s solution - which clearly is irreducibly complex. (As are the other ‘MacGyver’ solutions.)

I hope you are sitting down. I disagree about who is confused. For example:

Look again at the section with the pictures of wood-and-bolts physical models, and read again about how I knocked off un-needed segments to try to reduce the length in that example. Clearly, the longish “monster” was not irreducibly complex, or I could not have made it shorter and yet still “viable.”

I neither mentioned the “monster”, nor asserted that any of the organisms was IC. I pointed out that numerous non-SS organisms performed the “basic function” as defined by you. Back to Behe:

By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning. An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional.

But of course, many of your precursors are indeed “functional”, hence the SS is not IC based on how YOU defined the basic function.

To convince me that the Steiner Solution itself is not irreducibly complex, please identify the segment of that solution which, upon being removed, does not cause the system to effectively cease functioning. What could be simpler??

Simple, but not useful. As I’ve said before, the knockout test is a necessary but NOT sufficient condition to determine IC. Many non-IC systems can have their functions cease with a knockout. That doesn’t make them IC.

Comment #111302

Posted by Matt Peterson on July 10, 2006 5:45 PM (e)

But of course, many of your precursors are indeed “functional”, hence the SS is not IC based on how YOU defined the basic function.

I don’t follow. Are you saying that the SS is not IC because it had precursors that were not IC? Isn’t the “ICness” of a structure determined independently of whatever forms may have existed before?

Comment #111305

Posted by Roger Rabbitt on July 10, 2006 5:58 PM (e)

I don’t follow. Are you saying that the SS is not IC because it had precursors that were not IC?

No, because it had precursors that exhibited the “basic function”.

Isn’t the “ICness” of a structure determined independently of whatever forms may have existed before?

Not at all. The essence of IC is that it can’t be reduced to a functional Darwinian precursor. If it can, it isn’t IC, and can be reached by a direct Darwinian pathway.

Comment #111309

Posted by Dave Thomas on July 10, 2006 6:07 PM (e)

Roger Rabbitt wrote

Simple, but not useful. As I’ve said before, the knockout test is a necessary but NOT sufficient condition to determine IC. Many non-IC systems can have their functions cease with a knockout. That doesn’t make them IC.

Um, what Matt said.

Also, look once again at Behe’s Definition of Irreducible Complexity (IC):

By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning. An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional.

For the longish 2874-unit assembly, this is clearly not IC, because several different segments can be “knocked out” with no loss of function.

For the Steiner 1212-unit assembly, this clearly IS IC, because the removal of any one of the parts causes the system to effectively cease functioning.

To convince me that the Steiner Solution itself is not irreducibly complex, please identify the ONE SEGMENT of that solution which, upon being removed, does not cause the system to effectively cease functioning. What could be simpler?? (Hint - there are only seven to choose from.)
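(In code, the knockout test is mechanical. A sketch, reusing the illustrative functions() helper from my earlier comment - again, not the GA’s actual routines:)

    # List the segments whose removal leaves the system functioning.
    # An empty list is exactly Behe's knockout criterion for IC.
    def removable_segments(fixed_points, segments):
        return [s for s in segments
                if functions(fixed_points, [t for t in segments if t != s])]

Per the discussion above, this should come back empty for the 7-segment Steiner solution, and non-empty for the 2874-unit monster.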

Dave

Comment #111314

Posted by Roger Rabbitt on July 10, 2006 6:19 PM (e)

A little reality check before I wrap it up for the day. Let us take Dave at his word that this is a realistic model that can demonstrate something significant about the natural world. I’m less impressed by the fact that he upended IC than by the fact that he upended Darwinian evolution. Look at the organisms randomly generated to start the exercise, before any selection, that apparently already perform the “basic function” of an IC system, with functional redundancy to boot. If nature can do that in such numbers before selection, why do we need selection?

Of course, the problem is that he has abstracted away all the details and difficult steps, and reduced complex systems whose solution space exceeds the UPB by more orders of magnitude than one can imagine to a search for 10 bit flags and a two-bit number.

If it is all that easy, we don’t even need Darwinian evolution.

Comment #111322

Posted by Dave Thomas on July 10, 2006 6:52 PM (e)

Well, Mr. Rabbitt, I think you have helped reveal the real problem with Behe’s original definition of Irreducible Complexity:

… any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional.

What Behe is missing, and what you are missing, Mr. Rabbitt, is that Behe is defining “precursors” to IC systems as necessarily being IC AND non-functional themselves.

As Niall Shanks and Karl Joplin have noted

… Behe’s central claim is that the fact of design (regardless of how it was effected) can be empirically detected in observable features of physical systems. Such features cannot be explained, he contends, on the basis of mindless natural processes. In our earlier essays on Behe’s ideas, we introduced the idea of biochemical redundant complexity. A redundantly complex biochemical system is one that contains redundant subsystems — subsystems that can be removed without complete loss of function achieved by the system as a whole. Behe has conceded the existence of redundant complexity (see his “Self-organization and irreducible complexity: A reply to Shanks and Joplin,” Philosophy of Science 2000; 67: 155–62).

The admission is crucial. Reduce the redundancy in a redundantly complex system to the point where the further removal of a subsystem causes the system as a whole to lose function completely, and a redundantly complex system has evolved into an irreducibly complex system. Irreducibly complex systems are thus limiting cases of redundantly complex systems. Mutations resulting in gene duplication can give rise to redundancy. Mutations transforming functional genes into pseudogenes can reduce redundancy to the point where a system once manifesting redundant complexity is now irreducibly complex.
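(The Shanks/Joplin mechanism is easy to make concrete. A sketch, once more using the illustrative functions() helper: greedily delete segments while the system keeps functioning. Every intermediate stage is functional, yet the end product passes the knockout test.)

    # Redundancy elimination: prune segments one at a time, keeping
    # function intact. The survivor is IC in Behe's stand-alone sense,
    # yet it was reached through a chain of functional precursors.
    def prune_redundancy(fixed_points, segments):
        current = list(segments)
        changed = True
        while changed:
            changed = False
            for s in list(current):
                rest = [t for t in current if t != s]
                if functions(fixed_points, rest):
                    current = rest        # s was redundant; drop it
                    changed = True
                    break
        return current                    # passes the knockout test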

Dave

Comment #111328

Posted by Darth Robo on July 10, 2006 7:42 PM (e)

Sal amusingly wrote:

“The Big Bang is as much a POOF scenario as is creationism. Therefore empirically speaking it’s a matter of which POOF is more consistent with the observed data. Young Earth has the edge however because celestial dynamics is not consistent with Solar System Evolution but with instantaneous formation. Essentially, a ready made solar system.”

Wow! “Young Earth has the edge”? “celestial dynamics is not consistent with Solar System”? I must tell all the astronomers how wrong they are!

Actually, I just noticed the date of the post. I guess I’m a bit late. :(

But it did show me the archives may be good for keeping myself amused anytime I get bored.

Dilettante

“Suppose Nature is the experimenter?”

You seem to imply that you wouldn’t have a problem with evolution as long as there is an intelligence behind it. But since there is no evidence whatsoever, at any time, of the designer doing his ‘poofing’, at which points is the designer doing this, exactly, and what processes is he/she/it using to do them? Maybe then you could tell us why evolution couldn’t happen naturally.

(And OT a moment, why doesn’t the spell check work for me no more? :( )

Comment #111356

Posted by Anton Mates on July 10, 2006 10:32 PM (e)

Roger Rabbitt wrote:

Isn’t the “ICness” of a structure determined independently of whatever forms may have existed before?

Not at all. The essence of IC is that it can’t be reduced to a functional Darwinian precursor. If it can, it isn’t IC, and can be reached by a direct Darwinian pathway.

I think–at least, for Behe’s sake, I hope–you’re misreading his definition. So far as I can tell, his definition of IC is

By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning.

Whereas the subsequent

An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional.

is a claim about IC, not part of its definition. And it’s a claim that Dave’s experiment (along with a zillion other studies, but props to Dave for this particular interesting and easy-to-understand version) has disproved.

OTOH, particularly given Behe’s readiness to shift definitions whenever challenged, it’s possible that the second bit is part of the definition. But in that case the concept is useless, since you can never determine by observation that a structure is IC–you can only determine that by first proving that it could not have evolved. Which, in fact, is what you said–if it could have an evolutionary precursor, it isn’t IC.

So once you guys have proven the impossibility of evolution, we can start pointing to things as IC–but until then, by your own definition, “irreducible complexity” is about as useful to science as “leprechaun.”

Comment #111378

Posted by GuyeFaux on July 11, 2006 2:35 AM (e)

RogerRabbit wrote:

…and reduced complex systems whose solution space exceeds the UPB by more orders of magnitude than one can imagine…

I smell the Filter. How come people always forget to check their Filter?

Anyhow, I’d like to point out that this problem is quite close to replicating a UPB result (10^150, or about 500 bits). The solution space consists of 26 decimal digits (approximately 86 bits) for coordinates &c. and 36 bits for the connectivity graph, for a very nice 122 bits in the solution space. Depending on how you look at it, this is only an order of magnitude off. Were we to go to 14 fixed nodes and 14 variable nodes, we would need 14*2*4=112 bits for coordinates (assuming 4 bits a number; could probably be increased), and 2*14^2=392 bits for the connectivity graph. Also, we still need 4 bits for the node count, which means a whopping 508 bits of solution space.

So Steiner Solutions for 14 nodes, generated by genetic algorithms, will register as “designed” by the explanatory filter. But I’m sure we won’t go there.
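(Spelling out that arithmetic, under the same stated assumptions - 4 bits per coordinate value, 4 bits for the node count:)

    import math

    small = 26 * math.log2(10) + 36        # ~86.4 + 36 = ~122 bits
    big = 14 * 2 * 4 + 2 * 14 ** 2 + 4     # 112 + 392 + 4 = 508 bits
    print(round(small, 1), big)            # 122.4 508 -- past the 500-bit UPB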

Comment #111477

Posted by Dilettante on July 11, 2006 12:58 PM (e)

AC on July 10, 2006 05:21 PM:

Dilettante wrote:

Is there not a purpose behind evolution itself?

There is no evidence of one. Do you have such evidence?

1. Environments are set up for the creation and sustenance of life forms…

Does the invisible food provider have something in mind?

Do you have evidence that the universe (or some subset of it) is an environment set up for the creation and sustenance of life forms, much less that this was done by a cosmic “invisible food provider”?

I prefer to say Nature rather than Cosmos. However, if you accept that Nature and Cosmos are synonyms, then I don’t mind that pejorative.

1. If the micro-organisms on the Petri dish could think, would one of them know that they were part of an experiment? Would it be easy to know the duration of the experiment?

2. If the nodes and connectors in the Steiner solution were alive and conscious, would they know that Dave Thomas set them up?

3. 400 types of bacteria live in my colon and small intestine, making up 1.5 kg of my weight. For them, it’s a habitat. Their home. They have no idea that they are there to make enzymes, digest food, and help eliminate waste matter. They just live and breathe in their colonies along with their families, procreate, defend against enemies, and die in time.

If told that their habitat is part of a larger organism, they might shake their heads. A bacterium cannot use words like small intestine, enzymes, etc. They might ask for evidence, which cannot be provided by the bacterium with the strange ideas.

If the bigger organism wishes to communicate with the bacteria, it cannot. Firstly, there is no common language. Second, the principles and the concepts are too remote and complex to be communicated. Third, the bacteria do not have the interests, brains and vision to comprehend the principles.

The Mind of Nature cannot be understood and proved so easily. If we start with the idea of Nature as an Object, then the first reasons that occur to us will be the utilitarian ones - Survival and Domination. We can inductively reason about its purposes and means of achievement if we start with the idea of Living Nature.

Comment #111481

Posted by GuyeFaux on July 11, 2006 1:29 PM (e)

Dilettante wrote:

We can inductively reason about its purposes and means of achievement…

Absolutely not, for pretty much the reasons you suggested:

If the bigger organism [Nature] wishes to communicate with the bacteria [humans], it cannot. Firstly, there is no common language. Second, the principles and the concepts are too remote and complex to be communicated. Third, the bacteria [humans] do not have the interests, brains and vision to comprehend the principles.

The answers to your 1. and 2. are a justified no. If you are interested in such questions, I suggest you read some philosophy starting with Descartes. These notions are not terribly useful for science, precisely because they don’t have an inductive basis to stand on.

Scientists don’t have much of an issue with believing/wishing that the Universe was designed. For all we know, we’re all part of a super-being’s high school project, running on Its hard drive. But thinking this way gets us nowhere scientifically; there’s absolutely zero proof for any such conjectures, ID notwithstanding.

Comment #111570

Posted by trrll on July 11, 2006 10:00 PM (e)

Not at all. The essence of IC is that it can’t be reduced to a functional Darwinian precursor. If it can, it isn’t IC, and can be reached by a direct Darwinian pathway.

This seems to be the reverse of what I understand Behe to be arguing. I understood his argument to be that because some structures appear to be irreducibly complex, in that no single part is redundant to function, such structures cannot have evolved from simpler precursors.

Now this argument has problems, such as ignoring scaffolding and repurposing as mechanisms, but at least it is a real argument. But it seems that you are accusing Behe of making a vacuous circular argument: IC structures are, by definition, structures that cannot have evolved, therefore evolution could not have produced such structures.

Comment #111716

Posted by LT on July 12, 2006 12:29 PM (e)

Many of the (fallacious) front-loading arguments can also be demonstrated to be significantly out of date when one looks at some of the earliest forms of genetic algorithms.

A good read on the topic can be found at: http://www.infidels.org/library/modern/meta/geta…

Some of the earliest codes that competed for ‘life’ had no ‘front loaded’ goal other than to survive and reproduce. Hmm, that sounds familiar; I know I’ve heard it somewhere before… :-)

From the link:
‘Years before, an MIT computer hacker had introduced him to the idea of self-replicating computer code in a virtual machine. He wondered if he might be able to build some kind of artificial life or a-life software, that would let him run experiments in evolution. He built a computer program to model a virtual computer similar to the one in the Core Wars game, and called his virtual world Tierra. However, Ray added a new feature to the virtual world that had been missing from Core Wars: mutation…
Ray altered the Tierra system to simulate a computer with a slight flaw. Every now and again, the machine code instruction which copied data between memory cells would randomly flip one of the bits during copying. If the data being copied was the machine code of the program itself as it tried to reproduce, the result would be a slightly different mutant program.
High-end computers use special error-correcting memory, specifically to avoid bits getting flipped. They do this because if you flip bits in the machine code of a piece of software, it will almost certainly crash. Conventional wisdom before Tierra was that randomly flipping bits of a machine code program could never result in improvement to that program–the chances against it were astronomical.
Like in the real world, Tierra had natural selection. Mutant programs that crashed were eliminated as unfit. In addition, a process called The Reaper would pick off the oldest programs to free up space–meaning this new virtual world had death, as well…
Ray decided to start his Tierra system off with a population of the most simple programs possible. He wrote a piece of code that simply copied itself elsewhere in memory then spawned the copy. It was 80 bytes long, so he named it 80. He spread a few 80s through the Tierra system’s memory, and started the clock.
For the first few thousand generations, nothing much of interest happened. There were a few minor mutations that didn’t break the code, but that was about it. Before long, though, there were a number of 81s–mutants with an extra byte of program code. A little later, a 79 appeared. Because the 79 had one less byte of code, it took less time to reproduce, and was more successful than the 80 or 81. It began to take over the Tierra ecosystem.
Next, something astonishing happened. A 45 appeared. Ray was initially mystified; he’d written the simplest code he could imagine and it was 80 bytes. A 79 seemed reasonable, but how could a 45 reproduce in only just over half the space? Examining the code of 45 provided the answer–and a new surprise. The 45 was a parasite: instead of reproducing itself, it hunted for the reproductive code of an 80, then called that code. It was almost like a biological virus, which reproduces by inserting its DNA into a host cell and using the cell’s reproductive apparatus to build more viruses.
No parasite code had been written at any stage in Tierra’s development, and the system had not been designed to support parasites; the fact that one program could make use of code in another was an accident. Yet the system had reproduction, death, natural selection and mutation, and that seemed to be enough to cause parasites to appear from nowhere. Suddenly Tierra was an ecosystem in balance. If there were too many 45s, then the 80s would die out, unable to compete; and then the 45s would die out too, unable to reproduce without a host.
It turned out that the 79s were immune to the 45 parasite. Ray placed some 45s in a Tierra world heavy with 79s, and soon a new 51 parasite appeared which was able to use 79s to reproduce. If the system was left running long enough, parasites of parasites began to appear.
Then came another surprise: after a long period of mutation and natural selection, another new program appeared. This was a 22, and it was completely self-contained, not a parasite. Somehow Tierra had evolved a program that was smaller than any human being had managed to come up with.’

There’s more. It’s a very interesting article. And it puts to rest many of the IC and other arguments made about how this sort of result doesn’t actually reflect anything other than ID. And most of this was done years before anyone even coined the terms ID or IC. :-)
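(For the curious, the selective pressure in that story is easy to caricature in a few lines. This toy is emphatically not Ray’s Tierra - the organisms here are just genome lengths - but it shows the mutate-during-copy-plus-Reaper loop in action:)

    import random

    # Toy caricature: shorter genomes copy faster, copies occasionally
    # mutate by a "byte", and a Reaper culls the oldest when memory fills.
    random.seed(1)
    organisms = [80] * 10                  # start with a few of Ray's 80s
    CAPACITY, STEPS = 200, 20000

    for _ in range(STEPS):
        weights = [1.0 / g for g in organisms]        # short = faster copying
        parent = random.choices(organisms, weights)[0]
        child = parent + random.choice([-1, 0, 0, 0, 1])   # rare copy error
        if child >= 10:                    # too-short genomes just "crash"
            organisms.append(child)
        if len(organisms) > CAPACITY:
            organisms.pop(0)               # the Reaper takes the oldest

    print(min(organisms))                  # typically well below 80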

Cheers.

Comment #111756

Posted by Roger Rabbitt on July 12, 2006 3:54 PM (e)

Anton Mates says:

Whereas the subsequent

An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional.[M Behe]

is a claim about IC, not part of its definition.

I don’t wish to argue about what is officially a “definition”, other than to point out that Behe uses the term “by definition” in that second statement. At a minimum, I think we can infer that is what he means by IC.

OTOH, particularly given Behe’s readiness to shift definitions whenever challenged, it’s possible that the second bit is part of the definition.

I think that fails as an explanation, since the statement was from his book DBB, and the “challenges” really commenced with its publication. It is possible that statement surfaced as a clarification based on misunderstandings he encountered in dialogues he had before writing the book, but there isn’t anything sinister about that, IMHO.

But in that case the concept is useless, since you can never determine by observation that a structure is IC—you can only determine that by first proving that it could not have evolved. Which, in fact, is what you said—if it could have an evolutionary precursor, it isn’t IC.

You seem to be confusing the concept, called IC, which Behe is defining, and actual biological candidates for IC, such as the bacterial flagellum. Secondly, its “usefulness” doesn’t necessarily depend on “proving” that a specific system meets the criteria. It can also be useful as the best inference from the available evidence. Finally, you are confusing a generic word such as “evolved”, with Behe’s more precise language. It really is important to read the words carefully if you wish to understand his position. He hasn’t said in his formal sources, such as DBB, that IC structures “can’t evolve”. He is saying that “An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional”.

If you wish to challenge Behe, you need to challenge that.

Comment #111761

Posted by Roger Rabbitt on July 12, 2006 4:14 PM (e)

trrll says:

This seems to be the reverse of what I understand Behe to be arguing. I understood his argument to be that because some structures appear to be irreducibly complex, in that no single part is redundant to function, such structures cannot have evolved from simpler precursors.

I’m sorry, but I don’t see where the two statements are the reverse of each other. Maybe you could focus on the key word or phrase in my statement that you differ with, and explain why you differ.

Now this argument has problems, such as ignoring scaffolding and repurposing as mechanisms, but at least it is a real argument. But it seems that you are accusing Behe of making a vacuous circular argument: IC structures are, by definition, structures that cannot have evolved, therefore evolution could not have produced such structures.

Like Anton, you aren’t reading Behe’s words very carefully. “Cannot evolve” isn’t in there. As for being a “vacuous circular argument”, you might consider why Thornhill and Ussery went to all the trouble to consider IC, and address it in a peer-reviewed publication. And they agreed with Behe, that IC systems, if they exist, cannot be produced by a direct Darwinian pathway.

While it is true that the couple of sentences cited don’t address “scaffolding and repurposing”, not every sentence can be expected to address all issues in a discussion. Behe certainly discusses “indirect pathways” on the very next page. And although that discussion is sketchy and his conclusion may not be satisfying to you, I haven’t seen any substantial counter-evidence in the decade since the book was published.

Comment #111767

Posted by Emerson José Silveira da Costa on July 12, 2006 4:30 PM (e)

That’s why I suggested a world where “mechanical” organisms could evolve… It would be harder for the ID’ers to (pretend to) “fail to see” the IC structures that would appear in such a world, as Roger Rabbitt has been doing with the Steiner Solution.

Comment #111769

Posted by GuyeFaux on July 12, 2006 4:32 PM (e)

Behe’s argument, correct me if it’s a strawman:

1) Define: IC systems are complicated and if you remove a part it stops working.
2) IC systems cannot be built gradually by selecting for their function.
3) IC systems therefore can’t evolve.
4) Natural systems are IC.
5) Evolution doesn’t explain complexity.

Point 3) is an inference based on 1 & 2, and not part of the definition of IC. This inference is precisely what this experiment proves is wrong.

Comment #111780

Posted by Dave Thomas on July 12, 2006 5:05 PM (e)

Roger Rabbitt, I think Anton Mates got it exactly right. (Props to GuyeFaux as well!)

If the concept of “Irreducible Complexity” is to have any bearing on whether or not some structure can be achieved via evolutionary means, then it must be a stand-alone definition, such as

By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning. [Behe, DBB]

If we need to know the “history” of the structure - whether or not there was a direct or indirect evolutionary path - in order to declare the structure to be Irreducibly Complex (IC) or not, then the concept of IC is scientifically useless, as Anton rightly pointed out.

Either an object is IC, or it is not. Who cares about the history? Behe’s whole point is that when we observe such irreducible structures, that is evidence of Design, because they couldn’t have evolved directly.

If you start changing the definition of IC to include a priori knowledge of the object’s history, including direct or indirect evolutionary events, so that you can declare objects like the Flagellum to be IC, while simultaneously declaring objects like the 7-segment Steiner Solution discussed here to be “Not IC,” then the whole concept is exposed as a vacuous sham.

And Mr. Rabbitt is indeed including history as a requirement for deciding if an object is IC or not. In his own words,

But of course, many of your precursors are indeed “functional”, hence the SS [Steiner Solution] is not IC based on how YOU defined the basic function.

If IC really means

By irreducibly complex I mean a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning.

then the Steiner Shape is undeniably IC.

Not one segment of the Steiner can be removed without total loss of function. Not one segment can be re-routed without significant impact upon its function (minimal length).

If you contend that the Steiner is not IC because of its history (which included functional precursors), then how do you know - really know - that The Flagellum didn’t have a similar history?

Remember, Behe’s argument is like Paley’s: if you find a watch, or a Flagellum, in the woods, there’s no way something that intricate and non-reducible could have evolved by natural selection.

In Paley’s view, we don’t need to know the history of the Watch. All we need is to observe that this object we’ve stumbled upon is complex, couldn’t have evolved, and so must be Designed.

The “Flagellum” (there are actually many flavors of “flagellum”) is no different.

Boy, I’m so glad I’m not a creationist. I don’t think I could withstand the required mental gyrations for extended periods.

Dave

Comment #111782

Posted by Henry J on July 12, 2006 5:27 PM (e)

Re “If we need to know the “history” of the structure - whether or not there was a direct or indirect evolutionary path - in order to declare the structure to be Irreducibly Complex (IC) or not, then the concept of IC is scientifically useless, as Anton rightly pointed out.”

Or just useless, period, since there’s also no nonscientific use for it, either. (Well, unless one counts its use in propaganda efforts.)

Henry

Comment #111862

Posted by Roger Rabbitt on July 13, 2006 4:41 AM (e)

If the concept of “Irreducible Complexity” is to have any bearing on whether or not some structure can be achieved via evolutionary means, then it must be a stand-alone definition, such as

I’m not exactly sure what you mean by “stand-alone definition”. Maybe you can give me a “stand-alone definition” of “stand-alone definition”, and then a “stand-alone definition” of evolution.

If we need to know the “history” of the structure … And Mr. Rabbitt is indeed including history as a requirement …

No, I don’t believe I said “need” or “requirement” anywhere, but merely “we have”. Indeed, wasn’t that your point in this thread? That you can use the “history” of your runs of your GA to demonstrate something about IC? But that cuts both ways.

… whether or not there was a direct or indirect evolutionary path - in order to declare the structure to be Irreducibly Complex (IC) or not, then the concept of IC is scientifically useless, as Anton rightly pointed out.

Then I suggest you write an article taking on Ussery and Thornhill. Because they thought that the prohibition of direct Darwinian pathways was inherent in the definition of IC.

Maybe you could start off your article by proclaiming:

Boy, I’m so glad I’m not Thornhill or Ussery. I don’t think I could withstand the required mental gyrations for extended periods.

Comment #111870

Posted by Darth Robo on July 13, 2006 6:27 AM (e)

Roger Rabbitt said:

“I’m not exactly sure what you mean by “stand-alone definition”. Maybe you can give me a “stand-alone definition” of “stand-alone definition”

You were given a perfectly good example of a stand alone definition as defined by Behe himself. However, in the statement above, you have just implied that you wouldn’t understand a “stand-alone definition” of “stand-alone definition” if one was given to you, so what would be the point?

Maybe it’s time to just start some wabbitt jokes. :)

Comment #111896

Posted by Dave Thomas on July 13, 2006 9:20 AM (e)

Roger Rabbitt, you are again making claims about Thornhill and Ussery that are easily dismissed by simply reading their paper.

You say

Then I suggest you write an article taking on Ussery and Thornhill. Because they thought that the prohibition of direct Darwinian pathways was inherent in the definition of IC.

Maybe you could start off your article by proclaiming:

Boy, I’m so glad I’m not Thornhill or Ussery. I don’t think I could withstand the required mental gyrations for extended periods.

Interestingly, Thornhill and Ussery actually provide a STAND ALONE DEFINITION of “Irreducible Complexity”:

DEFINITIONS
Irreducible Complexity
The quality of a structure such that at least one of its components is essential, with its loss rendering the whole structure absolutely nonfunctional. This term was coined by Behe (1996a, p. 39).

This is exactly what I’ve also been saying, lo these many comments.

Since you’ve repeatedly demonstrated an inability to comprehend Thornhill and Ussery’s work, let me spell it out for you.

Behe says that IC structures are inaccessible via Darwinian evolution. Period.

Thornhill and Ussery say that IC structures are accessible via Darwinian evolution, specifically by the mechanism of loss of redundant complexity:

2.3. ELIMINATION OF FUNCTIONAL REDUNDANCY … Redundancy elimination can generate irreducibly complex structures of functionally indivisible components, and a Darwinian evolutionary route of this type has been suggested for biochemical cascades, such as the blood-clotting system (Robison, 1996).

Indeed, Thornhill and Ussery provide another mechanism by which IC can be evolved. But I’m not going to say what it is here. You’ll have to actually go read their paper. It’s only a few pages long.

Dave

Comment #111897

Posted by GuyeFaux on July 13, 2006 9:24 AM (e)

RogerRabbit wrote:

Because they thought that the prohibition of direct Darwinian pathways was inherent in the definition of IC.

It’s not inherent; it’s a deduction based on the definition. It’s point 2) from my post above:

1) Define: IC systems are complicated and if you remove a part it stops working.
2) IC systems cannot be built gradually by selecting for their function.
3) IC systems therefore can’t evolve.
4) Natural systems are IC.
5) Evolution doesn’t explain complexity.

The key phrase here is “direct Darwinian pathway”. Logically, I don’t see any problems with the claim that IC systems cannot evolve by selecting for their function (=direct Darwinian pathway). That doesn’t preclude the existence of other pathways, however, so point 3 doesn’t follow, as this and countless other experiments show.

Comment #111900

Posted by Dave Thomas on July 13, 2006 10:02 AM (e)

Independent Confirmation

One Codec from the UK has been having a splendid time exploring Steiner Genetic Algorithms in Perl and MFC, over at Internet Infidels.

Check it out!

Dave

Comment #111948

Posted by AC on July 13, 2006 3:15 PM (e)

So the short answer to my questions is “no”….

Back to the mists for this math nerd. Great article, Dave.

Comment #111981

Posted by Anton Mates on July 13, 2006 5:18 PM (e)

Roger Rabbitt wrote:

Anton Mates wrote:

Whereas the subsequent

An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional.[M Behe]

is a claim about IC, not part of its definition.

I don’t wish to argue about what is officially a “definition”, other than to point out that Behe uses the term “by definition” in that second statement. At a minimum, I think we can infer that is what he means by IC.

Perhaps you’re unfamiliar with the phrase, but “by definition” implies that the definition in question has already been presented, thus making it clear that the second statement is not part of the definition.

Moreover, “by definition” in that statement refers only to the last clause, which is indeed a logical consequence of–but not equivalent to–the prior definition of IC in the first statement.

Lastly, I must point out that you yourself invoked only the first statement when you originally provided a definition of IC earlier in this thread.

OTOH, particularly given Behe’s readiness to shift definitions whenever challenged, it’s possible that the second bit is part of the definition.

I think that fails as an explanation, since the statement was from his book DBB, and the “challenges” really commenced with its publication. It is possible that statement surfaced as a clarification based on misunderstandings he encountered in dialogues he had before writing the book, but there isn’t anything sinister about that, IMHO.

I should have been more precise. It’s fairly obvious that, when writing DBB, he didn’t intend the second bit as part of the definition; but it’s quite possible that he would present it as being part of the definition at a later time (including the present), once the challenges started coming in. Again, Behe shifts definitions quite readily when challenged.

But in that case the concept is useless, since you can never determine by observation that a structure is IC—you can only determine that by first proving that it could not have evolved. Which, in fact, is what you said—if it could have an evolutionary precursor, it isn’t IC.

You seem to be confusing the concept, called IC, which Behe is defining, and actual biological candidates for IC, such as the bacterial flagellum.

I’m not entirely sure how to respond to this. Since you yourself refer to “IC structures” below, you’re clearly aware that “IC” is often used as an abbreviation for the adjectival phrase “irreducibly complex,” just as I used it above. Where, then, is the confusion?

Secondly, its “usefulness” doesn’t necessarily depend on “proving” that a specific system meets the criteria. It can also be useful as the best inference from the available evidence. Finally, you are confusing a generic word such as “evolved”, with Behe’s more precise language. It really is important to read the words carefully if you wish to understand his position. He hasn’t said in his formal sources, such as DBB, that IC structures “can’t evolve”. He is saying that “An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional”.

If you wish to challenge Behe, you need to challenge that.

The entire point of this thread is that Dave Thomas has successfully done so. But if you’re disavowing the claim that “IC structures “can’t evolve”,” no further challenge seems necessary….

Comment #112476

Posted by Jim Vogan on July 15, 2006 5:16 PM (e)

After reading this excellent article and the comments and thinking about it for a couple days, I found myself mentally composing an argument about one part of the discussion which bothered me. I realize that when Ragnarok occurs I will just be carrying spare spears for a warrior on the E-team, not qualified to do more, but perhaps speaking up here will be good practice for any minor skirmishes I have to fight on my own.

It seemed to me that there is a point somewhere in what Roger Rabbitt was saying, i.e. that the GA discussed here, while it accomplishes the objective for which it was initiated and gives some other interesting results, does not serve as a counterexample which totally refutes the concept of IC - although it does if you use the Behe definition which was cited (which I think was not a good definition).

If I have read correctly, the IC concept was discussed by Paley and Darwin (although I don’t think they used the term IC) as a possible falsification test for biological evolution (a term not used by Darwin either, but which I am using as short-hand here). The idea was to propose some kind of complex biological organ or process, such that no serial train of subsets of its components could provide any useful functions - only the complete assembly of components provided a useful function.

Therefore, it would be very unlikely (even on evolutionary time scales) for one subset of components to randomly evolve and be in existence long enough for the final components to also evolve and have a chance to combine with them. IF such an organ or process could be found, that would be a serious blow for Darwin’s theory - I think Darwin agreed. The C/ID proponents have proposed some candidates (the eye, flagellum, Krebs Cycle), but have been refuted - by showing that there are subsets of the set of components which do have uses, and could have been precursors to the final set.

It is not completely impossible that an IC organ as defined above could randomly evolve, but the whole point of the thought experiment, it seems to me, was to construct the definition of IC such that an IC organ would be very, very unlikely to evolve naturally - and hence serve as a good falsification test.

(I have developed an analogy about a “flashlight” organ, composed of a “battery” organ, a “light bulb” organ, a cylindrical casing, an on/off switch, a reflective lens, and a transparent end-cap, with which I could illustrate this … it would only take a few minutes … no? Oh well.)

In the comments at the end of this interesting and brilliant article, Dave Thomas says that not only has he shown the lack of need for specific blue-print-type targets in GA’s, but that the results have refuted the whole notion of IC as well - as defined by Behe. However, the real point of the IC test, as I see it, involves the probability of a certain type of complex organ having evolved in nature. At the least, it seems to me, it is hard to say that outcomes from a computer program are well-calibrated in probabilities to events that occur in nature. In other words, what is IC in nature may not be IC for a particular computer program.

Trying to look at the argument about software GA’s versus nature from the C/ID point of view, I would summarize as follows:

Let’s say somebody writes a piece of software using a GA to “evolve” something - let’s say the famous “flashlight organ” (or FLO). The GA combines and mutates “genes” which can form good batteries, dud batteries, no battery, good bulbs, dud bulbs, no bulb, transparent end-caps, opaque end-caps, et cetera. It uses a fitness test for breeding rights in which a perfect FLO (good battery, good bulb, etc.) gets a 1.0, and everything else gets a number between 0 and 1 depending on how many of the components of the perfect FLO it has.

Some IDist will then say, no fair, you used a specific target for your fitness criteria!

Then, somebody rewrites the software to test for fitness by measuring how much light each individual in a generation of the FLO population produces. Again the perfect FLO evolves - as well as some interesting variations of it.
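(Sketched in code, with every name hypothetical since the FLO program is imaginary: first the targeted scorer described above, then the target-free rewrite that scores light output instead.)

    REQUIRED = ("battery", "bulb", "casing", "switch", "lens", "endcap")

    def flo_fitness_targeted(organism):
        """Score 0..1 by how many components of the perfect FLO it has."""
        return sum(1 for p in REQUIRED if organism.get(p)) / len(REQUIRED)

    def flo_fitness_light(organism):
        """Target-free rewrite: score only the light actually produced."""
        essentials = ("battery", "bulb", "switch")   # assumed light-critical
        return 1.0 if all(organism.get(p) for p in essentials) else 0.0

    perfect = {p: True for p in REQUIRED}
    print(flo_fitness_targeted(perfect), flo_fitness_light(perfect))  # 1.0 1.0
    dud = dict(perfect, battery=False)
    print(flo_fitness_targeted(dud), flo_fitness_light(dud))          # ~0.83 0.0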

Still, some IDist could protest, saying, your software allowed an otherwise-perfect FLO with no battery to mate with one that had no bulb but was otherwise perfect. Sure, 1/4 of their offspring were perfect FLOs, but how likely would that be to happen in real life, when each is an evolutionary sport which wastes biological resources to no useful purpose? Your computer program does not accurately model that fact.

The best way to answer “how could a FLO have evolved in nature” is, I think, to show that first there was a “club” organ, which mutated into a “cattle-prod” organ, which mutated into a “flashlight” organ - in which case the FLO is not IC. Again, a legitimate purpose of the IC concept, it seems to me, was to check Darwin’s Theory by constructing a falsification test for it. As “evolutionists”, we expect that no provably-IC organs will ever be found. The C/IDists hope there will be.

Because we have different axioms (like plane-geometers and spherical-surface-geometers, another useful analogy of mine), we spend a lot of time talking past each other, rather than trying to understand each other’s points. This was my attempt to understand what the other guy was saying.

I hope both sides can agree that if GA’s did not exist, then either Evolution or the IDer would have invented them, because they are interesting and useful.

Comment #112532

Posted by Andrew McClure on July 15, 2006 11:11 PM (e)

Jim Vogan wrote:

It seemed to me that there is a point somewhere in what Roger Rabbitt was saying, i.e. that the GA discussed here, while it accomplishes the objective for which it was initiated, and gives some other interesting results, does not serve as a counterexample which totally refutes the concept of IC - although it does if you use the Behe definition which was cited (which I think was not a good definition).

The thing is, if we don’t use the Behe definition, what do we use? Though he wasn’t the one to first discuss the issue, Behe is the one who is ultimately responsible, I think, for most people having heard of “Irreducible Complexity” today; his conception of and claims about IC are the ones which pervade the ID movement. I am personally aware of no noteworthy “Intelligent Design” treatments of the IC concept which do not reference Behe at some point.

If the ID movement is making claims about “Irreducible Complexity” and expecting any credibility at all, they need to be upfront and straightforward about what those claims are. If you allow otherwise, then ID supporters get to play a terminology shell game where they’re talking about Behe’s writings and strict definition of IC whenever they play to the public, and some looser nebulous definition whenever people start actually attacking Behe’s work on its own merits. IC can’t be refuted at all if every time you try to nail it down the definition changes.

If you think it would be a good idea to do further testing which more closely mirrors biological structures than the graph problem here does (that is, tests which approach the question of IC structures in nature instead of IC structures by their basic definition), then this is of course an excellent idea and could be interesting. But the single test here was sufficient to refute IC by its basic definition, and to refute many categorical statements which Behe personally has made in the past about things which fit IC’s basic definition. This by itself is significant - especially since I for one fully expect that, despite their cheerfulness about casting about for new definitions of IC the moment somebody points out flaws in it, outside of this one blog thread creationists will as always continue standing by those portions of Behe’s writings and definition of IC that have already been shown to be flawed.

However, the real point of the IC test, as I see it, involves the probability of a certain type of complex organ having evolved in nature.

Hm. Then why all the talk about spring-loaded mousetraps, which never did evolve in nature?

At the least, it seems to me, it is hard to claim that the probabilities of outcomes from a computer program are well calibrated to events that occur in nature. In other words, what is IC in nature may not be IC for a particular computer program.

Perhaps. But the thing is, mathematical models and computer simulations are most convincing when they are simple. The more minimal a model is, the easier it is to believe it is analogous to the thing being modeled. This means there’s kind of a catch-22 going on here. If the model of an evolutionary computer simulation is made complex, then that complexity can be used to denounce the simulation as having “snuck in” information. If the model is simple, then the simulation can be denounced as not close enough to biology.

Comment #112539

Posted by Anton Mates on July 16, 2006 12:58 AM (e)

Jim Vogan wrote:

If I have read correctly, the IC concept was discussed by Paley and Darwin (although I don’t think they used the term IC) as a possible falsification test for biological evolution (which term was not used by Darwin either, but I am using as a short-hand term here). The idea was to propose some kind of complex biological organ or process, such that no serial train of subsets of its components could provide any useful functions - only the complete assembly of components provided a useful function.

Therefore, it would be very unlikely (even on evolutionary time scales) for one subset of components to randomly evolve and be in existence long enough for the final components to also evolve and have a chance to combine with them. IF such an organ or process could be found, that would be a serious blow for Darwin’s theory - I think Darwin agreed. The C/ID proponents have proposed some candidates (the eye, flagellum, Krebs Cycle), but been refuted - by showing that there are subsets of the set of components which do have uses, and could have been precursors to the final set.

If you could provide a reference I’d love to see it, but I actually don’t think IC in itself would mean much to either Paley or Darwin. Paley of course was very impressed by complexity, period, but since he died well before the Origin or even the Vestiges he wasn’t too concerned with how to falsify any remotely modern model of evolution.

As for Darwin, he took pains to explain how an apparent loss or initial absence of function could be explained by redundancy or multifunctionality, and given that, I don’t think he’d consider IC particularly meaningful or testable:

Chuck wrote:

We should be extremely cautious in concluding that an organ could not have been formed by transitional gradations of some kind. Numerous cases could be given amongst the lower animals of the same organ performing at the same time wholly distinct functions; thus the alimentary canal respires, digests, and excretes in the larva of the dragon-fly and in the fish Cobites. In the Hydra, the animal may be turned inside out, and the exterior surface will then digest and the stomach respire. In such cases natural selection might easily specialise, if any advantage were thus gained, a part or organ, which had performed two functions, for one function alone, and thus wholly change its nature by insensible steps. Two distinct organs sometimes perform simultaneously the same function in the same individual; to give one instance, there are fish with gills or branchiae that breathe the air dissolved in the water, at the same time that they breathe free air in their swimbladders, this latter organ having a ductus pneumaticus for its supply, and being divided by highly vascular partitions. In these cases, one of the two organs might with ease be modified and perfected so as to perform all the work by itself, being aided during the process of modification by the other organ; and then this other organ might be modified for some other and quite distinct purpose, or be quite obliterated.

Comment #112574

Posted by Dilettante on July 16, 2006 4:39 AM (e)

AC on July 13, 2006 03:15 PM:

So the short answer to my questions is “no”….

Short answers save time, but lose depth. Contemplation may give deeper understanding, but it takes time.

Comment #112826

Posted by Roger Rabbitt on July 17, 2006 7:49 AM (e)

Anton Mates says:

If you could provide a reference I’d love to see it, but I actually don’t think IC in itself would mean much to either Paley or Darwin.

If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case. - Charles Darwin, The Origin of Species

Comment #112865

Posted by Roger Rabbitt on July 17, 2006 10:30 AM (e)

I think that we are trending pretty close to having our exchange here exhibit no new “information”. That being said, I’ll address a few of the points raised, and let others have the last word if they wish.

Darth Robo says:

You were given a perfectly good example of a stand alone definition as defined by Behe himself. However, in the statement above, you have just implied that you wouldn’t understand a “stand-alone definition” of “stand-alone definition” if one was given to you, so what would be the point?

I can’t dispute that it was “a perfectly good example of a stand alone definition”, but that claim is irrelevant. Since “stand-alone definition” remains undefined, I can’t appreciate why it was an example, why the other statement wasn’t, or how that justifies the claim by Dave that only concepts that have these “stand-alone definitions” have bearing on the issues.

And assuming all that can be demonstrated, then we have the issue of why Thornhill and Ussery understood the term differently than Dave, and saw bearing on the issues.

Dave Thomas says:

Since you’ve repeatedly demonstrated an inability to comprehend Thornhill and Ussery’s work, let me spell it out for you.

Behe says that IC structures are inaccessible via Darwinian evolution. Period.

Actually, he says they “cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional.” That’s also what T&U say, ruling that IC systems cannot be produced by Direct Darwinian pathways. And that is why your claim that “whether or not there was a direct or indirect evolutionary path … Who cares about the history?” is in conflict with their claims. If the history shows a direct Darwinian pathway, then T&U would assert it wasn’t IC.

I’m well aware that T&U assert two different pathways that could, in principle, produce an IC system. But your program doesn’t demonstrate them.

Anton Mates says:

But if you’re disavowing the claim that “IC structures ‘can’t evolve’,” no further challenge seems necessary….

Not only do I disavow that claim, Behe did also in DBB. So why all the challenges to that book?

Jim Vogan says:

If I have read correctly, the IC concept was discussed by Paley and Darwin (although I don’t think they used the term IC) as a possible falsification test for biological evolution (which term was not used by Darwin either, but I am using as a short-hand term here). The idea was to propose some kind of complex biological organ or process, such that no serial train of subsets of its components could provide any useful functions - only the complete assembly of components provided a useful function.

I’m not sure I understand you clearly, but just to clarify: Behe doesn’t say that no subset of components could provide “any useful function”. His point is that no subset of its components provides the “basic function” of the IC system. Hence, no slight successive Darwinian improvements of the “basic function” of a precursor system can produce an IC system.

Comment #112872

Posted by Flint on July 17, 2006 10:45 AM (e)

Behe doesn’t say that no subset of components could provide “any useful function”. His point is that no subset of its components provides the “basic function” of the IC system. Hence, no slight successive Darwinian improvements of the “basic function” of a precursor system can produce an IC system.

I’m afraid I must still be missing it. OK, Behe has made the point that certain structures could not have been built by assembly-line style addition of successive parts, with the whole structure serving the same function continuously during the process.

Yes, fine, I seriously doubt that ANY biological structures were built according to the model Behe has revealed (gasp) doesn’t work. What I don’t understand is how finding fault with a simplistic and unrealistic model in any way casts any doubt on evolution. If Behe’s IC structures are inaccessible via a pathway biology does not follow, then so what? In what way is the theory of evolution supposed to have any difficulty with this?

Comment #112997

Posted by Anton Mates on July 17, 2006 6:32 PM (e)

Roger Rabbitt wrote:

If you could provide a reference I’d love to see it, but I actually don’t think IC in itself would mean much to either Paley or Darwin.

If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case. - Charles Darwin The Origin of Species

Surely you’re not saying that that has anything to do with irreducible complexity as Behe uses it? Where’s the “basic function” that must be maintained throughout evolution, and the “several well-matched interacting parts” to fulfill that function?

But if you’re disavowing the claim that “IC structures ‘can’t evolve’,” no further challenge seems necessary….

Not only do I disavow that claim, Behe did also in DBB. So why all the challenges to that book?

I guess no one realized that Behe wasn’t actually attacking evolutionary theory at all, and that he considered all his claims perfectly consistent with mainstream biology. In fact, even Behe himself seems not to have realized that, to judge by his Kitzmiller performance. Odd, that.

Comment #114485

Posted by Dave Thomas on July 24, 2006 4:43 PM (e)

COMMENTS ARE NOW CLOSED

… as it’s been a week or so, and only SPAM is being submitted at this point in time.

The discussion continues over at the Internet Infidels.

Here’s a smidgeon of what we can expect as an Official Response from the IDC community:

I will also post on Dave Thomas’s evolutionary algorithms, but in brief, his disproof can be illustrated by this fictional scenario:

PZ Myers finds DaveScot in the park one day, walks up with a paint ball gun and shoots DaveScot in the chest. Ouch!

Shortly before DaveScot confers a little retribution for this act, PZ pleads, “Don’t be mad Dave, you were not the target of my gun. Honest, I was aiming at the shirt you happen to be wearing, not at you specifically, only your shirt.”

Salvador

I’m curious to see if the “shirt” will turn out to be analogous to the Environment (“Shorter is better, connectivity critical”), and DaveScot analogous to an adapted organism (e.g. the Steiner solution itself)…

Cheers, Dave