PvM posted Entry 2363 on June 10, 2006 10:59 PM.

Sometimes serendipity presents you with an opportunity to educate those who are confused by the claims of Intelligent Design and somewhat unfamiliar with evolutionary theory. So let me start with the answer and then look at the question.

Genetic Programming (GP) has a proven capability to routinely evolve software that provides a solution function for the specified problem. Prior work in this area has been based upon the use of relatively small sets of pre-defined operators and terminals germane to the problem domain. This paper reports on GP experiments involving a large set of general purpose operators and terminals. Specifically, a microprocessor architecture with 660 instructions and 255 bytes of memory provides the operators and terminals for a GP environment. Using this environment, GP is applied to the beginning programmer problem of generating a desired string output, e.g., “Hello World”. Results are presented on: the feasibility of using this large operator set and architectural representation; and, the computations required to breed string outputting programs vs. the size of the string and the GP parameters employed.

Genetic Evolution of Machine Language Software Ronald L. Crepeau NCCOSC RDTE Division San Diego, CA 92152-5000 July 1995

The Results?

From Figure 5 it can be seen that this run achieved a correct output (fitness = 352) at about 150,000 spawnings (100 to 1200 generations). By about 450,000 spawnings, the agent was composed of less than 100 instructions. Ultimately, the agent size reduced to 58 instructions before the process was terminated.
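The mechanics behind that result can be illustrated with a toy far simpler than Crepeau's GEMS environment. Below is a minimal Python sketch (my own illustration, not code from the paper): fitness is simply the number of output characters matching the target, a mutated offspring replaces its parent only when it is at least as fit (a crude stand-in for the paper's "upward mobility" selection), and the fixed string length and per-character mutation rate are simplifying assumptions.

```python
import random
import string

TARGET = "Hello World"
ALPHABET = string.ascii_letters + " "

def fitness(candidate):
    """Score = number of positions that match the target output."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    """Copy the string, replacing each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Start from a random string of the right length (a simplification:
# Crepeau evolves variable-length machine-code programs, not strings).
parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
spawnings = 0
while fitness(parent) < len(TARGET):
    child = mutate(parent)
    spawnings += 1
    if fitness(child) >= fitness(parent):  # keep the offspring only if no less fit
        parent = child

print(f"'{parent}' reached after {spawnings} spawnings")
```

Even this crude version typically reaches the target in a few thousand spawnings, which is the point the probability calculation misses: selection against a fitness signal is nothing like blind enumeration of the whole search space.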

Next question?

Of course, the question posed at Uncommon Descent is flawed for many reasons. In fact, unfamiliarity with evolutionary theory combined with a false analogy quickly produces what is known as a strawman argument.

GilDodgen wrote:

What is the probability of arriving at our Hello World program by random mutation and natural selection?

Source

Pretty darn good, as I have shown.

GilDodgen wrote:

How many simpler precursors are functional, what gaps must be crossed to arrive at those islands of function, and how many simultaneous random changes must be made to cross those gaps? How many random variants of these 66 characters will compile? How many will link and execute at all, or execute without fatal errors? Assuming that our program has already been written, what is the chance of evolving it into another, more complex program that will compile, link, execute and produce meaningful output?

Source

I’d love to see some research in this area. Appeal to ignorance is not really that appealing to me. These are excellent questions, and they should be answered before the plausibility of evolutionary mechanisms is rejected in such an ad hoc fashion. It’s time to abandon these ‘just so stories’ and do some real scientific work.

Of course, one may object to my choice of method, and one may raise a myriad of objections based on the (unjustified) claim that the method required significant intelligent design, or on the fact that the fitness function is smooth, and so on. Still, the example shows that under ‘reasonable assumptions’ natural selection and variation can indeed create the required output string. In fact, this is hardly surprising given the state of knowledge about evolutionary computing.

Perhaps it would be better if ID activists presented an argument based on an analogy that shows at least some minimal similarity to evolution: for instance, a redundant genotype-phenotype mapping, self-replication, a way to introduce selection into the process in an acceptable form, or the replacement of a single fixed goal with a more realistic evolutionary goal. As Lenski and others have shown, however, the processes of variation and chance can indeed generate complexity and even irreducibly complex systems.

In the end the question is not much different from Dawkins’s “Weasel” example, and thus all known limitations apply. So what have we learned from this example?

  1. That a quick Google search can once again answer many of the questions
  2. Intelligent Design once again excels at creating strawmen
  3. Intelligent Design once again lacks scientific relevance
  4. In fact, most of the argument was an argument from ignorance
  5. GilDodgen wrote:

    I can’t answer these questions, but this example should give you a feel for the unfathomable probabilistic hurdles that must be overcome to produce the simplest of all computer programs by Darwinian mechanisms.

While indeed the hurdles may appear intuitively to be unfathomable, it quickly becomes clear that a combined process of variation AND selection can be very efficient in overcoming these hurdles. Dawkins already showed this several decades ago.


Comment #104947

Posted by Mark Frank on June 11, 2006 2:03 AM (e)

I was struck by the relationship between this posting and the previous one on Kirschner’s work. If I understand Kirschner correctly, he is proposing that “random” variation might be biased towards mutations at the subroutine or object level rather than at the level of individual instructions, which I guess would make the evolution of quite complex programs much quicker.

Comment #104950

Posted by normdoering on June 11, 2006 2:26 AM (e)

GilDodgen muddies the waters further when he asks “How many simpler precursors are functional?” Translated to biology that can be seen as asking “what was the first self-replicator (or other-replicator plus symbiosis)?”

This changes the problem into a question of abiogenesis rather than of evolution with a given replicator. Genetic algorithms all assume self-replication in their modeling of biology.

Comment #104951

Posted by normdoering on June 11, 2006 2:38 AM (e)

Here’s another distortion: “what gaps must be crossed to arrive at those islands of function,…?”

The term “islands of function” is also misleading. Drain his pool a little and instead of islands you get mountain ranges and hilly valleys that let a “hill-climbing” algorithm do its work.

Island hopping algorithms might need foresight, hill climbers can be blind.

http://gaul.sourceforge.net/tutorial/hillclimbin…
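For readers who want to see what “blind” means concretely, here is a minimal hill-climbing sketch in Python (a generic illustration, not code from the GAUL tutorial linked above): the climber only ever compares its current position with one randomly chosen neighbour, yet it still reaches the peak without any foresight about where the peak lies.

```python
import random

def fitness(x):
    """An arbitrary one-dimensional landscape with a single peak at x = 42."""
    return -(x - 42) ** 2

def blind_hill_climb(start, steps=10_000):
    current = start
    for _ in range(steps):
        neighbour = current + random.choice([-1, 1])  # blind local move
        if fitness(neighbour) >= fitness(current):    # keep it if it is no worse
            current = neighbour
    return current

print(blind_hill_climb(start=0))  # ends at 42, with no foresight required
```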

Comment #104952

Posted by k.e. on June 11, 2006 2:59 AM (e)

GilDodgen (apparently a member of the increasingly desperate Dembski personality cult) wrote:

What is the probability of arriving at our Hello World program by random mutation and natural selection?

Once again demonstrates the failure of the Infinite monkey theorem

The theorem graphically illustrates the perils of reasoning about infinity by imagining a vast but finite number. If every atom in the Universe were a monkey producing a billion keystrokes a second from the Big Bang until today, it is still very unlikely that any monkey would get as far as “slings and arrows” in Hamlet’s most famous soliloquy.

It does appear however that a finite number of atoms (plus or minus a few on a daily basis) arranged in an evolved ape called William Shakespeare had very little trouble producing that line and no amount of evolving will allow some people to see that.

Gil continues and ponders the enormity of death before enlightenment:

I can’t answer these questions, but this example should give you a feel for the unfathomable probabilistic hurdles that must be overcome to produce the simplest of all computer programs by Darwinian mechanisms.

“The fault, dear Brutus, is not in our stars,
But in ourselves, that we are underlings.”

Comment #104953

Posted by PvM on June 11, 2006 3:10 AM (e)

GilDodgen muddies the waters further when he asks “How many simpler precursors are functional?” Translated to biology that can be seen as asking “what was the first self-replicator (or other-replicator plus symbiosis)?”

I think that Gil did not fully think through his example and failed to appreciate the requirement for a self-replicating system to arise before evolution can take over, but I think that can be relatively easily rectified.

In the end it all comes down to the shape of the “fitness landscape”, and without looking at actual examples it is hard to make any specific claims about such landscapes. For RNA, for instance, it seems that the landscape is quite open to evolution. For proteins, the concept of ‘holey landscapes’ (Gavrilets) makes evolution almost inevitable.

Comment #104958

Posted by snaxalotl on June 11, 2006 4:21 AM (e)

staring, intrigued, at the description “Dembski personality cult”. can anybody think of an example of a personality cult which revolved around a smaller amount of personality?

Comment #104960

Posted by Jeannot on June 11, 2006 4:32 AM (e)

I fail to see how this ‘hello world’ program can be taken as an example of evolution by mutations and natural selection. AFAIK, this process is not supposed to reach any objective or solution.

The probability to produce Homo sapiens from Homo habilis with RM + SN is almost zero, of course. This number is also completely irrelevant.

???

Comment #104962

Posted by Corkscrew on June 11, 2006 5:27 AM (e)

one may raise a myriad of objections based on the (unjustified) claim that the method required significant intelligent design

I’m always amused by that claim. It’s like saying that, by the mere act of painting a bunch of concentric circles on the wall, you’ve given someone all they need to be a master archer.

Comment #104964

Posted by dave42 on June 11, 2006 5:41 AM (e)

Good to see that GP research is feeding back into the biology field. This kind of experiment is directly testing Darwin’s hypothesis in the true scientific spirit and method: predict (there exists a system that, implementing Darwin’s evolutionary rules, can produce a behavior meaningful to us but not to the system), design, build, and run the experiment so that all factors other than those operated on by the process under test are known and fixed, and see it produce the hypothesized result. Very nice.

Somewhat irrelevant are the questions about the logical level chosen for the experiment - it could be any level, or all of them, from atoms through processor architecture through instructions through algorithmic entities. In life all levels are simultaneously undergoing selection and testing. But in an experiment, it is only useful when the questioned independent variables are the only ones manipulated, in order to show the relationships. So choosing the highest level while keeping the lower levels fixed and known is the way. Point of interest in scale: this result is obtained on a very small collection of parallel processes and spawnings. Scale up several trillion trillion times in both number of parallel experiments and run time and you begin to approach what actually happened here on Earth.

Now I’ll bet the virtual machine itself underwent evolutionary changes along the path to its final design - that is the normal creative process at work, and anyone whose life work involves the development of ideas is familiar with this, though most probably do not (yet) relate their intellectual experience to its being a direct experience of the evolutionary process in fact, not merely a passing fancy.

Comment #104965

Posted by Torbjörn Larsson on June 11, 2006 5:53 AM (e)

Jeannot says:

“I fail to see how this ‘hello world’ program can be taken as an example of evolution by mutations and natural selection.”

“GEMS employs mutation of two types.” (Crepeau, p5.)

“As each off-spring is bred, it is evaluated for insertion in the pool using a modification of the process which [Altenberg 1994] calls “upward mobility” selection.” (Crepeau, p4.)

“AFAIK, this process is not supposed to reach any objective or solution.”

The immediate objective is to increase fitness for survival to reproduce. What that means varies as species and environments change, so there is no overall objective.

“The probability to produce Homo sapiens from Homo habilis with RM + SN is almost zero, of course. This number is also completely irrelevant.”

Exactly. We know it happened, however unlikely it was. We also know evolution happens, so there is no probability associated with that either. What was your point in raising this straw man?

“???”

:| :| :| !!!

Comment #104966

Posted by Torbjörn Larsson on June 11, 2006 6:12 AM (e)

Mark,
Re Kirschner, Crepeau says that he thinks his complicated multiple-instruction environment succeeds in outputting strings because he intentionally specified a minimum number of input and output instructions. He also speculates that his phase 2 of shortening the successful programs will be better if he intentionally specifies a minimum number of halt instructions at that phase.

Corkscrew,
A nice view and metaphor.

The claim also misunderstands experiments. All our experiments, instruments and data handling are designed in some manner or they wouldn’t give results. That doesn’t make them illustrations of creationist theory. It is ‘thinking’ analogous to their requirement for supernatural explanations. It is also wrong to single out biology.

You are right, it is laughable.

Comment #104968

Posted by jeannot on June 11, 2006 7:07 AM (e)

Torbjörn, you didn’t understand me. But my English might not be perfect.
Evolution doesn’t have any specific objective nor problem to resolve. A fitness increase is not a specific objective, contrary to the program described here.

Using this environment, GP is applied to the beginning programmer problem of generating a desired string output, e.g., “Hello World”.

Therefore, I concluded that this computer program is totally irrelevant to evolutionary biology.
Calculating the chance to reach the state “hello world” is as useless as calculating the probability for Homo habilis to become H. sapiens.

Or did I miss something?

Comment #104973

Posted by Gerard Harbison on June 11, 2006 8:45 AM (e)

staring, intrigued, at the description “Dembski personality cult”. can anybody think of an example of a personality cult which revolved around a smaller amount of personality?

In the country of the blind, the one-eyed man is king – Erasmus

Comment #104975

Posted by Caledonian on June 11, 2006 8:51 AM (e)

Evolution doesn’t have any specific objective nor problem to resolve. A fitness increase is not a specific objective, contrary to the program described here.

Fitness, although difficult for humans to evaluate, is an objective concept.

A strain of bacteria that lives in a nutrient-poor but nylon-rich pool is not given a goal of evolving a nylon-digesting enzyme by some higher authority. Nevertheless, we can show that deriving energy from nylon compounds is indeed a selective criterion.

You are indeed missing something.

Comment #104977

Posted by Jeannot on June 11, 2006 9:13 AM (e)

Caledonian, I think you missed something in what I said.
But we have the same ideas. What if I told you I am a graduate student in evolutionary biology? ;-)

I said that a fitness increase was not a specific objective (goal if you prefer) similar to the ‘hello world’ program; I didn’t say it wasn’t an objective concept (that’s another matter).

I totally agree that nylonase never was an objective. Thus, it would be useless to calculate the probability of producing the nylonase gene in an initial bacterial population.

Comment #104978

Posted by jeannot on June 11, 2006 9:55 AM (e)

Just to make myself clear:

talkorigins wrote:

Genetic algorithms are not perfect evolutionary simulations in that they have a predefined goal which is used to compute fitness. They demonstrate the power of random variation, recombination, and selection to produce novel solutions to problems, but they are not a full simulation of evolution (and are not intended to be). In simulations of biological evolution, fitness is evaluated only locally; survival and reproduction is based only on information about local conditions, not on ultimate goals. However, the simulations demonstrate that distant fitness peaks will be reached if there are conditions of intermediate fitness (Lenski et al. 2003). Evolutionary processes do not “search.” They respond to local fitness topography only. The fact that evolution (occasionally) reaches fitness peaks is a by-product of evolving on correlated fitness landscapes using purely local fitness evaluation, not an intended outcome.

http://www.talkorigins.org/indexcc/CF/CF011.html…

Comment #104982

Posted by nilekim on June 11, 2006 10:01 AM (e)

Jeannot, I think you raise an important distinction, but I also think it’s a little harsh to call GAs/GP “totally irrelevant” to evolutionary biology. When we plate bacteria on nylon-rich media, we effectively define a fitness function which favors the ability to produce nylonase. When, lo and behold, nylonase-producing bacteria arise and flourish, should we not attribute this to RM/NS because we (effectively) set an artificial objective? After all, this objective is not one that would arise in nature (without manmade nylon).

I think it can be argued that in the GP example, the objective is similarly contained within the fitness function, which is this time explicitly defined; the “organisms” are not “aware” of it and cannot “consciously” work towards it. Of course, there are a lot of other issues in drawing any direct comparisons, but I think GAs can be helpful to evolutionary biologists insofar as they illustrate the general computational principles underlying RM/NS.

On the other point, I also can’t believe that at this point anyone is still parading around “Sequence length n, alphabet length k, probability 1/k^n, HA!”. It’s a total embarrassment.

Comment #104989

Posted by jeannot on June 11, 2006 10:13 AM (e)

Yes, I may have been a little harsh with my ‘totally irrelevant’ if these simulations can be likened to a climb toward an adaptive peak.
But at most, regarding evolutionary biology (not informatics), these simulations are rather useless, since the fundamental theorem of natural selection was demonstrated by Fisher 76 years ago.

Comment #104991

Posted by Todd on June 11, 2006 10:20 AM (e)

jeannot wrote:

Evolution doesn’t have any specific objective nor problem to resolve. A fitness increase is not a specific objective, contrary to the program described here.

It may not have a specific objective, but evolution often does have a specific problem to be solved. For instance, say a prey animal hides in narrow gaps. In order to get more food, a predator would get a selective advantage from getting the food out of the gap. This is a specific problem to be solved. The exact biological solution to the problem varies (some birds use sticks, others use narrow beaks, chimpanzees use sticks, octopi have narrow tentacles and a highly deformable body). There are often many possible solutions to a given problem. One can often easily set up artificial problems to be solved. For instance, surviving an antibiotic would be a problem for bacteria to solve. There may be multiple different mutations that could allow for a solution to that problem. What could be said here is that they are supplying a specific problem to be solved. Such specific problems are extremely common in nature. Organisms that are better able to solve a specific problem, even if they are not perfect at it, would still have an advantage. Animals that can only get food from shallow or fairly wide gaps would still have an advantage over those that can’t get food out of gaps at all. Bacteria that are only partially immune to antibiotics often have a selective advantage, especially if the antibiotic treatment is ended early.

The problem they want the program to solve here may be arbitrary, but it is fundamentally no different from any other specific problem in nature that organisms have evolved to solve. In fact, if there were no problem to be solved, then natural selection could not occur, because the organism must already be perfectly suited to its environment. I say this is directly analogous to real-world evolution. So-called “filling an ecological niche” means solving a specific biological problem, even if the ecological niche is artificially induced. There is no purpose in it, just the selective advantage that comes from solving a problem that the competitors cannot solve, or solving it better or more efficiently than they do. The solution may not be pre-ordained; in fact, the whole point of evolutionary algorithms is to find a new solution to a problem. But there nevertheless are specific problems in nature that organisms have evolved to solve.

Comment #104993

Posted by Mark Isaak on June 11, 2006 10:29 AM (e)

Another point, quite apart from genetic algorithms, may be worth mentioning. The beginning programmer quite likely did not get his “Hello world” program correct the first time. Then he had to make modifications to it and select the version which worked best. In other words, his program evolved. This is certainly the case in more complicated computer programs. The programmers of these take much of their code from previous programs, and even then most of their work goes into fixing bugs which were inadvertently introduced. The process is different in important ways from biological evolution, to be sure, but the process still embodies the basics of evolution: modification and selection of existing forms. In short, design is a kind of evolution. To accept design is to accept evolution, and to reject evolution is to reject design.

Comment #104994

Posted by secondclass on June 11, 2006 10:35 AM (e)

DaveScot responds:

DaveScot wrote:

The response is empty. Pim Van Meurs cites a program (written by intelligent agents I presume) that can create a “Hello World” program from some unspecified genetic algorithm.

Although the actual code isn’t given, the algorithm is described in detail. On the other hand, the process used by ID’s designer remains a mystery.

DaveScot wrote:

The way this is accomplished is not disclosed and if it were disclosed I’m sure we’d find the program is cheating by sneaking information in via the filter which ranks the “fitness” of the intermediate outputs.

Did you read the paper, Dave? It specifically states that the fitness scoring is based on the correctness of the output string. There’s nothing sneaky about it. If the fitness criterion were generated randomly and it turned out to be based on closeness to the string “ewij fopsdajf ifjsofjeij”, then the end result would be an agent that outputs “ewij fopsdajf ifjsofjeij” and the point of the paper would be unchanged.

DaveScot wrote:

Trial and error and the choosing of partially successful intermediate solutions requires purpose and direction. These are supplied by the programmers of the so-called genetic algorithms.

And in nature they’re supplied by environmental selection.

Comment #104995

Posted by jeannot on June 11, 2006 10:38 AM (e)

The frequency of structures that reproduce more efficiently increases, independently of the notions of problem and solution. These notions only exist in intelligent minds that can identify them.

Similarly, adaptive peaks don’t exist before they are reached.

But this is just a problem of semantics.

Comment #104996

Posted by stevaroni on June 11, 2006 10:40 AM (e)

Several things pop out immediately from this data.

The first is the incredible power of a little bit of natural selection pressure.

The odds against generating the final 58-instruction program by random chance are truly huge (1 in 660^58). That kind of number has the flavor of the improbability numbers that ID proponents like to throw out every day. I doubt that there’s enough storage space on the planet to hold all those permutations.

Yet throw in a little survival-of-the-fittest pressure and you can get answers like that in a few thousand generations.

Secondly, look at the graph of fitness over time. Damned if there wasn’t a little mini “Cambrian Explosion” right at generation 400, where the program suddenly became much, much more fit in a very short span.
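For anyone who wants to check that comparison, the two numbers can be put side by side with a couple of lines of Python (the 450,000 figure is the spawning count quoted from the paper above; treating 660^58 as the size of the blind search space is a rough simplification, since program length varies during the run):

```python
# Size of the space of 58-instruction programs over a 660-instruction set,
# versus the roughly 450,000 spawnings the reported run needed.
search_space = 660 ** 58
spawnings = 450_000
print(f"possible 58-instruction programs: about 10^{len(str(search_space)) - 1}")
print(f"spawnings actually used:          {spawnings:,}")
```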

Comment #104997

Posted by stevaroni on June 11, 2006 10:43 AM (e)

Oh, and the third thing is how quickly a primitive precursor of “Hello World” established itself.

Comment #104998

Posted by secondclass on June 11, 2006 10:45 AM (e)

And GilDodgen responds:

GilDodgen wrote:

I would be curious to see the intimate details of the Panda’s Thumb program. I’ll bet dollars to donuts that the programmer cheated by defining intermediate fitness goals with the Hello World program in mind.

I’ll take that bet. Want to put your money (or donuts) where your mouth is, Gil?

At the very least, I think an apology is in order unless you have actual evidence that the programmer cheated.

Comment #104999

Posted by stevaroni on June 11, 2006 10:52 AM (e)

DaveScot wrote:

Trial and error and the choosing of partially successful intermediate solutions requires purpose and direction. These are supplied by the programmers of the so-called genetic algorithms.

In these simulations, the least fit programs are removed by filters, and the more fit programs are left to breed, improving the program pool.

In the real world, the least fit animals are removed by snow leopards and the more fit animals are left to breed, improving the gene pool.

The rules are simple and no intelligence required. Deal with it.

Comment #105000

Posted by Todd on June 11, 2006 11:12 AM (e)

jeannot wrote:

The frequency of structures that reproduce more efficiently increases, independently of the notions of problem and solution. These notions only exist in intelligent minds that can identify them.

The notion of problem and solution may only exist in intelligent minds, the goal to solve the problem may exist only in intelligent minds, but the problem itself exists in nature. It is a fact that in an environment with a high concentration of an antibiotic, bacteria will die unless they evolve resistance to it. This is a natural, objective problem, outside of any intelligent mind. It was a problem bacteria have had to overcome since long before humans had evolved, and will continue to be as long as bacteria exist. The actual understanding of the problem and the ability to identify it as such is a trait of intelligent minds, but the problem itself exists independently of any intelligence. Similarly, food does exist in crevices or holes. That is a natural problem, and has been a problem since long before land animals even evolved. The ability to conceptualize it as a problem is unique to an intelligence, the desire to solve it may be unique to an intelligence, but the problem itself nevertheless exists in nature. Notions exist only in intelligent minds, but the notions can still reflect objective natural phenomena.

In some cases, evolution may be directed towards solving a specific problem. Bacteria that do not develop resistance to a high concentration of antibiotics do not survive, period. When forest turns to grassland, tree-dwelling apes that do not evolve the ability to move around on the grassland do not survive. Other times, it is more open. There could be a great many problems an organism could solve. There are any number of possible ways an organism could get food. They could look for food in crevices, or perhaps dig it out of the ground, or catch it on the run or on the wing. Nevertheless, getting hold of a given type of prey is a real problem that exists in nature outside of an intelligent mind. Insects do live under bark; getting hold of them is a problem. It is by no means the only problem, and in most cases the organisms are not specifically directed to one particular problem above all others (although many organisms end up evolving to solve one particular problem in a given area, like finding food), but the problems are still there. The organisms may not be specifically trying to solve the problem, in most cases they are not even aware of it, and there is nothing making them orient themselves towards one problem except perhaps too much competition for another, but that does not mean the problem does not really exist.

I think this particular experiment would be more akin to an environment where an antibiotic starts off at low concentration and slowly increases in concentration in time with the evolution of more successful antibiotic resistance. Any bacteria that do not keep up are killed off. This sort of thing could happen if a bacterium evolves a new antibiotic and begins to slowly spread through the environment (this is apparently a major issue for soil-dwelling bacteria). The bacteria must get better and better at coping with the antibiotic or they will not survive. Shortening the code would be analogous to antibiotic-resistant bacteria competing with each other by developing more and more efficient antibiotic resistance genes that do not require as many resources or do not negatively impact the bacteria as much.

Comment #105001

Posted by Jim Harrison on June 11, 2006 11:24 AM (e)

Circuits can be and are designed both by versions of the genetic algorithm and by conscious human design. While both kinds of solutions work, those produced by artificial natural selection tend to be more robust in actual use than those made rationally. Like living systems such as metabolic pathways, artificially evolved systems go on functioning even when some of their parts are damaged while designed systems tend to be much more brittle. Andreas Wagner discusses this contrast in his book Robustness and Evolvability in Natural Systems. It is the zillionth reason to believe that living things were produced by chance and selection rather than conscious design.

Comment #105004

Posted by normdoering on June 11, 2006 12:14 PM (e)

Jim Harrison wrote:

Circuits can be and are designed both by versions of the genetic algorithm and by conscious human design. While both kinds of solutions work, those produced by artificial natural selection tend to be more robust in actual use than those made rationally. Like living systems such as metabolic pathways, artificially evolved systems go on functioning even when some of their parts are damaged while designed systems tend to be much more brittle. Andreas Wagner discusses this contrast in his book Robustness and Evolvability in Natural Systems. It is the zillionth reason to believe that living things were produced by chance and selection rather than conscious design.

Give us a break, man. We’re only working with about three pounds of jello-like cerebral grey matter!

Maybe we should call these genetic algorithms intelligent and realize that we’ve already passed into Vernor Vinge’s technological singularity?

http://www-rohan.sdsu.edu/faculty/vinge/misc/sin…

Comment #105005

Posted by steve s on June 11, 2006 12:30 PM (e)

On the other point, I also can’t believe that at this point anyone is still parading around “Sequence length n, alphabet length k, probability 1/k^n, HA!”. It’s a total embarrassment.

Have you read the comments at Uncommonly Dense? Total Embarrassment is their bread and butter.

Comment #105006

Posted by steve s on June 11, 2006 12:40 PM (e)

DaveScot wrote:

The way this is accomplished is not disclosed and if it were disclosed I’m sure we’d find the program is cheating by sneaking information in via the filter which ranks the “fitness” of the intermediate outputs.

Lucky for the people at Uncommonly Dense, they’ve never had any training in science, or they’d recognize this as hand-waving.

Comment #105007

Posted by stevaroni on June 11, 2006 12:54 PM (e)

DaveScot wrote:

The way this is accomplished is not disclosed and if it were disclosed I’m sure we’d find the program is cheating by sneaking information in via the filter which ranks the “fitness” of the intermediate outputs.

This is the difference between ID and science.

Science looks at this sort of stuff and says “Wow, that’s a different little piece of data - I wonder what that means?”

ID looks at the same result and says “It’s obvious that someone must be cheating”.

Comment #105008

Posted by Caledonian on June 11, 2006 1:01 PM (e)

Caledonian, I think you missed something in what I said.
But we have the same ideas. What if I told you I am a graduate student in evolutionary biology? ;-)

Then I would decry the decreasing standards of our graduate schools.

One of the most common ID arguments is that the specific attributes of living creatures could not have arisen through natural selection. This experiment is a quick way of demonstrating what has been proven again and again in more sophisticated and decisive ways: with the right selection pressures, any specified structure can be produced quite easily.

Comment #105015

Posted by steve s on June 11, 2006 1:23 PM (e)

Davetard is always guaranteed to produce humor.

Funny, the most natural thing in the world is for organized systems to become less organized over time. It’s called the second law of thermodynamics. Dr. Behe’s theory follows this law. And by the way, for you anti-sticker apologists, a law is more of a fact than a theory.

http://www.pandasthumb.org/archives/2005/01/a_co…

When he tries to talk about information theory he’s especially funny

Davescot

If you can give me a clear and precisely worded example of an ‘intelligent’ agency causing a violation of the second law, please do.

Me writing this sentence. -ds

Comment by physicist — March 7, 2006 @ 11:30 am

http://www.uncommondescent.com/index.php/archive…

Comment #105017

Posted by PvM on June 11, 2006 2:16 PM (e)

Fascinating how my posting has generated such a level of cognitive dissonance amongst the UcD people.

Trial and error and the choosing of partially successful intermediate solutions requires purpose and direction. These are supplied by the programmers of the so-called genetic algorithms.

I am not sure if Dave is familiar with the work of Charles Darwin, who used artificial selection to formulate his ideas on natural selection. Indeed, as Dave has noticed, it is the environment which provides the ‘purpose and direction’, which is why evolutionary processes appear to be teleological in nature.
In addition, Davescot ignores the questions raised by Gil. But at least I predicted the responses…

Who says that ID is without predictability :-)

Seems that some still believe in the fairy tale of ‘conservation of information’, which really is not a conservation law after all. Davescot is in full damage control mode, it seems. Postings are disemvoweled or removed…
Ah the smell of cognitive dissonance in the morning, it does make a happily married PhD’er quite content :-)
So when will the first ID activist admit that ID’s arguments are scientifically vacuous and that the displacement or conservation laws are meaningless? Or, as one mathematician (David Wolpert actually, the lead author of several NFL theorem papers) put it, Dembski’s work on No Free Lunch is ‘written in Jello’.

Comment #105018

Posted by 'Rev Dr' Lenny Flank on June 11, 2006 2:47 PM (e)

Funny, the most natural thing in the world is for organized systems to become less organized over time. It’s called the second law of thermodynamics.

Geez, are the creationists STILL dragging out this old horse … ?

Next thing ya know, the IDiots will be blabbering (again) about “no fossil transitions”, “bombardier beetles can’t have evolved”, and “radio-dating is unreliable”.

So much for that whole “ID isn’t creationism” thingie, huh.

In the immortal words of Bugs Bunny, “What a maroon”.

(snicker)

Comment #105019

Posted by Sir_Toejam on June 11, 2006 3:02 PM (e)

Next thing ya know, the IDiots will be blabbering (again) about “no fossil transitions”, “bombardier beetles can’t have evolved”, and “radio-dating is unreliable”.

gees, Lenny, why don’t you visit one of AFDave’s threads on ATBC and join in the fun!

he’s recently been addressing the whole radio dating issue.

talk about cognitive dissonance….

Comment #105021

Posted by steve s on June 11, 2006 3:27 PM (e)

Lenny would love AFDave

Comment #105022

Posted by David B. Benson on June 11, 2006 3:35 PM (e)

Second Law of Thermodynamics and Biology: Read “Into the Cool”, a cool book about NET (Non-Equilibrium Thermodynamics) and its relationship to biological processes.

Comment #105023

Posted by PvM on June 11, 2006 3:54 PM (e)

Speaking of cognitive dissonance, Gil has ‘responded’:

The bottom line is that the theory of random mutation and natural selection is dead as an explanation for the origin of life’s complexity, diversity, information content and functionally integrated machinery. Actually, it isn’t even a theory; it’s wildly wishful speculation that flies in the face of common sense and hopelessly huge improbabilities. There isn’t a shred of evidence that RM+NS has the creative power attributed to it. This is not science.

Despite the various lines of evidence that actually show that RM+NS has ‘creative powers’ and can increase the complexity and information in the genome, Gil asserts that RM+NS is ‘dead in the water’. Of course from such a perspective any critical evaluation of evolutionary science becomes quite tricky.

But it’s all the Darwinists have, so it will be defended to the death by any means available, no matter what.

Is this why Gil is unable or unwilling to defend his own claims? At what expense I wonder?
Gil raised a question about the evolution of ‘hello world’ programs under RM+NS. I obliged, and the response seems not dissimilar to someone sticking his fingers in his ears and stating “I can’t hear you”.
I find this fascinating as it serves as a useful example of how critical evaluation can often be relaxed because of preconceived assumptions.

I understand that natural selection requires each step to be functional but not a step toward a predetermined goal. That is my point about why these programs are invalid. Teleology is invariably smuggled into the algorithms. Goals and fitness criteria suitable to reach them are predefined (although sometimes subtly), and intermediate islands of “function” are rigged to be easily reachable by trial and error.

If Gil’s question was irrelevant to evolutionary processes then why did he raise the challenge to show how processes of variation and selection could generate such programs? Teleology indeed is invariably ‘smuggled’ into the algorithms via the environment which does the selecting. Seems to me that Gil agrees that his request was nothing more than a poor analogy and strawman.

Would Gil be interested in exploring how for instance RM+NS can in fact increase the information in the genome? Can Gil explain how natural selection differs from artificial selection? Can Gil perhaps formulate a relevant analogy that explores concepts of evolutionary science such as RM+NS? After all, the empirical evidence for RM+NS is far more extensive than the evidence for the supernatural intelligence required by ID. And yes I am aware that ID activists insist, contrary to logic flowing from their own arguments, that ID is not about the supernatural.

Comment #105024

Posted by PvM on June 11, 2006 3:57 PM (e)

On UcD Mark Frank also responded to Gil

I know very little about these programs but my guess is that you are asking them to be a complete simulation of evolution when all they are doing is illustrating some aspect of how complexity can be achieved through trial and success. After all the example that kicked off this thread was hardly an accurate representation of life and yet you felt that there were lessons to be learned from it.

Indeed, Mark has realized that Gil’s posting was a poor analogy, a strawman. Now that the challenge has been met, Gil seems to object to his own challenge. I can understand why he would regret his example, and the best response would be to admit that the challenge had no real relevance to the abilities of RM+NS to explain the biological complexity and information content of the genome. But who am I to give advice to ID activists :-)

Comment #105028

Posted by RBH on June 11, 2006 4:03 PM (e)

Discussions of computer models of evolutionary processes typically dissolve into confusion due to the failure to carefully distinguish between two kinds of models that differ in the information used to calculate fitness.

1. Models with global fitness calculations. These are Dawkinsian METHINKSITISLIKEAWEASEL sorts of models, where the fitness of a replicator is calculated as the distance of its phenotype from some target phenotype. The fitness equation “knows” the target state, and replicators are more or less fit (and therefore survive to replicate and/or recombine) based on relative similarity (e.g. the Hamming distance) to that target state. These kinds of models are not models of biological evolution, and claims that they are such models flatly misconstrue biological evolution. However, they are useful in demonstrating the power of cumulative selection, which is all Dawkins sought to do with his METHINKS illustration. He explicitly said that the METHINKS program was not a model of evolution, but only of cumulative selection and its power to transform tiny probability into high probability. Creationists have consistently and persistently misconstrued that program since it was published, and Dodgen’s post is yet another example of that misconstrual.

2. Models with local fitness calculations. These are models in which the algorithm does not “know” what a target phenotypic state might be, but “knows” only what is better or worse in the local environment, where “local environment” means the values of relevant environmental variables in the volume of phenotype space actually occupied by the current population. The fitness equation of the algorithm can calculate relative fitness among the replicators in the population, based on some defined properties of the replicators in that environment. Most GAs are of this nature. If I want to evolve a population of artificial stock traders, I cannot write down the specific target phenotype – if I could, I wouldn’t bother to use a GA, I’d just write it down and trade on it. However, I know some properties a good artificial trader should have, and I can write a fitness function that tests for values of those properties. For example, I might use risk-adjusted return over some historical data as the relevant property. All members of the population paper trade over that period, and the algorithm calculates each artificial trader’s risk-adjusted return and ranks the traders on that measure to determine which survive to replicate, their probability of entering into recombination, and so on. The algorithm “knows” a property of artificial traders – relative risk-adjusted return, where better traders are higher – but “knows” neither what an excellent trader’s phenotype would look like nor what the global maximum risk-adjusted return might be. It “knows” only how risk-adjusted return differs among the members of the current population.

Biological evolution is an algorithm of the second sort. The algorithm does not “know” a target phenotype in order to determine fitness on the basis of similarity to that target phenotype. Rather, the algorithm of biological evolution “knows” only locally determined fitness, where fitness is “calculated” implicitly as survival and relative reproductive success of the actual replicators in the population in a specific environment composed of physical variables and biological variables (conspecifics and other species).

As a consequence, any algorithm that incorporates a fitness calculation that refers to some phenotype (or genotype) not currently in the population is not a model of biological evolution. Biological evolution “knows” what’s better or worse in the current population only by virtue of the differential survival and reproduction of the members of that population; it does not “know” an optimal phenotype or genotype toward which it should evolve.

RBH
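A toy version of the second kind of model may make the distinction concrete. The Python sketch below is my own illustration (not RBH's trading system and not Avida): each replicator's fitness is a property measured on that replicator alone, selection is purely rank-based within the current population, and no target genotype appears anywhere in the selection step. Whether the induced landscape happens to have a single global peak is a separate question the algorithm knows nothing about.

```python
import random

GENOME_LEN = 20
POP_SIZE = 50

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def measured_property(genome):
    """A locally measurable property of a replicator (here, simply its number
    of 1s, standing in for something like risk-adjusted return)."""
    return sum(genome)

def mutate(genome, rate=0.02):
    """Flip each bit with probability `rate`."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(200):
    # Rank by the measured property and let the better half replicate;
    # selection only ever compares members of the current population.
    population.sort(key=measured_property, reverse=True)
    survivors = population[: POP_SIZE // 2]
    offspring = [mutate(random.choice(survivors))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

print(max(measured_property(g) for g in population))
```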

Comment #105029

Posted by PvM on June 11, 2006 4:08 PM (e)

Thanks RBH for your clarification. Gil asked the following question

What is the probability of arriving at our Hello World program by random mutation and natural selection?

Which I addressed. Of course, as I pointed out, Gil’s request was not dissimilar from Dawkins’s Weasel and shows how selection can significantly affect the probabilities.

Comment #105030

Posted by Caledonian on June 11, 2006 4:16 PM (e)

METHINKS is a representation of biological evolution; it’s just that it explicitly defines what the environment will favor. Other simulations implicitly define this by generating environments in which certain traits will prove more robust.

From a mathematical and theoretical perspective, it makes no difference if you set up the fitness space explicitly or implicitly. The best solutions within the fitness space will be the states which the simulation will tend towards either way.

Comment #105031

Posted by steve s on June 11, 2006 4:34 PM (e)

I briefly considered forwarding Gil’s article to MarkCC at Good Math / Bad Math, but there’s nothing really interesting about it. We’ve seen people like Salvador make the same arguments in the past. It’s pretty poorly executed.

Writing Computer Programs by Random Mutation and Natural Selection

The first computer program every student writes is called a “Hello World” program. It is a simple program that prints “Hello World!” on the screen when executed. In the course of writing this bit of code one learns about using the text editor, and compiling, linking and executing a program in a given programming environment.

Here’s a Hello World program in the C programming language:

(code stuff)

This program includes 66 non-white-space text characters. The C language uses almost every character on the keyboard, but to be generous in my calculations I’ll only assume that we need the 26 lower-case alpha characters. How many 66-character combinations are there? The answer is 26 raised to the 66th power, or 26^66. That’s roughly 2.4 x 10^93 (10^93 is 1 followed by 93 zeros).

“Let me pick a single target. Boy that’s unlikely!”

To get a feel for this number, it is estimated that there are about 10^80 subatomic particles in the known universe, so there are as many 66-character combinations in our example as there are subatomic particles in 10 trillion universes. There are about 4 x 10^17 seconds in the history of the universe, assuming that the universe is 13 billion years old.

“Now I’ll compare it to a different, irrelevant number”

What is the probability of arriving at our Hello World program by random mutation and natural selection? How many simpler precursors are functional, what gaps must be crossed to arrive at those islands of function, and how many simultaneous random changes must be made to cross those gaps? How many random variants of these 66 characters will compile? How many will link and execute at all, or execute without fatal errors? Assuming that our program has already been written, what is the chance of evolving it into another, more complex program that will compile, link, execute and produce meaningful output?

“How many other targets could there be?”

I can’t answer these questions, but this example should give you a feel for the unfathomable probabilistic hurdles that must be overcome to produce the simplest of all computer programs by Darwinian mechanisms.

“Who knows! But Darwin was wrong.”

Now one might ask, What is the chance of producing, by random mutation and natural selection, the digital computer program that is the DNA molecule, not to mention the protein synthesis machinery and information-processing mechanism, all of which is mutually interdependent for function and survival?

The only thing that baffles me is the fact that Darwinists are baffled by the fact that most people don’t buy their blind-watchmaker storytelling.
Filed under: Intelligent Design — GilDodgen @ 7:34 pm

“Wonder why the experts don’t agree with me?”

Comment #105033

Posted by jeannot on June 11, 2006 4:44 PM (e)

caledonian wrote:

Then I would decry the decreasing standards of our graduate schools.

One of the most common ID arguments is that the specific attributes of living creatures could not have arisen through natural selection. This experiment is a quick way of demonstrating what has been proven again and again in more sophisticated and decisive ways: with the right selection pressures, any specified structure can be produced quite easily.

My point of view appears in the article I quoted from talkorigins, which BTW is a response to Dembski.
Do you imply that the standards of talkorigins are decreasing or that it should not be a source of information for graduate students?

Comment #105037

Posted by steve s on June 11, 2006 4:48 PM (e)

If Dembski was serious about having an intellectually respectable blog, he’d fire all those idiots and get somebody who had some familiarity with science. A guy with a degree in some kind of actual science.

I presume he doesn’t do this because he enjoys the comedy as much as I do.

Comment #105038

Posted by PvM on June 11, 2006 5:10 PM (e)

The level of cognitive dissonance (and strawmen) continues

Avida, Weasel and other simulations are desperate attempts of Darwinists to prove evolutionary mechanisms of RM+NS using computer simulations. In heart of all these simulations you see a fitness functions that *intelligently* selects what should be selected and what should be not. The *designers* of all of these softwares subtly feed their system with external intelligence which normally is not available in the nature.

Still missing the part that involves selection by the environment. Weasel, as well as the ‘Hello World’ example, specifies the fitness landscape a priori, but that is mostly a strawman argument. The real dissonance arises when one argues that a fitness function requires intelligent designers while avoiding the simple fact that the environment can play the same role as an artificial ‘selector’.

Comment #105039

Posted by jeannot on June 11, 2006 5:11 PM (e)

Todd wrote:

The notion of problem and solution may only exist in intelligent minds, the goal to solve the problem may exist only in intelligent minds, but the problem itself exists in nature. It is a fact that in an environment with a high concentration of an antibiotic, bacteria will die unless they evolve resistance to it.

Sure. But what I mean is that the conception of problem and solution is not necessary in evolutionary biology. In nature, they are abstractions. These notions disappear when you formulate it this way: ‘bacteria that have some mutations can survive and reproduce in an environment with a high concentration of antibiotic’.
Saying that mutations and SN are trying to resolve a problem or reach a goal is not scientific. The mutations you transmitted to your children (if you have any), what problem are they resolving? What goal is your lineage trying to reach? Will it stop evolving once it reaches it?

RBH, thank you for the clarification. :-)

Comment #105040

Posted by Flint on June 11, 2006 5:55 PM (e)

So would it be correct to say that theistic evolutionists picture their god as surreptitiously diddling with the environment, manipulating the fitness function so as to direct RM+NS to produce the desired lineages? Would this manipulation be sufficient in and of itself, or would this flavor of god also be required to interfere in other ways, such as drift control, or directing mutations according to the biases required to get the target results?

In any case, it’s quite an elegant technique if the target result isn’t known in advance, but its capabilities are.

Comment #105042

Posted by RBH on June 11, 2006 6:15 PM (e)

Caledonian wrote

METHINKS is a representation of biological evolution; it’s just that it explicitly defines what the environment will favor. Other simulations implicitly define this by generating environments in which certain traits will prove more robust.

Caledonian left out a phrase in the first sentence: “… and calculates fitness as a function of distance from the most favored state.” But that’s precisely what biological evolution does not do. Providing an experimental environment does not define the fitness function, the equation that calculates the relative fitness of replicators in a population. And it’s that function that determines whether an algorithm is a goal-seeking algorithm or an evolutionary algorithm. One can use random mutations and selection in a goal-seeking algorithm, but one is not emulating biological evolution when one does so.

Other simulation platforms – e.g., Avida – calculate fitness from information available in the immediate local environment. In calculating relative fitness the evolutionary algorithm (in contrast to the experimental shell around it) does not have information about global properties of the fitness landscape. Those platforms allow manipulating variables like the topography of the fitness landscape to test various hypotheses about biological evolutionary dynamics. METHINKS-style algorithms tell us little or nothing about those dynamics because they are search algorithms, not evolutionary algorithms.

The confusion is a product of using (explicitly or implicitly) a search metaphor for biological evolution. Biological evolution finds satisficing solutions (at great cost in replicator mortality), but does so as a side effect – a by-product – of the operation of an algorithm that “knows” only local fitness values and local values of environmental variables.

RBH

Comment #105044

Posted by Reed A. Cartwright on June 11, 2006 6:19 PM (e)

I’m going to have to disagree with RBH. Models that involve a global optimum can be models of biological evolution. They are not going to approximate a complete biological fitness function, but they can approximate parts of fitness functions, i.e. areas with optima. I’ve used them to look at adaptation to environmental disturbance and stochasticity. I have a friend who’s used them to look at domestication and recombination.

I should also point out that the WEASEL program is not a model of mutation and selection, but rather of substitution and selection.

Comment #105046

Posted by Reed A. Cartwright on June 11, 2006 6:26 PM (e)

One can take a goal-oriented fitness function and cast it in terms that are goal-less.

The optimal phenotype is 1111.

versus

Each 1 adds 1/4 to the fitness.
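A couple of lines of code make that equivalence explicit (a hypothetical sketch, assuming binary genomes of length four): the goal-oriented and the goal-less formulations assign exactly the same fitness to every candidate, so any selection scheme will rank them identically.

```python
TARGET = [1, 1, 1, 1]

def fitness_goal(genome):
    """Goal-oriented form: fraction of positions matching the optimum 1111."""
    return sum(a == b for a, b in zip(genome, TARGET)) / len(TARGET)

def fitness_goalless(genome):
    """Goal-less form: each 1 simply adds 1/4 to the fitness."""
    return sum(genome) / 4

# For binary genomes the two functions agree everywhere.
for g in ([0, 0, 0, 0], [1, 0, 1, 0], [0, 1, 1, 1], [1, 1, 1, 1]):
    assert fitness_goal(g) == fitness_goalless(g)
```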

Comment #105047

Posted by Wheels on June 11, 2006 6:29 PM (e)

It’s like these chuckleheads don’t understand what the word “MODEL” means.

Comment #105050

Posted by normdoering on June 11, 2006 6:52 PM (e)

Flint wrote:

So would it be correct to say that theistic evolutionists picture their god as surreptitiously diddling with the environment, manipulating the fitness function so as to direct RM+NS to produce the desired lineages?

That sounds right to me.

But I suspect that if there really were an intelligent designer it wouldn’t need a couple billion years to evolve something as confused as Jerry Falwell.

Comment #105069

Posted by Caledonian on June 11, 2006 8:42 PM (e)

Caledonian left out a phrase in the first sentence: “… and calculates fitness as a function of distance from the most favored state.” But that’s precisely what biological evolution does not do.

Yes, it does! More precisely, it *can*. It’s not often that fitness spaces are configured that way, but there’s no reason they can’t be. In a simplified simulation, such a space is perfectly appropriate.

Comment #105070

Posted by Torbjörn Larsson on June 11, 2006 8:42 PM (e)

jeannot says,

“Torbjörn, you didn’t understand me. But my English might not be perfect.
Evolution doesn’t have any specific objective nor problem to resolve. A fitness increase is not a specific objective, contrary to the program described here.”

No, it is my English that is failing. “Objective” implies teology, so I shouldn’t have used that; it is anthropomorphising of software.

However, the software or algorithm solves a problem and finds a solution (at least if we naively think of the fitness function as constant) since it can find a local maximum of fitness. This is what it’s called in math, physics or software, so it should preferably be possible to say so without confusing it with anthropomorphic purpose.

“Therefore, I concluded that this computer program is totally irrelevant to evolutionary biology.”

I see that you resolved this later.

Comment #105071

Posted by Torbjörn Larsson on June 11, 2006 8:59 PM (e)

DaveScot wrote:

“The way this is accomplished is not disclosed and if it were disclosed I’m sure we’d find the program is cheating by sneaking information in via the filter which ranks the “fitness” of the intermediate outputs.”

IDiots still don’t understand peer review! The paper should not pass peer review if not enough information is disclosed for repetition of the experiments. If it does anyway, an attempted repetition with the help of the authors will resolve the issue, or it is easily refuted, perhaps even deemed a fake. This makes it desirable for the authors to keep, for a reasonable time, any information they felt was redundant or too complex to put in the paper.

Perhaps this explains why ID so easily claims ‘peer review’ on some of their papers.

Comment #105072

Posted by Torbjörn Larsson on June 11, 2006 9:02 PM (e)

““Objective” implies teology”

“Objective” implies teleology. (Not that the difference matters much. :-)

Comment #105073

Posted by RBH on June 11, 2006 9:07 PM (e)

Reed wrote

I’m going to have to disagree with RBH. Models that involve a global optimum can be models of biological evolution. They are not going to approximate a complete biological fitness function, but they can approximate parts of fitness functions, i.e. areas with optima. I’ve used them to look at adaptation to environmental disturbance and stochasticity. I have a friend who’s used them to look at domestication and recombination.

and

One can take a goal-oriented fitness function and cast it in terms that are goal-less.

The optimal phenotype is 1111.

versus

Each 1 adds 1/4 to the fitness.

Two terms to keep straight:

1. Fitness function: the equation that calculates fitness values for members of a population.

2. Fitness landscape: the topography of fitness values over the range of permissible values (alleles) of ‘genes’ and combinations of ‘genes’. Fitness landscapes are induced by evolutionary operators (most notably mutations of various kinds and recombination).
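
A toy illustration of the distinction (in Python, for 3-bit genotypes; it ignores, for simplicity, the role of mutation and recombination operators in inducing the landscape's neighborhood structure):

    from itertools import product

    def fitness(genotype):
        # 1. the fitness function: a value for any individual you hand it
        return sum(genotype)

    # 2. the fitness landscape: the table of those values over every permissible genotype
    landscape = {g: fitness(g) for g in product((0, 1), repeat=3)}
    print(landscape)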

My point is not that models that “involve” global optima on the fitness landscape are not models of biological evolution. It is that models that employ knowledge of a global optimum in the fitness function to calculate the fitnesses of current members of an existing population for purposes of determining relative reproductive success of those members are not models of biological evolution. If the fitness function of the simulation of evolution within the model has access to higher-order information about the distant topography of the fitness landscape, information not embodied in the distribution of fitness values of current members of the population, then it is not a veridical model of biological evolution.

Obviously we – experimenters – may know more about the topography of an experimental fitness landscape than does the fitness function in our models. Or we may not. In my company we visualize 2-D and 3-D ‘slices’ of the high-dimensioned fitness landscapes our GAs evolve on in order to assess their statistical properties, but we can’t ‘see’ the whole landscape – they can run to over 100 dimensions with on the order of 50 alleles per dimension – and we have no bloody idea what a global optimum looks like. As I remarked, if we did we’d just write that down and use it. But the selection leg of the mutation-selection algorithm in a model cannot have access to that higher-order topographical information if it is to be a veridical model of biological evolution. In biological evolution there are no signposts along the path saying “This way to the summit”.
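
In code, the contrast is simply that a veridical selection step looks only at the fitness values of individuals actually present. A minimal sketch (illustrative Python only, not the GAs we actually run):

    import random

    def select_parents(population, fitness, n):
        # fitness-proportionate selection using only the current members' values;
        # nothing here can "look ahead" to distant peaks on the landscape
        scores = [fitness(ind) for ind in population]
        return random.choices(population, weights=scores, k=n)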

RBH

Comment #105074

Posted by steve s on June 11, 2006 9:09 PM (e)

Perhaps this explains why ID so easily claims ‘peer review’ on some of their papers.

because

“If it weren’t fer baaaaad luck,
I’d have no luck ay-tall…”

Comment #105075

Posted by steve s on June 11, 2006 9:35 PM (e)

Astrophysicists program astrophysics models.
Chemists program chemistry models.
Biologists program biology models.

The IDists try to dismiss any computer simulations of evolution with hand waving garbage like “Well, a programmer acted intelligently on the system, thereby injecting information, so it’s not really evolution.”

Take that stupidity to its conclusion–it can only be a computer model of evolution if nobody set up the model. If that were true, evolution would be uniquely prohibited from ever being modeled by scientists on a computer.

That’s the kind of Grade A thinking you can only get from The Short Bus on the Information Superhighway, Uncommonly Dense®.

Comment #105077

Posted by Caledonian on June 11, 2006 10:03 PM (e)

It is that models that employ knowledge of a global optimum in the fitness function to calculate the fitnesses of current members of an existing population for purposes of determining relative reproductive success of those members are not models of biological evolution.

That statement is wrong. The fitness function in the models in question is defined in terms of a specific strategy – it’s not knowledge of an optimum, it’s *defined* to be the optimum.

It really makes no difference whether we define where the optimum strategy is explicitly and shape the fitness space around it, or set up a random mathematical space with a local optimum and let the programs shape themselves to match *that* strategy.

You don’t seem to be grasping this point, though, so I don’t believe future discussion would be productive.

Comment #105085

Posted by PvM on June 12, 2006 12:04 AM (e)

“The way this is accomplished is not disclosed and if it were disclosed I’m sure we’d find the program is cheating by sneaking information in via the filter which ranks the “fitness” of the intermediate outputs.”

I provided the reference to the paper. If DaveScot has problems comprehending the paper then he can surely ask relevant questions. Cognitive dissonance may, however, complicate matters.

Comment #105089

Posted by Reed A. Cartwright on June 12, 2006 12:19 AM (e)

RBH, I agree that in biological evolution fitnesses are not determined by the distance to an optimal genotype. That does not mean that they can’t in some way correlate with distance from an optimal genotype. (Phages are an example where optimal genotypes have been observed.) Model builders can exploit such correlation when employing simple fitness functions in their research.

Comment #105091

Posted by Jim Harrison on June 12, 2006 1:42 AM (e)

In a corporation, the CEO always has an advantage over his engineers because they are judged by how well they meet their goals while the CEO can always claim that whatever happened was what he had in mind. Natural selection has the same kind of executive edge. Just as the Big Cheese in a company only cares about profits, evolution only searches for higher levels of fitness. To that end, anything goes, including becoming a blind parasite in the bladder of an octopus.

The American philosopher C. S. Peirce made a similar point about human creativity, which involves a search of what Peirce called “the Sea of Musement” for results that fit vague and ambiguous goals such as Beauty, Goodness, or Truth. Whenever people have to find something specific, they automatically become stupider, since the probabilities of locating the desired result are so small. Finding something interesting and then retroactively declaring it’s what you wanted is much easier. Unfortunately, in most areas of human endeavor only a minority of individuals have the right to operate in this fashion. As Mel Brooks pointed out, “It’s good to be the king.”

Comment #105133

Posted by jeffw on June 12, 2006 6:13 AM (e)

RBH, I agree that in biological evolution fitnesses are not determined by the distance to an optimal genotype. That does not mean that they can’t in some way correlate with distance from an optimal genotype. (Phages are an example where optimal genotypes have been observed.)

I’m curious as to what is considered “optimal”. It seems like a highly relative and subjective term. It may have meaning in a tightly controlled laboratory environment, but not in nature, where such “optima”, even when you can define them, would constantly be changing, along with countless other interrelated parameters and “optima”. Optimally, I’d be 7 feet tall, could lift 1000 lbs, and have an IQ of 250, but that would be complete overkill as far as nature is concerned. A species’ “optima”, if one exists, would be relative to its own highly mutable environment, created jointly by the non-living world, other species, and itself (feedback).

Comment #105143

Posted by Caledonian on June 12, 2006 7:44 AM (e)

It doesn’t matter one whit whether the slope of the fitness function is calculated by referencing a known strategy, or whether it’s generated by a specific equation that doesn’t explicitly refer to any particular strategy, if the shapes of the functions are identical.

As long as the fitness function is continuous, there are going to be local regions where the surrounding terrain slopes towards optima. The size of the regions depends on how steep that slope is, granted, but the slope remains.

Claiming that a simulation with such slopes doesn’t speak about biological evolution is not only incorrect, but almost certainly disingenuous.

Comment #105198

Posted by jeannot on June 12, 2006 12:02 PM (e)

Caledonian wrote:

It doesn’t matter one whit whether the slope of the fitness function is calculated by referencing a known strategy, or whether it’s generated by a specific equation that doesn’t explicitly refer to any particular strategy, if the shapes of the functions are identical.

What if they are not?
Of course, models with similar equations are equivalent. But the problem is: “what kind of model is more likely to produce relevant answers to an evolutionary question?”

Evolutionary models (at least, all I’ve seen) don’t involve explicit phenotypes/strategies as goals to be reached. It is the other way around: they determine the evolutionarily stable outcome (ESS, phenotype…).
Of course, once your model has determined the stable outcome (at a peak of the adaptive landscape) you can produce a homologous model where the fitness function is based on the distance to this outcome. But this would be pointless.

Comment #105204

Posted by Sir_Toejam on June 12, 2006 12:41 PM (e)

Claiming that a simulation with such slopes doesn’t speak about biological evolution is not only incorrect, but almost certainly disingenuous.

I’m curious if you or Jean have actually seen whether models have correctly predicted the direction a trait will take in the field.

All the models I examined in the 90’s were quite general, mainly because it was so difficult to be inclusive of all selection variables in a field situation, and have a good idea as to how to quantify them.

That said, I have seen some specific situations (see some of the work by John Endler, for example), where it seems clear that there are identifiable and quantifiable selective pressures on color and fin shape in male poeciliids (guppies in this case). However, I haven’t seen any models produced.

Any references you’ve run across recently, that attempt to model specific traits for a specific population in the field?

Comment #105209

Posted by RupertG on June 12, 2006 1:37 PM (e)

Steve S wrote:

Astrophysicists program astrophysics models.
Chemists program chemistry models.
Biologists program biology models.

The IDists try to dismiss any computer simulations of evolution with hand waving garbage like “Well, a programmer acted intelligently on the system, thereby injecting information, so it’s not really evolution.”

Take that stupidity to its conclusion—it can only be a computer model of evolution if nobody set up the model. If that were true, evolution would be uniquely prohibited from ever being modeled by scientists on a computer.

That’s the kind of Grade A thinking you can only get from The Short Bus on the Information Superhighway, Uncommonly Dense®.

It’s better than that, Jim. These people are saying that we’ve achieved actual artificial intelligence in evolutionary models.

Lordy, if they could just write that up in a paper and get it into a journal, they’d be some of the most famous people in the business - and beyond. Think of the kudos that would attach to ID thinking.

Why do they keep such a huge secret to themselves? Are they hoping to create their own AI in the basement of Dembski’s BBQ And Grill, and unleash it on the world to precipitate the Rapture?

The least they could do would be to write a model that models evolution as they see it and show that it doesn’t work. That would be creative, interesting and productive.

R

Comment #105216

Posted by Reed A. Cartwright on June 12, 2006 2:13 PM (e)

I’m curious as to what is considered “optimal”. Seems like a highly relative and subjective term.

Phages are one example where “optima” have been observed, defined as a genotype or groups of genotypes that have maximal growth rate. Phages are small viruses that attack bacteria. Of course, in the long run the phages’ environment is not constant; however, phages can probably go through billions or trillions of generations before their environment changes in a way that affects them.

Comment #105217

Posted by Caledonian on June 12, 2006 2:21 PM (e)

What if they are not?

Well, what if they aren’t? The point is that it doesn’t matter how the fitness slope is determined – obviously two different fitness slopes will produce different kinds and rates of adaptation.

Comment #105228

Posted by jeannot on June 12, 2006 3:44 PM (e)

Sir_T wrote:

I’m curious if you or Jean have actually seen whether models have correctly predicted the direction a trait will take in the field.

Hi Thomas.

I’m not saying that models specifying an optimum cannot be compared to some kind of biological process, nor that they can’t produce predictions (I wasn’t clear on this). After all, the highly disputed Optimal Foraging Theory produced models whose predictions have been validated. Similarly, the evolution of camouflage or mimicry can be treated as evolution towards an optimum (perfect imitation), where fitness may be correlated with the distance to this optimum.

What I am saying is that we can make all the metaphors we like (problems, solutions, goals); they will always remain metaphors, and not biological realities. Structures that reproduce more efficiently increase in frequency. That is a reality. (To some degree, of course; I personally think that reality is not attainable, but this is a philosophical problem.)

Comment #105229

Posted by JAllen on June 12, 2006 3:52 PM (e)

NewScientist: Nuclear reactors ‘evolve’ inside supercomputers

NewScientist wrote:

Nuclear reactors could be built more efficiently using supercomputers to artificially “evolve” designs, say engineers from the US Department of Energy’s Oak Ridge National Laboratory in Tennessee.

They have found they can speed up the extremely complex process of designing a reactor and generate novel designs from scratch by simulating natural selection.

NewScientist wrote:

Qualls and his colleagues were looking for a more efficient design approach and found inspiration in biological evolution. They used software tools known as genetic algorithms to evolve different reactor designs.

NewScientist wrote:

“[Simulated evolution] will come up with some systems we would just never have thought of,” Qualls says. “It won’t replace the experts or come up with a finished design, but it makes it possible to consider options they wouldn’t have had otherwise.”

Comment #105237

Posted by stevaroni on June 12, 2006 4:19 PM (e)

It seems like we’re arguing about the shape of the container when the important point is that the fluid flows in such a way to fill it.

The given criterion, spell “Hello World”, is a simple arbitrary filter, and the fact that some intelligent force picked it out with the highly technical method of “hmm, that’s kinda funny, I think I’ll test this one” doesn’t mean anything significant.

There are endless possible “fitness criteria”. I have a bowl of M&M’s on my desk and it seems in that particular species the red M&Ms die young and dark brown M&M’s live into old age.

Even in a single environment there may be many possible survival strategies competing at the same time. You can stay away from the lions by getting faster (gazelles), growing bigger (elephants), learning to climb trees (monkeys) or out-thinking them (H. erectus).

The really important point isn’t that the fitness criterion draws you to a pre-arranged solution, it’s that it makes you move in the first place.

Secondly, the date on the paper we’re talking about is 1995. That’s at least a century and a half ago in computer years. Isn’t there follow-on data available about this kind of experiment that shows its strengths and weaknesses?

Comment #105238

Posted by Sir_Toejam on June 12, 2006 4:20 PM (e)

I’m not saying that models specifying an optimum can not be compared to some kind of biological process nor that they can’t produce predictions (I wasn’t clear on this). After all, the highly disputed Optimal Foraging Theory produced models whose predictions have been validated.

Thanks; I think I understood what you meant. It’s just that not having institutional access I was wondering if you had run across more recent modeling attempts in the literature. I’m sure they’re there, I’m just missing them is all.

Comment #105245

Posted by jeannot on June 12, 2006 4:44 PM (e)

Caledonian wrote:

From RBH: “It is that models that employ knowledge of a global optimum in the fitness function to calculate the fitness of current members of an existing population for purposes of determining relative reproductive success of those members are not models of biological evolution.”

That statement is wrong.

If a global optimum is defined a priori, these models don’t simulate soft selection acting on a population.
If I am correct (you tell me), they are unrealistic, and RBH is right.

Comment #105248

Posted by David B. Benson on June 12, 2006 5:16 PM (e)

Re #105001: Jim Harrison — You state something to the effect that circuits which have been artificially evolved are more robust than designed circuits. I would greatly appreciate a reference. Thanks.

Comment #105250

Posted by RBH on June 12, 2006 5:34 PM (e)

Caledonian wrote

As long as the fitness function is continuous, there are going to be local regions where the surrounding terrain slopes towards optima. The size of the regions depends on how steep that slope is, granted, but the slope remains.

Claiming that a simulation with such slopes doesn’t speak about biological evolution is not only incorrect, but almost certainly disingenuous.

I’m unaware that I or anyone else here has argued that a fitness function (equation) that makes no reference to distant features of the topography of a fitness landscape implies that fitness landscapes have no slopes. The question of how fitness is calculated for members of a population is separate from questions about the topography of the fitness landscape. I can fully concede (indeed, I can proclaim, as I have done in a number of venues) that ‘natural’ fitness landscapes are chock full of slopes and gradients leading to local optima – they display locally autocorrelated surfaces – while simultaneously insisting that a model purporting to represent biological evolution cannot use knowledge of properties of the topography outside the specific region occupied by members of a population at a given moment.

RBH

Comment #105251

Posted by Jim Harrison on June 12, 2006 5:51 PM (e)

David Benson asks: Re #105001: Jim Harrison — You state something to the effect that circuits which have been artificially evolved are more robust than designed circuits. I would greatly appreciate a reference. Thanks.

The complete references are in a late chapter of Andreas Wagner’s book Robustness and Evolvability in Living Systems (2005)—I’d give you page numbers, but I think I’ve loaned out my copy of the book.

Comment #105262

Posted by David B. Benson on June 12, 2006 6:27 PM (e)

Jim, thanks. I’ll check it out of my library.

Comment #105268

Posted by jeffw on June 12, 2006 6:45 PM (e)

Phages are one example where “optima” have been observed, defined as a genotype or groups of genotypes that have maximal growth rate. Phages are small viruses that attack bacteria.

OK, thanks, but doesn’t “optima” refer to a phenotype (not genotype) on the fitness landscape? If you can say with certainty that a genotype is optimal, then wouldn’t you also have to possess a complete understanding of the ontogeny of the species, in absolute detail? Is that possible now?

Comment #105288

Posted by steve_h on June 12, 2006 8:40 PM (e)

Warning: Well-adjusted people should just skip this.

The problem faced by the PT-linked program is much simpler than the one posed by UD. AIUI, the PT program simulates a computer which uses a subset of the Z80 instruction set, in which a sequence of 8-bit bytes represents instructions and/or data. Any of the 256 possible 8-bit byte values can be written as a hexadecimal number, and most will also represent a valid instruction of some sort depending on context.

[Warning: the following may contain errors of detail, especially in interpreting byte ordering (is 0102 hex 258 or 513 decimal?) or the order in which bytes get loaded into a register.]

For instance the sequence: 21 11 01 65 66, occupying consecutive memory locations from 0100 to 0104 can be interpreted rather differently depending on which of those bytes you look at first.

[geeky aside]
The Z80 has several 8-bit registers (each of which can hold a number from 00 to FF (hex), i.e. 0-255 (unsigned decimal) or -128 to 127 (signed decimal), or a character such as “A”, depending on how you are looking at them).

These registers are labeled A, B, C, D, E, F, H, and L. Some of the registers are combined to form 16 bit numbers by some instructions. Eg AF, BC, DE, HL. In such a case, if D=1F and E=03, then DE=1F03 (or 7939 decimal).

There are some other 16 bit registers IX, IY which I’ll skip, and one called PC (Program Counter) which contains the memory address of the next instruction. Generally 16 bit values can be numbers from 0 to 65535 (or -32768 to +32767) or can reference any single byte of memory in a computer with 65536 bytes of RAM (=64k).

If your Program Counter (PC) contains 0100, those numbers will be interpreted as

0100  21 11 01   LD HL, 0111    (21 means load register pair HL with the next two bytes,
                                 low byte first: L gets 11 hex [= 16*1 + 1 = 17 decimal],
                                 H gets 01, so HL combined = 0111 hex = 273 decimal)
0103  65         LD H,L         (65 means copy the number in register L to register H,
                                 so now both H and L contain 11 hex)
0104  66         LD H, (HL)     (66 means take the 16 bit value of H and L combined, 
                                 use that to form an address. 1111 in hex is 4369 in decimal,
                                 so we take whatever 8-bit value is stored in memory location 
                                 4369 and copy that to register H)

and would have the effect of setting L to 11 hex, and H to whatever was in memory location 4369.

If, OTOH, the program counter started at 0101, these instructions would be interpreted as

0101 11 01 65   LD DE, 6501  (or load register E with 01 and D with 65 hex = 101 decimal = 'e' in ASCII)
0104 66         LD H,  (HL)  (as in the previous example)

and if PC was 0102 you'd get

0102 01 65 66   LD BC, 6665  (BC = 26213 decimal, or C = 65 hex = 101/'e' and B = 66 hex = 102/'f')

if you swap 21 and 11 you get

0100 11 21 01  LD DE, 0121
0103 65        LD H,L
0104 66        LD H, (HL)

[/geeky aside][resume slightly less geeky mode]

Almost any input is valid to the computer, but the results vary in usefulness to its human owner.

OTOH, if you write “rpintf(“Hello World”);” or mis-spell printf in any other way in a C program, you get a link error (function ‘rpintf’ not found) and your program doesn’t run. If you forget the “;” or either of the quote characters you get a compiler error and your program doesn’t run. In fact, almost any mistake not inside the string “Hello World” will result in your program failing to compile. This is because high-level compilers are designed to trap common human errors - that is, really designed, not just “I don’t understand it therefore it was designed”-designed. They allow us to write human-friendly programs which free the programmer from a lot of tedious detail. One mistake anywhere in a 10,000-line program can prevent the whole program from working. This gives the human programmer a chance to fix the program before it causes any damage; a program which doesn’t run is better than a program which introduces subtle errors. Conversely, in simple machine code almost any input causes something to happen, but often not what was intended.

DNA is rather different from a high-level computer language; everyone’s DNA contains some replication errors. These errors are not caught by any compiler or any programming tool - instead you live your life and then maybe one day you get some “access violation” or infinite loop and you die (and maybe get submitted for analysis), or you’re just constantly sick. There’s no friendly “‘G’ expected but got ‘T’ at location 1F374C47” message or “some quote was unmatched” from the compiler which the midwife corrects.

But that’s not to say that any DNA sequence will produce something - I just don’t know - but it is by no means clear to me whether DNA is better represented by a high-level computer language (which I think was chosen to inflate the numbers) or a low-level one, if indeed it’s a formal language at all and not just a part of a really complex series of chemical reactions.

And of course, I am not arguing that the Z80 was not designed; It was, and by humans.

Also, a lot of the posts at the UD site are going on about the additional complexity required by the operating system and the BIOS and so on, but the PT paper doesn’t need those. Operating systems, assemblers and compilers came later in the ‘(non-biological) evolution’ of computers in order to make life easier for humans, but the computer does fine without them. C is easier for humans but harder for machines or evolution models. To a computer, the horrible hex stuff is a given, and it doesn’t matter if it was preprogrammed, random, or entered bit by bit using electrical switches, paper tape or a fancy compiler program. Output is produced by executing an OUT instruction or by copying the number representing output into a memory location mapped to the display unit, etc. The paper dealt with a simple virtual computer chip using a subset of the Z80 with no operating system or BIOS. Their VM writes output by writing to ‘physical’ ports directly. The chances of hitting an “OUT” instruction and producing some sort of output on the Z80 are better than 1 in 256, whereas your chance of stumbling upon ‘printf(“X”)’ is much worse (1 in 256^8). Also, they didn’t implement block transfers, which would allow the output of whole strings with one two-byte sequence and a few bytes of setup.

Granted, the Z80 architecture itself was no accident, but we are taking that as a given.

One thing that bugs me. Many IDers and creationists often describe DNA as a computer code. So, OK you guys, what are the basic instructions in DNA? How many bits are used to represent them? Does the architecture have registers, busses, stacks, formal syntax, debugging tools, formal methodologies, data structures, standard algorithms? What’s the DNA equivalent of a GOTO, or a CALL/RETURN, or indirect or indexed addressing modes, or, at the level of C, formal function parameters, preprocessor directives, loops, conditionals, BNF? What library functions are available (and what arguments do they take)?

We’ve known about DNA for about half a century now, and the ID/creationist side argues that an inability to produce a full mutation-by-mutation history is proof that science is bogus and the desi– I mean, God did it. So I think it’s only fair that I ask for a detailed blow-by-blow account of the DNA-based computer architecture.
When we have that, we can look at how a mutation in an instruction affects the program, and how that affects the individual, and, ultimately, how individual mutations affect the individual’s survival in a complex environment of interacting DNA machines. After that, we can decide whether the required step-by-step mutation history of any given life form is impossible or not.

Comment #105290

Posted by steve s on June 12, 2006 9:15 PM (e)

Granted, the Z80 architecture itself was no accident, but we are taking that as a given.

You and I can take that as a given. Creationists can’t. Any part of any simulation or experiment that a human’s gotten within 10 feet of, has been infected by information cooties and the whole thing is worthless.

I know, I know, it’s the dumbest thing you’ve ever heard. That doesn’t stop them, however.

Comment #105352

Posted by Caledonian on June 13, 2006 8:08 AM (e)

RBH wrote:

The question of how fitness is calculated for members of a population is separate from questions about the topography of the fitness landscape.

Oh, really? Let’s keep that in mind while we read the rest of your comment.

I can fully concede (indeed, I can proclaim, as I have done in a number of venues) that ‘natural’ fitness landscapes are chock full of slopes and gradients leading to local optima — they display locally autocorrelated surfaces — while simultaneously insisting that a model purporting to represent biological evolution cannot use knowledge of properties of the topography outside the specific region occupied by members of a population at a given moment.

There’s the spot where you seem to become more than usually confused. The program isn’t “using” knowledge of the optimum strategy in any way other than determining the shape of the fitness space. The pressures on the populations are determined only by the fitness space that is calculated. The fact that the slope of the space depends upon the predefined, correct strategy is irrelevant.

Defining a fitness space, and then adding in another pressure that is not included in the fitness space, is indeed cheating. That is not what is happening in this case, and not what any reasonable and intelligent person would conclude is implied in this case.

If instead of using a predefined, correct strategy to calculate the shape of the fitness space, we used a complex mathematical equation that made no explicit reference to any particular strategy, the computer would still “know” what the slope of the surrounding territory is at any moment. If we accept the inherent premises of your argument, then no fitness space model could ever be considered to demonstrate evolution – in fact, since fitness spaces in reality are not independent of their surroundings either, actual biological evolution cannot serve as an example of biological evolution.

In short, your point is incorrect, your argument is incorrect, and your position in the context of this experiment is specious. Do not bother with further questions – you have already been recategorized as a fool, and I will not waste my time further attempting to educate you.

Comment #105418

Posted by jeannot on June 13, 2006 3:51 PM (e)

Caledonian wrote:

The program isn’t “using” knowledge of the optimum strategy in any way other than determining the shape of the fitness space.

But that’s what we have been criticizing since the beginning, haven’t we?

The fitness of an allele can’t be defined by the global optimum alone (via the fitness function, as in the “hello world” program), i.e. without explicit knowledge of the local and variable environment, particularly other alleles in the population (my reference to soft selection).

Comment #105420

Posted by Rilke's Granddaughter on June 13, 2006 4:08 PM (e)

Caledonian wrote:

In short, your point is incorrect, your argument is incorrect, and your position in the context of this experiment is specious. Do not bother with further questions — you have already been recategorized as a fool, and I will not waste my time further attempting to educate you.

This reply appears to be entirely disproportionate…. Not to mention the fact that I don’t think you’ve understood what RBH is saying.

Comment #105454

Posted by RBH on June 13, 2006 5:02 PM (e)

RGD wrote

This reply appears to be entirely disproportionate…. Not to mention the fact that I don’t think you’ve understood what RBH is saying.

I agree on both counts. Random name-calling doesn’t contribute much to understanding and doesn’t advance conversations. Let me try one more time.

Caledonian wrote

There’s the spot where you seem to become more than usually confused. The program isn’t “using” knowledge of the optimum strategy in any way other than determining the shape of the fitness space. The pressures on the populations are determined only by the fitness space that is calculated. The fact that the slope of the space depends upon the predefined, correct strategy is irrelevant.

If a model’s calculation of the relative fitness of an individual in a population takes into account values on the fitness landscape that are not occupied by that individual, then that model does not veridically represent biological evolution.

For example, if in an algorithm the fitness of individual X = (optimum body size - current body size), where “optimum body size” is the value of some peak on the fitness landscape distant from the point currently occupied by the individual, then that algorithm does not veridically represent biological evolution. It may illustrate some aspect of evolution – e.g., the power of cumulative selection to transform tiny probabilities into large probabilities, as in Dawkins’ METHINKS illustration – but it does not represent biological evolution with anything like the necessary fidelity to do serious work on most questions in evolution.

Much earlier in this thread Caledonian wrote (in response to jeannot)

One of the most common ID arguments is that the specific attributes of living creatures could not have arisen through natural selection. This experiment is a quick way of demonstrating what has been proven again and again in more sophisticated and decisive ways: with the right selection pressures, any specified structure can be produced quite easily.

I fully agree (subject to constraints by things like developmental canalization and the production of the necessary variants). We have no argument there, and if Caledonian imagines that I am arguing otherwise, then RGD is wholly correct: Caledonian does not understand what I’m saying.

RBH

Comment #105458

Posted by Sir_Toejam on June 13, 2006 5:19 PM (e)

If we accept the inherent premises of your argument, then no fitness space model could ever be considered to demonstrate evolution…

hmm, I can’t recall seeing a “fitness space model” that actually WAS used “to demonstrate evolution”. All the models I’ve ever seen were used to test general assumptions and predictions, not to specifically model and predict the evolution of a specific trait within a real-world population.

Hence the exact reason I was curious as to whether any have been used in such a way in the last 10 years or so, which is about the last time I ever examined the literature on the subject.

Rather than discussing the “potential” of fitness models, perhaps it would be useful to examine just how close any have come to modeling an example in-situ?

Comment #105468

Posted by RupertG on June 13, 2006 6:30 PM (e)

steve_h wrote:

These registers are labeled A, B, C, D, E, F, H, and L. Some of the registers are combined to form 16 bit numbers by some instructions. Eg AF, BC, DE, HL. In such a case, if D=1F and E=03, then DE=1F03 (or 7939 decimal).

There are some other 16 bit registers IX, IY which I’ll skip, and one called PC (Program Counter) which contains the memory address of the next instruction. Generally 16 bit values can be numbers from 0 to 65535 (or -32768 to +32767) or can reference any single byte of memory in a computer with 65536 bytes of RAM (=64k).

Aw, man! You forgot the alternate register set! And the stack pointer! And the flags and the interrupt page and the refresh registers! If I were a creationist/IDer, I’d say that these oversights completely disprove your analogy and thus God Did It (but that my complete lack of programming prowess is entirely insignificant).

R

Comment #105470

Posted by stevaroni on June 13, 2006 7:18 PM (e)

Aw, man! You forgot the alternate register set!

The alternate register set! What a freakin’ awesome idea that was!

I miss that on the processors I’ve had to use since then. Why did nobody else do that? How much hardware could it have taken, anyhow – a handful of muxes and a few registers?

Do the current Zilog microcontrollers still have that?

Comment #105472

Posted by steve_h on June 13, 2006 7:21 PM (e)

RupertG wrote:

Aw, man! You forgot the alternate register set! And the stack pointer! And the flags and the interrupt page and the refresh registers! If I were a creationist/IDer, I’d say that these oversights completely disprove your analogy and thus God Did It (but that my complete lack of programming prowess is entirely insignificant).

I apologise for any offence which may have been caused by these oversights. You’ve got me bang to rights and I am contrite.
I sort of alluded to the flags in the ‘F’ of ‘AF’ and to SP in one of the questions which will never be answered, and I fickly discarded alternate registers sets as irrelevant to the problem in hand, but that in no way diminishes my guilt. While I’m in confession mode, I never used I or R and don’t recall ever being aware of them (Sobs and/or rejoices, not sure which).

Comment #105475

Posted by Andreas Bombe on June 13, 2006 7:32 PM (e)

steve_h wrote:

One thing that bugs me. Many IDers and creationists often describe DNA as a computer code. So, OK you guys, what are the basic instructions in DNA? How many bits are used to represent them? Does the architecture have registers, busses, stacks, formal syntax, debugging tools, formal methodologies, data structures, standard algorithms? What’s the DNA equivalent of a GOTO, or a CALL/RETURN, or indirect or indexed addressing modes, or, at the level of C, formal function parameters, preprocessor directives, loops, conditionals, BNF? What library functions are available (and what arguments do they take)?

That is completely the wrong way to think about it. General-purpose processors linearly execute code which accesses storage and processes data. The genome, simply put, is a collection of genes with associated triggers and inhibitors. At the basic level these work independently; a gene is transcribed when the logical conditions set by the combination of triggers and inhibitors are met.

(Short interlude: I’m a programmer, not a biologist, all I write results from an interest in the area, represents my own understanding and may be oh so very wrong. Corrections very welcome.)

As such, the genome is best described in computer science terms as a collection of state machines (working in parallel). However, these are non-deterministic and probabilistic, since the simple presence of a trigger molecule in the cell isn’t enough; it has to be at the right spot when the transcription machinery also happens to be there. This makes it dependent on the concentration of the trigger (higher concentration = higher probability for one to be there at the right time) and other variables, like how strongly a trigger molecule is bonded to the trigger slot, which determines how long it stays there on average.

There’s input - molecules from the environment which directly trigger genes, or which trigger the production of signaling molecules in some other part of the cell which in turn trigger genes. There’s output - genes transcribed into RNA which represent building instructions for proteins. By the way, there is some actual code here: The RNA to protein translation happens in words of three RNA bases which each encode one of 20 amino acids, plus start and stop codes.

There’s also internal processing - transcriptions which have no function outside the nucleus and only trigger other genes (the protein production started by one gene can also control another gene directly or by side effects). Since each state machine can potentially influence any number of others, they combine to a powerful state machine that is enormously complex seen as a whole, thanks to the sheer combinatorial complexity of the interactions.

To me it would seem that this “interacting independent state machines” model of the genome is vastly better suited for evolving compared to the “sequential instruction execution” model we use in our processors. For one thing, duplicating a code sequence in sequential execution requires that it is inserted into the program flow somewhere, or it’s just dead code. Duplication alone won’t do the trick, it needs an additional mutation that references the duplicate at its new location. In contrast, an additional state machine is included into the parallel workings of the genome simply by its very existence.

In sequential execution, every instruction in a working program may depend on the correct execution of every preceding instruction. A wrong bit 5000 instructions ago may defeat its purpose. Similarly, one wrong bit in a jump target address may derail the entire execution of the program, most likely to the point of crashing. The thing is, any code executed at any point can kill the program - no matter how insignificant or outright useless its execution was in the first place.

With independent state machines there is comparatively little problem. Gene transcriptions trigger when the conditions are met, indifferent to the exact path leading there. If a gene stops working all the time (say, one trigger failed), the organism will keep chugging along at an occasionally lower level of efficiency - unless it happened to be a critical gene. If a gene is obsolete and never triggered anymore, it is free to decay by mutation in many ways which simply make it stop working - without ever influencing the rest of the organism.

Same goes for duplicating, changing trigger conditions, co-opting and whatever else imaginable. The atomic units of the genome are genes+triggers and provide a function independently of each other. The atomic units of program code are instructions. Basic functions need a chain of instructions working together and depend on everything that happened before to work correctly.
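
A throwaway Python sketch of the “interacting independent state machines” picture (the gene and trigger names here are invented, and real regulation is probabilistic rather than boolean):

    genes = {
        "gene_A": {"triggers": {"nutrient"}, "inhibitors": set(),      "product": "signal_X"},
        "gene_B": {"triggers": {"signal_X"}, "inhibitors": {"stress"}, "product": "protein_P"},
        "gene_C": {"triggers": {"signal_X"}, "inhibitors": set(),      "product": "protein_Q"},
    }

    def step(state):
        # every gene checks its own conditions independently, "in parallel";
        # removing or duplicating one entry never derails the others
        made = set()
        for gene in genes.values():
            if gene["triggers"] <= state and not (gene["inhibitors"] & state):
                made.add(gene["product"])
        return made

    state = {"nutrient"}
    for _ in range(3):          # products of one round can trigger genes in the next
        state |= step(state)
    print(sorted(state))        # ['nutrient', 'protein_P', 'protein_Q', 'signal_X']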

Whoa, that grew to one of those long posts nobody ever reads anyway…

Comment #105492

Posted by Henry J on June 13, 2006 10:33 PM (e)

Andreas,
Re “that grew to one of those long posts nobody ever reads anyway…”

Oops, guess that makes me nobody? ;) Course, being a software engineer myself, I found that analysis interesting. If it’s computer code, it’s not the traditional sequential kind. Either multiple CPUs or multiple threads on one CPU.

I wonder though if recipe might be a better analogy than computer code (and never mind that my idea of “cooking” is punching buttons on a microwave), since at least a large part of the function seems to be adding of “ingredients” when they’re called for.

Henry

Comment #105498

Posted by Caledonian on June 14, 2006 12:01 AM (e)

hmm, I can’t recall seeing a “fitness space model” that actually WAS used “to demonstrate evolution”. All the models I’ve ever seen were used to test general assumptions and predictions, not to specifically model and predict the evolution of a specific trait within a real-world population.

I don’t see what predicting the development of a specific trait within a real-world population has to do with demonstrating evolution – that is, while doing so would clearly count as a possible way to demonstrate evolution, it’s certainly not required, necessary, or even particularly useful.

Artificial populations evolve. They do so under a variety of conditions, in very different fitness spaces, and in fairly predictable ways (although the result isn’t always predictable). This is completely in accordance with both common sense and mathematical knowledge.

The experiment that started all this was an attempt to show how easily a complex system could be evolved, given a fitness space that favored it. It has demonstrated its point very effectively. I’m not really sure what else needs to be said.

Comment #105500

Posted by Sir_Toejam on June 14, 2006 12:07 AM (e)

I’m not really sure what else needs to be said.

*shrug*

your loss then.

Comment #105528

Posted by Caledonian on June 14, 2006 8:52 AM (e)

This reply appears to be entirely disproportionate…. Not to mention the fact that I don’t think you’ve understood what RBH is saying.

If you think so, I can only conclude that you do not understand the nature of the error RBH is making. His objection is based on a faulty understanding of how the fitness of a program is evaluated.

The purpose of this experiment was to show how easily natural selection can guide a population to a local optimization, and for that purpose, it utilized a fitness space designed to put the optimization region at a specific point.

The algorithm used to define the fitness of a strategy does not permit the programs to “look ahead” and anticipate what the correct strategy will be. Natural selection only causes the programs to follow the direction of the slope – the population change is determined only through examination of the immediate local environment of the fitness space. The fact that the fitness space was designed to slope directly to a particular location doesn’t change that. Nothing about the experiment invalidates it as an idealized example of the same principles responsible for biological evolution.

This is a trivially simple point, and for a person not to have grasped it by now requires either a hostile agenda, a lack of abstract reasoning, or both.

Comment #105543

Posted by Erik 12345 on June 14, 2006 10:12 AM (e)

RBH wrote:

For example, if in an algorithm the fitness of individual X = (optimum body size - current body size), where “optimum body size” is the value of some peak on the fitness landscape distant from the point currently occupied by the individual, then that algorithm does not veridically represent biological evolution. It may illustrate some aspect of evolution – e.g., the power of cumulative selection to transform tiny probabilities into large probabilities, as in Dawkins’ METHINKS illustration – but it does not represent biological evolution with anything like the necessary fidelity to do serious work on most questions in evolution.

Any functional relation between fitness (w) and body size (B) can be trivially expressed in terms of a distance to the optimal body size (Bopt):

w = f(B) = f((B - Bopt) + Bopt).

Conversely, any functional relation between fitness and the distance to an optimal body size can be expressed in ways that do not refer to the optimal body size. The distinction between calculations of fitness that do and do not refer to optimal body size is therefore entirely subjective and I cannot see any significance in it. In particular, the choice of one of several equivalent expressions for fitness has no effect on the predictions that follow from the model.
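
For a one-dimensional toy case (parameter values picked arbitrarily), the rewrite is just algebra; a small Python check:

    import math

    B_OPT, A = 10.0, 2.0   # arbitrary optimal body size and selection width

    def w_with_optimum(B):
        # Gaussian fitness written as a distance to the optimum
        return math.exp(-(B - B_OPT) ** 2 / (2 * A ** 2))

    def w_without_optimum(B):
        # the same function expanded so no term is labelled "the optimum"
        c0, c1, c2 = -B_OPT ** 2 / (2 * A ** 2), B_OPT / A ** 2, -1 / (2 * A ** 2)
        return math.exp(c0 + c1 * B + c2 * B ** 2)

    for B in (7.0, 10.0, 13.0):
        assert abs(w_with_optimum(B) - w_without_optimum(B)) < 1e-12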

Having said that, I must also say that I think some of Caledonian’s comments above were uncalled for.

Comment #105573

Posted by RBH on June 14, 2006 11:53 AM (e)

Erik wrote

Conversely, any functional relation between fitness and the distance to an optimal body size can be expressed in ways that do not refer to the optimal body size. The distinction between calculations of fitness that do and do not refer to optimal body size is therefore entirely subjective and I cannot see any significance in it. In particular, the choice of one of several equivalent expressions for fitness has no effect on the predictions that follow from the model.

Then perhaps Erik or Caledonian would kindly supply me with the equivalent of “optimal body size” for the GAs I use to model financial markets and drive derivatives trading. It would be so much easier and more efficient than having to evolve those damned artificial traders.

Both Erik and Caledonian are conflating what the experimenter/designer knows and what the algorithm “knows”. And that’s a pretty simple point. Mathematical equivalence does not necessarily translate to equally valid mappings of reality. That’s a common misconception among mathematicians.

RBH

Comment #105576

Posted by jeannot on June 14, 2006 12:10 PM (e)

I agree with RBH. Even if an optimum exists, estimations of fitness using this optimum as a parameter are approximations, which may be useful and precise enough in many cases, but may not be applicable in others.
It reminds me of the comparison between Newton’s mechanics and relativity.

Comment #105587

Posted by RBH on June 14, 2006 12:45 PM (e)

Above I wrote

Mathematical equivalence does not necessarily translate to equally valid mappings of reality.

Let me make that concrete. To reduce the temperature a bit, consider a (toy) problem in a different content area, cognitive psychology, concerning how people do arithmetic. One hypothesis suggests that in order to multiply two digits, people use a mental counting procedure. For example, they calculate 3*4=12 by mentally visualizing 3 sequences of 4 objects placed side-by-side and counting all of them. Another hypothesis argues that for adults, multiplication of pairs of single digits is a cognitive ‘primitive’. People have an acquired table lookup capability, the table having been memorized in elementary school, that allows them to ‘read off’ the result from the table in memory. Obviously, we could model – represent – the input-output relationships predicted by both hypotheses with the counting procedure; they are equivalent in that respect. However, if we actually do an experiment and find that the reaction time to produce an answer to each of a set of single-digit multiplication problems varies linearly – or at least monotonically – in the cardinality of the product, we have strong reason to prefer one representation (the counting procedure) over the other equivalent representation, the lookup table. Different system dynamics are predicted by the two different-but-equivalent representations (models). That’s a non-trivial issue in doing the science, as distinguished from doing the math.
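
A toy sketch in Python of the two representations – identical input-output behaviour, different predicted dynamics (the ‘mental step’ counts are of course a cartoon):

    def multiply_by_counting(a, b):
        # counting model: lay out a groups of b objects and count them one at a time
        total, steps = 0, 0
        for _ in range(a):
            for _ in range(b):
                total += 1
                steps += 1           # one 'mental step' per object counted
        return total, steps          # steps grows with the product a*b

    TABLE = {(i, j): i * j for i in range(10) for j in range(10)}

    def multiply_by_lookup(a, b):
        # table-lookup model: a single memorized retrieval
        return TABLE[(a, b)], 1      # one 'mental step' regardless of the product

    assert multiply_by_counting(3, 4)[0] == multiply_by_lookup(3, 4)[0] == 12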

RBH

Comment #105589

Posted by Erik 12345 on June 14, 2006 12:52 PM (e)

RBH wrote:

Then perhaps Erik or Caledonian would kindly supply me with the equivalent of “optimal body size” for the GAs I use to model financial markets and drive derivatives trading. It would be so much easier and more efficient than having to evolve those damned artificial traders.

Determining the global optimum or optima of a function defined on a many-dimensional space is typically very difficult for a mere human. That we cannot in practice determine them does not mean they do not exist. Rewriting the function in terms of distances to such optima is typically also a daunting task in practice. That we cannot in practice do the rewriting does not mean that it is in principle impossible. For one-dimensional cases it is easy to do in practice.

What is your opinion of this article…

R. Burger and R. Lande. On the Distribution of the Mean and Variance of a Quantitative Trait Under Mutation-Selection-Drift Balance, Genetics, 138:901-912, 1994.
http://www.genetics.org/cgi/content/abstract/138…

…where the authors model (see Eq. (16)) directional selection for some quantitative phenotypic value (z) using a time-dependent fitness function of the form

w = exp(-(z-kt)^2 / (2a^2))

?

Does the fact that k*t is obviously the optimum and that the fitness is expressed in terms of the distance, z - kt, to this optimum mean that their model is not veridical?

RBH wrote:

Mathematical equivalence does not necessarily translate to equally valid mappings of reality.

So if I take Burger & Lande’s distance-to-optimum-form of their fitness function and rewrite it into something which is completely equivalent, so that all of Burger & Lande’s predictions remain unaffected, I might end up with something that is either a better or worse mapping of reality?

Comment #105592

Posted by Erik 12345 on June 14, 2006 1:00 PM (e)

jeannot wrote:

Even if an optimum exists, estimations of fitness using this optimum as a parameter are approximations,

Do you say this because you think that all estimations of fitness are inaccurate? Or because you think that estimations of fitness using some optimum are particularly inaccurate?

Comment #105593

Posted by Erik 12345 on June 14, 2006 1:05 PM (e)

RBH wrote:

Obviously, we could model — represent — the input-output relationships predicted by both hypotheses with the counting procedure; they are equivalent in that respect. However, if we actually do an experiment and find that the reaction time to produce an answer to each of a set of single-digit multiplication problems varies linearly — or at least monotonically — in the cardinality of the product, we have strong reason to prefer one representation (the counting procedure) over the other equivalent representation, the lookup table. Different system dynamics are predicted by the two different-but-equivalent representations (models). That’s a non-trivial issue in doing the science, as distinguished from doing the math.

Sure, we must first decide what it is we wish to model. For example, we need to decide whether we consider the response time part of the “output” and we need to decide if we wish to also model some intermediate states that occur before the output has been produced.

Comment #105594

Posted by RBH on June 14, 2006 1:12 PM (e)

Erik asked

So if I take Burger & Lande’s distance-to-optimum-form of their fitness function and rewrite it into something which is completely equivalent, so that all of Burger & Lande’s predictions remain unaffected, I might end up with something that is either a better or worse mapping of reality?

Yes. See my arithmetic example above. On a first reading, Burger & Lande’s specific predictions appear to be math-form-insensitive. Even if they are, however, other properties of the system – e.g., the specific system dynamics – may be sensitive to the form of the fitness function and those properties may interact with the specific properties under examination in Burger & Lande’s context to produce differences in outcomes. My central point is that one cannot assume as the default that mathematical equivalence translates directly to equally valid mappings of reality. One cannot test that assumption by looking at the math; one must establish it in experiment.

RBH

Comment #105611

Posted by Caledonian on June 14, 2006 2:11 PM (e)

These objections leveled against the example are inane.

IDists frequently claim that because the statistical likelihood of producing a pattern through raw chance is small, that pattern could not have evolved. This simulation does not need to reproduce fitness spaces commonly found in nature to refute that claim, nor does it need to attempt to emulate the development of any organism, nor does it need to utilize randomly generated fitness spaces.

All it has to do is have a defined fitness algorithm and a system for “reproducing”, with occasional mutations, the programs to which the fitness algorithm is applied. It fulfils both of those criteria and, in the process, nicely demonstrates how natural selection can very quickly produce seemingly improbable results.
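
For concreteness, a minimal Python sketch of just those two ingredients – an arbitrary target string, a fitness function, and reproduction with occasional mutation; this is a toy, not the machine-language GP from the paper:

    import random
    import string

    TARGET = "HELLO WORLD"
    ALPHABET = string.ascii_uppercase + " "

    def fitness(s):
        # matches to the target; any other scoring rule would do
        return sum(a == b for a, b in zip(s, TARGET))

    def mutate(s, rate=0.05):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while fitness(parent) < len(TARGET):
        offspring = [mutate(parent) for _ in range(100)]
        parent = max(offspring + [parent], key=fitness)   # keep the best of parent and offspring
        generations += 1
    print(generations, parent)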

Comment #105617

Posted by Rilke's Granddaughter on June 14, 2006 2:27 PM (e)

Caledonian wrote:

This is a trivially simple point, and for a person not to have grasped it by now requires either a hostile agenda, a lack of abstract reasoning, or both.

Or more likely, the fact that you are talking about two different things. RBH seems to be talking about evolution modeling in general; you are talking about this experiment in particular.

And nothing he said earlier seems to justify the general snarkiness of your response to him. Nor, quite frankly, the general snarkiness of your response to me.

Reading for comprehension would be helpful on your part.

Comment #105620

Posted by Caledonian on June 14, 2006 2:38 PM (e)

Rilke's Grandmother wrote:

Or more likely, the fact that you are talking about two different things. RBH seems to be talking about evolution modeling in general; you are talking about this experiment in particular.

Oh, really?

RBH wrote:

As a consequence, any algorithm that incorporates a fitness calculation that refers to some phenotype (or genotype) not currently in the population is not a model of biological evolution.

*That* is what I have been arguing against; this experiment in particular is just an example of the much larger set RBH misrepresents.

You don’t seem to understand the arguments that have been made, Grandmother. Perhaps you should review the thread history and read some of the posts that you may have skipped over in your eagerness to express your opinion.

Comment #105621

Posted by Rilke's Granddaughter on June 14, 2006 2:41 PM (e)

Erik wrote:

Determining the global optimum or optima of a function defined on a many-dimensional space is typically very difficult for a mere human. That we cannot in practice determine them does not mean they do not exist. Rewriting the function in terms of distances to such optima is typically also a daunting task in practice. That we cannot in practice do the rewriting does not mean that it is in principle impossible. For one-dimensional cases it is easy to do in practice.

But I don’t think that’s the point that RBH is discussing. His point is that the algorithm does not have visibility (in the case of ‘veridical modeling of evolution’) to the landscape in order to determine such an optimum. The landscape has a time dimension, for example, and determining a global optimum on it would require the ability to ‘foresee things’ (Sybill Trelawney, eat your heart out).

The algorithm used by evolution does not, cannot by the nature of the problem, incorporate global maxima or minima on the fitness landscape.

Comment #105623

Posted by Rilke's Granddaughter on June 14, 2006 2:49 PM (e)

Caledonian wrote:

*That* is what I have been arguing against; this experiment in particular is just an example of the much larger set RBH misrepresents.

The sample problem in the OP is not a sample problem in evolution - evolution simply doesn’t work that way. That’s what RBH is trying to point out.

The OP problem uses an evolutionary model that does not conform to the evolutionary model that represents ‘real world evolution’.

And your inability to read for comprehension is demonstrated in your inability to understand the difference between a grandmother and a granddaughter.

But I’m sure you can do better when you’ve calmed down and actually tried to understand what we’re posting.

Comment #105630

Posted by Erik 12345 on June 14, 2006 3:18 PM (e)

Rilke's Granddaughter wrote:

His point is that the algorithm does not have visibility (in the case of ‘veridical modeling of evolution’) to the landscape in order to determine such an optimum.

What, in general, does it mean for an algorithm to “have visibility to the landscape”?

And which algorithm do you have in mind? It is customary to split evolutionary algorithms into two parts, a fitness evaluation part and a part taking care of the population dynamics. The fitness evaluation part is essentially a black box as far as the rest of the evolutionary algorithm is concerned. Is it the fitness evaluation part or the population dynamics part that must not “have visibility to the landscape” in order to be veridical?
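
A skeletal sketch in Python (my own, with made-up parameters) of that split: the population-dynamics loop below calls the fitness function as a black box and has no way of knowing whether or not it refers to a global optimum.

import random

def evolve(fitness, random_genotype, mutate, pop_size=50, generations=200):
    # Population-dynamics part: selection and reproduction. The fitness
    # evaluation is called as a black box and never inspected.
    population = [random_genotype() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]                      # truncation selection
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return max(population, key=fitness)

# The loop above cannot tell whether the plugged-in fitness function refers
# to a global optimum (here, 0.17) or not.
best = evolve(fitness=lambda g: -abs(g - 0.17),
              random_genotype=lambda: random.uniform(-1.0, 1.0),
              mutate=lambda g: g + random.gauss(0.0, 0.05))
print(round(best, 3))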

Rilke's Granddaughter wrote:

The landscape has a time dimension, for example, and determining a global optimum on it would require the ability to ‘foresee things’ (Sybill Trelawney, eat your heart out).

The algorithm used by evolution does not, cannot by the nature of the problem, incorporate global maxima or minima on the fitness landscape.

So what is your view of population genetics models that incorporate global maxima in the fitness landscape? What do you make of Burger & Lande’s computer simulation using a distance-to-optimum fitness function?

Comment #105632

Posted by Erik 12345 on June 14, 2006 3:38 PM (e)

RBH wrote:

On a first reading, Burger & Lande’s specific predictions appear to be math-form-insensitive.

A very fortunate thing, since any predictions which are not math-form-insensitive would be subjective or logically inconsistent!

But how does Burger & Lande’s model relate to post #105028 above, where you classified fitness calculations as either “global” or “local” and remarked that “any algorithm that incorporates a fitness calculation that refers to some phenotype (or genotype) not currently in the population is not a model of biological evolution”? Does the fact that Burger & Lande’s fitness function refer to the optimum phenotypic value (= k*t) disqualify it as a model of biological evolution?

RBH wrote:

Even if they are, however, other properties of the system — e.g., the specific system dynamics — may be sensitive to the form of the fitness function and those properties may interact with the specific properties under examination in Burger & Lande’s context to produce differences in outcomes.

I suspect you may regard the fitness function as somehow representing a little “movie” (i.e. a time-ordered sequence of events or intermediate states) that ends with the assignment of a fitness value. Although it is certainly conceivable to compute a fitness value in such a way, only the final result can have any significance, and fitness functions are identified based only on the output they give for different inputs. If we do care about such “movies”, we should instead model the life-histories of individuals rather than simply assigning fitness values.

Comment #105645

Posted by Rilke's Granddaughter on June 14, 2006 4:44 PM (e)

Erik wrote:

What, in general, does it mean for an algorithm to “have visibility to the landscape”?

And which algorithm do you have in mind? It is customary to split evolutionary algorithms into two parts, a fitness evaluation part and a part taking care of the population dynamics. The fitness evaluation part is essentially a black box as far as the rest of the evolutionary algorithm is concerned. Is it the fitness evaluation part or the population dynamics part that must not “have visibility to the landscape” in order to be veridical?

It means that the ‘black box’ algorithm you mention includes a factor which represents the optimum phenotype value for the entire possible landscape, not simply those nodes currently occupied.

And the answer to your question is yes: the fitness algorithm does not have access to the landscape optimum phenotype value.

Erik wrote:

So what is your view of population genetics models that incorporate global maxima in the fitness landscape? What do you make of Burger & Lande’s computer simulation using a distance-to-optimum fitness function?

I think they are using a local, rather than a global, value in the equation you reference.

Comment #105648

Posted by Erik 12345 on June 14, 2006 4:55 PM (e)

Rilke's Granddaughter wrote:

I think they are using a local, rather than an global value in the equation you reference.

So you don’t think the fitness algorithm used by Burger & Lande has access to the landscape optimum phenotype value? What do you think the term kt in the Gaussian-type exponential of Eq. (16) represents?

Comment #105650

Posted by jeannot on June 14, 2006 5:12 PM (e)

Erik wrote:

Do you say this because you think that all estimations of fitness are inaccurate? Or because you think that estimations of fitness using some optimum are particularly inaccurate?

As I said, estimations of fitness using some optimal genotype or phenotype may produce accurate (precise) results in some situations, but they are not based on the real process of natural selection. The optimum is just an abstraction. At a given time, the environment selects the fittest forms; the optimum just doesn’t exist. How could it define the fitness of replicators (alleles, individuals)?
Not to mention that there may not even be any hypothetical optimum, for instance under frequency dependence and soft selection.

To me, the use of an optimum in order to determine an adaptive landscape and/or simulate evolution is based on the assumptions:
- that an optimum exists
- that it alone can define the fitness of suboptimal replicators (individuals, alleles…).

Evolution by natural selection is not based on these assumptions, at least not in the books or papers I’ve read.

For instance, suppose you want to simulate the evolution of beak size in a population of finches by setting the optimum as the size best adapted for consuming the most frequent seeds: you make those two assumptions. But you’re not sure the optimum is correct or even exists. As the population evolves, different (and extreme) beak sizes may be favored by intra-specific competition for resources (or whatever), resulting in an unstable polymorphism or even speciation.
Can this be modeled with a program à la METHINKS? I personally don’t know. But even if it could, it wouldn’t alter the fact that fitness is NOT related to a non-existing phenotype (or genotype), but to the local environment. So there. ;-)

(excuse my English :( ).

Comment #105651

Posted by Caledonian on June 14, 2006 5:12 PM (e)

Rilke's Granddaughter wrote:

The sample problem in the OP is not a sample problem in evolution - evolution simply doesn’t work that way. That’s what RBH is trying to point out.

But it can and does work that way – that’s what you’ve both been failing to comprehend.

This is pointless. Welcome to the list, Granddougher.

Comment #105653

Posted by Rilke's Granddaughter on June 14, 2006 5:20 PM (e)

Erik wrote:

So you don’t think the fitness algorithm used by Burger & Lande has access to the landscape optimum phenotype value? What do you think the term kt in the Gaussian-type exponential of Eq. (16) represents?

The local optima; not the optima for the entire space.

Comment #105654

Posted by Rilke's Granddaughter on June 14, 2006 5:24 PM (e)

Caledonian wrote:

But it can and does work that way — that’s what you’ve both been failing to comprehend.

Your argument ad nauseam isn’t working. No matter how many times you say this, you will continue to be completely incorrect. But I admire your stubbornness.

This is pointless. Welcome to the list, Granddougher.

I see that you continue your string of remarkably childish rejoinders. Interesting, but completely unproductive. It does, however, considerably diminish the value of your posts as contributions to this thread.

Just thought you should know.

Comment #105656

Posted by Andreas Bombe on June 14, 2006 5:37 PM (e)

Henry J wrote:

I wonder though if recipe might be a better analogy than computer code (and never mind that my idea of “cooking” is punching buttons on a microwave), since at least a large part of the function seems to be adding of “ingredients” when they’re called for.

I don’t think so. A recipe is in effect nothing but a (timer and event driven) sequential program processed by a cook, to wrap it in computer terms.

Now, after thinking it over, I think the best computer science analogy for a genome is hardware design. Actually that goes more into electrical engineering, but still. If you’ve ever dabbled in hardware design (the stuff you commonly write in Verilog or VHDL), you will know that it is very different from computer programs.

With that, a gene with its triggers has an equivalent in a logic gate: It has one or more inputs on which it constantly performs a logic operation which determines the output signal - a voltage in hardware, an RNA transcription in genes.

The main difference is that the output in hardware is very simple (one voltage representing “1” and one representing “0”) but the wiring is complex - each output must be connected to all required inputs and must not be connected to another output. In the genome the output is complex - a more or less long string of RNA - but the connectivity is trivial. After all, the whole genome is floating in the same contained liquid, everything can drift anywhere. It’s enough for a gene to have a trigger receptor for some molecule, it doesn’t need an explicit connection path from the source of those molecules. That makes the connection network very malleable compared to wire connected networks as in hardware.

The other main difference is of course that the genome isn’t a binary digital operation. A single gene viewed in isolation may be, but the connections between them aren’t and are probabilistic instead.

To take it further, the genome is equivalent to a chip. A chip has lots of interconnected logic gates, some input pins delivering signals from the outside to some of these gates and some output pins delivering the outputs of some gates to the outside world.
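
A rough sketch in Python (my own toy code, not anything from real hardware design or biology) of the gate analogy: each "gene" below produces its output whenever its trigger molecules are present in the shared soup, with no explicit wiring between genes.

def gene(triggers, product):
    # A gene as a "gate": it transcribes its product whenever all of its
    # trigger molecules are present in the shared cellular "soup".
    def rule(soup):
        return {product} if triggers <= soup else set()
    return rule

genes = [
    gene({"signal_A"}, "protein_X"),
    gene({"protein_X", "signal_B"}, "protein_Y"),   # "wired" only via the soup
]

soup = {"signal_A", "signal_B"}
for step in range(3):                  # let products accumulate over a few steps
    for g in genes:
        soup = soup | g(soup)
    print(step, sorted(soup))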

Comment #105658

Posted by Erik 12345 on June 14, 2006 5:37 PM (e)

Rilke's Granddaughter wrote:

The local optima; not the optima for the entire space.

Then where is the global optimum of the fitness function defined by Eq. (16), if not at z = kt?

Comment #105678

Posted by Caledonian on June 14, 2006 7:26 PM (e)

Just a quick reminder of what’s necessary for Darwinian evolution to take place:

PZ Myers wrote:

1) Darwinian logic is quite simple and clear. Here’s a short summary:

* If heritable variation exists, (which, of course, it does)
* if excess reproduction occurs, (also obviously true, or we’d be up to our ears in mice)
* if variants differ in their likelihood of survival and reproduction, (a little trickier, but still fairly obvious)
* then the relative frequencies of the variants must change.
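
To make the quoted logic concrete, here is a small sketch in Python (my own, with made-up survival probabilities): two heritable variants reproduce in excess but survive at different rates, and the relative frequency of "A" shifts within a handful of generations.

import random

population = ["A"] * 500 + ["B"] * 500            # heritable variation
survival = {"A": 0.55, "B": 0.45}                 # variants differ in survival

for generation in range(5):
    survivors = [v for v in population if random.random() < survival[v]]
    # Excess reproduction restores a constant population size.
    population = [random.choice(survivors) for _ in range(1000)]
    print(generation, population.count("A") / len(population))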

Comment #105713

Posted by Henry J on June 14, 2006 9:44 PM (e)

Andreas,

Re “the genome is equivalent to a chip.”
Except that evolution of the gene pool can rewire that “chip” a whole lot easier than a hardware chip can be rewired. ;)

Come to think of it, I guess some aspects of that “chip” would get rewired during development of the organism, as well.

Henry

Comment #105727

Posted by Gil Dodgen on June 14, 2006 10:11 PM (e)

The fact of the matter remains: Random mutation and natural selection as an explanation for all of life’s complexity, functionally integrated machinery, and information content is wishful speculation, unsupported by convincing hard evidence. This should simply be admitted.

Comment #105731

Posted by steve s on June 14, 2006 10:22 PM (e)

Oh, don’t worry, Gil. In a week or so, Paul Nelson’s going to be presenting Ontogenetic Depth v 2.0 at the Society of Developmental Biology meeting, and I’m sure that will obliterate Darwinism, you know, like the Explanatory Filter did, and the NFL theorems, and your analogies to computers, and Irreducible Complexity, and Sal’s plane anecdotes, and the last 400-500 dumb things you guys have said, and Intelligent Evolution will in the future, &c, &c, &c….

Comment #105737

Posted by Gil Dodgen on June 14, 2006 10:35 PM (e)

Dear Steve,

I appreciate your intellectually satisfying refutation of my thesis.

Comment #105744

Posted by 'Rev Dr' Lenny Flank on June 14, 2006 10:49 PM (e)

The fact of the matter remains: Random mutation and natural selection as an explanation for all of life’s complexity, functionally integrated machinery, and information content is wishful speculation, unsupported by convincing hard evidence.

Says you. (shrug)

Comment #105745

Posted by steve s on June 14, 2006 10:50 PM (e)

Glad you simply admitted it.

Comment #105747

Posted by steve s on June 14, 2006 10:51 PM (e)

And I look forward to all the analogies I’m sure you’ll present in the future, and the concomitant incredulity.

Comment #105901

Posted by Rilke's Granddaughter on June 15, 2006 5:36 PM (e)

Gil wrote:

The fact of the matter remains: Random mutation and natural selection as an explanation for all of life’s complexity, functionally integrated machinery, and information content is wishful speculation, unsupported by convincing hard evidence. This should simply be admitted.

False. In the first place, the theory of evolution is far more than ‘random mutation and natural selection’. In the second place, the Avida experiments (among others) demonstrate that you’re wrong. In the third place, PvM has demolished your contention that “Hello World” couldn’t be produced via variation and selection.

Either be sufficiently mature to admit that you’re wrong; or provide actual evidence that you are right. Your opinions do not an argument make.

Comment #105902

Posted by David B. Benson on June 15, 2006 5:36 PM (e)

I suppose this is old-fashioned of me, but

optimum == the best

hence, ‘global optimum’ is redundant whilst ‘local optima’ is at best confusing. For the latter, ‘local maxima’ is certainly to be preferred.

Comment #105904

Posted by Rilke's Granddaughter on June 15, 2006 5:38 PM (e)

Erik wrote:

Then where is the global optimum of the fitness function defined by Eq. (16), if not at z = kt?

There isn’t one. There is an optimum at that particular node, because one can determine based on the fitness algorithm suggested/black boxed that such an optimum must exist.

Evolutionary fitness in the real world is determinable only from the actual factors that affect a specific individual member of a population.

Comment #105991

Posted by Erik 12345 on June 16, 2006 4:10 AM (e)

jeannot wrote:

To me, the use of an optimum in order to determine an adaptive landscape and/or simulate evolution is based on the assumptions:
- that an optimum exists
- that it alone can define the fitness of suboptimal replicators (individuals, alleles…).

The first assumption is not related to the use of an optimum as reference point in order to compute fitness. An optimum exists for any reasonable fitness function. Thus, the first assumption is implicit in the use of a fitness function, regardless of how it is calculated.

The second assumption is not actually made. It is not an optimum genotype/phenotype, but rather the difference between it and a genotype/phenotype of interest, that is assumed to determine fitness. The number of free parameters in a genotype/phenotype is the same as the number of free parameters in the difference between the genotype/phenotype and an optimal genotype/phenotype. Therefore the assumption actually made is equivalent to assuming the existence of a fitness function in the first place.

Evolution by natural selection is not based on these assumptions, at least not in the books or papers I’ve read.

It’s not clear what you mean by this, but “evolution by natural selection” is of course a much more general notion than any specific population genetics model. For example, the model in the Burger & Lande paper, cited above as a counter-example to the claims that fitness cannot “veridically” be computed in a way that references the global optimum, is only intended to capture some aspects of stabilizing and directional selection. It is not intended to capture, say, frequency dependent selection.

For instance, suppose you want to simulate the evolution of beak size in a population of finches by setting the optimum as the size best adapted for consuming the most frequent seeds: you make those two assumptions. But you’re not sure the optimum is correct or even exists. As the population evolves, different (and extreme) beak sizes may be favored by intra-specific competition for resources (or whatever), resulting in an unstable polymorphism or even speciation.
Can this be modeled with a program à la METHINKS?

Sure, in principle it can. You just need a more general form of the fitness function. In Dawkins’s METHINKS program the fitness is a function of a single variable, namely the genotype to be evaluated. In the case of your finches, one would probably need a fitness function that depends not only on the genotype to be evaluated, but also on the composition of the rest of the population. That would allow for frequency dependent selection.
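
A hedged sketch of what such a more general fitness function could look like (my own illustration; the seed size of 0.5, the 0.05 "similarity window", and the 0.5 crowding weight are all arbitrary): fitness depends both on the beak size being evaluated and on the composition of the rest of the population, which is what permits frequency-dependent selection.

def fitness(beak_size, population_beak_sizes):
    # Resource match: beak sizes near the common seed size (0.5 here) do better...
    resource_match = 1.0 - (beak_size - 0.5) ** 2
    # ...but crowding penalizes sizes that many others in the population share.
    crowding = sum(abs(other - beak_size) < 0.05
                   for other in population_beak_sizes) / len(population_beak_sizes)
    return resource_match - 0.5 * crowding

population = [0.48, 0.50, 0.52, 0.70]
print([round(fitness(z, population), 3) for z in population])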

Comment #105992

Posted by Erik 12345 on June 16, 2006 4:12 AM (e)

David B. Benson wrote:

I suppose this is old-fashioned of me, but

optimum == the best

hence, ‘global optimum’ is redundant whilst ‘local optima’ is at best confusing. For the latter, ‘local maxima’ is certainly to be preferred.

In standard terminology, a function has a local maximum (minimum) at a particular point if it is the largest (smallest) function value within some local region around the point. A function has a global maximum (minimum) at a particular point if the value at that point is equal to the highest (lowest) value that function attains. A function has a global (local) optimum at a point if that point is either a global (local) minimum or a global (local) maximum.

Naturally, if something is a global optimum, then it is also a local optimum.

Erik wrote:

Then where is the global optimum of the fitness function defined by Eq. (16), if not at z = kt?

Rilke's Granddaughter wrote:

There isn’t one. There is an optimum at that particular node, because one can determine based on the fitness algorithm suggested/black boxed that such an optimum must exist.

Since z=kt is so obviously what in standard terminology would be called a “global optimum”, I assume that you have your own private terminology in which “global optimum” means something different. It would help if you explained how your private notion of a “global optimum” differs from the conventional meaning of the term.

Evolutionary fitness in the real world is determinable only from the actual factors that affect a specific individual member of a population.

Maybe what you want to say is that you believe that fitness depends on so many things that it isn’t possible in practice to accurately calculate the fitness of real-world genotypes/phenotypes without simulating the life-histories of their carriers?

Comment #106007

Posted by Caledonian on June 16, 2006 7:09 AM (e)

I believe what he’s trying to say is that any perceived difference between the model and real life, no matter how trivial or irrelevant, will be seized upon as rhetorical evidence that evolutionary theory is false.

Comment #106020

Posted by Erik 12345 on June 16, 2006 9:05 AM (e)

In previous comments by others, a few partly overlapping concerns about fitness functions have been suggested. I would identify and summarize them like this:

* The algorithm used to evaluate fitness matters in some important way. In particular, two algorithms that give identical fitness values need not be equally veridical.

* There is an important distinction between calculations of fitness that refer to a reference genotype/phenotype (typically the optimal one) and calculations of fitness that do not. The former kind is completely unrealistic.

* Fitness should not depend on genotypes/phenotypes that are not represented in the evolving population.

* Dawkins’s WEASEL program, while perhaps good for demonstrating the difference between cumulative selection and independent random sampling, is a prime example of the above mentioned objectionable ways of evaluating fitness.

I regard the first, second, and fourth of these concerns as wrong. The third I agree with provided that a proper interpretation of the word “depend” is made.

What is the function of a fitness function?

For the purposes of modelling evolution and, in particular, how genotype/phenotype frequencies change over time, one highly relevant type of quantity is a measure of how many offspring are produced by carriers of a particular genotype/phenotype. The task of a fitness function is to provide us with such a measure for every individual. By summarizing the entire life-histories of carriers of a genotype/phenotype in a single number—the fitness value—population geneticists and like-minded scientists can simplify their models, perhaps at the expense of some accuracy, by avoiding any explicit modelling of individuals’ lives.

One very direct way of calculating fitness values can of course be to nevertheless try to simulate the lives of individuals in a way that gives us much more than just a fitness value, e.g. in addition we might get some kind of “movie” showing us how the individual developed and/or competed with others. But such extra bonus information that results as a by-product of the fitness calculation is going to be ignored by the model. If one really wants to take life-histories into account, one should not model the dynamics via a fitness function.

Distance-to-optimum calculations of fitness

Dawkins’s WEASEL program is not the only case of a fitness calculation that refers to an optimal genotype/phenotype. A look at the population genetics literature reveals that completely analogous fitness functions are not uncommon in that literature. Anyone who thinks that there is something objectionable in principle about Dawkins’s choice of fitness function is therefore put in the awkward position of having to advance the same objections against the works of many famous population geneticists. Here are two clear examples just to drive this point home:

“Fitness is taken to be determined entirely by Gaussian stabilizing viability selection on the phenotypic value of the trait. The relative fitness of individuals of genotypic value G arises from an average of viability over environmental effects (see e.g. Turelli, 1984 or Bulmer, 1989) and is given by

w(G) = exp[-(G - Zopt)^2 / (2 Vs)],

where Vs^{-1} (>0) is a direct measure of the intensity of selection on genotypic values of the trait and Zopt is the optimal phenotypic value (and also the optimal genotypic value).”

quoted from Y. Bello and D. Waxman, Near-periodic substitution and the genetic variance induced by environmental change, Journal of Theoretical Biology, 239(2):152-160, 2006
http://dx.doi.org/10.1016/j.jtbi.2005.08.044
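
For concreteness, here is the quoted fitness function written out in Python (my own transcription; the values of Zopt and Vs below are arbitrary illustrative choices).

import math

def w(G, Zopt=0.0, Vs=5.0):
    # Gaussian stabilizing selection: fitness falls off with the squared
    # distance of the genotypic value G from the optimum Zopt.
    return math.exp(-(G - Zopt) ** 2 / (2 * Vs))

print([round(w(G), 4) for G in (-2.0, -1.0, 0.0, 1.0, 2.0)])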

“In Fisher’s and Kimura’s analyses, it was assumed that all traits are under stabilizing selection of identical intensity. In particular, it was assumed that the fitness of a phenotype is a monotonically decreasing function of its Euclidean distance from the optimal phenotype. Geometrically, this corresponds to a “fitness landscape” that is spherically symmetric. “Surfaces” of constant fitness are hyperspheres (i.e., circles when n = 2, spheres when n = 3, …) that are centered on the optimal phenotype. If we choose to measure each trait in such a way that its optimal value is 0, then the optimal phenotype will lie at the coordinate origin, z = 0 = (0,0,…,0). Fitness is then a function of ||z|| = (z_1^2+z_2^2+…+z_n^2)^{1/2}.”

quoted from D. Waxman and J.J. Welch, Fisher’s Microscope and Haldane’s Ellipse, American Naturalist, 166:447-457, 2005,
http://www.lifesci.sussex.ac.uk/home/David_Waxma…

One can have several reasonable concerns about the use of these specific fitness functions. For example, they are very idealized (the first takes into account only one trait while the second is very symmetrical). Another example is that we might prefer to think of what is “global” in the model as actually just a small part of some bigger real fitness landscape, most of which is left unmodeled. Or we might think that the fitness functions provided model different parts of the fitness landscape with different accuracy.

However the proponents of the view that there’s something fundamentally objectionable about Dawkins’s WEASEL program choose to deal with the fact that completely analogous fitness functions are often used in population genetics, it is inescapable that the objections to WEASEL are at least as applicable to population genetics.

There is also the problem that there are always many equivalent ways of expressing a fitness function, although some are easier for mere humans to analyze and handle than others. Some of these expressions will refer to an optimum genotype/phenotype and some will not. As a simple example, consider quadratic stabilizing selection for a single trait z, which, with some suitable choice of units, can be written

w(z) = 1 - (z - 0.17)^2

When we write the function in this way it is obvious that z = 0.17 is the optimum and that the function is expressed in terms of the distance to this optimum. But expanding the squared parenthesis, we obtain this completely equivalent expression

w(z) = 0.9711 + 0.34 z - z^2.

In this form there is no longer a reference to the optimum (0.17).
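
A quick numerical check (mine) that the two expressions really are the same function, even though only one of them mentions the optimum 0.17 explicitly:

def w_with_optimum(z):
    return 1 - (z - 0.17) ** 2         # the optimum 0.17 appears explicitly

def w_expanded(z):
    return 0.9711 + 0.34 * z - z ** 2  # same function, no optimum in sight

grid = [x / 100 for x in range(-200, 201)]
assert all(abs(w_with_optimum(z) - w_expanded(z)) < 1e-12 for z in grid)
print("the two expressions agree on the whole test grid")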

“Depend”

Obviously, the fitness of a genotype/phenotype shouldn’t be influenced by things that don’t exist; in particular it shouldn’t be influenced by genotypes/phenotypes that aren’t carried by any individual in the population. Well, some reservations are needed, because it is entirely legitimate to normalize the fitness scale by taking a non-existent genotype/phenotype and declaring that “my system of units is such that this genotype/phenotype has fitness 1”. Apart from reservations like that, it should be clear that fitness isn’t influenced by things that don’t exist.

But how is that desideratum expressed and verified mathematically? Is it enough to check if the expression used for calculating fitness refers to some potentially non-existent genotype/phenotype (such as the optimal one)?

The answer to the second of these questions is no.

Comment #106023

Posted by Aureola Nominee, FCD on June 16, 2006 9:26 AM (e)

I apologize in advance for what will surely turn out to be a totally clueless question:

since in reality organisms do not appear to be “assessed for fitness” by reference to an abstract Platonic ideal, isn’t there something fundamentally wrong with doing so in modelling?

Comment #106034

Posted by Erik 12345 on June 16, 2006 10:39 AM (e)

Aureola Nominee wrote:

since in reality organisms do not appear to be “assessed for fitness” by reference to an abstract Platonic ideal, isn’t there something fundamentally wrong with doing so in modelling?

Firstly, what looks to you like an abstract Platonic ideal might look to modellers like just a particularly convenient, but ultimately arbitrary, reference point. In many cases it is natural to assume that there is an optimum somewhere (fitness functions that lack optima are too pathological to be reasonable anyway). Having made that assumption, one can, without making any further assumptions about the fitness function, agree on the convention to measure certain quantities relative to the position of this optimum. Such practices are common in modelling (e.g. optima are popular expansion points in Taylor expansions, because the first-order term will then be zero), but to onlookers they might appear to reflect abstract Platonic ideals.

Secondly, what difference do you think it makes? How could it possibly be significant whether we use, say, the fitness function

w(z) = 1 - (z - 0.17)^2

or the completely equivalent fitness function

w(z) = 1 - (z - 100.27 + 100.1)^2 ?

Thirdly, computer calculations do many things that reality does not. How is that in any way significant? For example, reality surely does not determine the trajectory of a projectile by numerically integrating some mathematical model.

Comment #106038

Posted by Aureola Nominee, FCD on June 16, 2006 11:05 AM (e)

Thank you, Erik.

First, I don’t see those two equations as different; I see them as the same equation, written in two different ways. Therefore they both suffer from the same fundamental flaw (if it is a flaw), or neither does (if it isn’t).

Second, my point is that “reproductive fitness”, in my layman’s eyes, does not depend on how close or how far a given organism is from a theoretical optimum, because it is relative, not absolute.

In other words, if I have a population of organisms, the reproductive success of each individual depends on how much better or worse than the others it is, not on how much worse than the local optimum it is.

So, for instance, if I take the function you mention

w(z) = 1 - (z - 0.17)^2

and use it to calculate a relative fitness

w(z1) - w(z2) = (z2 - 0.17)^2 - (z1 - 0.17)^2

I obtain

w(z1) - w (z2) = (z2)^2 - (z1)^2 - (0.34 * (z2 - z1))

which seems to me to be a very different kettle of fish!

As I said, I do not presume to correct people who have devoted their careers to this stuff; but I would really like to understand why, instead of using relative fitness (which would avoid the whole problem of “comparing to a non-existing ideal”), we seem to be using absolute fitness. Where is my mistake?

P.S. Your remark on modelling trajectories seems to me not to address this aspect.

Comment #106072

Posted by jeannot on June 16, 2006 12:41 PM (e)

Erik wrote:

It is not an optimum genotype/phenotype, but rather the difference between it and a genotype/phenotype of interest, that is assumed to determine fitness.

Sure. I didn’t mean that fitness was the optimum.

Erik wrote:

You just need a more general form of the fitness function. In Dawkins’s METHINKS program the fitness is a function of a single variable, namely the genotype to be evaluated. In the case of your finches, one would probably need a fitness function that depends not only on the genotype to be evaluated, but also on the composition of the rest of the population. That would allow for frequency dependent selection.

Yes, but in my example, the ‘optimal beak size’ in the model doesn’t turn out to be optimal at all. To me, this parameter is not the optimum, but the selective pressure at the beginning of the simulation (the size of the most frequent seeds). In fact, your model will involve local fitness calculations.

Comment #106183

Posted by Erik 12345 on June 16, 2006 5:52 PM (e)

Aureola Nominee wrote:

First, I don’t see those two equations as different; I see them as the same equation, written in two different ways. Therefore they both suffer from the same fundamental flaw (if it is a flaw), or neither does (if it isn’t).

Good.

So, for instance, if I take the function you mention

w(z) = 1 - (z - 0.17)^2

and use it to calculate a relative fitness

w(z1) - w(z2) = (z2 - 0.17)^2 - (z1 - 0.17)^2

I obtain

w(z1) - w (z2) = (z2)^2 - (z1)^2 - (0.34 * (z2 - z1))

which seems to me to be a very different kettle of fish!

The first expression for w(z1) - w(z2) does contain a reference to the global optimum (0.17). If rewriting w(z) cannot remove this flaw (if indeed it is a flaw), then why would rewriting be able to remove the same flaw in w(z1) - w(z2)?

As for the merits of using fitness differences like w(z1) - w(z2) instead of w(z), there’s no reason to refrain from doing that in those cases when it happens to simplify the treatment. But, being equivalent to the use of w(z), it cannot remove any flaws in w(z) (of course, I don’t agree that what others here have claimed as flaws really are flaws).

Comment #106193

Posted by 'Rev Dr' Lenny Flank on June 16, 2006 6:13 PM (e)

Mathematical equations … now my head hurts. Owwwwwwwwwww.

(grin)

Sorry, but I’ve always been mathematically-challenged. It’s why I was an English major and not a science major.

Comment #106195

Posted by Aureola Nominee, FCD on June 16, 2006 6:24 PM (e)

Erik:

My point is precisely that using the difference is not equivalent to using the absolute value. I’m not entirely convinced that the effects of modelling absolute fitness vs. relative fitness are negligible; however, not being a professional in this field, I’ll defer to expert opinion.

Comment #106288

Posted by Erik 12345 on June 17, 2006 11:11 AM (e)

Aureola Nominee wrote:

My point is precisely that using the difference is not equivalent to using the absolute value.

But the argument you advance in favour of this point doesn’t seem to be any more valid for fitness differences than for fitness.

I’m not entirely convinced that the effects of modelling absolute fitness vs. relative fitness are negligible; however, not being a professional in this field, I’ll defer to expert opinion.

OK. For the record, I am not myself an expert in this field. (But I have supplemented my own non-authoritative arguments by citing a few examples of experts who seem to have no trouble computing fitness by reference to global optima.)