PvM posted Entry 1410 on August 29, 2005 10:00 AM.
Trackback URL: http://www.pandasthumb.org/cgi-bin/mt/mt-tb.fcgi/1406

For Neurode…

Bergstrom (Department of Zoology, University of Washington, Seattle, WA, USA) and Lachmann (Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany) have published a paper titled “Shannon Information and Biological Fitness”.

They conclude that

In this paper we have shown that two measures of information, Shannon entropy and the decision-theory value of information, are united into one single information measure when one looks at the strategies that natural selection will favor, namely those that maximize the long term growth rate of biological organisms. Furthermore, we have shown that in evolving biological systems, the fitness value of information is bounded above by the Shannon entropy. These results suggest a close relationship between biological concepts of Darwinian fitness and information-theoretic measures such as Shannon entropy or mutual information.

Since Shannon information does not address the issue of information quality, as it does not distinguish between relevant and irrelevant information, they define the value of information as follows:

Definition: The value of information associated with a cue or signal X is defined as the difference between the maximum expected payoff or fitness that a decision-maker can obtain by conditioning on X and the maximum expected payoff that could be obtained without conditioning on X.
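To make the definition concrete, here is a minimal Python sketch with made-up payoffs (a two-state environment and two actions; none of these numbers come from Bergstrom and Lachmann):

```python
# Toy illustration of the decision-theoretic value of information.
# All numbers are invented; they are not from Bergstrom & Lachmann.

p_state = {"wet": 0.5, "dry": 0.5}          # prior over environment states
payoff = {                                   # payoff[action][state]
    "germinate":    {"wet": 2.0, "dry": 0.0},
    "stay_dormant": {"wet": 0.5, "dry": 0.5},
}

# Without the cue: commit to the one action with the best expected payoff.
best_blind = max(
    sum(p_state[s] * payoff[a][s] for s in p_state) for a in payoff
)

# With a perfectly informative cue X (X reveals the state): choose the
# best action separately in each state, then average over states.
best_informed = sum(
    p_state[s] * max(payoff[a][s] for a in payoff) for s in p_state
)

print(best_blind, best_informed, best_informed - best_blind)
# 1.0 1.25 0.25 -> the cue is worth 0.25 in expected payoff
```

With these toy numbers the cue is worth 0.25 in expected payoff; the cue itself carries 1 bit of entropy, and the paper’s theorem bounds the fitness value of such a cue (measured as long-term log growth rate) by that entropy.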

In “Change in Shannon information between systems as a measure of adaptation”, Katharina Mullen, from the Department of Physics and Astronomy of the Vrije Universiteit Amsterdam, explains how fitness and Shannon information relate:

An information-theoretic measure of adaptation is presented as change in Shannon information between the components (e.g., entity and environment) of an adaptive system. The measure is applicable to natural and artificial adaptive systems in the absence of exogenic fitness criteria, unlike fitness-function-based measures of adaptation. It is introduced via formulation of the simplest system in which non-zero change in shared information between components arises, and via application to a predator-prey model.

Interestingly, she discusses the teleology of survival.

Under the view that an environment and an entity seeking to survive in the environment are patterns correlated to each other so that the state of the pattern represented by each affects the other, the purpose of the entity is representation of that pattern that allows maintenance of a maximal degree of order relative to the environment; that is, the purpose of the entity becomes taking on a state that allows survival with maximal probability given the environment. Should the environment’s state change in a way as to change the state of the entity that allows maximal (or some degree of) order relative to the environment to be maintained, there is a selective pressure on the entity to change the pattern it embodies toward this new state. Then the ability of the entity to persist depends on how well changes in the environment are communicated to the pattern represented by the entity; Equation 3 measures this. By this reasoning there is a sense in which Equation 3 measures the ‘survivability’ of a system, and Equation 4 measures increases or decreases in that survivability.

Equation 3 is the Shannon mutual information I(X;Y) = H(X) + H(Y) - H(X,Y). Defining the mutual information at times t_0 and t_n as I_0 and I_n, Equation 4 is the difference I_n - I_0 between the two.
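Here is a small Python sketch of Equations 3 and 4 (my own illustration with a made-up joint distribution, not Mullen’s code):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum p*log2(p)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Made-up joint distribution p(x, y) for a binary entity/environment pair.
p_xy = {("x0", "y0"): 0.4, ("x0", "y1"): 0.1,
        ("x1", "y0"): 0.1, ("x1", "y1"): 0.4}

# Marginals p(x) and p(y).
p_x, p_y = {}, {}
for (x, y), p in p_xy.items():
    p_x[x] = p_x.get(x, 0.0) + p
    p_y[y] = p_y.get(y, 0.0) + p

# Equation 3: I(X;Y) = H(X) + H(Y) - H(X,Y)
i_xy = entropy(p_x.values()) + entropy(p_y.values()) - entropy(p_xy.values())
print(f"I(X;Y) = {i_xy:.3f} bits")   # ~0.278 bits for this example

# Equation 4 is then just the change I_n - I_0 between two time points,
# computed by running the same calculation on p(x, y) at t_0 and t_n.
```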

She concludes that

In natural and artificial systems in which fitness is endogenic, application of the measure presented here may be more desirable than application of a fitness-based measure.

Things get even better: in “Scale-free dynamics emerging from information transfer”, the authors argue:

Abstract
The dynamics based on information transfer is proposed as an underlying mechanism for the scale-invariant dynamic critical behavior observed in a variety of systems. We apply the dynamics to the globally coupled Ising model, which is analytically tractable, and show that dynamic criticality is indeed attained. Such emergence of criticality is confirmed numerically in the two-dimensional Ising model as well as the globally coupled one and in a biological evolution model. Although criticality is precise only when information transfer is reversible, it may also be observed even in the irreversible case, during the practical time scale shorter than the relaxation time.

For a reprint of this article, see this link.

The authors observe that

As an attempt to seek a general theoretical answer to the question why criticality appears so common, we note that essentially any system in nature is coupled to the environmental surroundings and consider information transfer between the system and the environment. The mathematical formulation of information was given in the context of the communication theory whereas entropy is identified with a measure of missing information [3]. The importance of such information transfer together with the role of entropy has been addressed in biological evolution based on random mutation and natural selection [4]: In general, every species tends to minimize its entropy, or in other words, attempts to get information from the environment. Here we propose that this information-transfer dynamics may serve as a generic and universal mechanism for dynamic scale-invariant behaviors observed in a variety of systems, including physical as well as biological systems.

The reason I am excited about these findings is that they tie together scale-free networks, Shannon information, criticality, and evolution on a common theoretical foundation.

For those interested in Shannon entropy, information, and the common confusions, I recommend Adami’s paper “Information theory in molecular biology”.


Comment #45446

Posted by Steven Thomas Smith on August 29, 2005 1:51 PM

Interestingly, A.N. Kolmogorov used mathematical biology in a sly and brave way to counter Lysenkoism in the Soviet Union. When one of T.D. Lysenko’s followers published an article with experimental data claimed to be at odds with Mendelian genetics, A.N. Kolmogorov published a response that pointed out a statistical error and complimented the author on hard work that in fact confirmed an established scientific fact with new data. This was nothing less than courageous—Lysenko used his position to imprison and murder any biologist who opposed him. Kolmogorov wished to defend scientific truth in his country and was aware of Fisher’s work on the neo-Darwinian synthesis. It was a good thing for Kolmogorov that Stalin understood the utility and international prestige that Kolmogorov brought his country.

Because of the interesting parallels between Lysenkoism and modern-day intelligent design creationism, I’ve excerpted Lysenko’s response to Kolmogorov here in full. It is worth reading to the end to see that Lysenko’s opposition to Mendelian genetics stems from the same philosophical discomfort held by intelligent design creationists: revulsion to “blind chance.” Also note Lysenko’s feigned expertise that would easily fool journalists and other nonspecialists, as well as his reliance on poorly-defined yet reasonable-sounding terms that are in fact nonsensical—what does Lysenko mean by “family”? I’ve highlighted the money quotes in bold italics, and kept the transliterated names as they appear in the original English translation.

“In Response to an Article by A. N. Kolmogoroff,” by T. D. Lyssenko, Member of the Academy

Published in Comptes Rendus (Doklady) de l’Académie des Sciences de l’URSS, 1940, Vol. XXVIII, Nº 9.

The Comptes Rendus of the Academy of Sciences of the USSR, vol. XXVII, No. 1, 1940, carries an article by the well-known mathematician A. N. Kolmogoroff, entitled «On a New Confirmation of Mendel’s Laws». In his desire to prove the truth and invulnerability of Mendel’s law, the author provides mathematical arguments, formulae and even curves. I do not believe I am competent to pass an opinion on these mathematical proofs and arguments. Moreover, as a biologist I am not interested in the question whether or not Mendel was a good mathematician. As to my opinion of Mendel’s statistics, I have expressed it on many occasions in the press and I claim that they have no bearing whatever in biology.

What I wish to say in my short Note here is that Kolmogoroff’s article, too, has no bearing on biological science.

While explaining Mendel’s principle of 3:1 segregation, Kolmogoroff says in his article that it makes no difference whether a single family or a group of many families, produced by different pairs of heterozygous parents of the Aa type, are taken for analysis.

Indeed, it may seem to Kolmogoroff that all the plants produced by different pairs of heterozygous parents of the Aa type are alike. But we, biologists, know that no two plants can be quite identical in their hereditary qualities. We know that in a family of the second generation of crossbred wheat, 150 out of 200 plants may be of the beardless type and 50 of the bearded one, but in another family of the same combination there may be as many as 190 beardless plants and only 10 bearded, and in a third family we may even find that all the 200 plants are of the ordinary beardless type, and so on and so forth. In other words, each family has its variety.

To Kolmogoroff this is of no interest, since his only concern is that the mean should coincide with the mathematical conclusions. But we, who are concerned with genetics and breeding work, cannot be indifferent to such a phenomenon.

If, for instance, we have singled out some wheat plants for reproduction within a family including a large ratio of bearded type (50 out of 200), i.e., a family showing great variety, we may be sure that the progeny of selected plants will show great variety, too. Under such conditions selective breeding may prove hard and futile. It will be different, however, if we take seed from a family in which all the 200 specimens are more uniform. For this reason seeds from various plants of the first generation of crossbreeds should not be mixed, but should be sown separately, each family apart.

That is why we, biologists, do not take the slightest interest in mathematical calculations which confirm the useless statistical formulae of the Mendelists.

Kolmogoroff’s article is based on the results of Ermolaeva’s work. But Ermolaeva in her work has shown that the progeny of various families of crossbred peas of the same combination vary, each in a different way, whereas according to Kolmogoroff the variety manifested by the plants of different families falls within the limits of mathematically admissible error. We biologists, however, do not want to submit to blind chance, even though this chance is mathematically admissible. We maintain that biological regularities do not resemble mathematical laws. We are of the opinion that in this controversy between Kolmogoroff, Member of the Academy, and post-graduate Ermolaeva, it is Ermolaeva who is in the right, and not Kolmogoroff.

I would recommend to the interested reader David Joravsky’s history The Lysenko Affair (Harvard University Press, 1970), which in Chapter 3, titled “Harmless Cranks”, says of Lysenko:

Lysenko … had the benefit of education, but the peasant style of thought survived the years he spent at the Kiev Agricultural Institute. What he did learn very well—unless it was the gift of his genes—was the art of self-advertisement. In 1927, when he was only twenty-nine years old, working at an obscure experiment station in Azerbaidjan, he managed to get a boost from Pravda itself. A feature article said he had “solved the problem of fertilizing the fields without fertilizers and minerals.” … Such miracles will seem trite to anyone who is familiar with the Soviet press, but this miracle worker was quite original.

Skinny, with prominent cheekbones and close-cropped hair [later replaced with a lank forelock], … this Lysenko gives one the sensation of a toothache. God grant him health, he is a man of doleful appearance. Both stingy with a word and unremarkable in features, except that you remember his morose eye crawling along the earth with such a look as if he were at least getting ready to kill someone. He smiled only once, this barefoot scientist. …

[T]he young man’s masterful way with journalists, his skill at using newspapers to make scientific discoveries of great practical importance, this was not ephemeral. It would be a constant feature of Lysenko’s entire career, from the Pravda article of 1927 until the end of 1964, when Pravda and all the other newspapers would finally turn against him.

The reporter of 1927 confessed that he stared at Lysenko’s notebook with ignorant awe. He did not understand the “scientific laws” by which the barefoot scientist had quickly solved his problem, without trial and error. … He made a primitive error in statistical reasoning, and he paid almost no attention to the lessons learned by previous investigators of this problem. …

Lysenko then revealed another of his chief and lasting characteristics: a total, angry refusal to give any thoughtful consideration to criticism.

Comment #45449

Posted by steve on August 29, 2005 2:07 PM

I wonder what Bergstrom and Lachmann think of Dembski.

Comment #45456

Posted by SteveF on August 29, 2005 2:26 PM

I wonder if Bergstrom and Lachmann have even heard of Dembski.

Of course if they haven’t, then I’m sure they will have heard of the Southern Baptist Theological Seminary, that fine centre of biological research.

Comment #45500

Posted by Grey Wolf on August 29, 2005 4:57 PM

Funny… the one article specifically directed at Neurode, answering his “beef” with evolution, is the one he doesn’t comment upon… why is that, do you think? I know he is still around; (s)he has posted after this article was added.

Grey Wolf

PD: who, in accordance with the Troll Theory, predicts Neurode will continue to ignore the article, or post saying that it doesn’t actually answer his/her problems without actually producing evidence for his/her position.

PPD: the problem with Troll Theory, of course, is that its invocation might be enough to cause it to fail. So if you were thinking of answering, Neurode, throw in something about how to falsify ID, will you?

Comment #45504

Posted by SteveF on August 29, 2005 5:05 PM

The good thing about the Troll Theory is that it predicts both that neurode will turn up and that he won’t turn up.

Comment #45505

Posted by Grey Wolf on August 29, 2005 5:13 PM

SteveF wrote:

The good thing about the Troll Theory is that it predicts both that neurode will turn up and that he won’t turn up.

Actually, troll theory predicts that the troll will never provide evidence for his/her position, which obviously can be done by turning up and posting an empty post, or by not turning up at all. But it *can* be falsified, so it *is* a proper theory unlike, say, ID.

Hope that helps,

Grey Wolf

Comment #45645

Posted by raj on August 30, 2005 4:59 AM

Given the incomplete knowledge of how “information” is encoded in the human genome, it strikes me as being rather a waste of time to speculate as to whether, how and to what extent Shannon’s information theorems apply. There was an article in a recent issue of SciAm that suggests that the encoding in the human genome is far more complex than had previously been believed.

Comment #45707

Posted by Steven Thomas Smith on August 30, 2005 11:13 AM

Given the incomplete knowledge of how “information” is encoded in the human genome, it strikes me as being rather a waste of time to speculate as to whether, how and to what extent Shannon’s information theorems apply.

It’s an alphabet—Shannon applies. Are you aware of the development of the central dogma of molecular biology and how it was motivated by information theory?

Crick wrote:

The central dogma was put forward [4] at a period when much of what we now know in molecular genetics was not established. All we had to work on were certain fragmentary experimental results, themselves often uncertain and confused, and a boundless optimism that the basic concepts involved were rather simple and probably much the same in all living things. In such a situation well constructed theories can play a really useful part in stating problems clearly and thus guiding experiment.

The two central concepts which had been produced, originally without any explicit statement of the simplification being introduced, were those of sequential information and of defined alphabets. Neither of these steps was trivial. Because it was abundantly clear by that time that a protein had a well defined three dimensional structure, and that its activity depended crucially on this structure, it was necessary to put the folding-up process on one side, and postulate that, by and large, the polypeptide chain folded itself up. This temporarily reduced the central problem from a three dimensional one to a one dimensional one. It was also necessary to argue that in spite of the miscellaneous list of amino-acids found in proteins, (as then given in all biochemical textbooks) some of them, such as phosphoserine, were secondary modifications; and that there was probably a universal set of twenty used throughout nature. In the same way minor modifications to the nucleic acid bases were ignored; uracil in RNA was considered informationally analogous to thymine in DNA, thus giving four standard symbols for the components of nucleic acid.

The principal problem could then be stated as the formulation of the general rules for information transfer from one polymer with a defined alphabet to another.

Comment #45713

Posted by 'Rev Dr' Lenny Flank on August 30, 2005 11:56 AM

It’s an alphabet—Shannon applies.

I’m curious — who is the sender? Who is the receiver? What is the “message”?

Comment #45767

Posted by Steven Thomas Smith on August 30, 2005 3:02 PM

who is the sender? Who is the receiver? What is the “message”?

Lenny, the argument behind the central dogma is motivated by the concept of Shannon entropy, or how many bits of information each letter contains.

There are 20 amino acid “letters” and 4 DNA “letters.” Prohibiting a loss of information, it is impossible to translate between the amino acid “letter” ‘Leucine’, say, and one of ‘A’, ‘C’, ‘G’, ‘T’, just like in my message to you, it is impossible to translate an ASCII ‘A’ to a single bit ‘0’ or ‘1’. Similarly, we cannot translate between ‘Leucine’ and the “2d order extension” alphabet ‘AA’, ‘AC’, …, ‘TT’ (sixteen “letters”), nor can we translate between an ASCII ‘A’ and the 2-bit alphabet ‘00’, ‘01’, ‘10’, ‘11’.

However, three DNA letters do the trick (or seven bits to encode ASCII letters). There are 4*4*4 = 64 letters in this 3d order extension alphabet (which is coded from DNA to mRNA), more than enough to encode for 20 amino acids. In our example, the mRNA letters ‘CUG’, ‘CUC’, ‘CUU’, and ‘CUA’ are translated into the amino acid ‘Leucine’. And ‘1000001’ translates to ASCII ‘A’. George Gamow and Francis Crick realized that it would be easy for nature to translate a 3d order extension alphabet of DNA (64 letters) into amino acids (20 letters), but that the reverse translation would not be unique:

The transfer protein –> RNA (and the analogous protein –> DNA) would have required (back) translation, that is the transfer from one alphabet to a structurally quite different one. It was realized that forward translation involved very complex machinery. Moreover, it seemed unlikely on general grounds that this machinery could easily work backwards. The only reasonable alternative was that the cell had evolved an entirely separate set of complicated machinery for back translation, and of this there was no trace, and no reason to believe that it might be needed.

I decided, therefore, to play safe, and to state as the basic assumption of the new molecular biology the non-existence of transfers of class III [Protein –> Protein/RNA/DNA]. Because these were all the possible transfers from protein, the central dogma could be stated in the form “once (sequential) information has passed into protein it cannot get out again”.

Note that the translation DNA –> Protein (64 letter –> 20 letter) is consistent with the neo-Darwinian synthesis, but that the translation Protein –> DNA would allow for Lamarckism, and that I have ignored the reality that this translation involves RNA.
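A toy Python sketch of this asymmetry, using a handful of entries from the standard genetic code (leucine really does have six codons; the full table has 64 entries):

```python
# A fragment of the standard genetic code (mRNA codon -> amino acid).
# Only 7 of the 64 entries are shown; leucine alone has six codons.
CODON_TABLE = {
    "CUU": "Leu", "CUC": "Leu", "CUA": "Leu", "CUG": "Leu",
    "UUA": "Leu", "UUG": "Leu",
    "GUG": "Val",   # one of valine's four codons, for contrast
}

def translate(mrna):
    """Forward translation: read non-overlapping triplets left to right."""
    return [CODON_TABLE[mrna[i:i + 3]] for i in range(0, len(mrna), 3)]

print(translate("CUGCUCUUA"))                    # ['Leu', 'Leu', 'Leu']

# Back-translation is not a function: 'Leu' has six preimages, so a
# protein sequence does not determine a unique mRNA, as Crick observed.
print([c for c, aa in CODON_TABLE.items() if aa == "Leu"])
```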

Shannon’s famous paper addresses the very case of translation between such alphabets:

Shannon wrote:

The ratio of the entropy of a source to the maximum value it could have while still restricted to the same symbols will be called its relative entropy. This is the maximum compression possible when we encode into the same alphabet. One minus the relative entropy is the redundancy. The redundancy of ordinary English, not considering statistical structure over greater distances than about eight letters, is roughly 50%. This means that when we write English half of what we write is determined by the structure of the language and half is chosen freely. The figure 50% was found by several independent methods which all gave results in this neighborhood. One is by calculation of the entropy of the approximations to English. A second method is to delete a certain fraction of the letters from a sample of English text and then let someone attempt to restore them. If they can be restored when 50% are deleted the redundancy must be greater than 50%. A third method depends on certain known results in cryptography.

Two extremes of redundancy in English prose are represented by Basic English and by James Joyce’s book “Finnegans Wake”. The Basic English vocabulary is limited to 850 words and the redundancy is very high. This is reflected in the expansion that occurs when a passage is translated into Basic English. Joyce on the other hand enlarges the vocabulary and is alleged to achieve a compression of semantic content.

Though we have been implicitly assuming that the entropy of the 3d order extension of DNA equals log_2(64) = 6 bits and that all letter sequences are equally likely, this cannot be the case—most creatures created by a uniform sampling of these letters wouldn’t even be monsters; they just couldn’t exist, much less self-replicate. So a realistic quantification of the entropy of DNA is difficult, and depends on the constraints that the individual created by the DNA sequence can live and self-replicate. It also depends upon the historical evolutionary constraints that generated the current DNA library.
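To see why 6 bits is only an upper bound, here is a quick sketch with invented codon frequencies (real codon-usage tables are organism-specific); any non-uniform distribution drops the per-codon entropy below log_2(64):

```python
import math

def entropy_bits(probs):
    """H = -sum p*log2(p), in bits (terms with p == 0 contribute 0)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([1 / 64] * 64))      # 6.0 -- the uniform upper bound

# A made-up skewed distribution: a few common codons and a tail of
# rare ones, summing to 1.  (Not real codon-usage data.)
skewed = [0.10] * 4 + [0.02] * 30
print(entropy_bits(skewed))             # ~4.7 bits, well below 6
```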

In this context, the answer to your original question is:

Information source: DNA
Transmitter: RNA
Channel: Biochemistry
Receiver: Amino acids
Destination: Protein

Under the highly simplified assumption of uniformly distributed DNA letters, the Shannon efficiency of the central dogma equals log(20)/log(64) = 72%. An intelligently designed code can achieve perfect or near-perfect efficiencies of 100%.
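That 72% figure is just the ratio of the two alphabet entropies under the uniform assumption:

```python
import math

# Shannon efficiency of coding 20 amino acids with 64 codons, under the
# simplifying assumption that all letters are equally likely.
print(math.log(20) / math.log(64))    # 0.7203... -> about 72%
```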

I wonder what the explanation of intelligent design “theorists” would be for this extraordinarily inefficient coding scheme for generating proteins.

Comment #45817

Posted by 'Rev Dr' Lenny Flank on August 30, 2005 10:45 PM

Receiver: Amino acids

Um, perhaps I’m just dense, but aren’t amino acids nothing but a collection of dead atoms?

Please explain to me, if you would, how a collection of dead atoms can act as a “reciever of information”.

Can you think of any OTHER “recievers of information” that are … well . . dead?

Comment #45845

Posted by ts (not Tim) on August 31, 2005 4:47 AM

Can you think of any OTHER “recievers of information” that are … well . . dead?

Apparently vitalism isn’t dead.

Browsers and servers are handy examples.

Comment #45868

Posted by Steven Thomas Smith on August 31, 2005 9:25 AM

Apparently vitalism isn’t dead.

I love Julian Huxley’s remark that Bergson’s élan vital is no better an explanation of life than is explaining the operation of a railway engine by élan locomotif.

Comment #46015

Posted by 'Rev Dr' Lenny Flank on September 1, 2005 4:24 AM

Apparently vitalism isn’t dead.

Browsers and servers are handy examples.

Apparently, neither is anthropomorphizing.

Browsers and servers don’t “do” anything with “information”, any more than mailboxes do.

Comment #46030

Posted by Steven Thomas Smith on September 1, 2005 8:03 AM

Lenny,

If you’re interested, please read Shannon for some insight here—it’s really a beautiful paper, with mostly freshman-level math or below. Information is simply produced when one “message” is chosen from a set of messages. The amount of information depends upon the likelihood of that message. The recipient of the information can be a person or thing:

Shannon wrote:

5. The destination is the person (or thing) for whom the message is intended.

This is the only reference to a person in Shannon’s paper, and it’s superfluous—any physical object may be the destination of information. There’s no supernatural life forces or intelligence or whatever required to define these basic concepts.

Comment #46034

Posted by ts (not Tim) on September 1, 2005 8:19 AM

Browsers and servers don’t “do” anything with “information”, any more than mailboxes do.

You’re about as knowledgeable on this subject as the IDiots who say that evolution contradicts the 2LOT.

Comment #46039

Posted by ts (not Tim) on September 1, 2005 8:44 AM

BTW, the accusation of “anthropomorphizing” is absurd; I said nothing about human (anthro) qualities, just that servers and browsers are “dead” “recievers (sic) of information”. They are, after all, information processing systems, whereas a mailbox that passively holds a letter is not, and information processing systems indeed do things with information – like cause certain sequences of pixels to appear on a screen, or cause specific changes in the magnetic polarities of spots on a disk. I suggest, Lenny, that you should stick with what is apparently the only thing you know about, which is roasting trolls.

Comment #46043

Posted by 'Rev Dr' Lenny Flank on September 1, 2005 8:54 AM

servers and browsers are “dead” “recievers (sic) of information”.

No they’re not — they no more “process” or “recieve” information than mailboxes “process” or “recieve” postal letters.

Humans do all those things. Servers, browsers, and mailboxes are simply the mechanisms that humans use to do those things.

The rest of your silly dick-waving is ignored, as usual.

Comment #46047

Posted by 'Rev Dr' Lenny Flank on September 1, 2005 9:05 AM

This is the only reference to a person in Shannon’s paper, and it’s superfluous—any physical object may be the destination of information. There’s no supernatural life forces or intelligence or whatever required to define these basic concepts.

“Destination of information” is different than a “receiver”. My mail ends up in my mailbox (its “destination”), but if I don’t read any of it, no “information” is “transmitted”. What is “information” if nobody sees it?

And I have already argued against “vitalism” — after all, I pointed out that the DNA “sender” and molecular “recievers” are dead, utterly dead. Indeed, they are just dead molecules interacting with other dead molecules, according to the laws of chemistry and physics – none of which require any “intelligence” or “vital force” or “supernatural forces” or somesuch. And, it seems to me, it doesn’t require any “information”, either, any more than crystal reproduction does. (When a crystal reproduces, who “sends” the “information” to do this? Who “recieves” it?)

So again I ask, what is “information” if nobody sees it? Who reads the “information” in DNA? Who “receives” it?

I don’t see any need to invoke “information theory” in the process of life. Just the plain old ordinary laws of chemistry and physics. Life isn’t special. It’s just chemistry – just big molecules interacting with other big molecules. No “informaiton” needed or required.

Comment #46048

Posted by ts (not Tim) on September 1, 2005 9:08 AM

No they’re not — they no more “process” or “recieve” information than mailboxes “process” or “recieve” postal letters.

I just pointed out how that isn’t so, you silly goose.

Humans do all those things. Servers, browsers, and mailboxes are simply the mechanisms that humans use to do those things.

This is the same sort of fallacious argument that the IDiots make – humans design things; anything designed must be designed by a human. Just because humans do something doesn’t mean that nothing else can.

The rest of your silly dick-waving is ignored, as usual.

It’s yours that is waving around, with your blatantly ignorant vitalist drivel about “dead” things and “anthromorphizing”. You’re making a bigger fool of yourself here than neurode and Blast combined.

Comment #46049

Posted by ts (not Tim) on September 1, 2005 9:11 AM

And I have already argued against “vitalism” — after all, I pointed out that the DNA “sender” and molecular “recievers” are dead, utterly dead.

That is an argument for vitalism – that “living” things have some special property, such that only they are capable of sending or receiving messages.

Comment #46050

Posted by ts (not Tim) on September 1, 2005 9:14 AM

If you want to understand life, don’t think about vibrant, throbbing gels and oozes, think about information technology.
— Richard Dawkins, The Blind Watchmaker, 1986, Norton, p. 112.

Comment #46054

Posted by W. Kevin Vicklund on September 1, 2005 9:51 AM

Let’s look at an analogous argument:

If a tree falls in a forest, and no one is there to hear it, does it make a sound?

Lenny says no.

ts says yes.

The difference is how they each conceptualize the phrase “a sound”.

As an EE who sometimes works with signal processing, I agree with ts.

Comment #46055

Posted by ts (not Tim) on September 1, 2005 10:03 AM

I don’t see any need to invoke “information theory” in the process of life. Just the plain old ordinary laws of chemistry and physics. Life isn’t special. It’s just chemistry — just big molecules interacting with other big molecules. No “informaiton” needed or required.

There’s a difference between laws and theories; laws describe, theories explain and predict. “the plain old ordinary laws of chemistry and physics” are not enough to explain biodiversity or make predictions about fossil records, drug resistance in bacteria, and so on – we need to invoke the theory of evolution to do that. And there are many details that information theory explains and predicts, as spelled out in the articles cited above.

Bodies and brains and books are also “just big molecules interacting with other big molecules”, but that’s not enough to explain or predict the sorts of changes in behavior that are exhibited after people read books, or after they aim throat vibrations at each other’s ears. Computers are “just big molecules interacting with other big molecules”, but that’s not enough to explain or predict what happens when computers running server and browser programs exchange signals. All this can be explained by information theory and the concepts of senders, receivers, messages, and information. To deny that browsers and servers receive information is to obstinately deny the plainly obvious and to refuse to use ordinary language. To deny that browsers and servers do things is absurd when staring at a browser doing something on the screen in front of you. Formatting messages, scrolling windows, responding to mouse clicks, and so on are all doing things. No human is doing these things, the computer is; no little man is hiding inside.

Comment #46057

Posted by ts (not Tim) on September 1, 2005 10:10 AM

The difference is how they each conceptualize the phrase “a sound”

I conceptualize it in scientific terms. The tree causes the air to vibrate, with various causal consequences. The notion that it doesn’t count as sound unless some human hears it is a primitive pre-scientific anthropocentric vitalistic notion.

Comment #46062

Posted by 'Rev Dr' Lenny Flank on September 1, 2005 10:34 AM

If a tree falls in a forest, and no one is there to hear it, does it make a sound?

Lenny says no.

More accurately, I say “if there’s no one around, then how do you know a tree fell?”

Comment #46064

Posted by ts (not Tim) on September 1, 2005 10:39 AM

More accurately, I say “if there’s no one around, then how do you know a tree fell?”

It was asserted that a tree fell. Really, Lenny, you should stick with what you’re good at – posting boilerplate questions for trolls.

Comment #46066

Posted by Ric on September 1, 2005 10:48 AM

“ posting boilerplate questions for trolls.”

They never answer them though, do they?

Regular Churchgoer, Ts?

Comment #46069

Posted by ts (not Tim) on September 1, 2005 11:04 AM

They never answer them though, do they?

No, which is why Lenny does a service by posting them, embarrassing the IDiot trolls and showing that they know not whereof they speak. Which is also why Lenny should stick to the one thing he knows – so he doesn’t embarrass himself and show that he knows not whereof he speaks.

Regular Churchgoer, Ts?

Not since my Bar Mitzvah.

Comment #46079

Posted by Steven Thomas Smith on September 1, 2005 12:05 PM

Under the highly simplified assumption of uniformly distributed DNA letters, the Shannon efficiency of the central dogma equals log(20)/log(64) = 72%. An intelligently designed code can achieve perfect or near-perfect efficiencies of 100%.

I wonder what the explanation of intelligent design “theorists” would be for this extraordinarily inefficient coding scheme for generating proteins.

On the subject of coding efficiency, I’d like to get back to sticking it to the intelligent design creationists.

Another great subject in coding theory is “error correcting” codes. By using extra “error correction” bits in the code, a decoder can determine both that an error occurred and how to correct it. You probably use these codes every day on CDs and DVDs; these are called Reed-Solomon codes. (These were developed at my home institution in the late 1950s by Irving Reed and Gustave Solomon—Irving Reed once ruminated to me that he never received a penny for this invention, but that’s a different story.)

So what about that 72% coding efficiency for the Central Dogma? The mRNA takes lg(64) = 6 bits to encode lg(20) = 4.3 bits of information (simplified uniform assumptions). What happens to those extra 1.7 bits for every amino acid that’s created?

An intelligent designer would use these bits to correct transcription errors. Or natural selection could chance upon such a method if there were a selective advantage for it.
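To make “use these bits to correct errors” concrete, here is a textbook Hamming(7,4) code in Python: it spends 3 parity bits per 4 data bits to locate and fix any single flipped bit. (This is a generic coding-theory illustration, not a claim about actual DNA machinery; Reed-Solomon codes exploit the same principle with heavier algebra.)

```python
# A textbook Hamming(7,4) code: 4 data bits plus 3 parity bits, enough
# to locate and correct any single-bit error.

def hamming_encode(d1, d2, d3, d4):
    """Return the 7-bit codeword [p1, p2, d1, p4, d2, d3, d4]."""
    p1 = d1 ^ d2 ^ d4          # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming_decode(code):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(code)
    syndrome = 0
    for k in (1, 2, 4):                  # re-run each parity check
        parity = 0
        for pos in range(1, 8):
            if pos & k:
                parity ^= c[pos - 1]
        if parity:
            syndrome += k
    if syndrome:                         # syndrome = bad bit's position
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = hamming_encode(1, 0, 1, 1)
word[4] ^= 1                   # flip one bit "in transit"
print(hamming_decode(word))    # [1, 0, 1, 1] -- the error is corrected
```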

I’m no expert in this field, but a quick Google search revealed the paper “Is there an error correcting code in the base sequence in DNA?” (Biophys J. 1996 Sep. 71(3):1539-44). Bottom line:

The sequence of bases in DNA is also a digital code consisting of four symbols: A, C, G, and T. Does DNA also contain an error correcting code? … We developed an efficient procedure to determine whether such an error correcting code is present in the base sequence. We illustrate the use of this procedure by using it to analyze the lac operon and the gene for cytochrome c. These genes do not appear to contain such a simple error correcting code.

This negative result does not present any difficulty to an evolutionary biologist, just as it’s not a problem that our eyes are the products of very poor evolutionary design, or that we all don’t sprout wings and laser gun weapons.

But how do intelligent design creationists account for this waste? Why would an intelligent designer literally throw away 1.7 bits for every amino acid that must be created in an organism??! How wasteful in information is this? I weigh 175 lbs, and an amino acid weighs about 135 daltons (5e–25 lbs), so to create me, that’s a loss of 175/5e–25*1.7 = 6e26 bits, which is 62 yottabytes (a yottabyte is ~10^24 bytes, much more than Avogadro’s number). A Terabyte RAID costs about $900, so just to store all those lost bytes would cost about $51 petadollars ($51e15), or over 4000 times the U.S. GDP!
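A back-of-envelope check of that arithmetic, using the comment’s own rounded inputs:

```python
# Back-of-envelope check of the figures above, using the same rounded
# inputs quoted in the comment (175 lbs, 5e-25 lbs per amino acid,
# 1.7 wasted bits per amino acid).
body_lbs = 175
amino_lbs = 5e-25
bits_wasted_per_amino = 1.7

n_aminos = body_lbs / amino_lbs                  # ~3.5e26 amino acids
lost_bits = n_aminos * bits_wasted_per_amino     # ~6e26 bits
lost_yib = lost_bits / 8 / 2**80                 # binary yottabytes
print(f"{lost_bits:.2e} bits, {lost_yib:.0f} yottabytes")
# ~5.95e+26 bits, ~62 yottabytes -- matching the numbers above
```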

This is intelligent design? How does the ID movement account for this waste? Does the Flying Spaghetti Monster have stock in EMC Corporation or something?

Comment #46085

Posted by 'Rev Dr' Lenny Flank on September 1, 2005 12:17 PM

More accurately, I say “if there’s no one around, then how do you know a tree fell?”

It was asserted that a tree fell.

It’s asserted that ID happens. (shrug)

Comment #46147

Posted by ts (not Tim) on September 2, 2005 1:16 AM

It’s asserted that ID happens. (shrug)

Yeah, shrug. “A tree falls” was the antecedent of a conditional. I realize that this is beyond your level of comprehension, so let’s answer your question this way: At noon a satellite photo showed the tree standing; at 12:05 a satellite photo showed that the tree was on the ground. That’s how we know whether the tree fell. Mr. Vicklund’s question was: did the tree make a sound?

Sheesh.