Nick Matzke posted Entry 3204 on June 22, 2007 11:47 PM.
Trackback URL: http://www.pandasthumb.org/cgi-bin/mt/mt-tb.fcgi/3193

Given all of the recent ignorant yammering about “junk DNA” on the Discovery Institute’s blog and other ID blogs – unfortunately partially derived from a fair bit of ignorant yammering in the science media on the same topic – I think it is worth posting a very simple and insightful post from April 2007 by T. Ryan Gregory entitled “The Onion Test.” Gregory is a professor at the University of Guelph and runs genomesize.org, an online database of animal genome sizes. He has recently become one heck of a science blogger (at Genomicron) and has been doing a yeoman’s job of attempting to explain patiently and calmly to the world what the real scientific issues are with genome size, the “junk DNA” concept, and the problems with the ubiquitous-but-bogus storyline about junk DNA. Said storyline goes something like this: “Scientists have found that junk DNA is functional! Weren’t scientists (er, other scientists) stupid to think it was junk! What morons! Three cheers for our pet idea, which is that junk DNA does X.” ID advocates, who don’t even have an “X” and instead just riff off the vague idea that someone somewhere has explained what the function of “junk DNA” is, have played this storyline for all it’s worth, adding a completely vapid “We told you so!” on top of it.

For a dose of reality, I recommend that everyone read Gregory’s Onion Test. I quote it below for your convenience.

The onion test.

I am not sure how official this is, but here is a term I would like to coin right here on my blog: “The onion test”.

The onion test is a simple reality check for anyone who thinks they have come up with a universal function for non-coding DNA¹. Whatever your proposed function, ask yourself this question: Can I explain why an onion needs about five times more non-coding DNA for this function than a human?

The onion, Allium cepa, is a diploid (2n = 16) plant with a haploid genome size of about 17 pg. Human, Homo sapiens, is a diploid (2n = 46) animal with a haploid genome size of about 3.5 pg. This comparison is chosen more or less arbitrarily (there are far bigger genomes than onion, and far smaller ones than human), but it makes the problem of universal function for non-coding DNA clear².

Further, if you think perhaps onions are somehow special, consider that members of the genus Allium range in genome size from 7 pg to 31.5 pg. So why can A. altyncolicum make do with one fifth as much regulation, structural maintenance, protection against mutagens, or [insert preferred universal function] as A. ursinum?


Left, A. altyncolicum (7 pg); centre, A. cepa (17 pg); right, A. ursinum (31.5 pg).

There you have it. The onion test. To be applied to any ambitious claims that a universal function has been found for non-coding DNA.

____________

¹ I do not endorse the use of the term “junk DNA”, which I think has deviated far too much from its original meaning and is now little more than a loaded buzzword; the descriptive term “non-coding DNA” is what I use to refer to the majority of eukaryotic sequences (of various types) that do not encode protein products.

² Some non-coding DNA certainly has a function at the organismal level, but this does not justify a huge leap from “this bit of non-coding DNA [usually less than 5% of the genome] is functional” to “ergo, all non-coding DNA is functional”.
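For a sense of scale: the C-values quoted above are masses of DNA in picograms (pg) per haploid nucleus, conventionally converted at roughly 1 pg ≈ 978 Mbp. Here is a minimal sketch of the comparison, using only the figures from Gregory’s post and that standard conversion factor:

```python
# Convert C-values (pg of DNA per haploid genome) to base pairs.
# 1 pg of double-stranded DNA corresponds to roughly 978 million bp.
PG_TO_BP = 978e6

genomes_pg = {
    "Homo sapiens": 3.5,
    "Allium altyncolicum": 7.0,
    "Allium cepa (onion)": 17.0,
    "Allium ursinum": 31.5,
}

for species, pg in genomes_pg.items():
    print(f"{species}: ~{pg * PG_TO_BP / 1e9:.1f} Gbp")

# The two ratios the onion test turns on:
onion_vs_human = 17.0 / 3.5   # ~4.9, the "about five times" above
within_allium = 31.5 / 7.0    # 4.5-fold spread within one genus
print(onion_vs_human, within_allium)
```

No proposed universal function explains either ratio, which is the whole point of the test.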

I will leave it here for now. It is possible that The Onion Test doesn’t seem quite as compelling to other readers as it does to me. I have some ideas about why this might be true – a lot depends on what background knowledge you bring to this – but I am going to wait for people’s comments to assess this further. I would particularly like ID advocates to try to explain why The Onion Test doesn’t nuke their claims that junk DNA is functional.

The same goes for Andrew Pellionisz and whoever else runs junkdna.com. (By the way, I am pretty well convinced that junkdna.com is a crank science website. I would tell readers to visit junkdna.com and give me their own assessments, except it crashes my browser when I click on it. That plus the egregious abuse of HTML formatting are pretty bad signs all by themselves.)

————————–
Note: I agree with Ryan Gregory’s various criticisms of the term “junk DNA”, in particular because “junk” DNA might have some non-sequence-specific “function” based purely on bulk amount of DNA. But this is a different issue than the much bigger problem of the now-common but wildly wrong idea that scientists have determined that most/all “junk DNA” is functional, and none of the “junk DNA is functional!” people are claiming that the function is something pedestrian like “taking up space.”


Comment #184262

Posted by Bob O'H on June 23, 2007 12:24 AM (e)

Yep, junkdna.com is either a crank site, or is doing a very good impression of one.

I sort-of like the onion test. The only change I’d like to see is to have two similar species with very different amounts of <s>junk</s> non-coding DNA, just to drive the point home. Humans and onions are a bit too different, so someone is going to argue that humans need more <s>junk</s> non-coding DNA because they’re more complex (I can’t see exactly how the argument will go, but you know you’ll want to respond by scrawling “STUPID” across their forehead in bright green letters, with plaid trim). They’ll conveniently ignore the wide distribution of species with large amounts of <s>junk</s> <s>non-coding</s> highly repetitive DNA across the Linnean hierarchy.

Bob

Comment #184263

Posted by dunnoMaybe on June 23, 2007 12:27 AM (e)

Onions have large cells which make them useful as samples in an introductory class on the microscope. The large genome size suggests that non-coding DNA may have (inter alia) a structural role in the cell: mechanics of cell division, nuclear integrity, etc. I think the “junk” hypothesis is the last that should be considered. It’s a non-answer to a good question (“what is that for?”), much like so-called vestigial organs.

Comment #184265

Posted by GeoMor on June 23, 2007 12:44 AM (e)

junkdna.com itself is basically just a blog where Pellionisz posts news articles and stuff. He adds some of his own color commentary there, but by and large, the stuff about his own theories is relegated to his many other web sites. I think it’s very hard to tell if it’s “crank science” or not, because he seems to keep the mathematical details of his theories confidential, supposedly for IP reasons. I previously wrote a summary of what I think it’s about, which he described as “pretty much correct”.

I don’t think the basic idea is obviously ludicrous, but there is not nearly enough detail to really know. Certainly though, when you consider his (1) secrecy about what his theories actually are, (2) strident claims about their importance (e.g. that it’s related to Parkinson’s and Alzheimer’s and various other severe diseases), and (3) apparent willingness to exhaustively argue with people on blogs and Wikipedia, all this taken together has to make you a little worried.

Comment #184266

Posted by tacitus on June 23, 2007 1:54 AM (e)

You know, I don’t really see why IDists should get so hung up over a little “junk DNA” in the human genome. They harp on about how an efficient design would not contain so much junk, but if our supposed designer was so efficient in the first place, why on earth did it take over four billion years to implement the design of human beings? Given the right scientific know-how and technology, one could easily see how a designer could achieve the same result in, oh say, 6,000 years. :-)

Comment #184268

Posted by Steve Reuland on June 23, 2007 2:09 AM (e)

The junkdna.com website doesn’t crash my browser (I’m using Firefox). But I agree that it looks like a crank website.

Comment #184269

Posted by IanR on June 23, 2007 2:32 AM (e)

Actually there’s an easy IDist answer to the onion test. Obviously onions have a much more complex evolutionary future than do we. They need those genes for their descendants, the future sapient beings of the world. So the designer has “frontloaded” them with the extra genes. Of course, that also means that hexaploid bread wheat didn’t evolve from einkorn and emmer wheat - instead, the diploid einkorn and the tetraploid emmer are degenerate species (since, as creationists keep saying, evolution cannot add information, it can only remove it…)

Oww, the implications of my nonsense are making my head hurt.

Comment #184271

Posted by Bob O'H on June 23, 2007 3:35 AM (e)

IanR - hey, wow. You have a prediction out of all of that. Bread wheat has a big future, because of all the junk DNA, but rice doesn’t have a lot (that’s why they sequenced it first). Therefore, vodka and not sake will be the staple drink in the future.

And that will make your head hurt.

Bob

Comment #184282

Posted by Torbjörn Larsson, OM on June 23, 2007 4:56 AM (e)

Nick Matzke wrote:

… I am pretty well convinced that junkdna.com is a crank science website.

I’m glad someone takes notice of Pellionisz’s antics.

While Firefox can open the site, it is pretty bad. The first page is filled with ‘news’, and links to “related sites” such as this one. It is a Norwegian television channel forum with a thread where a ‘communication adviser’ is cited by a ‘department director’ who tries to describe what a gene is.

Some quotes to remember:

truud2 wrote:

The human DNA is a biological Internet and superior in many aspects to the artificial one. [Bold original.]

truud2 wrote:

In addition the DNA may obviously store all harmonic waves of 150 megahertz too, thus of course also visible light. The 22th octave of 150 megahertz lies straight in this range. The color of this light radiation is by the way blue. Is it a coincidence that the solar radiation is broken by the terrestrial atmosphere just in such a way that we live in a world with blue sky?

For example we speak today nearly naturally of the »genetic code«, thus of a systematic information coding. But the past genetics stopped here and settled the remainder of the work exclusive with the help of chemistry, instead of consulting also language experts. [Bold and italics original.]

One of the links goes to an old site. There one can find a list of ‘peer-reviewed’ published material, such as keynote lectures in a symposium for biological nanostructures and a single publication.

The paper is a review (!) paper published in Cerebellum, and seems legit.

The problem I see is that in addition to what GeoMor describes I don’t find much of a review of an area but an unsubstantiated prediction. It is very open, but apparently anything biological that can be loosely modeled as fractal is supposed to be caused by anything resembling fractal structures in DNA:

Malcolm J. Simons & Andras J. Pellionisz wrote:

… the ‘fractal-like’ properties of P-cells, as well as of other organelles and organs (e.g., Cardial coronary arteries), and the ‘fractal-like’ self-similar repetitions known to be in DNA since 1994, are in a causal relationship, with fractal sets of DNA generating fractal anatomical (structural proteins), physiological (metabolic proteins) and related pathological formations. [References removed.]

And the evidence is that some morphological structures can indeed be loosely modeled as fractals:

Malcolm J. Simons & Andras J. Pellionisz wrote:

The morphological findings reported elsewhere and summarized here support the prediction.

Summary:
- Crank science site layout.
- Crank science links.
- ‘Peer reviewed publications’, which aren’t and are often presented in areas unrelated to evolutionary biology.
- Unsubstantiated prediction as proof.

If I am wrong on the crank idea or the facts, I welcome any corrections.

Comment #184283

Posted by Torbjörn Larsson, OM on June 23, 2007 5:05 AM (e)

Oh, and I forgot to mention that the only ‘relation’ between the Norwegian site and Pellionisz’s site seems to be that it linked back to Pellionisz’s site. I assume that the other ‘related sites’ are similarly listed from a link search.

Comment #184293

Posted by blogsta on June 23, 2007 7:58 AM (e)

The onion test is more formally known as the “C-value paradox”. It has been known and discussed for ages of course. There are some nice articles on it in particular by Dan Hartl, who shows that Drosophila has a small genome because of a high rate of random deletions.

Comment #184294

Posted by David Stanton on June 23, 2007 8:02 AM (e)

dunnoMaybe,

What you are describing is termed the nucleotypic effect. That is to say that the presence of DNA in the nucleus can have some phenotypic effect regardless of sequence. The reason is that increased amounts of DNA tend to increase the size of the nucleus and the size of the cell. There is some evidence to suggest that this is due to a constraint on the ratio of the size of the nucleus to the size of the cytoplasm and may involve the number of nuclear pores required for transport. This is why some polyploid plants tend to be larger and have larger cells than the parental species.

Of course, this does not mean that “junk” DNA has a function. If you want to make a plant bigger, whether you are a human or a god, it would probably be better to use polyploidy than SINEs, LINEs and pseudogenes for a number of reasons. For example, SINEs and LINEs can cause insertional mutagenesis, hybrid dysgenesis, chromosomal rearrangements, etc. Polyploidy can provide genetic material free of constraint that can mutate to create new genes. Ohno showed us the importance of polyploidy for evolution and he was correct.

Increasing amounts of DNA can also have deleterious effects due to the fact that it takes time and energy to duplicate DNA. This can increase the cell cycle time and even the generation time for some species. So once again, there is a price to be paid for “junk” DNA and it can only be tolerated for so long. If it cannot be eliminated, the end result could be extinction.

Comment #184320

Posted by Robert King on June 23, 2007 11:01 AM (e)

I think comparing the onion genome to the human genome is hugely interesting but, perhaps, the more devastating point in the onion test is that different onions can have such remarkably different sized genomes. As I’ve commented elsewhere, “no junk” is not a prediction of ID itself. Something can contain redundancy and even useless parts and still be designed. By making “no junk” a prediction of ID the ID-ers seem to be projecting the additional - and unnecessary - theological concept of the Christian God onto their designer. That is, they are designing the Designer, aka making him/her/they/it (or whatever) conform to their religious notions. I’m sure that’s just an oversight on their part. To me the onion test clearly points to the existence of several designers, sub specialty: onion; it’s much like comparing Windows Vista, Mac OS X and Linux.

Comment #184323

Posted by Science Avenger on June 23, 2007 11:53 AM (e)

Torbjorn Larsson said:

- Crank science site layout.

What are the attributes of a crank science layout?

Comment #184327

Posted by Torbjörn Larsson, OM on June 23, 2007 12:54 PM (e)

Science Avenger wrote:

What are the attributes of a crank science layout?

Well, if I read myself correctly, it seems to be a “first page … filled with ‘news’”. ;-)

But actually, the page is so awful, filled with text, and with the usual small and large fonts, that I left it mostly undefined and for anyone to define for himself/herself. I would use Baez’s crackpot index - that first page would score rather high, I would think.

Comment #184338

Posted by nickmatzke on June 23, 2007 2:32 PM (e)

What are the attributes of a crank science layout?

It is hard to describe exactly, but for some reason cranks often use extreme formatting, wild changes in font, color, bolding, all caps, etc. This makes it very hard for normal people to read. But it is highly significant to the crank.

I am speculating, but here are some possible psychological causes: (1) cranks often get ignored, so they feel an increased need to emphasize; (2) cranks (we are familiar with this with creationists) build their theories by obsessively pulling together “information” often ripped from the original context and without understanding the “big picture” empirically.

Comment #184341

Posted by nickmatzke on June 23, 2007 2:42 PM (e)


Regarding the C-value paradox, or better the C-value enigma: basically, C-value (haploid genome size) correlates not with “complexity” but with cell volume or nuclear volume. This is an extremely strong and striking correlation. To me this seems like an overwhelming and crushingly important fact if anyone is going to discuss the “junk DNA” question in a competent way. And yet the creationists, the news media, and even many scientists don’t seem to be aware of it.

Does anyone have any insight about why it seems so rare for this fact to enter the discussions? I am leaning towards the idea that many people simply were never trained to think like a comparative biologist.

PS: As far as explaining the C-value enigma, here is probably the definitive recent review:

Gregory, T.R. (2001). Coincidence, coevolution, or causation? DNA content, cell size, and the C-value enigma. Biological Reviews 76(1): 65-101.

Basically there are two main groups of options:

(1) the variability in genome sizes is due to junk DNA, e.g. because bigger cells experience less selection pressure against genome size; or

(2) the variability in genome size is due to the bulk amount of DNA in a cell nucleus serving some kind of non-sequence specific function, a function based on raw bulk amount of DNA. E.g., the genome size might specify the nucleus size, which would specify cell size, and cell size is under selection e.g. for rapid vs. slow growth.

Something like option #2 seems somewhat more likely to me but AFAIK the issue is not definitively resolved. But even if something like #2 is the case, it is more a case of “junk DNA is functional even though it’s still junk”, sort of like you can make a wall from a pile of any old junk – either way this is a long, long, way from the apparently popular idea that all of this “junk DNA” is serving some highly complex software-like function.

This all seems very clear to me, but it seems almost impossible to communicate to journalists and others who are not used to thinking in terms of comparative biology (I have tried a few times). I think that many people just don’t understand how much diversity there is in biology, and what a big deal it is that this diversity in genome size correlates strongly with another characteristic, cell volume. So the “discussion” in the media proceeds without the central facts available in the field.

Suggestions welcome…

Comment #184344

Posted by steve s on June 23, 2007 3:04 PM (e)

Denzel wrote:

Regarding the C-value paradox, or better the C-value enigma: basically, C-value (haploid genome size) correlates not with “complexity” but with cell volume or nuclear volume. This is an extremely strong and striking correlation. To me this seems like an overwhelming and crushingly important fact if anyone is going to discuss the “junk DNA” question in a competent way. And yet the creationists, the news media, and even many scientists don’t seem to be aware of it.

Does anyone have any insight about why it seems so rare for this fact to enter the discussions? I am leaning towards the idea that many people simply were never trained to think like a comparative biologist.

PS: As far as explaining the C-value enigma, here is probably the definitive recent review:

Gregory, T.R. (2001). Coincidence, coevolution, or causation? DNA content, cell size, and the C-value enigma. Biological Reviews 76(1): 65-101.

Basically there are two main groups of options:

(1) the variability in genome sizes is due to junk DNA, e.g. because bigger cells experience less selection pressure against genome size; or

(2) the variability in genome size is due to the bulk amount of DNA in a cell nucleus serving some kind of non-sequence specific function, a function based on raw bulk amount of DNA. E.g., the genome size might specify the nucleus size, which would specify cell size, and cell size is under selection e.g. for rapid vs. slow growth.

Something like option #2 seems somewhat more likely to me but AFAIK the issue is not definitively resolved. But even if something like #2 is the case, it is more a case of “junk DNA is functional even though it’s still junk”, sort of like you can make a wall from a pile of any old junk – either way this is a long, long, way from the apparently popular idea that all of this “junk DNA” is serving some highly complex software-like function.

This all seems very clear to me, but it seems almost impossible to communicate to journalists and others who are not used to thinking in terms of comparative biology (I have tried a few times). I think that many people just don’t understand how much diversity there is in biology, and what a big deal it is that this diversity in genome size correlates strongly with another characteristic, cell volume. So the “discussion” in the media proceeds without the central facts available in the field.

Wow. This is really neat. I’ve never heard any of that before.

Comment #184347

Posted by stevaroni on June 23, 2007 3:51 PM (e)

Nick and Torbjorn said things like:

“I am pretty well convinced that junkdna.com is a crank science website”

and..

“Crank science site layout”

Guys;

You’re being too hard on them. After all, right there on front page (next to pictures of the Guggenheim) is this in big blue letters…

You only believe theories when they make predictions confirmed by scientific evidence

If only we can get the creationist websites to similarly profess their respect for actual evidence, we’ll be on to something here.

Comment #184348

Posted by Henry J on June 23, 2007 3:51 PM (e)

Something called “the onion test” should have something to do with putting condiments on hamburgers… :p

Henry

Comment #184349

Posted by David Stanton on June 23, 2007 4:45 PM (e)

Nick,

Thanks for the reference, it looks really good.

As to why this entire topic is poorly understood, I have a few speculations. This particular field is rather esoteric and not usually covered in most mainstream course work. I am only aware of some of the issues due to a graduate research project. If not for that, I doubt I would have heard about this either.

As for why the press is poorly informed, it seems to me that this topic takes more than seven words to describe adequately. Therefore, the probability of a journalist understanding, caring and educating on this topic is rather low. I’m not saying that all journalists are stupid, just that they must work under constraints like anyone else.

I do think that you are absolutely correct about comparative biology. In fact I have always felt that this is the real reason why almost no one who gets a degree in Biology remains a creationist. I also believe that lack of this background is why so few creationists ever consider the comparative approach to questions such as the origin of bombardier beetles, etc. The amount of biological diversity never ceases to amaze even professional biologists. To me, this is the most obvious evidence that there was no plan or purpose involved in evolution. If there was, then I guess you have to explain not only why God is so fond of beetles in general, but why she is so fond of weevils in particular. And let’s not forget that, if you have a choice, you should always pick the lesser of two weevils.

Comment #184350

Posted by Torbjörn Larsson, OM on June 23, 2007 4:52 PM (e)

stevaroni wrote:

respect for actual evidence

I sympathize. But I’m going to be nitpicky; the professed intention isn’t enough to free from crankdom. To connect back to nickmatzke’s discussion about large & small fonts:

John Baez wrote:

7. 5 points for each word in all capital letters (except for those with defective keyboards).

( http://math.ucr.edu/home/baez/crackpot.html )

Comment #184352

Posted by Andras Pellionisz on June 23, 2007 6:01 PM (e)

Mature debates avoid “name calling” and focus on substantial intellectual issues.

Thus, it may not serve useful purposes to “debate” if the http://www.junkdna.com “Junk DNA portal” is a “crank” or “not crank” website. See and decide for yourself. “Don’t bite my finger, look where I am pointing”.

Please come forward with your concept/algorithm/software and application to resolve nasty issues of “junk DNA diseases”; http://www.junkdna.com/junkdna_diseases.html

Dictionary says, “crank” means “esoteric, unusual, etc”; the misnomer “Junk DNA” never was a scientific term. It was an unusual and harmful framing of the budding science of “genome regulation” when Ohno introduced the term “Junk DNA” (1972). Seven years before, Jacobs and Monod won their Nobel for the “Operon” regulation (1965). We’d be much further ahead without framing the vast majority of DNA as “Junk”. (IMHO the late Barbara McClintock, getting her Nobel half a Century delayed, would probably agree).

The site http://www.junkdna.com does not crash any browser/platform. However, the collection of reports, covering cogent arguments is not recommended for “dial-up” novices because of its size; over 10 Mb with archives of earlier years, reaching back to 1972.

The “official” declaration by the NIH-led “ENCODE” consortium that BOTH the “Junk DNA” and “Gene” dogmas were found dead, plus that the Lamarckian question mark was put back on Dogmatic Darwinism were not well-liked by some. The (professional) server of the http://www.junkdna.com site was brought down for a day or two by a tsunami of users, not excluding interference by hackers. (It is up and running at the same url: http://www.junkdna.com ).

I do suggest an issue of substance that might be appropriate for this “blog” – while I reserve preference of disposing IP by means of more appropriate channels.

While I would stay away from “name calling”, I also consider that “beating a dead horse” might not be good use of time (unless you enjoy it). It is no longer relevant to “debate” HOW DEAD “Junk DNA” is.

For science, the new issue of relevance is “simple”: “how does DNA work”? Please put your two cents into a coherent theory. Let’s here some basic concepts, and show experimental evidence for it.

My contribution is that there is an “Algorithmic Design” inherent to the Genome. (More specifically, a fractal design of DNA that results in a fractal growth of organelles, organs and organisms; as sketched out in http://www.fractogene.com). While I don’t recommend scooping IP by “blog debates”, I may need to direct attention to peer-invited and peer-reviewed publications, e.g. experimental support of quantitative prediction(s) of FractoGene, and assert for the record that I have accepted and will consider invitations to disseminate/debate science issues in proper science environment.

In a philosophical sense, I am wishing that “Algorithmic Design” will unite all camps and will focus on identifiable science issues/projects, perhaps discouraging hollow shouting over ideological trenches. If this is a blog about science, let’s stay with science.

While engaging both Nick Matzke (with whom I exchanged a substantive number of emails) and Ryan Gregory (whose work I greatly admire, most notably his reduction of “C-value paradox” to “C-value paradigm” and finally with his 2001 paper to “C-value explanation), here is my take (in a compressed “blog” format…) trying to pass his “Onion test”, which BTW is pretty much the same as Richard Dawkins’ “Salamander paradox” – and my explanation is also consistent with Gregory’s ”explanation of C-value” (the former “Junk DNA” ratio corresponding to the “cell size”).

In FractoGene’s terms, the explanation is straightforward, indeed. Fortunately, I don’t have to reiterate it here, since it has been posted on the “Junk DNA Portal” for quite a while: See http://www.junkdna.com/#big_dino_small_genome , http://www.junkdna.com/#biology_unified etc. Suffice to say here that recursive algorithms are notoriously “slow converging” or “fast converging”, at times depending on the slightest diversity in their parameters.

Those who wish to look at “Genomics beyond Genes” through the PostModern frame of “PostGenetics” at real details; the International PostGenetics Society is open for all; http://www.postgenetics.org

Sincerely,

pellionisz_at_junkdna.com

Comment #184356

Posted by Dave on June 23, 2007 7:39 PM (e)

Yep, looks like “crank” about covers it.

Comment #184357

Posted by Doc Bill on June 23, 2007 7:56 PM (e)

Andras,

You had an opportunity to present some science, but chose to expend several hundred words in creationist, crank commentary.

Your entire premise is wrong. That’s not an insult, it’s a fact.

So, prove yourself right.

Over to you.

Comment #184361

Posted by paul on June 23, 2007 8:42 PM (e)

Maybe use of quotation marks is an indicator of cranks.

Comment #184363

Posted by Torbjörn Larsson, OM on June 23, 2007 8:46 PM (e)

Doc Bill wrote:

You had an opportunity to present some science, but chose to expend several hundred words in creationist, crank commentary.

Damning, as is the fact that he dismisses a judgment on the presented science and the site (science and presentation) as ““name calling””.

As you say, back to Pellionisz.

Comment #184364

Posted by Science Avenger on June 23, 2007 8:59 PM (e)

Andras Pellionisz said:

While I would stay away from “name calling”, I also consider that “beating a dead horse” might not be good use of time (unless you enjoy it). It is no longer relevant to “debate” HOW DEAD “Junk DNA” is.
For science, the new issue of relevance is “simple”…

Why do you use “quotations” seemingly at “random”? It makes “reading” your writing very “difficult”.

Comment #184378

Posted by Jkrehbiel on June 23, 2007 10:19 PM (e)

Most likely not an original thought, but isn’t “junk DNA” the quintessential argument from ignorance?

“It doesn’t code for proteins, and I don’t know anything else it might do, so it must be junk.”

Kind of like when your wife throws away some important little whatsis because she doesn’t know what it is.

Comment #184385

Posted by Henry J on June 23, 2007 11:00 PM (e)

Re “Most likely not an original thought, but isn’t “junk DNA” the quintessential argument from ignorance?”

Uh - argument toward what conclusion?

The presence of non-coding DNA wasn’t one of the arguments for evolution - it was a surprise to the scientists who discovered it.

Henry

Comment #184389

Posted by nickmatzke on June 24, 2007 12:45 AM (e)

Most likely not an original thought, but isn’t “junk DNA” the quintessential argument from ignorance?

“It doesn’t code for proteins, and I don’t know anything else it might do, so it must be junk.”

No, no, no. This is more of what you see in the media – nice thoughts, but no empirical background – and is exactly what I am complaining about. We know (or at least the scientists who work on this know) the following highly relevant facts:

(1) the onion genome has way more DNA than we do,
(2) the size of onion genomes varies drastically within the genus by over 4 times, and everything is still pretty much a normal onion
(3) this pattern holds not just for onions but for thousands of organisms, plants and animals

Conclusion 1: Most of that extra DNA isn’t doing anything very remarkable. If you can build a perfectly good onion with a 7 pg genome, a 31.5 pg genome is mostly not necessary to build an almost identical onion.

(4) furthermore, the differences in genome size are due mostly to repetitive elements that copy themselves, fossil viruses, and other things that look an awful lot like junk, in that they are easily explainable by well-known mutational processes but are not easily explainable in terms of specific organismal function

(5) Plus, apart from the massive natural experiment in #1-3, researchers have done experiments e.g. with mice where even *evolutionarily conserved* noncoding DNA can be deleted with no apparent ill effects

Conclusion #2: The “junk” DNA might have some function e.g. based on raw bulk amount of DNA, but it is quite obviously mostly *not* doing anything approximating software, “coding”, or other stuff that people usually assume.
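For concreteness, the comparative numbers behind these conclusions can be tallied in a couple of lines (genome sizes as quoted in the original post and in Gregory’s onion test; this is just a reader’s back-of-envelope check):

```python
# Haploid genome sizes in pg, as quoted in the post.
ONION_PG, HUMAN_PG = 17.0, 3.5            # Allium cepa vs. Homo sapiens
ALLIUM_MIN_PG, ALLIUM_MAX_PG = 7.0, 31.5  # range within the genus Allium

print(f"onion/human: {ONION_PG / HUMAN_PG:.1f}x")                    # 4.9x
print(f"within-Allium range: {ALLIUM_MAX_PG / ALLIUM_MIN_PG:.1f}x")  # 4.5x
```

Either ratio alone is awkward for any proposed universal function of non-coding DNA; together they are the onion test.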

Kind of like when your wife throws away some important little whatsis because she doesn’t know what it is.

Wives often still maintain that it is junk even after the husband attempts an explanation… ;-)

Comment #184390

Posted by GeoMor on June 24, 2007 12:47 AM (e)

Andras Pellionisz wrote:

In FractoGene’s terms, the explanation is straightforward, indeed…Suffice to say here that recursive algorithms are notoriously “slow converging” or “fast converging”, at times depending on the slightest diversity in their parameters.

I thought the essence of fractals is that they don’t converge at all.

I’ve never heard of recursive algorithms being slow or fast to converge. Iterative algorithms do that.

And the genome constitutes the algorithm and the parameters, right? Why the dramatic variance in the size of the program, as opposed to its output? By your own point, you should only need slight changes in the program for big effects.

Comment #184404

Posted by J Thomas on June 24, 2007 6:04 AM (e)

I agree that “junk DNA” has lost its meaning, better not to use the term.

“Noncoding DNA” is too general. We have disparate known functions for some of it.

Back in the old days people used to sometimes use “junk DNA” for DNA that they didn’t know how to induce. Like, try to get the DNA to produce protein in vitro, and if it doesn’t do it, it must be noncoding. But of course there may be positive control involved; it may only produce protein when certain needed proteins are present, proteins that the in vitro system doesn’t supply.

Some highly repetitive DNA is used for centromeres. How much of it do you need? That might vary with how big the chromosomes are, how many chromosomes, temperature, temperature range, etc. I don’t know what makes one genome have more or bigger centromeres than another.

Some might be inactive viruses. But are those viruses that are lurking in the genome, waiting to wreak havoc? Or are they dead viruses that are just inert? Do they serve some use for the cell, perhaps providing protection from some similar viruses but not all similar viruses? A hard disk that has a virus protection scheme which checks against known viral sequences will look like it has a whole lot of viruses, to a different protector. Can we tell the difference between a sleeping virus and a dead reference copy used for protection?

A long time ago there was an argument that since there were insertion sequences that could copy themselves to new locations, it was predictable that such things would spread. A regular allele would be present in half the offspring. An IS that spread properly would be present in all the offspring, providing a strong selective advantage. But if it kept reproducing randomly it would eventually cause problems. It might insert itself into places where it caused trouble, or it might create so many copies that it became a drag on the individuals that had it. So the argument was that such things would evolve ways to control their own numbers. They would spread only when they were not already present in large numbers, and successful ones would hold their numbers low enough not to damage their hosts. They would then be junk, that had no use but did not interfere too much. But of course the argument could be extended – successful insertion sequences would find ways to actually profit their hosts and so spread even more. Would we notice when that happened? Maybe sometimes….

In bacteria, lots of regulation happens from proteins binding to DNA. Of course the DNA sites it binds to have the function of being bound to and it’s only accident if they also produce proteins. There are various functions for proteins that bind DNA in eucaryotes too. But also there are various reasons to have proteins that bind RNA of various sorts. And no particular reason for binding sites to also be coding regions.

There are lots of different stories here. No reason to expect them to all fit the same story. No way to tell whether there are unknown functions involved.

When you delete sequences and find that there is no effect on survival, that tends to indicate the sequences weren’t needed. But to really check that you need to show that the individuals with the deletions survive just as well in the wild…. The deleted sequences might have an important effect on survival in circumstances that you didn’t happen to check.

What if a particular “noncoding” sequence is important for speciation? This is subtle. Speciate too easily and your biome may be filled with specialist species whose populations are each too small to survive well in the long term, but which channel their competitors into other specialist niches too. Speciate too rarely and you can’t take sufficient advantage of new ecological niches, and your species may go extinct when its niche disappears. Could there be genes which regulate this? Yes, but how would they be selected? If they are selected only during speciation, that’s a long time to accumulate mutations in the interim. But during the speciation events you might get a strong jackpot effect. I don’t know whether that could work out. But if it could, then to test whether a deletion was neutral you’d have to test it not only across the whole climate range the species survives under, but also across speciation events….

It’s very hard to tell what has been selected. If you happen to find out that a given sequence has a particular effect on survival, then you can speculate that it might have been selected for that effect. If you don’t notice any particular effect then it’s hard to say whether there is one you haven’t noticed.

Comment #184434

Posted by Andras Pellionisz on June 24, 2007 12:47 PM (e)

With much appreciation to GeoMor, Ryan Gregory should be pleased that this blog, starting with the very specific “Onion test” (as introduced by Nick Matzke), after some detours finally focused on its explanation by FractoGene. (Would be interesting to see other concepts that can explain away the “Onion-” and Richard Dawkins’ “Salamander-paradox”.)

There is much room to improve GeoMor’s understanding of fractals to his comfort, but he is not alone in this regard. For some, a simple “Google” might do, but here are some specifics to get beyond the initial difficulties:

“I thought the essence of fractals is that they don’t converge at all.” Not so, see e.g. the Hilbert-curve (that converges to entirely fill 2-dimensional space). While a fractal algorithm can be run “forever”, e.g. none of the printed/computer displayed Mandelbrot sets have been produced by “forever running” computers. Pixel resolution of the print/screen is the practical limit to stop running the algorithm.

“I’ve never heard of recursive algorithms being slow or fast to converge. Iterative algorithms do that”. Fractals are both recursive and iterative (yes, the two terms have different, though overlapping meanings). Let’s start with the “oldest fractal”, the Cantor-dust. It is recursive (a repeated “call” to a procedure; “take a line, chop it to three, leave out the middle”). The Cantor-dust also iterates towards a “goal” of (infinitely smaller particles). See also the iteration of Julia-sets (wikipedia is a convenient place to start).
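As a concrete illustration of the Cantor construction just described (a minimal sketch only, nothing to do with FractoGene’s actual methods): the rule is recursive, since each level calls the same procedure on its own output, and the levels iterate toward a limit set of total length zero.

```python
def cantor(intervals, depth):
    """Recursively remove the middle third of each interval.

    Recursive: each level applies the same rule to its own output.
    Iterative toward a limit: the total length shrinks by 2/3 per level.
    """
    if depth == 0:
        return intervals
    next_level = []
    for a, b in intervals:
        third = (b - a) / 3.0
        next_level.append((a, a + third))  # keep the left third
        next_level.append((b - third, b))  # keep the right third
    return cantor(next_level, depth - 1)

level = cantor([(0.0, 1.0)], 5)
total = sum(b - a for a, b in level)
print(len(level), round(total, 4))  # 32 intervals, total length (2/3)**5 ~ 0.1317
```

As with the Mandelbrot set on a screen, one stops the recursion at a finite depth set by the resolution one cares about.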

“And the genome constitutes the algorithm and the parameters, right? Why dramatic variance in the size of the program, as opposed to its output? By your own point, you should only need slight changes in the program for big effects.” This is a very profound notion. You got it upside down at the moment, but e.g. the following link might help: http://www.junkdna.com/junkdna_fractals.html

pellionisz_at_junkdna.com

Comment #184446

Posted by raven on June 24, 2007 3:44 PM (e)

i “made” tHe “ONIon” Argument MYSELF “preVIOUSLY” usinG mammaLian “GeNoMeS” AS the “EXAMPLE.”

raven on June 16, 2007 6:29 PM (e)

Just going to repost an old one. This junk DNA issue is just going around in circles. Some noncoding will have functions, some is just there. The wide variety of genome sizes among mammals tells one that right there. Plus 8% of the human genome is defective retroviruses left over from ancient battles.

raven on April 23, 2007 10:28 AM (e)

From www.genomesize.com
Human is 3.5 pg/cell

Number of mammals: 438
Smallest mammalian genome size: 1.73 pg, Miniopterus schreibersi, Bent-winged bat
Largest mammalian genome size: 8.40 pg, Tympanoctomys barrerae, Red viscacha rat
Mean for mammals: 3.47 pg ± 0.04

Avian genome sizes tend to be smaller.

I’ve no doubt that much noncoding DNA is functional (introns, regulatory regions, etc.), as shown by phylogenetic conservation.

Some of it is probably just genome parasites and adventitious accumulations. This is implied by the fact that genome sizes vary widely even within mammals and between mammals and avians. Not seeing how or why the actual functional genome varies all that much between, say, mammals. Does the red rat really have twice as many genes as a human? Or a bat half as many?
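The arithmetic behind those two questions is worth spelling out (sizes from the list above; the rough conversion of 1 pg to about 978 Mb is a standard figure added here only for scale, not part of the original comment):

```python
# 1C genome sizes in pg, from the genomesize.com figures quoted above.
HUMAN, BAT, RAT = 3.5, 1.73, 8.40  # H. sapiens, M. schreibersi, T. barrerae
PG_TO_MB = 978                     # rough standard conversion, ~978 Mb per pg

print(f"viscacha rat / human: {RAT / HUMAN:.1f}x the DNA")  # 2.4x
print(f"bat / human: {BAT / HUMAN:.2f}x the DNA")           # 0.49x
print(f"rat genome: about {RAT * PG_TO_MB:.0f} Mb")         # about 8215 Mb
```

A nearly fivefold spread within mammals alone, with no reason to think gene counts vary anywhere near that much.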

Comment #184447

Posted by raven on June 24, 2007 3:50 PM (e)

Some of the noncoding DNA questions are empirically addressable. In the study below, the authors just deleted big chunks of DNA and made knockout mice from the cells. Nothing much happened even though some of the DNA was conserved from mice to humans.

Reference from “Jerry” previous thread:

1: Nature. 2004 Oct 21;431(7011):988-93.
Megabase deletions of gene deserts result in viable mice. Nóbrega MA, Zhu Y, Plajzer-Frick I, Afzal V, Rubin EM.
DOE Joint Genome Institute Walnut Creek, California 94598, USA.

The functional importance of the roughly 98% of mammalian genomes not corresponding to protein coding sequences remains largely undetermined. Here we show that some large-scale deletions of the non-coding DNA referred to as gene deserts can be well tolerated by an organism. We deleted two large non-coding intervals, 1,511 kilobases and 845 kilobases in length, from the mouse genome. Viable mice homozygous for the deletions were generated and were indistinguishable from wild-type littermates with regard to morphology, reproductive fitness, growth, longevity and a variety of parameters assaying general homeostasis. Further detailed analysis of the expression of multiple genes bracketing the deletions revealed only minor expression differences in homozygous deletion and wild-type mice. Together, the two deleted segments harbour 1,243 non-coding sequences conserved between humans and rodents (more than 100 base pairs, 70% identity). Some of the deleted sequences might encode for functions unidentified in our screen; nonetheless, these studies further support the existence of potentially ‘disposable DNA’ in the genomes of mammals.

Comment #184449

Posted by Whatever on June 24, 2007 3:57 PM (e)

What are the attributes of a crank science layout?

Beyond odd formatting, cranks seem to name-drop a lot…
But then, so do those who desire venture capital.

Myself, I wouldn’t want to be too closely associated with Dr. Teller.

Comment #184453

Posted by Flint on June 24, 2007 4:30 PM (e)

Further, if you think perhaps onions are somehow special, consider that members of the genus Allium range in genome size from 7 pg to 31.5 pg.

This is very significant. Our error lies in trying to decide whether or not something is intelligently designed, binary fashion. This range shows the binary orientation to be in error. We see instead that some things are much more intelligently designed than others. The key question is whether a much greater quantity of apparently inactive DNA indicates more or less intelligence in the design.

Comment #184461

Posted by Andras Pellionisz on June 24, 2007 5:52 PM (e)

“What are the attributes of a crank science layout?”

“…Myself, I wouldn’t want to be too closely associated with Dr. Teller.”

This is perhaps the strangest reaction to the http://www.junkdna.com “Junk DNA Portal”:

(1) The site never mentions Dr. Teller. Although he was a scientist of historical proportions he never claimed and never had any contribution to Genomics. I can see why some would like to brand him a “crank scientist”, but name calling masks confession that one has nothing to say about the issues of merit, plus it is somewhat belated:

(2) On September 9th, it will be 4 years since he passed away. Thus, nobody can be “too closely associated with him” at this time (RIP).

(3) A personal friendship (now in the past tense) should be off-limits to “ad hominem” attempts. Not only because it has nothing to do with “Junk DNA”, but for reasons of human decency.

Back to the collapse of “Junk DNA/Genes” dogma… towards what we call since 2005 the new science of PostGenetics (“Genomics beyond Genes”)…

pellionisz_at_junkdna.com

Comment #184510

Posted by caligula on June 25, 2007 2:09 AM (e)

I’d be interested to know what people, especially Nick Matzke, make of these new results from the ENCODE project:

http://www.genome.gov/Pages/Research/ENCODE/natu…

I don’t seem to understand very clearly what is so revolutionary about them. For example, I always thought that most of our DNA gets transcribed. I mean, is there any significant mechanism to prevent transcription of non-coding sequences? Rather, I always believed that most mRNA (if it deserves to be called mRNA) does not translate to a functional protein. Also, what do they mean by “functional”? As I understand it, they call non-constrained and non-coding regions of DNA “functional” if they get transcribed? But what actual biological function, then, does the non-coding transcript have, especially if it is known to be selectively neutral? Does the “function” of non-coding transcripts have to do with inflated nuclear/cell size, as Nick suggested, or what?

Comment #184511

Posted by nickmatzke on June 25, 2007 2:19 AM (e)

Caligula,

I initially thought along the lines you suggest about the ENCODE paper, but then I was informed that:

(1) A great many of these RNA transcripts are degraded very rapidly

(2) The relevant enzymes that perform transcription from DNA to RNA are not very specific, and the ENCODE techniques are super-sensitive so that they can even pick up low-level “accidental” transcription.

(3) So much of the reported transcription is probably not biologically relevant, assuming it’s even biologically “real” (lab cell lines were used and the conditions are less than natural).

Nonetheless it is interesting that more of the DNA genome is transcribed to RNA than was previously thought; however it doesn’t prove that the transcribed DNA is functional, and it doesn’t change the situation from comparative biology one bit. It is still the case that onions have way more DNA than humans, and that some onions have way more DNA than others, even though the smaller genome can obviously make a perfectly good onion.

Comment #184514

Posted by Torbjörn Larsson, OM on June 25, 2007 3:48 AM (e)

Andras Pellionisz wrote:

this blog, starting with the very specific “Onion test” (as introduced by Nick Matzke) after some detours finally focused on its explanation by FractoGene.

And there we have it; there is AFAIK nothing written by GeoMor or Pellionisz (or anyone else) here that tries to answer the specific question posed in the onion test by using FractoGene’s ideas.

Pure crankery - and I thank Pellionisz for making the point so clearly.

Comment #184515

Posted by caligula on June 25, 2007 3:50 AM (e)

Nick,

Thanks for the response. Based on your input, I somewhat regret the hype-ous wording of the ENCODE report. They sure seem to imply that they have revealed a lot of previously known biological functionality from our DNA, even if they fail to clearly explain what this alleged functionality is. You seem to imply that they are letting people believe more than is warranted.

What do you think of the claim that we also have many coding sequences that are not under constraint? How would such sequences come about without natural selection? Could they code short amino acid sequences to act simply as chemically relatively stable filler, again just to “inflate” cells?

Comment #184516

Posted by caligula on June 25, 2007 3:59 AM (e)

caligula wrote:

a lot of previously known biological functionality

Obviously, this should be “a lot of previously unknown biological functionality”.

Comment #184540

Posted by J Thomas on June 25, 2007 8:26 AM (e)

We deleted two large non-coding intervals, 1,511 kilobases and 845 kilobases in length, from the mouse genome. Viable mice homozygous for the deletions were generated and were indistinguishable from wild-type littermates with regard to morphology, reproductive fitness, growth, longevity and a variety of parameters assaying general homeostasis.

This is negative evidence. Suppose it had turned out differently, suppose that the mice were obviously different and less viable. That would have indicated the deleted DNA was important.

But when they couldn’t tell the difference, it doesn’t tell us much. It’s like trying to prove a negative. Suppose the mice were fed aflatoxins, and the wild-type mice responded in ways that reduced the damage but the ones with deletions did not. This experiment wouldn’t have picked that up because the mice were not exposed to aflatoxins.

Similarly for all the other environmental challenges that mice have faced in their evolutionary history, that they are not exposed to in the lab.

Likewise with onions. It’s of course absurd to claim that human beings are more complex than onions. We don’t yet have evidence about which is more complex. But to make an argument that two onion species need the same amount of noncoding DNA, it would help if the two species occupied the same ecological niche in the same biome. Otherwise you don’t know what different environments they are facing that might require different regulatory mechanisms.

I find it plausible that some noncoding DNA may have unknown function that is subject to selection. But let’s try that argument the other way around. Imagine that we somehow determined that all the noncoding DNA had important functions. Wouldn’t that be a fantastic result! It makes sense that with occasional random duplications and occasional random deletions, among genes that aren’t selected you’d expect some duplicates to be there when you looked. If they aren’t there, that implies strong selection to remove excess DNA. I’d expect some of that in procaryotes, but much less so in eucaryotes. I guess it could happen. But I wouldn’t predict that all DNA is functional and I’d be very surprised if it turned out to be true.

Comment #184541

Posted by Torbjörn Larsson, OM on June 25, 2007 8:27 AM (e)

caligula wrote:

Could they code short amino acid sequences to act simply as chemically relatively stable filler, again just to “inflate” cells?

Rather, wouldn’t they mainly modify supply of nucleotides and diffusion rates if there is an appreciable amount? The cell seems to be a crowded chemical reactor.

But it is my impression that RNA can be recycled, just as many proteins are. Wouldn’t that happen with the occasional defective mRNA that can’t be translated? So what, and how much, “relative” stability are we really discussing?

Comment #184568

Posted by David Stanton on June 25, 2007 11:17 AM (e)

Caligula,

Nick is correct in his response. However, there have been some recent discoveries that suggest that RNA may have some previously unknown functions besides coding for protein. For example, interference RNA is implicated in gene regulation and antisense RNA can also affect gene expression. And let’s not forget that RNA can form complex secondary and tertiary structures which make it useful for catalyzing chemical reactions such as those carried out by ribozymes and spliceosomes. Indeed, the molecular world is much stranger than was thought twenty years ago. But of course, none of this really addresses the central issue raised by the onion test.

Comment #184605

Posted by GeoMor on June 25, 2007 5:54 PM (e)

Andras Pellionisz wrote:

This is a very profound notion. You got it upside down at the moment, but e.g. the following link might help: http://www.junkdna.com/junkdna_fractals.html

This link is not helpful at all.

Your original point, I believe, was that the speed with which an iterative algorithm converges can be sensitively dependent on initial conditions. I agree. Why does this explain variance in genome size? It seems like you’re suggesting that genome size is dependent on the behavior of some program. So there must be some evolutionary meta-program that can spew a bunch of DNA into the genome - such that the size of an onion genome can vary by almost an order of magnitude - yet all this DNA is not junk, but determines a phenotype. Is this right? Why the indirection, and what’s the evidence for it?

Also I would like to know what you think of the observation that purely random processes of shuffling and duplication, reasonably similar to processes of chromosome evolution, easily give rise to all these scale-free or “fractal” sequence composition distributions (like “pyknons”).
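To illustrate that observation with a toy model (a sketch with arbitrary parameters, not anything from the pyknons work): if each duplication event copies an element chosen uniformly at random, large repeat families are proportionally more likely to be copied again, and family sizes come out heavy-tailed with no tuning at all.

```python
import random
from collections import Counter

def simulate_duplications(n_families=100, n_events=20000, seed=1):
    """Random-copy model: every event duplicates one existing element
    chosen uniformly at random.  A family's chance of gaining a copy is
    proportional to its current size, so the final family sizes are
    heavy-tailed: a few huge families, many tiny ones."""
    rng = random.Random(seed)
    elements = list(range(n_families))         # one founder per family
    for _ in range(n_events):
        elements.append(rng.choice(elements))  # copy-paste a random element
    counts = Counter(elements)
    return sorted(counts.values(), reverse=True)

sizes = simulate_duplications()
print(sizes[:5], sizes[-5:])  # largest vs. smallest family sizes
```

No selection and no function is needed anywhere in this model for the skewed, “fractal”-looking distribution to appear.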

Finally, you’ve asked for alternative explanations, and I think the point of the onion test is that the hypothesis that most of this extra DNA is junk is a fine explanation for why the genome size is so variable. We know how things like transposable elements and retroviruses work, and it makes perfect sense that they can inflate the genome over time, to different extents along different evolutionary trajectories. How can it be ruled out that a lot of these insertions are just junk?

Comment #184608

Posted by Whatever on June 25, 2007 6:01 PM (e)

Andras quoted me “What are the attributes of a crank science layout?”

“…Myself, I wouldn’t want to be too closely associated with Dr. Teller.”

And commented:
This is perhaps the strangest reaction to the http://www.junkdna.com “Junk DNA Portal”:

i) It was not a reaction to “junkdna.com” alone. Andras is responsible for other sites as well (fractogene.com, helixometry.com).

(1) The site never mentions Dr. Teller. Although he was a scientist of historical proportions he never claimed and never had any contribution to Genomics. I can see why some would like to brand him a “crank scientist”, but name calling masks confession that one has nothing to say about the issues of merit, plus it is somewhat belated:

ii) I don’t consider Teller a ‘crank scientist’. The text that Andras replaced with ellipses read: “Beyond odd formatting, cranks seem to name-drop a lot… But then, so do those who desire venture capital.” Teller is but one of the many names dropped by Andras on his web pages.

(3) A personal friendship (now in the past tense) should be off-limits to “ad hominem” attempts. Not only because it has nothing to do with “Junk DNA”, but for reasons of human decency.

iii) Not an ad hominem. Teller was an extremely “controversial” scientist. I wouldn’t want to be associated with him, particularly if I were trying to start a company. YMMV.

By the way, many of the links listed in the credits page of Andras’ helixometry.com site are broken.

Comment #184611

Posted by raven on June 25, 2007 6:20 PM (e)

This [homozygous viable megabase deletions] is negative evidence.

So what. It is also very strong experimental evidence that is aimed at answering a particular question.

A hard fact or two is worth a thousand pages of armchair speculation.

Nonscientists must think we all sit in ivory towers emailing each other about this and that. Real science has little of that and a lot of hard work by lots of clever people, designing experiments, collecting data, analyzing it, repeat ad infinitum. I think it was Edison who said science is 1% inspiration and 99% perspiration.

This is one of many reasons why creationism and ID fail. They never do anything but wave their hands. A hard day is when some new fact is discovered that contradicts their mythology, a new dinosaur, another report on drug resistant pathogens, an emerging disease etc.. Time to crank up the old ad hoc machine and explain it away again.

Comment #184621

Posted by GeoMor on June 25, 2007 7:44 PM (e)

J Thomas wrote:

But when they couldn’t tell the difference, it doesn’t say. Like trying to prove a negative. Suppose the mice were fed aflatoxins, and the wild-type mice responded in ways that reduced the damage but the ones with deletions did not. This experiment wouldn’t have picked that up because the mice were not exposed to aflatoxins.

BTW, Rubin’s lab has now shown that mice with knockouts of ultraconserved elements are viable with no obvious phenotypic defects, as well. (It’s not published yet, but I’ve seen it at a conference)

This does not invalidate the point that these experiments can’t prove a negative. But the findings are, nonetheless, curious and somewhat troubling. From a comparative genomics perspective, if you had to pick one place in the primary sequence of the genome and place a bet that it has some really, really important function, you’d start with the ultraconserved elements. Yeah, maybe they’re buffered, or maybe it’s an environment thing, or maybe the phenotype is just too subtle. It’s just that, again, these would be at the very top of your list, if you were making bets. Kind of troubling.

Comment #184628

Posted by David Stanton on June 25, 2007 8:36 PM (e)

“Further, if you think perhaps onions are somehow special, consider that members of the genus Allium range in genome size from 7 pg to 31.5 pg.”

Another important aspect of genome size variation, especially in plants, is evolutionary polyploidy followed by diploidization. Polyploidy is quite common in plants and there is good evidence to indicate that it has been responsible for a great deal of divergence in the plant kingdom. This can increase the amount of DNA in the genome anywhere from 3N up to 16N or more. It can also involve interspecific hybridization. This is probably responsible for at least some, if not most, of the variation seen in onions.

And let’s not forget somatic polyploidy either. Don’t forget that your liver cells are mostly 8N and some somatic tissue in some organisms is up to 10,000N.

Once again, not very intelligent design, but perhaps tolerated, and even adapted to, and rarely advantageous if it occurs.

Comment #184632

Posted by Nick (Matzke) on June 25, 2007 9:00 PM (e)

And let’s not forget somatic polyploidy either. Don’t forget that your liver cells are mostly 8N and some somatic tissue in some organisms is up to 10,000N.

Is this the same as the polytene chromosome phenomenon, e.g. in Drosophila salivary glands?

PS: A related issue: mammalian red blood cells, which are very small, have dispensed with their nuclei.

Comment #184635

Posted by David Stanton on June 25, 2007 9:39 PM (e)

Nick,

Yes, it is almost exactly the same phenomenon. Polytene chromosomes are produced by rounds of endoreplication (replication without cytokinesis) and the products of replication remain together to form thick polytene chromosomes. These structures were instrumental in early studies of gene expression and chromosome variation. Most endopolyploidy does not result in polytene chromosomes but it can affect cell size and rates of gene expression.

Comment #184637

Posted by Raging Bee on June 25, 2007 9:47 PM (e)

Andras Pellionisz: Your “use” of “scare” quotes is astoun”ding.”

Comment #184650

Posted by J Thomas on June 26, 2007 2:32 AM (e)

GeoMor, you say that mice that have deletions of particular ultraconserved regions are viable and phenotypically normal.

I should mention my bias. I tend to assume that a noncoding region which has some function (some selective value) will have that function in terms of influencing the production of proteins. I tend to think in terms of proteins since they do so much, and regulation of proteins is so important. I try to remember other possible functions, but my unconscious bias is still to think in terms of proteins.

The problem I have with the concept of “viable and phenotypically normal” (not your phrase, I don’t mean to attribute it to you by putting it in quotes) is that “viable” always translates to “viable in some particular environment”. And “phenotypically normal” means “phenotypically normal in some particular environment”. The obvious environment to use is the laboratory environment for mice.

But if the deleted regions are involved in regulation, and the thing they regulate is a response to something that does not vary in the lab, then you won’t see the regulation fail.

So for example, most mammals must deal with temperature changes. They get too hot or too cold and must respond to that. But if lab mice suffer heat stress that isn’t part of an experiment, the technician is at fault.

Similarly, most mammals must regulate their sodium and potassium balance. Get too much sodium ion in the blood, and red blood cells release potassium. The liver releases potassium. The kidneys, sweat, and tears preferentially excrete sodium and retain potassium. In extremis, skeletal muscles release potassium and start to fail while the heart continues. Eat something that’s too salty and all this regulation happens, regulation that’s important for animals that have nervous systems and muscles. But how much chance will the lab mouse have to get that imbalance? Aren’t they given balanced meals all the time?

I thought of those two because they were things I’d had cause to think of before. How many things can you think of that vary in the wild for most mammals, and have varied over a long evolutionary history, but that don’t vary in the lab?

Suppose you think of fifty possibilities, and you test them all, and they all come out negative. You’ve done 50 experiments with your mice, and the fifty-first thing you think of might be the one. Negative results. You’re holding the wrong end of the stick.

It’s intuitive that the most conserved things ought to be the most fundamental. If the cell can’t survive unless this particular sequence is intact, chances are this sequence will evolve slowly. Maybe it evolved fast in the distant past, but now there’s something that works very well and almost any variation on it will be a setback. Growth, cell division, that sort of thing ought to be most fundamental. Regulating things that inevitably change during the cell cycle is the most central thing. But there are external variables that have been changing for a very long time, and those must be responded to also. Wrong end of the stick. If you happen to find something that the deleted regions actually do, then you can look at how that behavior might be selected. Or demonstrate that they do nothing, ever, which may be another negative proof.

Twenty years ago I met a very pretty girl who was sure that a sequence on a bacterial plasmid was “junk DNA”. Her argument was that under in vitro transcription, it wasn’t transcribed. I asked her, “What about positive control?” and she got mad and stopped speaking to me. Since there are bacterial genes that are only transcribed when a particular protein is attached, there was no way to tell whether there might be such a protein that her in vitro system didn’t provide. At the time even if the sequence had been known – and it was both expensive and tedious to sequence something that was presumed junk – no one had the expertise to look at the sequence and tell whether that might happen, though they could tell whether it would code for peptides if it did get initiated. But it seems to me that all along people have been eager to declare things junk DNA when it would be more prudent to leave the question undecided.

Comment #184660

Posted by GeoMor on June 26, 2007 6:37 AM (e)

J Thomas, you gave a very well-written argument for a point which I explicitly acknowledged to be true.

The preponderance of evidence so far has been that ultraconserved elements are developmental regulators conserved throughout the vertebrate lineage. They are enriched in the vicinity of Hox genes and other classic developmental genes. You can stick them in front of GFP in vivo and find that they drive highly tissue- and temporally specific expression, often in the brain.

Again, the troubling thing here is not that anyone’s experiment logically disproves their function, but rather that the UCEs are our all-stars. Our lead-off hitters. Our world heavyweight champions. We would not predict, from a genomics perspective, that these would be the regulators you only need at 40 deg C in the left toe on Tuesdays. On the contrary, we have almost all the evidence you could possibly want, before actually doing the experiment, that screwing up the UCEs should be catastrophic for organismal development.

When you get rid of a couple of them and nothing obvious happens, it’s troubling. You start to feel that you’re digging deep to say that it’s just the lab environs. Certainly, it shifts the burden of proof back onto us genomicists.

Anyway, like I said, this is unpublished work I saw at a conference. We should probably wait for the paper to pass judgement – I’m sure it’ll be a lively discussion at that time.

Comment #184675

Posted by David Stanton on June 26, 2007 8:51 AM (e)

GeoMor,

I agree that this observation is very difficult to explain. It could be telling us something very important about gene regulation. I don’t know what the answer is and I am not familiar with the paper, but I could make some guesses.

Some conserved elements resist sequence divergence within species due to concerted evolution. This is a phenomenon found in repetitive elements and presumably occurs through biased gene conversion. If the elements that were eliminated were indeed repetitive, it is possible that enough copies remain to function properly even though some were eliminated.

Of course there are some problems with this explanation. Regulatory elements are not often highly repetitive, and conversion would only homogenize elements within one species. Still, the possibility does raise the point that one must look specifically at the sequences deleted and determine their normal functions individually before declaring there is no loss of function. The issue is very complex, but at least we are now capable of addressing some of these ideas.

Comment #184731

Posted by Andras Pellionisz on June 26, 2007 4:16 PM (e)

GeoMor,

There is meaningful exchange with you (see also http://www.pandasthumb.org/archives/2007/01/junk… ). I believe you see the merits of the argument that FractoGene, by means of convergence of iterative recursion depending on slight divergence on conditions is a contender to pass the onion test (a.k.a. Richard Dawkins’ salamander test). I would welcome private email exchange with you and also with Torbjörn Larsson (I don’t know his identity either, since Google shows several by this name).

Few comments for the majority of bloggers:

I believe this blog was aimed at explaining widely different C-values of sub-species (onions, salamanders) in other ways than the default; who cares how much junk here or there. We are still awaiting other concepts than FractoGene, as a non-default explanation.

For GeoMor’s very profound comment (about what part of the Genome should be swollen, should there be some superior program to be imposed) I should have given the initiated a more specific one of my URLs, where FractoGene invokes the von Neumann principle (that there is no distinction between data and instruction in a computer, and the words are defined only by the context, not in themselves). I publicly introduced the von Neumann principle to Genomics in February 2003 (Future of Life 50th Anniversary of DNA meeting in Monterey).

For those who limit themselves to counting my quotation marks (none in this entry to save you an effort) or badmouth my deceased friend, the above can be illuminated by a simplistic example:

When you freeze water e.g. in a Petri dish (or blow a sand dune) there is no need for putting in some supreme program for a fractal pattern to emerge. Moreover, there is no distinction which H2O molecule (or speckle or dust) is data or instruction. The algorithmic design is an intrinsic (though emergent) property of the matter. There is a novel unanimity emerging: of course we never thought it was junk, and of course we always knew that the DNA was fractal, and that organelles, organs and organism were fractal.
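The claim that a pattern can be algorithmic without any global blueprint is easy to demonstrate in miniature. Here is a minimal sketch (an L-system toy, nothing about genomes): one local rewrite rule, applied repeatedly, generates the quadratic Koch curve, and no description of the final shape appears anywhere in the program:

```python
# A single local rewrite rule generates a fractal curve; no global
# "program" for the finished shape exists anywhere. (Illustrative toy only.)

def koch(axiom="F", depth=4):
    """Expand the quadratic Koch L-system: each F becomes F+F-F-F+F."""
    s = axiom
    for _ in range(depth):
        s = s.replace("F", "F+F-F-F+F")
    return s

# Each pass multiplies the number of segments by 5,
# so depth 3 yields 5**3 = 125 drawing instructions.
print(koch(depth=3).count("F"))  # 125
```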

People I work with and I (Hungarian mafia or not) think this is very good news for FractoGene and its applications. Because there is quite a bit of IP and know-how for utilities in biotech, nanotech and infotech. Certainly, those suffering from some lethal junkDNA diseases can’t wait. They don’t even have time to blog. I don’t, either, but public opinion of voters counts where to steer resources such as tax dollars.

pellionisz_at_junkdna.com

Comment #184808

Posted by GeoMor on June 27, 2007 7:24 AM (e)

Andras Pellionisz wrote:

I believe you see the merits of the argument that FractoGene, by means of convergence of iterative recursion depending on slight divergence on conditions is a contender to pass the onion test (a.k.a. Richard Dawkins’ salamander test).

Sure, I’d agree to that, but only on the assumption of good faith and the fact that what you’ve said is far too vague to tell whether the emperor has any clothes or not. Frankly, I think the way you present things makes it difficult for outsiders, even/especially outsiders knowledgeable in genomics, to take it seriously (whether they express this in a dignified/mature manner is another matter). I think this needn’t be so, because when the basic ideas are distilled down, I find them largely quite reasonable, and some of them are very good observations. Such as the fact that fractal morphological structures must have some basis in the genome, parts of which may therefore resemble fractal algorithms. But it takes many hours of looking at all your web sites, which have a very low signal-to-noise ratio, to get to this point. I would not blame anyone who has not invested some time in it for declaring you to be a “crank”.

My advice would be to summarize your ideas succinctly and logically in one place. Add some figures and diagrams that don’t look hastily thrown together and copied from other sources. Include scholarly references to the scientific literature. Add a FAQ list that points to some background reading without being condescending about it. Tone down the insinuation that your theory leads to immediate insight in all these “junk DNA diseases”, which is simply not a believable claim to knowledgeable outside observers, whether it is true or not. Consult with a professional scientific copyediting service that can help you explain everything clearly.

I think you can do all of the above without revealing any more of your intellectual property than you already have. But it sure would help to explain a lot more about what you know about the materialistic basis, in the genome, of the fractal design you purport to have discovered. Without this, I think it will remain very hard for knowledgeable outsiders to reach even the point of highly-skeptical curiosity that I am at.

Comment #184850

Posted by Andras Pellionisz on June 27, 2007 9:58 AM (e)

GeoMor,

Sounds like the book FractoGene in PostGenetics. It has been in the making for some time, and other priorities taken care of, it is coming. Blogs, websites (etc.) are means of selection of the sorts of people who might be contributors in a massive effort, since there is a lot to do. And don’t forget, USPTO has a special criterion, non-obviousness. Naysayers are useful…

As for junkdna diseases, FractoGene is not yet at the point of curing them. However, it has been clinched for some cases that the known cause lies in defects of fractal intergenic and intronic structures.

Look forward to private emails.

pellionisz_at_junkdna.com

Comment #184855

Posted by Whatever on June 27, 2007 10:50 AM (e)

Andras - “I publicly introduced the von Neumann principle to Genomics at the 2003 February (Future of Life 50th Anniversary of DNA mtg in Monterey).”

Do you mean the field of genomics in general or a specific conference dealing with genomics? If it’s the former, then I think others have long been previously aware of the context dependence of genetic sequences and that data and instructions are “interwoven” and not readily distinguished.

Comment #184862

Posted by Whatever on June 27, 2007 12:24 PM (e)

I have to agree with GeoMor’s assessment above in comment #184808. There is nothing inherently ‘cranky’ about using fractal mathematics to try to learn something about genomic expression and evolution. I’m certain that such studies will find useful applications (there are some already). But the S/N ratio, lack of specificity in explanations, and unfortunately, huge amount of self-promotion and hyperbolic prose filling those sites don’t aid the transmission of the message.

Comment #185020

Posted by Torbjörn Larsson, OM on June 28, 2007 12:30 PM (e)

Syntax Error: mismatched tag 'blockquote'

Comment #185021

Posted by Torbjörn Larsson, OM on June 28, 2007 12:35 PM (e)

[In an effort to rush this late comment past the spamfilter for several links, I cut and repost. Sorry for any double comments.]

Sorry for late response, catching up on old comments.

Andras:

Andras Pellionisz wrote:

I would welcome private email exchange with you and also with Torbjörn Larsson

No, I’m not interested in email exchanges here, it will be blog comments or not at all.

GeoMor, Whatever:

I agree.

But I must also add that cranks live in herds, and Andras Pellionisz is sympathetic to design and ID, up to and including embracing their language.

scordova wrote:

Pellionisz gives a favorable review of a pro-ID article. The article is: How Scientific Evidence is Changing the Tide of the Evolution vs. Intelligent Design Debate by Wade Schaer.

Dr. Pellionisz gives his review at junkdna.com

Comment #185023

Posted by Torbjörn Larsson, OM on June 28, 2007 12:42 PM (e)

Cont.:

The quote should be here.

Now, the article can’t be found due to the site design. And Cordova is a known liar.

Comment #185024

Posted by Torbjörn Larsson, OM on June 28, 2007 12:45 PM (e)

But I suspect that there once was a review, because Pellionisz himself says in a comment on another UD thread:

Andras Pellionisz wrote:

Well, it seems certainly true that among Darwinists-turned-Atheists, such as Richard Dawkins, the view that most “junk” DNA must have a function is still much less of a “mainstream” compared to those who suspect a design (even of a mathematical kind) in the DNA. Where is such a tabulation of science findings on functionality of different sorts of “junk” DNA e.g. by Richard Dawkins, as compared to the tabulation of a “blogger” who suspects design:

http://www.junkdna.com/new_cit…..on_of_junk [Bold added.]

Which of course again, doesn’t say anything about the utility of fractal models. But it says something about my view of FractoGene and Pellionisz. :-)

Comment #185126

Posted by Tim Fuller on June 29, 2007 2:33 PM (e)

I’ve seen the ID crowd reaction to the discovery that the so-called junk DNA isn’t junk after all. I tried to post over there at their website, but they didn’t want to hear what I had to say.

To wit: I’ve never thought of these sequences as ‘junk’ or even ‘non-coding’, but then I am not a biological science or DNA expert. From a lay viewpoint, one that appears to be holding true, we just don’t know how it fits together yet.

The ‘non-coding’ is mainly occurring between our own ears, in our early attempts at figuring it out.

I just tried to point out at uncommon descent that just because the ‘junk’ DNA isn’t ‘junk’ doesn’t strengthen ID’s position at all. I also mentioned that as far as I knew, ID didn’t have the first provable hypothesis by which to test itself.

That didn’t make the cut over there. It was treated like a terrorist threat and banned.

Enjoy.

Comment #185281

Posted by Andras Pellionisz on June 30, 2007 4:06 PM (e)

The http://www.junkdna.com is a site that focuses on what used to be a scientific term, “Junk DNA” - e.g. today it commemorates with an obituary the 35th anniversary of the late Dr. Ohno’s introduction of this now-dead misnomer (1972-2007). *That* site is not “ideological” - it focuses on real science issues. As a scientist, I do find (and am trying to track down) an “algorithmic design” in the Genome (apparently, a fractal design, both in the Genome and in the organelles, organs and organisms that it governs). Such “algorithmic designs” are found elsewhere in natural science (e=mc^2, for one), and in the interest of the progress of science (at least, my science) I am trying not to be used by “ideological warfare” of any kind. Regretfully, I have no control over those abusing science/scientists. I find anonymous abuse particularly unfair.

Back to the algorithmic design of the Genome… pleeeease focus on science…

pellionisz_at_junkdna.com

P.S. pleeeease also kindly note (eg. by looking it up) that the etymology of the noun “design” does not necessarily invoke a “designER”.

Comment #185283

Posted by trrll on June 30, 2007 4:42 PM (e)

I do not favor the elimination of the term “junk DNA,” but it is important to understand what the term refers to. In particular, it was never synonymous with “noncoding DNA.”

The junk DNA hypothesis is that some fraction of the noncoding DNA in some species lacks any current function, but is available to serve as raw material for evolution. The junk DNA hypothesis is based upon the observation that morphologically similar species may differ dramatically in the amount of noncoding DNA (e.g. the “onion test”), as well as upon theoretical considerations from evolutionary biology that argue that there should be some sort of equilibrium balance between selection within the genome favoring “selfish DNA,” which propagates (i.e. duplicates itself) within the genome, and selection at the organism level against an excessively large genome size. It is presumed that this equilibrium balance will likely differ from species to species.
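The equilibrium argument can be caricatured in a toy recurrence (the duplication rate b and the size-dependent cost c below are arbitrary made-up parameters, chosen only to make the numbers visible, not estimates for any real genome):

```python
# Toy model of the selfish-DNA balance described above: elements duplicate
# at rate b per copy, and selection against a bloated genome removes copies
# at a per-copy rate that grows with total copy number. (Illustrative only;
# b and c are invented parameters, not measured rates.)

def step(n, b=0.05, c=1e-6):
    """One generation: duplication gain b*n, size-dependent loss c*n*n."""
    return n + b * n - c * n * n

n = 1000.0
for _ in range(2000):
    n = step(n)

# Copy number settles near the balance point b/c = 50,000, where the
# duplication gain exactly offsets the selective cost.
print(round(n))
```

Different b and c give a different balance point, which is the toy version of the observation that the equilibrium will likely differ from species to species.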

The question is not whether noncoding DNA can have important functions–that has been known almost since the dawn of molecular biology–but rather a quantitative one: how much of the genome is true junk? Needless to say, finding a previously unrecognized function for some small fraction of noncoding DNA does not bear on this question at all.

Comment #185286

Posted by Henry J on June 30, 2007 5:35 PM (e)

Re “pleeeease also kindly note (eg. by looking it up) that the etymology of the noun “design” does not necessarily invoke a “designER”.”

Yeah, sometimes “design” just refers to a description of the parts of something and their relationship to each other. However, that meaning of the word doesn’t appear to be the one intended by ID pushers.

Henry

Comment #185352

Posted by J Thomas on July 1, 2007 11:10 AM (e)

Needless to say, finding a previously unrecognized function for some small fraction of noncoding DNA does not bear on this question at all.

Similarly, finding that a small fraction of the noncoding DNA can be eliminated without obvious effect doesn’t bear on the question at all either.

To answer the question we must establish either functions or nonfunction for a large fraction of all the noncoding DNA.

Is this question worth the effort required? Perhaps answers will fall out as a side effect of other research.

Comment #186657

Posted by mark brenneman on July 8, 2007 6:38 PM (e)

Very loose and mostly peri-scientific blogs link between genes and intelligence: WARNING. THE CONTENTS OF THIS REVIEW MAY BE DANGEROUS TO YOUR SCIENCE. Biochemical compounds Nociception behavior expression, of some clock-controlled genes might be used to treat the cognitive dysfunction, which are the top non-self hit. To investigate the possibility of training cellular automata (CA), to perform several image processing tasks with borderline hits in the details, greatly improves the precision of biological analyses. Correlated with a poor patient prognosis as miRNA of nonsense mutation implicated in the binding of 14-3-3. to protein kinase C at the ∞ junction on each side of the brain stem close to the primary sensory apparatus and the mouth if they exist at all, to the estrogen-preferring, member is initially, a very broad distribution of a gradient of a maternal morphogen. Regulatory logic confirmed a link will have a copy of the IQ motif reproductive and nonreproductive functions. And which is as a new member defines the ability to organize things logically could predict future multiple regression models. In order to investigate this point.

Comment #186687

Posted by PvM on July 9, 2007 12:27 AM (e)

Hey Mark, are you trying to win the contest Dembski alluded to?

Comment #186768

Posted by David Stanton on July 9, 2007 11:47 AM (e)

We have here an excellent example of a case where “clock-controlled genes might be used to treat the cognitive dysfunction”.

Comment #186796

Posted by Stuart ARCS on July 9, 2007 3:35 PM (e)

Interesting idea, but my bet, if I did bet, would still be on the UV catastrophe. We didn’t understand it before Max came along and offered some prospect of our doing so.

All those who claim that some of DNA is either junk or non-coding - or whatever term you may wish to use to indicate that we cannot presently find a function for it, may I call it ill-understood DNA? - may wish to consider that the next Planck in our understanding is just around the corner.

The black body survived the UV catastrophe, because that is the way it is in the world in which we live, and I have no doubt that the ill-understood DNA will continue to do what it is supposed to do, for that is the way that it is in the world in which we live, and it will do that whether we understand it or not.

Of course, I hope that it goes without saying that I would like to know what it does and how it does it.

Comment #188886

Posted by David Clemmons on July 19, 2007 10:29 PM (e)

Recent research has been finding that “junk DNA” is, at least in part, used by the genome for encoding RNA. It has been found that the “junk DNA” are regulator genes, turning the other 5% of genes in the human genome that is not referred to as “junk DNA” on and off.

Though this makes deciphering the human genome a more complex job than first anticipated, I do not see where it takes either side of the creation-evolution debate.

Comment #188892

Posted by Nick (Matzke) on July 19, 2007 10:58 PM (e)

Recent research has been finding that “junk DNA” is, at least in part, used by the genome for encoding RNA. It has been found that the “junk DNA” are regulator genes, turning the other 5% of genes in the human genome that is not referred to as “junk DNA” on and off.

Though this makes deciphering the human genome a more complex job than first anticipated, I do not see where it takes either side of the creation-evolution debate.

Again, read the opening post. Why do onions need five times as much of this regulation as we do?

The fact that much of the DNA genome is transcribed doesn’t change anything. The enzymes that transcribe DNA to RNA are not terribly specific. A lot of that RNA is transcribed in very low copy numbers, probably by accident (perfect specificity is thermodynamically impossible), and degraded very quickly. Some vertebrates work perfectly well with only 10% of our genome and with very little of the repetitive DNA, fossil viruses, etc., that you seem to think must be functional because it is transcribed. On the other hand, ferns, onions, etc. can have up to hundreds of times more DNA in their genomes than humans have. Are you really going to assert that all of that has some specific regulatory function?