PvM posted Entry 2388 on June 21, 2006 07:43 PM.
Trackback URL: http://www.pandasthumb.org/cgi-bin/mt/mt-tb.fcgi/2383

Random Search and No Free Lunch

In his book “No Free Lunch”, Dembski argues, based upon the No Free Lunch theorems, that finding an optimal solution via “random search” is virtually impossible because no evolutionary algorithm is superior to random search. While various authors have shown the many problems with Dembski’s arguments, I intend to focus on a relatively small but devastating aspect of the No Free Lunch theorems.

First I will explain what the No Free Lunch theorems are all about; then I will show how Dembski uses them; and finally I will show that, perhaps counterintuitively, the theorems imply that random search is actually quite effective.

No Free Lunch (NFL) Theorems

The No Free Lunch theorems, after which Dembski named his book, are based on a set of papers by Wolpert and Macready which basically state that:

“[…] all algorithms that search for an extremum of a cost function perform exactly the same, when averaged over all possible cost functions.” (Wolpert and Macready, 1995)

Dembski and the NFL

Dembski argued, based on the No Free Lunch Theorems, that evolutionary algorithms could not perform better than a random search.

Dembski wrote:

It’s against this backdrop of displacement that I treat the No Free Lunch theorems. These theorems say that when averaged across all fitness functions of a given class (each fitness function being an item of information that constrains an otherwise unconstrained search), no evolutionary algorithm is superior to blind or random search.

and

Dembski wrote:

In general, arbitrary, unconstrained, maximal classes of fitness functions each seem to have a No Free Lunch theorem for which evolutionary algorithms cannot, on average, outperform blind search.

Source: Dembski, “Evolution’s Logic of Credulity: An Unfettered Response to Allen Orr”

While Dembski’s treatment of the “No Free Lunch” theorems was, according to mathematician David Wolpert, mostly written in jello, it is still interesting to pursue some of Dembski’s claims. As I will show, not only do the No Free Lunch theorems fail to support Dembski’s thesis, but in fact they show that such optimization is child’s play.

The question really becomes: is it really that hard to find an optimal solution using random search under the assumptions of the No Free Lunch theorems?

The answer may surprise many: ‘not really’.

Finding the optimum value may be hard, but finding a solution which is almost as good is actually reasonably simple. And since evolution does not necessarily search for the best solution, it quickly becomes clear that Dembski’s ‘No Free Lunch’ may have little relevance to evolutionary theory.

In 2002, on the ISCID boards, Erik provided the following calculations:

Erik wrote:

Ironically, even if we grant that the prior over the set of all cost functions is uniform, the NFL theorem does not say that optimization is very difficult. It actually says that, when the prior is uniform, optimization is child’s play! I mean that almost literally. Almost any strategy no matter how elaborate or crude will do. If the prior over the set of cost functions is uniform, then so is the prior over the set of cost values. That means that if we sample a point in the search space we are equally likely to get a low cost value as a high cost value. Suppose that there are Y possible cost values. Then the probability a sampled point will have one of the L lowest cost values is just

r = L / Y,

regardless of which strategy that was used to decide which point to sample. The probability s that at least one of N different sampled points will have a cost value among the L best is given by

s = 1 - (1 - r)^N,

again independently of the strategy used. Is that good or bad performance? The number of points required to achieve a given performance and confidence level is

N = ln(1 - s) / ln(1 - r) ~ - ln(1 - s) / r.

After sampling 298 points the probability that at least one of them is among the best 1% is 0.95. After 916 sampled points the same probability is 0.9999. If instead we want a point among the best 0.1% we need to sample 2994 points to find one with probability 0.95, or approximately 9206 points to find one with probability 0.9999. That kind of performance may not be satisfactory when the optimization must be done very fast in real-time under critical conditions, but it is good for most purposes. Certainly our universe would seem to be able to spare the time necessary to sample 9206 points.
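As a quick sanity check on Erik’s figures, here is a minimal Python sketch (illustrative only; the uniform draw stands in for the cost rank of a sampled point under a uniform prior, and the sample sizes are the ones quoted above):

    import random

    def prob_hit_top_fraction(n_samples, top_fraction, trials=20000):
        # Estimate the probability that at least one of n_samples uniform
        # draws falls within the best top_fraction of cost values.
        hits = 0
        for _ in range(trials):
            if any(random.random() < top_fraction for _ in range(n_samples)):
                hits += 1
        return hits / trials

    print(prob_hit_top_fraction(298, 0.01))    # ~0.95
    print(prob_hit_top_fraction(916, 0.01))    # ~0.9999
    print(prob_hit_top_fraction(2994, 0.001))  # ~0.95

The estimates land on Erik’s numbers because the probability of at least one hit is exactly 1 - (1 - r)^N, independent of how the sampled points are chosen.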

This is why Thomas English wrote:

“The maligned uniform distribution is actually benign. The probability of finding one of the better points with n evaluations does not depend on the size of the domain [7]. For instance, 916 evaluations uncover with 99.99% certainty a point that is better than 99% of the domain. What is remarkable about NFL and the uniform is not just that simple enumeration of points is optimal, but that it is highly effective.” (see below for a reference)

Source: English T. (1999) “Some Information Theoretic Results On Evolutionary Optimization”, Proceedings of the 1999 Congress on Evolutionary Computation: CEC99, pp. 788-795

The inference is never better than the assumption of a uniform prior that it relies on, however. It would seem that in most non-trivial optimization problems good points in the search space are not as frequent as bad points, meaning that the corresponding cost functions are not drawn uniformly from the set of all possible cost functions.

As Erik pointed out, as early as 1996 Tom English had shown how relatively simple such optimization really is:

Tom English wrote:

The obvious interpretation of “no free lunch” is that no optimizer is faster, in general, than any other. This misses some very important aspects of the result, however. One might conclude that all of the optimizers are slow, because none is faster than enumeration. And one might also conclude that the unavoidable slowness derives from the perverse difficulty of the uniform distribution of test functions. Both of these conclusions would be wrong.
If the distribution of functions is uniform, the optimizer’s best-so-far value is the maximum of n realizations of a uniform random variable. The probability that all n values are in the lower q fraction of the codomain is p = q^n. Exploring n = log_q(p) points makes the probability p that all values are in the lower q fraction. Table 1 shows n for several values of q and p.
It is astonishing that in 99.99% of trials a value better than 99.999% of those in the codomain is obtained with fewer than one million evaluations. This is an average over all functions, of course. It bears mention that one of them has only the worst codomain value in its range, and another has only the best codomain value in its range.

Thomas M. English, “Evaluation of Evolutionary and Genetic Optimizers: No Free Lunch”, Evolutionary Programming V: Proceedings of the Fifth Annual Conference on Evolutionary Programming, L. J. Fogel, P. J. Angeline, and T. Bäck, Eds., pp. 163-169. Cambridge, Mass.: MIT Press, 1996.

Table 1. Evaluations n needed so that, with probability 1 - p, at least one sampled value lies above the lower q fraction of the codomain:

    Fraction q    p = 0.01    p = 0.001    p = 0.0001
    0.99               458          687           916
    0.999             4603         6904          9206
    0.9999           46049        69074         92099
    0.99999         460515       690772        921029
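The table entries follow directly from n = log_q(p) = ln(p)/ln(q). A minimal Python sketch (illustrative; the names are my own) reproduces them:

    import math

    def evaluations_needed(q, p):
        # n = log_q(p): samples needed so that the chance of *every* sampled
        # value falling in the lower q fraction of the codomain is only p.
        return round(math.log(p) / math.log(q))

    fractions = [0.99, 0.999, 0.9999, 0.99999]   # q: fraction of codomain to beat
    probabilities = [0.01, 0.001, 0.0001]        # p: residual chance of failure

    for q in fractions:
        print(q, [evaluations_needed(q, p) for p in probabilities])
    # 0.99    -> [458, 687, 916]
    # 0.99999 -> [460515, 690772, 921029]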

Tom English repeated these facts in a comment at PandasThumb:

In 1996 I showed that NFL is a symptom of conservation of information in search. Repeating a quote of Dembski above:

Dembski wrote:

The upshot of these theorems is that evolutionary algorithms, far from being universal problem solvers, are in fact quite limited problem solvers that depend crucially on additional information not inherent in the algorithms before they are able to solve any interesting problems. This additional information needs to be carefully specified and fine-tuned, and such specification and fine-tuning is always thoroughly teleological.

Under the theorems’ assumption of a uniform distribution of problems, an uninformed optimizer is optimal. To be 99.99% sure of getting a solution better than 99.999% of all candidate solutions, it suffices to draw a uniform sample of just 921,029 solutions. Optimization is a benign problem with rare instances that are hard. Dembski increases the incidence of difficult instances by stipulating “interesting problems.” At that point it is no longer clear which NFL theorems he believes apply. Incidentally, an optimizer cannot tune itself to the problem instance while solving it, but its parameters can be tuned to the problem distribution from run to run. It is possible to automate adaptation of an optimizer to the problem distribution without teleology.

Source: Tom English Pandasthumb Comment

I cannot emphasize strongly enough how wrong Dembski is in his comments on random search, as these almost trivial calculations reveal. While Dembski is correct that finding the optimal solution may be extremely hard, finding a solution which is arbitrarily close to the optimum is actually quite straightforward.

It should not come as a surprise that the “No Free Lunch Theorems” have more unfortunate surprises in store for Intelligent Design. More on that later…

No Free Lunch Theorems

  1. Dembski, Introduction to No Free Lunch
  2. Richard Wein, Not a Free Lunch But a Box of Chocolates A critique of William Dembski’s book No Free Lunch
  3. Wesley Elsberry, No Free Lunch, The Book
  4. English, Optimization Is Easy and Learning Is Hard In the Typical Function (2000), Proceedings of the 2000 Congress on Evolutionary Computation
  5. Critique of Intelligent Design Talk Reason
  6. Yin-Yang: No-Free-Lunch Theorems for Search Discussion at the 1995 International Conference on Genetic Algorithms
  7. No Free Lunch Theorems
  8. Wikipedia: No Free Lunch Theorem
  9. The No Free Lunch Theorems, Evolution and Evolutionary Algorithms
  10. Index to Creationist Claims
  11. Wolpert and Macready, No Free Lunch Theorems for Optimization (1996)
  12. Wolpert and Macready, No Free Lunch Theorems for Search (1995)
  13. Wolpert William Dembski’s treatment of the No Free Lunch theorems is written in jello
  14. Marc Toussaint Homepage Publications
  15. Marc Toussaint, Christian Igel Neutrality: A Necessity for Self-Adaptation (2002) , 2002
  16. Toussaint, PhD thesis The evolution of genetic representations and modular adaptation
  17. Toussaint, On the Evolution of Phenotypic Exploration Distributions (2003)

Commenters are responsible for the content of comments. The opinions expressed in articles, linked materials, and comments are not necessarily those of PandasThumb.org. See our full disclaimer.

Comment #107176

Posted by fnxtr on June 21, 2006 12:15 PM (e)

The Anti-Science League really does seem to miss the point that evolution doesn’t “seek the optimal solution”, or anything so anthropomorphically deliberate; it is a *result* of the range of the available options at the time. “Best” is constantly in flux. Hence the title of this website.

Comment #107190

Posted by PaulC on June 21, 2006 1:00 PM (e)

Dembski:

The upshot of these theorems is that evolutionary algorithms, far from being universal problem solvers,

I must have missed the memo where somebody claimed that evolutionary algorithms are universal problem solvers.

Actually, it’s easy to write a “universal” problem solver if you’re not in a hurry to see it get an answer (e.g. dove-tailing: just enumerate all possible solutions starting from the smallest ones first). But usually we are interested in an optimization method that comes up with good results in a feasible time frame. Evolutionary algorithms are generally applicable to domains in which we do not have a deep knowledge of structure that would lead to a better optimization (unlike, say, bipartite matching, network flow, or linear programming, for which there are domain-specific algorithms). However, evolutionary algorithms are not “universal”. Evolution itself is not capable of optimizing all objective functions. Arguably, it does not optimize anything, though it does a fine job of satisficing fitness criteria in natural ecologies.
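For concreteness, a minimal sketch of that kind of brute-force enumeration (Python; the is_solution predicate is a hypothetical stand-in for whatever property is being searched for) looks like this:

    from itertools import count, product

    def enumerate_candidates():
        # Yield every bit string, shortest first: the 'universal' but
        # unhurried strategy, which eventually reaches any candidate.
        for length in count(0):
            for bits in product("01", repeat=length):
                yield "".join(bits)

    def universal_search(is_solution):
        # Halts only if some candidate actually satisfies the test.
        for candidate in enumerate_candidates():
            if is_solution(candidate):
                return candidate

    print(universal_search(lambda s: "101" in s))  # -> '101'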

Dembski’s take on NFL is that it somehow proves that evolutionary algorithms are a waste of time. But I fail to see how he distinguishes between these and other very general methods–e.g. hill-climbing and fixed-point methods–that are also not “universal” but often give useful results when one applies them to an appropriate kind of objective function.

Comment #107196

Posted by PvM on June 21, 2006 1:23 PM (e)

I will discuss both the concept of evolvability (why evolution works so well) and the concept of displacement in later postings. Little steps… Otherwise there may be too much information to deal with.

Evolvability, or as I see it the co-evolution of mechanisms of variation, helps understand how and why evolution can ‘learn from the past’ and how it can be successful.

Displacement is a whole can of worms and I believe that Dembski’s approach fails to explain why/how natural intelligence can circumvent this problem, if it even is a real problem. Since evolution is not a global optimizer, I find his displacement problem of limited interest.

Comment #107202

Posted by William E Emba on June 21, 2006 1:36 PM (e)

PvM wrote:

While Dembski’s treatment of the “No Free Lunch” theorems was, according to mathematician David Wolpert, mostly written in Jello,

Wolpert stated that the mathematical part of Dembski’s treatment of NFL was hopelessly mistaken. “Jello” referred to the wooziness of Dembski’s philosophy.

Comment #107204

Posted by Moses on June 21, 2006 1:41 PM (e)

Poof! I say! Poof!

Comment #107206

Posted by secondclass on June 21, 2006 1:50 PM (e)

Pim wrote:

Displacement is a whole can of worms and I believe that Dembski’s approach fails to explain why/how natural intelligence can circumvent this problem, if it even is a real problem.

That’s why Dembski sees all intelligence as ultimately supernatural. But that doesn’t solve the problem either.

His thinking is: (1) X is impossible, or virtually impossible, according to established science. (2) Therefore, X is designed.

Even if he could establish his premise, which he can’t, his conclusion is a non sequitur. According to his logic, if we were to discover a particle that moves faster than the speed of light, we should just label it “designed” instead of adjusting our understanding of physics.

Comment #107207

Posted by GuyeFaux on June 21, 2006 1:52 PM (e)

Very nice, succinct post.

I really liked Wolpert’s jello article, and thought that it was good enough to blow the whole “Evolution-eats-a-free-lunch” thing out of the water. Basically, it points out that

1) Dembski knows what he wants to prove: evolution is insufficient to produce complexity,

2) Knows how he wants to prove it: show that evolution purports to be an algorithm which performs better over all problem spaces.

3) Does not in fact prove it.

In the world of formal logic and math, there’s no such thing as a partially proved theorem.

Comment #107209

Posted by Alann on June 21, 2006 1:58 PM (e)

The No Free Lunch theorem seems essentially pointless.

First, if you limit the domain in any way (like to planets with water, or to carbon based life) the theorem is no longer applicable.
Try the Wikipedia entry:

Although mathematically sound, the fact that NFLT cannot be applied to arbitrary subsets of cost functions limits its applicability in practice

Second, if I understand it correctly, the idea is that over all theoretical math problems any two search algorithms are equally effective. So in principle if you can create a search algorithm that is inherently less efficient the theorem would be disproved.

Maybe this is an oversimplification but doesn’t this disprove the NFLT?

Search A: (0,-1,+1,-2,+2,-3,+3,…)
    if LastResult is nothing then return 0
    elseif LastResult>=0 return -(LastResult+1)
    else return -LastResult
Search B: (0, 0, 0,-1, 0,+1, 0,…)
    IterationCount = IterationCount +1
    if IsEven(IterationCount) return Search A
    else return 0

or how about this:

Search C: (0, 0, 0, 0, 0, 0, 0, …)
    return 0

While A,B,C may be equal for the subset of mathematical problems which result in zero, for anything else B will automatically take twice as many iterations, while C will never find the answer.

Could someone tell me if I am missing something here?

Comment #107211

Posted by GuyeFaux on June 21, 2006 2:07 PM (e)

Could someone tell me if I am missing something here?

You would have to apply Search A and Search B over all possible sequences and average their effectiveness. If either A or B performed better, we’d have a problem.

Comment #107219

Posted by Grey Wolf on June 21, 2006 2:46 PM (e)

While Dembski is correct that finding the optimal solution may be extremely hard, finding a solution which is arbitrarily close to the solution is actually quite straightforward.

Yes, but see, Dembski knows that *he* is the perfect man. Consequently, evolution, if it exists, has had to produce the optimal solution (i.e. him). Since that is impossible, evolution cannot have happened.

Never ignore sublime egotism as the basis for creationism drivel.

Hope that helps,

Grey Wolf

Comment #107251

Posted by Sir_Toejam on June 21, 2006 4:27 PM (e)

Poof! I say! Poof!

“rule #1:

No Pooftas!”

Comment #107253

Posted by Sir_Toejam on June 21, 2006 4:31 PM (e)

ya know, sometimes I think Dembski made up this stuff just to piss off his old alma mater.

One does wonder how on earth he could have presented a defendable thesis, if his NFL drivel represents the way his mind actually works.

Comment #107256

Posted by secondclass on June 21, 2006 5:05 PM (e)

Alann wrote:

Second if I understand it correctly the idea is that over all theoretical math problems any two search algorithms are equally effective.

My understanding is that any two non-repeating search algorithms are equally effective, so this would not apply to your algorithms B and C. Hope this helps.

Comment #107263

Posted by Coin on June 21, 2006 5:29 PM (e)

Alann wrote:

While A,B,C may be equal for the subset of mathematical problems which result in zero, for anything else B will automatically take twice as many iterations, while C will never find the answer.

Could someone tell me if I am missing something here?

Alann, my understanding is that the ‘no free lunch’ theorems only consider algorithms which (1) never visit the same point twice, and (2) eventually visit every single point in the space. In other words insofar as the ‘no free lunch’ theorems are concerned, your algorithms B and C do not exist.
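These two conditions can be checked by brute force on a toy space. The sketch below (Python, illustrative only; the 4-point domain and 3 cost values are arbitrary choices) averages the best value found after each evaluation over every possible cost function, for a fixed sweep and for a strategy that adapts to the first value it sees. The two averaged curves come out identical, which is the NFL result in miniature:

    from itertools import product

    DOMAIN = [0, 1, 2, 3]   # toy search space: 4 points
    VALUES = [0, 1, 2]      # 3 possible fitness values

    def run(algorithm, f):
        # Best-so-far value after each evaluation; no point is revisited.
        visited, curve, best = [], [], None
        while len(visited) < len(DOMAIN):
            x = algorithm(visited, [f[v] for v in visited])
            visited.append(x)
            best = f[x] if best is None else max(best, f[x])
            curve.append(best)
        return curve

    def sweep(visited, seen):
        # Fixed left-to-right enumeration.
        return next(x for x in DOMAIN if x not in visited)

    def adaptive(visited, seen):
        # Picks a direction based on the first value observed.
        if not visited:
            return 0
        order = DOMAIN if seen[0] >= 1 else list(reversed(DOMAIN))
        return next(x for x in order if x not in visited)

    def average_curve(algorithm):
        funcs = [dict(enumerate(f)) for f in product(VALUES, repeat=len(DOMAIN))]
        curves = [run(algorithm, f) for f in funcs]
        return [sum(c[k] for c in curves) / len(curves) for k in range(len(DOMAIN))]

    print(average_curve(sweep))     # approx. [1.0, 1.444, 1.667, 1.790]
    print(average_curve(adaptive))  # identical averages, as NFL predicts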

Comment #107280

Posted by ttw on June 21, 2006 6:57 PM (e)

Evolution is survival of the adequate.

If there is some “structure” on the problem space, the iterated nature of evolutionary algorithms (putting the next generation’s samples in the “better” region) can give a very fast convergence rate.

Comment #107297

Posted by 'Rev Dr' Lenny Flank on June 21, 2006 7:49 PM (e)

His thinking is: (1) X is impossible, or virtually impossible, according to established science. (2) Therefore, X is designed.

Indeed. “God of the Gaps”.

That, of course, is precisely why his “filter” consists of three steps in a specified order: (1) rule out chance, (2) rule out law, (3) therefore design.

Why doesn’t Dembski follow the sequence (1) rule out design, (2) rule out chance, (3) therefore law, or some other sequence? Because, of all the possible sequences to his “filter”, he has chosen one that absolutely positively negates any need whatsoever for design “theory” to present anything, offer anything, test anything, or demonstrate anything. His whole argument boils down to nothing more than “if evolution can’t explain it, then godiddit”. Or, “God of the Gaps”.

I suspect that is not a coincidence.

Comment #107302

Posted by Shalini, BBWAD on June 21, 2006 8:03 PM (e)

Notice how Dembski hurries to wash his hands of providing any substance whatsoever to his beloved theory?

This from the ‘Isaac Newton of information theory’.

(snicker)

Comment #107334

Posted by djlactin on June 21, 2006 11:24 PM (e)

sez Dembski:

finding an optimal solution via “random search” is virtually impossible because no evolutionary algorithm is superior…

This very premise reveals a gross lack of understanding of evolution on Dembski’s part.

[Name an ‘optimal’ organism, Mr. Bill!]

Evolution does not optimize. At best, it results in gene pools becoming better adapted to local conditions, subject to available genetic variability combined with phenotypic, genotypic, and developmental constraints.

Hmmm…
1) Only guided search results in optimal solutions.

2) Evolution doesn’t result in optimal solutions.

3) Therefore, evolution is not a ‘guided search’.

“Oh dear,” says the Designer, and disappears in a puff of logic.

Comment #107354

Posted by Adam Ierymenko on June 22, 2006 12:55 AM (e)

Wow… I already knew that Dembski’s NFL drivel was bad… but this thread seems to have torn it a few new ones.

I continue to be held speechless by both the intellectual vacuity and the amoralism (lying, deceptive use of fallacies, etc.) of today’s religious apologists.

If there’s a God, how come his followers behave as if there isn’t?

Comment #107364

Posted by pwe on June 22, 2006 4:40 AM (e)

Adam Ierymenko wrote:

Wow… I already knew that Dembski’s NFL drivel was bad… but this thread seems to have torn it a few new ones.

I continue to be held speechless by both the intellectual vacuity and the amoralism (lying, deceptive use of fallacies, etc.) of today’s religious apologists.

If there’s a God, how come his followers behave as if there isn’t?

If cows had gods, …

The IDists are not followers of any ethical god, but of the DESIGNER. It’s frequently claimed that the purpose of the Wedge Strategy is to introduce a theocrazy (mis-spelling intentional), but really it is a technocrazy (mis-spelling intentional).

How often do you find anything about ethics in ID articles? Very rarely! It’s mostly engineers claiming that the Designer designs like an engineer.

Comment #107387

Posted by Corkscrew on June 22, 2006 7:31 AM (e)

What if the domain is infinite and the cost function goes to infinity somewhere? If I’m understanding correctly, that sort of situation wouldn’t allow you to apply this result in any meaningful sense.

Am I horribly missing the point here?

Comment #107393

Posted by PaulC on June 22, 2006 8:03 AM (e)

Corkscrew:

What if the domain is infinite and the cost function goes to infinity somewhere? If I’m understanding correctly, that sort of situation wouldn’t allow you to apply this result in any meaningful sense.

Since the result applies to averaging over all fitness functions, I don’t see how to handle it in this context. For a function to have an infinite limit somewhere as you describe, I think it has to satisfy at least some conditions of continuity. “Most” fitness functions are not continuous. So I can see how to average over fitness functions that map the domain to a finite interval, but if there’s a way to include functions with infinite limits, then my math skills are completely inadequate. Maybe someone else can help. It is also true that the functions with limits are going to have measure 0 in the space of all possible functions to real values, for instance, so maybe we can just ignore them in the average (but I will not go out on a limb and say we can).

Finally, I think the comments about random search still apply. If I have a fitness function, whether or not it has some range values that are much higher than the others (outliers) or even has some infinite limits in places, it is still relatively easy to find values in the 99%tile, 99.9%tile etc. just by random sampling. My result may turn out to be very small compared to the actual maximum, but it will still be much higher than most other values of the fitness function.

Comment #107394

Posted by PaulC on June 22, 2006 8:09 AM (e)

Correction: when I wrote “but it will still be much higher than most other values of the fitness function” I should have said “but it will still have a high probability of being greater than or equal to most other values of the fitness function.” There’s no guarantee that it will be “much higher.”

Actually, many optimization problems look a lot like this. Almost all randomly chosen solutions have an objective function that could just as well be set at 0 (alternatively, they violate constraints and are infeasible). You can easily find the 99.9%tile value, but it will be 0. The useful solutions are a vanishingly small percentage of the sample space, so you will need some other kind of optimization to find them. It still might not be hard; for instance if the objective function is convex, some kind of hill-climbing will take you to the solution once you find a feasible starting point.
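A tiny simulation makes that point concrete (Python, illustrative only; the fraction of useful candidates is an arbitrary choice): random sampling happily returns a value at the 99.9th percentile, but that value is simply 0.

    import random

    USEFUL_FRACTION = 1e-7   # arbitrary: one candidate in ten million is useful

    def objective(x):
        # Almost every random candidate is infeasible and scores 0;
        # the rare useful ones score 1.
        return 1 if x < USEFUL_FRACTION else 0

    samples = [objective(random.random()) for _ in range(10_000)]
    print(max(samples))  # almost certainly 0, yet 0 beats ~99.99999% of the space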

Comment #107401

Posted by Torbjörn Larsson on June 22, 2006 8:55 AM (e)

Let me see if I get the story so far.

English notes that the averaging complaint should be dropped. I don’t understand why it doesn’t apply, since usually evolution should see a gradient or else be momentarily happy or slowly drift.

Anyway, he notes that those complementary random instances don’t fit in the world, as Alann says, and that he has in fact proved it, but that “it remains possible that the “real-world” distribution approximates an NFL distribution closely”.

*But* he also notes that “Under the theorems’ assumption of a uniform distribution of problems, an uninformed optimizer is optimal” and sufficiently fast for a population. So No Free Lunch for Dembski.

Furthermore, if NFL assumes “(1) never visit the same point twice, and (2) eventually visit every single point in the space” it doesn’t necessarily apply to evolution. Populations are related (duh!) and don’t cover the whole space they are evolving through.

And since it assumes optimality it doesn’t apply to evolution anyway. Furthermore it doesn’t apply since genes and species coevolve. NFL also seems to assume conservation of information. (Perhaps that is why coevolution dropkicks it, as PvM hints.) Since we have an inflow of information from the nonclosed environment on the properties of newly realised features and from changes in the environment affecting these properties, it also breaks NFL assumptions on the fitness landscape.

I’m eager to see the rest of this story!

Comment #107402

Posted by Torbjörn Larsson on June 22, 2006 9:14 AM (e)

“What if the domain is infinite and the cost function goes to infinity somewhere?”

What does that mean? If you look at fitness instead I guess you have 0-100 % of the population reproducing with on average n children. How is fitness defined, do you need to invoke cost, and how do you map between?

Comment #107434

Posted by Mike Rogers on June 22, 2006 11:44 AM (e)

Adam Ierymenko wrote:

If there’s a God, how come his followers behave as if there isn’t?

I get the impression that the ID people are far more concerned about the perceived consequences of whether or not people believe in God than they are about the actual truth of the matter.

I think that they believe in God in much the same way some older children believe in Santa Claus. In the latter case, the child has pretty much figured out what’s really going on but still chooses to feign belief a little longer because it’s fun. In the case of ID, they firmly believe, regarding themselves, that disbelief in God will consign them to unmitigated existential angst and a loss of meaning or purpose to life and, regarding others, that disbelief in God will inevitably lead to cultural decline and moral chaos.

But they still have enough of a lurking intellectual conscience to crave some balance between their desire to believe in God and the values of reason and science via some kind of rationalization. They also often have scientistic leanings, since modern science has been so successful. The resulting inner tension, combined with their own repressed nihilism, makes fideism, or else just belief based on non-scientific grounds, seem psychologically unacceptable. So to them the truth isn’t really what counts. It’s just about having a (pseudo-) scientific justification for their faith and convincing everyone else of its truth. To them, that end justifies any means.

Comment #107448

Posted by Corkscrew on June 22, 2006 1:23 PM (e)

For a function to have an infinite limit somewhere as you describe, I think it has to satisfy at least some conditions of continuity.

Good point. I’d note, however, that there are other functions that would mess things up - for example, if for any n there exists an infinite number of elements of the set of genotypes/phenotypes such that the cost function evaluates to greater than n. This wouldn’t necessarily need to be continuous. Consider, for example, the markings on a ruler, where the length is more or less proportional to int(log10(x)). That sure as hell ain’t continuous, but still has an infinite number of elements of cost > any given value.

Finally, I think the comments about random search still apply. If I have a fitness function, whether or not it has some range values that are much higher than the others (outliers) or even has some infinite limits in places, it is still relatively easy to find values in the 99%tile, 99.9%tile etc. just by random sampling.

Gah, you’re right, I need to read up on my measure theory. My problem was I was having trouble seeing how percentages could be usefully defined on certain sets.

TL wrote:

Furthermore, if NFL assumes “(1) never visit the same point twice, and (2) eventually visit every single point in the space” it doesn’t necessarily apply to evolution.

And the second point, although incidental, puts the nail in my objection’s coffin - obviously you can’t visit every single point of an infinite space.

Comment #107456

Posted by Donald M on June 22, 2006 2:21 PM (e)

In another thread PvM wrote:

What a little critical thinking and access to google can do…

Apparently, Pim, you didn’t heed your own advice in this post (or your earlier one on this subject for that matter). Besides the fact that you remove your quotes from Dembski from their full context in order to create your pretext, you don’t even cite the correct source. Instead of using a brief summary statement from the response to Orr, why didn’t you quote directly from the book No Free Lunch? I presume you’ve read it, since you participated in the ISCID online study/discussion group with Dembski a couple years back. Also, you completely ignore Dembski’s further elaborations and development of his concepts in this article and this one as well as this one, all of which are relevant to your project of trying to dismantle Dembski.

Further, you’re careful to footnote the source for the Dembski quote from the Orr paper, but you don’t bother to reference where we can find the original ISCID discussion that contains the elaborate quote from Erik. Given how out of context so much else of your ramblings seem to be, I have to wonder if this oversight was because you didn’t want anyone to see the rest of the ISCID discussion.

If I didn’t know better, I’d say you were attempting to carefully construct an elaborate straw man, complete with select-o-quotes and other misrepresentations. You’ll have to do better than that.

Comment #107482

Posted by Ken Kelly on June 22, 2006 3:48 PM (e)

Donald,

Which of the two Dembski quotes in PvM’s post do you consider to be misleading, absent their full context?

Comment #107484

Posted by secondclass on June 22, 2006 4:20 PM (e)

Donald M wrote:

…you don’t even cite the correct source.

Um, yes he did.

Donald M wrote:

Instead of using a brief summary statement from the response to Orr, why didn’t you quote directly from the book No Free Lunch?

How about if you tell us how No Free Lunch and subsequent papers mitigate the problems that PvM outlines? If you’re going to accuse Pim of mining quotes that misrepresent Dembski, then at least provide us with some evidence.

Comment #107486

Posted by PvM on June 22, 2006 4:40 PM (e)

Donald seems to go for the ad hominem approach. Seems that ID is even more vacuous than I had imagined.

Still embarrassed for accusing me erroneously of leaving out 16 paragraphs of text?

Sometimes it’s best to remain silent Donald, unless you can really contribute something to the really devastating finding that random search under NFL assumptions is trivial.

Care to defend Dembski ?

Thought so…

Comment #107487

Posted by PvM on June 22, 2006 4:43 PM (e)

As far as not addressing all the work of Dembski, don’t worry, I am working my way slowly through the more obvious and damning errors and mistakes.
I can understand the shock to some who actually took all Dembski said as the gospel, so to speak.

Comment #107489

Posted by Sir_Toejam on June 22, 2006 5:06 PM (e)

poor ducky is a masochist; he never tires of being wrong.

no such thing as bad publicity, eh Donald?

Comment #107494

Posted by 'Rev Dr' Lenny Flank on June 22, 2006 5:40 PM (e)

Wow, Donald, has it been a whole month already? Is FL standing by, waiting for his turn?

Yes, yes, yes, Donald —- science doesn’t pay any attention to your religious opinions, and you don’t like that. Right. We got it. Really. We heard you the first hundred times.

Of course, weather forecasting or accident investigation or medical practice or the rules of basketball also don’t pay any attention to your religious opinions, do they.

If it makes you feel any better, Donald, none of them pay any attention to MY religious opinions either. Of course, I don’t throw tantrums over it, like you do. (shrug)

Comment #107550

Posted by PvM on June 22, 2006 11:11 PM (e)

Since Donald seems to be unable to address the demolition of Dembski’s argument I am more than willing to present to him the following challenge

Show support for your claim that

Donald M wrote:

Given how out of context so much else of your ramblings seem to be, I have to wonder if this oversight was because you didn’t want anyone to see the rest of the ISCID discussion.

If I didn’t know better, I’d say you were attempting to carefully construct an elaborate straw man, complete with select-o-quotes and other misrepresentations. You’ll have to do better than that.

I encourage you to document

1. That I quote out of context
2. That I create Strawmen
3. That I selectively quote
4. That I misrepresent

Or be known for spreading unsupported assertions.

Fair enough eh Donald. Surprise us with some independent thought. If you need to ask Dembski for guidance, realize his poor performance in this area.

What is it going to be… yes or …
Or do you need to sleep on it for a while.

Comment #107551

Posted by steve s on June 22, 2006 11:32 PM (e)

Donald doesn’t waste time fact-checking the posts at UD. It would only slow him down. He finds it much easier to just accept what they say is true. I mean, why bother checking? You’ll just find out that they misled you.

Comment #107571

Posted by SPARC on June 23, 2006 3:07 AM (e)

Nobody needs NFL, irreducible or specified complexity to prove creation. Actually we can observe it every day. We are just a few clicks away from an obviously created parallel world and we can easily identify the creators as being Behe, Dembski and others heavenly supported by the DI. However, due to its redundancy it may not be irreducibly complex (taking away one part will unfortunately not mean the Armageddon for this world). Thus, it remains to be proven that this creation involved some intelligence.

Comment #107591

Posted by SPARC on June 23, 2006 7:31 AM (e)

BTW, they have created another new world, have a look

Comment #107604

Posted by stevaroni on June 23, 2006 9:57 AM (e)

BTW, they have created another new world, have a look

OK, I just blew out my irony meter on that one.

For those who haven’t looked, the page is the researchintelligentdesign.org wiki homepage.

At least in my browser, it comes up as…

> Main Page

> There is currently no text in this page

Yes, I understand it’s a work in progress, but it speaks volumes anyhow.

Comment #107622

Posted by Alann on June 23, 2006 11:46 AM (e)

I’d like to thank secondclass and Coin for explaining the criteria for a search algorithm that I had missed.
(That it must never repeat a point (no inherent inefficiency) and must eventually hit every point (eventually will get it right))
I think I understand the NFLT now:

1) There are an infinite number of math problems
2) For any given value there are an infinite number of math problems which return that value
3) Since each given value has an infinite number of problems all values are equally likely
4) In effect this says that the average of all math problems is essentially picking a random number
5) Since the result is essentially random no search algorithm can be better than another.

It is pure garbage. It’s like an elaborate proof that 1 = 1 by starting with (A x 0) + 1 = (B x 0) + 1, and implying that this somehow infers something about A and B.

First it’s an attempt to trick you by starting with the hidden premise that your solution is purely random. Well duh, if your solution is random of course you cannot search for it effectively. They want to gloss over the step where in order to apply the NFLT you must first prove the problem is purely random. Yes, just try and explain that there are equally as many scenarios where the life form would be a fire breathing dragon or a unicorn, as there are where the life form would be a fish or a goat.

Second even in mathematics step 3 is just plain wrong. If there is an infinite number of A and an infinite number of B, it DOES NOT hold true that there are as many A as B. Infinity does not behave like a normal number:

    infinity * infinity = infinity (not infinity squared)
    infinity/infinity=infinity (not one)
    infinity + infinity = infinity (not two infinity)
    infinity - infinity = infinity (not zero)

In fact it should be obvious that certain numbers have special properties making them more or less likely than others. For example Zero would be more likely because any number times zero is zero, at the same time prime numbers would be rarer because they are less likely through multiplication.

Comment #107626

Posted by Gray on June 23, 2006 12:12 PM (e)

Alann wrote:

Second even in mathematics step 3 is just plain wrong. If there is an infinite number of A and an infinite number of B, it DOES NOT hold true that there are as many A as B. Infinity does not behave like a normal number.

I know very little about mathematics, but isn’t it also the case that there are different orders of infinity? The natural numbers and the odd numbers, for example, are both infinite, but there are “more” (in some sense that I don’t understand; infinity is a bizarre concept after all) of the former than the latter. Right?

Comment #107631

Posted by Henry J on June 23, 2006 12:44 PM (e)

Alann,
Re “Infinity does not behave like a normal number:”

Not to mention that some infinities are larger than others, so much so that there can’t be a set of all cardinal numbers (a.k.a. sizes of sets).

Gray,
Re “The natural numbers and the odd numbers, for example, are both infinite, but there are “more” (in some sense that I don’t understand; infinity is a bizarre concept after all) of the former than the latter. Right?”

I think you mean integers as compared to real numbers. There’s more real numbers than integers. Though otoh, the set of rational numbers is the same size as the set of integers.

Henry

Comment #107651

Posted by Coin on June 23, 2006 3:25 PM (e)

Alann,

Not exactly.

Basically, if you ever find yourself treating “infinity” like a number, just assume you’re doing something wrong. In equations, infinity is something that should only be dealt with in the limit (and you have to follow the rules that limits bring with them). Infinity is not a number.

(Full disclosure: There are these things often called “transfinite numbers”, which come in two flavors, “infinite ordinals” and “infinite cardinals”. These are technically “infinite numbers” and they’re sometimes called that, but you can’t treat them like numbers. You’re only allowed to use them for certain things, and a lot of the normal nice polite things that numbers do don’t exist when we’re talking about infinite ordinals and cardinals. For example, if we are talking about infinite ordinals (ordinals are the ones that are more like “numbers” as we think of them– they’re numbers that you can count to), the operation of division is not universally defined.)

Where I’m going with this is that:

3. Since each given value has an infinite number of problems all values are equally likely … Second even in mathematics step 3 is just plain wrong.

The mathematics are fine, you just understand them incorrectly. There are accepted methods for taking averages over infinite areas, or working with probabilities in infinite spaces, and they don’t look anything like “infinity / infinity”. I haven’t read the original proofs of Wolpert and Macready, the people who invented NFL, but I am quite confident they followed the rules and did their “equally likely” calculation in the correct way. From the way the NFL proofs have been described to me, one way you can look at them as working is something like: if you have an algorithm A and an algorithm B, for any given problem algorithm A performs better at, you can go and find a problem that algorithm B will perform better at. The things algorithm A is good at cancel out the things algorithm B is good at exactly, eventually leaving the two exactly evenly matched. There’s an enlightening graph on wikipedia showing how this works– an algorithm A is optimized such that it’s really really good at a small set of things, but as a result it gets ever so slightly worse at everything else, leaving it on average just as good as universally-mediocre algorithm B.

The NFL theorems of Wolpert and Macready are quite mathematically sound. What is not so sound is some of the extra-mathematical pronouncements some persons have made about the importance and consequences of the NFL theorems, in particular the pronouncements made by Mr. William Dembski. You are quite right when you say:

First it’s an attempt to trick you by starting with the hidden premise that your solution is purely random. Well duh, if your solution is random of course you cannot search for it effectively. They want to gloss over the step where in order to apply the NFLT you must first prove the problem is purely random.

Wolpert and Macready didn’t have this problem, since they were talking about search algorithms in an extremely general sense, and even “purely random” cost functions absolutely do matter in that context. Dembski however is ultimately trying to talk about the suitability of a specific set of search algorithms (evolutionary ones) for a specific set of problems (allowing a biological organism to survive in the physical universe), which means NFL hardly has any relevance at all anymore.

Gray wrote:

The natural numbers and the odd numbers, for example, are both infinite, but there are “more” (in some sense that I don’t understand; infinity is a bizarre concept after all) of the former than the latter. Right?

The way math people look at it, there are the same number (this would be a cardinal number, here) of natural numbers as odd numbers, even though the odd numbers are a subset of the natural numbers. The reason for looking at things this way is that you can line up the natural numbers with the odd numbers (pair 1 with 1, 2 with 3, 3 with 5, 4 with 7…) and the way you’ve lined them up, there’s exactly one natural number for every odd number and exactly one odd number for every natural number. This is what math requires to consider two sets to be the “same size” or same cardinality.

Comment #107657

Posted by Alann on June 23, 2006 3:49 PM (e)

There are an infinite number of:
Prime Numbers (2,3,5,7,…)
Natural Numbers (+1,+2,+3,…): positive integers (sometimes zero is included)
Integers (…,-2,-1, 0,+1,+2,…)
Rational Numbers (x/y): All integers plus all possible fractions.
Irrational Numbers: Decimal numbers with non-repeating digits, including Pi, the square root of 2, etc.
Real Numbers: All Rational and Irrational Numbers
Imaginary numbers: Any Real number plus a real number times the imaginary number (square root of -1).

While each may be a bigger set than the last, there is no term for bigger or smaller infinity.

In this case the infinity represented in the biological sense used here says all environments and conditions are equally possible, little details like “on Earth” which make things finite just tend to get in the way of what is otherwise perfectly acceptable Mad Science.

Comment #107662

Posted by Coin on June 23, 2006 4:12 PM (e)

Alann wrote:

While each may be a bigger set than the last, there is no term for bigger or smaller infinity.

As far as mathematics has been concerned ever since Cantor’s work in the late 1800s, actually, there are quite a lot of terms for bigger or smaller infinities (an infinite number of them, as it happens).

For example, two important terms for bigger or smaller infinities are “Aleph-null” and “Aleph-one”. In your “there are an infinite number of” list there, the first four items are the same size as each other, and that size is “aleph-null”. The last three items are the same size as each other, and that size is larger (a larger infinity) than “aleph-null”.

(While we know the size of the real numbers is larger than aleph-null, there is a big messy argument that has been going on in math for about 130 years, about whether or not the size of the real numbers is the same as “aleph-one”. I can’t tell who’s winning.)

These are cardinal infinities, of course. “Infinity” by itself doesn’t really mean much of anything, and its use is maybe to be avoided, because there are so many different kinds of infinities and some of them have so little to do with one another. Really when you get down to it saying that something is “infinite” is like saying that it is “not blue”. What does this tell us?

Comment #107677

Posted by Gray on June 23, 2006 6:13 PM (e)

Coin,

Thanks for the explanation. If you wouldn’t mind indulging the mathematically challenged, I could use a little more clarification. You write,

The way math people look at it, there are the same number … of natural numbers as odd numbers, even though the odd numbers are a subset of the natural numbers. The reason for looking at things this way is that you can line up the natural numbers with the odd numbers (pair 1 with 1, 2 with 3, 3 with 5, 4 with 7…), and the way you’ve lined them up, there’s exactly one natural number for every odd number and exactly one odd number for every natural number. This is what math requires to consider two sets to be the “same size” or same cardinality.

That’s pretty baffling. The comparison with finite number series keeps leading me astray. For any finite series of successive natural numbers, there are more natural numbers than odd numbers. i.e., when members of the two sets are paired, there are “extra” natural numbers. That, combined with a conception of an infinite series as a strange sort of finite one that “keeps going and going,” leads me to conclude that there are “more” natural numbers than odd ones.

What about this: is the idea that the odd numbers are a subset of the natural numbers formally at odds with the idea that both sets have an equal number of members? What is the definition of “subset”?

Comment #107678

Posted by Gray on June 23, 2006 6:29 PM (e)

Addendum to my question: For any sets A and B, where B is a subset of A, it seems that the following two statements are true: (i) A contains every member that B contains, (ii) A contains some members that B does not contain. Don’t (i) and (ii) support the conclusion that A has more members than B?

Comment #107683

Posted by Coin on June 23, 2006 7:10 PM (e)

Gray wrote:

That’s pretty baffling. The comparison with finite number series keeps leading me astray.

It might help keep your mind clear if, just for the moment, you try to think of numbers and quantities as different things.

Gray wrote:

What about this: is the idea that the odd numbers are a subset of the natural numbers formally at odds with the idea that both sets have an equal number of members. What is the definition of “subset”?

“A is a subset of B” just means that every member of A is also a member of B. This means that any two equal sets are subsets of one another– A is always a subset of A.

There’s also a concept of “proper subset” that says that if “A is a proper subset of B”, then every member of A is also a member of B and B contains at least one member that A does not. This is what you were thinking of in your addendum post– the odd numbers are a proper subset of the natural numbers.

But by itself the fact that A is a proper subset of B tells us nothing about sizes. It just says there exists an element which is in B but not in A. That’s it. Sizes are a different issue.

So formally, no, there’s nothing problematic about this, even if intuitively it seems horribly wrong.

Gray wrote:

For any finite series of successive natural numbers, there are more natural numbers than odd numbers. i.e., when members of the two sets are paired, there are “extra” natural numbers. That, combined with a conception of an infinite series as a strange sort of finite one that “keeps going and going,” leads me to conclude that there are “more” natural numbers than odd ones…. Don’t (i) and (ii) support the conclusion that A has more members than B?

The thing is that the way we define the “size” (cardinality) of a set is done not in absolute terms, but only in relation to other sets. This is why removing an element from an infinite set B (and thus creating proper subset A) does not reduce the size of B– size isn’t some kind of absolute counter, it’s something that’s measured, sort of, by cardinal relationships with other sets.

The way we measure this relationship is using functions. Two sets are of equal size if there is a “bijection” between them– this is just a function that maps from A to B such that the function produces exactly one member of B for each member of A, and the function can be reversed to produce exactly one member of A for each member of B. I described this process as “pairing up” in an earlier post, but this is really a very bad way of putting it because it makes you think as if you were walking alongside a wall with a huge group of natural numbers and a huge group of odd numbers and telling them to stand next to each other. In this metaphor, as you walked along the wall you’d think you’ve always got more natural numbers left than odd numbers, because you’re moving through the group of natural numbers “slower”. This is just a metaphor, though, and if you ignore the metaphor and look at things in terms of functions they make much more sense.

The bijection (the mapping, the function) between the natural numbers and the odd numbers is f(x) = 2x - 1. Instead of walking along a wall and lining up numbers, imagine you just stood at the beginning of the wall with all of these numbers with a bullhorn and just yelled “hey, natural numbers! find the odd number equal to yourself times two minus one, and stand next to it!”. This function neatly pairs up every odd number with a natural number and vice versa, even though there’s an infinite quantity of both.

Strange as this is, it’s necessary. Imagine if math had taken a different path, and defined things such that there were “more” natural numbers than odd numbers. Now let’s say you took the set of natural numbers, multiplied everything in the set by 2, and subtracted 1 from everything in the set. This new set is the same “size” as it was before you applied the function f(x)=2x-1, right? It has to be, you didn’t take anything out. But even though this new set is the same “size” as the natural numbers, it’s equal to the odd numbers, and didn’t we just say there are more natural numbers than odd numbers..? This doesn’t work at all.
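That pairing can be written down directly. A minimal Python sketch (names are my own) shows f(n) = 2n - 1 and its inverse pairing every natural number with exactly one odd number, at least on any finite prefix you care to test:

    def f(n):
        # Send the n-th natural number to an odd number.
        return 2 * n - 1

    def g(m):
        # Send an odd number back to its natural-number partner.
        return (m + 1) // 2

    naturals = range(1, 11)
    print([f(n) for n in naturals])             # [1, 3, 5, ..., 19]
    print(all(g(f(n)) == n for n in naturals))  # True: the pairing is reversible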

It may help to read this. Or it may make things worse, I don’t know. Anyway, the important thing to keep in mind is that not everything in math has to have a physical analogy. When we use a word in math, like infinity, we’re not necessarily using it to mean exactly the same thing it would mean in normal english. More often we’re using the word to describe some logical entity which was constructed to have a number of properties that it would be fundamentally necessary for the english-language equivalent to have.

Comment #107690

Posted by Tom Curtis on June 23, 2006 7:55 PM (e)

Gray, there are two definitions of the notion of “bigger than” for quantities. Definition one, which you allude to, is that if A is a proper subset of B, then set B is bigger than A. The other definition is that if every member of A can be uniquely correlated with a member of B, but not every member of B can be uniquely correlated with a member of A, then B is bigger than A. For finite sets, both definitions are equivalent.

For infinite sets, most mathematicians treat the definitions as independent, and the second definition as more fundamental. As Coin has shown, it is easy to show that if you treat the “size” of the set of natural numbers as a quantity, then by the second definition the set of odd numbers has the same size as the set of natural numbers. In fact, by that definition, the set of rational numbers is also the same size, and so also is the set of computable numbers. But the set of real numbers is larger.

Some mathematicians dispute this. They claim that the set of computable numbers is the set of real numbers, and that talk about the size of infinite sets is nonsense. I think they’re right, but I don’t have anywhere near enough mathematical knowledge to justify that opinion. (On the other hand, if they are wrong, Platonism is true, and there is a God, and wishes are horses.)

http://en.wikipedia.org/wiki/Aleph-null
http://en.wikipedia.org/wiki/Constructive_logic
http://en.wikipedia.org/wiki/Computable_number
http://en.wikipedia.org/wiki/Supertask

Comment #107693

Posted by Doc Bill on June 23, 2006 8:04 PM (e)

Where’s Donald?

Alas, another mindless, baseless, clueless creationist tossing out Coulteresque assertions as if they were rare gems and not the paste of wishful thinking.

Good-bye, Donald.

Comment #107694

Posted by 'Rev Dr' Lenny Flank on June 23, 2006 8:05 PM (e)

Donald will be back next month for another drive-by.

Comment #107697

Posted by Sir_Toejam on June 23, 2006 8:17 PM (e)

Donald will be back next month for another drive-by.

and very likely will say exactly the same thing, yet again, regardless of being proven wrong by Pim no less than 3 times, and even Dembski admitting the error in accusing Pim of quote mining.

It’s exactly why i say it simply isn’t worth bothering to respond to him.

heck, just post the links to the previous times he was shown to be wrong and have done with it.

Comment #107702

Posted by Gray on June 23, 2006 8:58 PM (e)

Coin,

At some point you will have to say to me what Socrates says to Glaucon, “You won’t be able to follow me any longer,” but hopefully we’re not there quite yet. So let me take one more stab at this.

If B is a proper subset of A, can there ever be a bijection of A and B? Based on the summary definitions you’ve provided, it seems not. If A contains every member of B, then can’t every member of B be “paired” with the same member in A without remainder in B? But A also contains at least one member that B does not. If all the same members of A and B have been paired, aren’t any members of A that are not contained in B without a “partner”?

Let A be the natural numbers, and B the odd numbers. Every member of B can be paired with the same member in A (1 with 1, 3 with 3, 5 with 5, ad infinitum) without remainder in B. But A also contains all the even numbers that do not have a “partner” in B; all the members of B are already “taken” by the odd members in A.

The wikipedia entry for Hilbert’s Hotel is interesting. I’m actually familiar with the proof for the existence of God that denies the possibility of an actual infinite. It’s called the Kalam cosmological argument (a friend of mine wrote his dissertation on it). At present, the whole business strikes me as a bit nonsensical. I wonder whether that sentiment would disappear with a few graduate level courses in mathematics. Isn’t there a school of mathematics that rejects all of this talk about “infinite” series? The intuitionists?

Comment #107707

Posted by Gray on June 23, 2006 9:24 PM (e)

Tom Curtis wrote:

Definition one, which you allude to is that if A is a proper subset of B, then set B is bigger than A. The other definition is that if every member of A can be uniquely correlated with a member of B, but not every member of B can be uniquely correlated with a member of A, then B is bigger than A. For finite sets, both definitions are equivalent. For infinite sets, most mathemeticians treat the definitions as independant, and the second definition as more fundamental.

Thanks. That’s a helpful distinction. However, I wonder if the natural and odd numbers are the same “size” under the second definition. It seems to me that if the members are not anonymous, so to speak, but are indexed, then perhaps it is not the case that every natural number can be uniquely correlated with an odd number.

Isn’t the way that members are correlated both arbitrary and significant? If, for example, I correlate the first natural with the first odd, the second natural with the second odd, etc., then each natural number would be uniquely correlated with an odd number. If, on the other hand, I correlate only the odd natural numbers with the odd numbers, then the even natural numbers would not be correlated to the odd numbers. Does that make any sense at all?

Comment #107736

Posted by Henry J on June 23, 2006 11:05 PM (e)

Gray,

Re “(i) A contains every member that B contains, (ii) A contains some members that B does not contain. Don’t (i) and (ii) support the conclusion that A has more members than B?”

If both are finite it does - that’s the mathematical definition of “finite set”: that all proper subsets of a set S are smaller than the set itself. If S has a proper subset the same size as itself then S is infinite.

(Note - Proper subset of a set S = subset of S that doesn’t contain all the elements of S.)
(Note - sets S1 and S2 are the same size if there exists a one-to-one mapping between their elements.)

———–

Tom,
What’s a computable number? Does that include all algebraic numbers plus some that aren’t algebraic?

(Note - algebraic number = solution to a single-variable polynomial with integer coefficients.)

Henry

Comment #107740

Posted by Anton Mates on June 23, 2006 11:49 PM (e)

Gray wrote:

If B is a proper subset of A, can there ever be a bijection of A and B? Based on the summary definitions you’ve provided, it seems not. If A contains every member of B, then can’t every member of B be “paired” with the same member in A without remainder in B? But A also contains at least one member that B does not. If all the same members of A and B have been paired, aren’t any members of A that are not contained in B without a “partner”?

Sure, but that just shows that there exists a function between sets–the identity in this case–which is not a bijection. (You can always find such a function unless both sets are the empty set.) That doesn’t mean there isn’t also a function which is a bijection, which is the criterion for equality.

What you’ve demonstrated above, incidentally, is the existence of a surjective mapping from A to B. That proves that the size of B is not greater than the size of A, so it does in itself tell you something about the relation between the two numbers. It just doesn’t tell you whether size(B) is actually less than size(A), or merely equal.

Let A be the natural numbers, and B the odd numbers. Every member of B can be paired with the same member in A (1 with 1, 3 with 3, 5 with 5, ad infinitum) without remainder in B. But A also contains all the even numbers that do not have a “partner” in B; all the members of B are already “taken” by the odd members in A.

You could also pair every member n of A with the member (2n+1) of B; that would take care of every element in A but now B has an element, 1, which is left over.
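Both pairings are easy to spot-check on a finite window of each set; here is a minimal Python sketch (the window size and variable names are arbitrary, and the naturals are taken to start at 1 as above):

naturals = range(1, 21)                     # a finite window onto N
odds     = [m for m in naturals if m % 2 == 1]

# Pairing each odd number with itself inside N: one-to-one, but every
# even natural is left without a partner.
unpartnered_in_N = [n for n in naturals if n % 2 == 0]

# Pairing each natural n with 2n + 1: one-to-one, but now the odd
# number 1 has no partner.
image = {2 * n + 1 for n in naturals}
unpartnered_in_O = [m for m in odds if m not in image]

print(unpartnered_in_N)    # [2, 4, 6, ..., 20]
print(unpartnered_in_O)    # [1]

# Neither leftover settles the size question: the pairing n -> 2n - 1
# misses nothing on either side.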

Isn’t there a school of mathematics that rejects all of this talk about “infinite” series? The intuitionists?

I don’t think so–AFAIK even the hardcore constructivists are still happy with infinite series/sequences, though they’re less inclined to grant them certain properties. I think I count as an intuitionist, and I’m happy with them, anyway. :)

Comment #107743

Posted by Henry J on June 24, 2006 12:12 AM (e)

Re “Isn’t the way that members are correlated both arbitrary and significant?”

Arbitrary, since the question is simply whether or not a one-to-one mapping exists at all. It doesn’t really matter which such mapping gets demonstrated, and also it doesn’t matter if there are other mappings that aren’t one-to-one.

Henry

Comment #107769

Posted by Marek 14 on June 24, 2006 6:36 AM (e)

As for the aleph-null and aleph-one: it was proven that the continuum hypothesis (essentially whether the cardinality of real numbers is aleph-one or higher) is undecidable in standard set theory, so whether you want to accept it or not, you won’t hit any contradictions.

Comment #107779

Posted by Keith Douglas on June 24, 2006 7:41 AM (e)

A computable (real) number is one that can be generated by a Turing machine (or a Markov algorithm, Post system, lambda calculus, etc.). Since there are only countably many Turing machines, there are only countably many such numbers, and hence “most” real numbers are uncomputable.
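The countability claim can be sketched in a few lines of Python: machine descriptions are finite strings over a finite alphabet, and such strings can be listed one after another (the alphabet and the cutoff below are arbitrary choices for illustration):

from itertools import count, product

ALPHABET = "01"   # any finite alphabet will do

def all_strings():
    """Yield every finite string over ALPHABET, shortest first."""
    for length in count(0):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

# Any particular description turns up at some finite position in this
# listing, which is exactly what it means for the set to be countable.
for index, s in zip(range(10), all_strings()):
    print(index, repr(s))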

Comment #107786

Posted by Gray on June 24, 2006 8:31 AM (e)

All right, I may have it. The “size” of a set is not absolute but is relative to another set, and the relation between the two is established by a function that correlates the members of one with members of the other. So any set can be larger, smaller or equal to any other set depending on the function used. Right?

If so, is it the case that the odd numbers are smaller than, larger than, or equal to the natural numbers depending on the function used? That doesn’t sound quite right.

Comment #107813

Posted by Anton Mates on June 24, 2006 11:04 AM (e)

Gray wrote:

All right, I may have it. The “size” of a set is not absolute but is relative to another set, and the relation between the two is established by a function that correlates the members of one with members of the other. So any set can be larger, smaller or equal to any other set depending on the function used. Right?

Er…no. :) Whether one set is larger, smaller or equal to another depends on whether functions with certain properties exist between the sets; it does not depend on exactly what those functions are or which one you’re talking about at the moment.

By analogy, a number is square if (say) there exists an integer whose square is that number. If there were no such integer, it wouldn’t be square; but you don’t have to reference that integer to say it’s square. You don’t have to say “25 is square relative to 5 or -5, but not relative to 7;” 25 is a square number, period.

The sets {1,2,3} and {a,b,c} are equal in size because there exists a one-to-one function from each set into the other. For instance, the function f, with f(1)=a, f(2)=b, f(3)=c; and the function g, with g(a)=1, g(b)=2, g(c)=3. These aren’t the only functions which do the job; you could replace f with F, F(1)=b, F(2)=c, F(3)=a. There are also lots of functions which don’t do the job: for instance, h with h(1)=a, h(2)=a, h(3)=a. But the mere existence of f and g is all you need to say the sets are equal in size.
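In code, for what it’s worth (a tiny Python sketch; the letters stand in for any three distinct objects):

f = {1: "a", 2: "b", 3: "c"}
g = {"a": 1, "b": 2, "c": 3}
h = {1: "a", 2: "a", 3: "a"}

assert len(set(f.values())) == 3 and len(set(g.values())) == 3   # f and g are one-to-one and onto
assert all(g[f[n]] == n for n in f)                              # g undoes f
assert len(set(h.values())) < 3                                  # h does not do the job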

By contrast, take the sets {1,2,3} and N={natural numbers}:

There’s still a one-to-one function from {1,2,3} into N–for instance, f with f(1)=10, f(2)=300, f(3)=1. That proves that the size of {1,2,3} is less than or equal to the size of N.

On the other hand, there is no one-to-one function from N into {1,2,3}. (This is easy to see. Take any function h from N into {1,2,3}. h(1), h(2), h(3) and h(4) cannot all have different values; therefore h is not one-to-one). That proves that the size of N is not less than or equal to the size of {1,2,3}.

Put them together, and they prove that the size of N must be greater than the size of {1,2,3}.
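Both halves of that argument can be checked mechanically; a small Python sketch (names chosen only for illustration):

from itertools import product

small = [1, 2, 3]

# Half one: a one-to-one map from {1,2,3} into the naturals.
f = {1: 10, 2: 300, 3: 1}
assert len(set(f.values())) == len(f)        # no two inputs share an output

# Half two: no map from even the first four naturals into {1,2,3} can be
# one-to-one.  Check all 3**4 candidate assignments of h(1)..h(4).
any_injective = any(
    len(set(values)) == 4                    # four distinct outputs needed
    for values in product(small, repeat=4)
)
print(any_injective)                         # False: the pigeonhole argument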

Comment #107821

Posted by Gray on June 24, 2006 11:40 AM (e)

O.K. So N (natural numbers) and O (odd numbers) are equal in size because there exists at least one function that uniquely correlates all the members of N with all the members of O.

Assuming that I’ve got that right, my difficulty is as follows. For sets with finite numbers of members, if two sets are of equal size, and all of the members of the first set are correlated to members of the second set by a one-to-one function (any one-to-one function), the second set cannot have members that are not correlated to members of the first set. But for sets with infinite numbers of members, that’s not the case.

It’s possible to correlate all of the members of O with members in N, but leave some (an infinite number) of the members of N with no correlate in O. It’s also possible to correlate all of the members of N with members in O, but leave some (an infinite number) of the members of O with no correlate in N. It follows, paradoxically, that it’s possible for a one-to-one function between sets of equal size to leave “extras,” as it were.

Comment #107954

Posted by Henry J on June 24, 2006 6:44 PM (e)

Re “It follows, paradoxically, that it’s possible for a one-to-one function between sets of equal size to leave “extras,” as it were.”

That’s essentially a rephrasing of the definition of infinite set - it can have the same size as some proper subset of itself.

Ergo, don’t expect infinite sets to “act like” finite sets; they don’t.

Henry

Comment #107967

Posted by Gray on June 24, 2006 7:03 PM (e)

Henry J wrote:

That’s essentially a rephrasing of the definition of infinite set – it can have the same size as some proper subset of itself.

Now that I understand what is meant by the claim that two infinite sets are the “same” size or “equal” in size, I am of the opinion that it is a complete misuse of those words. What they are used for in this context has nothing to do with the word’s normal meaning. And while normal usage may be refined for technical applications, the core of the normal meaning cannot be abandoned, much less contradicted. It would be better if mathematicians coined a word (perhaps they already have and we haven’t been using it). At any rate, talk of “equality” or “sameness” of size should be abandoned.

Comment #108126

Posted by SPARC on June 25, 2006 3:45 AM (e)

Stevaroni: OK, I just blew out my irony meter on that one.

For those who haven’t looked, the page is the research intelligent design.org wiki homepage.

At least in my browser, it comes up as…

Sorry, that was my mistake, one slash too many at the end of the URL.

Click here and you will still find a vacuum filled with emptiness

Comment #108169

Posted by William E Emba on June 25, 2006 9:00 AM (e)

Coin wrote:

Basically, if you ever find yourself treating “infinity” like a number, just assume you’re doing something wrong. In equations, infinity is something that should only be dealt with in the limit (and you have to follow the rules that limits bring with them). Infinity is not a number.

Infinity is not a number, except when it is.

In modern mathematics, the word “number” is used as is convenient, so arguing about it is pointless. Infinity can be dealt with directly, or as code for certain kinds of limits. There are numerous situations where treating infinity like a finite number is encouraged, even rigorous. Abraham Robinson’s nonstandard analysis is the most famous instance of this.

For millennia, philosophers owned the concept of infinity, and talked about it ad infinitum, but since Cantor, it’s been discovered that most of what they said was arrant nonsense, or at best of very limited application. Not all philosophers have noticed, however. This is like philosophers claiming relativity is wrong because of Kant. Or denying stellar astrophysics because of Comte.

Comment #108199

Posted by Donald M on June 25, 2006 11:17 AM (e)

Pim snarls:

I encourage you to document

1. That I quote out of context
2. That I create Strawmen
3. That I selectively quote
4. That I misrepresent

Or be known for spreading unsupported assertions.

Fair enough eh Donald. Surprise us with some independent thought. If you need to ask Dembski for guidance, realize his poor performance in this area.

What is it going to be… yes or …
Or do you need to sleep on it for a while.

I already did 1-4. I referenced two additional articles that elaborate on the subject. You can do your own homework. Trying to make it appear as if a summary paragraph out of one article, one that pre-dates the two papers that provide far more detail, as well as ignoring NFL, represents the whole of the argument is simply misleading. You don’t even reference any of these in your reference list. Instead you only reference critiques. Nor do you reference any of the responses that Dembski himself has made to some of his critics. In other words, you’re being very careful to select only what makes the case that you want to make, and completely ignore anything that might call that case into question. This comes directly from the “How to Build A Straw Man in 5 Easy Steps” manual. You see, Pim, in order to refute Dembski’s arguments, you have to consider the actual arguments in total…not the bits and pieces that suit your purpose, which is all you’re doing here.

Comment #108200

Posted by 'Rev Dr' Lenny Flank on June 25, 2006 11:54 AM (e)

Hey Donald, why won’t you answer the simple questions I keep asking you?

What, again, did you say the scientific theory of ID is? How, again, did you say this scientific theory of ID explains these problems? What, again, did you say the designer did? What mechanisms, again, did you say it used to do whatever the heck you think it did? Where, again, did you say we can see the designer using these mechanisms to do … well . . anything?

Or is “POOF!! God — uh, I mean, The Unknown Intelligent Designer — dunnit!!!!” the extent of your, uh, scientific theory of ID …. ?

How does “evolution can’t explain X, Y, or Z, therefore goddidit” differ from plain old ordinary run-of-the-mill “god of the gaps”?

Here’s *another* question for you to not answer, Donald: Suppose in ten years, we DO come up with a specific mutation-by-mutation explanation for how X, Y, or Z appeared. What then? Does that mean (1) the designer USED to produce those things, but stopped all of a sudden when we came up with another mechanism? or (2) the designer was using that mechanism the entire time, or (3) there never was any designer there to begin with.

Which is it, Donald? 1, 2 or 3?

Oh, and if ID isn’t about religion, Donald, then why do you spend so much time bitching and moaning about “philosophical materialism”?

(sound of crickets chirping)

You are a liar, Donald. A bare, bald-faced, deceptive, deceitful, deliberate liar, with malice aforethought. Still.

Comment #108201

Posted by psychologist on June 25, 2006 11:54 AM (e)

Donald, you should have asked Pim for his source if you doubted his quote. Instead you called him a liar. You owe Pim an apology.

Comment #108202

Posted by 'Rev Dr' Lenny Flank on June 25, 2006 11:56 AM (e)

in order to refute Dembski’s arguments

Dembski himself has already given up on his, uh, arguments, Donald. He and his fellow IDers have already declared that there isn’t any scientific theory of ID, probably won’t ever be any, and that he is returning to his “first love”, apologetics. Because, after all, ID isn’t about religion. No sirree Bob.

Didn’t you get that memo, Donald?

Comment #108204

Posted by Barley Zagner on June 25, 2006 11:58 AM (e)

I also don’t think that there is really a theory of intelligent design at the present time to propose as a comparable alternative to the Darwinian theory, which is, whatever errors it might contain, a fully worked out scheme. There is no intelligent design theory that’s comparable. Working out a positive theory is the job of the scientific people that we have affiliated with the movement. Some of them are quite convinced that it’s doable, but that’s for them to prove…No product is ready for competition in the educational world.

–Philip Johnson, 2006

*Go get the reference yourself, DonaldM. Nobody’s obliged to do your research for you.

Comment #108238

Posted by PvM on June 25, 2006 1:22 PM (e)

Donald M wrote:

PvM wrote:

I encourage you to document

1. That I quote out of context
2. That I create Strawmen
3. That I selectively quote
4. That I misrepresent

Or be known for spreading unsupported assertions.

Fair enough eh Donald. Surprise us with some independent thought. If you need to ask Dembski for guidance, realize his poor performance in this area.

What is it going to be… yes or …
Or do you need to sleep on it for a while.

I already did 1-4. I referenced two additional articles that elaborate on the subject. You can do your own homework.

In other words, Donald did not really do 1-4 but referenced two additional articles that elaborate on the topic and expects me to make his arguments to support 1-4. It should be clear by now that Donald M is once again making unsupported accusations. Has he not learned from the past?

Donald M wrote:

Trying to make it appear as if a summary paragraph out of one article, one that pre-dates the two papers that provide far more detail, as well ignoring NFL, represents the whole of the argument is simply misleading.

In order for my argument to be misleading, you would have to show that it fails to address the two papers that followed my references. In addition, if Donald believes that my references were so inadequate, why has Dembski not publicly withdrawn them as insufficient?

Donald M wrote:

You don’t even reference any of these in your reference list. Instead you only reference critiques. Nor do you reference any of the responses that Dembski himself has made to some of his critics. IN other words, you’re being very careful to select only what makes the case the you want to make, and compeltely ignore anything that might call that case into question.

Perhaps Donald M can show us what Dembski has contributed that addresses my arguments? That Dembski has ‘responded’ to his critics is a far cry from Dembski admitting his errors and/or actually addressing his critics’ main objections.

Donald M wrote:

This comes directly from the “How to Build A Straw Man in 5 Easy Steps” manual. You see, Pim, in order to refute Dembski’s arguments, you have to consider the actual arguments in total…not the bits and pieces that suit your purpose, which is all you’re doing here.

That is what you claim and yet you have failed to provide any detailed examples. Come on Donald, do some research and show that by not addressing these two papers I somehow misrepresented Dembski’s argument and/or created a strawman.

You have again made several unsupported accusations… Why is it so hard to defend them, let alone present them with sufficient supporting data? What arguments did I miss, Donald? Show us that you are familiar with Dembski’s own arguments. If you were, I doubt you would be making these accusations.

I understand that Donald may be going through the typical steps of recovery, and denial and anger are high on the list. After all, to find out that ID is without clothes must come as quite a shock to the faithful.

Comment #108245

Posted by PvM on June 25, 2006 1:33 PM (e)

Donald M’s references help further show how Dembski seems to be unfamiliar with the effectiveness of random search under NFL…

Dembski wrote:

The upshot of the No Free Lunch theorems is that averaged over all fitness functions, evolutionary computation does no better than blind search (see Dembski 2002, ch 4 as well as Dembski 2005 for an overview). But this raises a question: How does evolutionary computation obtain its power since, clearly, it is capable of doing better than blind search?

And yet blind search itself performs quite well, as I and others have shown. Dembski clearly ‘does not get it’.
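A back-of-the-envelope simulation makes the point (a minimal Python sketch; the function name and parameters are arbitrary, and it assumes the uniform-prior NFL setting where each sampled point’s cost is effectively an independent uniform draw):

import random

def blind_search_hits_top(q=0.001, k=5000, trials=2000):
    """Estimate how often k random samples land in the best fraction q."""
    hits = 0
    for _ in range(trials):
        if any(random.random() < q for _ in range(k)):
            hits += 1
    return hits / trials

print(blind_search_hits_top())      # observed frequency, roughly 0.99
print(1 - (1 - 0.001) ** 5000)      # the exact value, about 0.993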

Dembski does end up with an excellent question:

The question therefore remains: Insofar as evolutionary computation does better than blind search, what is its source of power? That’s a topic for another paper.

The answer is relatively simple: ‘evolvability’. The details are even more interesting, as they show how neutrality itself is a selectable trait and how self-evolution requires neutrality (Toussaint). The idea that evolution can ‘adapt its variation’ based on fitness helps us understand the answer to Dembski’s question.

What is Dembski’s answer? I mean a scientifically relevant answer… ‘Poof’ just does not do it. His articles on ‘conservation of information’ and ‘searching large spaces’ fail to show much relevance to evolutionary processes. But perhaps Donald can make his case based on these three papers?

Well Donald? Can you?

Comment #108283

Posted by Henry J on June 25, 2006 3:37 PM (e)

Re “Now that I understand what is meant by the claim that two infinite sets are the “same” size or “equal” in size, I am of the opinion that it is a complete misuse of those words.”

Well, the technical term is “cardinality of the set”, or “cardinal number”. Which for finite sets is essentially the same thing as the size of the set, so I don’t generally bother distinguishing the terms. I don’t know if mathematicians limit the use of the word “size” to finite sizes or not.

Henry

Comment #108423

Posted by Anton Mates on June 25, 2006 11:31 PM (e)

Gray wrote:

It’s possible to correlate all of the members of O with members in N, but leave some (an infinite number) of the members of N with no correlate in O. It’s also possible to correlate all of the members of N with members in O, but leave some (an infinite number) of the members of O with no correlate in N. It follows, paradoxically, that it’s possible for a one-to-one function between sets of equal size to leave “extras,” as it were.

Correct. But it’s not really paradoxical unless you consider “a one-to-one function between sets of equal size must always be surjective” to be self-evident, which AFAIK most people don’t.

Note, too, that just about any formalization of “size” or “number of elements” for infinite sets would feature the same oddness. However you formalize set size, a set obviously ought to have the same size as itself; yet you can have a one-to-one function from (say) N to itself which leaves some elements out. f(n)=n+5, for example.
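For instance (a minimal Python check over a finite window, purely illustrative):

f = lambda n: n + 5
window = range(1, 21)
image = {f(n) for n in window}
assert len(image) == len(window)           # one-to-one on this window
print(sorted(set(range(1, 6)) - image))    # [1, 2, 3, 4, 5] are never hit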

Now that I understand what is meant by the claim that two infinite sets are the “same” size or “equal” in size, I am of the opinion that it is a complete misuse of those words. What they are used for in this context has nothing to do with the word’s normal meaning. And while normal usage may be refined for technical applications, the core of the normal meaning cannot be abandoned, much less contradicted. It would be better if mathematicians coined a word (perhaps they already have and we haven’t been using it). At any rate, talk of “equality” or “sameness” of size should be abandoned.

As Henry noted, the formal term is “cardinality.” Which is necessary to use sometimes so that it’s clear you’re talking about “size” as in number of elements, and not “size” as in “extent” or length/width/area/volume. (The latter is formalized as “measure;” that’s the measure theory Robert O’Brien keeps going on about.)

But I doubt people will ever abandon “size” or “number of elements” as a shorthand for cardinality. Why should they? As Henry said, set cardinality is precisely the same thing as size for all finite sets, which is all the “normal meaning” of the latter word covers anyway. Beyond that, cardinality has the following properties for finite and infinite sets alike:

The cardinality of a union of sets is never less than the cardinality of any individual set;
The cardinality of an intersection of sets is never greater than the cardinality of any individual set;
The cardinality of a set is never less than the cardinality of any subset;
Reflexivity: card(A)=card(A);
Transitivity: If card(A) ≤ card(B) and card(B) ≤ card(C), then card(A) ≤ card(C);
If card(A) ≤ card(B) and card(B) ≤ card(A), then card(A) = card(B);

all of which are intuitively properties “size” should have.
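On finite sets, where cardinality is just len(), these are easy to spot-check; a small Python sketch with arbitrarily chosen sets:

A, B = {1, 2}, {1, 2, 3}
C = {"x", "y", "z"}                           # same size as B, disjoint from it

assert len(A | B) >= max(len(A), len(B))      # union never smaller than either set
assert len(A & B) <= min(len(A), len(B))      # intersection never bigger than either set
assert len(A) <= len(B)                       # a subset is never bigger (A is a subset of B)
assert len(B) == len(B)                       # reflexivity
assert len(A) <= len(B) <= len(C)             # ...
assert len(A) <= len(C)                       # ...transitivity, spot-checked
assert len(B) <= len(C) and len(C) <= len(B)  # mutual "no bigger than"...
assert len(B) == len(C)                       # ...forces equal cardinality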

So cardinality, in the opinion of pretty much every math guy I’ve ever asked on the subject, is the ideal way of being able to talk about set size for infinite sets.

Comment #108845

Posted by William E Emba on June 27, 2006 6:16 PM (e)

Gray wrote:

Now that I understand what is meant by the claim that two infinite sets are the “same” size or “equal” in size, I am of the opinion that it is a complete misuse of those words.

Well, you can complain all you like. No one will care.

What they are used for in this context has nothing to do with the word’s normal meaning.

This is rank nonsense. It obviously has something to do with the word’s normal meaning.

And while normal usage may be refined for technical applications, the core of the normal meaning cannot be abandoned, much less contradicted.

The core of the normal meaning has been retained. Tell me, how would you define two disjoint sets, say points on one line versus points on a parallel line, to be the “same size”, except by setting up a correspondence in the first place?

It would be better if mathematicians coined a word (perhaps they already have and we havnt been using it).

The word “equipollent” is out there, but is very rarely used. There’s a reason: the simpler terminology does just fine.

At any rate, talk of “equality” or “sameness” of size should be abandoned.

Why? Because you’re stupid?

Comment #110228

Posted by miriam on July 5, 2006 7:05 PM (e)

Sorry for this