Over at the impressive community blog of Less Wrong, Eliezer Yudkowsky published an interesting article titled “Einstein’s Arrogance.” Check it out. It’s worth a read. One point stands out:
To assign more than 50% probability to the correct candidate from a pool of 100,000,000 possible hypotheses, you need at least 27 bits of evidence (or thereabouts). You cannot expect to find the correct candidate without tests that are this strong, because lesser tests will yield more than one candidate that passes all the tests. If you try to apply a test that only has a million-to-one chance of a false positive (~20 bits), you’ll end up with a hundred candidates. Just finding the right answer, within a large space of possibilities, requires a large amount of evidence. (bolding mine)
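The arithmetic in the quoted passage can be checked directly. A minimal sketch in Python, using only the figures given in the quote itself:

```python
import math

pool = 100_000_000  # candidate hypotheses in the quoted example

# Bits of evidence needed to single out one hypothesis from the pool:
bits_needed = math.log2(pool)        # ≈ 26.6, hence "at least 27 bits"

# A test with a million-to-one false-positive rate carries about 20 bits:
test_bits = math.log2(1_000_000)     # ≈ 19.9

# Expected number of wrong candidates that still pass such a test:
survivors = pool // 1_000_000        # 100 candidates, as the quote says
```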
The argumentation is impressive, but Yudkowsky should note that the pool of possible hypotheses is far greater than 100,000,000: it is infinite. Just as an infinite number of curves can fit two, three, four … up to n points on a graph, with a one-to-one correspondence between the curves and the equations that express them, so can the logical consequences of an infinite number of hypotheses ‘predict’ the results of any finite number of experiments. Therefore, no number of bits of evidence can ever permit assigning a probability of more than 50% to any candidate from the pool, much less to the correct candidate.
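The curve-fitting point can be made concrete: given any polynomial through n data points, adding any multiple of (x − x₁)(x − x₂)…(x − xₙ) yields a different curve through exactly the same points, one new curve for every real number. A toy sketch (the particular points and polynomial are illustrative, not from the post):

```python
xs = [0.0, 1.0, 2.0]   # sample x-coordinates (illustrative)
ys = [1.0, 3.0, 7.0]   # observed values at those points

def f(x):
    # One curve through all three points: 1 + x + x^2
    return 1 + x + x**2

def g(x, c):
    # A whole family of alternatives: f plus c times (x - x1)(x - x2)(x - x3).
    # The extra term vanishes at every data point, so each c gives a distinct
    # curve that fits the data exactly as well as f does.
    prod = 1.0
    for xi in xs:
        prod *= (x - xi)
    return f(x) + c * prod

# Every member of the family agrees with f on the data:
for c in (0.5, -3.0, 100.0):
    assert all(abs(g(x, c) - y) < 1e-9 for x, y in zip(xs, ys))
```

No finite set of observations, then, ever cuts the candidate pool down to a finite number by itself.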
Ah, but there is a way to limit the number of theories in the pool of possible hypotheses to a finite number. The pool is limited, I conjecture, only by the elimination of theories that are incompatible with our background assumptions. And yet any of our background assumptions could be false. Yudkowsky advances the thought-experiment of black boxes that always beep when the properly ordered six numbers are entered, but have a 25% chance of beeping when the combination is wrong. But how does one know that these black boxes always beep when the correct combination is entered? Yes, this is a thought-experiment, but the assumption it encodes is more often than not false in science: our background assumptions, even about the accuracy of black boxes, are unjustified, constantly open to revision, and have historically been revised.
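Taking the boxes’ stipulated behavior at face value for a moment: each beep multiplies the odds in favor of a candidate combination by 4, i.e. carries 2 bits of evidence. A rough sketch of the bookkeeping, assuming the six numbers are distinct so that there are 6! = 720 orderings (that count is my illustration, not a figure from Yudkowsky’s post):

```python
import math

# Stipulated box behavior (exactly the assumption the text questions):
p_beep_correct = 1.0   # box always beeps for the right ordering
p_beep_wrong = 0.25    # box beeps 25% of the time for a wrong ordering

# Likelihood ratio per beep, and the bits of evidence it carries:
likelihood_ratio = p_beep_correct / p_beep_wrong   # 4.0
bits_per_beep = math.log2(likelihood_ratio)        # 2.0

# Picking one ordering of six distinct numbers out of 6! candidates:
orderings = math.factorial(6)                      # 720
bits_needed = math.log2(orderings)                 # ≈ 9.5

# Boxes (beeps) needed before one candidate dominates the pool:
boxes_needed = math.ceil(bits_needed / bits_per_beep)   # 5
```

The whole calculation, note, is conditional on the first two lines being true of the boxes, which is precisely the background assumption at issue.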
Up until the Michelson-Morley experiment (and for some time after), one of our background assumptions was that light traveled through a medium called ‘luminiferous aether.’ Before the background assumptions deductively entailed by the luminiferous aether were rejected, Einstein’s hypothesis was ruled out from swimming in the pool of possible hypotheses, for the two contradict one another about the speed of light: Einstein assumed that the speed of light in a vacuum is constant regardless of reference frame. Everything else follows from this assumption (and the transformations formulated by Lorentz). Therefore, the pool of possible hypotheses in light of our background assumptions is never fixed at some arbitrary number, and the pool may not even contain, at any one time, a correct candidate.
In fact, if we are to work with probabilities, I will pull out the pessimistic meta-induction argument and see how it flies: every previous theory we have rejected rested on false background assumptions. It is highly probable that our current background assumptions are also false; therefore it is highly probable that we exclude a true hypothesis from the pool of possible hypotheses.
If that argument does not sit well, then we can appeal to a more robust interpretation of probability: we have at our disposal an infinite number of hypotheses that equally fit the available results of a finite number of tests. How could we ever claim that a finite pool of hypotheses (only one hundred million?) should include the true hypothesis?
And so on.