Posts Tagged ‘albert einstein’

Understanding Reality

In critical rationalism on 29/08/2011 at 6:39 am

Physical concepts are free creations of the human mind, and are not, however it may seem, uniquely determined by the external world. In our endeavour to understand reality, we are somewhat like a man trying to understand the mechanism of a closed watch … He will never be able to compare his picture with the real mechanism, and he cannot even imagine the possibility or the meaning of such a comparison. But he certainly believes that, as his knowledge increases, his picture of reality will become simpler and … explain a … wider range of his sensuous impressions. He may also believe in the existence of the ideal limit of knowledge and that it is approached by the human mind. He may call this limit the objective truth. (Albert Einstein, The Evolution of Physics, Simon and Schuster, New York, 1938, p. 33)

//

Trust

In critical rationalism, irrationalism, skepticism on 26/08/2011 at 2:39 pm

People accept without reflection the ideas, fads, styles, and tastes of their times. Everyone is subject to this problem, even those who harp on it.

Why do we dismiss the stories printed in the National Enquirer and accept the articles printed in Scientific American? Is the difference in the presentation? Are we clued in to the problems in trusting the National Enquirer after seeing the sensationalistic headlines and poor typesetting?

If so, we’re begging the question, since we are using the guilty verdict as part of the prosecution. It sounds like a matter of taste to prefer Scientific American for its excellent formatting. And what are we to make of the National Enquirer breaking the John Edwards scandal?


What’s Day is Night

In experiments, induction on 23/07/2011 at 3:23 am

Over at the impressive community blog Less Wrong, Eliezer Yudkowsky published an interesting article titled “Einstein’s Arrogance.” Check it out; it’s worth a read. One point stands out:

To assign more than 50% probability to the correct candidate from a pool of 100,000,000 possible hypotheses, you need at least 27 bits of evidence (or thereabouts).  You cannot expect to find the correct candidate without tests that are this strong, because lesser tests will yield more than one candidate that passes all the tests.  If you try to apply a test that only has a million-to-one chance of a false positive (~20 bits), you’ll end up with a hundred candidates.  Just finding the right answer, within a large space of possibilities, requires a large amount of evidence. (bolding mine)
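For what it’s worth, the arithmetic in the quoted passage checks out. Here is a quick back-of-the-envelope check in Python (the numbers are Yudkowsky’s; the snippet is only a sketch):

    import math

    pool = 100_000_000                    # Yudkowsky's pool of candidate hypotheses

    # Bits needed to single out one candidate from the pool:
    bits_needed = math.log2(pool)
    print(f"log2({pool:,}) = {bits_needed:.1f} bits")   # ~26.6, i.e. "at least 27 bits"

    # A test with a million-to-one false-positive rate carries about 20 bits,
    # so roughly pool / 2^20 wrong candidates can still be expected to pass it:
    test_bits = math.log2(1_000_000)
    print(f"a ~{test_bits:.0f}-bit test leaves about {pool / 2**test_bits:.0f} candidates")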

The argumentation is impressive, but Yudkowsky should note that the pool of possible hypotheses is far greater than 100,000,000 — it is infinite. Just as infinitely many curves can be drawn through any two, three, four … up to n points on a graph, each curve expressed by its own equation, so the logical consequences of infinitely many hypotheses can ‘predict’ the results of any finite number of experiments. Therefore, no number of bits of evidence can ever permit assigning a probability of more than 50% to any candidate from the pool, much less to the correct candidate.
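To make the curve-fitting analogy concrete, here is a toy sketch (the data points and the constants c are made up purely for illustration): take the polynomial through a finite set of points, add any multiple of a polynomial that vanishes at those points, and you get a different curve that fits the same data exactly.

    import numpy as np

    # A finite data set: n points on a graph (made-up values)
    xs = np.array([0.0, 1.0, 2.0, 3.0])
    ys = np.array([1.0, 2.0, 0.5, 4.0])

    # The unique polynomial of degree n-1 through the n points
    base = np.polyfit(xs, ys, deg=len(xs) - 1)

    # A polynomial that vanishes at every data point: (x - x0)(x - x1)...(x - x_{n-1})
    vanishing = np.poly(xs)

    # Adding any multiple of it gives a different curve through the very same points
    for c in (0.0, 1.0, -2.5):
        curve = np.polyadd(base, c * vanishing)
        assert np.allclose(np.polyval(curve, xs), ys)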

Ah, but there is a way to limit the pool of possible hypotheses to a finite number. The only way to limit the pool, so I conjecture, is to eliminate theories that are incompatible with our background assumptions. And yet any of our background assumptions could be false. Yudkowsky advances the thought-experiment of black boxes that always beep when the six numbers are entered in the proper order, but have a 25% chance of beeping when the combination is wrong. But how does one know that these black boxes always beep for the properly ordered combination? Yes, this is a thought-experiment, but the analogous assumption is more often than not false in science: our background assumptions, even about the accuracy of black boxes, are unjustified, constantly open to revision, and have historically been revised.
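For concreteness, here is a small simulation of the beeping-box setup as described above. The 25% false-positive rate and the ‘always beeps for the correct combination’ clause come from that description; the pool size and number of tests are made up for illustration. Notice how the whole calculation leans on the ‘always beeps’ clause.

    import random

    random.seed(0)

    POOL = 1_000_000        # made-up number of candidate combinations
    FALSE_POSITIVE = 0.25   # a wrong combination still triggers a beep 25% of the time
    correct = 0             # the one true combination

    candidates = set(range(POOL))
    for test in range(1, 11):
        # A candidate survives if the box beeps for it: always for the correct one
        # (the very background assumption questioned above), with probability 0.25 otherwise.
        candidates = {c for c in candidates
                      if c == correct or random.random() < FALSE_POSITIVE}
        print(f"after test {test}: {len(candidates)} candidates remain")

    # Each test cuts the wrong candidates to about a quarter, i.e. supplies ~2 bits of evidence.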

Up until the Michelson-Morley experiment (and for some time after), one of our background assumptions was that light traveled through a medium called ‘luminiferous aether.’ Before the background assumptions entailed by the luminiferous aether were rejected, Einstein’s hypothesis was ruled out from swimming in the pool of possible hypotheses, for the two contradict one another about the speed of light: Einstein assumed that the speed of light in a vacuum is constant regardless of reference frame, and everything else follows from this assumption (together with the transformations formulated by Lorentz). Therefore, the pool of possible hypotheses permitted by our background assumptions is never fixed at some arbitrary number, and at any one time it may not even contain a correct candidate.

In fact, if we are to work with probabilities, let me pull out the pessimistic meta-induction argument and see how it flies: every theory we have previously rejected rested on false background assumptions. It is highly probable that our current background assumptions are also false; therefore it is highly probable that we are excluding a true hypothesis from the pool of possible hypotheses.

If that argument does not sit well, then we can appeal to a more robust interpretation of probability: we have an infinite number of hypotheses at our disposal that equally fit the available results of a finite number of tests. How could we ever claim that a finite number of hypotheses (only one hundred million?) should include the true hypothesis?

And so on.

//

Einstein

In experiments on 07/07/2011 at 11:32 am

There are two ways that a theorist goes astray: (1) The devil leads him by the nose with a false hypothesis. (For this he deserves our pity.) (2) His arguments are erroneous and sloppy. (For this he deserves a beating.) (Einstein, letter to Lorentz, in The Collected Papers of Albert Einstein, vol. 8A, Princeton University Press, Princeton, NJ, 1987–2006, p. 88)

//

Philosophy of Science v. Epistemology

In duhem, induction, quine on 23/06/2011 at 10:41 am

In light of Einstein, Rutherford, and Maxwell, if we assume that the knowledge-acquiring process S employs in everyday affairs is distilled or refined in scientific practice, then the problem of induction and the Duhem-Quine thesis should long ago have put to rest any theory of knowledge which claims that S can know theory 1 has greater objective verisimilitude than theory 2.

//

“Falsifiability”

In critical rationalism, popper on 18/06/2011 at 12:31 am

Recently, a friend of mine sent me this criticism of falsifiability, published on Edge.org in 2008 by Rebecca Goldstein, the wife of Steven Pinker and the author of a few decent (so I hear) books. Upon reading it, I knew I had to write up a good ‘fisk’ of the criticism, seeing as it provides a good opportunity to clear up some misconceptions.


Newton & Einstein

In induction, popper on 15/06/2011 at 4:05 pm

“… (1) Newton’s theory is exceedingly well corroborated. (2) Einstein’s theory is at least equally well corroborated. (3) Newton’s and Einstein’s theories largely agree with each other; nevertheless, they are logically inconsistent with each other because, as for instance in the case of strongly eccentric planetary orbits, they lead to conflicting predictions. (4) Therefore, corroboration cannot be a probability (in the sense of the calculus of probabilities).

“… The proof is simple. If corroboration were a probability, then the corroboration of ‘Either Newton or Einstein’ would be equal to the sum of the two corroborations, for the two logically exclude each other. But as both are exceedingly well corroborated, they would both have had a greater probability than ½ (½ would mean: no corroboration). Thus, their sum would be greater than 1, which is impossible. It follows that corroboration cannot be a probability.

“… It would be interesting to hear what the theoreticians of induction … who identify the degree of corroboration (or the ‘degree of rational belief’) with a degree of probability — would have to say about this simple refutation of their theory.” (Popper, Karl. 2009. The Two Fundamental Problems of the Theory of Knowledge, xxivn. New York: Routledge.)
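Spelled out, the quoted argument is just a few lines of the probability calculus. Writing N for Newton’s theory and E for Einstein’s, and supposing for reductio that corroboration is a probability P:

    \begin{aligned}
    P(N \lor E) &= P(N) + P(E) && \text{(the two theories are mutually exclusive)} \\
    P(N) &> \tfrac{1}{2}, \quad P(E) > \tfrac{1}{2} && \text{(both are exceedingly well corroborated)} \\
    \Rightarrow \quad P(N \lor E) &> 1 && \text{(impossible: no probability exceeds 1)}
    \end{aligned}

Hence corroboration cannot be a probability, just as the quoted passage concludes.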

//

Instrumentalism

In skepticism on 15/06/2011 at 2:51 pm

Things that succeed teach us little beyond the fact that they have been successful; things that fail provide incontrovertible evidence that the limits of design have been exceeded. Emulating success risks failure; studying failure increases our chances of success. The simple principle that is seldom explicitly stated is that the most successful designs are based on the best and most complete assumptions about failure. (Henry Petroski, Success Through Failure)


The problem with an instrumentalist solution, at least as I see it, is that scientific theories aren’t employed as instruments.

We keep our instruments even though they have limitations. We use a hammer for driving nails into wood; we wouldn’t use it as a screwdriver, would we? But while engineers may go on using a false but limited theory, this doesn’t happen in science. When a theory is found to have a limitation, a scientist searches for a better theory, one that overcomes that limitation. Do you think scientists ought to behave like engineers and stop superseding theories with broader ones?

Thus instrumentalism doesn’t take the progress of science seriously; indeed, that progress makes no sense by its lights, since instrumentalism is not interested in the quest for truth.

//