Things that succeed teach us little beyond the fact that they have been successful; things that fail provide incontrovertible evidence that the limits of design have been exceeded. Emulating success risks failure; studying failure increases our chances of success. The simple principle that is seldom explicitly stated is that the most successful designs are based on the best and most complete assumptions about failure. (Henry Petroski, Success Through Failure)
Archive for the ‘experiments’ Category
Most murder mysteries begin the same way: a victim is found dead in a room. A detective finds a security camera recording that shows only John Doe entering and leaving the room at the time of the murder. The forensics lab then calls to tell the detective that John Doe’s DNA was found on the rim of a glass in the room.
Q: In light of the DNA on the glass, should the detective now give more credence to the hypothesis that John Doe is the murderer?
One solution to Goodman’s new riddle of induction, proposed by David Lewis and W.V.O. Quine, is that certain languages describe natural properties, which have a special metaphysical status. All things being equal, the evidence will favor a hypothesis expressed in a language that picks out natural properties over one expressed in any other language. The problem of choosing between hypotheses the evidence will favor and hypotheses it will not is thus solved by choosing a hypothesis expressed in a language that uses natural properties. There is, however, a problem with this solution: how can a scientist decide whether a language is using a natural property?
A philosophical problem has the form: I don’t know my way about. (Ludwig Wittgenstein)
Up until the late 19th century, every observation was compatible with Newton’s theory of gravity. All these observations are also compatible with Einstein’s General Theory of Relativity. Two quite different theories were compatible with the same set of observations; therefore, one cannot know one has derived a true theory from observations.
Assume we have a long series of numbers. They go on: 2, 4, 8 … What is the next number in the series?
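The riddle has no unique answer, and that is the point: any continuation whatsoever is consistent with some rule that fits the data seen so far. A minimal sketch in Python (the interpolation helper is mine, purely for illustration):

```python
def lagrange(points):
    """Return the unique polynomial of degree len(points)-1
    passing through the given (x, y) points (Lagrange form)."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Whatever "next number" we pick -- 16, 15, or -100 -- there is a cubic
# that passes through (1,2), (2,4), (3,8) and then that number at x=4.
for y4 in (16, 15, -100):
    p = lagrange([(1, 2), (2, 4), (3, 8), (4, y4)])
    assert all(abs(p(x) - y) < 1e-9
               for x, y in [(1, 2), (2, 4), (3, 8), (4, y4)])
```

Each such polynomial is a ‘law’ that agrees perfectly with the observed series and then diverges, which is just the curve-fitting version of underdetermination.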
Wesley Salmon objects to Popper’s theory of knowledge on the grounds that, contrary to its stated rejection of a principle of induction, in order to explain how one can rationally decide between competing unfalsified theories, it requires the adoption of a principle of induction. The advice to an applied scientist or engineer to act as if the best-tested theories are probably true and the untested theories are probably false, though no doubt excellent advice, does not have any claim to be dubbed ‘rational’ unless a pragmatic principle of induction is adopted.
If the applied scientist’s choice is guided by the best-tested scientific theories available to him, then it appears that he is assuming that what was successful in the past will remain successful in the future. This would be an assumption rejected by Popper, for it employs the principle of induction. However, if a scientist, following Popper’s theory of knowledge, renounces a principle of induction, then he is not allowed to say that ‘future unobserved events will resemble past observed events.’
Over at the impressive community blog of Less Wrong, Eliezer Yudkowsky published an interesting article titled “Einstein’s Arrogance.” Check it out. It’s worth a read. One point stands out:
To assign more than 50% probability to the correct candidate from a pool of 100,000,000 possible hypotheses, you need at least 27 bits of evidence (or thereabouts). You cannot expect to find the correct candidate without tests that are this strong, because lesser tests will yield more than one candidate that passes all the tests. If you try to apply a test that only has a million-to-one chance of a false positive (~20 bits), you’ll end up with a hundred candidates. Just finding the right answer, within a large space of possibilities, requires a large amount of evidence. (bolding mine)
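The arithmetic in the quoted passage checks out; a quick sketch, using only the figures from the quote itself:

```python
import math

pool = 100_000_000  # hypotheses in Yudkowsky's example

# Bits of evidence needed to single out one candidate: log2 of the pool.
bits_needed = math.log2(pool)  # ~26.6, hence "at least 27 bits"

# A test with a million-to-one false-positive rate carries about 20 bits.
false_positive_rate = 1 / 1_000_000
bits_per_test = math.log2(1 / false_positive_rate)  # ~19.9

# Expected number of wrong candidates that still pass such a test:
survivors = pool * false_positive_rate  # 100 candidates
```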
The argumentation is impressive, but Yudkowsky should note that the pool of possible hypotheses is far greater than 100,000,000 — it is infinite. Just as an infinite number of curves can fit two, three, four … up to n points on a graph, and there is a one-to-one correspondence between these curves and the equations that express them, so can the logical consequences of an infinite number of hypotheses ‘predict’ the results of any finite number of experiments. Therefore, no number of bits of evidence can ever permit assigning a probability of more than 50% to any candidate from the pool, much less to the correct candidate.
Ah, but there is a way to limit the number of theories in the pool of possible hypotheses to a finite number. The only way the pool is limited is, so I conjecture, by the elimination of theories that are incompatible with our background assumptions. And yet, any of our background assumptions could be false. Yudkowsky advances the thought-experiment of black boxes that always beep when the properly-ordered six numbers are entered, but have a 25% chance of beeping when the combination is wrong. But how does one know that these black boxes always beep when the properly-ordered six numbers are entered? Yes, this is a thought-experiment, but it is an assumption that is more often than not false in science: our background assumptions, even about the accuracy of black boxes, are unjustified, constantly open to revision, and have historically been revised.
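For what it’s worth, the arithmetic internal to the thought-experiment is simple once its assumptions are granted (a beep is certain for the right combination, 25% likely for a wrong one); the sketch below only restates those assumed figures:

```python
import math

p_beep_given_right = 1.0   # assumed by the thought-experiment
p_beep_given_wrong = 0.25  # assumed by the thought-experiment

# Each beep multiplies the odds in favor of "right combination" by the
# likelihood ratio, i.e. carries log2(1/0.25) = 2 bits of evidence.
likelihood_ratio = p_beep_given_right / p_beep_given_wrong  # 4.0
bits_per_beep = math.log2(likelihood_ratio)                 # 2.0
```

The catch is precisely the one raised above: the 2 bits are conditional on the boxes behaving as advertised, and if that background assumption is itself open to revision, each beep carries strictly less evidence than the calculation claims.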
Up until the Michelson-Morley experiment (and for some time after), one of our background assumptions was that light traveled through a medium called ‘luminiferous aether.’ Before the background assumptions deductively entailed by luminiferous aether were rejected, Einstein’s hypothesis was ruled out from swimming in the pool of possible hypotheses, for the two contradict one another about the speed of light: Einstein assumed that the speed of light in a vacuum was constant regardless of reference frame. Everything else follows from this assumption (and some mathematical equations formulated by Lorentz). Therefore, the pool of possible hypotheses in light of our background assumptions is never fixed at an arbitrary number, and this pool may not even contain at any one time a correct candidate.
In fact, if we are to work with probabilities, I pull out the pessimistic meta-induction argument and see how it flies: every previous theory we have rejected rested on false background assumptions. It is highly probable that our current background assumptions are false, therefore it is highly probable that we exclude from the pool of possible hypotheses a true hypothesis.
If that argument does not sit well, then we can appeal to a more robust interpretation of probability: we have an infinite number of hypotheses at our disposal that equally fit the available results of a finite number of tests. How could we ever claim that a finite number of hypotheses (only one hundred million?) should include the true hypothesis?
And so on.
Larry Laudan’s well-known paper “The Demise of the Demarcation Problem” has been republished several times in several volumes. The most readily available copy I could find was in “Physics, Philosophy, and Psychoanalysis: Essays in Honor of Adolf Grünbaum.” The paper is worth reading for Laudan’s historical analysis of the demarcation problem, but two points in the essay stand out as supremely lackluster, especially for Laudan.
The problem: if a scientist abandons theory A after deciding that it does not stand up to criticism (say, the theory fails a crucial experiment), the scientist could make the wrong choice. Theory A could very well be true, or be more approximately true (have more verisimilitude), than the replacement theory B.
Why is this a problem?
1. The crucial experiment could produce a false positive, so that the scientist rejects the theory rather than rejecting the result of the test. Naturally, the scientist’s replacement theory B would have less verisimilitude.
2. The crucial experiment produces a true outcome, but theory B is more approximately true than theory A over this small range; however, theory B has overall less verisimilitude than theory A.
This problem applies to any number of crucial experiments: a scientist may abandon a theory with a high degree of objective verisimilitude because he mistakenly thinks it has a low degree of verisimilitude.
The set of performed crucial experiments will be very small, smaller than the set of all crucial experiments available to the scientist at any one time, which in turn will be very, very small compared to the set of all possible crucial experiments. This further assumes that the results of the tests are easily decidable.
Think of it this way: after performing a crucial experiment, the scientist has insufficient reasons, which amount to nothing. Now just keep adding additional insufficient reasons. What does the scientist have? Nothing.
What have we learned? No amount of evidence can raise the objective probability of future success.
Assume that 1 is not an immediate problem. All results of tests are conclusive. The problem still remains, and appears to be far more robust and serious for the scientist than Quine’s problem. Now, how do we deal with 2?
One solution is to tentatively reject A and adopt B. After all, they’re only theories. Truth takes a back seat to coherence, but this rule of operation is the negation of Quine’s holism and goes against Popper’s claim that scientists aim at increasing the verisimilitude of scientific theories. If anything, it best approximates van Fraassen’s position of empirical adequacy.
‘Faith’ is often taken to be acceptance of a theory on neither logical nor empirical grounds. This is little more than a simultaneous disparagement of theories that are not logically or empirically grounded and an assumption that such grounding is possible.
It is impossible, so I conjecture, to ground anything. If this is the case, then according to this description of faith, all theories are equally faith-based. That doesn’t seem right. At this point, most people take this as a reductio of the conjecture that grounding is impossible. I can intuitively tell a scientific theory apart from a religious one, they might say. Therefore, some theories are grounded. The nature of grounding is then examined in detail.
Duncan Pritchard, who holds a chair in epistemology at the University of Edinburgh, published What is This Thing Called Knowledge? some years ago. He has three textbooks, two published books on epistemology, and approximately fifty journal articles to his name. Let me make this clear: Pritchard is no first-year undergrad at a community college. And yet, his What is This Thing Called Knowledge? has a short section on Popper’s response to the problem of induction that is … shameful. Just shameful.
Over the years, shuttle managers had treated each additional debris strike not as evidence of failure that required immediate correction, but as proof that the shuttle could safely survive impacts that violated its design specifications. (Lee Hotz, Houston, you have a problem)
Teacher: Previously, we touched on how non-scientific statements play a bigger role than Popper first acknowledged. Gamma, you said yesterday that you disagreed with Sigma’s description of the scientific process?
I go to a wedding and I miss a gigantic explosion in the blogosphere over Sir Harold Kroto’s Nobel Laureate lecture. Eh, I’ve missed worse things.
Andrew Brown at The Guardian has an adequate–but far from complete–drubbing of Kroto’s proto-positivist claim that “Science is the only philosophical construct we have to determine TRUTH with any degree of reliability.”
PZ Myers disagrees with Brown, but I’m not surprised. After reading him for a few years, I’ve found that he comes off as a genuinely naïve Popperian: “If someone were to say something truly false and giggleworthy, like for instance, ‘all cats are black,’ what I’d do is go out and find a Siamese and a white Persian and wave them in his face. Isn’t that obvious?” After Quine, it isn’t so obvious anymore. PZ is a scientist, and scientists aren’t often paid to think about epistemology, so I won’t hold it against him. Only through a critical discussion can we come to an agreement, tentative though it may be, about things like the color of cats–and yet this agreement is forever provisional. If someone were to point out this distinction in private, PZ would probably temper his initial statement, but headlines sell papers.
Kroto, Myers, and Brown all come off thinking that science is directed at establishing claims. I am apparently the odd man out: my point is not to lift other traditions up to science’s level, but to deny science the epistemic privilege Kroto and PZ grant it. There is no way to reliably determine the truth.
That said, we can choose to prefer science over other ‘ways of knowing’ for the same reason we can choose to prefer a theory that has survived criticism over one that has not: while its past success at solving our problems provides no ‘good reasons’ for favoring science, the failures of alternative ‘ways of knowing’ are sufficient to provisionally adopt what remains.
There are two ways that a theorist goes astray: (1) The devil leads him by the nose with a false hypothesis. (For this he deserves our pity) (2) His arguments are erroneous and sloppy. (For this he deserves a beating). (Einstein, letter to Lorentz, The Collected Papers of Albert Einstein, Princeton University Press, (Princeton, NJ, 1987-2006), volume 8A, p. 88)
There is a significant difference between what I will call ‘negative’ and ‘positive’ thinking. Positive thinking rests on the assumption that a solution’s past success (the ‘is’) guarantees or increases the probability of the solution’s future success (the ‘ought’): past success ought to show future success. Negative thinking, however, does not run into the is/ought problem: if a universal statement contradicts an existential statement, and the existential statement corresponds with the facts, then the existential statement is false.
Not many people have heard of Alfred North Whitehead’s (yes, the coauthor of the Principia Mathematica!) 1922 theory of gravitation. It’s an interesting theory, not just for its content, but for its historical significance: for the longest time, Einstein’s and Whitehead’s theories of gravitation made the same predictions “not only for the three classic tests of light bending, gravitational redshift and the precession of the perihelion of Mercury, but also for the Shapiro time delay effect” (see Gary Gibbons, On the Multiple Deaths of Whitehead’s Theory of Gravity), and consequently both theories were equally corroborated by the data.
Through the 1920s, ’30s, and ’40s, children frequently suffered heart attacks after being administered anesthetic. As Mosher says,
A thymic death is one of the supreme tragedies of surgery. An apparently healthy child dies during the administration of an anesthetic, during or after an uncomplicated tonsil and adenoid operation, or, as recently happened, during a simple circumcision. Again, as reported by one of our medical examiners, a child was standing on the edge of the sidewalk. A runaway horse dashed by and the child dropped dead. At autopsy the condition known as status lymphaticus was found; that is, there was an enlarged thymus and a hypertrophy of all the lymphoid structures of the alimentary canal … This slight pathology was all that was found to explain the unexpected death. (Harris P. Mosher, “An Original Communication,” “A Clinical and Preoperative Study of the Thymus in Children of the Tonsil and Adenoid Age,” The Laryngoscope Vol.36. Jan. 1926.)
Upon examination of the children’s bodies, the thymus was frequently found to be larger than expected. Everything else was normal. Since the thymus sits close to the heart, doctors decided to routinely irradiate the thymus in children to shrink it.
From 1924 to 1946, it was the policy of the Massachusetts Eye and Ear Infirmary in Boston to apply prophylactic irradiation in every case in which an “enlarged” thymus gland was diagnosed in infancy. … Whenever the width of the superior mediastinum was at least half the width of the heart the gland was characterized as `enlarged’ or `suspicious,’ and the child was given radiation treatment… (M.L. Janower & O.S. Miettenen, “Neoplasms after Childhood Irradiation of the Thymus Gland,” Journal of the American Medical Association Vol.215: 753. 1971.)
It turns out that there was no ‘slight pathology’ of an enlarged thymus: medical cadavers actually had smaller-than-average glands. Chronic stress shrinks the thymus, and these men and women had been under extreme stress before death. For one hundred and fifty years, cadavers had been collected from poorhouses; these were people near death without access to proper medical care. The auxiliary hypothesis “Children that died immediately after being administered anesthetic have an enlarged thymus” was wrong–dead wrong.
All the evidence corroborated status lymphaticus as a cause of heart attacks, and yet it also corroborated the theory that doctors were misapplying anesthetic to children. No one thought to figure out the correct proportion for children. Thousands of children died.
What are we wrong about right now?
As the problem presented itself to us there were three possibilities. There might be no deflection at all; that is, light might not be subject to gravitation. There might be a ‘half-deflection,’ signifying that light was subject to gravitation, as Newton had suggested, and obeyed the simple Newtonian law. Or there might be a ‘full deflection,’ confirming Einstein’s instead of Newton’s law. I remember Dyson explaining all of this to my companion Cottingham, who gathered the main idea that the bigger the deflection, the more exciting it would be. ‘What will it mean if we get double the deflection?’ ‘Then,’ said Dyson, ‘Eddington will go mad, and you will have come home alone.’ (S. Chandrasekhar, Am. J. Phys. 47, 212 (1979))