Just found this review. It’s worth checking out.
Wesley Salmon objects to Popper’s theory of knowledge on the grounds that it requires a principle of induction after all, despite its stated rejection of one: in order to explain how one can rationally decide between competing unfalsified theories, it must adopt such a principle. The advice to an applied scientist or engineer to act as if the best-tested theories are probably true and the untested theories probably false, though no doubt excellent advice, has no claim to be dubbed ‘rational’ unless a pragmatic principle of induction is adopted.
If the applied scientist’s choice is guided by the best-tested scientific theories available to him, then it appears that he is assuming that what was successful in the past will remain successful in the future. This would be an assumption rejected by Popper, for it employs the principle of induction. However, if a scientist, following Popper’s theory of knowledge, renounces a principle of induction, then he is not allowed to say that ‘future unobserved events will resemble past observed events.’
Here is a copy of the first part of an unfinished biography of Karl Popper, by William W. Bartley: Rehearsing a Revolution [.pdf]. I suggest reading pp. 50-71, for they deal with Popper’s trouble with Kant and Kierkegaard.
Two points of interest I’ve picked up from talking to friends of Bartley: (1) he was either bisexual or gay (this puts his Wittgenstein in a different light); (2) he took up writing the — rather poor, in my opinion — hagiography of Werner Erhard in order to live comfortably in California with his long-time companion.
Peter Singer’s 1974 article in the New York Review of Books, Discovering Karl Popper, is extremely favorable toward Popper’s philosophy of science–except for three paragraphs in the middle, which are highly informed criticism. I’ve reproduced them below along with some limited comments in light of that criticism.
A man will be imprisoned in a room with a door that’s unlocked and opens inwards; as long as it does not occur to him to pull rather than push. (Ludwig Wittgenstein)
In Wittgenstein’s posthumous Philosophical Investigations he argues that the meaning of a term is its use within language: each ‘linguistic universe’ has its own rules. Content cannot be separated from the criteria by which it is judged, and those criteria are never inter-cultural, only sub-cultural. Each discipline or ‘language game’ has its own standards, which cannot be reduced to other standards or principles. The task of the philosopher is then to describe and clarify standards, not to judge, defend, or criticize proposals laid out within a ‘language game.’ Criticism can only point out the misuse of language, or violations of the rules.
Arguments and judgments do not cross disciplines, for they exist only in reference to the criteria of the game in which they are made. This leads to relativism: there is no rational choice to be made between competing games, for all games are equally defensible.
Larry Laudan’s well-known paper “The Demise of the Demarcation Problem” has been republished several times in several volumes. The most readily available copy I could find was in “Physics, Philosophy, and Psychoanalysis: Essays in Honor of Adolf Grünbaum.” The paper is worth reading for Laudan’s historical analysis of the demarcation problem, but two points in the essay stand out as supremely lackluster, especially for Laudan.
The biggest problems in critical rationalism, expressed as broadly as possible: (1) a preoccupation with historical scholarship on Popper rather than new work that builds on him; (2) a lack of sustained criticism and revision of critical rationalism itself.
The same could be said of most schools, and I do not intend to target anyone directly. That said, the work of historians of science and philosophy is invaluable, giving unseen insights into brilliant minds; however, Popper should be valued only because he, like many before him, paved the way for others. Of course, 1 is often not a problem, for younger generations often have little understanding of the full implications of critical rationalism, and good historical scholarship can only help. Even so, all things being equal, 1 is not as important as 2 in the long run, for critical rationalism will stagnate without constant criticism and revision in light of that criticism.
What can be done to advance critical rationalism? It is still in a savage state.
In Faith And Political Philosophy: The Correspondence between Leo Strauss and Eric Voegelin, 1934-1964 comes the following dialogue between Strauss and Voegelin:
Why should scientists prefer theories that are simple and not complex? There is a very simple answer to this complex problem: Leibniz points out in section VI of his Discourse on Metaphysics that a theory ought to be simpler than the data it sets out to explain, otherwise it does not explain anything. A theory becomes vacuous if an arbitrarily complex mathematical statement is permitted to count as a theory, for one can always construct a theory to fit the data, even if the data is random.
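Leibniz’s point can be illustrated with a small sketch. The data, the number of points, and the use of polynomial interpolation are all my own illustrative assumptions: a ‘theory’ with as many free parameters as there are data points fits even pure noise perfectly, and so rules nothing out.

```python
# A "theory" as complex as its data explains nothing: the unique degree-5
# polynomial through 6 points reproduces ANY 6 observations exactly,
# even when those observations are random noise.
import random

random.seed(0)
xs = [float(i) for i in range(6)]
ys = [random.random() for _ in xs]  # random "observations"

def interpolate(x, xs, ys):
    """Lagrange interpolation: evaluate the unique degree-(n-1)
    polynomial through the n points (xs[i], ys[i]) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# The "theory" fits every data point with zero residual -- but it was
# guaranteed to do so no matter what the data were, so the fit tells us
# nothing about the world.
assert all(abs(interpolate(x, xs, ys) - y) < 1e-9 for x, y in zip(xs, ys))
```

The fit succeeds trivially, whatever the data; that is exactly why such a ‘theory’ is vacuous.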
There’s a lovely debate that’s been around for some time between Christopher Hitchens and William Lane Craig at Biola University. I recommend that you watch it — or watch parts of it, namely Christopher Hitchens’s turns at the podium. William Lane Craig is an awful speaker. Christ must have granted Hitchens a silver tongue and Craig a wooden ear. If you can bear through Craig’s turn at the microphone, then you’ll witness a great ‘debate’ between a philosopher-hack and a public intellectual.
What I find most interesting about the debate — besides the subject of ‘Does God exist?’ — is Hitchens and Craig’s respective debating styles. I will start with Hitchens:
Positive theories of knowledge assert that, if they are correct, future guesses are guaranteed, at least probabilistically, to show a marked improvement in objective verisimilitude, not just increased empirical adequacy. If this were true, it would be an immense boon for everyone. Logical negativism rules such a possibility out a priori; indeed, it originates in the supposed failure of all positive theories of knowledge. Therefore, the greatest argument one can muster against this dogma in logical negativism is to demonstrate that some kind of necessary increase in verisimilitude occurs when replacing an old theory with a new one.
Teacher: Previously, we touched on how non-scientific statements play a bigger role than Popper first acknowledged. Gamma, you said yesterday that you disagreed with Sigma’s description of the scientific process?
I go to a wedding and I miss a gigantic explosion in the blogosphere over Sir Harold Kroto’s Nobel Laureate lecture. Eh, I’ve missed worse things.
Andrew Brown at The Guardian has an adequate–but far from complete–drubbing of Kroto’s proto-positivist claim that “Science is the only philosophical construct we have to determine TRUTH with any degree of reliability.”
PZ Myers disagrees with Brown, but I’m not surprised. After reading him for a few years, he comes off as a genuine naïve Popperian, saying “If someone were to say something truly false and giggleworthy, like for instance, “all cats are black,” what I’d do is go out and find a Siamese and a white Persian and wave them in his face. Isn’t that obvious?” After Quine, it isn’t so obvious anymore. Isn’t that obvious? PZ is a scientist, and scientists aren’t often paid to think about epistemology, so I won’t hold it against him. Only through a critical discussion can we come to an agreement, tentative though it may be, about things like the color of cats–and yet this agreement is forever provisional. If someone were to point out this distinction in private, PZ would probably temper his initial statement, but headlines sell papers.
Kroto, Myers, and Brown all come off thinking that science is directed at establishing claims. I am apparently the odd man out: rather than lifting other traditions up to science’s level, I deny that science has the epistemic privilege Kroto and PZ think it has: there is no way to reliably determine the truth.
That said, we can choose to prefer science over other ‘ways of knowing’ for the same reason we can choose to prefer a theory that has survived criticism over one that has not: while its past success at solving our problems provide no ‘good reasons’ for favoring science, the failures of alternative ‘ways of knowing’ are sufficient to provisionally adopt what remains.
One of the aims of science, perhaps its most fundamental aim, is knowledge — not of past events — but of future events. Scientists want to ‘read the book of nature’, to borrow a phrase from Bacon. Think of the laws of nature as being general truths, or as they’re known in predicate logic, universal statements (“for all x, if x is F, then x is G”). So the question, to rephrase David Byrne, is: how do I get there? How can scientists grasp hold of the laws of nature?
The popular answer is by ‘inductive inference’, called by Aristotle “the passage from individuals to universals.”  Inductive inference usually takes the following form: “This bacon is crispy. That bacon is crispy. … Therefore, all bacon is crispy.”
One intuitively wouldn’t want to hold a set of incoherent beliefs. Preferring incoherence is to be frowned upon, for at least one belief in such a set must be false. Any sort of epistemology should then strive for some kind of coherence and mutual support among beliefs and, if incoherence is found, for a way to determine which members of the set are false and which are true.
There are two kinds of coherentism I’m thinking of: the first is a sort of nebulous coherentism, holding that it is better to prefer a set of beliefs that support one another over a set of incoherent beliefs. I would then call myself a ‘weak’ coherentist in a sense, as would most modern epistemologists, but we strive not just for the coherence of our beliefs as indicating their truth, but for the truth of all of our beliefs.
The second kind of coherentism I will call ‘strict coherentism.’ It sees no need for recourse to any sort of a posteriori examination. This gambit is played, I think, in order to circumvent a serious problem for most justificationists: coherence alone, if it suffices, would justify preferring a coherent system over an incoherent one.
If inductive evidence is ampliative evidence, then it is clear what would count as a successful outcome of the inductivist project. Given hypothesis h and evidence e, one must show that e makes p(h if e, e) greater than p(h if e), that is, that the evidence raises the probability of the part of h that goes beyond e. Evidence e can be anything one cares to name, including repeated sightings of white swans, black ravens, or blue hats.
Popper and Miller proved in 1983 that, following from the rules of probability, no e can satisfy this requirement. Until this proof is answered, inductivists are tilting at windmills.
“ … if the hypothesis h logically implies the evidence e in the presence of b [background knowledge] (so that eh is equivalent to h) then p(h, eb) is proportional to p(h, b) … suppose that e is some such evidence statement as ‘All swans in Vienna in 1986 are white’, h the supposedly inductive generalization ‘All swans are white’ and k the counterinductive generalization ‘All swans are black, except those in Vienna in 1986, which are white’. Then p(h, eb)/p(k, eb) = p(h, b)/p(k, b). No matter how h and k generalize on the evidence e, this evidence is unable to disturb the ratio of their probabilities …. Supporting evidence points in all directions at once, and therefore points usefully in no direction.” (Popper & Miller, “Why Probabilistic Support is not Inductive,” Phil. Trans. of the Royal Society of London, Series A, Mathematical and Physical Sciences, Vol. 321, No. 1562 (Apr. 30, 1987))
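Both claims can be checked on a toy finite probability space. The eight-world space, the weights, and the particular sets below are my own illustrative assumptions, not anything from the paper: (i) when rival generalizations h and k both entail the evidence e, conditioning on e leaves the ratio of their probabilities untouched; (ii) e never raises the probability of ‘h if e’ (i.e. h-or-not-e), the part of h that goes beyond the evidence.

```python
# Finite sanity check of the Popper-Miller claims on a made-up probability
# space of eight equally weighted "possible worlds".
from fractions import Fraction

worlds = set(range(8))
weight = {w: Fraction(1, 8) for w in worlds}

e = {0, 1, 2, 3, 4}   # the evidence
h = {0, 1}            # a generalization entailing e (h is a subset of e)
k = {2, 3, 4}         # a rival generalization, also entailing e

def p(a):
    """Probability of an event (a set of worlds)."""
    return sum(weight[w] for w in a)

def p_given(a, b):
    """Conditional probability p(a, b) in Popper's comma notation."""
    return p(a & b) / p(b)

# (i) The evidence cannot disturb the ratio of the rivals' probabilities:
assert p_given(h, e) / p_given(k, e) == p(h) / p(k)

# (ii) The evidence does not support the conditional 'h if e' (h or not-e):
h_if_e = h | (worlds - e)
assert p_given(h_if_e, e) <= p(h_if_e)
```

Exact rational arithmetic via `fractions.Fraction` keeps the equality check in (i) free of floating-point noise; any choice of h, k entailing e, and any weights, will satisfy both assertions.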