
Archive for the ‘induction’ Category

Woe to Induction

In critical rationalism, induction on 18/11/2011 at 8:01 am

Łukasiewicz’s argument against inductive inference is simple: under any objective interpretation of the probability calculus, the probability that a hypothesis is true — where m is the number of logical consequences of the hypothesis and n is the number of events observed to possess property P — is n / (m + n). Since m is always at least as great as n, the probability of the hypothesis cannot be greater than ½. Moreover, since scientific theories are universal, m approaches infinity and the probability of the scientific theory approaches 0. Carnap rediscovers this argument after attempting to produce an objective inductive logic and is shaken to the core.
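Read as assigning the hypothesis probability n / (m + n) — the only reading under which both the bound of ½ and the limit of 0 hold — the argument can be sketched numerically. The function name below is mine, purely for illustration:

```python
def inductive_probability(n: int, m: int) -> float:
    """Probability of the hypothesis after n verified instances,
    where m is the total number of its logical consequences."""
    return n / (m + n)

# Since m >= n, the probability can never exceed 1/2:
print(inductive_probability(10, 10))  # 0.5 at best, when m == n

# For a universal theory m grows without bound, so the probability
# tends to 0 no matter how many instances are verified:
for m in (10**3, 10**6, 10**9):
    print(inductive_probability(100, m))
```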

Łukasiewicz, Jan. 1909. On Probability of Inductive Conclusions, Przeglad Filozoficzny, 12, 209–210.

Carnap, Rudolf. 1950. The Logical Foundations of Probability. 2nd ed., Chicago: The University of Chicago Press, 572.

//

It’s Worse Being Green

In empiricism, induction, justificationism, underdetermination on 12/10/2011 at 5:19 am

In It’s Not Easy Being Grue, I argued for skepticism — or at least incredulity — towards any inductive inference made solely by appealing to a posteriori evidence. Two hypotheses, so long as they each have logical content greater than the evidence and are not yet refuted, are, as a matter of logic, necessarily equally favored by the evidence. Even if one appeals to one of the two hypotheses employing a natural property, the problem still stands, since naturalness cannot be uncovered through a posteriori investigation. Of course, more than two hypotheses meet these criteria — any number of empirically adequate hypotheses with greater logical content than the evidence may be constructed. In sum, favoring one hypothesis over another — even on an a priori warrant — cannot be determined from a posteriori evidence at all.

Read the rest of this entry »

It’s Not Easy Being Grue

In empiricism, experiments, induction, quine, skepticism, underdetermination on 10/10/2011 at 1:52 am

One solution to Goodman’s new riddle of induction, proposed by David Lewis and W.V.O. Quine, is that certain languages describe natural properties, which have a special metaphysical status. All things being equal, the evidence will favor a hypothesis expressed in a language whose predicates pick out natural properties over one expressed in any other language. The problem of choosing between hypotheses that the evidence will favor and hypotheses that it will not is thereby solved by choosing the hypothesis expressed in a language that uses natural properties. There is, however, a problem with this solution: how can a scientist decide whether a language is using a natural property?

Read the rest of this entry »

Induction Machines

In critical rationalism, induction on 23/08/2011 at 12:26 pm

Imagine that a computer is built to make empirical generalizations with inductive logic (whatever that may be), and that this computer inhabits a simple universe with a limited number of individuals, a limited number of properties, and a limited number of relations the individuals can bear to one another. Furthermore, the universe operates with a limited number of ‘natural laws’. In this universe a computer can be created such that, in some reasonable period of time, it will discover the ‘natural laws’. If the laws were modified, the computer would find a new set of laws. If this universe were further complicated, the computer could be enhanced to formulate hypotheses, to test those hypotheses, and to eliminate those that do not survive testing.

This induction machine is limited by its programmer’s intellectual horizon: the programmer decides what is or is not a property or relation; the programmer decides what the machine can recognize as repetitions; the programmer decides what kinds of questions the machine should address. All the most important and difficult problems are already solved by the programmer, and the induction machine is little more than a sped-up version of a room full of bean-counters or punch-card holders.

Here we have today’s work in artificial intelligence, which is limited by precisely this constraint. The theories that these computer programs develop are conditional on the initial conditions an induction machine needs. Inductive inference does not, then, occur within the context of discovery; the programmer provides that. Inductive inference occurs within the context of justification, and even there it does not satisfactorily solve the problem of induction, for the problem cannot be solved as a matter of logic. These computers have become problem-solving machines that operate by conjecturing the most parsimonious theory and attempting to refute it.

//

Rules

In experiments, induction, justificationism on 20/08/2011 at 7:29 am

A philosophical problem has the form: I don’t know my way about. (Ludwig Wittgenstein)

Up until the late 19th century, every observation was compatible with Newton’s theory of gravity. All these observations are also compatible with Einstein’s general theory of relativity. Two quite different theories were compatible with the same set of observations; therefore, one cannot know that one has derived a true theory from observations.

Assume we have a long series of numbers. They go on: 2, 4, 8 … What is the next number in the series?
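To make the point of the puzzle concrete: at least two simple rules fit 2, 4, 8 exactly yet disagree about the fourth term. The two rules below are my own illustrative choices:

```python
def powers_of_two(x: int) -> int:
    return 2 ** x

def quadratic(x: int) -> int:
    # The unique parabola through (1, 2), (2, 4), (3, 8)
    return x * x - x + 2

# Both rules agree on all the data seen so far ...
for x in (1, 2, 3):
    assert powers_of_two(x) == quadratic(x)

# ... and diverge on the very next term:
print(powers_of_two(4))  # 16
print(quadratic(4))      # 14
```

Infinitely many further rules can be constructed the same way, which is the point: the data alone never single out ‘the’ next number.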

Read the rest of this entry »

The Sun Also Rises

In induction on 08/08/2011 at 5:57 am

As simply as possible …

Do we know that the Sun will rise tomorrow because it has risen in the past? No. Hume’s psychological account of inductive inferences is mistaken, for it misstated the problem. Somehow empiricists have taken Hume as the final word that the justification for the belief that the Sun will rise tomorrow is that the Sun has risen in the past.  It is easy to undermine that argument, for there is no logical inference made. We know the Sun will rise tomorrow because we know why it rises. We have an explanation: the Earth rotates on its axis roughly every 24 hours, and we have an explanation for why that happens, and so on. The rising Sun has led us to seek an explanation, and that explanation is our ‘justification,’ for if the Earth rotates on its axis roughly every 24 hours, then the Sun will rise tomorrow.

The logical content is transmitted from the conditional “if” to the “then,” for while the phrase ‘the Sun will rise tomorrow’ is clearly not true when understood in its broadest sense (the Sun does not ‘rise’, solar eclipses are infrequent events, and people in the far North experience no sunlight for months at a time), when understood colloquially, it is but an observation report of the Sun rising in the East when viewed from a particular vantage point at a particular time. In other words, it would be like saying “If all dogs are brown, then all other things being equal, an individual will, upon seeing a dog, report that it is brown.”

Conditional knowledge, however, is in no way justified by appealing to the explanation. Another explanation about laws of gravitation is necessary. This new explanation requires another explanation, and so on, creating an infinite regress of explanations. This conditional knowledge is in no way justified, for our explanations have in the past been false, and there is no way to know if our explanations are true, for explanations always have a logical content that extends far into the future and past, discussing events that we will never have a chance to observe.

//

Strevens on Induction

In induction on 04/08/2011 at 4:38 pm

When two theories are empirically equivalent, their likelihoods relative to any given body of evidence are equal. Thus the difference in anyone’s subjective probabilities for the theories must be due entirely to the difference in the prior probabilities that were assigned to the theories before any evidence came in. Bayesian confirmation theory preserves a prior bias towards simplicity, but it implements no additional bias of its own. …

Bayesian confirmation theory does impose an objective constraint on inductive inference, in the form of the likelihood lover’s principle, but this is not sufficient to commit the Bayesian to assuming the uniformity of nature, or the superiority of “non-grueish” vocabulary or simple theories. The first of these failures, in particular, implies that [Bayesian confirmation theory] does not solve the problem of induction in its old-fashioned sense.

If the old-fashioned problem of induction cannot be solved, what can we nevertheless say about [Bayesian confirmation theory]’s contribution to the justification of induction? There are two kinds of comments that can be made. First, we can identify unconditional, though relatively weak, constraints that [Bayesian confirmation theory] puts on induction, most notably the likelihood lover’s principle. Second, we can identify conditional constraints on induction, that is, constraints that hold given other, reasonable, or at least psychologically compelling, assumptions. We can say, for example, that if we assign low priors to grueish hypotheses, [Bayesian confirmation theory] directs us to expect a future that resembles the past. This is, remember, considerably more than we had before we began. (Michael Strevens, Notes on Bayesian Confirmation Theory [.pdf], 66)

Strevens is admirable, for he is upfront about the inadequacies of Bayesianism. That said, Strevens at times overstates his case. For instance, let me simplify the bolded passages: “The evidence for any particular theory is underdetermined … however, if we reject theories that are incompatible with our assumption that the future will resemble the past, we will expect a future that resembles the past.” At least two problems for Strevens:

(1) The future does not resemble the past in all domains: black swans, white ravens — in fact, all falsified scientific theories — should give us pause before assuming something that is demonstrably false when applied to all domains. If the future resembles the past in only some domains, why should we assume that the future will resemble the past in any particular domain?

(2) Even if the future should resemble the past in a particular domain, it does not follow that any theory assuming a future that resembles the past is true: while ‘grueish hypotheses’ are ruled out, there may still be an alternative theory, following from that same assumption, that is in fact true. After all, it has happened in the past (Einstein replacing Newton).

//

Practical Prediction

In experiments, induction, justificationism, popper, salmon on 01/08/2011 at 5:24 am

Wesley Salmon objects to Popper’s theory of knowledge on the grounds that, contrary to its stated rejection of a principle of induction, in order to explain how one can rationally decide between competing unfalsified theories, it requires the adoption of a principle of induction. The advice to an applied scientist or engineer to act as if the best-tested theories are probably true and the untested theories are probably false, though no doubt excellent advice, does not have any claim to be dubbed ‘rational’ unless a pragmatic principle of induction is adopted.

If the applied scientist’s choice is guided by the best-tested scientific theories available to him, then it appears that he is assuming that what was successful in the past will remain successful in the future. This would be an  assumption rejected by Popper, for it employs the principle of induction. However, if a scientist, following Popper’s theory of knowledge, renounces a principle of induction, then he is not allowed to say that ‘future unobserved events will resemble past observed events.’

Read the rest of this entry »

You Have No Idea How Wrong You Are

In fallibilism, induction, the ancient greeks on 25/07/2011 at 3:06 am

//

Dogma

In fideism, induction on 23/07/2011 at 6:24 am

Nothing is more characteristic of a dogmatist epistemology than its theory of error. For if some truths are manifest, one must explain how anyone can be mistaken about them, in other words, why the truths are not manifest to everybody. According to its particular theory of error, each dogmatist epistemology offers its particular therapeutics to purge minds from error. (Imre Lakatos)

I’ve heard it said from followers of Rand that a theory (usually one of Rand’s own, or a variation thereof) is unassailable, for any criticism of the theory must necessarily assume the theory in order to criticize it. This, somehow, invalidates all criticism.

Is the supposition “Any criticism must assume the validity of the theory being criticized” self-evident?

One problem: how does one know that all possible criticisms employ that theory? Is anyone familiar with all potential arguments against the theory? Of course not: novel ideas are created every day. Therefore, this assertion, that all criticism must assume the theory is true, is based on an inductive inference, which cannot, as a matter of logic, be as demonstrably self-evident or unconditionally immune to criticism as it first appears.

It might be the case that it is true, but it is hardly evident to me, especially once this doubt is raised. Furthermore, whatever theory is used to demonstrate how the initial theory is self-evident must, of course, be scrutinized to determine if it suffers from the same problem: is this new theory self-evident as well? A regress of ‘unassailable’ theories begins in earnest.

The world is far more interesting than we can imagine: asserting that no criticism could possibly exist speaks only, I think, to a limited intellectual horizon. I conjecture that it is better for an idea to stick its neck out as far as it can, inviting many criticisms and taking them seriously. One criticism, if accepted, is enough. As the followers of Rand would have it, the world can only be a constant construction of sandcastles following the blueprints of the Master, and yet no helpful criticism of the blueprints or their faithful execution is permitted. I might go so far as to say that this meta-theory is self-evident, but of course, I don’t.

Assume that everything I have just said is not the case: assume that the Randian (for they are such an easy punching bag, no?) now says that any criticism that does not assume the same things as Objectivism starts from different — incompatible — assumptions, and so is not a viable criticism. This might be a possible defensive maneuver for the Randian, for it disallows both criticism of Objectivism’s assumptions and criticism of its coherence. Here we have the gestation of the most uninteresting post-modernists within the Randian (or of religious presuppositionalists like Van Til), for the Randian must not be aware of the reductio ad absurdum.

And this, I should note, is a point that deserves no further clarification on my part, for pointing out incoherence is one of the most powerful criticisms available.

//

What’s Day is Night

In experiments, induction on 23/07/2011 at 3:23 am

Over at the impressive community blog of Less Wrong, Eliezer Yudkowsky published an interesting article titled “Einstein’s Arrogance.” Check it out. It’s worth a read. One point stands out:

To assign more than 50% probability to the correct candidate from a pool of 100,000,000 possible hypotheses, you need at least 27 bits of evidence (or thereabouts).  You cannot expect to find the correct candidate without tests that are this strong, because lesser tests will yield more than one candidate that passes all the tests.  If you try to apply a test that only has a million-to-one chance of a false positive (~20 bits), you’ll end up with a hundred candidates.  Just finding the right answer, within a large space of possibilities, requires a large amount of evidence. (bolding mine)
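The quoted figures are plain arithmetic — bits are just base-2 logarithms of the pool size. A quick check (the variable names are mine):

```python
import math

pool = 100_000_000           # hypotheses in Yudkowsky's example
bits_needed = math.log2(pool)
print(bits_needed)           # ~26.6, hence "at least 27 bits"

# A million-to-one test supplies ~20 bits, leaving many survivors:
survivors = pool / 2**20
print(survivors)             # ~95, i.e. "a hundred candidates"
```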

The argumentation is impressive, but Yudkowsky should note that the pool of possible hypotheses is far greater than 100,000,000 — it is infinite. Just as an infinite number of curves can fit two, three, four … up to n points on a graph, and there is a one-to-one match between equations that express these curves and the curves, so can the logical consequences of an infinite number of hypotheses ‘predict’ the results of any finite number of experiments. Therefore, no number of bits of evidence can ever permit assigning a probability more than 50% to any candidate from the pool, much less assigning a probability to the correct candidate.

Ah, but there is a way to limit the number of theories in the pool of possible hypotheses to a finite number. The only way the pool is limited is, so I conjecture, by the elimination of theories that are incompatible with our background assumptions. And yet, any of our background assumptions could be false. Yudkowsky advances the thought-experiment of black boxes that always beep when the properly-ordered six numbers are entered, but have a 25% chance of beeping when the combination is wrong. But how does one know that these black boxes always beep when the properly-ordered six numbers are entered? Yes, this is a thought-experiment, but it is an assumption that is more often than not false in science: our background assumptions, even about the accuracy of black boxes, are unjustified, constantly open to revision, and have historically been revised.

Up until the Michelson-Morley experiment (and for some time after), one of our background assumptions was that light traveled through a medium called ‘luminiferous aether.’ Before the background assumptions deductively entailed by the luminiferous aether were rejected, Einstein’s hypothesis was barred from swimming in the pool of possible hypotheses, for the two contradict one another about the speed of light: Einstein assumed that the speed of light in a vacuum is constant regardless of reference frame. Everything else follows from this assumption (and some mathematical equations formulated by Lorentz). Therefore, the pool of possible hypotheses in light of our background assumptions is never fixed at an arbitrary number, and this pool may not even contain, at any one time, a correct candidate.

In fact, if we are to work with probabilities, I pull out the pessimistic meta-induction argument and see how it flies: every previous theory we have rejected rested on false background assumptions. It is highly probable that our current background assumptions are false, therefore it is highly probable that we exclude from the pool of possible hypotheses a true hypothesis.

If that argument does not sit well, then we can appeal to a more robust interpretation of probability: we have an infinite number of hypotheses at our disposal that equally fit the available results of a finite number of tests. How could we ever claim that a finite number of hypotheses (only one hundred million?) should include the true hypothesis?

And so on.

//

Evidence

In empiricism, induction, justificationism on 19/07/2011 at 6:49 am

There is endless conjecture, and certainty is not to be counted upon (Kant, Critique of Pure Reason)

Some people treat evidence as something that accumulates over time, like sap from a tree. Once enough evidence is collected, you need only synthesize it into syrup, and then you’ve proved your point. “I have X amount of evidence for Y, therefore you ought to believe Y, otherwise you are behaving irrationally.” So the story goes.

Read the rest of this entry »

What is This Thing Called Knowledge?

In critical rationalism, duhem, experiments, induction on 13/07/2011 at 12:21 pm


Duncan Pritchard, who holds a chair in epistemology at the University of Edinburgh, published What is This Thing Called Knowledge? some years ago. He has three textbooks, two published books on epistemology, and approximately fifty journal articles to his name. Let me make this clear: Pritchard is no first-year undergrad at a community college. And yet, his What is This Thing Called Knowledge? has a short section on Popper’s response to the problem of induction that is … shameful. Just shameful.

Read the rest of this entry »

Technology and Failure

In experiments, fallibilism, induction, skepticism on 12/07/2011 at 12:35 pm

Over the years, shuttle managers had treated each additional debris strike not as evidence of failure that required immediate correction, but as proof that the shuttle could safely survive impacts that violated its design specifications. (Lee Hotz, Huston, you have a problem)

//

Wherefore art thou Induction?

In critical rationalism, induction, quine, underdetermination on 11/07/2011 at 10:12 am

The word ‘induction’ takes on many meanings, always when most convenient. Like a slippery eel, just when a critic of induction has their hands around its neck, it wiggles out once more.

Does induction refer to the ‘context of discovery’ or the ‘context of justification’?

If ‘induction’ refers to the context of discovery, the critic of induction need only point to the greatest historical developments in science. Without blinders on, the critic points out that these theories were birthed in the heat of dealing with significant scientific problems. The framework comes before observation (read: Einstein). How then could enumerative induction work? Theories are imaginative creations–possible solutions to problems. Even if enumerative induction is permitted during the context of discovery, it helps the scientist no more than dreaming next to a raging fire (read: Kekulé’s ouroboros), drug use (read: Feynman, Kary Mullis), &c., which is to say that it has no privileged position over even the most arbitrary ‘methods.’

If ‘induction’ refers to the context of justification, is this a process of objective inductive verification à la Carnap? If so, then this program is defunct, for no number of verifications can increase the probability assigned to a strictly universal statement. Is this the process of subjective certitude after repeated verifications? Then it contradicts the probability calculus and fails to solve the problem of underdetermination.

If ‘induction’ refers to the metaphysical assumption of regularity of systems, which we may approximate if enough inductions of the system are collected, then the inductivist retreats to asserting only that there exists regularities, calling this assumption ‘induction.’ If a proposed regularity should turn out to be false, then this was either a mistaken induction or not induction at all. If it is not an induction, then this is little more than wordplay: we cannot tell this type of induction apart from a conjecture. If it is a mistaken induction, this type of induction should only be known to be mistaken in hindsight: it tells us nothing until we learn that we are wrong.

And what is that but a falsification?

//

Reduced to Twelve Lines of Dialogue

In critical rationalism, empiricism, induction, popper on 05/07/2011 at 12:32 pm

Logical Positivist: Popper, we know we didn’t let you in our club, but what do you think of our plan of eliminating metaphysics by reducing all meaningful statements to elementary statements of experience or analytic truths? Isn’t it swell?

Popper: Are you blind?

Logical Positivist: What?

Popper: You define ‘meaningful’ as ‘possible to empirically investigate’ while you define ‘meaningless’ as ‘impossible to empirically investigate,’ but metaphysics has usually been defined as non-empirical. Your use of the word ‘meaningless’ is derogatory, rather than descriptive. I call your very plan into question as merely restricting definitions.

Logical Positivist: No, it’s not!

Popper: Fine, if that will not turn you, put that criticism aside. Does this criticism work? Your very plan is not analytic, nor is it reducible to an elementary statement of experience. Therefore, there exists at least one meaningful metaphysical statement: your plan.

Logical Positivist: … could you try something more … palatable?

Popper: Sure, try this on for size. If we assume that you are successful in eliminating all metaphysics, then by that very criterion of meaning scientific laws cannot be reduced to elementary statements of experience, and ought to be rejected as meaningless.

Logical Positivist: … um … Let me get back to you …

Popper: Take as much time as you want. Put all my previous objections aside and assume for the moment that you have solved them all. How about this? You accept an inductive logic, right?

Logical Positivist: Sure!

Popper: Your proposed inductive logics are not reducible to elementary statements of experience or analytic truths. Your plan is clearly incoherent.

//

Philosophy of Science v. Epistemology

In duhem, induction, quine on 23/06/2011 at 10:41 am

In light of Einstein, Rutherford, and Maxwell, if we assume the knowledge-acquiring process S employs in everyday affairs is distilled or refined in scientific practice, then the problem of induction and the Duhem-Quine thesis should have long ago put to rest any theory of knowledge that claims S can know theory 1 has a greater objective verisimilitude than theory 2.

//

Salmon and Corroboration

In bartley, critical rationalism, induction, popper, salmon on 22/06/2011 at 2:14 am

If inductive evidence is ampliative evidence, then it is clear what would count as a successful outcome of the inductivist project. Given hypothesis h and evidence e, one must show that evidence e makes p(h if e, e) greater than p(h if e). Evidence e can be anything one cares to name, including repeated sightings of white swans, black ravens, or blue hats.

Popper and Miller proved in 1983 that, following from the rules of probability, no e can satisfy this requirement. Until this proof is answered, inductivists are tilting at windmills.

“… if the hypothesis h logically implies the evidence e in the presence of b [background knowledge] (so that he is equivalent to h), then p(h, eb) is proportional to p(h, b) … suppose that e is some such evidence statement as ‘All swans in Vienna in 1986 are white’, h the supposedly inductive generalization ‘All swans are white’, and k the counterinductive generalization ‘All swans are black, except those in Vienna in 1986, which are white’. Then p(h, eb)/p(k, eb) = p(h, b)/p(k, b). No matter how h and k generalize on the evidence e, this evidence is unable to disturb the ratio of their probabilities. … Supporting evidence points in all directions at once, and therefore points usefully in no direction.” (Popper & Miller, Why Probabilistic Support is not Inductive, Phil. Trans. of the Royal Society of London, Series A, Mathematical and Physical Sciences, Vol. 321, No. 1562 (Apr. 30, 1987))
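The ratio claim in the quotation — that evidence entailed by both generalizations cannot disturb the ratio of their probabilities — can be verified in a toy probability space. The numbers below are my own illustration, not Popper and Miller’s:

```python
# Three mutually exclusive possibilities: h ('all swans are white'),
# k (the counterinductive generalization), and 'other' worlds in which
# the evidence e ('all swans in Vienna in 1986 are white') is false.
priors = {"h": 0.2, "k": 0.1, "other": 0.7}
entails_e = {"h": True, "k": True, "other": False}

# Probability of the evidence: total weight of worlds that entail e.
p_e = sum(p for w, p in priors.items() if entails_e[w])

def posterior(w: str) -> float:
    """Probability of w after conditioning on e."""
    return priors[w] / p_e if entails_e[w] else 0.0

ratio_before = priors["h"] / priors["k"]
ratio_after = posterior("h") / posterior("k")
print(ratio_before, ratio_after)  # both approximately 2.0

# Conditioning on e raised both probabilities but left their ratio intact.
assert abs(ratio_before - ratio_after) < 1e-9
```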

Read the rest of this entry »

Rand’s Problem with the Problem of Induction

In induction, rand on 21/06/2011 at 3:16 am

Dykes:

By careful observation – free from preconception – we are able to discover the identities of the entities we observe. Thereafter, we are fully entitled to assume that like entities will cause like events, the form of inference we call induction. And, because it rests on the axiom of the Law of Identity, correct induction – free from contradiction – is a valid route to knowledge. (¶ 11)

I must address this paragraph line by line: “By careful observation – free from preconception – we are able to discover the identities of the entities we observe.” (¶ 11) The assumption that we may have unmediated observation, ‘free from preconception’, is just that: an assumption that such an observation may take place. From what we know in neuroscience and basic biology, it appears that none of our sensory qualities is in any way immediate. It is dubious, to say the least, that it is possible to observe ‘free from preconception’, for that would require a mind wiped clean even of its structure, and perhaps of all its previous content. Simply put, the mind is not in any way a blank slate. To counter the objection that one cannot know whether one is observing ‘free from preconception’ by simply declaring that we do observe ‘free from preconception’ is absurd.

Thus, Dykes must first argue that observation is ‘free from preconception,’ and that we may come to know which observations are ‘free from preconception’ and which observations are not.

Read the rest of this entry »

Rand’s Law of Identity

In induction, rand on 21/06/2011 at 2:59 am

I’ve learned very few truly valuable things in life. I won’t list them all, and they may be repugnant or less than valuable to some, but I will list one: argument is not about winning. If you win an argument, you lose. Arguments are about getting closer, no matter how hard they are, to the truth. Of course, I choose not to go into a lengthy argument about why this is the case, simply because I’m not out to convert anyone.

That said, there are times when I see arguments that are just wrong. In these cases, I do not mean to say that their conclusions are therefore false, only that the argument is fallacious — and not manifestly so, as is often the case. Sometimes the wrongness is hidden deep within, and only by prying carefully at the edges can we get a glimpse of where the argument runs afoul.

Read the rest of this entry »