d

Evidence

In empiricism, induction, justificationism on 19/07/2011 at 6:49 am

There is endless conjecture, and certainty is not to be counted upon (Kant, Critique of Pure Reason)

Some people treat evidence as something that accumulates over time, like sap from a tree. Once enough evidence is collected, you need only synthesize it into syrup, and then you’ve proved your point. “I have X amount of evidence for Y, therefore you ought to believe Y, otherwise you are behaving irrationally.” So the story goes.

I don’t think there is such a thing as ‘evidence’, at least as it’s traditionally understood, since no amount of corroborating evidence can tell us whether the theory is true or false. It’s the nature of the beast: the logical content of the set of corroborating evidence is always smaller than that of the theory the set is intended to corroborate.

When scientists test some particles in a lab to a high degree of accuracy, or take a volt meter and stick it on things, they are said to be gathering ‘evidence’; however, scientists have only tested an extremely limited number of objects: nobody has tested the quantum mechanics of the ivory of piano keys, of monkeys in the Congo, of the pages of the Gutenberg Bible, &c. Yet QM is said to work just as well for my piano keys and chairs and rat brains and everything else that hasn’t been put under a direct test …

Even if all our tests are correct, our present theories may be a false limiting case of a broader theory; the theory could be false in certain regions of space, during certain times, when applied to certain elements under specific stress, &c.

Assume for a moment that this ‘syrup’ theory of evidence is true. What happens when one encounters a falsifying/disconfirming/refuting test? Does the syrup evaporate in a flash of heat? If so, then the syrup serves no purpose. Does the syrup remain, merely reduced by a small amount? Then it contradicts the deductive inference of modus tollens.
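The tension with modus tollens can be made concrete with a toy calculation (a minimal sketch, not from the post; the function name is illustrative): if a theory strictly entails a prediction, then no matter how much ‘syrup’ has accumulated as a prior, a single failed prediction drives the posterior to zero.

```python
# Toy illustration: if hypothesis H entails evidence E (P(E|H) = 1),
# then observing not-E refutes H outright, no matter how large a
# prior has accumulated from past corroborations.

def posterior_given_not_e(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H | not-E) by Bayes' theorem, for a binary H vs. not-H."""
    p_not_e_given_h = 1.0 - p_e_given_h
    p_not_e_given_not_h = 1.0 - p_e_given_not_h
    p_not_e = (p_not_e_given_h * prior_h
               + p_not_e_given_not_h * (1.0 - prior_h))
    return p_not_e_given_h * prior_h / p_not_e

# A theory 'corroborated' up to a prior of 0.999 still collapses
# the moment a strict entailment fails:
print(posterior_given_not_e(0.999, 1.0, 0.5))  # -> 0.0
```

Only if the entailment is loosened (P(E|H) < 1) does the syrup merely shrink rather than vanish — which is exactly the move from deductive refutation to probabilistic disconfirmation.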

Put that aside for a moment and consider a (relatively) recent amendment: instead of quantifying evidence, we quantify our belief in the evidence. How do we carry over this additive ‘syrup’ concept of evidence to apply to belief? Forgive me if I’m a bit incredulous, but it looks, at least to me, as if it suffers from all the problems of quantifying evidence, along with all sorts of new problems.

//

  1. From a mathematical perspective (e.g., Keynes, Good) one never has ‘evidence that X’. Instead one has ‘evidence that is consistent with X’ and ‘evidence that an alternative, Y, is false’. Evidence ‘that X’ could only occur if one knew what all the alternatives were and had strong evidence against them. This is rarely the case, although one might say that one had ‘evidence for X out of the alternatives {X, Y, …}’.

    The ‘weight of evidence’ against Y is related to the likelihood of the evidence given Y (see Good or my blog). If one assumes that one knows what all the alternatives are and how frequent they are ‘a priori’, then one can say what the ‘evidential power’ of a given experiment is, and then say that a hypothesis, X, has survived a test of a given power. But that’s a lot of assumptions.

    • djmarsay,

      I agree with you on all points. Usually, though, Bayesians seem to think, and perhaps I’m reading them wrong, that their priors allow them to make the jump from contingency (“if X is true, then the probability of Y given Z is …”) to necessity (“the probability of Y given Z”). In other words, it’s not like they’re taking the problem of underdetermination seriously.

  2. d

    Bayesian probability is often presented as about subjective probability, in which case why should I take any notice? In its favour it can be said that in some cases one needs the theory in order to be able to make definite decisions. But what if I regard some residual uncertainty as unavoidable?

    In Good’s notation one has likelihoods such as P(E|H:C), the probability of evidence E given a hypothesis H in context C. Strong Bayesians seem happy to ‘assess’ the context and then treat it as fact. They also suppose that priors, P(H:C), exist, usually by making the implicit assumption that past performance is a perfect guide to the present value. More thoughtful Bayesians such as Spiegelhalter (http://www.youtube.com/watch?v=MmnPdXZz-S8) may compute in the same way as strong Bayesians but are more mindful of the uncertainties.

    If one is sure that the likelihoods and priors are about right, then you often end up with the same conclusions whether you reckon with Bayesian probability while declaring your uncertainty or follow Keynes / Turing / Good and compute it directly. The difference arises when the actual likelihoods or priors could be very different from what you think is typical. In such cases uncertainty is not a number and, as you say, it is better to be explicit about dependencies.

    Dave
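The log-odds bookkeeping behind Good’s ‘weight of evidence’, as discussed in the comments above, can be sketched in a few lines (a simplified sketch assuming just two stated alternatives; the function names are illustrative, not Good’s):

```python
import math

# Good's 'weight of evidence' is the log Bayes factor:
#   W(H:E) = log P(E|H) - log P(E|alt).
# It measures only how E discriminates between the alternatives
# actually specified; it says nothing about hypotheses left out.

def weight_of_evidence(p_e_given_h, p_e_given_alt):
    """Log Bayes factor of evidence E for H against a stated alternative."""
    return math.log(p_e_given_h / p_e_given_alt)

def update_log_odds(prior_log_odds, weights):
    """Bayes in log-odds form: posterior = prior + sum of weights."""
    return prior_log_odds + sum(weights)

# Ten observations, each twice as likely under H as under the stated
# alternative; the 'syrup' accumulates additively in log-odds...
w = weight_of_evidence(0.8, 0.4)   # log 2 per observation
posterior = update_log_odds(0.0, [w] * 10)
# ...but only relative to {H, alt}: an unconsidered hypothesis that
# fits the data equally well would leave these numbers unchanged.
```

This is why the comparison above is always ‘evidence for X out of the alternatives {X, Y, …}’: the arithmetic cannot register a rival nobody has written down.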
