One must tear them away from the reality to which they have become accustomed and cause them to see everything anew. … but one has to do everything, one has to create a situation that threatens them. … For knowledge, whatever it is worth, from the most precise mathematics to the darkest suggestions of art, is not to calm the soul but to create a state of vibration and tension in it. (Witold Gombrowicz, Diary)
If evolution can force a group of organisms into a local maximum determined by some path it went down long ago, then if life evolved an inherently inefficient mechanism, it would be stuck with it: you can’t start over without scrapping the whole project. This, I think, can be made analogous to scientific theories:
If we assume that science is interested in the overarching goal of attaining interesting truths, but we cannot know when or if we have acquired them, then we have to settle for eliminating interesting falsehoods. Imagine a graph with labelled x, y, and z coordinates. What do the labels stand for? I don’t know; I’m not a mathematician. Perhaps verisimilitude, logical content, and something else I can’t be bothered to think of right now. Three dimensions might not even be enough; more might be needed. That doesn’t bother me: the graph is only for creating a mental image of the space of possible theories. Something like this.
We have a search space (the graph) and a fitness function (empirical adequacy, or something like that); the nodes in the search space are our scientific theories, and the edges of a node correspond to possible changes to those theories. Whichever edge produces the best value of the fitness function is selected. And so on.
We want to reach the global maximum, to optimize our theories, but all we can actually do is move further away from the local and global minima. Sounds a lot like evolution, no? Of course, evolutionary epistemology has been around for more than fifty years: Donald Campbell’s BVSR, blind variation and selective retention.
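The search procedure just described can be sketched as a toy hill climber. Everything here is invented for illustration (the two-peak landscape, the step size, the starting points); the point is only to show how a purely local search settles on whatever peak happens to be nearby:

```python
import math

def fitness(x):
    # Invented stand-in for empirical adequacy: a modest peak near
    # x = -1 and a higher, global peak near x = 2.
    return math.exp(-(x + 1) ** 2) + 2 * math.exp(-(x - 2) ** 2)

def hill_climb(x, step=0.05):
    # Greedy local search: follow whichever edge (left or right
    # neighbour) improves the fitness function; stop when neither does.
    while True:
        best = max((x - step, x + step), key=fitness)
        if fitness(best) <= fitness(x):
            return x  # no edge improves: a local maximum
        x = best

print(round(hill_climb(-1.5), 2))  # settles near -1, the lower peak
print(round(hill_climb(1.0), 2))   # a start in the other basin finds ~2
```

Which peak you end up on is decided entirely by where you start, the path-dependence the evolution analogy turns on.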
Humanity “proceeds through life as if in a fog” (Milan Kundera)
The direction of the future is only there in order to elude us. (Georges Bataille)
What if a scientific theory could be caught in a local maximum? Suppose the theory correctly predicts events only within a limited area, and that this area is as large as or larger than our available knowledge (imagine that it’s the part of the graph we can see, the rest obscured by an infinitely large piece of white paper). Then any slight change to the theory in light of our available knowledge would make it slide down the slope towards a local minimum, rendering the theory empirically inadequate. In brief, there’s no way of knowing whether the theory is true or false with our available knowledge.
Only a new theory, one that ‘scraps the whole project’ (by explaining how a previous test was mistaken, or an auxiliary hypothesis was in error, or that the old theory was a limiting case of the new one), can expand our available knowledge. It opens up a new plateau, a crucial experiment, on which the two theories can be compared: if the old theory survives, then, having cut a bit out of the paper, we see that the local maximum was in fact larger than we anticipated; if the new theory triumphs, we’ve started to crawl up a new crest towards another local maximum. We can look across a new vista and start again.
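The ‘scrap the whole project’ move can be caricatured as a random restart: instead of endlessly refining the incumbent theory, propose fresh starting points elsewhere in the space and keep whichever refined candidate scores best. Again, the landscape and all the numbers are invented purely for illustration:

```python
import math
import random

def adequacy(x):
    # Invented stand-in for empirical adequacy: a modest peak near
    # x = -1 and a higher one near x = 2, invisible to local search.
    return math.exp(-(x + 1) ** 2) + 2 * math.exp(-(x - 2) ** 2)

def local_refine(x, step=0.05):
    # Incremental tinkering: accept a neighbour only if it scores
    # better; stop when no neighbour does.
    while True:
        best = max((x - step, x + step), key=adequacy)
        if adequacy(best) <= adequacy(x):
            return x
        x = best

def with_restarts(n_restarts=20, seed=0):
    # 'Scrapping the whole project': refine from many fresh starting
    # points in [-4, 4] and keep the most adequate result.
    rng = random.Random(seed)
    candidates = [local_refine(rng.uniform(-4, 4)) for _ in range(n_restarts)]
    return max(candidates, key=adequacy)

print(round(local_refine(-1.5), 2))  # tinkering alone stays on the lower peak
print(round(with_restarts(), 2))     # restarts find the higher peak near 2
```

The restart plays the role of the new theory: it doesn’t climb from where the old one stands, it begins somewhere the old theory’s local moves could never reach.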
Of course, one problem is that no matter how imaginative we are, we are constrained by the state of our current technology. Without the ability to conduct a crucial experiment (think of the vast amounts of money funneled into CERN), there’s no chance for our theories to change. Change in our scientific knowledge, then, requires change somewhere else in our knowledge.