Imagine that a computer is built to make empirical generalizations by inductive logic (whatever that may be), and that this computer inhabits a simple universe with a limited number of individuals, a limited number of properties, and a limited set of relations that those individuals can bear to one another. Furthermore, the universe operates according to a limited number of ‘natural laws’. In such a universe a computer can be created that, in some reasonable period of time, will discover the ‘natural laws’. If the laws were modified, the computer would find the new set. And if the universe were further complicated, the computer could be enhanced to formulate hypotheses, to test them, and to eliminate those that do not survive testing.
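The elimination procedure described above can be sketched as a toy program. Everything here is invented for illustration: a minimal universe of three individuals, two properties, and a space of candidate ‘laws’ consisting of every implication between one property and another. The machine simply discards any law that a single observation refutes.

```python
from itertools import product

# A toy universe: three individuals, each observed to have (or lack)
# two properties. All names and data are hypothetical.
observations = {
    "a": {"red": True,  "heavy": True},
    "b": {"red": False, "heavy": False},
    "c": {"red": True,  "heavy": True},
}

# Candidate 'natural laws': every implication "P -> Q" over distinct properties.
properties = ["red", "heavy"]
candidate_laws = [(p, q) for p, q in product(properties, repeat=2) if p != q]

def survives(law, obs):
    """A law 'P -> Q' survives if no individual has P while lacking Q."""
    p, q = law
    return all(not facts[p] or facts[q] for facts in obs.values())

surviving = [law for law in candidate_laws if survives(law, observations)]
print(surviving)  # here both "red -> heavy" and "heavy -> red" survive
```

Note that even this trivial machine only tests laws drawn from a hypothesis space the programmer fixed in advance, which is precisely the limitation the next paragraph describes.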
This induction machine is limited by its programmer’s intellectual horizon: the programmer decides what counts as a property or relation; the programmer decides what the machine can recognize as a repetition; the programmer decides what kinds of questions the machine should address. All the most important and difficult problems are solved in advance by the programmer, and the induction machine is little more than a sped-up version of a room full of clerks tallying beans or sorting punch cards.
Here we have today’s work in artificial intelligence, which is limited by precisely this constraint. The theories these computer programs develop are conditional on the initial conditions built into the induction machine. Inductive inference does not occur within the context of discovery; the programmer provides that context. It occurs within the context of justification, and even there it does not satisfactorily solve the problem of induction, for the problem cannot be solved logically. These computers have become problem-solving machines that conjecture the most parsimonious theory and then attempt to refute it.
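The conjecture-and-refutation loop just described can be sketched as follows: candidate theories are tried in order of parsimony (here, crudely, by number of free parameters), and the first one that withstands attempted refutation is retained. The conjecture list and the toy data are hypothetical, and the parsimony ordering is itself something the programmer must supply.

```python
# Toy observations: (x, y) pairs the machine must account for.
data = [(0, 0), (1, 2), (2, 4), (3, 6)]

# Conjectures ordered by parsimony: simpler rules are tried first.
conjectures = [
    ("y = x",   lambda x: x),
    ("y = 2x",  lambda x: 2 * x),
    ("y = x^2", lambda x: x ** 2),
]

def refuted(rule, observations):
    """Attempt refutation: a single counterexample eliminates the rule."""
    return any(rule(x) != y for x, y in observations)

# Keep the first (most parsimonious) conjecture that survives refutation.
surviving_name = next(name for name, rule in conjectures if not refuted(rule, data))
print("surviving conjecture:", surviving_name)  # prints "y = 2x"
```

The surviving conjecture is not thereby justified, only not yet refuted; a fifth observation could eliminate it, which is the point of the argument above.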