Tuesday, January 31, 2012

On the riddles of induction.

Induction is weird. We've got a pretty good logic of deduction -- conclusions drawn on the basis of evidence that guarantees them -- at least for sentences taken as wholes and for subject-predicate sentences. (Deduction involving modality -- possibility and necessity -- and duty is on less settled footing.)

Induction is the process of reasoning whereby conclusions are drawn on evidence that is less than fully conclusive. In other words, there is a possibility that the conclusion is wrong, even though it has been appropriately drawn from the evidence.

As I say, it's weird. We have to use induction -- it's impossible, given our general epistemic situation (limits on cognition, time, energy) to have guaranteed conclusions always. But induction has some problems, which raise questions as to the justification for our reliance on it.

For example, there's the raven paradox. Take the sentence "All ravens are black". Obviously, seeing a black raven serves as evidence for this claim; the more black ravens seen, the more evidence. But, logically, "All ravens are black" is equivalent -- that is, it means the same thing, or is true and false in the same circumstances -- to "All non-black things are non-ravens". And "All non-black things are non-ravens" is confirmed by any non-black thing which is also a non-raven -- so, for example, a white shoe. Given that the two sentences are equivalent, though, it follows that a white shoe confirms "All ravens are black". Which is crazy.
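To see the equivalence starkly, here is the standard first-order rendering (the notation is my gloss; nothing in the puzzle depends on it):

\[
\forall x\,(\mathrm{Raven}(x) \rightarrow \mathrm{Black}(x))
\;\equiv\;
\forall x\,(\lnot\mathrm{Black}(x) \rightarrow \lnot\mathrm{Raven}(x))
\]
% A white shoe a satisfies \lnot\mathrm{Black}(a) \wedge \lnot\mathrm{Raven}(a),
% making it a positive instance of the right-hand form -- and so, by the
% equivalence, evidence for the left-hand form as well.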

The problem here doesn't seem to lie with the rules of semantic or logical equivalence, but with the rules governing confirmation of generalizations (the two in this case are universal generalizations, but that's not essential). We really don't know what makes a piece of evidence count as confirmation of a general claim, and our immediate intuitions on the issue are inadequate to the task.

Taking another example, there's Hume's riddle of induction. Consider the claim "The sun will rise tomorrow". I can draw this conclusion on the basis of another form of induction: not generalization, but enumerative induction. That is, I can enumerate a series of past events which, together, support a statement regarding a future event. In this case, sentences like "The sun rose today", "The sun rose yesterday", and so on.
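Schematically (my shorthand, with R(d) abbreviating "the sun rose on day d"), the inference looks like this:

\[
\frac{R(d_1),\; R(d_2),\; \ldots,\; R(d_n)}{R(d_{n+1})}
\]
% The inference line is inductive, not deductive: every premise can be
% true while the conclusion is false.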

As Hume notes, though, this assumes that nature behaves in a uniform fashion, such that what was is a guide to what will be. And what guarantee do we have of that?

Just the following. Take the claim "Nature will behave in a uniform fashion". I can draw this conclusion on the basis of a set of past claims: "Nature behaved in a uniform fashion today", "Nature behaved in a uniform fashion yesterday", and so on. In other words, I can only draw the conclusion on the basis of enumerative induction, the very process I need that conclusion to support!
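Put in the same shorthand (now with U(d) abbreviating "nature behaved uniformly on day d"), the attempted justification is itself an instance of the very schema it is meant to underwrite:

\[
\frac{U(d_1),\; U(d_2),\; \ldots,\; U(d_n)}{U(d_{n+1})}
\]
% The same form of inference whose reliability was in question -- hence the circle.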

Hume suggests that induction is a habit, rather than something logically valid or reliable. But the lesson I tend to draw is similar to the one I drew from the raven paradox regarding generalization: our basic intuitions regarding enumerative induction aren't good enough to account for why enumerative induction is a good process of inference. We don't really know why enumerative induction leads us to acceptable conclusions, or when it does so -- and, to the point, when it does not.

Taking one more example, Nelson Goodman gives us the "new riddle of induction" -- or, more commonly, the grue problem. Consider the sentence "All emeralds are green". Let us define a new predicate, "grue", such that anything that is green before January 1, 2013 is grue, and anything that is blue after January 1, 2013 is grue. Every emerald observed so far has been green, and so, by the definition, also grue; our evidence therefore supports "All emeralds are grue" exactly as well as it supports "All emeralds are green". Why is it, then, that our observations about emeralds lead us to conclude that "All emeralds are green" rather than "All emeralds are grue"?

The natural suggestion is that "grue" is an artificial predicate, while "green" is not. But this is arbitrary. After all, define "bleen" such that anything that is blue before January 1, 2013 is bleen, and anything that is green after January 1, 2013 is bleen. Then "green" can be defined as anything that is grue before January 1, 2013, and bleen after January 1, 2013 -- from the grue/bleen standpoint, it is "green" that looks gerrymandered. (One could repeat the problem in the subject place, with "emerose" and "romerald" rather than "emerald" and "rose".)
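Writing t for January 1, 2013, the four definitions line up symmetrically (again, my notation):

\[
\begin{aligned}
\mathrm{grue}(x) &\equiv (\mathrm{green}(x) \wedge \text{before } t) \vee (\mathrm{blue}(x) \wedge \text{after } t)\\
\mathrm{bleen}(x) &\equiv (\mathrm{blue}(x) \wedge \text{before } t) \vee (\mathrm{green}(x) \wedge \text{after } t)\\
\mathrm{green}(x) &\equiv (\mathrm{grue}(x) \wedge \text{before } t) \vee (\mathrm{bleen}(x) \wedge \text{after } t)\\
\mathrm{blue}(x) &\equiv (\mathrm{bleen}(x) \wedge \text{before } t) \vee (\mathrm{grue}(x) \wedge \text{after } t)
\end{aligned}
\]
% Whichever pair is taken as primitive, the other pair needs a time-indexed
% definition; nothing in the logic privileges green/blue over grue/bleen.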

So, the appearance of "natural" or "artificial" when it comes to predicates and subjects -- in other words, descriptive terms generally -- is largely historical. It's a matter of where our language starts. Which means we really have no idea why we draw the inductive inferences with the content that we do.

This isn't just idle speculation. It's fundamental to our ability to generate knowledge. If the reliability of our common inferential practices -- generalization, enumerative induction, inductive inferences with any particular content -- can't be explained or justified, then how can we continue to use them?

Maybe Hume was right. Maybe it's just a habit. And there's no real basis for trusting these inferences at all.
