Sunday, August 13, 2006

Weekend Big Ideas: Should we always hear both sides?

This was inspired by a very old post at Fake Barn Country (which now appears, unfortunately, to have gone moribund). Do we have to hear "both sides" to a story before we make a judgement about what's true? First, we should put aside an extremely bad reading of the "both sides" claim, namely that there are always two and only two sides to any story. That's clearly nonsense. The spirit of the claim seems to be, though, that one is obligated to undertake a fair and complete understanding of all views on a subject before one forms a settled or considered opinion.

On its own, though, that reading won't quite work, either. Some subjects are just obvious. The Sun is hot. Water is wet. 2+2=4. And so on. I don't need to consider opposing views -- the Sun is cold, water is only wettish, 2+2=5 -- in order to believe that these are true. The reasons I don't need to consider opposing views are not all the same. That the Sun is hot I know because I know the surface temperature, in at least an approximate sense, and I know it vastly exceeds what I could tolerate. I call temperatures I have a hard time tolerating "hot", so the Sun is very certainly hot. I consider water to be wet because all instances of water I have encountered produce in me the experience which I associate with wetness. 2+2=4 because that is a definitional truth of arithmetic, which can be grounded in various more fundamental mathematical views. Or, in summary, I know these by inference from other beliefs, by inference from prior experience, or by inference from rules of a system. To really simplify even further, if the issue is one which can be settled by inferences from what I've already got at hand, then it seems I don't need to consider opposing views before settling on an opinion and calling it true.

Of course, I could always be wrong. This is not an infallibilist doctrine, by any means. The question is at what point my obligation to only believe things I have good reason to believe is discharged, and it seems that it usually gets discharged when I can make good inferences from what I already know.

This method won't settle all issues. Sometimes, I'm exposed to things I know nothing about, or sufficiently little about that I cannot make any good inferences from what I already know. Sometimes I exceed my past experience and experience something novel. I had no idea what bulgur would taste like until I tried it (answer: kind of like a combination of wheat and sesame seeds). I know nothing about neurophysiology, and claims about how brains operate tend to confuse me on that basis. And so on.

So, when the "good inferences" method fails, am I then obligated to consider all possibilities before making a considered judgement? The answer, simply, is "no". The reason the answer is "no" is that my epistemic obligation discharges when I can make a good inference from what I already know. The failure to pass that hurdle is what sends me off with another method -- I have insufficient information already at hand to support a good inference, so I try to get more information. Once I pass the threshold, though, once I gain enough information to leap over the hurdle of being able to make a good inference, then my usual strategy can kick in again and my obligation can be discharged.

The objection that could be raised here is that I'm justifying dogma and closed-mindedness. I'm doing no such thing: what I'm suggesting is that I only have good reason to ensure that everything I believe is supported by good inferences. When my inferences are challenged, either by attacking the chain of reasoning or by attacking the beliefs on which the reasoning is founded, then I no longer can claim to believe things based on good reasoning. I would then have two options: show that the attack fails (that is, that my reasoning actually is good or its founding beliefs well-grounded), or redeploy my reasoning (and thus potentially refine my beliefs, possibly radically).

The claim is not that, once one has passed the obligation of good inferences, then one never has to think again -- if I were saying that, then the charge of dogmatism and closed-mindedness would be apt. Instead, the claim is that there is a threshold beliefs must pass in order to be taken seriously: they must be founded on good inferences from past information. Once their passing that threshold is called into question, then the obligation cannot be said to be fulfilled, and one is therefore obligated to re-found one's beliefs on good inferences.

That, I think, is really the best sense that can be made of the "two sides" claim: that one shouldn't believe things dogmatically or instinctively, but because one has good reasons for them. In many cases, though, this doesn't require one to do anything more than inspect what one already believes. Still, it remains important to engage in open and fair debate with differently-minded people, for only through that mechanism can one ensure that one's inferences are genuinely good.

2 comments:

undergroundman said...

When my inferences are challenged, either by attacking the chain of reasoning or by attacking the beliefs on which the reasoning is founded, then I no longer can claim to believe things based on good reasoning.

Is that not what an opposing viewpoint is? An attack? Aren't you justified in responding to them as such?

The reason the answer is "no" is that my epistemic obligation discharges when I can make a good inference from what I already know.

How can I know whether an inference is truly good until I know all viewpoints on it? How can I decide who is right when we talk about experts who knew far more than me, until I fully understand how they justified their views? How can I choose between Milton Friedman and Keynes? Smith and Marx? Nietzsche and [insert opposing philosopher]?

It seems to me that you're saying that, as long as you feel reasonably secure in your beliefs, you can ignore alternative ones instead of coming to terms with them head-on. But then you say that if your inferences are attacked, you should come to terms with that attack. I probably missed something. Or are you saying that we can ignore opposing views if they aren't based on reason? How are we to know if they are or not if we don't understand them?

It reminds me of two major contemporary economists, William Easterly and Jeffrey Sachs (search for them in Wikipedia). They disagree on the right way to end poverty and are allegedly not on speaking terms (according to Wikipedia, no citation -- but still believable). It seems to me it would be best if they simply got together and located exactly where their differences lie. Then they can agree to disagree and consider testing their differing hypotheses.

Meh. These days I don't pass firm judgment on anything I have even the slightest doubts about, which means a lot. It hinders me in learning, since I'm often reluctant to absorb so-called laws (like the categorical imperative) even when I can't pin down my exact problem with them. But in the long run I think it's better.

ADHR said...

Is that not what an opposing viewpoint is? An attack? Aren't you justified in responding to them as such?

Not entirely. By "attack" I mean a well-founded opposing viewpoint. As the Python sketch says, there's a difference between argument and contradiction; so, merely being contradicted is not an "attack" in the pertinent sense.

How can I know whether an inference is truly good until I know all viewpoints on it? How can I decide who is right when we talk about experts who knew far more than me, until I fully understand how they justified their views? How can I choose between Milton Friedman and Keynes? Smith and Marx? Nietzsche and [insert opposing philosopher]?

As I said, I start with what I know and I reason from there. When the reasoning is undercut, or the premises successfully undermined, then I can see the reasoning is bad. (And, keep in mind, we're speaking deontically here: it's what I'm obligated to do. To be sure of being right, I'd clearly have to go beyond what I'm obligated to do, just as to be morally good I have to go beyond what I morally must do.)

It seems to me that you're saying that, as long as you feel reasonably secure in your beliefs, you can ignore alternative ones instead of coming to terms with them head-on. But then you say that if your inferences are attacked, you should come to terms with that attack. I probably missed something. Or are you saying that we can ignore opposing views if they aren't based on reason? How are we to know if they are or not if we don't understand them?

The idea is that as long as I've, in my judgement (and who else's could I use?), reasoned well from what I already know, I don't have to go and seek out new information or new ideas. I've discharged my epistemic obligations. However, when a well-founded critique (what I glossed as an "attack") is made, then I have to take it seriously and either defeat it or change my own views.

Not understanding these critiques is an important problem. In that case, I'd suggest we should first try to understand them, and then decide whether they are well-founded. A critique which can't be made clear could be considered fundamentally ill-founded; one that is simply badly expressed or not fully explained is not necessarily ill-founded, so further exposition may be needed before a judgement in that regard could be made. The situation when one's views are attacked is thus marginally more complex: the attack could be obviously bad, obviously worth taking seriously, or not obviously either. In the first case, it can be ignored; in the second, it must be addressed somehow; and in the third, it must somehow be better expressed, if possible -- which would then reduce it to one of the first two cases.

It reminds me of two major contemporary economists, William Easterly and Jeffrey Sachs (search for them in Wikipedia). They disagree on the right way to end poverty and are allegedly not on speaking terms (according to Wikipedia, no citation -- but still believable). It seems to me it would be best if they simply got together and located exactly where their differences lie. Then they can agree to disagree and consider testing their differing hypotheses.

You'd think so, wouldn't you? Especially since there's a big ongoing debate on these sorts of global justice issues in political philosophy. AFAIK, diametrically opposed philosophers (those who favour foreign aid solutions, those who favour institutional changes, those who want to leave the market to do its own thing, etc.) all still basically get along with each other.

Meh. These days I don't pass firm judgment on anything I have even the slightest doubts about, which means a lot. It hinders me in learning, since I'm often reluctant to absorb so-called laws (like the categorical imperative) even when I can't pin down my exact problem with them. But in the long run I think it's better.

The consequence, of course, is that there's very little you can ever claim to know. Which is one of the reasons why I want to set the bar for "knowing" lower. The cost is that knowledge is then not even remotely certain, but simply whatever is based on the best reasons at hand; I tend to think that's the lesser cost to pay.

The categorical imperative is deeply problematic. I don't know of many moral philosophers these days who take it on without at least some qualification. My sympathies, morally, are broadly deontological, but I don't buy Kant's views in this regard. (If you want a particular problem with it, consider this: Kant thinks that the universalizability formulation (i.e., act only on the maxim that can be willed as a universal law) and the personalization formulation (i.e., always use persons as ends in themselves, never as a means only) are equivalent. One of his famous examples is the murderer at the door: Kant says you should tell the murderer where to find his victim because you cannot universalize lying. It seems, though, that in telling the murderer the truth, you use his victim as a mere means to the fulfillment of your moral obligations. So, are the formulations really the same?)