Thursday, August 02, 2007

What does a successful reasons-explanation of action look like?

The project I've set myself is figuring out reasons-explanations of action. I'm a big fan of the idea of having particular questions for any given project, on the grounds that, without a question, you can just keep investigating and researching with no particular end in sight. Having a question sets an end: once the question is answered, the project's done. So, the questions I'm trying to answer are two. First, what is the form of a reasons-explanation of action? Is it causal? Lawlike? Teleological? Something else? And, second, what consequences does this have for (a) the nature of reasons, (b) the nature of agency, and (c) metaethics?

Particularly, under (a), I'm interested in the question of whether reasons for action are psychological (e.g., I went to the store because I wanted some milk and believed they had it) or not (e.g., I went to the store because it had milk). I'm also interested in the question of whether reasons are simple (i.e., when I do x, I do so for one reason) or complex (i.e., when I do x, I do so for a constellation of reasons), and whether they are plural (i.e., any given reason can favour multiple actions, and any action can be favoured by multiple reasons) or not (i.e., any given reason can favour only one action or any action can only be favoured by one reason).

Under (b), I'm interested in what agents -- basically, people, but it could include non-humans, and even highly-intelligent robots or computer programs -- have to be like in order to be able to act on the basis of reasons and use reasons in reasoning about potential courses of action.

Under (c), I want to talk about whether there's a viable distinction between the "merely" practical and the moral. I think there is, but it's contentious, so I need to argue that my view of reasons-explanations gets the distinction off the ground. I also want to tackle the debate between motivational internalists and externalists, i.e., the issue of whether grasping or understanding a reason automatically motivates one to act, or whether motivation is a separate "add-on" to grasping reasons. (I'm leaning towards the latter at the moment, but I have no firm convictions.) Maybe also some stuff about moral ontology and epistemology, which would seem to follow from the answers to (a) and (b), combined with the first issue under (c).

The first half of the first section of the dissertation will argue for three criteria on a successful/satisfying reasons-explanation of action. The second half of that section will argue that the teleological model of reasons-explanations satisfies them, more fully characterize the teleological model I'll be working with, and defend the teleological model against objections. The second section will argue that no other model can satisfy these criteria, either in particular incarnations, or in general. I'll consider three types of non-teleological models: causal models; law-based and anomological generalization-based models; and "the rest". The third section will trace out the consequences of the teleological model, as mentioned under the second question above. What I want to tackle here is outlining the arguments of the first half of the first section.

My sense is that we offer reasons-explanations of actions because we're looking to understand the actions. It's not just for fun or because it serves some sort of social function or what have you. And what we're trying to understand is why that agent performed that action. In other words, we're trying to understand four things:
  1. How did the reasons lead that agent to that action?
  2. What did the agent see (in the reasons) that made the action worth doing?
  3. What was the action that the agent did, exactly?
  4. What is the agent like such that he does such actions for such reasons?
(1) has often been interpreted causally. It need not be. To take it as meaning something causal requires assuming that "leading", in this sense, can only be understood in a mechanical, proximal causation sorta way. But, as Michael Smith points out (and I'm sure others do as well), you don't have to be committed to a causal understanding just because you think you can account for how reasons lead agents to actions. G. F. Schueler makes a similar point, but in a different way: he argues that there's an understanding of "cause" which just means "whatever follows the word 'because'". In other words, we can reinterpret causal explanations such that they are co-extensive with successful explanations.

What matters for me is that there are several other viable ways of understanding the way in which reasons "lead" to action, and it's by no means obvious (tradition notwithstanding) that it has to be causal. Perhaps reasons lead agents to action because there are true generalizations such that certain reasons had by agents always lead to certain actions. These generalizations could be laws of nature (not a terribly plausible line, but a possible one), or just useful "rules of thumb" for us humans. We could also say that reasons lead agents to action because when we look at the action, we can construct a history that places the action as a response to something in the agent's past. Rüdiger Bittner has this view. I'm not sure how much sense it makes, because I'm not sure what sense can be made of the idea of a "response" without falling into good ol' mechanistic, proximal causation, but I'm willing to allow it may be interestingly different. Finally, we could say that reasons lead agents to action because reasons show agents (or simply are) goals they could accomplish. This would require making sense of what pursuing a goal is, a task which I don't think is impossible.

(2) should not be contentious, if it's interpreted right. (J. David Velleman notwithstanding.) Any agent who does something for reasons which show it to be completely worthless is not really acting for those reasons. He's acting either for some other point (e.g., because he thinks doing something worthless is, in this situation, actually worthy) or he's being compelled somehow. It's important, as said, to read this right. It's not that something has to be worthy, all things considered, come what may, after examining all possible courses of action and carefully weighing the value found in each. All that's being claimed is that the action has to be somewhat worthy: that is, the action has at least minimal practical value. Otherwise, we really haven't found a reason for that agent to do that action.

(3) may sound odd, but part of what picking out reasons does is also to pick out an action. Certain reasons are inappropriate for certain actions. That is, as Donald Davidson said, reasons-explanations only explain relative to particular ways of describing the action. "To turn on the lights" is not a reason for "startling the burglar" nor for "adding $.30 to my electricity bill this month", although it may be a reason for "flicking the lightswitch". So, (3) says nothing more than that reasons and actions come together. (Although, as I've said already, part of what I want to do is figure out how tightly reasons and actions are bound together. My current thought is that the connection is pretty tenuous.)

(4) comes together with (2). If we show that the action was worth doing in the agent's eyes, then what we're doing is showing that the agent was a being such that the reason is worthy for him. That is, giving a reason that the agent found valuable requires that the agent be such that he accepts such value and is capable of recognizing it in this situation, and acting upon this recognition. If the agent doesn't care about animals, or didn't see one, or was unable to operate the car properly, then the reason for his action of turning the car sharply to the right when noticing a squirrel in the road can't be "because he doesn't want to hurt the squirrel". We have to find something he actually values. Perhaps his companion is squeamish and he doesn't want to offend her. Or perhaps he just bought new tires and doesn't want to get squirrel guts on them. (A good theory of reasons-explanations has to accept that the minimal practical value may not be equivalent to the all-things-considered practical or moral value.)

So, I can come up with three criteria that any reasons-explanation of action must satisfy:
  (a) It must account for the connection between reasons and action in the agent [from (1) above].
  (b) It must give reasons that the agent genuinely finds worthy [from (2) and (4)].
  (c) It must fit the action within the pattern implied by (a) and (b).
Let me say more about these.

(a), as said, doesn't have to be read causally. What's important for (a) is showing that the reasons, the action and the agent collectively form some sort of pattern. If they can't be fit together in some way (and I know "fit together" has to be given some more specific formulation), then there's just no explanation to be had. Either the action is characterized/described incorrectly, or the reasons are, or the agent is (or some combination). Or, of course, this is just not a case of acting for reasons. Not everything agents do counts as acting for reasons. (Whether or not action has to be identified with acting for reasons is a problem for another time, methinks.)

(b) claims that it's not enough to give reasons that the agent finds worthy but that really aren't. That is, subjective value doesn't cut it. This may seem odd, but the reason is simple: subjective value, at least in this context, is explanatorily unparsimonious. This is because everything that subjective value is introduced to do can be covered by objective value, but not everything that objective value does can be covered by subjective value.

There are two distinct ways in which anything may be valuable, which may be called "subjective" and "objective". If we're talking about subjective value in this context, then the claim is that an agent has a reason to φ if the agent saw something about φ-ing that, in the agent’s eyes, made φ-ing worth doing. It may be that the feature(s) the agent focused on had no actual value, or even had disvalue. By contrast, if we're talking about objective value, then an agent has a reason to φ if the agent saw something about φ-ing that actually made φ-ing worthwhile.

Subjective and objective value come apart in two types of case, when they disagree on whether or not value is present, i.e., (i) when there is subjective value but no objective value, and (ii) when there is objective value but no subjective value. The supporter of subjective value can accommodate (i) easily, as it is just the case he would appeal to in order to justify asserting the existence of subjective value. After all, if the agent believes that φ-ing is worth doing, but is wrong, then surely his belief is enough to make φ-ing worth doing for him and give him some reason to do it.

However, this supporter will have difficulties if he concedes that in (ii) there is value present, just not value for some agent. If objective value is allowed in (ii), then what should be said about its absence in (i)? Is it the case that objective value is a "fallback" from subjective, and thus only appears when there is no subjective value? This would be a strange and awkward view. Alternately, if objective value is as viable as subjective value, is (i) a case with both the lack of (objective) value and the presence of (subjective) value? Surely this is contradictory. Perhaps this supporter could just deny there is any reason to φ in (ii), but this sounds extremely strange. A valuable feature in φ-ing, even one not appreciated, surely does in some sense give one reason to φ. Denying there is any reason here just looks dogmatic. The only remaining option seems to be to deny objective value altogether. But I think this sort of conclusion should fall out of an understanding of action, not be used to shore it up.

Still, subjective value may, for all its problems, be superior to objective value. To test this, suppose we completely reject subjective value. How would we thus be forced to interpret (i) and (ii)? In (i), a supporter of objective value can simply assert, as one sensibly might, that it is a case of error on the part of the agent. The agent believed, incorrectly, that φ-ing was worth doing. Consequently, if the agent did in fact φ, it would follow that the agent had no reason to do so, but only believed he did. Cases of believing, erroneously, that we have reason for what we do seem fairly common, so this perspective on (i) is fairly comfortable. The supporter of objective value's view of (ii) is just as simple and comfortable: he would claim that, in (ii), the agent has just failed to appreciate a valuable feature. It is another kind of error, the mirror of the previous: instead of taking something to be valuable that actually was not, the agent has failed to consider something valuable that actually was. Again, I think that cases of overlooking important features of situations are fairly common.

Overall, then, trying to bring subjective and objective value together leads to a series of untenable contortions, and subjective value all on its own can't just be presumed. (Yeah, I know that latter one is weak. I'm still thinking about the consequences of rejecting subjective value at this point. I suspect it requires extremely weird interpretations of (i) and (ii), but I'm not sure how to phrase it.) Given that objective value has no need to go through these contortions, I conclude that it should be accepted.

(c) is sort of inevitable. Once we know what the reasons are and what the agent is like, there are going to be limits on what the action can be like. On its own, a theory of reasons-explanations probably won't put too stringent limits on what actions are -- events, processes, states of affairs, what have you. But, at least how we describe or characterize the action is going to be limited.
