Wednesday, October 17, 2007

Open access series: Improved access.

(On an unrelated note, philosophy blogging is on hold until I figure out what on earth I think about motivation. My next dissertation-related argument to conjure up is on just that topic and I don't have my own thinking entirely straight yet. Hopefully this week, but it might be next.)

For those who have lost their place amid the various delays, the introduction to this series is here.

So far (here), it's turned out that the status quo in access to research is not acceptable: all arguments against extending access are dismal failures. However, arguments in favour of total open access (TOA), without restrictions, also tend to fail (here). The general problem with pro-TOA arguments was what I called the "totality presumption" (TP): the claim that, once we accept that the status quo is no good, we must move straight to TOA, without stopping at a more limited, but still more open, model of access -- improved access, or IA. As I said last time, there are two ways for TOA to triumph over IA:
I can see two possibilities. First, arguments in favour of IA over TOA and the status quo fail. Thus, TOA is a fallback from the failure of IA. This is unstable, though, as it depends on the (lack of) creativity of defenders of IA. Second, IA is not stable in its own right; it will, inevitably, tend either to TOA or the status quo. But this is just the black and white fallacy. So, for TOA to be preferable to IA -- that is, for TP to be true -- it must be that there are no good arguments for IA.
So, what arguments do I have for IA? Just two:
  1. The elitist argument
  2. The epistemic morality argument
I'll deal with these in order.

The elitist argument is straightforward. It claims, quite frankly, that some people are better able than others to make use of research (elitist premise 1, or EP1). It also claims, again quite frankly, that some people are hopelessly incapable of making use of research -- indeed, they will actually harm themselves, in some way (elitist premise 2, or EP2). It then claims that, since it is often difficult to tell whether any given person falls into the former or the latter category, we should limit access to research only when it is clear that a given person falls into the latter category (default premise, or DP). Finally, it claims that, in some cases, it is clear that some people do fall into the latter category (factual premise, or FP). Thus, it follows that we should improve access, but not have total open access.

Looking at the premises in turn, it's important to note that EP1 and EP2 are really flip sides of the same coin. One says that more value is produced by research in some people's hands, and the other says that disvalue is produced by research in some other people's hands. (And this, of course, may vary depending on the kind of research. The same person can, for example, produce wonderful value with economic research and tremendous disvalue with medical research.) The driving idea behind this is that actions done for reasons are efficaciously value-seeking. (See here for some early comments on this matter, and here (.pdf) for the latest considered version of the thesis.) By this I just mean that, when we act for reasons, we are taking courses of action that are the most effective ways to achieve the value that is in our reasons (which are goals). So, when somebody accesses research -- which, as a deliberative and intentional action, should probably always be an action for reasons -- they are taking a course of action that is the most effective way to achieve the value in their goals, but only insofar as they are practically rational.

The understanding of acting for reasons as efficacious value-seeking is an ideal, after all; it's a governing standard, not an empirical generalization. Very often, the ideal will not be achieved, and people will fail to act rationally. In the simplest case, they simply take ineffective means to their goals; in more complicated cases, they may have goals that lack objective value, or they may even be unable to perceive their reasons for acting correctly, due to some cognitive deficit.

Whatever the sources of practical irrationality, we have to acknowledge when designing policies that the possibility of practical irrationality is real and, often, very strong. So, our policies should aim to control the potential damage wrought by cases of practical irrationality. After all, setting policies is itself an action done for reasons, and hence should effectively seek the value of the goals of making policy.

The question, then, naturally enough, is: what is the goal of setting a policy? Cards on the table: I'm broadly a socialist, and thus think that every policy should, in the end, be of overall benefit to society. So, the goal in setting a policy on research access should be serving the overall benefit of society. But if the policy (as in the case of TOA) allows individual people, through practical irrationality, to fail to efficaciously seek value in their accessing of research, then the policy has failed to achieve its goal. Thus it, too, is practically irrational. Since we don't want an irrational policy (I presume!), the policy should block this possibility.

(It should be noted that no one can claim that the status quo satisfies the basic requirement of efficaciously seeking value, for the mechanisms by which the status quo controls access to research are hopelessly inefficient, and simply fail to restrict access only to those who will produce disvalue with their access.)

Now, the policy should also, strictly, require those who would produce benefit to society to access research. What form the requirement should take will be determined by empirical considerations -- after all, the requirement shouldn't be an inefficient method of achieving its goal. This is where DP comes in. DP states that, as a default, everyone has access; it is only those who are demonstrably unable to produce anything but disvalue from access to research who should be prevented from having access to it. I have a few reasons for this. For one, socialist though I be, I am well aware of the dangers of an extremely powerful and interventionist government, or any other institution. So, the onus should be on the administrators of research access to prove, to some high standard of proof, that an individual should not be allowed access to particular research. The process for this should, of course, be fully transparent, subject to review, and so on. (No secret tribunals!) For two, it's often extremely difficult to determine whether value or disvalue will result from allowing someone access to research findings. So, it's only at the margins that there's ever going to be certainty (or reasonable certainty) regarding whether granting access to a given person will produce benefit to society. And when the certainty is that they will produce benefit, then, of course, access should be granted. When the certainty is that they will produce disvalue, then, of course, access should be refused. Thus, DP is an asymmetric principle: in order to prevent institutional overreach, and in order to ensure that value is produced, we default in favour of access.

Finally comes FP. FP is crucial for the argument; if FP is false, then IA collapses into TOA. If there are no clear cases where granting individuals access to research will probably result in disvalue, then it follows, by DP, that access should just be granted to everyone. And this is TOA. But I think FP is obviously true. It just seems so clear that there will be some people, somewhere, who should be restricted in their access to some research. The most obvious example I can concoct involves weapons research. Obviously, if everyone had access to research data regarding, say, new chemical weapons, this could tremendously harm society (and, indeed, individual people). To not accept this example is, I think, a little naive. Whether or not this sort of case is typical is not the point; all FP requires -- and thus all the elitist argument requires -- is that there be such cases.

The second argument I have is what I've called the "epistemic morality argument". It's sort of an odd name, perhaps, but the argument it names is quite simple. It claims that knowledge is sometimes a bad thing, or at least not always a good thing (the epistemic value premise, or EVP). It then claims that bad knowledge should be restricted (the restriction premise, or RP). It also claims that bad knowledge should only be restricted when it is clearly bad (the clarity premise, or CP). Thus, given that there are clear cases of bad knowledge, some research should be restricted.

This is very similar to the elitist argument -- they're driven by the same basic intuition, namely that there could be real harms to opening up research access to everyone -- but focuses entirely on the effects of knowledge. There is no consideration of whether people will or will not make good use of knowledge.

RP is a consequentialist-styled claim. I'm not, technically, a consequentialist about morality, but I do think consequences often matter. In this case, since I don't see any non-consequentialist considerations that would trump the consequences, it seems to me that consequences must be all that matters here. So, RP is an instance of the general idea that we should limit the bad and promote the good.

EVP is a frank denial of a claim that goes back at least to John Stuart Mill, namely that it's always better to have, or to know, the truth than not. EVP is also an instance of the claim, going back at least to Plato, that we should sometimes restrict the truth because it's not always good. I don't have much to add in terms of defending the claim, as this dispute has always struck me as being at something of a stand-off: Mill and his followers assert that truth is always good, while Plato and his followers assert that it is not always good. Mill's defense of the view, for what it's worth, comes as part of a discussion of the ideal sort of person. According to Mill, the best person is an individual generally unrestricted in his conduct, save when he may bring harm to others through his actions. This person is creative, energetic, and engaged in the public life of his society. So, according to Mill, truth is always good because (basically) it produces these sorts of creative, energetic, engaged people -- what he called the "active personality".

Now, I'm all for the active personality, but there's some slippery reasoning here. Mill wants to say that allowing the truth to be freely disseminated serves the interests of all people by allowing them to become active people. And truth is always good because it produces these active people. But what, really, is the evidence that truth does this? And, to the point, where is the consideration of the fact that societies are not generally made up of active people? Truth may be good in the hands of the active, but when a society doesn't have active people, why should we allow truth to roam free? (If the answer is that it will make them into active people, the first question recurs: what is the evidence that truth produces active people?)

On the other hand, Plato's claim comes in the midst of a discussion of how best to promote the overall social good. And, according to Plato, we do this by giving people only the truths they need in order to function in their social role. (He also says that we should sometimes lie to people in order to help them function, which I'm not so keen on.) This seems a generally good rule of thumb to me. You don't tell a student in an introductory class in philosophy to read Kant's entire corpus and understand it. That would be unfair, probably impossible, and perhaps even cruel. Instead, you give the student some introductory-level readings, to help them develop the skills necessary to finally tackle something like Kant's corpus. In other words, you take people as they are, not as they will be, and allow them to access the knowledge they can deal with.

And this is where CP comes in. According to CP, much as with DP in the elitist argument, we start with the assumption that the knowledge is not bad, and only restrict it when it is clear that the knowledge is bad. The reasoning behind CP is much the same as behind DP. Since we often can't be sure that knowledge is bad, we should err on the side of limiting institutional control (i.e., dictatorial power). When we can be sure, then: if the knowledge is reasonably considered to be good, we should allow it; and if the knowledge is reasonably considered to be bad, then and only then should we restrict it.

So, these are the two arguments I have in favour of IA. If successful, they defeat the totality presumption, and thus undercut the arguments in favour of TOA. Since the status quo has no viable defense, it follows that IA is the preferable policy.
