Irrationality and Cognition
John L. Pollock
Department of Philosophy
University of Arizona
Tucson, Arizona 85721
pollock@arizona.edu
http://www.u.arizona.edu/~pollock
Abstract
The strategy of this paper is to throw light on rational cognition and epistemic justification by examining irrationality. Epistemic irrationality is possible because we are reflexive cognizers, able to reason about and redirect some aspects of our own cognition. One consequence of this is that one cannot give a theory of epistemic rationality or epistemic justification without simultaneously giving a theory of practical rationality. A further consequence is that practical irrationality can affect our epistemic cognition. I argue that practical irrationality derives from a general difficulty we have in overriding built-in shortcut modules aimed at making cognition more efficient, and that all epistemic irrationality can be traced to this same source.
A consequence of this account is that a theory of rationality is a descriptive theory, describing contingent features of a cognitive architecture, and it forms the core of a general theory of “voluntary” cognition — those aspects of cognition that are under voluntary control. It also follows that most of the so-called “rules for rationality” that philosophers have proposed are really just rules describing default (non-reflexive) cognition. It can be perfectly rational for a reflexive cognizer to break these rules.
The “normativity” of rationality is a reflection of a built-in feature of reflexive cognition — when we detect violations of rationality, we have a tendency to desire to correct them. This is just another part of the descriptive theory of rationality.
Although theories of rationality are descriptive, the structure of reflexive cognition gives philosophers, as human cognizers, privileged access to certain aspects of rational cognition. Philosophical theories of rationality are really scientific theories, based on inference to the best explanation, that take contingent introspective data as the evidence to be explained.
1. The Puzzle of Irrationality
Philosophers ask, “What should I believe? What should I do? And how should I go about deciding these matters?” These are questions about rationality. We want to know how we, as real cognizers, with all our built-in cognitive limitations, should go about deciding what to believe and what to do. This last point deserves emphasis. Much work on rationality is about “ideal rationality” and “ideal agents”. But it is not clear what ideal rationality has to do with real, resource-bounded agents. We come to the study of rationality wanting to know what we should do, and this paper is about that concept of rationality.
Philosophers, particularly epistemologists, regard irrationality as the nemesis of the cognizer, and they often think of their task as that of formulating rules for rationality. Rules for rationality are rules governing how cognitive agents should perform their cognitive tasks. If asked for simple examples, philosophers might propose rules like “Don’t hold beliefs for which you do not have good reasons (or good arguments)”, “When you do have a good argument for a conclusion, you should accept the conclusion”, and “Be diligent in the pursuit of evidence”. Epistemological theories are often regarded as proposing more detailed rules of rationality governing things like inductive reasoning, temporal reasoning, and so forth, and theories of practical reasoning propose rules for rational decision making.
Philosophers seek rules for avoiding irrationality, but they rarely stop to ask a more fundamental question. Why is it possible for humans to be irrational? We have evolved to have a particular cognitive architecture. Evolution has found it useful for us to reason both about what to believe and about what to do. Rationality consists of reasoning, or more generally, cognizing, correctly. But if rationality is desirable, why is irrationality possible? If we have built-in rules for how to cognize, why aren’t we built to always cognize rationally? Consider the steering mechanism of a car. There are “rules” we want it to follow, but we do that by simply making it work that way. Why isn’t cognition similar? An even better comparison is with artificial cognitive agents in AI. For example, my own system OSCAR (Pollock 1995) is built to cognize in certain ways, in accordance with a theory of how rational cognition ought to work, and OSCAR cannot but follow the prescribed rules. Again, why aren’t humans like this? Why are we able to be irrational?
The simple answer might be that evolution just did not design us very well. The suggestion would be that we work in accordance with the wrong rules. But this creates a different puzzle. If we are built to work in accordance with rules that conflict with the rules for rationality, how does rationality come to have any psychological authority over us? In fact, when we violate the rules of rationality, and subsequently realize we have done so, we feel a certain pressure to “correct” our behavior and conform to the rules. However, if we are built to act in accordance with rules that lead to our violating the rules of rationality, where does this pressure for conforming to them come from? Their authority over us is not just a theoretical authority described by philosophical ideals. They have real psychological authority over us. Whence do they derive their authority? If evolution could build us so that rationality has this kind of post hoc authority over us, why could it not have built us so that we simply followed the rules of rationality in the first place?
It cannot be denied that we are built in such a way that considerations of rationality have psychological authority over us. But we are not built in such a way that they have absolute authority — we can violate them. What is going on? What is the role of rationality in our cognitive architecture? Why would anyone build a cognitive agent in this way? And by extension, why would evolution build us in this way?
These puzzles suggest that we are thinking of rationality in the wrong way. I am going to suggest that these puzzles have a threefold solution. First, the rules philosophers have typically proposed are misdescribed as “rules for rationality”. They play an important role in rational cognition, but it can be perfectly rational to violate them. Second, the reason we can violate them is that we are reflexive cognizers who can think about our own cognition and redirect various aspects of it, and there are rules for rationality governing how this is to be done. But, third, we do still behave irrationally sometimes, and that ultimately is to be traced to a particular localized flaw in our cognitive design.
Having proposed an account of irrationality, we will be able to use it to throw light on rationality. I will urge that the task of describing the rules for rationality is a purely descriptive enterprise, of the sort that falls in principle under the purview of psychology. Still, there is something normative about the rules for rationality, and I will try to explain that. Although, on this account, theories of rationality are empirical theories about human cognition, the nature of reflexive cognition provides philosophers with privileged access to rational cognition, enabling us to investigate these matters without performing laboratory experiments.
First, some preliminaries.
2. Rationality, Epistemology, and Practical Cognition
Much of epistemology is about how beliefs should be formed and maintained. It is about “rational doxastic dynamics”. Beliefs that are formed or maintained in the right way are said to be justified. This is the “procedural” notion of epistemic justification that I have written about at length in my works in epistemology (Pollock 1987, 1997; Pollock and Cruz 1999). It is to be contrasted with the notions of epistemic justification that are constructed for the sake of analyzing “S knows that P”, an enterprise that is orthogonal to my present purposes.
Procedural epistemic justification is closely connected to rationality. We can distinguish, at least loosely, between epistemic cognition, which is cognition about what to believe, and practical cognition, which is cognition about what to do. Epistemic rationality pertains to epistemic cognition, and practical rationality pertains to practical cognition. Rationality pertains to “things the cognizer does” — acts, and in the case of epistemic rationality, cognitive acts. In particular, epistemic rationality pertains to “believings”. Epistemic justification pertains instead to beliefs — the products of acts of believing. But there seems to be a tight connection. As a first approximation we might say that a belief is justified iff it is rational for the cognizer to believe it. Similarly, practical cognition issues in decisions, and we can say that a decision is justified iff it is the product of rational practical cognition.
It is a commonplace of epistemology that epistemic cognition is not simply practical cognition about what to believe. If anyone still needs convincing of this, note that the logical properties of epistemic cognition and practical cognition are different. For instance, if Jones, whom you regard as a reliable source, informs you that P, but Smith, whom you regard as equally reliable, informs you that ~P, what should you believe? Without further evidence, it would be irrational to decide at random to adopt either belief. Rather, you should withhold belief. Now contrast this with practical cognition. Consider Buridan’s ass, who starved to death midway between two equally succulent bales of hay because he could not decide from which to eat. That was irrational. He should have chosen one at random. Practical rationality dictates that ties should be broken at random. By contrast, epistemic rationality dictates that ties should not be broken at all except by the input of new information that renders them no longer ties. So epistemic cognition and practical cognition work differently. And of course, there are many other differences between them — this is just one simple example.
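To fix ideas, the two tie-handling policies can be sketched schematically. The sketch below is merely illustrative (the function names and the numerical representation of evidential support are artifacts of the illustration, not a serious model of either kind of cognition):

```python
import random

def practical_choice(options, expected_value):
    """Practical rationality: pick a best option, breaking ties at random."""
    best = max(expected_value(o) for o in options)
    tied = [o for o in options if expected_value(o) == best]
    return random.choice(tied)   # what Buridan's ass should have done

def epistemic_choice(support_for_p, support_for_not_p):
    """Epistemic rationality: on equally balanced evidence, withhold belief."""
    if support_for_p > support_for_not_p:
        return "believe P"
    if support_for_not_p > support_for_p:
        return "believe ~P"
    return "withhold belief"     # the tie is not broken; await new information
```

The asymmetry is visible in the final line of each function: the practical chooser always returns some option, while the epistemic chooser has a third output that is not a verdict at all.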
On the other hand, a common presumption in epistemology is that epistemic justification is a sui generis kind of justification entirely unrelated to practical cognition, and one can study epistemic rationality and epistemic justification without ever thinking about practical cognition. One of the burdens of this paper will be to argue that this is wrong. I will argue that for sophisticated cognitive agents like human beings, an account of epistemic rationality must presuppose an account of practical rationality. I will defend this by discussing how epistemic cognition and practical cognition are intertwined. I will suggest that epistemic irrationality always derives from a certain kind of practical irrationality, and I will give an account of why we are subject to that kind of practical irrationality. It turns out that for what are largely computational reasons, it is desirable to have a cognitive architecture that, as a side effect, makes this kind of irrationality possible. This will be an important ingredient in an account of epistemic rationality, and it will explain why it is possible to hold beliefs unjustifiably, or more generally to be epistemically irrational.
3. Rationality and Reflexive Cognition
First a disclaimer. One way people can behave irrationally is by being broken. If a person suffers a stroke, he may behave irrationally. But this is not the kind of irrationality I am talking about. Philosophers have generally supposed that people don’t have to be broken to be irrational. So when I speak of irrationality in this paper, I am only concerned with those varieties of irrationality that arise in intact cognizers.
The key to understanding rationality is to note that not all aspects of cognition are subject to evaluation as rational or irrational. For instance, visual processing produces a visual representation of our immediate surroundings, but it is a purely automatic process. Although the visual representation can be inaccurate, it makes no sense to ask whether it was produced irrationally. We have this odd notion of having control over certain aspects of our cognition and not over other aspects of it. We have no control over the computation of the visual image. It is a black box: neither introspectible nor cognitively penetrable. But we feel that we do have some control over various aspects of our reasoning. For example, you are irrational if, in the face of counter-evidence, you accept the visual image as veridical. Accepting or rejecting that conclusion is something over which you do have control. If you note that you are being irrational in accepting it, you can withdraw it. In this sense, we perform some cognitive operations “deliberately”. We have voluntary control over them.1
To have voluntary control over something, we must be able to monitor it. So mental operations over which we have voluntary control must be introspectible. Furthermore, if we have voluntary control over something, we must be able to decide for ourselves whether to do it. Such decisions are made by weighing the consequences of doing or not doing it, i.e., they result from practical cognition. So we must be able to engage in practical cognition regarding those mental operations that we perform deliberately. To say that we can engage in cognition about some of our mental operations is to say that we are reflexive cognizers. We have the following central connection between rationality and reflexive cognition:
Rationality only pertains to mental operations over which we have voluntary control. Such operations must be introspectible, and we must be able to engage in practical cognition about whether to perform them.
I will refer to such cognition as voluntary cognition. This need not be cognition that we perform deliberately, but we can deliberately alter its course. Rationality only pertains to voluntary cognition.
4. Q&I Modules
Next, another preliminary. In studying rational cognition, philosophers have often focused their attention on reasoning, to the exclusion of all else. But it is important to realize that much of our belief formation and decision making is based instead on various shortcut procedures. Shortcut procedures are an indispensable constituent of the cognitive architecture of any agent that must make decisions rapidly in unpredictable environments. I refer to these as Q&I modules (quick and inflexible modules). I have argued that they play a pervasive role in both epistemic and practical cognition (Pollock 1989, 1995). Consider catching (or avoiding) a flying object. You have to predict its trajectory. You do not do this by measuring the velocity and position of the object and then computing parabolic paths. That would take too long. Instead, humans and most higher animals have a built-in cognitive module that enables them to rapidly predict trajectories on the basis of visual information. At a higher level, explicit inductive or probabilistic reasoning imposes a tremendous cognitive load on the cognizer. We avoid that by using various Q&I modules that summarize data as we accumulate it, without forcing us to recall all the data, and then make generalizations on the basis of the summaries (Pollock 1989, 119ff).
Although they make cognition faster, Q&I modules are often subject to various sources of inaccuracy that can be corrected by explicit reasoning if the agent has the time to perform such reasoning. Accordingly, our cognitive architecture is organized so that explicit reasoning takes precedence over the output of the Q&I modules when the reasoning is actually performed, and the agent can often learn to mistrust Q&I modules in particular contexts. For instance, we learn to discount the output of the Q&I module that predicts trajectories when (often by using that module) we can predict that the flying object will hit other objects in flight.
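Schematically, the precedence relation looks like this (a deliberately simplified sketch; the function and parameter names are artifacts of the illustration, not a claim about how the architecture is actually implemented):

```python
from typing import Callable, Optional

def form_judgment(percept,
                  qi_shortcut: Callable,
                  explicit_reasoning: Optional[Callable] = None,
                  learned_distrust: bool = False):
    """Explicit reasoning, when actually performed, takes precedence over
    the Q&I output; learned distrust of the module can also suppress it."""
    if explicit_reasoning is not None:
        return explicit_reasoning(percept)   # slow but corrective
    if learned_distrust:
        return None                          # e.g. the ball will ricochet: suspend judgment
    return qi_shortcut(percept)              # default: the fast Q&I answer
```

The point of the sketch is that the Q&I module is not disabled; it is merely outranked whenever the slower process actually produces an answer.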
5. Practical Irrationality
I distinguished between practical rationality and epistemic rationality. By implication, we can distinguish between practical irrationality and epistemic irrationality. Practical irrationality is easier to understand. The explanation turns on the role Q&I modules play in practical cognition. Paramount among the Q&I modules operative in practical cognition is one that computes and stores evaluations of various features of our environment — what I call feature likings. Ideally, feature likings would be based on explicit computations of expected values. But they are often based on a form of conditioning instead (see my (1995), (2001), and (2006a) for more discussion of this). The advantage of such evaluative conditioning is that it is often able to produce evaluations in the absence of our having explicit beliefs about probabilities and utilities. It estimates expected values more directly. But it is also subject to various sources of inaccuracy, such as short-sightedness. Thus, for example, a cognizer may become conditioned to like smoking even though he is aware that the long-term consequences of smoking give it a negative expected value.
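To illustrate the contrast, a conditioned feature liking can be caricatured as a running average of immediate payoffs, blind to delayed consequences, whereas an explicit expected value weighs every outcome by its probability. The update rule and the numbers below are merely illustrative:

```python
def update_feature_liking(liking, immediate_payoff, rate=0.1):
    """Evaluative conditioning, caricatured as an exponential moving average
    of immediate payoffs: no beliefs about probabilities or utilities are
    needed, but delayed consequences never register."""
    return liking + rate * (immediate_payoff - liking)

def expected_value(outcomes):
    """Explicit computation: the sum of probability-weighted utilities."""
    return sum(p * u for p, u in outcomes)

# The smoker's conditioned liking tracks the immediate pleasure of each
# cigarette and converges toward +1, while the explicit expected value,
# which includes the improbable but catastrophic outcome, is negative.
liking = 0.0
for _ in range(100):
    liking = update_feature_liking(liking, immediate_payoff=1.0)
print(round(liking, 3))                             # about 1.0
print(expected_value([(0.8, 1.0), (0.2, -50.0)]))   # about -9.2
```

The two numbers disagree in sign, which is exactly the situation of the smoker who likes smoking while knowing it has a negative expected value.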
Decision-making is driven either by full-fledged decision-theoretic reasoning or by one of the shortcut procedures that are an important part of rational cognition. If a decision is based on full-fledged decision-theoretic reasoning, then it is rational as long as the beliefs and evaluations on which it is based are held rationally. This is a matter of epistemic rationality, because what is at issue is beliefs about outcomes and probabilities. If the decision is based on a shortcut procedure, it is rational as long as it is rational to use that shortcut procedure in this case. And that is true iff the agent lacks the information necessary for overriding the shortcut procedure. So the cognizer is behaving irrationally iff he has the information but fails to override the shortcut procedure. For instance, a person might have a conditioned feature liking for smoking, but know full well that smoking is not good for him. If he fails to override the feature liking in his decision making, he is being irrational. This seems to be the only kind of uniquely practical irrationality (i.e., practical irrationality that does not arise from irrationally held beliefs). We might put this by saying that smoking is the stereotypical case of practical irrationality. The smoker is irrational because he knows that smoking has a negative expected value, but he does it anyway.
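The rationality condition just stated can be summarized as a simple decision procedure (again a schematic sketch with invented names; real cognition is of course nothing like this tidy):

```python
def choose(options, ev_beliefs=None, feature_liking=None):
    """Decide using explicit expected-value beliefs when they are held,
    and otherwise using the conditioned feature liking (the shortcut)."""
    if ev_beliefs is not None:
        # The agent has the overriding information: rationality requires using it.
        return max(options, key=lambda o: ev_beliefs[o])
    # Lacking the overriding information, deferring to the shortcut is rational.
    return max(options, key=lambda o: feature_liking[o])

# The irrational smoker possesses ev_beliefs (smoking has a negative expected
# value) yet acts on the conditioned liking anyway, as if the first branch
# were simply skipped.
ev_beliefs = {"smoke": -9.2, "refrain": 0.0}
feature_liking = {"smoke": 1.0, "refrain": 0.0}
print(choose(["smoke", "refrain"], ev_beliefs=ev_beliefs))   # refrain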
What makes it easy to understand practical irrationality is that all decision making has to be driven by something, and if the cognitive system is not broken, these are the only ways it can be driven.
Overriding shortcut procedures is something one can explicitly decide to do. One can engage in higher-order cognition about this, and act on the basis of it. So overriding shortcut procedures is, in the requisite sense, under the control of a reflexive agent. Is the agent who fails to override a shortcut procedure just not doing enough practical cognition? That does not seem quite right. The smoker can think about the undesirable consequences of smoking, and conclude that he should not smoke, but do it anyway. He did all the requisite cognition, but it did not move him. The problem is a failure of beliefs about expected values to move the agent sufficiently to overcome the force of the shortcut procedures. The desire to do something creates a strong disposition to do it, and the belief that one should not do it may not be as effective. Then one is making a choice, but one is not making the rational choice. Note, however, that one can tell that one is not making the rational choice. Some smokers may deny that they are being irrational, but they deny it by denying the claims about expected values, not by denying that they should do what has the higher expected value.
I have suggested that uniquely practical irrationality always arises from a failure to override the output of Q&I modules, and I have illustrated this with a particular case — the failure to override conditioned feature likings. There is a large literature on practical irrationality,2 and I do not have time to survey it here. My general focus will be on epistemic irrationality instead. I have not done a careful survey of cases of practical irrationality, but I think it is plausible that uniquely practical irrationality always consists of the failure to override the output of Q&I modules. This source of practical irrationality seems to be a design flaw in human beings, probably deriving from the fact that Q&I modules are phylogenetically older than mechanisms for reasoning explicitly about expected values. It is important to retain Q&I modules in an agent architecture, because explicit reasoning is too slow. In particular, it is important to retain evaluative conditioning, because explicit reasoning about expected values is both too slow and requires too much experience of the world for us to base all our decisions on it. But it appears that evolution has done an imperfect job of merging the two mechanisms.
The upshot is that practical irrationality is easy to understand. My suggestion is that it may all derive from this single imperfection in our cognitive architecture. When I turn to epistemic irrationality, I will argue for the somewhat surprising conclusion that it too derives from this same source.
6. Reflexive Epistemic Cognition
Epistemic irrationality consists of holding beliefs irrationally. We have seen that rationality only pertains to mental operations over which we have voluntary control. Such operations must be introspectible, and we must be able to engage in practical cognition about whether to perform them. But why would a cognitive agent be built so that it has voluntary control over some of its mental operations? Why not build it so that it follows the desired rules for cognition automatically? What I will now argue is that there are good reasons for building a cognitive agent so that it is capable of such reflexive epistemic cognition.