Omissions: an Exclusion Problem for Causalism

I

Causal theories of agency are the norm nowadays. According to theories of this type (“causalist” theories, for short), actions are bodily movements or states appropriately caused by mental states of the agent (belief and desire pairs, intentions, decisions, etc.). Those mental states are the reasons for which the agent acts. The causal link between the mental states and the action differentiates the specific reasons for which the agent acts from the possibly more inclusive set of reasons that he has for acting. Causalism is traditionally attributed to Davidson and his followers (Davidson (1963)).

Causalism faces several problems. I will mention two of them.[1] First, according to non-reductive physicalism, a common view in the philosophy of mind, mental states are realized by, but not identical to, physical states (presumably, neural states). Thus, for every bodily state of an agent that mental states allegedly cause, there is a causal explanation that appeals only to the underlying neural states. Those neural states are sufficient to account for the ensuing bodily states. As a result, on pain of committing ourselves to an overabundance of causes, it seems that we should conclude that the bodily states are not really caused by the mental states but by the neural states. In other words, mental states are “epiphenomenal” and thus the causalist cannot appeal to them in his account of agency. This is the problem of causal exclusion for the mental by the physical. (See, e.g., Kim (1993a).)[2]

A second problem for causalism is that it is not obvious that causalist theories have the resources to capture the way in which omissions can also express agency. Omissions (at least “intentional” omissions, or things that agents fail to do intentionally) are an important way in which our agency is manifested. For example, it seems that my voluntarily refraining from saving a child who is drowning in a pond nearby says as much about me as an agent as certain “positive” actions or commissions that I may perform.[3] Thus, it looks as if a comprehensive account of agency should encompass omissions. However, on some views of causation, only events (i.e. positive occurrences) can be causes and effects. On the assumption that this view of causation is true, it is hard to see how omissions and other absences could enter into causal relations. Hence, it is hard to justify the causalist claim that omissions (like actions) are caused by the agent’s mental states.

These problems are not intractable. The first problem can be dealt with in a variety of ways, for instance, by arguing that the kind of causal redundancy that mental and physical events give rise to is not problematic, or that mental and physical events can both have causal powers. The second problem, in turn, can be addressed by giving a persuasive account of causation on which omissions (and other absences) are capable of entering into causal relations.[4] I will argue, however, that there is a problem for causalism that arises even if these two traditional problems are solved, and that combines features of both: a new problem of causal exclusion specifically for omissions. I lay out the new problem in the next section.

II

In order to accommodate omissions, a causalist should adopt a liberal conception of the notion of bodily movement (or state) on which non-movements also count as bodily movements (or states) and can be caused by mental events (or states) of the agent. Consider, for instance, my intentionally failing to jump into the water to rescue a drowning child. The causalist should say that some mental events or states of mine, e.g., my decision not to jump into the water, caused a certain bodily state of mine (or the absence of a movement of a certain type), e.g., my not jumping into the water. This is parallel to the way in which, if I had intentionally jumped into the water to save the child, some mental events or states of mine, e.g., my decision to jump in, would have caused the bodily movement consisting in my jumping in. In what follows I focus on decisions as the relevant mental entities, but nothing essential hangs on this (we could also take them to be intentions, belief/desire pairs, etc.).

The causalist, then, is committed to the truth of:

(Claim 1) My deciding not to jump in caused my not jumping in.

At first sight, Claim 1 might seem very plausible: it seems that I didn’t jump into the water because I decided against doing so. On the face of it, decisions (and other mental events or states) can cause people not to do things just as they can cause them to do things. For instance, it seems that my abstaining from voting in an election can be the result of a careful process of deliberation ending in a decision not to vote, just as my voting for a certain candidate can be the result of a careful process of deliberation ending in a decision to vote for that particular candidate. As a result, assuming that there is no problem with absences entering into causal relations, or with mental events and states in general entering into causal relations (that is, assuming that the two aforementioned problems for causalism can be overcome), Claim 1 appears to be true.

For the purposes of this paper, I will grant that omissions and absences in general are capable of entering into causal relations and that mental events and states can be causally relevant. Still, I will argue that there are specific problems for the claim that, in the case of an intentional omission, the relevant decision causes the relevant bodily state (and the same goes for other mental events or states of the agent). In particular, I will argue that, in the case of my intentionally failing to save the child, my decision not to jump in doesn’t cause my failure to jump in.

An implicit assumption so far has been that omissions should not be identified with actions. In particular, this assumption requires that an agent’s omission be distinguished from any action that the agent might have performed in place of the action omitted. For instance, my not saving the drowning child should be distinguished from my eating ice cream on the shore at the time when I could have been saving the child. This is something that almost everyone agrees with (note, in particular, that there would be no special problem of omissions if omissions were simply identical with actions). There are at least three reasons to preserve the distinction between omissions and actions. First, there might not be an action that I performed instead of saving the child. Second, there might be more than one. And third, even if there had been one and only one action, it still seems wrong to identify it with the omission. For instance, it seems wrong to say that I am responsible for what happened to the child in virtue of eating ice cream on the shore. If I am responsible, it’s because of what I didn’t do, not because of what I did in its place. Thus my failing to save the child, that in virtue of which I am responsible for his death, is not to be identified with my action of eating ice cream on the shore.

But now consider the potential causes of omissions such as my failure to save the child. Claim 1 says that this failure was caused by my decision not to jump in. Now, it seems that, just as we should distinguish between my eating ice cream on the shore (an action) and my failure to jump in (an omission), we should distinguish between my deciding not to jump in (an action) and my failing to decide to jump in (an omission). The latter are related to each other in the same way that the former are: although both of them obtain in the actual scenario, they are not identical. The omission represents what I didn’t do (mentally) whereas the action represents what I did instead. Relatedly, the action is more specific than the omission: in every possible circumstance where I decide not to jump in, I also fail to decide to jump in, but it is not the case that in every possible circumstance where I fail to decide to jump in, I also decide not to jump in. Imagine that I hadn’t been able to make a decision one way or the other, but that I had still been deliberating when the child died. This is a scenario where I fail to decide to jump in, but not by deciding not to jump in. This is parallel to the way in which my action of eating ice cream on the shore is more specific than my failing to jump in: if I was eating ice cream on the shore, I couldn’t have been saving the child, but I could have been failing to save the child without eating ice cream on the shore (e.g., I could have been reading a book instead).
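The specificity relation between the mental action and the mental omission can be stated as a one-way modal entailment. In the sketch below, the symbols are notational conventions introduced here for illustration only: ‘□’ for necessity, ‘J’ for my jumping in, and ‘Dec(φ)’ for my deciding to φ.

```latex
% Deciding not to jump in entails failing to decide to jump in:
\Box\big(\mathrm{Dec}(\neg J) \rightarrow \neg\mathrm{Dec}(J)\big)
% ... but the converse entailment fails:
\neg\,\Box\big(\neg\mathrm{Dec}(J) \rightarrow \mathrm{Dec}(\neg J)\big)
```

The still-deliberating scenario is a witness to the failure of the converse: there ¬Dec(J) holds (I never decide to jump in) while Dec(¬J) does not (I never decide not to jump in either).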

The following chart represents the parallel between the ordinary acts (the ordinary action/omission) and the potential causes of the ordinary omission (my failure to jump in):

Ordinary acts
    Action: My eating ice cream
    Omission: My failing to jump in

Potential causes of the ordinary omission
    Action: My decision not to jump in
    Omission: My failure to decide to jump in

Now, on the basis of these distinctions, let us revisit Claim 1. Is Claim 1 really true? Was my not jumping in caused by my mental action—my decision not to jump in? Consider, as an alternative:

(Claim 2) My failing to decide to jump in caused my not jumping in.

In other words, whereas Claim 1 says that the cause of my not jumping in was what I did (mentally), Claim 2 says that it was what I didn’t do (mentally). Which one is more likely to be true? Or does the truth of one not exclude the truth of the other? In the next section I argue for Claim 2, and I argue that its truth threatens to undermine the truth of Claim 1. I will call this conjunctive thesis the “causal exclusion for omissions” thesis (CEO).

III

I will offer two interrelated arguments for CEO.

First Argument:

Start by focusing on ordinary actions and omissions. As I have pointed out, my failing to jump into the water (an ordinary omission) should not be identified with what I did instead of jumping in, e.g., my eating ice cream on the shore (an ordinary action). But then consider the question: What caused the child’s death? Did my failing to jump into the water cause it? Did my eating ice cream cause it? On the assumption that omissions are capable of entering into causal relations, it seems clear that my omission was a cause of the child’s death: the child died because I didn’t jump into the water to save him. In addition, given that we have not identified my omission with my action (what I did instead of jumping in), there is no further motivation to claim that the action also caused the death. Again, it seems that the child died because of what I didn’t do, not because of what I did in its place. In other words, the claim:

(Claim 3) My eating ice cream on the shore caused the child’s death.

seems false in light of the truth of the following claim:

(Claim 4) My failing to jump in caused the child’s death.[5]

Now, the argument continues, if the truth of Claim 4 is enough to cast doubt on Claim 3, then, by the same token, the truth of Claim 2 should be enough to cast doubt on Claim 1. For, again, on the assumption that omissions can be causes, Claim 2 seems clearly true: my failing to decide to jump in caused my not jumping in (I didn’t jump in because I didn’t decide to jump in). In addition, given that we are assuming that omissions are not identical to actions, there is no motivation to claim that my mental act (my decision) was also a cause. For, again, it seems that I didn’t jump in because of what I didn’t decide to do, not because of what I did decide to do.[6]

In other words, the argument suggests that the best way of conceiving my relationship to the outcome of the child’s death is as a negative relationship throughout the causal chain. This includes my mental behavior. That is, it seems that what we should say is that the child died because of what I didn’t do, including what I didn’t decide to do. Even though I also made some positive decisions not to be involved in certain ways, they seem causally irrelevant. All that was causally relevant is the fact that I didn’t decide to (didn’t intend to, didn’t want to, etc.) be involved in certain ways.[7]

To clarify: I don’t mean to suggest that absences can only have other absences as causes, or that absences can only cause other absences. All I want to suggest is that this is true of the type of situation that is our focus here. It is certainly possible for absences to cause positive occurrences and to be caused by them. In fact, the same example of the drowning child illustrates the point that an omission can cause a positive occurrence: my failure to jump in (an omission) causes the child’s death (an event). More importantly, it also illustrates the point that a failure to make a certain decision (the type of omission that is the focus of Claim 2) can cause a positive occurrence: my failure to decide to jump in caused the child’s death (an event), via its causing my failure to jump in.[8] In turn, the following is an example of an omission being caused by an event. Imagine a similar scenario where someone (say, the lifeguard) could have saved the child, but where I couldn’t have saved him (I am not a good enough swimmer). Imagine that, when the lifeguard was about to jump in to rescue the child, I shot him. In that case my shooting the lifeguard (an action) caused the lifeguard’s not jumping in (an omission). This case also shows that a mental action like a decision (the type of act that is the focus of Claim 1) can cause an omission: my decision to shoot the lifeguard caused the lifeguard to fail to jump in, via its causing my shooting the lifeguard.

So omissions, and absences in general, can cause positive occurrences and be caused by them. Why doesn’t my decision not to jump in cause my failing to jump in, then? In other words, what is the difference between the original drowning child case and the lifeguard variant of the case? Why is the relevant omission caused by an event (a mental action) in one case but not in the other? The difference is the following. In the original version, in order for me to fail to jump in, I only had to fail to make a certain decision (the decision to jump in). By contrast, in the lifeguard version, in order for the lifeguard to fail to jump in, I actually had to make a certain decision (the decision to shoot the lifeguard). Thus, whereas in the original version the relevant omission (my not jumping in) obtained because of what I didn’t decide to do, in the lifeguard variant the relevant omission (the lifeguard’s not jumping in) obtained because of what I did decide to do. In other words, whereas in the original version my contribution is “negative,” in the lifeguard version it is “positive.”

This concludes my first argument for CEO. We can see now how the causal exclusion problem for omissions differs from the traditional exclusion problem. The traditional problem asks, briefly: How can mental events and states be causally efficacious, if physical states seem to do all the causal work? The problem for omissions asks, instead: How can mental events and states be causally efficacious in the case of omissions, if their absences seem to do all the causal work? These problems are independent. In particular, even if we solved the traditional problem, the problem for omissions would persist. For the problem for omissions doesn’t rest on a general “exclusion principle” according to which no phenomenon can have more than one sufficient cause, or anything of the sort.[9] Instead, as this first argument for CEO shows, it is enough for the problem to arise that we accept an independently plausible claim: the claim that omissions and actions don’t generally do the same causal work (e.g., my eating ice cream on the shore and my failure to jump into the water don’t have the same causal powers). As long as one accepts this, the argument for CEO seems to go through.

Second Argument:

The second argument for CEO appeals to the notion of counterfactual dependence. This notion is defined as follows:

Y counterfactually depends on X just in case, if X hadn’t happened, then Y wouldn’t have happened.
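The definition can be put in the notation of Lewis-style counterfactual semantics; the counterfactual conditional ‘□→’ and the occurrence operator ‘O’ are standard conventions assumed here, not part of the definition above:

```latex
% Y counterfactually depends on X just in case:
\neg O(X) \mathrel{\Box\!\!\rightarrow} \neg O(Y)
% where O(X) abbreviates ``X occurs'' and \Box\!\!\rightarrow is the
% counterfactual conditional: ``if it had been that ..., it would have been that ...''
```

Note that this is a definition of dependence, not of causation; the two caveats that follow mark the places where dependence and causation come apart.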

It is a well-recognized fact (even by critics of so-called “counterfactual analyses” of causation) that counterfactual dependence is tightly linked to causation. By this I mean: counterfactual dependence of the right type is, in normal circumstances, a reliable indicator of a causal relation.

The first caveat has to do with the fact that some counterfactual relations reveal dependencies that are stronger than causal (see Kim (1993b)). For instance, Xanthippe’s becoming a widow counterfactually depends on Socrates’ death (if Socrates hadn’t died, Xanthippe wouldn’t have become a widow); however, the relation between Socrates’ death and Xanthippe’s widowhood isn’t causal, but “logical” or “conceptual”. Similarly, my writing down the word “cat” counterfactually depends on my writing down the letter “c” (if I hadn’t written down the letter “c”, I wouldn’t have written down the word “cat”), but the relation between my writing down “c” and my writing down “cat” isn’t causal, but “mereological” (the relation that obtains between the parts of a whole and the whole).[10]

The second caveat has to do with the fact that the link between counterfactual dependence and causation is lost in so-called “preemption” cases. Consider the following example: Fast Assassin shoots and kills the victim; Slow Assassin also shoots, but his bullet reaches the victim when he is already dead. In this case Fast Assassin is a cause of the victim’s death, but the death doesn’t counterfactually depend on his shooting: it would still have happened even if Fast Assassin hadn’t shot. The mark of preemption cases is the existence of a process that acts as a backup for the process actually leading to the effect: if the actual process hadn’t come through, the backup process would have stepped in and would have issued in the outcome all the same.

By claiming that the link between counterfactual dependence and causation holds in “normal” circumstances I mean that it holds in circumstances not involving backup processes of this sort: setting aside cases of this type, counterfactual dependence (of the right type) is a reliable indicator of a causal relation.[11]