Tag Archives: pain

Reflections on My Dislocated Shoulder: Two Types of Pain and Their Moral Significance

I recently dislocated my right shoulder and, not surprisingly, the experience has caused me to reflect on the nature of pain. In this post I will use my own experience, coupled with a thought experiment, to argue for two distinct types of pain: reflective pain and nonreflective pain. Having spelled out this distinction, I will raise some difficult questions about their respective moral significance.

Reflective Pain

If you are right-handed like me, a dislocated right shoulder is an injury that occasions reflective pain par excellence. In essence, reflective pain is pain that interferes with your day-to-day functioning by causing you to consciously reflect on it more than normal. Everything is now harder and more painfully deliberate, e.g., taking a shower, putting on clothes, hugging my wife, wearing a backpack, opening a beer. The thousands of micro-tasks I typically performed with my dominant hand in coordination with my non-dominant left must now be performed awkwardly with my left hand alone in order to minimize pain in my right shoulder. This has slowed my daily productivity significantly. For example, as a grad student and denizen of the 21st century, I spend much of my time on a laptop. It’s amazingly slow to type with only your left hand on a QWERTY keyboard. You type at significantly less than half your normal speed, not only because you have fewer fingers available but also because you have to stretch them farther to reach across the whole keyboard. This has made day-to-day academic housekeeping and research painfully tedious in a literal sense.

Thus, the salient feature of reflective pain is that you can’t help but reflect on it: throughout the day you are continually reminded of your injury every time you go to do something you previously would have done without hesitation. Now every motor intention is tentative, and the perception of thousands of lost affordances is palpable. Reflective pain intrudes on and interferes with your thought processes because you are acutely aware of the bodily powers you have lost and the pain that has replaced them.

What about nonreflective pain?

Nonreflective Pain

Nonreflective pain is quite different from reflective pain. Imagine you are walking across a desert, keenly intent on getting to the other side. It’s sweltering hot, so you expose your back to the air. In so doing you introspectively notice a pain sensation localized to a patch of skin on your back. You can’t remember how long that pain sensation has been there. The pain isn’t screamingly intense, nor does it burn or throb. It’s more like a light tingle or steady buzz. It doesn’t itch, and you feel no compulsion to reach behind you and scratch or rub it. In fact, the pain seems to be minimized by simply leaving it alone. It is localized such that the movement of your muscles and skin across your skeleton doesn’t exacerbate it. In fact, the pain doesn’t interfere with your walking at all.

The pain doesn’t necessarily command your full attention, and often when you are absorbed in watching out for rattlesnakes or walking across tough terrain you entirely forget it is there. It’s only when you get onto flat, easy ground again and your mind begins to wander that you notice the pain again, buzzing with the same steadiness as always.

As you walk you begin to use the pain as a source of introspective entertainment. The pain becomes more of an interesting sensation to play with than a genuine nuisance. It is neither pleasant nor unpleasant; it’s simply there. You can choose to attend to it or not. You can describe the sensation and localize it to a particular patch of skin, but you don’t mind it; it doesn’t bother you. In fact you have grown to like it, because it gives you something to reflect on as you walk mindlessly across the desert. What’s interesting is that when you are not reflecting at all, but are entirely in the flow of walking, the pain is not consciously noticed. There is seemingly no conscious awareness of the pain while you are absorbed in walking; there is only the ground before you and your movements. But even if you don’t consciously attend to the pain, the pain is there nonetheless (presumably). It’s a steady sensation, which suggests that not all sensations are necessarily conscious. This is what David Rosenthal might call “nonconscious qualia”. If you didn’t introspect and reflect on the pain sensation, it’s hard to imagine it interfering with your cognitive functioning except at the grossest level of physiological nociception.

The Ethics of Pain

Now that I’ve distinguished these two types of pain, I want to ask a series of rhetorical questions. Do animals have reflective pains, or are all their pains nonreflective? If some do, which animals have reflective pain? All of them, or only the super-intelligent animals like apes, dolphins, and elephants? What about fish, insects, rats, and cats? What is the evolutionary function of reflective pain, if it has one? Is nonreflective pain just as morally significant as reflective pain? If we knew that a patient in a vegetative state had nonreflective pain, would clinicians be obligated to give them pain medication?

Perhaps these are bad questions because the distinction is a false dichotomy, or is conceptually or empirically mistaken. Maybe it’s a matter of degree. But it seems intuitive to me that there is something morally distinctive about the type of pain that causes us suffering and anguish on account of our reflecting on it, and not just in virtue of its first-order sensory “painfulness”. I don’t mean to suggest that first-order painfulness has no moral significance, but it seems to me that it should be weighted differently in a utilitarian calculus.


Filed under Consciousness, Philosophy, Psychology

A quick thought on pain and suffering

It is common for theorists to distinguish between pain and suffering. Pain is generally associated with nociception, a very primitive chemical detection system that responds to cellular damage signals. Suffering, in contrast, is usually defined as the minding of pain, sometimes called the “affectivity” or “unpleasantness” of pain. In humans and monkeys, the pain system and the minding system can be teased apart. Such a distinction has considerable moral implications for how we treat nonhuman animals. Many philosophers think that it is only the minding of pain, and not pain itself, that deserves moral consideration. Thus, any creature who has nociception but does not mind pain will not fall under the full umbrella of moral consideration. Moreover, the minding system has been associated with having an anterior cingulate cortex (ACC). All mammals have an ACC. Therefore, this seems like a good reason to grant all mammals moral status.

But I propose to make a further distinction between the minding of pain and the introspective awareness that you mind pain. It is unfortunate that the term “minding pain” seems to imply a kind of higher-order awareness, since “minding” sounds like a cognitively sophisticated capacity reminiscent of introspection. But if a rat can mind pain, how complex could it really be? Such a capacity doesn’t strike me as all that fancy. And I am skeptical that in humans we have really teased apart minding from introspective awareness of minding. Do we really know that what “bothers” humans is the minding shared with rats, or the introspective awareness of minding? More experimentation will be needed to tease this apart, but it is difficult because the verbal reports necessary to determine minding levels seem to be confounded by introspective awareness.

Don’t take me the wrong way. I’m not arguing that only introspective awareness of minding deserves moral consideration. Otherwise, I’d be left with the conclusion that we can treat newborn babies as mere objects, a conclusion I obviously reject. It seems plausible that the ability merely to mind pain deserves some moral consideration. But the crucial question is, how much? It seems plausible to me that we have good reason to want to reduce all instances of minding pain in the universe. But it also seems plausible that we have good reason to prioritize reducing the introspective awareness of minding over the mere minding. This line of reasoning brings nonhuman mammals into the moral sphere, but does not place them on an equal status with well-developed human beings capable of introspective minding.


Filed under Consciousness, Philosophy

Just how far should we expand the circle of ethics?

Right now I am reading Peter Singer’s book The Expanding Circle. It’s a good book so far: clear, well-argued, and written with a sense of moral urgency. The central argument is that because ethical reasoning works on the basis of impartiality, it would be arbitrary to restrict the moral community to a single group, such as your own tribe, gender, or race. Hence, the evolution of morality is moving (and will hopefully continue to move) in the direction of ever greater impartiality, as seen in societal advances such as abolition, women’s rights, etc. However, Singer also argues that we should expand the circle of ethics beyond the human realm to all other sentient creatures capable of feeling pleasure or pain. It would be just as arbitrary, he argues, to restrict ethical consideration to humans as it would be to restrict it to a certain class of humans.

But then how far down the evolutionary continuum should we go? Singer thinks we should probably draw the line around oysters and the like, since it seems implausible that oysters are capable of feeling pleasure or pain. And he definitely thinks we should not expand the circle to include inanimate entities like mountains or streams. So what’s so special about the ability to feel pleasure or pain? Singer thinks this capacity is a nonarbitrary dividing line because it’s something that humans can take into consideration. On what basis could we include mountains and streams in our moral deliberation? There seems to be none. But the fellow capacity to feel pleasure and pain seems like a good candidate.

This is where I must disagree with Singer. I simply don’t see what’s so morally special about the ability to detect cellular damage, and that is all pain perception really is: an evolved biological mechanism that registers damage to cells and relays that information to the appropriate motor centers to move the creature out of harm’s way, thereby increasing the creature’s biological fitness and maintaining homeostasis and organizational structure. Vegetarians like Singer loathe this line of thinking because it brings to mind the old practice of torturing cats and dogs, justified by Descartes’ argument that animals are simply unfeeling mechanisms that can’t really feel pleasure or pain. But I don’t think the permissibility of wanton torture follows from the idea that pain perception is just a simple biological mechanism for damage detection. Even if it is permissible to use animals for food, it doesn’t follow that it’s permissible to torture them for fun; we might still be obligated to treat them with respect and try to lower the occurrence of pain to its absolute minimum. But, personally, I believe that merely having the capacity to feel pain doesn’t launch you into the moral category in which it becomes impermissible to be used as food for humans.

I’ve heard it claimed that this kind of speciesism is unjustifiable if we consider the cognitive capacities of humans who are extremely mentally handicapped or incapacitated. Since I presumably think speciesism is justifiable because humans are cognitively superior to nonhuman animals, it should then be OK to treat cognitively inferior humans just as we do cattle. Since we don’t think this is OK, we can’t use cognitive superiority alone to justify the way we treat nonhuman animals. My immediate response is that there is a difference between entities who, had everything been biologically optimal, could have developed to the human cognitive level, and entities who could never reach that level even under optimal biological conditions. This principle of potentiality is enough to show that it’s nonarbitrary to treat cognitively impaired humans differently from nonhuman animals.

There’s another point I want to make about the moral worth of pain itself. How could it be of such importance when nonhuman animals themselves seem relatively indifferent to it, compared to the typical human response to pain? I read in Euan Macphail’s The Evolution of Consciousness that there have been field reports of chimps getting into fights with other males, having their testicles bitten off, and immediately afterwards being capable of having sex with a female. I doubt any human is horny enough to ignore the pain of having their genitals mutilated just to have sex. On the basis of this observation, we can infer that chimp pain perception is different from the awareness of pain that humans possess. And since chimps are seen by people like Singer as the animals most worthy of our ethical consideration, what does this say about the pain capacities of animals even lower down the totem pole than chimps? Nonhuman animals don’t seem to “care” about their pain to the same extent that humans do. Caring about pain, as opposed to pain itself, goes by another name: suffering, i.e., meta-consciousness of pain. While it is plausible that some nonhuman animals have the capacity for a kind of protosuffering, it seems clear to me that human suffering is of a level of sophistication far beyond that of any nonhuman animal. Now, I don’t have a clear argument for why human suffering is more morally valuable than the mere pain of nonhuman animals, but it is at least a nonarbitrary cutoff point, and one with a kind of intuitive support.

However, I don’t think the moral worth of human suffering over nonhuman pain is enough to justify the claim that nonhuman pain has no moral worth at all. In fact, I agree with Singer that the pain of nonhuman sentient beings does have some moral worth, and that we are obligated, ultimately, to reduce that pain. For this reason, if I were presented in a supermarket with the choice between real beef and artificial beef grown in a lab, I would choose the artificial beef. The only reason I still eat meat is that the right technology has not been invented yet. As soon as that technology becomes available (and they are working on it), I will gladly give up my practice of eating meat. But since I believe that eating meat is a very healthy way to get protein and animal fats into my diet, I do not think the current pains of nonhuman animals are enough to overcome the selfishness involved in maintaining my own health, for I value my own life over those of nonhuman animals. Again, this is not because I place no value in nonhuman life. In my ideal world, not a single sentient entity would ever feel unnecessary pain. I regard predation as evil, but I nevertheless eat animals for health reasons. If I sincerely thought vegetarianism were healthier than an omnivorous diet, I would be a vegetarian (which would be nice, because it would line up with my beliefs about the evils of predation). But since I am a speciesist and value human life more than nonhuman life, I think it is permissible to continue my practice until the technology of artificial meat becomes widely available. I’m aware that this reasoning could be nothing more than a post-hoc rationalization of my comfortable habit of meat eating. But I do think there is a nonarbitrary argument to be made for speciesism, one that makes the exclusion of nonhuman animals from the moral sphere far less arbitrary than the exclusion of subclasses of humans.
Contra Singer, I don’t think speciesism is equivalent to racism or sexism.


Filed under Philosophy

Thoughts on Dennett's distinction between personal and subpersonal levels of explanation

I recently purchased the anthology Philosophy of Psychology: Contemporary Readings, edited by José Bermúdez. The first article in the collection is by Dan Dennett and is called “Personal and Sub-personal Levels of Explanation”. It’s a classic Dennettian paper, both in style and content. His overall goal is to defend a sharp distinction between the personal and subpersonal levels of explanation (LoE). His primary example to illustrate the need for this distinction is the phenomenon of pain. For Dennett, the subpersonal LoE for pain is pretty obvious and straightforward: it involves a scientific account of the various neurophysiological activities triggered by afferent nerves responding to damage that would negatively affect the evolutionary fitness of an organism. The subpersonal LoE does not need to reference the phenomenon of “pain” at all. It merely explains the physical behavior of the system under the umbrella framework of evolutionary theory.

In contrast to the subpersonal LoE, the personal LoE for pain would explicitly use the word/concept “pain” in order to explain the phenomenon of pain. What does this involve? The personal LoE basically involves recognizing that, for the person having the pain, the pain is simply picked up on, i.e., distinguished by acquaintance. If we ask a person to give a personal-level explanation of their pain, Dennett thinks that the best they can do is say “I just know I am in pain because I recognized that I was in pain because I had the sensation of pain because I just knew I was in pain because I was conscious of pain, and I just immediately know whether I am in pain or not, and so on.” It might seem that on this LoE there needs to be something additional, because the explanation is strangely circular and nonexplanatory. Dennett thinks this is a feature, not a bug, of the personal level of pain, and that it absolutely cannot be avoided. If you are going to invoke the concept of pain at all in your explanation of a phenomenon, then you (should) automatically resign yourself to the fact that the explanation can never be given in terms that violate the essential nature of pain as something “you just know you have” without being able to give a mechanical account of how you know it. You just know.

Dennett thinks that if we are going to use or think about the concept “pain”, then we must be ready to make a sharp distinction between these two LoE. On the subpersonal level, you need not refer to the phenomenon of pain; you simply account for the physical behavior of the system in whatever scientific vocabulary is appropriate. On the personal level, you acknowledge that the term “pain” does not directly refer to any neurophysiological mechanism. In fact, it doesn’t refer at all. It picks out the phenomenon of “just knowing you are in pain”, in virtue of the immediate sensation of painfulness, which then produces “pain talk”. Of course, Dennett notes that we can sensibly inquire into the neural realizers of such “pain talk”, but for him it is crucial to realize that on the personal LoE, pain-talk is not referential; rather, it only makes sense as the pain of a person (not a brain) who “just knows” they are in pain, when in pain.

My problem with Dennett’s sharp distinction is that he seems too ready to accept the personal-level phenomena as “brute facts”, not susceptible to further levels of mechanical/functional analysis. Take pain, for example. A. D. Craig has been developing a rather interesting view of pain as a homeostatic emotion, in the same way that hunger is a homeostatic emotion. The “feelings” of pain can then be likened to the “feelings” of hunger. On this account, human pain is both a sensation (based on ascending nerve signals) and a motivation (which leads to pain-avoidance behaviors). The sensory aspect of pain is clear enough, and no different from Dennett’s subpersonal account, but the motivational aspect comes from the thalamocortical projections of the primate brain, which provide a sensory image of the physiological condition of the body and are more or less directly tied into limbic (i.e., motivational) pathways.

Crucially, this account of pain starts to explain the personal feelings themselves, going beyond an acceptance of the “brute facts” of painfulness. The “just knowing” that you are in pain is analogous to the “just knowing” that you are hungry. The interoception of homeostatic indicators is reliable, since if it were not it probably wouldn’t have evolved. Just as I “just know” I am perceiving/interacting with my laptop right now, if I were in pain, I would “just know” I am in pain. This is because pain is a homeostatic emotion generated by the interoception of homeostatic indicators, just as hunger is a feeling generated by the interoception of homeostatic indicators, and the feeling of knowing the laptop is there in front of me is generated by exteroception of the actual laptop. Think about the “pain” of being cold. The regulation of body temperature is obviously a homeostatic process, and it includes both a sensory component (the feeling of being cold) and a homeostatic motivational state (the motivation to do something about being cold). Pain works the same way: it has both a sensory component (which we feel) and a motivational aspect (pain leads to avoidance behaviors). And here we can start to see what a functional explanation of the personal level would look like. As Craig says,

In humans, this interoceptive cortical image engenders discriminative sensations, and it is re-represented in the middle insula and then in the right (non-dominant) anterior insula. This seems to provide a meta-representation of the state of the body that is associated with subjective awareness of the material self as a feeling (sentient) entity – that is, emotional awareness – consistent with the ideas of James and Damasio.

It seems that this “meta-representation”, which generates feelings of selfhood and associated cognitive processes of a self-referential nature, could give rise to the feelings of personhood referenced in the personal LoE. So although we might still be able to rescue the sharpness of Dennett’s distinction between the different LoE, the distinction gets blurred and becomes unhelpful once you start talking about the meta-representational functions that give rise to the associated mental phenomena of personal-level pain-feelings and pain-talk in adult human beings.


Filed under Consciousness, Philosophy, Psychology

Some thoughts on pain, animals, and consciousness

I just started reading Euan Macphail’s book The Evolution of Consciousness and the first chapter raises an interesting question: do animals have consciousness?

First, we need to define consciousness in order to determine whether animals besides humans possess it. We can roughly distinguish two types: feeling-consciousness and metaconsciousness. Metaconsciousness is often referred to as self-consciousness and seems to depend on having a self-concept in place that allows for such metacognitive functions as knowing that you know, thinking that you think, desiring about your desires, etc. Metaconsciousness seems to be a very rare cognitive skill and could plausibly be restricted to humans, since it seems unlikely that a mouse knows that he knows something, or is aware of his own awareness. Moreover, we must be careful to distinguish metaconsciousness from prereflective bodily self-consciousness, the self-consciousness that arises from simply having an embodied perspective on the world, not from having an explicit self-concept structured by linguistic categories such as self, person, soul, mind, consciousness, etc. Although all animals could be said to have bodily self-consciousness, it is unlikely that nonhuman animals have a self-consciousness of this bodily self-consciousness.

In contrast to metaconsciousness, we can talk about what Macphail calls feeling-consciousness. Obvious examples include the experience of pleasure, suffering, love, motivation, etc. Feeling-consciousness also includes sensory feels, such as the feeling that I am currently looking at my laptop screen, or the feeling of my clothes on my body and the keyboard against my fingertips.

While many people would agree that nonhuman animals do not have metaconsciousness, it seems plainly wrong to deny animals feeling-consciousness. After all, isn’t it quite clear that an animal experiences pain in the same way humans do? This argument is often made through analogous comparisons of behavior. We assume that if you prick a human with a needle and he rapidly withdraws his hand, he does so because the needle hurts. And since we can prick the paw of an animal and the animal exhibits the same rapid withdrawal, we conclude that the animal also withdraws because it feels pain. The same goes for vocalization: prick a human with a needle and he might yelp or cry out in pain; prick an animal and it will also vocalize in response. We can also measure involuntary responses like heart rate. When a human experiences pain, these involuntary processes occur, and when we prick an animal, we see the same involuntary responses. The obvious conclusion, then, is that animals feel pain just as humans do.

But are these behavioral criteria necessary for feeling pain? We wouldn’t, for example, think that vocalization is necessary for the experience of pain, since a human born without vocal cords would surely experience pain all the same. The same goes for the withdrawal response. If you sever a dog’s spinal cord from its brain, the dog will still exhibit a withdrawal response. The same holds for humans: people with severed spinal cords still exhibit withdrawal reflexes despite not feeling anything, so the mere behavior of withdrawing a limb should not necessarily indicate the existence of feeling. After all, if we programmed a robot to rapidly withdraw its arm when exposed to a sharp force, we wouldn’t conclude that it feels anything simply because it shows the appropriate behavioral response. As Macphail puts it, “An actor could reproduce all these symptoms without feeling any pain at all, and that, in essence, is why none of these criteria is entirely convincing.”

Moreover, we could go beyond analogy and argue that of course animals feel pain, since pain is highly advantageous from an evolutionary perspective. If an animal didn’t have the appropriate mechanisms for feeling pain, it would not have been nearly as successful as a creature that did. From this perspective, the function of pain is quite clear: to motivate us to avoid dangerous things.

But Macphail asks us to consider an armchair scenario about the evolution of pain. It is widely supposed that life began with the self-assembly of chemical building blocks enclosed within a semipermeable membrane. These first organisms were basically complex chemical machines, and most people would agree that we can account for everything they do in terms of biochemical mechanisms. To explain their behavior, we wouldn’t suppose that they have feeling-consciousness since, presumably, such chemical machines don’t feel anything. Now, suppose that as multicellular organisms evolved, there arose a cellular specialization wherein cells differentiated into nerve cells, sensory cells, and motor cells. The sensory cells detect information in the environment, which encourages nerve cells to activate, which in turn encourages motor cells to activate.

The coordination of these different cells gives rise to the ability to react to dangerous stimuli. If the chemical machine wanders into a toxic area of the ocean, the sensory cells can detect the significance of this stimulus and relay the information to the nerve cells, which then activate the motor cells, allowing the organism to escape from the dangerous stimulus. As Macphail says, “The point is, that it is easy to envisage the rapid early evolution of links between sensory systems and motor systems that would result in withdrawal from disadvantageous areas and of similar systems for approach to advantageous areas. It is equally easy to see that this scenario has proceeded without any appeal to notions of pain or pleasure.”

The question then is this: where does feeling-consciousness fit into this story? What is the function of feeling pain/pleasure that could not be accounted for in terms of the biochemical mechanisms and their increasing complexity? Why would an early organism need to feel pain when the mechanisms for avoiding dangerous stimuli and approaching advantageous stimuli are sufficient for the task of survival? Feelings don’t seem necessary for the adaptive success of an organism, a point which raises some very interesting philosophical questions.

With all that said, I need to make some qualifications. Although the above considerations suggest that feeling-consciousness is not necessary for the adaptive success of animals, there is another sense of consciousness used by philosophers that does seem applicable to these lower organisms: phenomenal consciousness. Phenomenal consciousness is usually defined in terms of the “what-it-is-like” of existence. Presumably there is “something-it-is-like” to be a bat. This something-it-is-like is often talked about in terms of raw feels, such as the raw feel of tasting an apple or enjoying the blue sky. On my view, there is also something-it-is-like to be a bacterium, although it is very dull in comparison to the what-it-is-like of more complex organisms. However, I also want to claim that the raw feels which constitute the what-it-is-like of an organism are not the same as the feeling-consciousness discussed above. Although many philosophers would disagree with me about this, I think it is precisely the ubiquity of feeling-consciousness in humans that makes us think the same feelings must be present in other animals. When humans gaze up at the blue sky and enjoy the feeling of pure sensory quality, I want to claim that this experience is unique to humans, for although a nonhuman animal is capable of perceiving or detecting the blue sky, it is probably not capable of feeling that it is perceiving, or feeling that it is detecting. To consciously feel a sensory experience requires that one “feel” how one perceives the world, as opposed to just perceiving the world. I claim that the perception of the world and the feeling that one is perceiving the world are two radically different phenomena, with the latter perhaps depending on the linguistic, self-reflexive cognition of human minds. Philosophers rarely recognize the significance of this distinction, and their philosophy of mind suffers accordingly.

Lastly, I want to briefly discuss the ethical implications of the seemingly radical position that animals don’t have feelings. Some people would think that even if this idea is true, it leads to such horrible ethical consequences that we should never even entertain it as a hypothesis. But I disagree. I think the idea that animals don’t consciously feel anything and the idea of animal rights are not mutually exclusive. One can hold the position that animals don’t feel pain, while still believing that we should be humane in our treatment of animals and that we shouldn’t cause animals any unnecessary discomfort. One could believe that animals don’t feel pain but merely detect dangerous stimuli while still believing that we should work to decrease the amount of dangerous stimuli detected by animals. In this way the idea of an animal ethics is perfectly compatible with the views I am entertaining here.


Filed under Consciousness, Philosophy, Psychology