Some thoughts on pain, animals, and consciousness

I just started reading Euan Macphail’s book The Evolution of Consciousness and the first chapter raises an interesting question: do animals have consciousness?

First, we need to define consciousness in order to determine whether or not animals besides humans possess it. We could roughly distinguish between two types: feeling-consciousness and metaconsciousness. Metaconsciousness is often referred to as self-consciousness and seems to depend on there being a self-concept in place that allows for such metacognitive functions as knowing that you know, thinking that you think, desiring about your desires, etc. Metaconsciousness seems to be a very rare cognitive skill and could plausibly be restricted to humans, since it seems unlikely that a mouse knows that it knows something, or is aware of its own awareness. Moreover, we must be careful to distinguish metaconsciousness from prereflective bodily self-consciousness, which is the self-consciousness that arises from simply having an embodied perspective on the world, not necessarily from having an explicit self-concept structured by linguistic categories such as self, person, soul, mind, consciousness, etc. Although all animals could be said to have bodily self-consciousness, it is unlikely that nonhuman animals have a self-consciousness of this bodily self-consciousness.

In contrast to metaconsciousness, we can talk about what Macphail calls feeling-consciousness. Obvious examples of feeling-consciousness include the experience of pleasure, suffering, love, motivation, etc. Moreover, feeling-consciousness includes sensory feels, such as the feeling that I am currently looking at my laptop screen, or the feeling of my clothes on my body and the keyboard against my fingertips.

While many people would agree that nonhuman animals do not have metaconsciousness, it seems plainly wrong to deny animals feeling-consciousness. After all, isn’t it quite clear that an animal experiences pain in the same way humans do? This argument is often made through analogous comparisons of behavior. We assume that if we prick a human’s hand with a needle and he rapidly withdraws it, he does this because the needle hurts. And since we can prick the paw of an animal and the animal exhibits the same rapid withdrawal, we would seem perfectly right in concluding that the animal also withdraws because it feels pain. The same goes for vocalization. If you prick a human with a needle, he might yelp or cry out in pain. And if we prick an animal, it will also vocalize in response. We can also measure involuntary responses like heart rate. When a human experiences pain, his heart rate rises; and when we prick an animal, we see the same involuntary responses. The obvious conclusion, then, is that animals feel pain just the same as humans.

But are these behavioral criteria necessary for feeling pain? We wouldn’t, for example, think that vocalization is necessary for the experience of pain, since a human born without vocal cords would surely experience pain just the same. The same goes for the withdrawal response. If you sever a dog’s spinal cord from its brain, the dog will still exhibit a withdrawal response. The same is true of humans: people with a severed spinal cord still exhibit withdrawal reflexes despite not feeling anything, so the mere behavior of withdrawing a limb should not necessarily indicate the existence of feeling. After all, if we programmed a robot to rapidly withdraw its arm when exposed to a sharp force, we wouldn’t conclude that it feels anything simply because it shows the appropriate behavioral response. As Macphail puts it, “An actor could reproduce all these symptoms without feeling any pain at all, and that, in essence, is why none of these criteria is entirely convincing.”
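
For what it’s worth, the robot case is easy to make concrete. Here is a minimal sketch in Python (the sensor is simulated and every name is hypothetical) of a program that satisfies the withdrawal criterion perfectly while containing nothing that could count as a representation of pain:

```python
import random

# A hypothetical "robot arm" controller: it reproduces the behavioral
# criterion for pain (rapid withdrawal from a sharp force), yet no
# variable anywhere in the program represents pain or feeling.

FORCE_THRESHOLD = 5.0  # arbitrary units; a "sharp force" exceeds this

def read_force_sensor() -> float:
    # Simulated pressure sensor: mostly gentle contact, occasional sharp poke.
    return random.expovariate(1 / 2.0)

def retract_arm() -> None:
    print("arm rapidly withdrawn")  # stands in for a motor command

def control_loop(steps: int = 20) -> None:
    for _ in range(steps):
        if read_force_sensor() > FORCE_THRESHOLD:  # the entire "decision"
            retract_arm()  # behavioral criterion satisfied; nothing is felt

control_loop()
```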

Moreover, we could go beyond analogy and argue that of course animals feel pain, since pain is highly advantageous from an evolutionary perspective. If an animal didn’t have the appropriate mechanisms for feeling pain, then it would not have been nearly as successful as the creatures that did experience pain. From this perspective, the function of pain is quite clear: to motivate us to avoid dangerous things.

But Macphail asks us to consider an armchair scenario about the evolution of pain. It is widely supposed that life began from the self-assembly of chemical building blocks enclosed within a semipermeable membrane. These first organisms were basically complex chemical machines, and most people would agree that everything about them can be accounted for in terms of biochemical mechanisms. To explain their behavior, we wouldn’t suppose that they have feeling-consciousness since, presumably, such chemical machines wouldn’t feel anything. Now, suppose that as multicellular organisms evolved, there arose a cellular specialization wherein cells became sensory cells, nerve cells, and motor cells. The sensory cells detect information in the environment and excite the nerve cells, which in turn excite the motor cells.

The coordination of these different cells gives rise to the ability to react to dangerous stimuli. If the chemical machine wanders into a toxic area of the ocean, the sensory cells can detect the significance of this stimulus and relay the information to the nerve cells, which then activate the motor cells, allowing the organism to escape from the dangerous stimulus. As Macphail says, “The point is, that it is easy to envisage the rapid early evolution of links between sensory systems and motor systems that would result in withdrawal from disadvantageous areas and of similar systems for approach to advantageous areas. It is equally easy to see that this scenario has proceeded without any appeal to notions of pain or pleasure.”
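
To see how little machinery this scenario requires, here is a toy sketch in Python (my own illustration, not Macphail’s; all numbers are arbitrary) of such an organism on a one-dimensional stretch of ocean. The “sensory cells” reduce to gradient measurements and the sensory-motor link reduces to a sum; nothing in the program appeals to pain or pleasure.

```python
import math

def toxin(x: float) -> float:
    return math.exp(-((x - 20.0) ** 2) / 200.0)    # toxic patch near x = 20

def nutrient(x: float) -> float:
    return math.exp(-((x + 20.0) ** 2) / 200.0)    # food patch near x = -20

def gradient(field, x: float, dx: float = 0.1) -> float:
    # A "sensory cell": samples the chemical field on either side of the body.
    return (field(x + dx) - field(x - dx)) / (2 * dx)

def step(x: float, gain: float = 50.0) -> float:
    # The sensory-motor link: ascend the nutrient gradient,
    # descend the toxin gradient. No pain, no pleasure, just chemistry.
    return x + gain * (gradient(nutrient, x) - gradient(toxin, x))

x = 5.0
for _ in range(200):
    x = step(x)
print(f"settled at x = {x:.1f}")   # ends near the food patch, far from the toxin
```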

The question then is this: where does feeling-consciousness fit into this story? What is the function of feeling pain/pleasure that could not be accounted for in terms of the biochemical mechanisms and their increasing complexity? Why would an early organism need to feel pain when the mechanisms for avoiding dangerous stimuli and approaching advantageous stimuli are sufficient for the task of survival? Feelings don’t seem necessary for the adaptive success of an organism, a point which raises some very interesting philosophical questions.

With all that said, I need to make some qualifications. Although the above considerations suggest that feeling-consciousness is not necessary for the adaptive success of animals, there is another sense of consciousness used by philosophers that does seem applicable to these lower organisms: phenomenal consciousness. Phenomenal consciousness is usually defined in terms of the “what-it-is-like” to exist. Presumably there is “something-it-is-like” to be a bat. This something-it-is-like is often talked about in terms of raw feels, such as the raw feeling of tasting an apple or enjoying the blue sky. On my view, there is also something-it-is-like to be a bacterium, although it is very dull in comparison to the what-it-is-like of more complex organisms. However, I also want to claim that the raw feels which constitute the what-it-is-like of an organism are not the same as the feeling-consciousness discussed above.

Although many philosophers would disagree with me about this, I think that it is precisely the ubiquity of feeling-consciousness in humans that makes us think the same feelings must be present in other animals. When humans gaze up at the blue sky and enjoy the feeling of pure sensory quality, I want to claim that this experience is unique to humans, for although a nonhuman animal is capable of perceiving or detecting the blue sky, it is probably not capable of feeling that it is perceiving, or feeling that it is detecting. To consciously feel sensory experiences requires that one “feel” how one perceives the world, as opposed to just perceiving the world. I claim that the perception of the world and the feeling that one is perceiving the world are two radically different phenomena, with the latter perhaps depending on the linguistic, self-reflexive cognition of human minds. Philosophers rarely recognize the significance of this distinction, and their philosophy of mind suffers accordingly.

Lastly, I want to briefly discuss the ethical implications of the seemingly radical position that animals don’t have feelings. Some people would think that even if this idea is true, it leads to such horrible ethical consequences that we should never even entertain it as a hypothesis. But I disagree. I think the idea that animals don’t consciously feel anything and the idea of animal rights are not mutually exclusive. One can hold the position that animals don’t feel pain, while still believing that we should be humane in our treatment of animals and that we shouldn’t cause animals any unnecessary discomfort. One could believe that animals don’t feel pain but merely detect dangerous stimuli while still believing that we should work to decrease the amount of dangerous stimuli detected by animals. In this way the idea of an animal ethics is perfectly compatible with the views I am entertaining here.

Filed under Consciousness, Philosophy, Psychology

Responses to “Some thoughts on pain, animals, and consciousness”

  1. I am working with a similar perspective to the one you have been discussing in your excellent blog. I am curious about the distinction you are making here, since I take it that a phenomenological account takes affectivity to be the ground of being. What does the human gazing at the sky experience (leaving out higher-order thoughts) that the lion (let’s imagine the lion on the hilltop gazing out across the terrain – a classic image) or domestic cat does not experience? Words like “detection” suggest that the non-human animal perceives in the way a non-organic sensor does. Or is it that the affectivity is minimal “what-it-is-likeness,” without valence? If the distinction is between the feelings of perception and the feelings that one is perceiving, then that distinction seems to be referencing the self-awareness that is already generally held to obtain only for more complex creatures (for you it is humans). I am very interested in your thoughts on this. Best, Rachel

  2. Gary Williams

    Thanks for the comment, Rachel.

    I am curious about the distinction you are making here, since I take it that a phenomenological account takes affectivity to be the ground of being. What does the human gazing at the sky experience (leaving out higher-order thoughts) that the lion (let’s imagine the lion on the hilltop gazing out across the terrain – a classic image) or domestic cat does not experience?

    I’m not sure your example of the lion works here, since that seems like an entirely unnatural thing for the lion to do. If the lion is at the top of the hill, then it is surely not “gazing”, but rather searching for prey, or scanning attentively. The point I am trying to make is that it seems plausible that “gazing” is itself a human kind of thing insofar as the gaze indicates a kind of detachment from the sensorimotor world. I’m just not sure that the lion is ever “detached” from that sensorimotor world in the way a day-dreaming or reflective human is. As Merleau-Ponty put it, “The sensible quality, far from being coextensive with perception, is the peculiar product of an attitude of curiosity or observation. It appears when, instead of yielding the whole gaze to the world, I turn toward the gaze itself, and when I ask myself what precisely it is I see.”

    Words like “detection” suggest that the non-human animal perceives in the way a non-organic sensor does. Or is it that the affectivity is minimal “what-it-is-likeness,” without valence?

    This raises an interesting point. Although the nonorganic sensor is usually said to “detect” something, I think the detection of an organism is of a different nature than the detection of a nonorganic sensor. When a living body detects something, that something usually has some kind of significance in terms of how it either maintains or diminishes the autonomy of the living body. I think the notion of autonomy or homeostasis is what distinguishes nonorganic detection from organic detection insofar as the nonorganic entity has no autonomy to maintain, and is not regulated by any sort of normativity. When a video camera “detects” a stimulus, that stimulus is not meaningful insofar as the camera doesn’t take the stimulus “as” being either harmful or helpful, etc. With biological organisms, there is a fundamental drive embodied within the creature to maintain the unity of its organic being. The organism is constantly self-producing the very components which make up the organism as distinct from the environment. When a nonorganic robot is able to exhibit such autonomy in the face of perturbing stimuli, then I think it would be appropriate to talk about valence. Otherwise, it seems that valence or affectivity is something unique to the biological realm.

    With that said, I do not think that the capacity for encountering the world in terms of valences is synonymous with the feeling-consciousness I discussed in the post. I think that there could be a kind of prereflective affectivity wherein stimuli are valenced in terms of biological usefulness without there necessarily being conscious feelings in the sense that a human experiences conscious feeling.

    But I definitely agree that affectivity is the “ground of being”. If indeed humans are the only creatures capable of consciousness, then underlying that advanced capacity is the very affectivity we share with our animal cousins. It’s just that consciousness allows for a modification of that affectivity such that it gives rise to entirely new forms of phenomenological experience, e.g. reflective gazing.

  3. An interesting way of addressing the “animal ethics” position without recourse to animal feels is by counterexample: is it morally neutral to torture a human under some form of analgesia that prevents them from feeling pain? Of course not; it is obviously still wrong. Therefore, a position can be developed which advocates for animal rights whether or not animals feel pain in the same manner as humans.

  4. I also wonder if a Spinozist/Deleuzian idea of an encounter between two or more singularities profiting or not profiting the singularities of encounter would be another way to advocate the ethical position you’re expounding. That is, perhaps a reflective awareness of pain, understood as pain, is a poor platform from which to determine whether or not an activity is destructive or damaging. Instead, the idea of flourishing according to the singularity itself in its ‘conatus’ seems a way to think ‘ethics’ beyond the typical expressions of ethical theory today?

    That said, I like your distinction between ‘phenomenal consciousness’ and ‘feeling consciousness.’ Would it be helpful to think in terms of pre-reflective and reflective (for simplicity’s sake)? Thus, phenomenal consciousness would include all non-reflective moments of ‘thought’ (and I use this term very loosely, as I don’t restrict thought to an activity of a brain), and both feeling consciousness and metaconsciousness would fall into the reflective category. If this is the case, then I wonder if a tripartite categorization would serve as a useful guide to begin analysis: non-reflective consciousness (corresponding to ‘phenomenal’), pre-reflective (corresponding to ‘feeling’), and reflective (corresponding to ‘meta’).

    Actually, having just written that, I wonder if four categories would be best, in order to make a distinction between the ‘thoughtful’ connections that occur between non-organic singularities and those that occur between organic ones. Thus, the categories would be non-reflective (corresponding to non-organic consciousness), phenomenal consciousness, pre-reflective/feeling, and then reflective/meta?

    Of course, I’m not a fan of absolutizing categories. I merely think they are useful to aid us in analysis as a nice platform from which we can begin our study. They must always be open to modification/expansion/etc…

    What do you think?

    ps. i’m sorry about flaking on those articles i promised. i’m actually still trying to track them down. my buddy that has them has yet to email them to me!!! i promise i’ll get them to you asap!

  5. Gary Williams

    Hey Austin, I really like your thoughts here. I think your tripartite distinction between the nonreflective, prereflective, and reflective is really nice. One must be careful though because in terms of the definition of phenomenal consciousness, even meta-reflection is phenomenal insofar as there is “something it is like” to meta-reflect. Also, I definitely think you are right to put both feeling consciousness and metaconsciousness on the “reflective side”.

    As for the nonorganic side of things, I am getting more and more used to the idea of there being “something it is like” to be a rock. I just think that it would be very dull without any sense of temporality.

  6. perhaps deleuze’s syntheses of time might be of some aid here. thus, the ‘something it is like’ of a rock would correspond to the first synthesis of time, habit. james williams, my supervisor, actually has a lecture called ‘contemplating pebbles’ in which he explores this idea a bit (‘contemplating’ is not the subject of the phrase but an adjective). you can check it out here:

    http://michaeloneillburns.wordpress.com/2010/05/25/real-objectsmaterial-subjects-audio/

    that said, i wonder if it would be helpful to make a distinction between chronos and aion here; the former corresponding to temporality generally understood and the latter corresponding to indefinite, virtual, ‘time’ – but a ‘time’ that is without moment, one that IS between moments – an ‘ontological priority,’ if you will.

    this would mean that any occurrence that takes place in time/space, and thus any emergence of consciousness/thought, would necessarily correspond with chronos. the non-organic ‘thought’ of say deleuzians would fit into this category as chronos pertains to the actual (not the virtual). and thus, all three distinctions that i made above (even though i’m starting to favor a 4-part distinction) inhere at the level of actuality and chronos. even rocks would then be characterized by chronos, by actuality, and by having ‘thought’ (all corresponding to the synthesis of ‘habit’).

    the reason i’d claim this is because thought and consciousness, for deleuze, are not on the plane of immanence. the plane of immanence is ‘a life’ (the indefinite article signifies that it is indefinite, a multiplicity, pre-individual, a-subjective, etc.). thus, thought and consciousness are ‘transcendents’ that both constitute and are constitutive of chronos and actuality. therefore, the truly novel, difference in itself, pure immanence insist at the level of aion and are only ever reflected upon by the emergence of thought (even non-organic ‘habit’ would be a ‘reflection’ of sorts on pure immanence). this means that the actual is only ever already in the ‘wake of the new’ and not the new in se. and this would mean that even rocks would have a ‘something it is like’ insofar as they are actual individualities, products of habit.

  7. Tyle Stelzig

    The distinction between ‘phenomenal’ or ‘feeling’ consciousness on the one hand and ‘access’ or ‘meta’ consciousness on the other is typically introduced to disentangle the subjective character of experience from reflexive cognitive abilities.

    You seem to argue near the end of this post that the subjective character of experience is influenced in humans by our ability to reflect. This is certainly true. A consequence might be that we cannot really disentangle phenomenal-consciousness from meta-consciousness, which might mean that the distinction is a bad one.

    On the other hand, it is probably possible to have phenomenal-consciousness (what-it-is-like) without meta-consciousness (reflective capacities; I am using this term more narrowly than you). For example, you noted in the post that you think animals have phenomenal consciousness but not meta-consciousness. And conversely, it is possible to write a simple computer program with meta-consciousness (reflective capacities); such a program probably doesn’t have phenomenal-consciousness.

    • Tyle Stelzig

      Maybe I should defend that meta-conscious computer program assertion!

      The idea is to include in our program a symbol which stands for the program, and symbols which stand for various computational processes that the program performs, etc. For example, our program can include a symbol ME, and update the relations between this ME-symbol and its representation of its environment in response to its actions, and in response to sensory information it receives about the environment, etc.

      The program can also include a class THOUGHTS, with instances like Thought1, Thought2,… which get instantiated when the relations between the ME-symbol and the environment-symbols get modified, or when certain conditions hold in the environment, etc. We can make the thought-generation process partially stochastic, if we want to make it more like human thought. It is also no problem to give these thoughts functional consequences: the idea is to define actions with particular thoughts as preconditions.

      Importantly, we can also define a class META-THOUGHTS, with particular thoughts as preconditions (and we can make this process stochastic also). Finally, there is no reason that these layers of reflection couldn’t be extended indefinitely, using recursive class definitions.

      Of course, there are many differences between a program like this and a person; for example, the preconditions for thoughts and actions in this program don’t change as a function of experience. But, the program DOES have reflective capacities; it IS meta-conscious. And of course, unless you’re a higher-order theorist, you probably won’t think that such a simple program has feeling-consciousness. 😉
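
      Concretely, here is a minimal sketch of such a program in Python (all class and symbol names are just my illustration of the design above): a ME-symbol whose relations to a toy environment get updated, Thoughts instantiated when those relations change, actions with thoughts as preconditions, and MetaThoughts generated stochastically over existing thoughts, recursively.

```python
import random

class Thought:
    def __init__(self, content: str):
        self.content = content

class MetaThought(Thought):
    # A thought whose precondition, and whose content, is another thought.
    # Because a MetaThought is itself a Thought, reflection can be
    # extended indefinitely (meta-meta-thoughts, and so on).
    def __init__(self, target: Thought):
        super().__init__(f"I am having the thought: '{target.content}'")
        self.target = target

class Agent:
    def __init__(self):
        self.me = {"location": 0}          # the ME-symbol and its relations
        self.thoughts: list[Thought] = []

    def sense(self, environment: dict) -> None:
        # Update ME-environment relations; instantiate a Thought when
        # the relevant condition holds in the environment.
        if environment["obstacle_at"] == self.me["location"]:
            self.thoughts.append(Thought("there is an obstacle where I am"))

    def reflect(self) -> None:
        # Partially stochastic meta-thought generation over existing thoughts.
        for t in list(self.thoughts):
            if random.random() < 0.5:
                self.thoughts.append(MetaThought(t))

    def act(self) -> None:
        # An action with a particular kind of thought as its precondition.
        if any("obstacle" in t.content for t in self.thoughts):
            self.me["location"] += 1       # move away from the obstacle

agent = Agent()
agent.sense({"obstacle_at": 0})
agent.reflect()   # call again for further layers of reflection
agent.act()
for t in agent.thoughts:
    print(t.content)
```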
