
Vegetative State Patients as Moral Patients

https://www.academia.edu/7692522/Vegetative_State_Patients_As_Moral_Patients

Abstract:

Adrian Owen (2006) recently discovered that some vegetative state (VS) patients have residual levels of cognition, enabling them to communicate using brain scanners. This discovery is clearly morally significant, but the difficulty lies in specifying exactly why it is significant and whether extant theories of moral patienthood can explain that significance. In this paper I explore Mark Bernstein’s theory of experientialism, which says an entity deserves moral consideration if it is a subject of conscious experience. Because VS is a disorder of consciousness, it should be straightforward to apply Bernstein’s theory to Owen’s discovery, but several problems arise. First, Bernstein’s theory is beset by ambiguity in several key respects, making it difficult to apply to the discovery. Second, Bernstein’s experientialism fails to fully account for the normative significance of what I call “narrative experience”. A deeper appreciation of narrative experience is needed to account for the normative significance of Owen’s findings.


This paper has gone through so many drafts. I swear I’ve rewritten it 5 times from more or less scratch. Each time I’ve tried to narrow my thesis to be ever smaller and less ambitious because I’m pretty sure that’s the only way I’m going to get this thing passed by my qualifying paper committee. As always, any thoughts or comments appreciated.


Filed under Consciousness, Neuroethics, Psychology

Reflecting On What Matters

1. Introduction

What does it take for your life to go better or worse? One idea is experientialism. For experientialists, what matters is sentience, the capacity to experience pain and pleasure. Experientialists typically appeal to a distinction between moral agency and moral patiency to argue that only sentient beings can be moral patients. The paradigm moral agent is the adult human, capable of both thinking morally and acting morally. Most moral agents are also moral patients because most adult humans are sentient. The paradigm moral patient that is not also a moral agent is a newborn baby or a nonhuman animal. For my purposes, the key doctrine of experientialism is that sentience is necessary for both moral agency and moral patiency.

The goal of this paper is to refute that doctrine and argue that the capacity for reflection by itself is sufficient for both moral agency and moral patiency. In other words, a purely reflective but insentient being would be both a moral agent and a moral patient simply in virtue of their capacity for reflection. Who explicitly denies this? Suchy-Dicey (2009) argues that a being that was reflective but not sentient would not be a moral patient. She states that “autonomy without the potential for experiencing welfare is not valuable…the ability to experience welfare is a precondition for the value of autonomy” (2009, p. 134). Thus, Suchy-Dicey says the value of reflection is parasitic upon sentience but not vice versa. That is, an entity is a moral patient if it is both sentient and reflective, or if it is only sentient—but if an entity is reflective but not sentient then on Suchy-Dicey’s view it does not count as a moral patient. Hence, Suchy-Dicey’s view is characterized by two features:

(1). Value Pluralism: Both sentience and reflection are intrinsically valuable.

(2). Value Asymmetry: The value of sentience for moral patiency is independent of reflection but the value of reflection for moral patiency is dependent on sentience. Thus, if an entity is reflective but not sentient, it is not a moral patient.

I agree with (1) but deny (2). Instead, I will defend the following thesis:

(2*). Value Symmetry: the value of sentience for moral patiency is independent of reflection and vice versa. Thus, an entity that is reflective but not sentient would still be a moral patient.

This paper aims to defend (2*) against (2). To do so, I defend the following argument:

  1. Experientialism assumes that all moral patients and all moral agents are necessarily sentient.
  2. The capacity for reflection by itself is sufficient for both moral patiency and moral agency.
  3. By (2), if a purely reflective being existed, it would be both a moral patient and a moral agent.
  4. Purely reflective beings can exist.
  5. Thus, experientialism is false.

Premise (1) just falls out of the commitments of experientialism. The most controversial premise is arguably (2). To defend it, I will need to do several things. In section 2, I will explain what I mean by “the capacity for reflection”, explain why it’s sufficient for moral agency, and argue that purely reflective beings can exist. In section 3, I will continue by arguing that reflection is sufficient for moral patiency. Doing so will provide the needed ammunition to argue against experientialism.

2. What is reflection?

The paradigm reflective agent is a normal human adult, capable of reflective self-consciousness. Gallagher’s (2010) definition of reflective self-consciousness is a good place to start. He defines it as “an explicit, conceptual, and objectifying awareness that takes a lower-order consciousness as its attentional theme.” Several features of this definition are important for my understanding of reflection. First, reflection must be explicit. A cat might think “I am hungry” but this thought is never explicitly articulated in its mind in the way a reflective human might reflect to themselves, “Boy, if I don’t eat breakfast I’m going to be hungry this evening for sure.” Second, reflection must be conceptual. What I mean is that in order to reflect one must have the concept of “reflection”, or at least some concept of “consciousness”. A cat might have a psyche but it lacks a concept of psyche qua psyche. A reflective creature knows, as it’s reflecting, that it’s reflecting, because it has at least one concept of reflection as such that distinguishes it from other psychological events like behaving or perceiving.

Thus, to reflect in the full sense I intend one must have an explicit understanding of what it means to reflect and the ability to know that you are reflecting when you are reflecting. Furthermore, a distinguishing feature of reflection is that a reflective creature can reflect on just about anything: themselves, trees, rocks, numbers, philosophy, art, reflection itself, evolution, space-time, etc. While there might be some contents that are too unwieldy for human reflective agents to fully reflect on, a defining feature of reflection is its flexibility with regard to the contents of reflective acts. If a reflective agent is relaxed and not pressed for time it can very well reflect on almost anything so long as it has the right conceptual repertoire. Thus, I avoid the term “reflective self-consciousness” because reflective agents can actually take as an object of reflection just about any object or proposition, not just the “self”. Hence, I prefer to talk about “reflective consciousness” i.e. reflection. A feature of reflection closely related to flexibility is the ability to switch between different objects of reflection. A reflective creature, when suitably relaxed, can choose what to reflect on when it wants to. If it wants to reflect on the past, it can; if it wants to reflect on the future, it can.

Phenomenologically speaking, reflection is spatial, selective, and perspectival. Reflection is spatial because if I asked you to reflect on your cat and then your dog you would not imagine them mushed together; you would first reflect on your cat and then “move” on to your dog. All reflection is spatialized in this sense because the objects of reflection are “separated” from each other in mental space. This applies to the most abstract of ideas: if I ask you to reflect on the concept of liberty and then reflect on democracy, there will be “movement” in your act of reflection as you go from idea to idea. Reflection is selective because if I reflect on what I had for breakfast yesterday, I cannot simultaneously reflect on what I want for breakfast tomorrow. Reflection is perspectival because if I reflect on my walk through town yesterday, the reflective act is done from a perspective. If my reflection is veridical I might reflect as if I were peering out of my head, bobbing up and down as I walk, but in all likelihood my reflection will be disembodied, like a camera floating freely through space, able to fly through the city at any speed.

Another feature of reflection is the capacity to explicitly reason and articulate about intentional actions qua intentional actions. To interact with something nonreflectively is to interact with it without explicitly realizing you have done so and without the ability to give a reason why you have done so. Conversely, to interact with something reflectively is to be able to reflect on your reasons for having chosen the action you did and, if needed, to explicitly articulate those reasons. The reasons you give might not be indicative of the true, underlying causal mechanisms of your action; what’s important is the ability to articulate your behavior in terms of intentional actions even if you are confabulating (Nisbett & Wilson, 1977). Moreover, even if your voice box or muscles were completely paralyzed, you would still have the ability to articulate your reasons so long as you can articulate them to yourself, or so long as you possess the knowledge that you could articulate them if you had a means of expressing yourself. Thus, what counts is not so much the literal articulation of reasons but the capacity or potential to articulate reasons for action. By action I mean mental or behavioral action, e.g. you could articulate to yourself why you chose to imagine yourself playing tennis as opposed to imagining yourself walking through your house.

Now that I have explained part of what it means to be a reflective agent, I want to explain why reflective agents are also moral agents, what I call reflective moral agents. Defending the cogency of reflective moral agency will clear the ground for my defense in the next section of reflective moral patiency. It’s relatively uncontroversial that the ability to reflect has instrumental value for moral agents, insofar as reflective creatures can reflect on better ways to help moral patients. But why should reflective agents be moral agents just in virtue of being reflective agents, and not merely because reflection is instrumentally valuable? One reason is that reflective agency is important for realizing many things of intrinsic value according to what have been called “objective list” approaches to intrinsic goodness. Common items on these lists of intrinsically valuable goods include developing one’s talents, knowledge, accomplishment, autonomy, understanding, enjoyment, health, pleasure, friendship, self-respect, and virtue. Arguably reflection is not crucial for all these items, but it is especially important for autonomy, which roughly speaking is the ability to rationally make decisions for oneself and be a “self-legislating will”, i.e. someone who makes decisions on the basis of rules that they impose on themselves. Arguably autonomy involves the capacity for reflection insofar as one cannot automatically or unconsciously self-legislate; to self-legislate in this sense necessarily involves stepping back and reflecting on the type of life one wants to live.

For example, consider the concept of an “advance directive”, a legal document that allows people to specify in advance what end-of-life care they want, e.g. whether they would want to live on life support for more than six months. Suppose your friend Alice had never heard of an advance directive before, nor had she ever considered the question of how she wanted to die. Now if you asked Alice about advance directives and she responded instantly with a “no”, you would be confused. You would say, “How can you answer so quickly? Don’t you need to reflect a little longer on the question?” It would be one thing if she said “Oh, actually I have thought about this before and my answer is still no.” But it would be another thing altogether if she said “I don’t need to think about it – I just went with my gut reaction, and that gut reaction is no.” If she answered in this way you might think she did not understand the moral significance of advance directives, which demand a certain slowness in deliberation in order to be morally relevant.

Consider another example. You notice your friend Bob has grown really close to his girlfriend, Carol. One day you ask Bob if he wants to marry her and he instantly answers “Yes”. Surprised, you ask, “So you have thought about this before?” and Bob says “No, I’ve never thought about it until you asked.” Most people would find this strange because marriage is such a significant life decision that it demands slow, deliberative reflection. To not reflect on such weighty issues indicates a failure of moral agency.

These two examples illustrate a general principle about the crucial role reflection plays in supporting rational, autonomous choice, namely, that such choice must have an element of “slowness”. This kind of reflective autonomy is distinct from the autonomy of, say, cats, who are free to choose between sleeping on the mat or sleeping on the bed. The latter kind of autonomy is what we might call sentient autonomy because it’s possessed by almost all Earthly beings that are sentient. Sentient autonomy is important and distinguishes animals from, say, rocks and dust bunnies, but it is not the only kind of autonomy relevant to moral agency. If there were a being that possessed reflective autonomy but wasn’t sentient, it seems absurd to deny them moral agency. Reflectively autonomous agents would be able to choose to help moral patients regardless of their ability to sensuously feel pleasure or pain. Moreover, their decision procedures would be deliberative in nature, grounded in reasons that they are able to explicitly articulate if necessary.

Consider the fictional character Commander Data from Star Trek. Data is an advanced android with a positronic brain that can compute trillions of operations per second. He is thus hyper-intelligent, processing information faster and more accurately than any human. But even if his brain is a computer, Data is not merely a computer; he is a moral agent just the same as any human. The only difference is that Data is not a sentient being, in the sense that he lacks the bodily consciousness of animals and other fleshy creatures.

Biting the bullet and denying Data moral agency is implausible given that Data was often the wisest and most morally principled of all the crewmembers, not to mention the most valiant in the face of action, as evidenced by his many medals won for bravery and honor in service of Starfleet. If anyone was capable of reflective autonomy, it was Data. It might look from all appearances that he was acting out of normal sentient autonomy, but this is an illusion generated by the sheer speed of his reflective processing. All of Data’s valor and bravery were executed not because of any animal instinct or sentient autonomy but because he made a reflective choice. This is evident from the fact that if you asked Data why he performed action X in situation Y, he would always be able to explicitly articulate a reason for having done so, even if that reason is “Because I was programmed to do so”. The relevant point, however, is that his actions display the flexibility, switching, and autonomy relevant for moral agency, as well as the explicitness characteristic of reflective agency.

3. Reflective Moral Patiency

In this section I will defend the second half of premise (2): the capacity for reflection by itself is sufficient for moral patiency. Any entity that can reflect is what I call a reflective patient. The guiding intuition behind experientialism is that welfare flows from the capacity to experience the world, not the capacity to reflect on the world. However, I contend that if there were a being that was insentient but capable of reflection, it would be wrong to harm them. Take Data again. I contend that it would be wrong to treat Data poorly by intentionally destroying him, neglecting his robotic body, or needlessly destroying his prized belongings. In other words, Data is a moral patient that cannot be treated like a mere physical object.

There are at least two objections someone might have to Data being a moral patient. First, the experientialist might simply balk at the claim that Data cannot feel pain and pleasure. How could his cognitive life be identical to that of a rock or other insentient entities? Surely there is a qualitative or experiential dimension to Data’s existence that distinguishes it from that of rocks and dust bunnies. I would respond that there is indeed a certain “quality” to Data’s information processing, but I’m not convinced we are forced to say such information processing is “experiential” unless that just means “has a quality”, which would trivialize the notion. I can grant that the quality of Data’s positronic brain as it reflectively operates is different from the quality of a rock because of its informational complexity, without supposing the quality is due to the information processing being experiential in the way an animal’s sensuous pleasure or pain is experiential. In effect, I’m proposing that an entity could have the quality of being a reflective thinker without being a subject of phenomenal experience.

The second objection is that moral patiency plausibly flows from an entity having interests that can be either satisfied or frustrated. Didn’t Data have interests and aspirations like anyone else, however “robotic” or “inhuman”? If Data merely engages in reflective thought but lacks any interests, the objector might say, then it’s implausible that his life could be made better or worse, and thus he would not count as a moral patient. And since Data intuitively is a moral patient, his patiency must be due to a kind of experiential welfare, as per experientialism. The underlying assumption seems to be that unless a cognitive capacity is experienced, it cannot be intrinsically valuable and thus cannot be a suitable locus for moral patiency. Call this the Principle of Experience (PE). Kahane and Savulescu endorse a version of PE, writing that “phenomenal consciousness is required if a person is to have a point of view, that is for the satisfaction of some desire to be a benefit for someone” (2009, p. 17). The intuition behind PE is that what makes it permissible to randomly shoot a rock and impermissible to randomly shoot an animal is that rocks lack phenomenal experiences that can be negatively or positively affected.

However, I believe this objection fails to fully grasp the distinction between reflective patiency and sentiential patiency. Data can be a moral patient so long as we are careful to distinguish “bottom-up” interests, which stem from animalistic sentience, from “top-down” interests, which stem from reflection. It’s debatable whether Data has genuine bottom-up interests, but undeniable that he has top-down interests due to his capacity for complex, reflective thought. For example, Data might not have a sentient instinct to avoid pain, but he can reflectively think “I do not want to be destroyed.” Data could surely sign an advance directive, and his signature would be morally relevant because he can explicitly articulate and reason about his decision. It would be wrong to intentionally destroy or mistreat Data not because he can experience the mistreatment but because it would violate his reflective interest in continuing to exist. If Data signed an advance directive, it would be wrong to intentionally ignore it for the exact same reason it’d be wrong to intentionally ignore a human’s advance directive.

Another kind of thought experiment supports the intuition that reflective consciousness is relevant to moral patiency independently of its relation to sentience. Consider a hypothetical scenario in which a chimpanzee and a chicken are in a burning building and you can only save one. Other things being equal, it seems overall better to save the chimpanzee because, although both the chicken and the chimp are sentient, arguably the chimp has a greater amount of proto-reflectivity that is intrinsically valuable. Similarly, if the choice was between a chimpanzee and an adult human, it seems overall better to save the human for the same reason: the human is both sentient and reflective. Furthermore, suppose your mother or father were dying and the doctors said they could save their life only on the condition that they would be insentient but reflective. They would be able to converse intelligibly, write emails, thoughtfully answer questions about their own folk psychology, cook dinner, and otherwise act like perfectly normal people, except they couldn’t experience pleasure or pain. Would you accept the offer? It seems absurd not to. The rich, multidimensional intelligence associated with reflection is valuable independently of any contingent relation to sentience. These thought experiments lend credence to the thought that moral status comes in degrees and that reflective moral agents that are also sentient carry what some philosophers call “Full Moral Status” (Jaworska & Tannenbaum, 2013). Moral patients that are merely sentient carry less than full moral status because they are not reflective patients.

Conclusion

I’ve argued that experientialism is false because it assumes that all moral patients and all moral agents are necessarily sentient. In contrast, I’ve attempted to open up the conceptual space by arguing that the capacity for reflection by itself is sufficient for both moral agency and moral patiency.

 

References

Bernstein, M. H. (1998). On Moral Considerability: An Essay on Who Morally Matters. New York: Oxford University Press.

Farah, M. J. (2008). Neuroethics and the problem of other minds: implications of neuroscience for the moral status of brain-damaged patients and nonhuman animals. Neuroethics, 1(1), 9-18.

Jaworska, A., & Tannenbaum, J. (2013). The grounds of moral status. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2013 Edition). URL = <http://plato.stanford.edu/archives/sum2013/entries/grounds-moral-status/>.

Kahane, G., & Savulescu, J. (2009). Brain damage and the moral significance of consciousness. Journal of Medicine and Philosophy, 34(1), 6-26.

Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231.

Regan, T. (1986). The case for animal rights. In P. Singer (Ed.), In Defense of Animals (pp. 13-26). New York: Basil Blackwell.

Suchy-Dicey, C. (2009). It takes two: Ethical dualism in the vegetative state. Neuroethics, 2(3), 125-136.

 


Filed under Consciousness, Neuroethics, Philosophy

Some Thoughts on Moral Status

John Doris suggested to me that the concept of “moral status” is probably more complicated than many realize. A common framework for understanding what it means to have moral status is the two-fold moral agent/moral patient framework. Like most concepts, this framework is best illustrated via example. The paradigm moral agent is the adult human. The paradigm moral patient is a newborn baby. The moral agent is capable of thinking morally and acting morally. When a moral agent acts morally, they usually do so with a patient in mind. Moral agents typically do not act morally towards bits of garbage. We simply toss them in the trash because they are mere material objects. They lack moral status, for they are not moral patients. Chimpanzees, by contrast, are arguably moral patients. It would be wrong to toss a chimp in a giant garbage compactor because the chimp is a moral patient towards whom moral agents have duties, e.g. the duty not to needlessly or purposely harm patients. If a psychopath were to stab a chimp for the fun of it, this would be wrong. The psychopath is a defective moral agent, one who is failing to do his or her moral duty towards moral patients.

The moral agent/patient distinction is a fine one, but as a philosopher my job is often to expand or elaborate on the hidden complexity a seemingly simple concept affords. So here goes.

The problem with an overly simplistic moral agent/patient distinction is that it tends to classify all moral patients as sentient beings, which on Earth most people think includes at least all mammals. All (normal) mammals are moral patients because they can feel pain, and moral agents have a duty not to inflict pain on moral patients unless they have a compelling reason to do so. However, I tentatively propose a new taxonomy of moral status, which I formulated haphazardly last night. It’s rough, so bear with me.

First, I propose there are two types of moral agents: reflective agents and sentient agents. An example of a reflective agent is a normal adult human. An example of a sentient agent is a cat. If you are capable of reflective thinking, you are a reflective agent. Typically, reflective agents are also sentient agents.

Second, I propose there are two types of moral patients: reflective patients and sentient patients. Again, an example of a reflective patient is a normal adult human. Adult humans are often in need of help from other moral agents, so they are both agents and patients at the same time. An example of a sentient patient is a cat. If you can feel pain or pleasure then you are a sentient patient. A cat is not capable of reflective thinking, yet it can feel pain and pleasure, so moral agents have a duty not to harm cats without a compelling reason.

Arguably the weirdest category is the sentient agent. How can a cat be a moral agent if it cannot reflectively think? The answer is that you can do a lot of good in the world without being able to reflect. Consider a mamma cat’s relationship to her newborn kittens. The kittens are sentient patients but not sentient agents. The kittens need help from mamma cat, and mamma cat normally has responsibilities towards her kittens, although in the real world a mamma cat, like other animals with litters, will by necessity focus her powers on helping a subset of her litter.

From our new taxonomy of moral status we can now discuss different kinds of value. I propose there are two main types of value associated with each of the above types of agents. For reflective agents, there are two types of value: intrinsic reflective value and derived reflective value. An example of something with intrinsic reflective value is the act of reflective thought itself: it is valuable in its own right, and it can potentially lead to a lot of good actions not possible otherwise. It would be wrong to needlessly destroy an adult human brain because that brain is the seat of reflective thinking.

An example of something with derived reflective value is a baseball signed by Babe Ruth. This baseball, though a mere physical object, has derived value because it is valued by some reflective agents, namely, baseball fans. It would be wrong to throw that baseball into the trash (without good reason) because this would cause harm to some reflective agents.

Turning to sentient agents, there are also two corresponding types of value: intrinsic sentiential value and derived sentiential value. An example of something with intrinsic sentiential value is the pleasure a dog feels as it is chewing on its favorite chew toy. My favorite category is derived sentiential value because it creates interesting overlaps. That very same baseball signed by Babe Ruth has the potential to possess derived sentiential value. Suppose a rich baseball fan has ten baseballs signed by Babe Ruth and decides to give one to his dog, Spike, to be used as a chew toy. The baseball becomes Spike’s favorite chew toy. It would be wrong to needlessly destroy that baseball, but not because of its derived reflective value, since Spike cannot reflect and cannot appreciate how much it would be valued by other, not-so-rich baseball fans. What Spike can do, however, is value that baseball as a chew toy. Thus, the baseball has derived sentiential value because it is valued by a sentient creature.

From the above, we can generate two new types of patients: derived reflective patients and derived sentiential patients. The Babe Ruth baseball can be an example of both. If the baseball was the property of a normal, reflective baseball fan, it would be wrong to destroy it because it is highly valued by a reflective agent/patient. If the baseball was the property of Spike the dog, then it would be wrong to destroy it because it is highly valued as a chew toy by a sentient agent/patient.


Filed under Moral Philosophy, Philosophy

Reflections on My Dislocated Shoulder: Two Types of Pain and Their Moral Significance

I recently dislocated my right shoulder and not surprisingly this experience has caused me to reflect on the nature of pain. In this post I will use my own experience coupled with a thought experiment to argue for two distinct types of pain: reflective pain and nonreflective pain. Having spelled out this distinction, I will raise some difficult questions about their respective moral significance.

Reflective Pain

If you are right-handed like me, a dislocated right shoulder is an injury that occasions reflective pain par excellence. In essence, reflective pain is pain that interferes with your day-to-day functioning by causing you to consciously reflect on it more than normal. Everything is now harder and more painfully deliberate: taking a shower, putting on clothes, hugging my wife, wearing a backpack, opening a beer, etc. The thousands of micro-tasks I typically performed with my dominant hand in coordination with my non-dominant left must now be performed awkwardly with my left hand alone in order to minimize pain in my right shoulder. This has slowed my daily productivity significantly. For example, as a grad student and denizen of the 21st century, I spend much of my time on a laptop. It’s amazingly slow to type with only your left hand on a QWERTY keyboard. You type at significantly less than half your normal speed, because you not only have fewer fingers to work with but also have to stretch them across the whole keyboard. This has made day-to-day academic housekeeping and research painfully tedious in a literal sense.

Thus, the salient feature of reflective pain is that you can’t help but reflect on it because throughout the day you are continually reminded of your injury every time you go to do something that you previously would have done without hesitation. Now every motor intention is tentative and the perception of thousands of lost affordances is palpable. Reflective pain intrudes and interferes with your thought processes because you are acutely aware of the bodily powers you have lost and the pain that has replaced them.

What about nonreflective pain?

Nonreflective Pain

Nonreflective pain is quite different from reflective pain. Imagine you are walking across a desert, keenly intent on getting to the other side. It’s sweltering hot so you expose your back to the air. In so doing you introspectively notice a pain sensation localized to a patch of skin on your back. You can’t remember how long that pain sensation has been there. The pain isn’t screamingly intense, nor does it burn or throb. It’s more like a light tingle or steady buzz. It doesn’t itch and you feel no compulsion to reach behind you and scratch or rub it. In fact, the pain seems to be minimized by simply leaving it alone. The pain is localized such that the movement of your muscles and skin across your skeleton doesn’t exacerbate it. In fact the pain doesn’t interfere with your walking at all.

The pain doesn't necessarily command your full attention, and often, when you are absorbed in watching out for rattlesnakes or crossing tough terrain, you entirely forget it is there. It's only when you reach flat, easy ground again and your mind begins to wander that you notice the pain once more, buzzing with the same steadiness as always.

As you walk you begin to use the pain as a source of introspective entertainment. It becomes more an interesting sensation to play with than a genuine nuisance. The pain is neither pleasant nor unpleasant; it's simply there. You can choose to attend to it or not. You can describe the sensation and localize it to a particular patch of skin, but you don't mind it; it doesn't bother you. In fact you have grown to like it because it gives you something to reflect on as you walk mindlessly across the desert. What's interesting is that when you are not reflecting at all but entirely in the flow of walking, the pain is not consciously noticed. There is seemingly no conscious awareness of the pain while you are absorbed in walking; there is only the ground before you and your movements. But even if you don't consciously attend to the pain, it is (presumably) there nonetheless. It's a steady sensation, but it seems, then, that not all sensations are necessarily conscious. This is what David Rosenthal might call "nonconscious qualia". If you didn't introspect and reflect on the pain sensation, it's hard to imagine it interfering with your cognitive functioning except at the grossest level of physiological nociception.

The Ethics of Pain

Now that I've distinguished these two types of pain, I want to ask a series of rhetorical questions. Do animals have reflective pains, or are all their pains nonreflective? If some animals have reflective pain, which ones? All of them, or only highly intelligent animals like apes, dolphins, and elephants? What about fish, insects, rats, and cats? What is the evolutionary function of reflective pain, if it even has one? Is nonreflective pain just as morally significant as reflective pain? If we knew that a vegetative state patient had nonreflective pain, would clinicians be obligated to give them pain medication?

Perhaps these are bad questions because the distinction is a false dichotomy, or conceptually or empirically mistaken. Maybe it’s a matter of degree. But it seems intuitive to me that there is something morally distinctive about the type of pains that cause us suffering and anguish on account of our reflecting on them and not just in virtue of the first-order sensory “painfulness” of them. I don’t mean to suggest that first-order painfulness has no moral significance but it seems to me that it should be weighted differently in a utilitarian calculus.

2 Comments

Filed under Consciousness, Philosophy, Psychology

Draft of Latest Paper – Awake But Not Aware: Probing For Consciousness in Unresponsive Patients


Ok everyone, here's a paper I'm really excited about. The topic is so "me" — the first project I've wholeheartedly thrown myself into since I came to Wash U. I can see myself wanting to write a dissertation or book on the topic, so this paper will likely serve as the basis for a prospectus in the near future. The issue I'm dealing with in the paper sits at the intersection of a variety of fields, ranging from philosophy of mind and philosophy of science to cutting-edge neuroscience, clinical neurology, and biomedical ethics. I could conceivably "sell" the project to a variety of people. The project is obviously at an early stage of development and the paper is drafty, but I have the rest of the semester to work on this, so I'm open to any comments, criticisms, or questions. Thanks!

For PDF of paper, click here –> Williams-AwakeButNotAware-Draft-3-03-14

Here’s a tentative abstract:

The standard approach in clinical neurology is to diagnose disorders of consciousness (DOC) on the basis of operationally defined behaviors. Critics of the standard approach argue that it relies on a flawed behaviorist epistemology that methodologically rules out the possibility of covert consciousness existing independently of any observable behavior or overt report. Furthermore, critics point to developments in neuroimaging that use fMRI to “actively probe” for consciousness in unresponsive patients using mental imagery tasks (Owen et al. 2006). Critics argue these studies showcase the limitations of the standard approach. The goal of this paper is to defend the standard approach against these objections. My defense comes in two parts: negative and positive. Negatively, I argue that these new “active probe” techniques are inconclusive as demonstrations of consciousness. Positively, I reinterpret these active probes in behavioral terms by arguing they are instances of “brain behaviors”, and thus not counterexamples to the standard approach.

Leave a comment

Filed under Academia, Consciousness, Philosophy, Philosophy of science, Psychology

The Immorality of Catholic Confessional

A Roman Catholic priest created an Ask Me Anything thread the other day on Reddit. One redditor asked the following question:

“If a man came to you in confessional and admitted to murdering someone and shares intent to do it again, do you go to the police or do you respect the rules of confession? If you read in the paper that he did it again the next day, how would you feel? I went to Catholic school for 12 years and this has been my favorite question to ask of priests since I was really young, because the answer actually varies.”

Surprisingly, this is how the priest answered:

“The seal of the confessional is inviolate, even if the person has murdered someone.”

This flabbergasted me. The immoral stupidity of such an absolutist rule can easily be demonstrated with a thought experiment that takes the "inviolate seal" to its logical extreme. Let's say the confessor admits to the priest that he is planning to murder 1 billion people tomorrow with a doomsday device. If the Roman Catholic church still thinks it's more important to keep the seal of the confessional inviolate than to prevent the death of 1 billion people, then I believe this is a reductio of the principle of the confession.

But, you might object, in order to make it a genuine confession, the confessor must genuinely repent, and you can’t really repent if you consciously plan on committing the sin you are repenting for tomorrow. So it wouldn’t be a real confession. But we need only tweak our thought experiment. Imagine the confessor has a Jekyll and Hyde personality (realistically, this could be done through hypnosis or dissociative identity disorder) and it is the good personality confessing what he thinks the bad personality is going to do. The confessor says, “I am genuinely sorry for this, but I know that I am still going to set off that doomsday device tomorrow because I can’t help it”. Would the seal of the confession still be inviolate? If so, then I think I have provided a reductio of the principle, since it seems obviously absurd to value the principle of the seal over the lives of 1 billion people (or 10 billion, it doesn’t matter for purposes of the thought experiment). Derek Parfit calls this the “Law of Large Numbers”. When you deal with extremely large numbers of lives, then “common sense” moral principles tend to wither under the pressure. If you really considered yourself a moral person, and you believed in a moral God, then surely you would reason that it’s more just to violate the seal and save 1 billion people. Upholding the rule for the sake of upholding the rule is immoral if you cannot give a justification that outweighs the prima facie reasonableness of saving 1 billion lives.

3 Comments

Filed under Theology

Should We Value Happiness? Subjectivism and Objectivism in Metaethics

This is another post inspired by the discussions we've been having in the Derek Parfit seminar. Metaethics seems to me a very difficult thing to talk about, so bear with me as I work this out in writing. The question debated in class today revolved around the distinction between Hard and Soft Naturalism. Hard Naturalism is the view that there are just natural facts and that we do not need distinctively normative language; we can jettison normative talk and use purely natural descriptions. Soft Naturalism is the view that there are just natural facts, but that we still need irreducibly normative language. Parfit thinks that Hard Naturalism makes normativity trivial, and he thinks Soft Naturalism is incoherent because, on his view, Naturalism is committed to a thesis about reduction. Just about everyone in the class was unsatisfied by Parfit's arguments against both positions. Most people thought that we could somehow rescue normative language from purely naturalistic properties. That is, people were saying that we could dispense with talk of spooky irreducible Non-Natural normative facts and still be just fine in producing genuinely normative claims about what we ought to do.

Now, I'm certainly not endorsing any talk of spooky irreducible Non-Natural normative facts. But I'm not really sure a complete reduction of ought-statements is plausible. How does talk about atoms in the void get you to ought-statements? Well, the thought goes, once you start talking about the biological and societal levels of reality, you can get statements about what it is most natural for humans to desire, and we can translate ought-statements into statements about how to maximize happiness in sentient organisms in virtue of well-known facts about the subjective preferences of organisms. This is basically the idea behind Subjectivism. Presumably, the thought goes, it's rational to do what one ought to do. And what is it that we ought to do? Subjectivism says we should satisfy the desires of ourselves and others under conditions of ideal information and deliberation.

Imagine a man who genuinely wants to chop off his pinky finger. He has all the relevant information about what would happen to his subjective well-being if he cut off his finger, and he isn't deluded or out of his mind. He simply has a genuine desire to cut it off. Here's the question: would he have a good reason for cutting off his finger? The subjectivist position is this: the man would have a good reason to cut off his finger because doing so would satisfy his desire, and rationality is about desire satisfaction. The objectivist would say that the man has no good reason to cut off his finger: having a desire for something is not enough; one must have good reasons for wanting to do it.

Most people in class seem to think that Subjectivism is the right way to go, because it seems to be the only plausible theory compatible with naturalistic metaphysics. But I'm convinced there is a serious problem with Subjectivism and all other forms of noncognitivism, expressivism, quasi-realism, and every other desire-based, Humean story: they all assume that all humans share the same values. Subjectivists make the following argument. They say that we can use naturalistic facts about what the average human desires, and use these facts to tell us what we ought to do. On this view, supposedly spooky nonnatural normative facts are just regular natural facts; it's just that these natural facts are about making animals happy or satisfying desires. But here's the thing: Subjectivism does not seem capable of giving, or even interested in giving, a rational justification for the desire for happiness, or for any other bottom-level desire.

And here is where I think Parfit is really onto something when he says that, for Subjectivism, nothing really matters. Notice that in the subjectivist explanation of the man who wants to cut off his finger, the justification looks like this: he wants to cut off his finger, and it's rational because he has a desire to do so. There is no need for the man to justify to Subjectivists why he desires this. He has thought about it long and hard, considered all the consequences, and he still desires to do so. Likewise with claims about happiness. Why ought we to promote happiness? The subjectivist says we should promote happiness because we all fundamentally desire it. So the normative force of the moral principle "maximize happiness" stems from facts about what we, as typical humans, desire.

But why should we value happiness, as opposed to unhappiness? Why should we value life, as opposed to nonlife? If the suicidal person genuinely wants to end his life, how would appealing to the descriptive fact that most humans value life give the suicidal person a reason to not end his life? It just doesn’t seem to have any normative oomph to point to the descriptive fact about what typical humans under typical conditions value. The question is why should we value the things that we value. Should we value life? Should we value happiness? What reasons do we have for valuing such things?

This is why I do not think the complete reduction to preferences works. We cannot reduce the statement "One ought to value happiness" to statements about the natural fact that most people in fact value happiness. What if we just emphasized that, look, given that most people do in fact value happiness, doesn't that provide enough reason to, say, prevent the killing of innocent life? Parfit's answer is no. If rationality bottoms out at the level of desire satisfaction, and we can tell no justifying story about why we should have the bottom-level desires we have, then nothing really matters except the satisfaction of those desires. But take someone who does not share the values of typical humans. Let's say a man desires to kill an innocent person. Are we really just going to say that the only reason he is irrational is that he doesn't have typical human preferences, that he is merely biologically unusual?

I think Reason can do better than that. But as I emphasized in my last post, I think the Objectivist story about rationality only works with Human Rationality, which is distinct from the instrumental rationality we share with nonhuman animals. And this is why I don’t think evolutionarily inspired arguments for moral nihilism work. Such arguments would go through if the only form of rationality humans possessed was instrumental rationality. But humans are not limited to just that form of rationality. Human Rationality is capable of reflecting on the very bottom layer of human valuation and asking, yes we do in fact value happiness, but should we? Do we have good reasons for doing so beyond just appealing to the brute fact that we very often do in fact desire such things? Don’t we want more out of our moral theory than a translation of natural facts about what we already know we desire? Don’t we want our moral theory to tell us something above and beyond the natural facts? Don’t we want our theory to tell us what we ought to do, what we ought to value?

I don’t think any of this requires talk of spooky nonnatural properties. It requires only a proper understanding of what it is exactly that Human Reason is up to when it enables humans to augment their decision making and go beyond instrumental rationality.

1 Comment

Filed under Philosophy

The Argument From Marginal Cases For Animal Rights

As of late, I’ve been getting really interested in animal rights philosophy, not because I’m close to turning into a vegan or anything, but simply because I find philosophical arguments that depend on comparative animal psychology to be really interesting. And I’ve been interested in the philosophy of animal minds for a long time, so the connection to my research is obvious. In particular, the Argument from Marginal Cases (AMC) really interests me. The AMC is one of the primary arguments used to support the idea that nonhuman animals have rights just the same as humans.  I found the following summary of the AMC in a paper by Daniel Dombrowski:

1. It is undeniable that [members of ] many species other than our own have ‘interests’ — at least in the minimal sense that they feel and try to avoid pain, and feel and seek various sorts of pleasure and satisfaction.
2. It is equally undeniable that human infants and some of the profoundly retarded have interests in only the sense that members of these other species have them — and not in the sense that normal adult humans have them. That is, human infants and some of the profoundly retarded [i.e. the marginal cases of humanity] lack the normal adult qualities of purposiveness, self-consciousness, memory, imagination, and anticipation to the same extent that [members of ] some other species of animals lack those qualities.
3. Thus, in terms of the morally relevant characteristic of having interests, some humans must be equated with members of other species rather than with normal adult human beings.
4. Yet predominant moral judgments about conduct toward these humans are dramatically different from judgments about conduct toward the comparable animals. It is customary to raise the animals for food, to subject them to lethal scientific experiments, to treat them as chattels, and so forth. It is not customary — indeed it is abhorrent to most people even to consider — the same practices for human infants and the [severely] retarded.
5. But absent a finding of some morally relevant characteristic (other than having interests) that distinguishes these humans and animals, we must conclude that the predominant moral judgments about them are inconsistent. To be consistent, and to that extent rational, we must either treat the humans the same way we now treat the animals, or treat the animals the same way we now treat the humans.
6. And there does not seem to be a morally relevant characteristic that distinguishes all humans from all other animals. Sentience, rationality, personhood, and so forth all fail. The relevant theological doctrines are correctly regarded as unverifiable and hence unacceptable as a basis for a philosophical morality. The assertion that the difference lies in the potential to develop interests analogous to those of normal adult humans is also correctly dismissed. After all, it is easily shown that some humans — whom we nonetheless refuse to treat as animals — lack the relevant potential. In short, the standard candidates for a morally relevant differentiating characteristic can be rejected.
7. The conclusion is, therefore, that we cannot give a reasoned justification for the differences in ordinary conduct toward some humans as against some animals

So here’s why I think the AMC is rather weak.

I don't have any problems with premise (1). Premise (2) is already problematic, though. The claim is that "human infants and some of the profoundly retarded [i.e. the marginal cases of humanity] lack the normal adult qualities of purposiveness, self-consciousness, memory, imagination, and anticipation to the same extent that [members of ] some other species of animals lack those qualities." While it is undoubtedly clear that a human baby possesses less self-consciousness, imagination, and anticipation than a human adult, there is a lot of evidence that human babies are remarkably well-developed cognitively; they just lack the capacity for expression. So a human baby is certainly more intelligent than a chicken, and possibly more intelligent than a cow. The problem is that human babies have no way to express their intelligence, since they can neither speak yet nor use their motor skills to communicate. But subtle experiments demonstrate the extent of their cognitive sophistication.

Moreover, the AMC ignores an obvious extension of the "marginal case" of the human baby: human fetuses. It seems that many speciesists would not include human fetuses in the moral sphere precisely because of how marginal their cognition is. And the development of human-like cognition is one of the markers for where we start drawing the line on abortion: the more developed the brain becomes, the less we feel it's right to abort. It could even be said that birth itself is an arbitrary cut-off point. If a baby were born without any brain, it's likely we would not include that baby in the moral sphere and would mercifully end its life without its explicit consent.

But what about mentally impaired people, such as those with severe autism or Alzheimer's? Clearly these individuals lack the uniquely human cognitive capacities that characterize a normal human adult, yet we don't treat them like cattle. Isn't this inconsistent? Hardly. In the case of most autistic children, I believe the evidence shows that they have either a reduced human cognitive skill set or a different one, but it is rare that they have no skill set at all. I daresay your average autistic child is more cognitively sophisticated than a chicken, and the same goes for your average Alzheimer's patient, who, for the majority of the disease's progression, has a reduced cognitive skill set but not a missing one. And when such persons do eventually lack consciousness completely, why would a speciesist assume that they retain full moral rights? Personally, if I ever developed Alzheimer's, I would hope that my society permitted assisted suicide or mercy killing once I reached a totally advanced stage of the disease. Likewise for vegetative coma patients. It seems that humans who totally lack consciousness are not as fully included in the moral sphere as, say, a normal human adult. This explains our attitudes toward those in comas with no foreseeable chance of recovery.

Thus, I think premise (3) is wrong in almost all cases. Moreover, we can use a different strategy to show why it's consistent for a speciesist to treat newborn infants differently from cattle: counterfactual biological development. Under normal healthy circumstances, a human infant will grow into a cognitively sophisticated adult; under equally healthy circumstances, it is very unlikely that a cow will. And if a cow ever does mutate and develop the ability to talk rationally and engage humans in high-level moral conversation, then we should include that cow in the moral sphere. But what about someone with severe mental retardation who has no potential to grow into a normal adult? Well, as I said before, it's doubtful that most such children are as cognitively limited as a cow or chicken. Moreover, we can engage in a counterfactual analysis: it would have taken a far smaller genetic difference for such a child to have been born with the potential to grow into a normal adult than it would for a chicken or cow. A cow would need a total restructuring of its genome to produce a brain capable of learning human-like cognitive skills. So the counterfactuals are in fact quite different.

And there is another point where the AMC fails: it paints a false dichotomy whereby either animals have rights equivalent to adult human rights or they have no rights at all. This is false, because we can imagine a continuum of rights rather than an on/off switch. It makes sense to me that although a bonobo or dolphin has fewer rights than a human adult, it has more rights than a chicken, and a chicken has more rights than an oyster. I would never treat a bonobo as I would a chicken or a mosquito, but neither would I treat a bonobo as a human child or adult. If there were a burning building, I would rescue a normal human adult or child over a bonobo, but I would rescue a bonobo over a chicken. Moreover, it's false that this reasoning is arbitrarily speciesist, because I would rescue a bonobo over a vegetative coma patient or a human fetus.

Now I want to discuss premise (6): human uniqueness. I see it claimed a lot in the animal rights literature that the attempt to find uniquely human cognitive attributes has failed. Oh yeah? What about the set of cognitive attributes that allows you to send a robot to Mars? Or write a philosophy book?* Although there are certainly many similarities between humans and nonhuman animals, I just don't take seriously anyone who denies the obvious and vast differences. A robot to Mars! Seriously! For those skeptical of human uniqueness, I highly recommend Michael Gazzaniga's excellent book Human: The Science Behind What Makes Us Unique. As evidenced by practically everything in our culture, as well as by particular neural structures and functions, we are different not just in degree but in kind. And even if the difference were only one of degree, its magnitude still warrants the conclusion of human cognitive uniqueness. See this post for more.

So yeah, imo, the AMC has so many problematic premises it can barely even get off the ground as a convincing argument.

*Edit: I've realized that someone might wonder why the ability to send a robot to Mars is morally relevant. I don't think it is. But the type of creature capable of sending a robot to Mars is probably also capable of moral deliberation and reflection, which certainly seems to me a candidate capacity for bestowing moral worth. Since I do in fact place some value on basic organic sentience, moral reflection is clearly not the source of all human worth, but I do think it grounds the majority of it. In fact, I think moral reflection (a skill enabled by reflective consciousness) is of such importance that it generates moral value in terms of the counterfactuals of biological potential.

5 Comments

Filed under Consciousness, Philosophy, Psychology

Just how far should we expand the circle of ethics?

Right now I am reading Peter Singer's book The Expanding Circle. It's a good book so far: clear, well-argued, and written with a sense of moral urgency. The central argument is that because ethical reasoning works on the basis of impartiality, it would be arbitrary to restrict the moral community to a single group, such as your own tribe, gender, or race. Hence, the evolution of morality over the years is moving (and will hopefully continue to move) in the direction of ever greater impartiality, as seen in societal advances such as abolition and women's rights. However, Singer also argues that we should expand the circle of ethics beyond the human realm to all other sentient creatures capable of feeling pleasure or pain. Restricting ethical consideration to humans, he argues, would be just as arbitrary as restricting it to a certain class of humans.

But then how far down the evolutionary continuum should we go? Singer thinks we should probably draw the line around oysters and the like, since it seems implausible that oysters are capable of feeling pleasure or pain. And Singer definitely thinks we should not expand the circle to include inanimate entities like mountains or streams. So what’s so special about the ability to feel pleasure or pain? Singer thinks that this capacity is a nonarbitrary dividing line because it’s something that humans can take into consideration. On what basis could we include mountains and streams into our moral deliberation? There seems to be none. But the fellow capacity to feel pleasure and pain seems like a good candidate.

This is where I must disagree with Singer. I simply don't see what's so morally special about the ability to detect cellular damage, and that is all pain perception really is: an evolved biological mechanism that registers damage to cells and relays that information to the appropriate motor centers to move the creature out of harm's way, thereby increasing the creature's biological fitness and maintaining its homeostasis and organizational structure. Vegetarians like Singer loathe this line of thinking because it brings to mind the old practice of torturing cats and dogs, justified by Descartes' argument that animals, being unfeeling mechanisms, can't really feel pleasure or pain. But I don't think the permissibility of wanton torture follows from the idea that pain perception is just a simple biological mechanism for damage detection. Even if it is permissible to use animals for food, it doesn't follow that it's permissible to torture them for fun; we might still be obligated to treat them with respect and try to reduce their pain to its absolute minimum. But, personally, I believe that merely having the capacity to feel pain does not place a creature in the moral category whereby it becomes impermissible to use it as food for humans.

I've heard it claimed that this kind of speciesism is unjustifiable if we consider the cognitive capacities of humans who are extremely mentally handicapped or incapacitated. Since I presumably think speciesism is justifiable because humans are cognitively superior to nonhuman animals, it should then be ok to treat cognitively inferior humans just as we treat cattle. Since we wouldn't think it's ok to do this, cognitive superiority alone cannot justify the way we treat nonhuman animals. My immediate response is that there is a difference between entities who, had everything been biologically optimal, could have developed to the human cognitive level, and entities who could never reach that level even under optimal biological conditions. This principle of potentiality is enough to show that it's nonarbitrary to treat cognitively incapacitated humans differently from nonhuman animals.

There's another point I want to make about the moral worth of pain itself. How could pain be of such importance when nonhuman animals themselves seem comparatively indifferent to it, relative to the typical human response? I read in Euan MacPhail's The Evolution of Consciousness that there have been field reports of chimps getting into fights with other males, having their testicles bitten off, and immediately afterwards being capable of having sex with a female. I doubt any human is horny enough to ignore the pain of genital mutilation just to have sex. On the basis of this observation, we can infer that chimp pain perception is different from the awareness of pain that humans possess. And since chimps are seen by people like Singer as the animals most worthy of our ethical consideration, what does this say about the pain capacities of animals even lower down the totem pole? Nonhuman animals don't seem to "care" about their pain to the same extent that humans do. Caring about pain, as opposed to pain itself, goes by another name: suffering, i.e., meta-consciousness of pain. While it is plausible that some nonhuman animals have the capacity for a kind of protosuffering, it seems clear to me that human suffering is of a sophistication far beyond that of any nonhuman animal. Now, I don't have a clear argument for why human suffering is more morally valuable than the mere pain of nonhuman animals, but it is at least a nonarbitrary cutting-off point, and one with a kind of intuitive support.

However, I don't think the moral worth of human suffering over nonhuman pain is enough to justify the claim that nonhuman pain has no moral worth at all. As a matter of fact, I agree with Singer that the pain of nonhuman sentient beings does have some moral worth, and that we are obligated, ultimately, to reduce that pain. For this reason, if I were presented in a supermarket with the choice between real beef and artificial beef grown in a lab, I would choose the artificial beef. So the only reason I am still a meat-eater is that the right technology has not been invented yet. As soon as that technology becomes available (and they are working on it), I will gladly give up eating meat. But since I believe that eating meat is a very healthy way to get protein and animal fats into my diet, I do not think the current pains of nonhuman animals are enough to overcome the selfishness involved in maintaining my own health, for I value my own life over those of nonhuman animals. Again, this is not because I place no value in nonhuman life. In my ideal world, not a single sentient entity would ever feel unnecessary pain. I feel predation to be evil, but I nevertheless eat animals for health reasons. If I sincerely thought vegetarianism were healthier than an omnivorous diet, I would be a vegetarian (which would be nice, because it would line up with my beliefs about the evils of predation). But since I am a speciesist and value human life more than nonhuman life, I think it is permissible to continue my practice until artificial meat becomes widely available. I'm aware of the possibility that this reasoning is nothing more than a post-hoc rationalization of my comfortable meat-eating habits. But I do think there is a nonarbitrary argument for speciesism that makes the exclusion of nonhuman animals from the moral sphere far less arbitrary than the exclusion of subclasses of humans.

Contra Singer, I don't think speciesism is equivalent to racism or sexism.

11 Comments

Filed under Philosophy