Tag Archives: Philosophy

The Vegetative State as an Interactive Kind

Note: This is the introduction to the draft of my dissertation prospectus.

Doctors diagnosing the vegetative state have always found themselves embroiled in scientific and ethical controversy. Over the last several decades, the diagnosis of the vegetative state has stirred the public imagination and the writings of bioethicists in a way that few other diagnoses have. Take the example of Terri Schiavo, who suffered cardiac arrest in 1990 and subsequently lapsed into a coma from lack of oxygen to her brain. After months without recovery she was formally diagnosed with the vegetative state (VS), a condition doctors describe as a state of “wakeful unawareness”. In this state Schiavo opened her eyes and appeared to be awake but showed no clear-cut intelligent, contingent behavior in response to any stimulation or human interaction. Contingent behaviors are behaviors that occur as appropriate responses to the behavior of other people or objects, e.g., if someone sticks out their hand, the appropriate behavior (in some contexts) is to shake it. Though Schiavo showed no contingent behavior, she did show reflexive behaviors such as laughing, crying, and random movements of her eyes and limbs. After years without recovery from VS, her husband Michael petitioned the courts for permission to remove her artificial feeding and hydration.

However, when videos of Terri’s wakeful behaviors were released to the public, they provoked widespread outrage among those who considered the removal of life support to be the immoral killing of a living human being. Towards the end of her life in the 2000s, Terri’s parents were convinced that she was in fact in a condition called the “minimally conscious state” (MCS) because they thought she showed intermittent signs of conscious awareness, such as laughing appropriately when a joke was told or responding to a family member with an appropriate emotional display. Because the operational standards for diagnosing MCS allow for the possibility that signs of conscious awareness appear only intermittently, there is a genuine epistemic question of whether Schiavo was diagnosed properly, though most experts retrospectively believe she could not have been in MCS given her autopsy reports, which revealed extensive cortical lesioning. The public imagination, however, was rarely if ever aware of these nuances distinguishing VS from MCS; it instead took her wakeful behavior and physical health to be a clear sign that it would be wrong to kill Schiavo by removing her artificial life support.

The Schiavo case rests at the intersection of epistemology, medical diagnosis, ethics, the law, and the norms of society at large. The goal of this dissertation is to argue systematically that in diagnosing the vegetative state and other disorders of consciousness (DOC) these normative issues are essentially intertwined. In other words, the epistemic certainty attached to any diagnosis of the vegetative state cannot be secured outside the broader context of ethics, law, and society. I call this the Thesis of Diagnostic Interaction. The thesis says that diagnosing disorders of consciousness is not a purely objective affair in the way that determining the number of protons in a gold atom is for physicists. A diagnostic label such as “the vegetative state” is not a natural kind because it does not cut nature at its joints in the way the kind GOLD does. The upshot of my thesis is that the question of whether Schiavo was truly in a vegetative state cannot be answered by merely examining her brain or behavior in isolation from the cultural time and place in which she was diagnosed. We must look at the broader culture of diagnostic practice, which is itself shaped by complex ethical and legal norms and steeped in the social milieu of the day.

Interactive Kinds

Instead of understanding VS as a natural kind like GOLD, INFLUENZA, or H2O, the vegetative state can be better understood as what Ian Hacking calls an interactive kind. An interactive kind is a classification that influences the very thing being classified, through what Hacking calls “looping effects”. Hacking’s examples of interactive kinds include childhood, “transient” mental illnesses such as 19th-century hysteria, child abuse, feeblemindedness, anorexia, criminality, and homosexuality. Interactive classifications change how the people classified behave, either because they are directly aware of the classification or because the classification functions in a broader socio-cultural matrix whereby individuals and institutions use the classification to influence the individuals being classified. For Hacking, interactive kinds are

“especially concerned with classifications that, when known by people or by those around them, and put to work in institutions, change the ways in which individuals experience themselves–and may even lead people to evolve their feelings and behavior in part because they are so classified.” (The Social Construction of What?, p. 104).

Hacking’s proposal that some kinds of people are interactive kinds boils down to two features. First, scientific classifications of people can literally bring into being a new kind of person that did not exist before. Call this the “new people” effect. Second, such classifications are prone to “looping effects”: the classification interacts with people when they know about it, or when it functions in larger institutional settings that influence the individuals being classified. For example, consider the diagnosis of “dissociative identity disorder” (DID), otherwise known as “multiple personality disorder”. According to Hacking, DID did not come into being until scientists and psychiatrists began to look for it, i.e., until it became an accepted diagnostic category among a group of therapists and institutions. Moreover, once the classification of DID was popularized in novels and movies, the rates of diagnosis increased dramatically, suggesting that the disease had a socio-cultural origin rather than a purely biological one. Contrast this with the Ebola virus, an example of what Hacking calls an “indifferent kind”: the virus does not know about human classification schemes. DID is a looping kind because spreading awareness of the diagnostic classification led people to conform to the diagnostic criteria.

Making Up Diagnostic Labels

I contend that the vegetative state can also be considered an interactive kind, in a similar way that Hacking claims mental illness is. There are several interrelated reasons why this is the case.

  1. Clinical diagnosis of DOC is essentially a process or activity carried out by finite human beings. Diagnosis does not happen at discrete time points but is an unfolding activity of humans making fallible judgments, with an ineliminable element of subjectivity.
  2. The classification of DOC is under continual revision and varies across time and place, doctor to doctor, and institution to institution. A diagnosis of the vegetative state made in 2014 simply would not have made sense in 1990 because the classificatory schemes were different, giving rise to new kinds of patients with DOC. Some doctors are more skilled at making a diagnosis than others, and different institutions utilize different classificatory procedures that are mutually exclusive yet equally justified given the pragmatic constraints of neurological diagnosis.
  3. The diagnosis of DOC is prone to “looping effects” due to the emergence of new technologies, which affect diagnostic practice, which in turn shapes the development of newer technologies. Decisions to utilize different technologies will affect whether someone is diagnosed as being in a vegetative state. For example, bedside behavioral methods, resting-state PET, and active-probe fMRI methods can give different diagnostic outcomes.
  4. The diagnosis of DOC is prone to the “new people” effect because new diagnostic categories literally create new kinds of people that did not exist prior to the creation of the diagnostic category. And since the process of diagnosis is an ongoing activity, clinical neurology is continually in the process of making up new kinds of people that did not exist before. Moreover, the individuals classified are susceptible to looping effects because, once classified, they are changed by the classification.
  5. The creation of diagnostic categories of DOC cannot be disentangled from broader issues in ethics, the law, and society. Consciousness plays a central role in many moral theories because of its role in defining the interests of animals and people. We do not consider entities without the capacity for consciousness to have interests, and therefore they do not deserve our moral consideration. Thus, facts about consciousness determine our ethical obligations in the clinic. A person diagnosed with the vegetative state by definition lacks consciousness. But the criteria for this diagnosis are continually changing, in ways that do not reflect pure advances in scientific understanding.

 



Filed under Consciousness, Neuroethics, Philosophy of science

Can the Clinical Diagnosis of Disorders of Consciousness Avoid Behaviorism?


The “standard approach” in clinical neurology has been accused of harboring an implicit “behaviorist epistemology” because disorders of consciousness are typically diagnosed on the basis of a lack of behavior. All the gold-standard diagnostic assessment programs, such as the JFK Coma Recovery Scale, are behavioral in nature insofar as they expressly look for behavior or its absence, whether motor or verbal. If the behavior occurs appropriately in response to a command or stimulus, the patient accumulates points towards “normal” consciousness. If no behavior is observable in response to the cue, the patient gets no points and is said to have a “disorder of consciousness”.
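To see the shape of this worry, consider a toy sketch of how such a point-based scale works. Everything here is invented for illustration (the items, point values, and cutoffs are mine, not the real JFK Coma Recovery Scale); the point is only that the diagnostic label is computed from observable behavior:

```python
# Toy sketch of a point-based behavioral assessment for disorders of
# consciousness. Items, point values, and cutoffs are invented for
# illustration; this is NOT the actual JFK Coma Recovery Scale.

ASSESSMENT_ITEMS = {
    "follows verbal command (e.g. 'squeeze my hand')": 4,
    "visually tracks a moving object": 3,
    "localizes a painful stimulus": 2,
    "shows reflexive movement only": 1,
}

def assess(observed_behaviors):
    """Sum points for observed behaviors and map the total to a label."""
    total = sum(points for item, points in ASSESSMENT_ITEMS.items()
                if item in observed_behaviors)
    # The behaviorist epistemology in one line: the label is a function
    # of observable behavior alone, never of consciousness itself.
    if total >= 7:
        return "consistent with normal consciousness"
    if total >= 3:
        return "minimally conscious state (MCS)"
    return "vegetative state (VS)"

print(assess({"visually tracks a moving object"}))  # falls in the MCS range
print(assess(set()))                                # no behavior -> VS
```

Nothing in this procedure measures consciousness directly; the output label is determined entirely by what the patient visibly does.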

The problem with this approach is both conceptual and empirical. Conceptually, there is no necessary link between behavior and consciousness: unless you are Gilbert Ryle or Wittgenstein, you don’t want to define consciousness in terms of behavior. That is, we don’t want to define “pain” as simply the movement of your limbs whenever your cells are damaged, or the disposition to say “ouch”. The reason is that pain is supposed to be a feeling, a painfulness, not a behavior.

Empirically, we know of many cases where behavior and consciousness come apart, such as the total locked-in state, in which someone’s mind is more or less normal but they are completely paralyzed, looking for all intents and purposes like someone in a deep coma or vegetative state while retaining normal brain function. From the outside such patients would fail these behavioral assessments, yet from the inside they have full consciousness. Furthermore, we know that in some cases of general anesthesia there can be a complete lack of motor response to stimulation while the person maintains conscious awareness.

Another problem with the behaviorist epistemology of clinical diagnosis is that the standard assessment scales require a certain level of human expertise in making the diagnostic judgment. Although most scales show high inter-rater reliability, it ultimately comes down to a fallible human making a judgment about someone’s consciousness on the basis of subtle differences between “random” and “meaningful” behavior. A random behavior is just that: a random, reflexive movement that signifies no higher purpose or goal. But if I ask someone to squeeze my hand and they squeeze it, this is a meaningful sign because it suggests that they can parse language and translate a verbal command into a willed response. But what if the verbal command to squeeze just triggers an unconscious response to squeeze? Sure, it’s possible. No one should rule it out. But what if they do it five times in a row? Or what if I say “don’t squeeze my hand” and they don’t squeeze it? Now we are getting into what clinicians call “unambiguous signs of consciousness”, because the behavior is expressive of a meaningful purpose and shows what they call “contingency”, which is just another way of saying “appropriateness”.
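The force of “five times in a row” can be given a rough statistical gloss. The sketch below is back-of-the-envelope only: the assumption that a reflexive, non-conscious squeeze happens to land in the command window with probability 0.5 on any given trial is invented for illustration.

```python
# Chance-level arithmetic for repeated command-following.
# Invented assumption: on any single trial, a reflexive squeeze
# coincides with the command window with probability 0.5.
p_chance = 0.5
n_trials = 5

# Probability of responding appropriately on all five trials by chance.
p_all_five = p_chance ** n_trials
print(f"P(5/5 appropriate by chance) = {p_all_five:.3f}")  # 0.031
```

On this toy model, five consecutive appropriate responses would arise by chance only about 3% of the time, and compliance with “don’t squeeze my hand” shrinks the chance explanation further; hence the clinical weight placed on sustained contingency. Notice, though, that a human must still judge which responses count as “appropriate”, which is exactly the issue at hand.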

But what does it mean for a behavior to really be meaningful? Just that there is a goal-structure behind it? Or that it is willed? Again, we don’t want to define “meaning” or “appropriateness” in terms of outward behavior, because when you are sleepwalking your behavior is goal-structured yet you are not conscious. Or consider the case of automatic writing, in which one of your hands is capable of holding a written conversation and producing meaningful linguistic statements without “you” being in control at all. So there is a possible dissociation between “meaningful” behavior and consciousness. All we can say is that for normal people in normal circumstances meaningful behavior is a good indicator of normal consciousness. But notice how vacuous that statement is. It tells us nothing about the hard cases.

So, in a nutshell, the diagnosis of disorders of consciousness has an inescapable element of human subjectivity in it. Which is precisely why researchers are trying to move to brain-based diagnostic tools such as fMRI or EEG, which are supposed to be more “objective” because they skip right over the question of meaningful behavior and look at the “source” of the behavior: the brain itself. But I want to argue that such measures can never bypass the subjectivity of diagnosis without going full behaviorist. The reason brain-based measures of disorders of consciousness are behaviorist is that you are looking at the behavior of neurons. You can’t see the “feelings” of neurons from a brain scanner any more than you can see the “feeling” of pain from watching someone’s limb move. Looking at the brain does not grant you special powers to see consciousness more directly. It is still an indirect measure of consciousness, and it will always require the human judgment of the clinician to say “OK, this brain activity is going to count as a measure towards ‘normal’ consciousness”. Brain-based diagnosis might be more objective in some respects, but the subjective element never disappears unless you define normal consciousness in terms of neural behavior. And how is that any different from standard behaviorism? The only difference is that we are relying on the assumption that neural behavior is the substrate of consciousness. This might be true from a metaphysical perspective. But it’s no help in the epistemology of diagnosis, because as an outside observer you don’t see the consciousness. You just see the squishy brain or some representation on a computer screen. I believe there is a circularity here that cannot be escaped, but I won’t go into it here (I talk about it in this post).


Filed under Consciousness, Philosophy of science, Psychology, Uncategorized

Is Consciousness Required for Discrimination?

In their book A Universe of Consciousness, Edelman and Tononi use the example of a photodiode discriminating light to illustrate the problem of consciousness:

Consider a simple physical device, such as a photodiode, that can differentiate between light and dark and provide an audible output. Let us then consider a conscious human being performing the same task and then giving a verbal report. The problem of consciousness can now be posed in elementary terms: Why should the simple differentiation between light and dark performed by the human being be associated with and, indeed, require conscious experience, while that performed by the photodiode presumably does not? (p. 17)

Does discrimination of a stimulus from a background “require conscious experience”? I don’t see why it would. This seems like something the unconscious mind could do all on its own, and indeed is doing all the time. But it comes down to how we are defining “consciousness”. If we are talking about consciousness as subjective experience, the question is: does discrimination require that there be something-it-is-like for the brain to perform that discrimination? Perhaps. But I don’t know how to answer that question empirically, given the subjective nature of experience and the sheer difficulty of building an objective consciousness-meter.
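For contrast, the photodiode half of Edelman and Tononi’s example is trivially mechanical. Here is a minimal sketch (the threshold and sample readings are arbitrary) of light/dark discrimination with an audible output, and there is no temptation to attribute experience to it:

```python
# A "photodiode" that differentiates light from dark and gives an
# audible output, as in Edelman and Tononi's example. The threshold
# and sample readings are arbitrary.
THRESHOLD = 0.5

def discriminate(light_level):
    """Classify a normalized sensor reading as LIGHT or DARK."""
    return "LIGHT" if light_level >= THRESHOLD else "DARK"

for reading in (0.1, 0.7):
    # "\a" rings the terminal bell: the audible output.
    print(f"reading={reading}: {discriminate(reading)}\a")
```

The sketch only shows that discrimination as such requires nothing experiential; the puzzle is why the same input-output mapping, performed by a brain, should be accompanied by experience.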

On the other hand, assume by “consciousness” we mean something like System II style cognition, i.e., slow, deliberate, conscious, introspective thinking. On this view consciousness is but the tip of the iceberg of cognitive processing, so it would be absurd to say that the 99% of the mind that is unconscious is incapable of discrimination. This is the lesson of Oswald Külpe and the Würzburg school of imageless thought. They asked trained introspectors from the Wundtian tradition to discriminate between two weights held in their hands, judging which was heavier. The subjects were then asked to introspect and see whether they were aware of the process of discrimination. To their surprise, there were no conscious images associated with the weight-discrimination. They simply held the weights in their hands, consciously formed an intention to discriminate, the discrimination happened unconsciously, and then they became aware of the results of the unconscious judgment. Hence Külpe and the Würzburg school discovered a whole class of “imageless thought”, i.e., thought that happens beneath the level of conscious awareness.

Of course, the Würzburg school wasn’t talking about consciousness in terms of subjective qualia; they were talking about consciousness in terms of what’s introspectable. If you can’t introspect a thought process in your mind, then it’s unconscious. On this view, and in conjunction with evolutionary models of introspection, it seems clear that a great many discriminations happen beneath the surface of conscious awareness. This is how I prefer to talk about consciousness: in terms of System II-style introspection, where consciousness is but the tip of a great cognitive iceberg. On Edelman and Tononi’s view, consciousness occurs anytime there is information integration. On my view, information integration can occur unconsciously, and indeed most if not all non-human animal life is unconscious. Human-style slow, deliberate, introspective conscious reflection is rare in the animal kingdom, even if during normal human waking life it is constantly running, overlapping and integrating with the iceberg of the unconscious so as to give an illusion of cognitive unity. It seems as if consciousness is everywhere all the time and that there is very little unconscious activity. But as Julian Jaynes once said, we cannot be conscious of what we are not conscious of, and so consciousness seems pervasive in our mental life when in fact it is not.


Filed under Consciousness

Vegetative State Patients as Moral Patients

https://www.academia.edu/7692522/Vegetative_State_Patients_As_Moral_Patients

Abstract:

Adrian Owen (2006) recently discovered that some vegetative state (VS) patients have residual levels of cognition, enabling them to communicate using brain scanners. This discovery is clearly morally significant, but the problem lies in specifying why exactly it is significant and whether extant theories of moral patienthood can explain that significance. In this paper I explore Mark Bernstein’s theory of experientialism, which says an entity deserves moral consideration if it is a subject of conscious experience. Because VS is a disorder of consciousness, it should be straightforward to apply Bernstein’s theory to Owen’s discovery, but several problems arise. First, Bernstein’s theory is beset by ambiguity in several key respects, which makes it difficult to apply to the discovery. Second, Bernstein’s experientialism fails to fully account for the normative significance of what I call “narrative experience”. A deeper appreciation of narrative experience is needed to account for the normative significance of Owen’s findings.

 

 

This paper has gone through so many drafts. I swear I’ve rewritten it 5 times from more or less scratch. Each time I’ve tried to narrow my thesis to be ever smaller and less ambitious because I’m pretty sure that’s the only way I’m going to get this thing passed by my qualifying paper committee. As always, any thoughts or comments appreciated.


Filed under Consciousness, Neuroethics, Psychology

Reflecting On What Matters

1. Introduction

What does it take for your life to go better or worse? One idea is experientialism. For experientialists, what matters is sentience: the capacity to experience pain and pleasure. Experientialists typically appeal to a distinction between moral agency and moral patiency to argue that only sentient beings can be moral patients. The paradigm moral agent is the adult human, capable of both thinking and acting morally. Most moral agents are also moral patients because most adult humans are sentient. The paradigm moral patient that is not also a moral agent is a newborn baby or a nonhuman animal. For my purposes, the key doctrine of experientialism is that sentience is necessary for both moral agency and moral patiency.

The goal of this paper is to refute that doctrine and argue that the capacity for reflection by itself is sufficient for both moral agency and moral patiency. In other words, a purely reflective but insentient being would be both a moral agent and a moral patient simply in virtue of its capacity for reflection. Who explicitly denies this? Suchy-Dicey (2009) argues that a being that was reflective but not sentient would not be a moral patient. She states that “autonomy without the potential for experiencing welfare is not valuable…the ability to experience welfare is a precondition for the value of autonomy” (2009, p. 134). Thus, for Suchy-Dicey the value of reflection is parasitic upon sentience but not vice versa. That is, an entity is a moral patient if it is both sentient and reflective, or if it is only sentient; but if an entity is reflective and not sentient then, on Suchy-Dicey’s view, it does not count as a moral patient. Hence, Suchy-Dicey’s view is characterized by two features:

(1). Value Pluralism: Both sentience and reflection are intrinsically valuable.

(2). Value Asymmetry: The value of sentience for moral patiency is independent of reflection but the value of reflection for moral patiency is dependent on sentience. Thus, if an entity is reflective but not sentient, it is not a moral patient.

I agree with (1) but deny (2). Instead, I will defend the following thesis:

(2*). Value Symmetry: The value of sentience for moral patiency is independent of reflection and vice versa. Thus, an entity that is reflective but not sentient would still be a moral patient.

This paper aims to defend (2*) against (2). To do so, I defend the following argument:

  1. Experientialism assumes that all moral patients and all moral agents are necessarily sentient.
  2. The capacity for reflection by itself is sufficient for both moral patiency and moral agency.
  3. By (2), if a purely reflective being existed, it would be both a moral patient and a moral agent.
  4. Purely reflective beings can exist.
  5. Thus, experientialism is false.
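To make the argument’s structure explicit, it can be rendered as a simple modal syllogism. The notation is my own reconstruction, not part of the paper’s apparatus: let $S$, $R$, $P$, and $A$ abbreviate “is sentient”, “is reflective”, “is a moral patient”, and “is a moral agent”.

$$
\begin{aligned}
&\text{(P1)}\quad \Box\,\forall x\,[(P(x) \lor A(x)) \rightarrow S(x)] && \text{experientialism (premise 1)}\\
&\text{(P2)}\quad \Box\,\forall x\,[R(x) \rightarrow (P(x) \land A(x))] && \text{reflection suffices (premise 2)}\\
&\text{(P3)}\quad \Diamond\,\exists x\,[R(x) \land \lnot S(x)] && \text{purely reflective beings are possible (premise 4)}\\
&\text{(C)}\quad \Diamond\,\exists x\,[P(x) \land \lnot S(x)] && \text{from (P2), (P3); contradicts (P1)}
\end{aligned}
$$

The step from (P2) and (P3) to (C) corresponds to premise 3, and the clash between (C) and (P1) yields conclusion 5: experientialism, which makes sentience necessary for patiency and agency, cannot be true.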

Premise (1) just falls out of the commitments of experientialism. The most controversial premise is arguably (2). To defend it, I will need to do several things. In section 2, I will explain what I mean by “the capacity for reflection”, explain why it’s sufficient for moral agency, and argue that purely reflective beings can exist. In section 3, I will continue by arguing that reflection is sufficient for moral patiency. Doing so will provide the needed ammunition to argue against experientialism.

2. What is reflection?

The paradigm reflective agent is a normal human adult, capable of reflective self-consciousness. Gallagher’s (2010) definition of reflective self-consciousness is a good place to start. He defines it as “an explicit, conceptual, and objectifying awareness that takes a lower-order consciousness as its attentional theme.” Several features of this definition are important for my understanding of reflection. First, reflection must be explicit. A cat might think “I am hungry”, but this thought is never explicitly articulated in its mind in the way a reflective human might reflect to themselves, “Boy, if I don’t eat breakfast I’m going to be hungry this evening for sure.” Second, reflection must be conceptual. What I mean is that in order to reflect one must have the concept of “reflection”, or at least some concept of “consciousness”. A cat might have a psyche, but it lacks a concept of psyche qua psyche. A reflective creature knows, as it’s reflecting, that it’s reflecting, because it has at least one concept of reflection as such that distinguishes it from other psychological events like behaving or perceiving.

Thus, to reflect in the full sense I intend, one must have an explicit understanding of what it means to reflect and the ability to know that you are reflecting when you are reflecting. Furthermore, a distinguishing feature of reflection is that a reflective creature can reflect on just about anything: themselves, trees, rocks, numbers, philosophy, art, reflection itself, evolution, space-time, etc. While some contents might be too unwieldy for human reflective agents to fully reflect on, a defining feature of reflection is its flexibility with regard to the contents of reflective acts. If a reflective agent is relaxed and not pressed for time, it can reflect on almost anything so long as it has the right conceptual repertoire. Thus, I avoid the term “reflective self-consciousness”, because reflective agents can take as an object of reflection just about any object or proposition, not just the “self”. Hence, I prefer to talk about “reflective consciousness”, i.e., reflection. A feature of reflection closely related to flexibility is the ability to switch between different objects of reflection. A reflective creature, when suitably relaxed, can choose what to reflect on when it wants to. If it wants to reflect on the past, it can; if it wants to reflect on the future, it can.

Phenomenologically speaking, reflection is spatial, selective, and perspectival. Reflection is spatial because if I asked you to reflect on your cat and then your dog, you would not imagine them mushed together; you would first reflect on your cat and then “move” on to your dog. All reflection is spatialized in this sense because the objects of reflection are “separated” from each other in mental space. This applies to the most abstract of ideas: if I ask you to reflect on the concept of liberty and then on democracy, there will be “movement” in your act of reflection as you go from idea to idea. Reflection is selective because if I reflect on what I had for breakfast yesterday, I cannot simultaneously reflect on what I want for breakfast tomorrow. Reflection is perspectival because if I reflect on my walk through town yesterday, the reflective act is done from a perspective. If my reflection is veridical, I might reflect as if I were peering out of my head, bobbing up and down as I walk, but in all likelihood my reflection will be disembodied, like a camera floating freely through space, able to fly through the city at any speed.

Another feature of reflection is the capacity to explicitly reason and articulate about intentional actions qua intentional actions. To interact with something nonreflectively is to interact with it without explicitly realizing you have done so and without the ability to give a reason why. Conversely, to interact with something reflectively enables you to reflect on your reasons for having chosen the action you did and, if needed, to explicitly articulate those reasons. The reasons you give might not be indicative of the true, underlying causal mechanisms of your action; what’s important is the ability to articulate in terms of intentional actions, even if you are confabulating (Nisbett & Wilson, 1977). Moreover, even if your voice box or muscles were completely paralyzed, you would still have the ability to articulate your reasons so long as you can articulate them to yourself, or so long as you possess the knowledge that, given a means of expressing yourself, you could actually articulate them. Thus, what counts is not so much the literal articulation of reasons but the capacity or potential to articulate reasons for action. And by action I mean mental or behavioral action; e.g., you could articulate to yourself why you chose to imagine yourself playing tennis as opposed to imagining yourself walking through your house.

Now that I have explained part of what it means to be a reflective agent, I want to explain why reflective agents are also moral agents, what I call reflective moral agents. Defending the cogency of reflective moral agency will clear the ground for my defense, in the next section, of reflective moral patiency. It’s relatively uncontroversial that the ability to reflect has instrumental value for moral agents, insofar as reflective creatures can reflect on better ways to help moral patients. But why should reflective agents be moral agents just in virtue of being reflective agents, and not because reflection is instrumentally valuable? One reason is that reflective agency is important for realizing many things of intrinsic value according to what have been called “objective list” approaches to intrinsic goodness. Common items on these lists of intrinsically valuable goods include developing one’s talents, knowledge, accomplishment, autonomy, understanding, enjoyment, health, pleasure, friendship, self-respect, and virtue. Arguably reflection is not crucial for all these items, but it is especially important for autonomy, which roughly speaking is the ability to rationally make decisions for oneself and be a “self-legislating will”, i.e., someone who makes decisions on the basis of rules that they impose on themselves. Arguably autonomy involves the capacity for reflection insofar as one cannot automatically or unconsciously self-legislate; to self-legislate in this sense necessarily involves stepping back and reflecting on the type of life one wants to live.

For example, consider the concept of an “advance directive”, a legal document that allows people to decide in advance how they want to die. Suppose your friend Alice had never heard of an advance directive, nor had she ever considered the question of how she wanted to die, e.g., whether she would want to live on life support for more than six months. Now if you asked Alice about advance directives and she responded instantly with a “no”, you would be confused. You would say, “How can you answer so quickly? Don’t you need to reflect a little longer on the question?” It would be one thing if she said, “Oh, actually I have thought about this before and my answer is still no.” But it would be another thing altogether if she said, “I don’t need to think about it – I just went with my gut reaction, and that gut reaction is no.” If she answered in this way you might think she did not understand the moral significance of advance directives, which demand a certain slowness of deliberation in order to be morally relevant.

Consider another example. You notice your friend Bob has grown really close to his girlfriend, Carol. One day you ask Bob if he wants to marry her and he instantly answers “Yes”. Surprised, you ask, “So you have thought about this before?” and Bob says, “No, I’ve never thought about it before until you asked.” Most people would find this strange, because marriage is such a significant life decision that it demands slow, deliberative reflection. To not reflect on such weighty issues indicates a failure of moral agency. These two examples illustrate a general principle about the crucial role reflection plays in supporting rational, autonomous choice, namely, that it must have an element of “slowness”. This kind of reflective autonomy is distinct from the autonomy of, say, cats, who are free to choose between sleeping on the mat or sleeping on the bed. The latter kind of autonomy is what we might call sentient autonomy, because it’s possessed by almost all Earthly beings that are sentient. Sentient autonomy is important and distinguishes animals from, say, rocks and dust bunnies, but it is not the only kind of autonomy relevant to moral agency. If there were a being that possessed reflective autonomy but wasn’t sentient, it seems absurd to deny it moral agency. Reflectively autonomous agents would be able to choose to help moral patients regardless of their ability to sensuously feel pleasure or pain. Moreover, their decision procedures would be deliberative in nature, grounded in reasons that they are able to explicitly articulate if necessary.

Consider the fictional character Commander Data from Star Trek. Data is an advanced android with a positronic brain that can compute trillions of operations per second. He is thus hyper-intelligent, processing information faster and more accurately than any human. But even if his brain is a computer, Data is not merely a computer; he is a moral agent just the same as any human. The only difference is that Data is not a sentient being, in the sense that he lacks the bodily consciousness of animals and other fleshy creatures.

Biting the bullet and denying Data moral agency is implausible, given that Data was often the wisest and most morally principled of all the crewmembers, not to mention the most valiant in the face of action, as evidenced by his many medals of honor won in service of Starfleet. If anyone was capable of reflective autonomy, it was Data. It might look from all appearances as though he was acting out of normal sentient autonomy, but this is an illusion generated by the sheer speed of his reflective processing. All of Data’s valor and bravery were executed not because of any animal instinct or sentient autonomy but because he made a reflective choice. This is evident from the fact that if you asked Data why he performed action X in situation Y, he would always be able to explicitly articulate a reason for having done so, even if that reason is “Because I was programmed to do so”. The relevant point, however, is that his actions display the flexibility, switching, and autonomy relevant for moral agency, as well as the explicitness characteristic of reflective agency.

3. Reflective Moral Patiency

In this section I will defend the second half of premise (2): the capacity for reflection by itself is sufficient for moral patiency. Any entity that can reflect is what I call a reflective patient. The guiding intuition behind experientialism is that welfare flows from the capacity to experience the world, not the capacity to reflect on the world. However, I contend that if there were a being that was insentient but capable of reflection, it would be wrong to harm it. Take Data again. I contend that it would be wrong to treat Data poorly by intentionally destroying him, being negligent toward his robotic body, or needlessly destroying his prized belongings. In other words, Data is a moral patient who cannot be treated like just any mere physical object.

There are at least two objections someone might have to Data being a moral patient. First, the experientialist might simply balk at the thought that Data cannot feel pain and pleasure. How could his cognitive life be identical to that of a rock or other insentient entities? Surely there is a qualitative or experiential dimension to Data’s existence that distinguishes it from that of rocks and dust bunnies. I would respond by saying there is indeed a certain “quality” to Data’s information processing, but I’m not convinced we are forced to say such information processing is “experiential”, unless that just means “has a quality”, which would trivialize the notion. I can grant that the quality of Data’s positronic brain as it reflectively operates differs from the quality of a rock, because of its informational complexity, without supposing that the quality is due to the information processing being experiential in the way an animal’s sensuous pleasure or pain is experiential. In effect, I’m proposing that an entity could have the quality of being a reflective thinker without being a subject of phenomenal experience.

The second objection is that moral patiency plausibly flows from an entity having interests that can be either satisfied or frustrated. Didn’t Data have interests and aspirations like anyone else, however “robotic” or “inhuman”? If Data merely engages in reflective thought but lacks any interests, then the objector might say it’s implausible that his life could be made better or worse, and thus he would not count as a moral patient. And since we’ve already agreed that Data surely is a moral patient, the objector concludes that his patiency must be due to a kind of experiential welfare, as per experientialism. The underlying assumption seems to be that unless a cognitive capacity is experienced, it cannot be intrinsically valuable and thus cannot be a suitable locus for moral patiency. Call this the Principle of Experience (PE). Kahane and Savulescu endorse a version of PE, writing that “phenomenal consciousness is required if a person is to have a point of view, that is for the satisfaction of some desire to be a benefit for someone” (2009, p. 17). The intuition behind PE is that what makes it permissible to randomly shoot a rock and impermissible to randomly shoot an animal is that rocks lack phenomenal experiences that can be negatively or positively affected.

However, I believe this objection fails to fully grasp the distinction between reflective patiency and sentiential patiency. Data can be a moral patient so long as we are careful to distinguish “bottom-up” interests that stem from animalistic sentience from “top-down” interests that stem from reflection. It’s debatable whether Data has genuine bottom-up interests, but it is undeniable that he has top-down interests due to his capacity for complex, reflective thought. For example, Data might not have a sentient instinct to avoid pain, but he can reflectively think, “I do not want to be destroyed.” Data could surely sign an advance directive, and his signature would be morally relevant because he can explicitly articulate and reason about his decision. It would be wrong to intentionally destroy or mistreat Data not because he can experience the mistreatment but because doing so would violate his reflective interest in continuing to exist. If Data signed an advance directive, it would be wrong to intentionally ignore it for the exact same reason it would be wrong to intentionally ignore a human’s advance directive.

Another kind of thought experiment supports the intuition that reflective consciousness is relevant to moral patiency independently of its relation to sentience. Consider a hypothetical scenario in which a chimpanzee and a chicken are in a burning building and you can only save one. Other things being equal, it seems overall better to save the chimpanzee: although both the chicken and the chimp are sentient, the chimp arguably has a greater degree of proto-reflectivity, which is intrinsically valuable. Similarly, if the choice were between a chimpanzee and an adult human, it seems overall better to save the human for the same reason: the human is both sentient and reflective. Furthermore, suppose your mother or father were dying and the doctors said they could save their life only on the condition that they would be insentient but reflective. They would be able to converse intelligibly, write emails, thoughtfully answer questions about their own folk psychology, cook dinner, and otherwise act like perfectly normal people, except they couldn’t experience pleasure or pain. Would you accept the offer? It seems absurd not to. The rich, multidimensional intelligence associated with reflection is valuable independently of any contingent relation to sentience. These thought experiments lend credence to the idea that moral status comes in degrees and that reflective moral agents who are also sentient carry what some philosophers call “Full Moral Status” (Jaworska & Tannenbaum, 2013). Moral patients that are merely sentient carry less than full moral status because they are not reflective patients.

Conclusion

I’ve argued that experientialism is false because it assumes that all moral patients and all moral agents are necessarily sentient. In contrast, I’ve attempted to open up the conceptual space by arguing that the capacity for reflection by itself is sufficient for both moral agency and moral patiency.

 

References

Bernstein, M. H. (1998). On Moral Considerability: An Essay on Who Morally Matters. New York: Oxford University Press.

Farah, M. J. (2008). Neuroethics and the problem of other minds: Implications of neuroscience for the moral status of brain-damaged patients and nonhuman animals. Neuroethics, 1(1), 9-18.

Jaworska, A., & Tannenbaum, J. (2013). The grounds of moral status. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2013 Edition). URL = <http://plato.stanford.edu/archives/sum2013/entries/grounds-moral-status/>.

Kahane, G., & Savulescu, J. (2009). Brain damage and the moral significance of consciousness. Journal of Medicine and Philosophy, 34(1), 6-26.

Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231-259.

Regan, T. (1986). The case for animal rights. In P. Singer (Ed.), In Defense of Animals (pp. 13-26). New York: Basil Blackwell.

Suchy-Dicey, C. (2009). It takes two: Ethical dualism in the vegetative state. Neuroethics, 2(3), 125-136.


Filed under Consciousness, Neuroethics, Philosophy

Quote of the day – Depth of Processing in the Vegetative State

“In the [vegetative state] or [minimally conscious state] the EEG is by definition not flat and typically shows widespread slowing of brain rhythms. Does this mean that nothing is being processed? The answer is a definite ‘no’. A clear analogy is the emerging literature on the depth of processing of environmental input (i.e., the surgeon talking about something in the operating room) while the patient is under anesthesia with widespread EEG slowing akin to that observed in VS and MCS. By this logic it would be surprising if some sensory input were not being processed in all VS patients and certainly in all MCS patients. By extension, one might also propose that some internal thoughts are being generated in these devastating clinical states.

Indeed, the key issue from the neurologist’s perspective is whether the neurological insult, whether prolonged hypoxia or severe traumatic brain injury, will leave any meaningful brain function. So, it is not clear if the key issue is ‘consciousness’ or the clinical experience with these patients per long-term recovery of ‘meaningful’ life. Of course, meaningful is as poorly defined as consciousness and herein lies the quandary.”

~ Robert Knight (2008), “Consciousness Unchained: Ethical Issues and the Vegetative and Minimally Conscious State”, The American Journal of Bioethics, 8(9): 1–2.


Filed under Consciousness, Neuroethics

Quote of the day – John Heil Explains What’s Wrong With Non-reductive Physicalism

What I object to is the unthinking move from linguistic premises to ontological conclusions, from the assumption, for instance, that if you have an ‘ineliminable’ predicate that features in an explanation of some phenomenon of interest, the predicate must name a property shared by everything to which it applies. (A predicate is ineliminable if it cannot be analyzed, paraphrased, or translated into less vexed predicates.)

Philosophers speak of ‘the pain predicate’. When you look at creatures plausibly regarded as being in pain, you do not see a single physical property they all share (and in virtue of which it would be true to say that they are in pain). Instead of thinking that the predicate, ‘is in pain’, designates a family of similar properties, philosophers (including Putnam in one of his moods) conclude that the predicate must name a ‘higher-level’ property possessed by a creature by virtue of that creature’s ‘lower-level’ physical properties. You have many different kinds of physical property supporting a single nonphysical property. This is the kind of ‘non-reductive physicalism’ you have in functionalism.

Non-reductive physicalism has become a default view, a heavyweight champ that retains its status until decisively defeated. Non-reductive physicalism acquired the crown, however, not by merit, but by a kind of linguistic subterfuge. If you read early anti-reductionist tracts – for instance, Jerry Fodor’s ‘Special Sciences (Or: The Disunity of Science as a Working Hypothesis)’ (Synthese, 1974) – you will see that the arguments concern predicates, categories, taxonomies. Fodor’s point, a correct one in my judgment, is that there is no prospect of replacing taxonomies in the special sciences with one drawn from physics. But from this no ontological conclusions follow – unless you assume that every ‘irreducible’ predicate names a property.

This language-driven way of thinking is not one that would have occurred to the ancients, the medievals, or the early moderns – or to my aforementioned philosophical models. It is an invention of the 20th century, one that has led to the emasculation of serious ontology.

~ From an interview with Richard Marshall at 3:AM Magazine.


Filed under Philosophy