Should We Ask Minimally Conscious Patients If They Want to Live?

Mo Costandi tackles this question in an excellent post reviewing the work of Adrian Owen, whose research I have been writing about myself.

Owen and Laureys have found a way to communicate with some of these minimally conscious patients by posing questions to them as they lie inside a brain scanner. They ask patients to envision one of two scenarios, one scenario if they mean to say “yes” and the other for “no.” This raises the possibility of enabling these patients to make their own end-of-life decisions, but it also raises further ethical dilemmas. A big one: Should we even ask these patients if they wish to remain alive or die?

“That’s the question on everybody’s mind,” says Owen, “but it’s probably not appropriate to ask until we know what we will do with the answer. If a patient answers ‘Yes, I want to die,’ we still don’t have a procedure for allowing that to happen.” Most countries lack euthanasia laws; in those that do have them—such as Belgium and Switzerland—the vast majority of requests for euthanasia come from cancer patients; the laws are rarely, if ever, used in the context of patients with consciousness disorders.

Owen is collaborating with neuroethicist Judy Illes of the University of British Columbia to address these issues. With funding from the Canadian Institutes of Health Research, they are focusing on how these new technologies can provide information about such patients, how the tools could be incorporated into healthcare systems, and what they mean for patients, their families, and society.

“The question is how we can use this technology most beneficially,” says Illes, also a member of the Dana Alliance for Brain Initiatives. “It’s tempting to ask about end-of-life decisions, but that’s probably inappropriate. I think one of the best questions to ask is ‘Are you in pain?’ because that’s something we could respond to immediately.”

Patients could, she adds, also be asked about how their daily lives might be made more comfortable and enjoyable. “We might ask about their preferences for food or entertainment. Something that seems trivial to you and I may be super-important to somebody who is unable to do anything except lie in their bed.”

See more at: http://dana.org/News/Details.aspx?id=43226


Filed under Neuroethics

More Evidence for Vestigial Bicamerality

Acclaimed cultural anthropologist Tanya Luhrmann has a new paper out in the British Journal of Psychiatry: “Differences in voice-hearing experiences of people with psychosis in the USA, India and Ghana: interview-based study”.

The paper further corroborates the theoretical framework of Julian Jaynes and his idea of bicamerality. The bicameral paradigm is quintessentially a hallucinatory voice guiding or commanding you to do everyday tasks. Consider this summary of the interviews with patients in Chennai, India:

These voices behaved as relatives do: they gave guidance, but they also scolded. They often gave commands to do domestic tasks. Although people did not always like them, they spoke about them as relationships. One man explained, ‘They talk as if elder people advising younger people’. A woman heard seven or eight of her female relatives scold her constantly. They told her that she should die; but they also told her to bathe, to shop, and to go into the kitchen and prepare food.

Now consider Jaynes’ hypothetical description of the Egyptian concept of “ka” or “spirit double”:

It is obvious from the preceding chapters that the ka requires a reinterpretation as a bicameral voice. It is, I believe, what the ili or personal god was in Mesopotamia. A man’s ka was his articulate directing voice which he heard inwardly, perhaps in parental or authoritative accents, but which when heard by his friends or relatives even after his own death, was, of course, hallucinated as his own voice…

The ka of the god-king is of particular interest. It was heard, I suggest, by the king in the accents of his own father…

[In early civilizations]…each person had a part of his nervous system which was divine, by which he was ordered about like any slave, a voice or voices which indeed were what we call volition and empowered what they commanded and were related to the hallucinated voices of others in a carefully established hierarchy.

Going back to the Luhrmann interviews, we can see the essential social-hierarchical component of bicamerality still at work today in voice-hearers:

They made comments that suggested that these voices were both social relationships and entertainment: ‘I like my mother’s voice’; later, this woman added ‘I have a companion to talk [to] . . . [laughs] I need not go out to speak. I can talk within myself!’

Jaynes’ other suggestion about bicamerality is that the voices served a behavioral function: they weren’t just echoes of a broken nervous system, but were a way for the human nervous system to guide itself adaptively. They were a channel for what Jaynes called “stored-up admonitory wisdom”. Luhrmann cites one man as saying ‘[the voices] just tell me to do the right thing. If I hadn’t had these voices I would have been dead long ago.’

Now imagine an entire city where the majority of people are voice-hearers and there is an elaborate cultural mythology for interpreting the voices as “personal gods”, where hearing divine or special voices talk to you is perfectly normal in every way. Can you imagine it? Jaynes could. It stretches the imagination, but that’s no reason to think it wasn’t the case. Just because modern people with modern, non-voice-hearing minds find that situation “psychotic” or “crazy” doesn’t mean that bicamerality has always been limited to 1–2% of the population. It was likely spread throughout the population in much greater proportion than it is today. It is in fact part of the human gene pool, which is why schizophrenia today has such a large genetic component. Complicated cognitive mechanisms such as voice-hearing don’t just stay in the gene pool for no reason; their persistence suggests that they were adaptive in the not-too-distant past. And for some people in some cultures, as Luhrmann indicates, voice-hearing still serves an adaptive function. John Geiger’s book The Third Man Factor also discusses the adaptive function of vestigial bicamerality in the context of extreme survival, where people on the verge of life and death have been guided to safety by following the instructions of hallucinated voices.

 


Filed under Consciousness, Psychology

The Moral Patiency of Vegetative State Patients

https://www.academia.edu/7692522/The_Moral_Patiency_of_Vegetative_State_Patients

Abstract:

Neuroscientists have recently discovered that some vegetative state patients have residual levels of cognition that enable them to engage in acts of willful communication. This discovery is of obvious moral significance for both the patients themselves and their loved ones. The problem comes from specifying exactly why the discovery is morally significant and whether extant theories of welfare can be applied to explain the significance. In this paper I explore Mark Bernstein’s theory of experientialism, which says that an entity deserves moral consideration if it is a subject of conscious experience. Because VS is a disorder of consciousness, it should be straightforward to apply Bernstein’s theory, but several problems arise. First, Bernstein’s theory is beset by ambiguity in several key respects, which makes it difficult to apply to the novel discovery. Second, Bernstein’s theory of experientialism fails to account for the normative significance of what I call “narrative experience”. A deeper appreciation of narrative experience will allow us to account for the full moral significance of these novel discoveries.

 

This paper has gone through so many drafts. I swear I’ve rewritten it five times more or less from scratch. Each time I’ve tried to narrow my thesis to be ever smaller and less ambitious, because I’m pretty sure that’s the only way I’m going to get this thing passed by my qualifying paper committee. As always, any thoughts or comments are appreciated.


Filed under Consciousness, Neuroethics, Psychology

Reflecting On What Matters

1. Introduction

What does it take for your life to go better or worse? One idea is experientialism. For experientialists, what matters is sentience, the capacity to experience pain and pleasure. Experientialists typically appeal to a distinction between moral agency and moral patiency to argue that only sentient beings can be moral patients. The paradigm moral agent is the adult human, capable of both thinking morally and acting morally. Most moral agents are also moral patients because most adult humans are sentient. The paradigm moral patient that is not also a moral agent is a newborn baby or a nonhuman animal. For my purposes, the key doctrine of experientialism is that sentience is necessary for both moral agency and moral patiency.

The goal of this paper is to refute that doctrine and argue that the capacity for reflection by itself is sufficient for both moral agency and moral patiency. In other words, a purely reflective but insentient being would be both a moral agent and a moral patient simply in virtue of their capacity for reflection. Who explicitly denies this? Suchy-Dicey (2009) argues that a being that was reflective but not sentient would not be a moral patient. She states that “autonomy without the potential for experiencing welfare is not valuable…the ability to experience welfare is a precondition for the value of autonomy” (2009, p. 134). Thus, Suchy-Dicey says the value of reflection is parasitic upon sentience but not vice versa. That is, an entity is a moral patient if it is both sentient and reflective, or if it is only sentient—but if an entity is reflective but not sentient then on Suchy-Dicey’s view it does not count as a moral patient. Hence, Suchy-Dicey’s view is characterized by two features:

(1). Value Pluralism: Both sentience and reflection are intrinsically valuable.

(2). Value Asymmetry: The value of sentience for moral patiency is independent of reflection but the value of reflection for moral patiency is dependent on sentience. Thus, if an entity is reflective but not sentient, it is not a moral patient.

I agree with (1) but deny (2). Instead, I will defend the following thesis:

(2*). Value Symmetry: the value of sentience for moral patiency is independent of reflection and vice versa. Thus, an entity that is reflective but not sentient would still be a moral patient.

This paper aims to defend (2*) against (2). To do so, I defend the following argument:

  1. Experientialism assumes that all moral patients and all moral agents are necessarily sentient.
  2. The capacity for reflection by itself is sufficient for both moral patiency and moral agency.
  3. By (2), if a purely reflective being existed, it would be both a moral patient and a moral agent.
  4. Purely reflective beings can exist.
  5. By (3) and (4), there could be moral patients and moral agents that are not sentient.
  6. Thus, experientialism is false.

Premise (1) just falls out of the commitments of experientialism. The most controversial premise is arguably (2). To defend it, I will need to do several things. In section 2, I will explain what I mean by “the capacity for reflection”, explain why it’s sufficient for moral agency, and argue that purely reflective beings can exist. In section 3, I will continue by arguing that reflection is sufficient for moral patiency. Doing so will provide the needed ammunition to argue against experientialism.

2. What is reflection?

The paradigm reflective agent is a normal human adult, capable of reflective self-consciousness. Gallagher’s (2010) definition of reflective self-consciousness is a good place to start. He defines it as “an explicit, conceptual, and objectifying awareness that takes a lower-order consciousness as its attentional theme.” Several features of this definition are important for my understanding of reflection. First, reflection must be explicit. A cat might think “I am hungry” but this thought is never explicitly articulated in its mind in the way a reflective human might reflect to themselves, “Boy, if I don’t eat breakfast I’m going to be hungry this evening for sure.” Second, reflection must be conceptual. What I mean by that is that in order to reflect one must have the concept of “reflection”, or at least some concept of “consciousness”. A cat might have a psyche but it lacks a concept of psyche qua psyche. A reflective creature knows, as it is reflecting, that it is reflecting, because it has at least one concept of reflection as such to distinguish it from other psychological events like behaving or perceiving.

Thus, to reflect in the full sense I intend one must have an explicit understanding of what it means to reflect and the ability to know that you are reflecting when you are reflecting. Furthermore, a distinguishing feature of reflection is that a reflective creature can reflect on just about anything: themselves, trees, rocks, numbers, philosophy, art, reflection itself, evolution, space-time, etc. While there might be some contents that are too unwieldy for human reflective agents to fully reflect on, a defining feature of reflection is its flexibility with regard to the contents of reflective acts. If a reflective agent is relaxed and not pressed for time it can very well reflect on almost anything so long as it has the right conceptual repertoire. Thus, I avoid the term “reflective self-consciousness” because reflective agents can actually take as an object of reflection just about any object or proposition, not just the “self”. Hence, I prefer to talk about “reflective consciousness” i.e. reflection. A feature of reflection closely related to flexibility is the ability to switch between different objects of reflection. A reflective creature, when suitably relaxed, can choose what to reflect on when it wants to. If it wants to reflect on the past, it can; if it wants to reflect on the future, it can.

Phenomenologically speaking, reflection is spatial, selective, and perspectival. Reflection is spatial because if I asked you to reflect on your cat and then your dog you would not imagine them mushed together; you would first reflect on your cat and then “move” onto your dog. All reflection is spatialized in this sense because the objects of reflection are “separated” from each other in mental space. This applies to the most abstract of ideas: if I ask you to reflect on the concept of liberty and then reflect on democracy there will be “movement” in your act of reflection as you go from idea to idea.  Reflection is selective because if I reflect on what I had for breakfast yesterday, I cannot simultaneously reflect on what I want for breakfast tomorrow. Reflection is perspectival because if I reflect on my walk through town yesterday the reflective act is done from a perspective. If my reflection is veridical I might reflect as if I were peering out of my head bobbing up and down as I walk but in all likelihood my reflection will be disembodied like a camera floating freely through space able to fly through the city at any speed.

Another feature of reflection is the capacity to explicitly reason and articulate about intentional actions qua intentional actions. To interact with something nonreflectively is to interact with it without explicitly realizing you have done so and without the ability to give a reason why you have done so. Conversely, to interact with something reflectively is to have the ability to reflect on your reasons for having chosen the action you did and, if needed, to explicitly articulate those reasons. The reasons you give might not be indicative of the true, underlying causal mechanisms for your action, but what’s important is the ability to articulate in terms of intentional actions even if you are confabulating (Nisbett & Wilson, 1977). Moreover, even if your voice box or muscles were completely paralyzed you would still have the ability to articulate your reasons, so long as you can articulate them to yourself, or so long as you know that you could articulate them if you had a means of expressing yourself. Thus, what counts is not so much the literal articulation of reasons but the capacity or potential to articulate reasons for action. Moreover, by action I mean mental or behavioral action, e.g. you could articulate to yourself why you chose to imagine yourself playing tennis as opposed to imagining yourself walking through your house.

Now that I have explained part of what it means to be a reflective agent, I want to explain why reflective agents are also moral agents, what I call reflective moral agents. Defending the cogency of reflective moral agency will clear the ground for my defense in the next section of reflective moral patiency. It’s relatively uncontroversial that the ability to reflect has instrumental value for moral agents, insofar as reflective creatures can reflect on better ways to help moral patients. But why should reflective agents count as moral agents just in virtue of being reflective agents, and not merely because reflection is instrumentally valuable? One reason is that reflective agency is important for realizing many things of intrinsic value according to what have been called “objective list” approaches to intrinsic goodness. Common items on these lists of intrinsically valuable goods include things such as: developing one’s talents, knowledge, accomplishment, autonomy, understanding, enjoyment, health, pleasure, friendship, self-respect, virtue, etc. Arguably reflection is not crucial for all these items, but it is especially important for autonomy, which roughly speaking is the ability to rationally make decisions for oneself and be a “self-legislating will”, i.e. someone who makes decisions on the basis of rules that they impose on themselves. Arguably autonomy involves the capacity for reflection insofar as one cannot automatically or unconsciously self-legislate; to self-legislate in this sense necessarily involves stepping back and reflecting on the type of life one wants to live.

For example, consider the concept of an “advance directive”, a legal document that allows people to decide in advance how they want to die. Suppose your friend Alice had never heard of an advance directive before, nor had she ever considered the question of how she wanted to die, e.g. whether she would want to live on life support for more than six months. Now if you asked Alice about advance directives and she responded instantly with a “no”, you would be confused. You would say, “How can you answer so quickly? Don’t you need to reflect a little longer on the question?” It would be one thing if she said “Oh, actually I have thought about this before and my answer is still no.” But it would be another thing altogether if she said “I don’t need to think about it – I just went with my gut reaction, and that gut reaction is no.” If she answered in this way you might think she did not understand the moral significance of advance directives, which demand a certain slowness in deliberation in order to be morally relevant.

Consider another example. You notice your friend Bob has grown really close to his girlfriend, Carol. One day you ask Bob if he wants to marry her and he instantly answers “Yes”. Surprised, you ask, “So you have thought about this before?” and Bob says “No, I’ve never thought about it before until you asked.” Most people would find this strange because marriage is such a significant life decision that it demands slow, deliberative reflection. To not reflect on such weighty issues indicates a failure of moral agency. These two examples illustrate a general principle about the crucial role reflection plays in supporting rational, autonomous choice, namely, that it must have an element of “slowness”. This kind of reflective autonomy is distinct from the autonomy of, say, cats, who are free to choose between sleeping on the mat or sleeping on the bed. The latter kind of autonomy is what we might call sentient autonomy because it’s possessed by almost all Earthly beings that are sentient. Sentient autonomy is important and distinguishes animals from, say, rocks and dust bunnies, but it is not the only kind of autonomy relevant to moral agency. If there were a being that possessed reflective autonomy but wasn’t sentient, it seems absurd to deny it moral agency. Reflectively autonomous agents would be able to choose to help moral patients regardless of their ability to sensuously feel pleasure or pain. Moreover, their decision procedures would be of a deliberative nature, grounded in reasons that they are able to explicitly articulate if necessary.

Consider the fictional character Commander Data from Star Trek. Data is an advanced android with a positronic brain that can compute trillions of operations per second. He is thus hyper-intelligent, processing information faster and more accurately than any human. Even if his brain is a computer Data is not merely a computer; he is a moral agent just the same as any human. The only difference is that Data is not a sentient being in the sense that he lacks the bodily consciousness of animals and other fleshy creatures.

Biting the bullet and denying Data moral agency is implausible given that Data was often the wisest and most morally principled of all the crewmembers, not to mention the most valiant in the face of action, as evidenced by his many medals of honor. If anyone was capable of reflective autonomy, it was Data. It might look from all appearances as if he were acting out of normal sentient autonomy, but this is an illusion generated by the sheer speed of his reflective processing. All of Data’s valor and bravery in service of Starfleet was exercised not because of any animal instinct or sentient autonomy but because he made a reflective choice. This is evident from the fact that if you asked Data why he performed action X in situation Y he would always be able to explicitly articulate a reason for having done so, even if that reason is “Because I was programmed to do so”. The relevant point, however, is that his actions display the flexibility, switching, and autonomy relevant for moral agency, as well as the explicitness characteristic of reflective agency.

3. Reflective Moral Patiency

In this section I will defend the second half of premise (2): the capacity for reflection by itself is sufficient for moral patiency. Any entity that can reflect is what I call a reflective patient. The guiding intuition behind experientialism is that welfare flows from the capacity to experience the world, not the capacity to reflect on the world. However, I contend that if there were a being that was insentient but capable of reflection, it would be wrong to harm it. Take Data again. I contend that it would be wrong to treat Data poorly by intentionally destroying him, being negligent with his robotic body, or needlessly destroying his prized belongings. In other words, Data is a moral patient that cannot be treated like just any mere physical object.

There are at least two objections someone might have to Data being a moral patient. First, the experientialist might simply balk at the thought that Data cannot feel pain and pleasure. How could his cognitive life be identical to that of a rock or other insentient entities? Surely there is a qualitative or experiential dimension to Data’s existence that distinguishes it from that of rocks and dust bunnies. I would respond by saying there is indeed a certain “quality” to Data’s information processing, but I’m not convinced we are forced to say such information processing is “experiential” unless that just means “has a quality”, which would trivialize the notion. I can grant that the quality of Data’s positronic brain as it reflectively operates is different from the quality of a rock because of its informational complexity, without supposing the quality is necessarily due to the information processing being experiential in the way an animal’s sensuous pleasure or pain is experiential. In effect, I’m proposing that an entity could have the quality of being a reflective thinker without being a subject of phenomenal experience.

The second objection is that moral patiency plausibly flows from an entity having interests that can either be satisfied or frustrated. Didn’t Data have interests and aspirations like anyone, however “robotic” or “inhuman”? If Data is merely engaging in reflective thought but lacks any interests, the objector might say, then it’s implausible that his life could be made better or worse, and thus he would not count as a moral patient. And since we’ve already agreed that Data surely is a moral patient, the objector concludes, his patiency must be due to a kind of experiential welfare, as per experientialism. The underlying assumption seems to be that unless a cognitive capacity is experienced it cannot be intrinsically valuable and thus cannot be a suitable locus for moral patiency. Call this the Principle of Experience (PE). Kahane & Savulescu also endorse a version of PE, writing that “phenomenal consciousness is required if a person is to have a point of view, that is for the satisfaction of some desire to be a benefit for someone” (2009, p. 17). The intuition behind PE is that what makes it permissible to randomly shoot a rock and impermissible to randomly shoot an animal is that rocks lack phenomenal experiences that can be negatively or positively affected.

However, I believe this objection fails to fully grasp the distinction between reflective patiency and sentiential patiency. Data can be a moral patient so long as we are careful to distinguish “bottom-up” interests that stem from animalistic sentience from “top-down” interests that stem from reflection. It’s debatable whether Data has genuine bottom-up interests, but it is undeniable that he has top-down interests due to his capacity for complex, reflective thought. For example, Data might not have a sentient instinct to avoid pain, but he can reflectively think “I do not want to be destroyed.” Data could surely sign an advance directive, and his signature would be morally relevant because he can explicitly articulate and reason about his decision. It would be wrong to intentionally destroy or mistreat Data not because he can experience the mistreatment but because it would violate his reflective interest in continuing to exist. If Data signed an advance directive, it would be wrong to intentionally ignore it for the exact same reason it’d be wrong to intentionally ignore a human’s advance directive.

Another kind of thought experiment supports the intuition that reflective consciousness is relevant to moral patiency independently of its relation to sentience. Consider the hypothetical scenario where a chimpanzee and a chicken are in a burning building and you can only save one. Other things being equal, it seems overall better to save the chimpanzee because, although both the chicken and the chimp are sentient, the chimp arguably has a greater degree of intrinsically valuable proto-reflectivity. Similarly, if the choice was between a chimpanzee and an adult human, it seems overall better to save the human for the same reason: the human is both sentient and reflective. Furthermore, suppose your mother or father was dying and the doctors said they could save their life only on the condition that they would be insentient but reflective. They would be able to converse intelligibly, write emails, thoughtfully answer questions about their own folk psychology, cook dinner, and otherwise act like perfectly normal people, except they couldn’t experience pleasure or pain. Would you accept the offer? It seems absurd not to. The rich, multidimensional intelligence associated with reflection is valuable independently of any contingent relation to sentience. These thought experiments lend credence to the thought that moral status comes in degrees and that reflective moral agents that are also sentient carry what some philosophers call “Full Moral Status” (Jaworska & Tannenbaum, 2013). Moral patients that are merely sentient carry less than full moral status because they are not reflective patients.

Conclusion

I’ve argued that experientialism is false because it assumes that all moral patients and all moral agents are necessarily sentient. In contrast, I’ve attempted to open up the conceptual space by arguing that the capacity for reflection by itself is sufficient for both moral agency and moral patiency.

 

References

Bernstein, M. H. (1998). On Moral Considerability: An Essay on Who Morally Matters. New York: Oxford University Press.

Farah, M. J. (2008). Neuroethics and the problem of other minds: Implications of neuroscience for the moral status of brain-damaged patients and nonhuman animals. Neuroethics, 1(1), 9-18.

Jaworska, A., & Tannenbaum, J. (2013). The grounds of moral status. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2013 Edition). URL = <http://plato.stanford.edu/archives/sum2013/entries/grounds-moral-status/>.

Kahane, G., & Savulescu, J. (2009). Brain damage and the moral significance of consciousness. Journal of Medicine and Philosophy, 34(1), 6-26.

Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231.

Regan, T. (1986). The case for animal rights. In P. Singer (Ed.), In Defense of Animals (pp. 13-26). New York: Basil Blackwell.

Suchy-Dicey, C. (2009). It takes two: Ethical dualism in the vegetative state. Neuroethics, 2(3), 125-136.

 


Filed under Consciousness, Neuroethics, Philosophy

Some Thoughts on Moral Status

John Doris suggested to me that the concept of “moral status” is probably more complicated than many realize. A common framework for understanding what it means to have moral status is the two-fold moral agent/moral patient framework. Like most concepts, this framework is best illustrated via example. The paradigm moral agent is the adult human. The paradigm moral patient is a newborn baby. The moral agent is capable of thinking morally and acting morally. When a moral agent acts morally, they usually do so with a patient in mind. Moral agents typically do not act morally towards bits of garbage. We simply toss them in the trash because they are mere material objects. They lack moral status for they are not moral patients. Another instance of a moral patient is arguably the chimpanzee. It would be wrong to toss a chimp in a giant garbage compactor because the chimp is a moral patient towards whom moral agents have duties, e.g. the duty not to needlessly or purposely harm patients. If a psychopath were to stab a chimp for the fun of it, this would be wrong. The psychopath is a defective moral agent, an agent that is failing to do his or her moral duty towards moral patients.

The moral agent/patient distinction is a fine one, but as a philosopher my job is often to expand or elaborate on the hidden complexity a seemingly simple concept affords. So here goes.

The problem with an overly simplistic moral agent/patient distinction is that it tends to classify all moral patients as sentient beings, which on Earth most people think includes at least all mammals. All mammals are moral patients because all (normal) mammals can feel pain, and moral agents have a duty not to inflict pain on moral patients unless they have a compelling reason to do so. However, I tentatively propose a new taxonomy of moral status, which I formulated haphazardly last night. It’s rough, so bear with me.

First, I propose there are two types of moral agents: reflective agents and sentient agents. An example of a reflective agent is a normal adult human. An example of a sentient agent is a cat. If you are capable of reflective thinking, you are a reflective agent. Typically, reflective agents are also sentient agents.

Second, I propose there are two types of moral patients: reflective patients and sentient patients. Again, an example of a reflective patient is a normal adult human. Adult humans are often in need of help from other moral agents, so they are both agents and patients at the same time. An example of a sentient patient is a cat. If you can feel pain or pleasure then you are a sentient patient. A cat is not capable of reflective thinking, yet it can feel pain and pleasure, so moral agents have a duty not to harm cats without a compelling reason.

Arguably the weirdest category is the sentient agent. How can a cat be a moral agent if it cannot reflectively think? Well, the answer is that you can do a lot of good in the world without being able to reflect. Consider a mama cat’s relationship to her newborn kittens. The kittens are sentient patients but not sentient agents. The kittens need help from the mama cat, and the mama cat normally has responsibilities towards her kittens, although in the real world the mama cat, like other animals with litters, will by necessity focus her powers on helping a subset of her litter.

From our new taxonomy of moral status we can now discuss different kinds of value. I propose there are two main types of value associated with each of the above types of agents. For reflective agents, there are two types of value: intrinsic reflective value and derived reflective value. An example of something with intrinsic reflective value is the act of reflective thought itself – it is valuable because reflective thought can potentially lead to a lot of good actions not possible otherwise. It would be wrong to needlessly destroy an adult human brain because that brain is the seat of reflective thinking.

An example of something with derived reflective value is a baseball signed by Babe Ruth. This baseball, though a mere physical object, has derived value because it is valued by some reflective agents, namely, baseball fans. It would be wrong to throw that baseball into the trash (without good reason) because this would cause harm to some reflective agents.

Turning to sentient agents, there are also two corresponding types of value: intrinsic sentiential value and derived sentiential value. An example of something with intrinsic sentiential value is the pleasure a dog feels as it is chewing on its favorite chew toy. My favorite category is derived sentiential value because it creates interesting overlaps. That very same baseball signed by Babe Ruth has the potential to possess derived sentiential value. Suppose a rich baseball fan has ten baseballs signed by Babe Ruth and decides to give one to his dog, Spike, to be used as a chew toy. The baseball becomes Spike’s favorite chew toy. It would be wrong to needlessly destroy that baseball, but not because of its derived reflective value: Spike cannot reflect and cannot appreciate how much the ball would be valued by other, not-so-rich baseball fans. What Spike can do, however, is value that baseball as a chew toy. Thus, the baseball has derived sentiential value because it is valued by a sentient creature.

From the above, we can generate two new types of patients: derived reflective patients and derived sentiential patients. The Babe Ruth baseball can be an example of both. If the baseball was the property of a normal, reflective baseball fan it would be wrong to destroy it because it is highly valued by a reflective agent/patient. If the baseball was the property of Spike the dog then it would be wrong to destroy it because it is highly valued as a chewtoy by a sentient agent/patient.
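To keep the overlapping categories straight, here is a minimal sketch of the taxonomy in Python. It is only my own illustration of the proposal above, not a formal part of it; the class names and attributes are placeholders I made up.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Entity:
    # A creature; its capacities determine which kinds of agent/patient it is.
    name: str
    reflective: bool = False  # capable of explicit, conceptual reflection
    sentient: bool = False    # capable of feeling pain and pleasure

    def statuses(self) -> List[str]:
        s = []
        if self.reflective:
            s += ["reflective agent", "reflective patient"]
        if self.sentient:
            s += ["sentient agent", "sentient patient"]
        return s or ["no intrinsic moral status"]

@dataclass
class Thing:
    # A mere object; any value it has is derived from the agents who value it.
    name: str
    valued_by: List[Entity] = field(default_factory=list)

    def derived_value(self) -> List[str]:
        kinds = set()
        for agent in self.valued_by:
            if agent.reflective:
                kinds.add("derived reflective value")
            if agent.sentient:
                kinds.add("derived sentiential value")
        return sorted(kinds) or ["mere material object"]

human = Entity("adult human", reflective=True, sentient=True)
spike = Entity("Spike the dog", sentient=True)
baseball = Thing("Babe Ruth baseball", valued_by=[spike])

print(human.statuses())          # all four statuses at once
print(spike.statuses())          # ['sentient agent', 'sentient patient']
print(baseball.derived_value())  # ['derived sentiential value']

The point the sketch makes explicit is that the categories overlap: one and the same entity, or object, can fall under several of them at once, which is exactly what the Babe Ruth baseball example trades on.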


Filed under Moral Philosophy, Philosophy

Microblogging is the future, or at least my future

If it seems like I haven’t been posting my “usual” it’s only because you’re not following me on G+. Just saying.


Filed under Random

My Biggest Pet Peeve in Consciousness Research

 

Boy was I excited to read that new Nature Neuroscience paper in which scientists report experimentally inducing lucid dreaming in people. Pretty cool, right? But then right in the abstract I ran across my biggest pet peeve whenever people use the dreaded c-word: blatant terminological inconsistency. Not just an inconsistency across different papers, or buried in a footnote, but between a title and an abstract, and within the abstract itself. Consider the title of the paper:

Induction of self awareness in dreams through frontal low current stimulation of gamma activity

The term “self-awareness” makes sense here because if normal dream awareness is environmentally-decoupled 1st-order awareness, then lucid dreaming is a 2nd-order awareness, because you become meta-aware of the fact that you are 1st-order dream-aware. So far so good. Now consider the abstract:

 Recent findings link fronto-temporal gamma electroencephalographic (EEG) activity to conscious awareness in dreams, but a causal relationship has not yet been established. We found that current stimulation in the lower gamma band during REM sleep influences ongoing brain activity and induces self-reflective awareness in dreams. Other stimulation frequencies were not effective, suggesting that higher order consciousness is indeed related to synchronous oscillations around 25 and 40 Hz.

Gah! What a confusing mess of conflicting concepts. The title says “self-awareness” but the first sentence talks instead about “conscious awareness”. It’s an elementary mistake to confuse consciousness with self-consciousness, or at least to conflate them without immediately qualifying why you are violating standard practice in doing so. While there are certainly theorists out there who are skeptical about the very idea of “1st-order” awareness being cleanly demarcated from “2nd-order” awareness (Dan Dennett comes to mind), it goes without saying that this is a highly controversial position that cannot just be assumed without begging the question. Immediate red flag.

The first sentence also references previous findings about the neural correlates of “conscious awareness” being linked to specific gamma frequencies of neural activity in fronto-temporal networks. The authors note, though, that correlation is not causation. The next sentence then leads us to believe the study will provide that missing causal evidence linking conscious awareness and gamma frequencies.

Yet the authors don’t say that. What they say instead is that they’ve found evidence that gamma frequencies are linked to “self-reflective awareness” and “higher-order consciousness”, which again are concepts theoretically distinct from “conscious awareness”, unless you are pretheoretically committed to a kind of higher-order theory of consciousness. But even that wouldn’t be quite right because on, e.g., Rosenthal’s HOT theory, a higher-order thought gives rise to first-order awareness, not the kind of self-awareness involved in lucid dreaming. On higher-order views, you would technically need a 3rd-order awareness to count as lucid dreaming.

So please, if you are writing about consciousness, remember that consciousness is distinct from self-consciousness and keep your terms straight.


Filed under Academia, Consciousness, Random