Tag Archives: consciousness

Man in Vegetative State Shows Brain Activity to Movie: What Does It Mean?

In a recent study, Naci et al. investigated how the brain responds to an 8-minute Alfred Hitchcock movie. In healthy subjects they found that frontal and parietal areas indicative of executive functioning were active during the most suspenseful parts of the movie. Then they showed the same movie to two patients diagnosed as being in a vegetative state, one of whom had been in that state for 16 years. In one of the patients they found that “activity in a network of frontal and parietal regions that are known to support executive processing significantly synchronized to that of healthy participants”. In other words, the vegetative man’s brain “tracked” the suspense-points of the movie in the same way that the brains of healthy controls did. They reasoned that the patient was therefore consciously aware of the video, despite being behaviorally unresponsive:

The patient’s brain activity in frontal and parietal regions was tightly synchronized with the healthy participants’ over time, and, crucially, it reflected the executive demands of specific events in the movie, as measured both qualitatively and quantitatively in healthy individuals. This suggested that the patient had a conscious cognitive experience highly similar to that of each and every healthy participant, while watching the same movie.
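The underlying analysis here is a form of inter-subject correlation: the patient’s regional fMRI time course is compared with the time courses of the healthy group watching the same movie. As a rough illustration of the idea, here is a minimal sketch in Python with NumPy; the array shapes and toy data are my own assumptions, not the authors’ actual pipeline, which involves proper preprocessing and statistical thresholding:

    import numpy as np

    def intersubject_correlation(patient_ts, healthy_group_ts):
        # patient_ts: 1-D array, fMRI signal in one region across movie time
        # healthy_group_ts: 2-D array (subjects x time) for the same region
        group_mean = healthy_group_ts.mean(axis=0)
        # Pearson correlation between the patient and the group average
        return np.corrcoef(patient_ts, group_mean)[0, 1]

    # Toy data: 12 healthy subjects, 200 movie time points
    rng = np.random.default_rng(0)
    healthy = rng.standard_normal((12, 200))
    patient = healthy.mean(axis=0) + 0.5 * rng.standard_normal(200)
    print(intersubject_correlation(patient, healthy))  # high r = "synchronized"

A high correlation in frontoparietal regions is what licenses the talk of the patient’s activity being “tightly synchronized” with the healthy group; whether it also licenses the inference to conscious experience is the question below.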

But what’s the connection between executive functioning and conscious experience? The authors write:

The “executive” function of the brain refers to those processes that coordinate and schedule a host of other more basic cognitive operations, such as monitoring and analyzing information from the environment and integrating it with internally generated goals, as well as planning and adapting new behavioral schemas to take account of this information. As such, executive function is integral to our conscious experience of the world as prior knowledge is integrated into the current “state of play” to make predictions about likely future events.

Does this mean that executive functioning is always conscious? Is the unconscious brain incapable of “monitoring and analyzing information from the environment” and “integrating” that information with goals? Color me skeptical but I believe in the power of the unconscious mind to perform these functions without the input of conscious awareness.

Several examples come to mind. In the “long-distance truck driver” phenomenon, people can drive automobiles for minutes if not hours without the input of conscious awareness. Surely driving requires “monitoring and analyzing information from the environment”, in addition to integrating that information with goals and adapting behavior to deal with novel road conditions.

Another example is automatic writing, where people can write whole intelligent paragraphs without the input of conscious attention, and where the “voice” of the writing is distinct from the person’s normal personality, channeling the personalities of deceased persons or famous literary figures. People would hold conversations with their automatic writing, indicating that the unconscious writer was responding to the environment and surely “monitoring and analyzing information”. I’m not aware of any brain imaging studies of automatic writing, but given the complexity of handwriting as a cognitive task I would not be surprised if frontal and parietal regions were active. The same goes for long-distance truck driving.

My point is simply to raise the question: Can executive function happen unconsciously? Naci et al. say that executive function is “integral” to conscious experience. That might be true. But is conscious experience integral to executive functioning? Maybe not. There is a litany of complex behaviors that can be performed unconsciously, all of which likely recruit frontal and parietal networks of the brain. We can’t simply assume that conscious awareness was involved just because information integration occurred. To make that inference would require us to think that the unconscious mind is “dumb” and incapable of integrating information. But there is plenty of reason to think that what Timothy Wilson calls the “adaptive unconscious” is highly intelligent and capable of many “higher-order” cognitive functions, including monitoring, integrating, planning, and reasoning.

2 Comments

Filed under Consciousness, Psychology

Can the Clinical Diagnosis of Disorders of Consciousness Avoid Behaviorism?


The “standard approach” in clinical neurology has been accused of suffering from an implicit “behaviorist epistemology” because disorders of consciousness are typically diagnosed on the basis of a lack of behavior. The gold-standard diagnostic assessment programs, such as the JFK Coma Recovery Scale, are behavioral in nature insofar as they are expressly looking for the presence or absence of behavior, either motor or verbal. If the behavior occurs appropriately in response to the command or stimulus, the patient gets points that accumulate towards “normal” consciousness. If no behavior is observable in response to the cue, they don’t get points and are said to have a “disorder of consciousness”.
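The logic of such scales can be caricatured in a few lines of code. This is a deliberately simplified sketch; the items, point values, and cutoffs are invented for illustration and are not the actual JFK Coma Recovery Scale:

    # Toy caricature of behavior-based diagnosis. The items and
    # cutoffs are invented, not the real JFK Coma Recovery Scale.
    observed = {
        "follows_command": False,      # e.g., squeezes hand on request
        "visual_pursuit": True,        # eyes track a moving object
        "intelligible_speech": False,
    }

    score = sum(observed.values())  # one point per behavior observed
    if score == 0:
        diagnosis = "vegetative state"
    elif score < len(observed):
        diagnosis = "minimally conscious state"
    else:
        diagnosis = "normal consciousness"

    print(score, diagnosis)  # 1 minimally conscious state

Notice that consciousness itself never appears anywhere in the procedure; only behavior does. The absence of observable behavior is silently converted into the absence of consciousness, which is exactly the behaviorist inference at issue.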

The problem with this approach is both conceptual and empirical. Conceptually, there is no necessary link between behavior and consciousness because, unless you are Gilbert Ryle or Wittgenstein, you don’t want to define consciousness in terms of behavior. That is, we don’t want to define “pain” as simply the behavior of your limbs whenever your cells are damaged, or the disposition to say “ouch”. The reason we don’t want to do this is that pain is supposed to be a feeling, painfulness, not a behavior.

Empirically, we know of many cases where behavior and consciousness can be decoupled, such as the total locked-in state, where someone’s mind is more or less normal but they are completely paralyzed, looking for all intents and purposes like someone in a deep coma or vegetative state while retaining normal brain function. From the outside they would fail these behavioral assessments, yet from the inside they have full consciousness. Furthermore, we know that in some cases of general anesthesia there can be a complete lack of motor response to stimulation while the person maintains conscious awareness.

Another problem with the behaviorist epistemology of clinical diagnosis is that the standard assessment scales require a certain level of human expertise in making the diagnostic judgment. Although most scales have high inter-rater reliability, it ultimately comes down to a fallible human making a judgment about someone’s consciousness on the basis of subtle differences between “random” and “meaningful” behavior. A random behavior is just that: a random, reflexive movement that signifies no higher purpose or goal. But if I ask someone to squeeze my hand and they squeeze it, this is a meaningful sign because it suggests that they can understand language and translate a verbal command into a willed response. But what if the verbal command to squeeze just triggers an unconscious response to squeeze? Sure, it’s possible. No one should rule it out. But what if they do it five times in a row? Or what if I say “don’t squeeze my hand” and they don’t squeeze it? Now we are getting into what clinicians call “unambiguous signs of consciousness”, because the behavior is expressive of a meaningful purpose and shows what they call “contingency”, which is just another way of saying “appropriate”.

But what does it mean for a behavior to really be meaningful? Just that there is a goal-structure behind it? Or that it is willed? Again, we don’t want to define “meaning” or “appropriateness” in terms of outward behavior, because when you are sleepwalking your behavior is goal-structured yet you are not conscious. Or consider the case of automatic writing, in which one of your hands is capable of carrying on a written conversation and producing meaningful linguistic statements without “you” being in control at all. So clearly there is a possible dissociation between “meaningful” behavior and consciousness. All we can say is that for normal people in normal circumstances, meaningful behavior is a good indicator of normal consciousness. But notice how vacuous that statement is. It tells us nothing about the hard cases.

So, in a nutshell, the diagnosis of disorders of consciousness has an inescapable element of human subjectivity in it. Which is precisely why researchers are trying to move to brain-based diagnostic tools such as fMRI or EEG, which are supposed to be more “objective” because they skip right over the question of meaningful behavior and look at the “source” of the behavior: the brain itself. But I want to argue that such measures can never bypass the subjectivity of diagnosis without going full behaviorist. The reason brain-based measures of disorders of consciousness are behaviorist is simply that you are looking at the behavior of neurons. You can’t see the “feelings” of neurons from a brain scanner any more than you can see the “feeling” of pain from watching someone’s limb move. Looking at the brain does not grant you special powers to see consciousness more directly. It is still an indirect measure of consciousness, and it will always require the human judgment of the clinician to say “OK, this brain activity is going to count as a measure towards ‘normal’ consciousness”. It might be slightly more objective, but it will never be free of subjectivity unless you want to define normal consciousness in terms of neural behavior. But how is that any different from standard behaviorism? The only difference is that we are relying on the assumption that neural behavior is the substrate of consciousness. That might be true from a metaphysical perspective. But it’s no help in the epistemology of diagnosis, because as an outside observer you don’t see the consciousness. You just see the squishy brain or some representation on a computer screen. I believe there is a circularity here that cannot be escaped, but I won’t go into it here (I talk about it in this post).

2 Comments

Filed under Consciousness, Philosophy of science, Psychology, Uncategorized

Is Consciousness Required for Discrimination?

In their book A Universe of Consciousness, Edelman and Tononi use the example of a photodiode discriminating light to illustrate the problem of consciousness:

Consider a simple physical device, such as a photodiode, that can differentiate between light and dark and provide an audible output. Let us then consider a conscious human being performing the same task and then giving a verbal report. The problem of consciousness can now be posed in elementary terms: Why should the simple differentiation between light and dark performed by the human being be associated with and, indeed, require conscious experience, while that performed by the photodiode presumably does not? (p. 17)
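The photodiode’s side of the comparison is easy to make fully explicit. Here is a minimal sketch (the threshold value and the output strings are arbitrary placeholders); the entire “discrimination” is one comparison, with nothing that tempts us to attribute experience:

    def photodiode_report(light_level, threshold=0.5):
        # Differentiate light from dark and "report" the result, as in
        # Edelman and Tononi's example. The threshold and the outputs
        # are arbitrary placeholders.
        if light_level > threshold:
            return "beep: light"
        return "silence: dark"

    print(photodiode_report(0.9))  # beep: light
    print(photodiode_report(0.1))  # silence: dark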

Does discrimination of a stimulus from a background “require conscious experience”? I don’t see why it would. This seems like something the unconscious mind could do all on its own, and indeed is doing all the time. But it comes down to how we are defining “consciousness”. If we are talking about consciousness as subjective experience, the question is: does discrimination require that there be something-it-is-like for the brain to perform that discrimination? Perhaps. But I don’t know how to answer that question empirically, given the subjective nature of experience and the sheer difficulty of building an objective consciousness-meter.

On the other hand, suppose by “consciousness” we mean something like System II-style cognition, i.e., slow, deliberate, conscious, introspective thinking. On this view, consciousness is but the tip of the iceberg of cognitive processing, so it would be absurd to say that the unconscious mind, the other 99 percent, is incapable of discrimination. This is the lesson of Oswald Külpe and the Würzburg school of imageless thought. They asked trained introspectors from the Wundtian tradition to discriminate between two weights with their hands, to see if one weight was heavier than the other. Then the subjects were asked to introspect and see if they were aware of the process of discrimination. To their surprise, there were no conscious images associated with the weight-discrimination. They simply held the weights in their hands, consciously formed an intention to discriminate, the discrimination happened unconsciously, and then they became aware of the results of the unconscious judgment. Hence Külpe and the Würzburg school discovered a whole class of “imageless thought”, i.e., thought that happens beneath the level of conscious awareness.

Of course, the Würzburg school wasn’t talking about consciousness in terms of subjective qualia. They were talking about consciousness in terms of what’s introspectable: if you can’t introspect a thought process in your mind, then it’s unconscious. On this view, and in conjunction with evolutionary models of introspection, it seems clear that a great many discriminations happen beneath the surface of conscious awareness. This is how I prefer to talk about consciousness: in terms of System II-style introspection, where consciousness is but the tip of a great cognitive iceberg. On Edelman and Tononi’s view, consciousness occurs anytime there is information integration. On my view, information integration can occur unconsciously, and indeed most if not all non-human animal life is unconscious. Human-style slow, deliberate, introspective conscious reflection is rare in the animal kingdom, even if during normal human waking life it is constantly running, overlapping and integrating with the iceberg of unconsciousness so as to give an illusion of cognitive unity. It seems as if consciousness is everywhere all the time and that there is very little unconscious activity. But as Julian Jaynes once said, we cannot be conscious of what we are not conscious of, and so consciousness seems pervasive in our mental life when in fact it is not.

2 Comments

Filed under Consciousness

Some Comments on Edelman and Tononi’s book A Universe of Consciousness

I started reading Edelman and Tononi’s book A Universe of Consciousness and I wanted to offer some skeptical comments. I’m generally skeptical about any theorizing about consciousness these days, not because I’m against theorizing in science but because I have been leaning more Mysterian in my epistemology towards “consciousness”, where “consciousness” refers to subjective experience. I think any fundamental theory of consciousness is doomed to fail because it will run into a vicious circularity, as I will explain below. Take this seemingly innocuous statement offered at the beginning of chapter 1:

Everyone knows what consciousness is: It is what abandons you every evening when you fall asleep and reappears the next morning when you wake up.

Already E&T are helping themselves to some heavy-duty, theoretically loaded assumptions. E&T are talking about consciousness as subjectivity, so why assume subjectivity goes away completely during dreamless sleep? How do we know there isn’t something-it-is-like to be asleep and we simply don’t remember what-it’s-like? If subjectivity is at 100% during wakefulness, why not think it goes down to 1% or 0.05% while sleeping instead of 0%? Perhaps what-it-is-like for humans to be asleep is analogous in subjective intensity to what-it-is-like to be a bee or a lizard when awake.

By helping themselves to the assumption that consciousness goes away completely during sleep, E&T allow themselves a “starting point” or “fixed point” from which to begin their theorizing. It becomes their rock-solid assumption against which they can begin doing experimental work. But from a fundamental point of view, it is an unargued-for assumption. Where’s the evidence for it? Introspective evidence is not enough, because introspection is turned off during sleep. And empirical evidence? How are you going to measure it? With a consciousness-meter? Well, how are you going to validate that it’s calibrated properly? Say you build one, point it at a sleeping brain, and it registers “0”. How do you know the measurement is correct? What’s the calibration method?

They also assume that consciousness is a “relatively recent development” evolutionarily speaking. If we were talking about self-consciousness this would make sense, but they are not. They are talking about subjectivity, the having of a “point of view”. But why not think a bee has a point of view on the world? Or why assume you need a brain or nervous system at all? For all we know there is something-it-is-like to be an amoeba. E&T want this to be another “fixed point”, because if you assume that subjectivity requires a brain or nervous system it gives you a starting place scientifically. It tells you where to look. But again, it’s never argued for, simply assumed. And it’s not logically incoherent to think a creature without a nervous system has a dim phenomenology.

Suppose you assumed that only brained creatures have consciousness and you devised a theory accordingly. Having made your theory, you devise a series of experimental techniques and measurements and then apply them to brained creatures. You “confirm” that, yes indeed, brained creatures are conscious all right. What happens when you apply the same technique to a non-brained creature like an amoeba, testing for whether the amoeba has consciousness? Surprise, surprise: your technique fails to register any consciousness in the amoeba. But there is a blatant epistemic circularity here, because you designed your measurement technique according to certain theoretical assumptions, starting with the “fixed point” that consciousness requires a nervous system. But why make that assumption? Why not assume instead that subjectivity starts with life itself and is progressively modified as nervous systems are introduced? Moreover, they assume that

Conscious experience is integrated (conscious states cannot be subdivided into independent components) and, at the same time, is highly differentiated (one can experience billions of different conscious states).

Why can’t conscious states be subdivided? Why assume that? What does that even mean? Divided from what into what? Take the example of sleep at 0.05% consciousness. Why not think wakeful “unified” consciousness at 100% is the result of a thousand tiny microconsciousnesses “singing” side by side, such that the total choir gives rise to the illusion of a single large singer? When E&T say “one” can experience billions of states, who is this “one”? Why one, and not many? Their assumption of conscious unity is another “fixed point”, but it’s just an assumption. Granted, it’s an assumption that stems from introspective experience, but why trust introspection here? Introspection also says consciousness completely goes away during sleep, but as we’ve seen it might be wrong about that.

3 Comments

Filed under Consciousness

Vegetative State Patients as Moral Patients

https://www.academia.edu/7692522/Vegetative_State_Patients_As_Moral_Patients

Abstract:

Adrian Owen (2006) recently discovered that some vegetative state (VS) patients have residual levels of cognition, enabling them to communicate using brain scanners. This discovery is clearly morally significant, but the problem comes in specifying why exactly the discovery is morally significant and whether extant theories of moral patienthood can be applied to explain the significance. In this paper I explore Mark Bernstein’s theory of experientialism, which says an entity deserves moral consideration if it is a subject of conscious experience. Because VS is a disorder of consciousness, it should be straightforward to apply Bernstein’s theory to Owen’s discovery, but several problems arise. First, Bernstein’s theory is beset by ambiguity in several key respects that makes it difficult to apply to the discovery. Second, Bernstein’s theory of experientialism fails to fully account for the normative significance of what I call “narrative experience”. A deeper appreciation of narrative experience is needed to account for the normative significance of Owen’s findings.


This paper has gone through so many drafts. I swear I’ve rewritten it 5 times from more or less scratch. Each time I’ve tried to narrow my thesis to be ever smaller and less ambitious because I’m pretty sure that’s the only way I’m going to get this thing passed by my qualifying paper committee. As always, any thoughts or comments appreciated.

1 Comment

Filed under Consciousness, Neuroethics, Psychology

Reflecting On What Matters

1. Introduction

What does it take for your life to go better or worse? One idea is experientialism. For experientialists, what matters is sentience, the capacity to experience pain and pleasure. Experientialists typically appeal to a distinction between moral agency and moral patiency to argue that only sentient beings can be moral patients. The paradigm moral agent is the adult human, capable of both thinking morally and acting morally. Most moral agents are also moral patients because most adult humans are sentient. The paradigm moral patient that is not also a moral agent is a newborn baby or a nonhuman animal. For my purposes, the key doctrine of experientialism is that sentience is necessary for both moral agency and moral patiency.

The goal of this paper is to refute that doctrine and argue that the capacity for reflection by itself is sufficient for both moral agency and moral patiency. In other words, a purely reflective but insentient being would be both a moral agent and a moral patient simply in virtue of their capacity for reflection. Who explicitly denies this? Suchy-Dicey (2009) argues that a being that was reflective but not sentient would not be a moral patient. She states that “autonomy without the potential for experiencing welfare is not valuable…the ability to experience welfare is a precondition for the value of autonomy” (2009, p. 134). Thus, Suchy-Dicey says the value of reflection is parasitic upon sentience but not vice versa. That is, an entity is a moral patient if it is both sentient and reflective, or if it is only sentient—but if an entity is reflective but not sentient then on Suchy-Dicey’s view it does not count as a moral patient. Hence, Suchy-Dicey’s view is characterized by two features:

(1). Value Pluralism: Both sentience and reflection are intrinsically valuable.

(2). Value Asymmetry: The value of sentience for moral patiency is independent of reflection but the value of reflection for moral patiency is dependent on sentience. Thus, if an entity is reflective but not sentient, it is not a moral patient.

I agree with (1) but deny (2). Instead, I will defend the following thesis:

(2*). Value Symmetry: The value of sentience for moral patiency is independent of reflection and vice versa. Thus, an entity that is reflective but not sentient would still be a moral patient.

This paper aims to defend (2*) against (2). To do so, I defend the following argument:

  1. Experientialism assumes that all moral patients and all moral agents are necessarily sentient.
  2. The capacity for reflection by itself is sufficient for both moral patiency and moral agency.
  3. By (2), if a purely reflective being existed, it would be both a moral patient and a moral agent.
  4. Purely reflective beings can exist.
  5. Thus, experientialism is false.

Premise (1) just falls out of the commitments of experientialism. The most controversial premise is arguably (2). To defend it, I will need to do several things. In section 2, I will explain what I mean by “the capacity for reflection”, explain why it’s sufficient for moral agency, and argue that purely reflective beings can exist. In section 3, I will continue by arguing that reflection is sufficient for moral patiency. Doing so will provide the needed ammunition to argue against experientialism.

2. What is reflection?

The paradigm reflective agent is a normal human adult, capable of reflective self-consciousness. Gallagher’s (2010) definition of reflective self-consciousness is a good place to start. He defines it as “an explicit, conceptual, and objectifying awareness that takes a lower-order consciousness as its attentional theme.” Several features are important for my understanding of reflection. First, reflection must be explicit. A cat might think “I am hungry”, but this thought is never explicitly articulated in its mind in the way a reflective human might reflect to themselves, “Boy, if I don’t eat breakfast I’m going to be hungry this evening for sure.” Second, reflection must be conceptual. What I mean is that in order to reflect one must have the concept of “reflection”, or at least some concept of “consciousness”. A cat might have a psyche, but it lacks a concept of psyche qua psyche. A reflective creature knows as it’s reflecting that it’s reflecting, because it has at least one concept of reflection as such to distinguish it from other psychological events like behaving or perceiving.

Thus, to reflect in the full sense I intend, one must have an explicit understanding of what it means to reflect and the ability to know that you are reflecting when you are reflecting. Furthermore, a distinguishing feature of reflection is that a reflective creature can reflect on just about anything: itself, trees, rocks, numbers, philosophy, art, reflection itself, evolution, space-time, etc. While there might be some contents too unwieldy for human reflective agents to fully reflect on, a defining feature of reflection is its flexibility with regard to the contents of reflective acts. If a reflective agent is relaxed and not pressed for time, it can reflect on almost anything so long as it has the right conceptual repertoire. Thus, I avoid the term “reflective self-consciousness”, because reflective agents can take as an object of reflection just about any object or proposition, not just the “self”. Hence, I prefer to talk about “reflective consciousness”, i.e., reflection. A feature of reflection closely related to flexibility is the ability to switch between different objects of reflection. A reflective creature, when suitably relaxed, can choose what to reflect on when it wants to. If it wants to reflect on the past, it can; if it wants to reflect on the future, it can.

Phenomenologically speaking, reflection is spatial, selective, and perspectival. Reflection is spatial because if I asked you to reflect on your cat and then your dog, you would not imagine them mushed together; you would first reflect on your cat and then “move” on to your dog. All reflection is spatialized in this sense because the objects of reflection are “separated” from each other in mental space. This applies to the most abstract of ideas: if I ask you to reflect on the concept of liberty and then on democracy, there will be “movement” in your act of reflection as you go from idea to idea. Reflection is selective because if I reflect on what I had for breakfast yesterday, I cannot simultaneously reflect on what I want for breakfast tomorrow. Reflection is perspectival because if I reflect on my walk through town yesterday, the reflective act is done from a perspective. If my reflection is veridical, I might reflect as if I were peering out of my head, bobbing up and down as I walk, but in all likelihood my reflection will be disembodied, like a camera floating freely through space, able to fly through the city at any speed.

Another feature of reflection is the capacity to explicitly reason and articulate about intentional actions qua intentional actions. To interact with something nonreflectively is to interact with it without explicitly realizing you have done so and without the ability to give a reason why you have done so. Conversely, to interact with something reflectively enables you to reflect on your reasons for having chosen the action you did and, if needed, to explicitly articulate those reasons. The reasons you give might not be indicative of the true, underlying causal mechanisms of your action, but what’s important is the ability to articulate in terms of intentional actions even if you are confabulating (Nisbett & Wilson, 1977). Moreover, even if your voice box or muscles were completely paralyzed, you would still have the ability to articulate your reasons so long as you can articulate them to yourself, or so long as you possess the knowledge that if you had a means of expressing yourself you could actually articulate them. Thus, what counts is not so much the literal articulation of reasons but the capacity or potential to articulate reasons for action. Moreover, by “action” I mean mental or behavioral action; e.g., you could articulate to yourself why you chose to imagine yourself playing tennis as opposed to imagining yourself walking through your house.

Now that I have explained part of what it means to be a reflective agent, I want to explain why reflective agents are also moral agents, what I call reflective moral agents. Defending the cogency of reflective moral agency will clear the ground for my defense in the next section of reflective moral patiency. It’s relatively uncontroversial that the ability to reflect has instrumental value for moral agents, insofar as reflective creatures can reflect on better ways to help moral patients. But why should reflective agents be moral agents just in virtue of their being reflective agents, and not because reflection is instrumentally valuable? One reason is that reflective agency is important for realizing many things of intrinsic value according to what have been called “objective list” approaches to intrinsic goodness. Common items on these lists of intrinsically valuable goods include developing one’s talents, knowledge, accomplishment, autonomy, understanding, enjoyment, health, pleasure, friendship, self-respect, and virtue. Arguably reflection is not crucial for all these items, but it is especially important for autonomy, which roughly speaking is the ability to rationally make decisions for oneself and be a “self-legislating will”, i.e., someone who makes decisions on the basis of rules that they impose on themselves. Arguably autonomy involves the capacity for reflection insofar as one cannot automatically or unconsciously self-legislate; to self-legislate in this sense necessarily involves stepping back and reflecting on the type of life one wants to live.

For example, consider the concept of an “advance directive”, a legal document that allows people to decide in advance how they want to die. Suppose your friend Alice had never heard of an advance directive before, nor had she ever considered the question of how she wanted to die, e.g., whether she would want to live on life support for more than six months. Now if you asked Alice about advance directives and she responded instantly with a “no”, you would be confused. You would say, “How can you answer so quickly? Don’t you need to reflect a little longer on the question?” It would be one thing if she said, “Oh, actually I have thought about this before and my answer is still no.” But it would be another thing altogether if she said, “I don’t need to think about it – I just went with my gut reaction, and that gut reaction is no.” If she answered in this way you might think she did not understand the moral significance of advance directives, which demand a certain kind of slowness in deliberation in order to be morally relevant.

Consider another example. You notice your friend Bob has grown really close to his girlfriend, Carol. One day you ask Bob if he wants to marry her and he instantly answers “Yes”. Surprised, you ask, “So you have thought about this before?” and Bob says, “No, I’ve never thought about it before until you asked.” Most people would find this strange, because marriage is such a significant life decision that it demands slow, deliberative reflection. To not reflect on such weighty issues indicates a failure of moral agency. These two examples illustrate a general principle about the crucial role reflection plays in supporting rational, autonomous choice, namely, that it must have an element of “slowness”. This kind of reflective autonomy is distinct from the autonomy of, say, cats, who are free to choose between sleeping on the mat or sleeping on the bed. The latter kind of autonomy is what we might call sentient autonomy, because it’s possessed by almost all Earthly beings that are sentient. Sentient autonomy is important and distinguishes animals from, say, rocks and dust bunnies, but it is not the only kind of autonomy relevant to moral agency. If there were a being that possessed reflective autonomy but wasn’t sentient, it seems absurd to deny it moral agency. Reflectively autonomous agents would be able to choose to help moral patients regardless of their ability to sensuously feel pleasure or pain. Moreover, their decision procedures would be of a deliberative nature, grounded in reasons that they are able to explicitly articulate if necessary.

Consider the fictional character Commander Data from Star Trek. Data is an advanced android with a positronic brain that can compute trillions of operations per second. He is thus hyper-intelligent, processing information faster and more accurately than any human. But even if his brain is a computer, Data is not merely a computer; he is a moral agent just the same as any human. The only difference is that Data is not a sentient being, in the sense that he lacks the bodily consciousness of animals and other fleshy creatures.

Biting the bullet and denying Data moral agency is implausible given that Data was often the wisest and most morally principled of all the crewmembers, not to mention the most valiant in the face of action, as evidenced by his many medals of honor. If anyone was capable of reflective autonomy, it was Data. It might look from all appearances that he was acting out of normal sentient autonomy, but this is an illusion generated by the sheer speed of his reflective processing. Consider the numerous medals he won for bravery and honor in service of Starfleet. All of Data’s valor and bravery were executed not because of any animal instinct or sentient autonomy but because he made a reflective choice. This is evident from the fact that if you asked Data why he performed action X in situation Y, he would always be able to explicitly articulate a reason for having done so, even if that reason was “Because I was programmed to do so”. The relevant point, however, is that his actions display the flexibility, switching, and autonomy relevant for moral agency, as well as the explicitness characteristic of reflective agency.

3. Reflective Moral Patiency

In this section I will defend the second half of premise (2): the capacity for reflection by itself is sufficient for moral patiency. Any entity that can reflect is what I call a reflective patient. The guiding intuition behind experientialism is that welfare flows from the capacity to experience the world, not the capacity to reflect on the world. However, I contend that if there were a being that was insentient but capable of reflection, it would be wrong to harm it. Take Data again. I contend that it would be wrong to treat Data poorly by intentionally destroying him, being negligent toward his robotic body, or needlessly destroying his prized belongings. In other words, Data is a moral patient that cannot be treated like just any mere physical object.

There are at least two objections someone might have to Data being a moral patient. First, the experientialist might simply balk at the thought that Data cannot feel pain and pleasure. How could his cognitive life be identical to that of a rock or other insentient entities? Surely there is a qualitative or experiential dimension to Data’s existence that distinguishes it from that of rocks and dust bunnies. I would respond by saying there is indeed a certain “quality” to Data’s information processing, but I’m not convinced we are forced to say such information processing is “experiential” unless that just means “has a quality”, which would trivialize the notion. I can grant that the quality of Data’s positronic brain as it reflectively operates is different from the quality of a rock, because of its informational complexity, without supposing the quality is necessarily due to the information processing being experiential in the way an animal’s sensuous pleasure or pain is experiential. In effect, I’m proposing that an entity could have the quality of being a reflective thinker without being a subject of phenomenal experience.

The second objection is that moral patiency plausibly flows from an entity having interests that can be either satisfied or frustrated. Didn’t Data have interests and aspirations like anyone, however “robotic” or “inhuman”? If Data merely engages in reflective thought but lacks any interests, then the objector might say it’s implausible that his life could be made better or worse, and thus he would not count as a moral patient. And since we’ve already agreed that Data surely is a moral patient, the objector concludes that his patiency must be due to a kind of experiential welfare, as per experientialism. The underlying assumption seems to be that unless a cognitive capacity is experienced, it cannot be intrinsically valuable and thus cannot be a suitable locus for moral patiency. Call this the Principle of Experience (PE). Kahane & Savulescu also endorse a version of PE, writing that “phenomenal consciousness is required if a person is to have a point of view, that is for the satisfaction of some desire to be a benefit for someone” (2009, p. 17). The intuition behind PE is that what makes it permissible to randomly shoot a rock and impermissible to randomly shoot an animal is that rocks lack phenomenal experiences that can be negatively or positively affected.

However, I believe this objection fails to fully grasp the distinction between reflective patiency and sentiential patiency. Data can be a moral patient so long as we are careful to distinguish “bottom-up” interests that stem from animalistic sentience and “top-down” interests that stem from reflection. It’s debatable whether Data has genuine bottom-up interests, but it’s undeniable that he has top-down interests due to his capacity for complex, reflective thought. For example, Data might not have a sentiential instinct to avoid pain, but he can reflectively think, “I do not want to be destroyed.” Data could surely sign an advance directive, and his signature would be morally relevant because he can explicitly articulate and reason about his decision. It would be wrong to intentionally destroy or mistreat Data not because he can experience the mistreatment but because it would violate his reflective interest in continuing to exist. If Data signed an advance directive, it would be wrong to intentionally ignore it for the exact same reason it’d be wrong to intentionally ignore a human’s advance directive.

Another kind of thought experiment supports the intuition that reflective consciousness is relevant to moral patiency independently of its relation to sentience. Consider a hypothetical scenario where a chimpanzee and a chicken are in a burning building and you can only save one. Other things being equal, it seems overall better to save the chimpanzee, because although both the chicken and the chimp are sentient, the chimp arguably has a greater degree of proto-reflectivity, which is intrinsically valuable. Similarly, if the choice were between a chimpanzee and an adult human, it seems overall better to save the human for the same reason: the human is both sentient and reflective. Furthermore, suppose your mother or father were dying and the doctors said they could save their life only on the condition that they would be insentient but reflective. They would be able to converse intelligibly, write emails, thoughtfully answer questions about their own folk psychology, cook dinner, and otherwise act like perfectly normal people, except they couldn’t experience pleasure or pain. Would you accept the offer? It seems absurd not to. The rich, multidimensional intelligence associated with reflection is valuable independently of any contingent relation to sentience. These thought experiments lend credence to the thought that moral status comes in degrees and that reflective moral agents that are also sentient carry what some philosophers call “Full Moral Status” (Jaworska & Tannenbaum, 2013). Moral patients that are merely sentient carry less than full moral status because they are not reflective patients.

Conclusion

I’ve argued that experientialism is false because it assumes that all moral patients and all moral agents are necessarily sentient. In contrast, I’ve attempted to open up the conceptual space by arguing that the capacity for reflection by itself is sufficient for both moral agency and moral patiency.


References

Bernstein, M. H. (1998). On Moral Considerability: An Essay on Who Morally Matters. New York: Oxford University Press.

Farah, M. J. (2008). Neuroethics and the problem of other minds: Implications of neuroscience for the moral status of brain-damaged patients and nonhuman animals. Neuroethics, 1(1), 9-18.

Jaworska, A., & Tannenbaum, J. (2013). The grounds of moral status. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2013 Edition). URL = <http://plato.stanford.edu/archives/sum2013/entries/grounds-moral-status/>.

Kahane, G., & Savulescu, J. (2009). Brain damage and the moral significance of consciousness. Journal of Medicine and Philosophy, 34(1), 6-26.

Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231-259.


Regan, T. (1986). The case for animal rights. In P. Singer (Ed.), In Defense of Animals (pp. 13-26). New York: Basil Blackwell.


Suchy-Dicey, C. (2009). It takes two: Ethical dualism in the vegetative state. Neuroethics, 2(3), 125-136.


2 Comments

Filed under Consciousness, Neuroethics, Philosophy

My Biggest Pet Peeve in Consciousness Research


Boy, was I excited to read that new Nature Neuroscience paper in which scientists report experimentally inducing lucid dreaming in people. Pretty cool, right? But then, right in the abstract, I ran across my biggest pet peeve whenever people use the dreaded c-word: blatant terminological inconsistency. Not just an inconsistency across different papers, or buried in a footnote, but between the title and the abstract, and within the abstract itself. Consider the title of the paper:

Induction of self awareness in dreams through frontal low current stimulation of gamma activity

The term “self-awareness” makes sense here, because if normal dream awareness is environmentally decoupled 1st-order awareness, then lucid dreaming is a 2nd-order awareness: you become meta-aware of the fact that you are first-order dream-aware. So far so good. Now consider the abstract:

 Recent findings link fronto-temporal gamma electroencephalographic (EEG) activity to conscious awareness in dreams, but a causal relationship has not yet been established. We found that current stimulation in the lower gamma band during REM sleep influences ongoing brain activity and induces self-reflective awareness in dreams. Other stimulation frequencies were not effective, suggesting that higher order consciousness is indeed related to synchronous oscillations around 25 and 40 Hz.

Gah! What a confusing mess of conflicting concepts. The title says “self-awareness” but the first sentence talks instead about “conscious awareness”. It’s an elementary mistake to confuse consciousness with self-consciousness, or at least to conflate them without immediately qualifying why you are violating standard practice in doing so. While there are certainly theorists out there who are skeptical about the very idea of “1st-order” awareness being cleanly demarcated from “2nd-order” awareness (Dan Dennett comes to mind), it goes without saying that this is a highly controversial position that cannot just be assumed without begging the question. Immediate red flag.

The first sentence also references previous findings linking the neural correlates of “conscious awareness” to specific gamma frequencies of neural activity in fronto-temporal networks. The authors note, though, that correlation is not causation. The next sentence then leads us to believe the study will provide that missing causal evidence about conscious awareness and gamma frequencies.

Yet the authors don’t say that. What they say instead is that they’ve found evidence that gamma frequencies are linked to “self-reflective awareness” and “higher order consciousness”, which again are theoretically distinct concepts from “conscious awareness”, unless you are pretheoretically committed to a kind of higher-order theory of consciousness. But even that wouldn’t be quite right, because on, e.g., Rosenthal’s HOT theory, a higher-order thought would give rise to first-order awareness, not lucid dreaming, which is about self-awareness. On higher-order views, you would technically need a 3rd-order awareness to count as lucid dreaming.

So please, if you are writing about consciousness, remember that consciousness is distinct from self-consciousness and keep your terms straight.

1 Comment

Filed under Academia, Consciousness, Random