I recently joined the site Steemit and will be cross-posting some old content from this site. My Steemit username is @rachelsmantra
James Bernat writes that it’s built into the “paradigm” of death that “death is fundamentally a biological phenomenon.” But suppose humans in the distant future are successful in building an artificial intelligence that has person-level properties such as consciousness, memory, etc. And suppose this robot is destroyed. Would we not want to say that the robot died? What other concept would be appropriate for describing what happened to this artificial intelligence? Thus it seems like death is not a fundamentally biological phenomenon.
In a recent study, Naci et al. investigated how the brain responds to an 8-minute Alfred Hitchcock movie. In healthy subjects they found that frontal and parietal areas indicative of executive functioning were active during the most suspenseful parts of the movie. They then showed the same movie to two patients diagnosed as being in a vegetative state (VS), one of whom had been in VS for 16 years. In one of the patients they found that “activity in a network of frontal and parietal regions that are known to support executive processing significantly synchronized to that of healthy participants”. In other words, the vegetative man’s brain “tracked” the suspense-points of the movie in the same way that healthy controls did. They reasoned that the patient was therefore consciously aware of the video, despite being behaviorally unresponsive:
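The synchronization analysis at the heart of this kind of study is essentially an inter-subject correlation of fMRI time courses. Here is a minimal sketch of the idea using invented data (the actual study used more sophisticated statistics; the signal-to-noise numbers below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fronto-parietal activity time courses (one value per fMRI
# volume) for five healthy viewers of the same movie.
healthy = rng.standard_normal((5, 200))   # 5 subjects x 200 time points
group_mean = healthy.mean(axis=0)         # canonical "healthy" response

# A patient whose activity tracks the group response, plus measurement noise.
patient = group_mean + 0.3 * rng.standard_normal(200)

# Pearson correlation between the patient's and the group's time courses:
# a high value is what licenses the claim that the patient's brain
# "tracked" the movie the way healthy brains did.
r = np.corrcoef(patient, group_mean)[0, 1]
print(f"patient-to-group correlation: r = {r:.2f}")
```

Note that even a strong correlation here only shows that the same regions responded at the same times; whether that warrants the further inference to conscious experience is exactly what is at issue below.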
The patient’s brain activity in frontal and parietal regions was tightly synchronized with the healthy participants’ over time, and, crucially, it reflected the executive demands of specific events in the movie, as measured both qualitatively and quantitatively in healthy individuals. This suggested that the patient had a conscious cognitive experience highly similar to that of each and every healthy participant, while watching the same movie.
But what’s the connection between executive functioning and conscious experience? The authors write:
The “executive” function of the brain refers to those processes that coordinate and schedule a host of other more basic cognitive operations, such as monitoring and analyzing information from the environment and integrating it with internally generated goals, as well as planning and adapting new behavioral schemas to take account of this information. As such, executive function is integral to our conscious experience of the world as prior knowledge is integrated into the current “state of play” to make predictions about likely future events.
Does this mean that executive functioning is always conscious? Is the unconscious brain incapable of “monitoring and analyzing information from the environment” and “integrating” that information with goals? Color me skeptical but I believe in the power of the unconscious mind to perform these functions without the input of conscious awareness.
Several examples come to mind. In the “long-distance truck driver” phenomenon, people can drive automobiles for minutes if not hours without the input of conscious awareness. Surely driving requires “monitoring and analyzing information from the environment”, in addition to integrating that information with goals and adapting new behaviors to deal with novel road conditions.
Another example is automatic writing, where people can write whole intelligent paragraphs without the input of conscious attention, and the “voice” of the writing is distinct from the person’s normal personality, sometimes channeling the personalities of deceased persons or famous literary figures. People could hold conversations with their automatic writing, indicating that the unconscious writer was responding to the environment and surely “monitoring and analyzing information”. I’m not aware of any brain imaging studies of automatic writing, but I would not be surprised if frontal and parietal regions were active, given the complexity of handwriting as a cognitive task. The same goes for long-distance truck driving.
My point is simply to raise the question: Can executive function happen unconsciously? Naci et al. say that executive function is “integral” to conscious experience. That might be true. But is conscious experience integral to executive functioning? Maybe not. There is a litany of complex behaviors that can be performed unconsciously, all of which likely recruit frontal and parietal networks of the brain. We can’t simply assume that just because information integration occurred that conscious awareness was involved. To make that inference would require us to think that the unconscious mind is “dumb” and incapable of integrating information. But there is plenty of reason to think that what Timothy Wilson calls the “adaptive unconscious” is highly intelligent and capable of many “higher-order” cognitive functions including monitoring, integrating, planning, reasoning, etc.
Note: This is the introduction to the draft of my dissertation prospectus.
Doctors diagnosing the vegetative state have always found themselves embroiled in scientific and ethical controversy. Over the last several decades, the diagnosis of the vegetative state has stirred the public imagination and the writings of bioethicists in a way that few other diagnoses have. Take the example of Terri Schiavo, who suffered a heart attack in 1990 and subsequently lapsed into a coma from lack of oxygen to her brain. After months of no recovery she was formally diagnosed with the vegetative state, a condition doctors describe as a state of “wakeful unawareness”. In this state Schiavo opened her eyes and appeared to be awake but showed no clear-cut intelligent, contingent behavior in response to any stimulation or human interaction. Contingent behaviors are behaviors that occur as appropriate responses to the behavior of other people or objects; e.g., if someone sticks out their hand, the appropriate behavior (in some contexts) is to shake it. Though Schiavo didn’t show any contingent behavior, she did show reflexive behaviors such as laughing or crying or randomly moving her eyes or limbs. After years of no recovery from VS, her husband Michael asked the state for permission to remove her artificial feeding and hydration.
However, when videos of Terri’s wakeful behaviors were released to the public, they provoked widespread outrage in response to what many people considered to be the immoral murder of a living human being. Toward the end of her life in the 2000s, the Schiavo family was convinced that she was in fact in a state called the “minimally conscious state” (MCS), because they thought she showed intermittent signs of conscious awareness, such as laughing appropriately when a joke was told or responding to a family member with an appropriate emotional display. Because the operational standards for diagnosing MCS allow for the possibility of showing signs of conscious awareness only intermittently, there is a genuine epistemic question of whether Schiavo was diagnosed properly, though most experts retrospectively believe she could not have been in a MCS based on her autopsy reports, which revealed extensive cortical lesioning. But the public was rarely if ever aware of these nuances distinguishing VS and MCS, and instead took her wakeful behavior and physical health to be a clear sign that it would be wrong to kill Schiavo by removing her artificial life support.
The Schiavo case rests at the intersection of epistemology, medical diagnosis, ethics, the law, and the norms of society at large. The goal of this dissertation will be to argue systematically that in diagnosing the vegetative state and other disorders of consciousness (DOC), these normative issues are essentially intertwined. In other words, the epistemic certainty attached to any diagnosis of the vegetative state cannot occur outside the broader context of ethics, law, and society. I call this the Thesis of Diagnostic Interaction. The thesis says that diagnosing disorders of consciousness is not a purely objective affair in the way that determining the number of protons in a gold atom is for physicists. In other words, a diagnostic label such as “the vegetative state” is not a natural kind, because it does not cut nature at its joints in the way the kind GOLD does. The upshot of my thesis is that the question of whether Schiavo was truly in a vegetative state cannot be answered by merely examining her brain or behavior in isolation from the cultural time and place in which she was diagnosed. We must look at the broader culture of diagnostic practice, which is itself essentially shaped by complex ethical and legal norms and steeped in the social milieu of the day.
Instead of being understood as a natural kind like GOLD, INFLUENZA, or H2O, the vegetative state can be better understood as what Ian Hacking calls an interactive kind. An interactive kind is a classificatory scheme that influences the very thing being classified, through what Hacking calls “looping effects”. Hacking’s examples of interactive kinds include childhood, “transient” mental illnesses such as 19th-century hysteria, child abuse, feeblemindedness, anorexia, criminality, and homosexuality. Interactive classifications change how the people classified behave, because either they are directly aware of the classification or the classification functions in a broader socio-cultural matrix whereby individuals and institutions use the classification to influence the individuals being classified. For Hacking, interactive kinds are
“especially concerned with classifications that, when known by people or by those around them, and put to work in institutions, change the ways in which individuals experience themselves–and may even lead people to evolve their feelings and behavior in part because they are so classified.” (The Social Construction of What?, p. 104)
Hacking’s proposal that some kinds of people are interactive kinds boils down to two features. First, scientific classifications of people can literally bring into being a new kind of person that did not exist before. Call this the “new people” effect. Second, such classifications are prone to “looping effects”: the classification interacts with people when they know about the classification, or when the classification functions in larger institutional settings which then influence the individuals being classified. For example, consider the diagnosis of “dissociative identity disorder” (DID), otherwise known as “multiple personality disorder”. According to Hacking, DID did not come to fruition until scientists and psychiatrists began to look for it, i.e., until it became an accepted diagnostic category among a group of therapists and institutions. Moreover, once the classification of DID was popularized in novels and movies, the rates of diagnosis increased dramatically, suggesting that the disease had a socio-cultural origin, not a purely biological origin like the Ebola virus, which is an example of what Hacking calls an “indifferent kind” because the virus does not know about human classification schemes. DID is an example of a looping kind because the spreading awareness of the diagnostic classification led people to conform to the diagnostic criteria.
Making Up Diagnostic Labels
I contend that the vegetative state can also be considered an interactive kind, in a similar way to how Hacking claims mental illnesses are. There are several interrelated reasons why this is the case.
- Clinical diagnosis of DOC is essentially a process or an activity carried out by finite human beings. Diagnosis does not happen at discrete time points but is an unfolding activity of humans making fallible judgments that have an ineliminable human element of subjectivity.
- The classification of DOC is under continual revision and varies from time to time, place to place, doctor to doctor, and institution to institution. A diagnosis of the vegetative state made in 2014 simply would not have made sense in 1990 because the classificatory schemes were different, giving rise to new kinds of patients with DOC. Some doctors are more skilled at making a diagnosis than others, and different institutions utilize different classificatory procedures that are mutually exclusive yet equally justified given the pragmatic constraints of neurological diagnosis.
- The diagnosis of DOC is prone to “looping effects” due to the emergence of new technologies, which affect diagnostic practice, which in turn shapes the development of newer technologies. Decisions to utilize different technologies will affect the diagnostic outcome of whether someone is in a vegetative state or not. For example, bedside behavioral methods, resting-state PET, and active-probe fMRI methods can give different diagnostic outcomes.
- The diagnosis of DOC is prone to the “new people” effect because new diagnostic categories literally create new kinds of people that did not exist prior to the creation of the diagnostic category. And since the process of diagnosis is an on-going activity, clinical neurology is continually in the process of making up new kinds of people that did not exist before. Moreover, the individuals classified are susceptible to looping effects because once classified they are changed by the classification.
- The creation of diagnostic categories of DOC cannot be disentangled from broader issues in ethics, the law, and society. Consciousness plays a central role in many moral theories because of its central role in defining the interests of animals and people. We do not consider entities without the capacity for consciousness to have any interests, and therefore they do not deserve our moral consideration. Thus, facts about consciousness determine our ethical obligations in the clinic. A person diagnosed with the vegetative state by definition lacks consciousness. But the criteria for this diagnosis are continually changing in ways that do not reflect pure advances in scientific understanding.
The “standard approach” in clinical neurology has been accused of suffering from an implicit “behaviorist epistemology”, because disorders of consciousness are typically diagnosed on the basis of an absence of behavior. All the gold-standard diagnostic assessment programs, such as the JFK Coma Recovery Scale, are behavioral in nature insofar as they are expressly looking for behavior or its absence, either motor or verbal. If a behavior occurs appropriately in response to the command or stimulus, the patient accrues points toward “normal” consciousness. If no behavior is observable in response to the cue, no points are given and the patient is said to have a “disorder of consciousness”.
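The point-based logic of such behavioral scales can be caricatured in a few lines of code. To be clear, the item names, point values, and thresholds below are invented for illustration; the real JFK Coma Recovery Scale (Revised) has 23 items across 6 subscales with its own scoring rules:

```python
# A deliberately simplified sketch of a behavioral assessment scale.
# Items, points, and cutoffs are hypothetical, not the actual CRS-R.

ITEMS = {
    "visual_pursuit": 2,        # eyes track a moving object
    "command_following": 3,     # e.g. squeezes hand when asked
    "intelligible_speech": 4,
    "functional_object_use": 4,
}

def assess(observed_behaviors):
    """Score a patient: points accrue only for behaviors an examiner observed."""
    score = sum(pts for item, pts in ITEMS.items() if item in observed_behaviors)
    if score == 0:
        return score, "vegetative state (no behavioral evidence)"
    elif score < 8:
        return score, "minimally conscious state"
    return score, "emerged from MCS"

# The epistemic worry made vivid: a fully conscious but completely paralyzed
# (locked-in) patient produces no observable behavior, so this procedure
# scores them exactly like a vegetative patient.
print(assess([]))
print(assess(["visual_pursuit", "command_following"]))
```

The sketch makes the behaviorist assumption explicit: absence of scoreable behavior is treated as absence of consciousness, which is precisely the inference challenged below.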
The problem with this approach is both conceptual and empirical. Conceptually, there is no necessary link between behavior and consciousness because unless you are Gilbert Ryle or Wittgenstein you don’t want to define consciousness in terms of behavior. That is, we don’t want to define “pain” as simply the behavior of your limbs whenever your cells are damaged, or the disposition to say “ouch”. The reason we don’t want to do this is because pain is supposed to be a feeling, painfulness, not a behavior.
Empirically, we know of many cases where behavior and consciousness can be decoupled, such as the total locked-in state, in which someone’s mind is more-or-less normal but they are completely paralyzed, looking for all intents and purposes like someone in a deep coma or vegetative state yet retaining normal brain function. From the outside they would fail these behavioral assessment techniques, yet from the inside they have full consciousness. Furthermore, we know that in some cases of general anesthesia there can be a complete lack of motor response to stimulation while the person maintains conscious awareness.
Another problem with the behaviorist epistemology of clinical diagnosis is that the standard assessment scales require a certain level of human expertise in making the diagnostic judgment. Although most scales have high inter-rater reliability, it nevertheless ultimately comes down to a fallible human making a judgment about someone’s consciousness on the basis of subtle differences between “random” and “meaningful” behavior. A random behavior is just that: a random, reflexive movement that signifies no higher purpose or goal. But if I ask someone to squeeze my hand and they squeeze it, this is a meaningful sign, because it suggests that they can listen to language and translate a verbal command into a willed response. But what if the verbal command to squeeze just triggers an unconscious response to squeeze? Sure, it’s possible. No one should rule it out. But what if they do it five times in a row? Or what if I say “don’t squeeze my hand” and they don’t squeeze it? Now we are getting into what clinicians call “unambiguous signs of consciousness”, because the behavior is expressive of a meaningful purpose and shows what they call “contingency”, which is just another way of saying “appropriateness”.
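The intuition behind repeating the command trials can be made quantitative. Suppose, purely for illustration, that a patient produces random squeeze-like movements in 20% of observation windows (this base rate is an invented number, not a clinical figure). Then the chance of “complying” five times in a row by accident is tiny:

```python
# Hypothetical base rate of spontaneous, reflexive squeezing per trial window.
p_random_squeeze = 0.20

# Probability that random movement happens to coincide with the command
# on five consecutive trials.
p_five_in_a_row = p_random_squeeze ** 5
print(f"P(5 accidental 'responses' in a row) = {p_five_in_a_row:.5f}")  # 0.00032

# A "don't squeeze" trial adds evidence of a different kind: a random mover
# passes it only by staying still by chance, whereas a comprehending patient
# passes it by understanding the negation.
p_pass_negative = 1 - p_random_squeeze
print(f"P(random mover passes the 'don't squeeze' trial) = {p_pass_negative:.2f}")
```

This is why clinicians treat repeated, contingent responses as “unambiguous”: the chance hypothesis becomes implausible quickly. But note that the calculation rules out randomness, not unconscious responding, which is the distinction at issue in the next paragraph.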
But what does it mean for a behavior to really be meaningful? Just that there is a goal-structure behind it? Or that it is willed? Again, we don’t want to define “meaning” or “appropriateness” in terms of outward behavior because when you are sleepwalking your behavior is goal-structured yet you are not conscious. Or consider the case of automatic writing. In automatic writing one of your hands is capable of having a written conversation and writing meaningful linguistic statements without “you” being in control at all. So clearly there is a possible dissociation between “meaningful” behavior and consciousness. All we can say is that for normal people in normal circumstances meaningful behavior is a good indicator of normal consciousness. But notice how vacuous that statement is. It tells us nothing about the hard cases.
So, in a nutshell, the diagnosis of disorders of consciousness has an inescapable element of human subjectivity in it. This is precisely why researchers are trying to move to brain-based diagnostic tools such as fMRI or EEG, which are supposed to be more “objective” because they skip right over the question of meaningful behavior and look at the “source” of the behavior: the brain itself. But I want to argue that such measures can never bypass the subjectivity of diagnosis without going full behaviorist. The reason brain-based measures of disorders of consciousness are behaviorist is simply that you are looking at the behavior of neurons. You can’t see the “feelings” of neurons from a brain scanner any more than you can see the “feeling” of pain from watching someone’s limb move. Looking at the brain does not grant you special powers to see consciousness more directly. It is still an indirect measure of consciousness, and it will always require the human judgment of the clinician to say “OK, this brain activity is going to count as a measure toward ‘normal’ consciousness”. It might be slightly more objective, but it will never be any less subjective unless you want to define normal consciousness in terms of neural behavior. But how is that any different from standard behaviorism? The only difference is that we are relying on the assumption that neural behavior is the substrate of consciousness. This might be true from a metaphysical perspective. But it’s no help in the epistemology of diagnosis, because as an outside observer you don’t see the consciousness. You just see the squishy brain or some representation on a computer screen. I believe there is a circularity here that cannot be escaped, but I won’t go into it here (I talk about it in this post).
In their book A Universe of Consciousness, Edelman and Tononi use the example of a photodiode discriminating light to illustrate the problem of consciousness:
Consider a simple physical device, such as a photodiode, that can differentiate between light and dark and provide an audible output. Let us then consider a conscious human being performing the same task and then giving a verbal report. The problem of consciousness can now be posed in elementary terms: Why should the simple differentiation between light and dark performed by the human being be associated with and, indeed, require conscious experience, while that performed by the photodiode presumably does not? (p. 17)
Does discrimination of a stimulus from a background “require conscious experience”? I don’t see why it would. This seems like something the unconscious mind could do all on its own and indeed is doing all the time. But it comes down to how we are defining “consciousness”. If we are talking about consciousness as subjective experience, the question is: does discrimination require there be something-it-is-like for the brain to perform that discrimination? Perhaps. But I also don’t know how to answer that question empirically given the subjective nature of experience and the sheer difficulty of building an objective consciousness-meter.
On the other hand, suppose by “consciousness” we mean something like System II-style cognition, i.e., slow, deliberate, conscious, introspective thinking. On this view, consciousness is but the tip of the iceberg when it comes to cognitive processing, so it would be absurd to say that the 99%-unconscious mind is incapable of discrimination. This is the lesson of Oswald Külpe and the Würzburg school of imageless thought. They asked trained introspectors from the Wundtian tradition to discriminate between two weights with their hands, to see if one was heavier than the other. The subjects were then asked to introspect and see if they were aware of the process of discrimination. To their surprise, there were no conscious images associated with the weight-discrimination. They simply held the weights in their hands, consciously formed an intention to discriminate, the discrimination happened unconsciously, and then they became aware of the results of the unconscious judgment. Hence Külpe and the Würzburg school discovered a whole class of “imageless thought”, i.e., thought that happens beneath the level of conscious awareness.
Of course, the Würzburg school wasn’t talking about consciousness in terms of subjective qualia. They were talking about consciousness in terms of what’s introspectable. If you can’t introspect a thought process in your mind then it’s unconscious. On this view and in conjunction with evolutionary models of introspection it seems clear that a great deal of discriminations are happening beneath the surface of conscious awareness. This is how I prefer to talk about consciousness: in terms of System II-style introspection where consciousness is but the tip of a great cognitive iceberg. On Edelman and Tononi’s view, consciousness occurs anytime there is information integration. On my view information integration can occur unconsciously and indeed most if not all non-human animal life is unconscious. Human style slow deliberate introspective conscious reflection is rare in the animal kingdom even if during the normal human waking life it is constantly running, overlapping and integrating with the iceberg of unconsciousness so as to give an illusion of cognitive unity. It seems as if consciousness is everywhere all the time and that there is very little unconscious activity. But as Julian Jaynes once said, we cannot be conscious of what we are not conscious of, and so consciousness seems pervasive in our mental life when in fact it is not.
I started reading Edelman and Tononi’s book A Universe of Consciousness and I wanted to offer some skeptical comments. I’m generally skeptical about any theorizing of consciousness these days, not because I’m against theorizing in science but because I have been leaning more Mysterian in my epistemology towards “consciousness”, where “consciousness” refers to subjective experience. I think any fundamental theory of consciousness is doomed to fail because it will run into vicious circularity as I will explain below. Take this seemingly innocuous statement offered at the beginning of chapter 1:
Everyone knows what consciousness is: It is what abandons you every evening when you fall asleep and reappears the next morning when you wake up.
Already E&T are helping themselves to some heavy-duty, theoretically loaded assumptions. E&T are talking about consciousness as subjectivity, so why assume subjectivity goes away completely during dreamless sleep? How do we know there isn’t something-it-is-like to be asleep and we just don’t remember what-it’s-like? If subjectivity is at 100% during wakefulness, why not think it goes down to 1% or 0.05% while sleeping instead of 0%? Perhaps what-it-is-like for humans to be asleep is analogous in subjective intensity to what-it-is-like to be a bee or a lizard when awake.
By helping themselves to the assumption that consciousness goes away completely during sleep, E&T allow themselves a “starting point” or “fixed point” from which to begin their theorizing. It becomes their rock-solid assumption against which they can begin doing experimental work. But from a fundamental point of view, it is an unargued-for assumption. Where’s the evidence for it? Introspective evidence is not enough, because introspection is turned off during sleep. And empirical evidence? How are you going to measure it? With a consciousness-meter? Well, how are you going to validate that it’s calibrated properly? Say you build one, point it at a sleeping brain, and it registers “0”. How do you know the measurement is correct? What’s the calibration method?
They also assume that consciousness is a “relatively recent development”, evolutionarily speaking. If we were talking about self-consciousness this would make sense, but they are not. They are talking about subjectivity, the having of a “point of view”. But why not think a bee has a point of view on the world? Or why assume you need a brain or nervous system at all? For all we know there is something-it-is-like to be an amoeba. E&T want this to be another “fixed point”, because assuming that subjectivity requires a brain or nervous system gives you a starting place scientifically. It tells you where to look. But again, it’s never argued for, simply assumed. And it’s not logically incoherent to think a creature without a nervous system has a dim phenomenology.
Suppose you assume that only brained creatures have consciousness and you devise a theory accordingly. Having made your theory, you devise a series of experimental techniques and measurements and then apply them to brained creatures. You “confirm” that, yes indeed, brained creatures are conscious all right. What happens when you apply the same technique to a non-brained creature like an amoeba, testing for whether the amoeba has consciousness? Surprise, surprise: your technique fails to register any consciousness in the amoeba. But there is a blatant epistemic circularity here, because you designed your measurement technique according to certain theoretical assumptions, starting with the “fixed point” that consciousness requires a nervous system. But why make that assumption? Why not assume instead that subjectivity starts with life itself and is progressively modified as nervous systems are introduced? Moreover, they assume that
Conscious experience is integrated (conscious states cannot be subdivided into independent components) and, at the same time, is highly differentiated (one can experience billions of different conscious states).
Why can’t conscious states be subdivided? Why assume that? What does that even mean? Divided from what into what? Take the sleeping-at-0.01%-consciousness example. Why not think wakeful “unified” consciousness at 100% is the result of a thousand tiny microconsciousnesses “singing” side by side, such that the total choir of microconsciousnesses gives rise to the illusion of a single large singer? When E&T say “one” can experience billions of states, who is this “one”? Why one, and not many? Their assumption of conscious unity is another “fixed point”, but it’s just an assumption. Granted, it’s an assumption that stems from introspective experience, but why trust introspection here? Introspection also says consciousness completely goes away during sleep, but as we’ve seen it might be wrong about that.