Category Archives: Philosophy of science

The Vegetative State as an Interactive Kind

Note: This is the introduction to the draft of my dissertation prospectus.

Doctors diagnosing the vegetative state have always found themselves embroiled in scientific and ethical controversy. Over the last several decades, the diagnosis of the vegetative state has stirred the public imagination and the writings of bioethicists in a way that few other diagnoses have. Take the example of Terri Schiavo, who suffered cardiac arrest in 1990 and subsequently lapsed into a coma from lack of oxygen to her brain. After months of no recovery she was formally diagnosed with the vegetative state (VS), a condition doctors describe as a state of “wakeful unawareness”. In this state Schiavo opened her eyes and appeared to be awake but showed no clear-cut intelligent, contingent behavior in response to any stimulation or human interaction. Contingent behaviors are behaviors that occur as appropriate responses to the behavior of other people or objects, e.g., if someone sticks out their hand, the appropriate behavior (in some contexts) is to shake it. Though Schiavo showed no contingent behavior, she did show reflexive behaviors such as laughing, crying, and random movements of her eyes and limbs. After years of no recovery from VS, her husband Michael asked the state for permission to remove her artificial feeding and hydration.

However, when videos of Terri’s wakeful behaviors were released to the public, they provoked widespread outrage among people who considered the removal of life support to be the immoral killing of a living human being. Towards the end of her life in the 2000s, her family was convinced that she was in fact in a state called the “minimally conscious state” (MCS), because they thought she showed intermittent signs of conscious awareness, such as laughing appropriately when a joke was told or responding to a family member with an appropriate emotional display. Because the operational standards for diagnosing MCS allow for the possibility that signs of conscious awareness appear only intermittently, there is a genuine epistemic question of whether Schiavo was diagnosed properly, though most experts retrospectively believe she could not have been in an MCS given her autopsy reports, which revealed extensive cortical lesioning. The public, however, was rarely if ever aware of these nuances distinguishing VS from MCS; instead, it took her wakeful behavior and physical health to be a clear sign that it would be wrong to kill Schiavo by removing her artificial life support.

The Schiavo case rests at the intersection of epistemology, medical diagnosis, ethics, the law, and the norms of society at large. The goal of this dissertation will be to systematically argue that in diagnosing the vegetative state and other disorders of consciousness (DOC) these normative issues are essentially intertwined. In other words, the epistemic certainty attached to any diagnosis of the vegetative state cannot be secured outside the broader context of ethics, law, and society. I call this the Thesis of Diagnostic Interaction. The thesis says that diagnosing disorders of consciousness is not a purely objective affair in the way that determining the number of protons in a gold atom is for physicists. In other words, a diagnostic label such as “the vegetative state” is not a natural kind because it does not cut nature at its joints in the way the kind GOLD does. The upshot of my thesis is that the question of whether Schiavo was truly in a vegetative state cannot be answered by merely examining her brain or behavior in isolation from the cultural time and place in which she was diagnosed. We must look at the broader culture of diagnostic practice, which is itself shaped by complex ethical and legal norms and steeped in the social milieu of the day.

Interactive Kinds

Instead of being understood as a natural kind like GOLD, INFLUENZA, or H2O, the vegetative state is better understood as what Ian Hacking calls an interactive kind. An interactive kind is a classification that influences the very people or things it classifies, through what Hacking calls “looping effects”. Hacking’s examples of interactive kinds include childhood, “transient” mental illnesses such as 19th-century hysteria, child abuse, feeblemindedness, anorexia, criminality, and homosexuality. Interactive classifications change how the people classified behave, either because they are directly aware of the classification or because the classification functions in a broader socio-cultural matrix whereby individuals and institutions use the classification to influence the individuals being classified. For Hacking, interactive kinds are

“especially concerned with classifications that, when known by people or by those around them, and put to work in institutions, change the ways in which individuals experience themselves–and may even lead people to evolve their feelings and behavior in part because they are so classified.” (The Social Construction of What?, p. 104)

Hacking’s proposal that some kinds of people are interactive kinds boils down to two features. First, scientific classifications of people can literally bring into being a new kind of person that did not exist before. Call this the “new people” effect. Second, such classifications are prone to “looping effects”: the classification interacts with people when they know about it, or when it functions in larger institutional settings that then influence the individuals being classified. For example, consider the diagnosis of “dissociative identity disorder” (DID), otherwise known as “multiple personality disorder”. According to Hacking, DID did not come to fruition until scientists and psychiatrists began to look for it, i.e., until it was accepted as a diagnostic category by a group of therapists and institutions. Moreover, once the classification of DID was popularized in novels and movies, the rates of diagnosis increased dramatically, suggesting that the disorder had a socio-cultural origin, not a purely biological origin like that of the Ebola virus, which is an example of what Hacking calls an “indifferent kind” because the virus does not know about human classification schemes. DID is an example of a looping kind because spreading awareness of the diagnostic classification led people to conform to the diagnostic criteria.

Making Up Diagnostic Labels

I contend that the vegetative state can also be considered an interactive kind, in much the way that Hacking claims mental illness is. There are several interrelated reasons why this is the case.

  1. Clinical diagnosis of DOC is essentially a process or activity carried out by finite human beings. Diagnosis does not happen at discrete time points but is an unfolding activity of fallible human judgment with an ineliminable element of subjectivity.
  2. The classification of DOC is under continual revision and varies from time to time and place to place, from doctor to doctor, and from institution to institution. A diagnosis of the vegetative state made in 2014 simply would not have made sense in 1990 because the classificatory schemes were different, giving rise to new kinds of patients with DOC. Some doctors are more skilled at making a diagnosis than others, and different institutions use different classificatory procedures that are mutually exclusive yet equally justified given the pragmatic constraints of neurological diagnosis.
  3. The diagnosis of DOC is prone to “looping effects” due to the emergence of new technologies, which affect diagnostic practice, which in turn shapes the development of newer technologies. Decisions to use different technologies will affect the diagnostic outcome of whether someone is in a vegetative state or not. For example, behavioral bedside methods, resting-state PET, and active probe fMRI methods can each yield different diagnostic outcomes for the same patient.
  4. The diagnosis of DOC is prone to the “new people” effect because new diagnostic categories literally create new kinds of people that did not exist prior to the creation of the diagnostic category. And since the process of diagnosis is an ongoing activity, clinical neurology is continually in the business of making up new kinds of people. Moreover, the individuals classified are susceptible to looping effects because once classified they are changed by the classification.
  5. The creation of diagnostic categories of DOC cannot be disentangled from broader issues in ethics, the law, and society. Consciousness plays a central role in many moral theories because of its role in defining the interests of animals and people. We do not consider entities without the capacity for consciousness to have any interests, and therefore they do not deserve our moral consideration. Thus, facts about consciousness determine our ethical obligations in the clinic. A person diagnosed with the vegetative state by definition lacks consciousness. But the criteria for this diagnosis are continually changing, in ways that do not merely reflect advances in scientific understanding.

 


Filed under Consciousness, Neuroethics, Philosophy of science

Can the Clinical Diagnosis of Disorders of Consciousness Avoid Behaviorism?


The “standard approach” in clinical neurology has been accused of harboring an implicit “behaviorist epistemology” because disorders of consciousness are typically diagnosed on the basis of a lack of behavior. All the gold-standard diagnostic assessment programs, such as the JFK Coma Recovery Scale, are behavioral in nature insofar as they expressly look for behavior or the lack of behavior, whether motor or verbal. If a behavior occurs appropriately in response to a command or stimulus, the patient earns points that accumulate towards “normal” consciousness. If no behavior is observable in response to the cue, the patient earns no points and is said to have a “disorder of consciousness”.
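
To make the structure of such scales vivid, here is a minimal sketch in Python (the items and point values are invented for illustration; the actual JFK Coma Recovery Scale uses different subscales and scoring rules):

```python
# Toy illustration of a behavioral assessment scale. The items and point
# values here are invented; the real JFK Coma Recovery Scale uses
# different subscales and scoring rules. The structural point stands:
# observed behavior earns points, absent behavior earns none, and the
# total drives the diagnostic label.

ASSESSMENT_ITEMS = {
    "follows verbal command": 4,
    "visual pursuit of a moving object": 2,
    "localizes to sound": 2,
    "reflexive withdrawal from pain": 1,
}

def score_patient(observed_behaviors):
    """Sum the points for each behavior actually observed at bedside."""
    return sum(points for behavior, points in ASSESSMENT_ITEMS.items()
               if behavior in observed_behaviors)

# A patient showing only reflexive movement scores near the bottom and
# would be labeled with a "disorder of consciousness" on this toy scale:
print(score_patient({"reflexive withdrawal from pain"}))  # 1
print(score_patient(set(ASSESSMENT_ITEMS)))               # 9
```

Notice that nothing in the scoring function sees consciousness itself; it only tallies observable behavior, which is exactly the critic’s point.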

The problem with this approach is both conceptual and empirical. Conceptually, there is no necessary link between behavior and consciousness: unless you are Gilbert Ryle or Wittgenstein, you don’t want to define consciousness in terms of behavior. That is, we don’t want to define “pain” as simply the movement of your limbs whenever your cells are damaged, or the disposition to say “ouch”. The reason is that pain is supposed to be a feeling, painfulness, not a behavior.

Empirically, we know of many cases where behavior and consciousness come apart, such as the total locked-in state, in which someone’s mind is more or less normal but their body is completely paralyzed; such a patient looks, for all intents and purposes, like someone in a deep coma or vegetative state yet retains normal brain function. From the outside they would fail these behavioral assessments, yet from the inside they have full consciousness. Furthermore, we know that in some cases of general anesthesia there can be a complete lack of motor response to stimulation while the person maintains conscious awareness.

Another problem with the behaviorist epistemology of clinical diagnosis is that the standard assessment scales require a certain level of human expertise in making the diagnostic judgment. Although most scales have high inter-rater reliability, it nevertheless ultimately comes down to a fallible human making a judgment about someone’s consciousness on the basis of subtle differences between “random” and “meaningful” behavior. A random behavior is just that: a random, reflexive movement that signifies no higher purpose or goal. But if I ask someone to squeeze my hand and they squeeze it, this is a meaningful sign because it suggests that they can listen to language and translate a verbal command into a willed response. But what if the verbal command to squeeze just triggers an unconscious response to squeeze? Sure, it’s possible. No one should rule it out. But what if they do it 5 times in a row? Or what if I say “don’t squeeze my hand” and they don’t squeeze it? Now we are getting into what clinicians call “unambiguous signs of consciousness”, because the behavior expresses a meaningful purpose and shows what they call “contingency”, which is just another way of saying “appropriateness”.
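
A back-of-the-envelope calculation shows why repetition and negation matter here (the probability value is purely illustrative):

```latex
% Let p be the probability that a reflexive squeeze happens to occur
% on any given trial, independently of the command (illustrative value).
P(\text{five appropriate responses in a row by chance}) = p^5,
\qquad p = 0.5 \;\Rightarrow\; p^5 = \tfrac{1}{32} \approx 0.03.
```

And a correct response to “don’t squeeze my hand” is stronger evidence still, since an unconscious trigger keyed to the word “squeeze” would produce the wrong behavior.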

But what does it mean for a behavior to really be meaningful? Just that there is a goal-structure behind it? Or that it is willed? Again, we don’t want to define “meaning” or “appropriateness” in terms of outward behavior because when you are sleepwalking your behavior is goal-structured yet you are not conscious. Or consider the case of automatic writing. In automatic writing one of your hands is capable of having a written conversation and writing meaningful linguistic statements without “you” being in control at all. So clearly there is a possible dissociation between “meaningful” behavior and consciousness. All we can say is that for normal people in normal circumstances meaningful behavior is a good indicator of normal consciousness. But notice how vacuous that statement is. It tells us nothing about the hard cases. 

So, in a nutshell, the diagnosis of disorders of consciousness has an inescapable element of human subjectivity in it. Which is precisely why researchers are trying to move to brain-based diagnostic tools such as fMRI or EEG, which are supposed to be more “objective” because they skip right over the question of meaningful behavior and look at the “source” of the behavior: the brain itself. But I want to argue that such measures can never bypass the subjectivity of diagnosis without going full behaviorist. The reason brain-based measures of disorders of consciousness are behaviorist is simply that you are looking at the behavior of neurons. You can’t see the “feelings” of neurons from a brain scanner any more than you can see the “feeling” of pain from watching someone’s limb move. Looking at the brain does not grant you special powers to see consciousness more directly. It is still an indirect measure of consciousness, and it will always require the human judgment of the clinician to say, “Ok, this brain activity is going to count towards ‘normal’ consciousness”. It might be slightly more objective, but it will never be any less subjective unless you want to define normal consciousness in terms of neural behavior. And how is that any different from standard behaviorism? The only difference is that we are relying on the assumption that neural behavior is the substrate of consciousness. This might be true from a metaphysical perspective. But it’s no help in the epistemology of diagnosis because as an outside observer you don’t see the consciousness. You just see the squishy brain or some representation on a computer screen. I believe there is a circularity here that cannot be escaped, but I won’t go into it here (I talk about it in this post).
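
To see where the clinician’s judgment hides, consider a minimal sketch of what a brain-based criterion looks like once written down (the function, threshold, and values are all hypothetical, not any published pipeline):

```python
# Hypothetical sketch of a "brain-based" diagnostic criterion. Nothing
# here is a published pipeline; the function name, threshold, and values
# are invented. The threshold is an analyst's convention fixed in
# advance -- the scan itself never tells you where to draw the line.

def counts_as_command_following(signal_change, threshold=0.5):
    """Decide whether an fMRI signal change 'counts as' willed activity.

    signal_change: percent BOLD change in a region of interest during a
        mental imagery task versus rest (illustrative units and values).
    threshold: the cutoff a human chose when calibrating the method.
    """
    return signal_change >= threshold

# The very same patient "passes" or "fails" depending on the cutoff:
print(counts_as_command_following(0.6))                 # True
print(counts_as_command_following(0.6, threshold=0.7))  # False
```

The measurement is of neural behavior, and the decision about what that behavior means is still made by a person.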


Filed under Consciousness, Philosophy of science, Psychology, Uncategorized

Draft of Latest Paper – Awake But Not Aware: Probing For Consciousness in Unresponsive Patients


Ok everyone, here’s a paper I’m really excited about. The topic is so “me” — the first project I’ve wholeheartedly thrown myself into since I came to Wash U. I can see myself wanting to write a dissertation or book on the topic, so this paper will likely serve as the basis for a prospectus in the near future. The issue I’m dealing with in the paper is situated at the intersection of a variety of fields, ranging from philosophy of mind and philosophy of science to cutting-edge neuroscience, clinical neurology, and biomedical ethics. I could conceivably “sell” the project to a variety of people. The project is obviously at an early stage of development and the paper is drafty, but I have the rest of the semester to work on it, so I’m open to any comments, criticisms, or questions. Thanks!

For PDF of paper, click here –> Williams-AwakeButNotAware-Draft-3-03-14

Here’s a tentative abstract:

The standard approach in clinical neurology is to diagnose disorders of consciousness (DOC) on the basis of operationally defined behaviors. Critics of the standard approach argue that it relies on a flawed behaviorist epistemology that methodologically rules out the possibility of covert consciousness existing independently of any observable behavior or overt report. Furthermore, critics point to developments in neuroimaging that use fMRI to “actively probe” for consciousness in unresponsive patients using mental imagery tasks (Owen et al. 2006). Critics argue these studies showcase the limitations of the standard approach. The goal of this paper is to defend the standard approach against these objections. My defense comes in two parts: negative and positive. Negatively, I argue that these new “active probe” techniques are inconclusive as demonstrations of consciousness. Positively, I reinterpret these active probes in behavioral terms by arguing they are instances of “brain behaviors”, and thus not counterexamples to the standard approach.


Filed under Academia, Consciousness, Philosophy, Philosophy of science, Psychology

Latest Draft of Mental Time Travel Paper

CLICK HERE to read the latest draft of “Measuring Mental Time Travel in Animals”.

I’ve been working on this paper over the semester, responding to comments and generally cleaning it up. I’ve also added a new sub-section that explores an analogy with–believe it or not–whether Pluto is a planet. I also cut down on some repetitiveness towards the end. I will be turning it in as a Qualifying Paper very soon, so any last-minute comments/suggestions/corrections would be greatly appreciated.


Filed under Philosophy, Philosophy of science, Psychology

New paper – Measuring Mental Time Travel in Animals

For pdf click here: Williams – Measuring Mental Time Travel In Animals

Hasok Chang describes in Inventing Temperature how scientists dealt with the problem of measurement verification circularity when standardizing the first thermometers ever constructed. The problem can be illustrated by imagining you are the first scientist who wants to measure the temperature of boiling water. What materials should you use to construct the measuring instrument? Once it is built, how do you verify that your thermometer measures what you claim it does without circularly relying on the thermometer itself? Appealing to more experimentation is unhelpful because we must use a thermometer to carry out these experiments, and thermometers are what we are trying to determine the reliability of in the first place. Chang calls this the Problem of Nomic Measurement (PNM), which is defined as:

The problem of circularity in attempting to justify a measurement method that relies on an empirical law that connects the quantity to be measured with another quantity that is (more) directly observable.1 The verification of the law would require the knowledge of various values of the quantity to be measured, which one cannot reliably obtain without confidence in the method of measurement.

Stated more precisely, the PNM goes as follows:

1. We want to measure unknown quantity X.

2. Quantity X is not directly observable, so we infer it from another quantity Y, which is directly observable.

3. For this inference we need a law that expresses X as a function of Y, as follows: X = f(Y).

4. The form of this function f cannot be discovered or tested empirically because that would involve knowing the values of both Y and X, and X is the unknown variable that we are trying to measure.
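
Instantiated for the thermometer case, the schema reads as follows (a schematic rendering of Chang’s example, not his own formalism):

```latex
% X: temperature of the boiling water (the unknown target quantity)
% Y: height of the mercury column (the directly observable quantity)
X = f(Y), \qquad \text{e.g. the linear hypothesis } f(Y) = aY + b.
```

Testing whether f really is linear would require independently known values of the temperature, which is precisely what the thermometer was supposed to supply in the first place.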

My aim for this paper is to apply the PNM to an on-going debate in comparative psychology about whether and to what extent non-human animals can “mentally time travel” (MTT). In 1997, Suddendorf and Corballis argued “the human ability to travel mentally in time constitutes a discontinuity between ourselves and other animals”.2 In 2002, Roberts argued that non-human animals are “stuck-in-time”. Since then, a number of psychologists have defended similar claims. Endel Tulving states this hypothesis clearly:

There is no evidence that any nonhuman animals—including what we might call higher animals—ever think about what we could call subjective time…they do not seem to have the same kind of ability humans do to travel back in time in their own minds, probably because they do not need to. (Tulving, 2002, p. 2)

Call the claim that mental time travel is unique to humans Uniqueness. Naturally, Uniqueness has not gone unchallenged. One worry is that different theoretical assumptions about what counts as “mental time travel” are leading to disagreements over whether animals do or do not possess MTT. Furthermore, both sides of the debate more or less agree about the behavioral evidence, but disagree about how to interpret the evidence qua evidence for or against Uniqueness. This raises a problem of verification circularity similar to the PNM:

1. We want to measure MTT in animals.

2. MTT is not directly observable, so we infer it from behavior Y, which is directly observable.

3. For this to work, we need to know how to infer MTT from behavior alone.

4. The form of this function cannot be discovered or tested empirically because that would involve knowing the unknown variable we are trying to measure (MTT).

Accordingly, my central thesis is that the question of whether animals can mentally time travel is not a purely empirical question. My argument hinges on premise (3): if psychologists have irreconcilable differences of opinion about which behaviors best express MTT, they will use the construct “mental time travel” to describe distinct phenomena and thus make different inferences from behavior to MTT. For example, if defenders of Uniqueness use MTT as a label for a human autapomorphy3 while critics of Uniqueness use MTT as a label for a core capacity shared with other animals, then they are clearly talking past each other, and the debate reduces to a semantic dispute about whether the term “MTT” applies to “core” capacities or to uniquely human traits.4 Therefore, I argue that the empirical question of whether animals can in fact mentally time travel is intractable unless theorists can agree on both the connotative and denotative definitions of the term, i.e., reach approximate agreement on the conceptual definition as well as on its conditions of realization in the physical, measurable world.

1Chang does not analytically define the notion of “direct observation”, but the paradigm case is observing the read-out of an instrument, e.g., writing down the height of a column of mercury in a glass tube. Chang defends a hybrid of foundationalism and coherentism whereby we begin scientific inquiry with some tentatively held beliefs justified by experience, especially the belief that we are capable of accurately observing the read-outs of our instruments.

2Citing neurological overlaps between “episodic-like” memory in non-human animals and human episodic memory, Corballis has recently dissented (2012). In his 2011 book, Corballis argues that what makes humans unique is our capacity for MTT and symbolic language super-charged by the capacity for recursivity, i.e., Alice believes Bob desires that Chris thinks highly of Bob’s desire for Alice. Another recent convert is Roberts (2007), who has taken back his (2002) claims about MTT in animals.

3An autapomorphy is a derived trait that is unique to a terminal branch of a clade and not shared by any other members of the clade, including their closest relatives with whom they share a common ancestor.

4“We caution against grounding the concept of episodic-like memory in the phenomenology of the modern mind, rather than in terms of core cognitive capacities.” (Clayton et al 2003, p. 437)


Filed under Consciousness, Philosophy, Philosophy of science, Psychology, Science

New paper: Minimal Models Make for Minimal Explanations

Williams – Minimal Models Make for Minimal Explanations

Abstract:

The ontic view of scientific explanation holds that explanations exist objectively in the world. Critics of the ontic view argue that it fails to capture the importance of idealization as a critical component of scientific practice. Specifically, Robert Batterman argues that highly idealized mathematical models in physics are counterexamples to the ontic view, or at least show why the ontic view is incomplete as an account of scientific explanation. My aim in this paper is to defend the ontic view of scientific explanation against Batterman’s objections.

Feedback welcome! This may or may not be turned in as my second qualifying paper at Wash U.


Filed under Philosophy, Philosophy of science

Quote for the Day – Einstein’s Louse and the Limits of Scientific Understanding

Nature is showing us only the tail of the lion, but I have no doubt that the lion belongs to it even though, because of its large size, it cannot totally reveal itself all at once. We can see it only the way a louse that is sitting on it would.

 ~Albert Einstein to Heinrich Zangger, quoted in Clifford Pickover, Archimedes to Hawking


Filed under Books, Philosophy, Philosophy of science, Science