Tag Archives: philosophy of mind

Man in Vegetative State Shows Brain Activity to Movie: What Does It Mean?

In a recent study, Naci et al. investigated how the brain responds to an 8-minute Alfred Hitchcock movie. In healthy subjects, they found that frontal and parietal areas associated with executive functioning were active during the most suspenseful parts of the movie. They then showed the same movie to two patients diagnosed as being in a vegetative state, one of whom had been in a vegetative state for 16 years. In one of the patients they found that “activity in a network of frontal and parietal regions that are known to support executive processing significantly synchronized to that of healthy participants”. In other words, the vegetative man’s brain “tracked” the suspense points of the movie in the same way that healthy controls did. They reasoned that the patient was therefore consciously aware of the video, despite being behaviorally unresponsive:

The patient’s brain activity in frontal and parietal regions was tightly synchronized with the healthy participants’ over time, and, crucially, it reflected the executive demands of specific events in the movie, as measured both qualitatively and quantitatively in healthy individuals. This suggested that the patient had a conscious cognitive experience highly similar to that of each and every healthy participant, while watching the same movie.
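As an aside on method: in studies like this, “synchronized” is typically cashed out as the correlation between the patient’s regional fMRI time course and the average time course of the healthy group, with significance estimated against a null distribution. Here is a minimal sketch of that general idea (an illustration only, not Naci et al.’s actual pipeline; the function names, the circular-shift permutation scheme, and the toy data are all assumptions):

```python
import numpy as np

def intersubject_sync(patient_ts, healthy_group_ts, n_perm=10000, seed=0):
    """Correlate a patient's regional fMRI time course with the mean healthy
    time course; estimate a p-value with a circular-shift permutation test.
    Illustrative sketch only, not the published analysis pipeline."""
    rng = np.random.default_rng(seed)
    group_mean = healthy_group_ts.mean(axis=0)         # average over healthy subjects
    r_obs = np.corrcoef(patient_ts, group_mean)[0, 1]  # observed synchronization
    null = np.empty(n_perm)
    for i in range(n_perm):
        shift = rng.integers(1, len(patient_ts))       # break temporal alignment
        null[i] = np.corrcoef(np.roll(patient_ts, shift), group_mean)[0, 1]
    p = (np.sum(null >= r_obs) + 1) / (n_perm + 1)     # one-sided p-value
    return r_obs, p

# Toy example: 10 healthy subjects, 240 fMRI volumes each
healthy = np.random.randn(10, 240)
patient = healthy.mean(axis=0) + 0.5 * np.random.randn(240)
r, p = intersubject_sync(patient, healthy)
print(f"sync r = {r:.2f}, p = {p:.4f}")
```

Everything such an analysis “sees” is a correlation between time series; whether that correlation licenses an inference to conscious experience is exactly what is at issue below.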

But what’s the connection between executive functioning and conscious experience? The authors write:

The “executive” function of the brain refers to those processes that coordinate and schedule a host of other more basic cognitive operations, such as monitoring and analyzing information from the environment and integrating it with internally generated goals, as well as planning and adapting new behavioral schemas to take account of this information. As such, executive function is integral to our conscious experience of the world as prior knowledge is integrated into the current “state of play” to make predictions about likely future events.

Does this mean that executive functioning is always conscious? Is the unconscious brain incapable of “monitoring and analyzing information from the environment” and “integrating” that information with goals? Color me skeptical, but I believe in the power of the unconscious mind to perform these functions without the input of conscious awareness.

Several examples come to mind. In the “long-distance truck driver” phenomenon, people can drive automobiles for minutes if not hours without the input of conscious awareness. Surely driving requires “monitoring and analyzing information from the environment”, in addition to integrating that information with goals and adapting new behaviors to deal with novel road conditions.

Another example is automatic writing, where people can write whole intelligent paragraphs without the input of conscious attention, and where the “voice” of the writing is distinct from the person’s normal personality, channeling the personalities of deceased persons or famous literary figures. People could hold conversations with their automatic writing, indicating that the unconscious writer was responding to the environment and surely “monitoring and analyzing information”. I’m not aware of any brain imaging studies of automatic writing, but I would not be surprised if frontal and parietal regions were active, given the complexity of handwriting as a cognitive task. The same goes for long-distance truck driving.

My point is simply to raise the question: can executive function happen unconsciously? Naci et al. say that executive function is “integral” to conscious experience. That might be true. But is conscious experience integral to executive functioning? Maybe not. There is a litany of complex behaviors that can be performed unconsciously, all of which likely recruit frontal and parietal networks of the brain. We can’t simply assume that because information integration occurred, conscious awareness was involved. To make that inference would require us to think that the unconscious mind is “dumb” and incapable of integrating information. But there is plenty of reason to think that what Timothy Wilson calls the “adaptive unconscious” is highly intelligent and capable of many “higher-order” cognitive functions, including monitoring, integrating, planning, and reasoning.

2 Comments

Filed under Consciousness, Psychology

Can the Clinical Diagnosis of Disorders of Consciousness Avoid Behaviorism?


The “standard approach” in clinical neurology has been accused of harboring an implicit “behaviorist epistemology”, because disorders of consciousness are typically diagnosed on the basis of a lack of behavior. The gold-standard diagnostic assessments, such as the JFK Coma Recovery Scale, are behavioral in nature insofar as they are expressly looking for the presence or absence of behavior, either motor or verbal. If the behavior occurs appropriately in response to a command or stimulus, the patient accumulates points toward “normal” consciousness. If no behavior is observable in response to the cue, they don’t get points and are said to have a “disorder of consciousness”.
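To make the behaviorist structure of such scales vivid, here is a toy sketch of how a behavior-based assessment accumulates points. The item names, point values, and diagnostic cutoffs are invented for illustration; they are not the real JFK Coma Recovery Scale items:

```python
# Toy behavior-based assessment scale. Items, points, and cutoffs are
# invented for illustration; this is not the actual JFK Coma Recovery Scale.
RESPONSES = {
    "visual_pursuit": 2,        # points awarded if the behavior is observed
    "localizes_to_sound": 1,
    "follows_command": 3,
    "intelligible_speech": 4,
}

def score_patient(observed: set[str]) -> int:
    """Sum points for each behavior the examiner observed."""
    return sum(pts for item, pts in RESPONSES.items() if item in observed)

def diagnose(total: int) -> str:
    # Invented cutoffs: no observable behavior scores toward "vegetative state".
    if total >= 7:
        return "conscious"
    if total >= 3:
        return "minimally conscious state"
    return "vegetative state"

print(diagnose(score_patient({"visual_pursuit", "follows_command"})))
# -> "minimally conscious state" (2 + 3 = 5 points)
```

Notice that consciousness never appears anywhere in the computation; only observed behavior does.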

The problem with this approach is both conceptual and empirical. Conceptually, there is no necessary link between behavior and consciousness: unless you are Gilbert Ryle or Wittgenstein, you don’t want to define consciousness in terms of behavior. That is, we don’t want to define “pain” as simply the movement of your limbs whenever your cells are damaged, or the disposition to say “ouch”. The reason is that pain is supposed to be a feeling, a painfulness, not a behavior.

Empirically, we know of many cases where behavior and consciousness can be decoupled, such as the total locked-in state, in which someone’s mind is more or less normal but they are completely paralyzed, looking for all intents and purposes like someone in a deep coma or vegetative state while retaining normal brain function. From the outside they would fail these behavioral assessments, yet from the inside they have full consciousness. Furthermore, we know that in some cases of general anesthesia there can be a complete lack of motor response to stimulation while the person maintains conscious awareness.

Another problem with the behaviorist epistemology of clinical diagnosis is that the standard assessment scales require a certain level of human expertise in making the diagnostic judgment. Although most scales have high inter-rater reliability, it nevertheless ultimately comes down to a fallible human making a judgment about someone’s consciousness on the basis of subtle differences between “random” and “meaningful” behavior. A random behavior is just that: a random, reflexive movement that signifies no higher purpose or goal. But if I ask someone to squeeze my hand and they squeeze it, this is a meaningful sign, because it suggests that they can understand language and translate a verbal command into a willed response. But what if the verbal command to squeeze just triggers an unconscious response to squeeze? Sure, it’s possible; no one should rule it out. But what if they do it five times in a row? Or what if I say “don’t squeeze my hand” and they don’t squeeze it? Now we are getting into what clinicians call “unambiguous signs of consciousness”, because the behavior expresses a meaningful purpose and shows what they call “contingency”, which is just another way of saying “appropriateness”.
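The intuition behind repeating the command can be made quantitative. Under the hypothesis that squeezes are random reflexes occurring at some base rate, the probability of matching a string of commands by chance shrinks geometrically. A toy calculation (the 20% base rate is an assumption chosen purely for illustration):

```python
# Toy calculation: if reflexive squeezes occur at random during, say, 20% of
# observation windows, how likely is chance agreement with the examiner?
p_random_squeeze = 0.20  # assumed base rate, purely illustrative

# P(squeeze | "squeeze" command) under the random-reflex hypothesis:
p_match_squeeze_cmd = p_random_squeeze
# P(no squeeze | "don't squeeze" command) under the same hypothesis:
p_match_dont_cmd = 1 - p_random_squeeze

# Five "squeeze" commands answered correctly in a row:
print(p_match_squeeze_cmd ** 5)                          # 0.00032
# Mixed protocol: three "squeeze" plus two "don't squeeze" commands:
print(p_match_squeeze_cmd ** 3 * p_match_dont_cmd ** 2)  # 0.00512
```

This is why repeated, command-contingent responses, especially to “don’t squeeze”, get treated as “unambiguous”: the random-reflex hypothesis becomes increasingly implausible. It does not, of course, settle the philosophical question of whether the responses are conscious.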

But what does it mean for a behavior to really be meaningful? Just that there is a goal-structure behind it? Or that it is willed? Again, we don’t want to define “meaning” or “appropriateness” in terms of outward behavior, because when you are sleepwalking your behavior is goal-structured yet you are not conscious. Or consider the case of automatic writing, in which one of your hands is capable of carrying on a written conversation and producing meaningful linguistic statements without “you” being in control at all. So clearly there is a possible dissociation between “meaningful” behavior and consciousness. All we can say is that for normal people in normal circumstances, meaningful behavior is a good indicator of normal consciousness. But notice how vacuous that statement is: it tells us nothing about the hard cases.

So, in a nutshell, the diagnosis of disorders of consciousness has an inescapable element of human subjectivity in it. Which is precisely why researchers are trying to move to brain-based diagnostic tools such as fMRI or EEG, which are supposed to be more “objective” because they skip right over the question of meaningful behavior and look at the “source” of the behavior: the brain itself. But I want to argue that such measures can never bypass the subjectivity of diagnosis without going full behaviorist.

The reason brain-based measures of disorders of consciousness are behaviorist is simply that you are looking at the behavior of neurons. You can’t see the “feelings” of neurons from a brain scanner any more than you can see the “feeling” of pain from watching someone’s limb move. Looking at the brain does not grant you special powers to see consciousness more directly. It is still an indirect measure of consciousness, and it will always require the human judgment of the clinician to say, “OK, this brain activity is going to count as a measure toward ‘normal’ consciousness”. It might be slightly more objective, but it will never be any less subjective unless you want to define normal consciousness in terms of neural behavior. But how is that any different from standard behaviorism? The only difference is that we are relying on the assumption that neural behavior is the substrate of consciousness. That might be true from a metaphysical perspective, but it’s no help in the epistemology of diagnosis, because as an outside observer you don’t see the consciousness; you just see the squishy brain or some representation on a computer screen. I believe there is a circularity here that cannot be escaped, but I won’t go into it here (I talk about it in this post).
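To see how little the logic changes when we “go neural”, compare a brain-based version of the same scoring loop as the behavioral sketch above. Every feature name, point value, and cutoff below is invented for illustration:

```python
# Same scoring logic as a behavioral scale, with neural "behaviors" swapped in
# for limb movements. Feature names, points, and the cutoff are invented.
NEURAL_ITEMS = {
    "p300_to_own_name": 2,           # event-related EEG response observed
    "fronto_parietal_fmri_sync": 3,  # activity synchronized with healthy controls
    "motor_imagery_response": 4,     # imagery-task activation on command
}

def neural_score(observed: set[str]) -> str:
    total = sum(pts for item, pts in NEURAL_ITEMS.items() if item in observed)
    return "counts toward 'normal' consciousness" if total >= 4 else "does not"

print(neural_score({"fronto_parietal_fmri_sync", "p300_to_own_name"}))
# -> counts toward 'normal' consciousness (3 + 2 = 5 points)
```

The clinician’s judgment still enters in exactly the same place: someone has to decide which neural behaviors count and where the cutoff sits.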

2 Comments

Filed under Consciousness, Philosophy of science, Psychology, Uncategorized

Is Consciousness Required for Discrimination?

In their book A Universe of Consciousness, Edelman and Tononi use the example of a photodiode discriminating light to illustrate the problem of consciousness:

Consider a simple physical device, such as a photodiode, that can differentiate between light and dark and provide an audible output. Let us then consider a conscious human being performing the same task and then giving a verbal report. The problem of consciousness can now be posed in elementary terms: Why should the simple differentiation between light and dark performed by the human being be associated with and, indeed, require conscious experience, while that performed by the photodiode presumably does not? (p. 17)
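For contrast, a literal software photodiode takes only a few lines; the threshold and the two outputs below are arbitrary choices for illustration:

```python
def photodiode(light_level: float, threshold: float = 0.5) -> str:
    """Differentiate light from dark and 'report' the result audibly.
    Nothing here even hints at a requirement for conscious experience."""
    return "beep-HIGH" if light_level > threshold else "beep-LOW"

print(photodiode(0.8))  # beep-HIGH
print(photodiode(0.1))  # beep-LOW
```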

Does discrimination of a stimulus from a background “require conscious experience”? I don’t see why it would. This seems like something the unconscious mind could do all on its own, and indeed is doing all the time. But it comes down to how we define “consciousness”. If we are talking about consciousness as subjective experience, the question is: does discrimination require that there be something-it-is-like for the brain to perform that discrimination? Perhaps. But I don’t know how to answer that question empirically, given the subjective nature of experience and the sheer difficulty of building an objective consciousness-meter.

On the other hand, suppose that by “consciousness” we mean something like System II-style cognition, i.e., slow, deliberate, conscious, introspective thinking. On this view, consciousness is but the tip of the iceberg of cognitive processing, so it would be absurd to say that the 99% of the mind that is unconscious is incapable of discrimination. This is the lesson of Oswald Külpe and the Würzburg school of imageless thought. They asked trained introspectors from the Wundtian tradition to discriminate between two weights held in their hands, to judge whether one was heavier than the other. The subjects were then asked to introspect on the process of discrimination itself. To their surprise, there were no conscious images associated with the weight discrimination. They simply held the weights, consciously formed an intention to discriminate, the discrimination happened unconsciously, and then they became aware of the result of the unconscious judgment. Hence Külpe and the Würzburg school discovered a whole class of “imageless thought”, i.e., thought that happens beneath the level of conscious awareness.

Of course, the Würzburg school wasn’t talking about consciousness in terms of subjective qualia; they were talking about consciousness in terms of what’s introspectable. If you can’t introspect a thought process in your mind, then it’s unconscious. On this view, and in conjunction with evolutionary models of introspection, it seems clear that a great many discriminations happen beneath the surface of conscious awareness. This is how I prefer to talk about consciousness: in terms of System II-style introspection, where consciousness is but the tip of a great cognitive iceberg. On Edelman and Tononi’s view, consciousness occurs any time there is information integration. On my view, information integration can occur unconsciously, and indeed most if not all non-human animal life is unconscious. Human-style slow, deliberate, introspective conscious reflection is rare in the animal kingdom, even if during normal human waking life it is constantly running, overlapping and integrating with the iceberg of unconsciousness so as to give an illusion of cognitive unity. It seems as if consciousness is everywhere all the time and that there is very little unconscious activity. But as Julian Jaynes once said, we cannot be conscious of what we are not conscious of, and so consciousness seems pervasive in our mental life when in fact it is not.

2 Comments

Filed under Consciousness

Some Comments on Edelman and Tononi’s book A Universe of Consciousness

I started reading Edelman and Tononi’s book A Universe of Consciousness and wanted to offer some skeptical comments. I’m generally skeptical of any theorizing about consciousness these days, not because I’m against theorizing in science but because I have been leaning more Mysterian in my epistemology toward “consciousness”, where “consciousness” refers to subjective experience. I think any fundamental theory of consciousness is doomed to fail because it will run into vicious circularity, as I will explain below. Take this seemingly innocuous statement offered at the beginning of chapter 1:

Everyone knows what consciousness is: It is what abandons you every evening when you fall asleep and reappears the next morning when you wake up.

Already E&T are helping themselves to some heavy-duty, theoretically loaded assumptions. E&T are talking about consciousness as subjectivity, so why assume subjectivity goes away completely during dreamless sleep? How do we know there isn’t something-it-is-like to be asleep and we just don’t remember what-it’s-like? If subjectivity is at 100% during wakefulness, why not think it goes down to 1% or 0.05% while sleeping, instead of 0%? Perhaps what-it-is-like for humans to be asleep is analogous in subjective intensity to what-it-is-like to be a bee or a lizard when awake.

By helping themselves to the assumption that consciousness goes away completely during sleep, E&T allow themselves a “starting point” or “fixed point” from which to begin their theorizing. It becomes their rock-solid assumption against which they can begin doing experimental work. But from a fundamental point of view, it is an unargued-for assumption. Where’s the evidence for it? Introspective evidence is not enough, because introspection is turned off during sleep. And empirical evidence? How are you going to measure it? With a consciousness-meter? Well, how are you going to validate that it’s calibrated properly? Say you build one, point it at a sleeping brain, and it registers “0”. How do you know the measurement is correct? What’s the calibration method?

They also assume that consciousness is a “relatively recent development” evolutionarily speaking. If they were talking about self-consciousness this would make sense, but they are not; they are talking about subjectivity, the having of a “point of view”. But why not think a bee has a point of view on the world? Or why assume you need a brain or nervous system at all? For all we know, there is something-it-is-like to be an amoeba. E&T want this to be another “fixed point”, because if you assume that subjectivity requires a brain or nervous system, it gives you a starting place scientifically: it tells you where to look. But again, it’s never argued for, simply assumed. And it’s not logically incoherent to think that a creature without a nervous system has a dim phenomenology.

Suppose you assume that only brained creatures have consciousness and you devise a theory accordingly. Having built your theory, you devise a series of experimental techniques and measurements and apply them to brained creatures. You “confirm” that, yes indeed, brained creatures are conscious. What happens when you apply the same technique to a non-brained creature like an amoeba, testing for whether the amoeba has consciousness? Surprise, surprise: your technique fails to register any consciousness in the amoeba. But there is a blatant epistemic circularity here, because you designed your measurement technique according to certain theoretical assumptions, starting with the “fixed point” that consciousness requires a nervous system. But why make that assumption? Why not assume instead that subjectivity starts with life itself and is progressively modified as nervous systems are introduced? Moreover, they assume that:

Conscious experience is integrated (conscious states cannot be subdivided into independent components) and, at the same time, is highly differentiated (one can experience billions of different conscious states).

Why can’t conscious states be subdivided? Why assume that? What does that even mean: divided from what into what? Take the sleeping-at-0.01%-consciousness example. Why not think wakeful “unified” consciousness at 100% is the result of a thousand tiny microconsciousnesses “singing” side by side, such that the total choir gives rise to the illusion of a single large singer? When E&T say “one” can experience billions of states, who is this “one”? Why one, and not many? Their assumption of conscious unity is another “fixed point”, but it’s just an assumption. Granted, it’s an assumption that stems from introspective experience, but why trust introspection here? Introspection also says consciousness completely goes away during sleep, but as we’ve seen, it might be wrong about that.

3 Comments

Filed under Consciousness

Vegetative State Patients as Moral Patients

https://www.academia.edu/7692522/Vegetative_State_Patients_As_Moral_Patients

Abstract:

Adrian Owen (2006) recently discovered that some vegetative state (VS) patients have residual levels of cognition, enabling them to communicate using brain scanners. This discovery is clearly morally significant, but the problem comes in specifying why exactly it is significant and whether extant theories of moral patienthood can explain that significance. In this paper I explore Mark Bernstein’s theory of experientialism, which says an entity deserves moral consideration if it is a subject of conscious experience. Because VS is a disorder of consciousness, it should be straightforward to apply Bernstein’s theory to Owen’s discovery, but several problems arise. First, Bernstein’s theory is beset by ambiguity in several key respects, making it difficult to apply to the discovery. Second, Bernstein’s experientialism fails to fully account for the normative significance of what I call “narrative experience”. A deeper appreciation of narrative experience is needed to account for the normative significance of Owen’s findings.


This paper has gone through so many drafts. I swear I’ve rewritten it five times from more or less scratch. Each time, I’ve tried to narrow my thesis to be ever smaller and less ambitious, because I’m pretty sure that’s the only way I’m going to get this thing passed by my qualifying paper committee. As always, any thoughts or comments are appreciated.

1 Comment

Filed under Consciousness, Neuroethics, Psychology

Quote of the day – John Heil Explains What’s Wrong With Non-reductive Physicalism

What I object to is the unthinking move from linguistic premises to ontological conclusions, from the assumption, for instance, that if you have an ‘ineliminable’ predicate that features in an explanation of some phenomenon of interest, the predicate must name a property shared by everything to which it applies. (A predicate is ineliminable if it cannot be analyzed, paraphrased, or translated into less vexed predicates.)

Philosophers speak of ‘the pain predicate’. When you look at creatures plausibly regarded as being in pain, you do not see a single physical property they all share (and in virtue of which it would be true to say that they are in pain). Instead of thinking that the predicate, ‘is in pain’, designates a family of similar properties, philosophers (including Putnam in one of his moods) conclude that the predicate must name a ‘higher-level’ property possessed by a creature by virtue of that creature’s ‘lower-level’ physical properties. You have many different kinds of physical property supporting a single nonphysical property. This is the kind of ‘non-reductive physicalism’ you have in functionalism.

Non-reductive physicalism has become a default view, a heavyweight champ that retains its status until decisively defeated. Non-reductive physicalism acquired the crown, however, not by merit, but by a kind of linguistic subterfuge. If you read early anti-reductionist tracts – for instance, Jerry Fodor’s ‘Special Sciences (Or: The Disunity of Science as a Working Hypothesis)’ (Synthese, 1974) – you will see that the arguments concern predicates, categories, taxonomies. Fodor’s point, a correct one in my judgment, is that there is no prospect of replacing taxonomies in the special sciences with one drawn from physics. But from this no ontological conclusions follow – unless you assume that every ‘irreducible’ predicate names a property.

This language-driven way of thinking is not one that would have occurred to the ancients, the medievals, or the early moderns – or to my aforementioned philosophical models. It is an invention of the 20th century, one that has led to the emasculation of serious ontology.

~ From an interview with Richard Marshall at 3:AM Magazine.

Leave a comment

Filed under Philosophy

Concepts of Consciousness – a Flowchart

[Flowchart image: Concepts of Consciousness]

I’ve been playing with Chrome’s Lucidchart.

2 Comments

May 3, 2014 · 8:17 am