Tag Archives: cognitive science

Man in Vegetative State Shows Brain Activity to Movie: What Does It Mean?

In a recent study, Naci et al. investigated how the brain responds to an 8-minute Alfred Hitchcock movie. In healthy subjects they found that frontal and parietal areas indicative of executive functioning were active during the most suspenseful parts of the movie. They then showed the same movie to two patients diagnosed as being in a vegetative state, one of whom had been in that state for 16 years. In one of the patients they found that “activity in a network of frontal and parietal regions that are known to support executive processing significantly synchronized to that of healthy participants”. In other words, the vegetative man’s brain “tracked” the suspense-points of the movie in the same way that healthy controls’ brains did. They reasoned that the patient was therefore consciously aware of the video, despite being behaviorally unresponsive:

The patient’s brain activity in frontal and parietal regions was tightly synchronized with the healthy participants’ over time, and, crucially, it reflected the executive demands of specific events in the movie, as measured both qualitatively and quantitatively in healthy individuals. This suggested that the patient had a conscious cognitive experience highly similar to that of each and every healthy participant, while watching the same movie.
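
A quick methodological aside: what does it mean, operationally, for a patient’s activity to be “synchronized” with the healthy group? I don’t know the details of Naci et al.’s actual analysis pipeline, but the general idea behind this kind of measure is inter-subject correlation: take one person’s time course from a brain region and correlate it with the average time course of the group watching the same movie. Here is a minimal, hypothetical sketch in Python; the function name, array shapes, and toy “suspense” signal are my own illustrative assumptions, not anything from the study.

    import numpy as np

    def intersubject_correlation(patient_ts, healthy_ts):
        # patient_ts: 1-D array of length T (one region's fMRI time course).
        # healthy_ts: 2-D array of shape (n_controls, T), same region in controls.
        # Returns the Pearson correlation between the patient's time course
        # and the mean time course of the healthy group.
        group_mean = healthy_ts.mean(axis=0)
        return np.corrcoef(patient_ts, group_mean)[0, 1]

    # Toy example: a shared "suspense" signal plus subject-specific noise.
    rng = np.random.default_rng(0)
    T = 240  # e.g. one fMRI volume every 2 seconds over an 8-minute movie
    suspense = np.sin(np.linspace(0, 6 * np.pi, T))
    controls = suspense + 0.5 * rng.standard_normal((12, T))
    patient = suspense + 0.5 * rng.standard_normal(T)
    print(round(intersubject_correlation(patient, controls), 2))

Note that a high correlation by itself only shows that the signal tracks the stimulus over time; whether that licenses an inference to conscious experience is exactly the question I want to raise below.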

But what’s the connection between executive functioning and conscious experience? The authors write:

The “executive” function of the brain refers to those processes that coordinate and schedule a host of other more basic cognitive operations, such as monitoring and analyzing information from the environment and integrating it with internally generated goals, as well as planning and adapting new behavioral schemas to take account of this information. As such, executive function is integral to our conscious experience of the world as prior knowledge is integrated into the current “state of play” to make predictions about likely future events.

Does this mean that executive functioning is always conscious? Is the unconscious brain incapable of “monitoring and analyzing information from the environment” and “integrating” that information with goals? Color me skeptical but I believe in the power of the unconscious mind to perform these functions without the input of conscious awareness.

Several examples come to mind. In the “long-distance truck driver” phenomenon, people can drive automobiles for minutes if not hours without the input of conscious awareness. Surely driving requires “monitoring and analyzing information from the environment,” in addition to integrating that information with goals and adapting new behaviors to deal with novel road conditions.

Another example is automatic writing, where people can write whole intelligent paragraphs without the input of conscious attention, and the “voice” of the writing is distinct from the person’s normal personality, channeling the personalities of deceased persons or famous literary figures. People would hold conversations with their automatic writing, indicating that the unconscious writer was responding to the environment and surely “monitoring and analyzing information.” I’m not aware of any brain imaging studies of automatic writing, but I would not be surprised if frontal and parietal regions were active, given the complexity of handwriting as a cognitive task. The same goes for long-distance truck driving.

My point is simply to raise the question: Can executive function happen unconsciously? Naci et al. say that executive function is “integral” to conscious experience. That might be true. But is conscious experience integral to executive functioning? Maybe not. There is a litany of complex behaviors that can be performed unconsciously, all of which likely recruit frontal and parietal networks of the brain. We can’t simply assume that because information integration occurred, conscious awareness was involved. To make that inference would require us to think that the unconscious mind is “dumb” and incapable of integrating information. But there is plenty of reason to think that what Timothy Wilson calls the “adaptive unconscious” is highly intelligent and capable of many “higher-order” cognitive functions, including monitoring, integrating, planning, reasoning, etc.


2 Comments

Filed under Consciousness, Psychology

Some Comments on Edelman and Tononi’s book A Universe of Consciousness

I started reading Edelman and Tononi’s book A Universe of Consciousness and I wanted to offer some skeptical comments. I’m generally skeptical about any theorizing about consciousness these days, not because I’m against theorizing in science but because I have been leaning more Mysterian in my epistemology towards “consciousness”, where “consciousness” refers to subjective experience. I think any fundamental theory of consciousness is doomed to fail because it will run into vicious circularity, as I will explain below. Take this seemingly innocuous statement offered at the beginning of chapter 1:

Everyone knows what consciousness is: It is what abandons you every evening when you fall asleep and reappears the next morning when you wake up.

Already E&T are helping themselves to some heavy-duty, theoretically loaded assumptions. E&T are talking about consciousness as subjectivity, so why assume subjectivity goes away completely during dreamless sleep? How do we know there isn’t something-it-is-like to be asleep and we just don’t remember what-it’s-like? If subjectivity is at 100% during wakefulness, why not think it goes down to 1% or .05% while sleeping instead of 0%? Perhaps what-it-is-like for humans to be asleep is analogous in subjective intensity to what-it-is-like to be a bee or lizard when awake.

By helping themselves to the assumption that consciousness goes away completely during sleep, E&T allow themselves a “starting point” or “fixed point” from which to begin their theorizing. It becomes their rock-solid assumption against which they can begin doing experimental work. But from a fundamental point of view, it is an unargued-for assumption. Where’s the evidence for it? Introspective evidence is not enough because introspection is turned off during sleep. And empirical evidence? How are you going to measure it? With a consciousness-meter? Well, how are you going to validate that it’s calibrated properly? Say you build one, point it at a sleeping brain, and it registers “0”. How do you know the measurement is correct? What’s the calibration method?

They also assume that consciousness is a “relatively recent development” evolutionarily speaking. If we were talking about self-consciousness this would make sense, but they are not. They are talking about subjectivity, the having of a “point-of-view”. But why not think a bee has a point of view on the world? Or why assume you need a brain or nervous system at all? For all we know there is something-it-is-like to be an amoeba. E&T want this to be another “fixed point”, because if you assume that subjectivity requires a brain or nervous system it gives you a starting place scientifically. It tells you where to look. But again, it’s never argued for, simply assumed. And it’s not logically incoherent to think a creature without a nervous system has a dim phenomenology.

Suppose you assume that only brained creatures have consciousness and you devise a theory accordingly. Having made your theory, you devise a series of experimental techniques and measurements and then apply them to brained creatures. You “confirm” that yes, indeed, brained creatures are conscious all right. What happens when you apply the same technique to a non-brained creature like an amoeba, testing for whether the amoeba has consciousness? Surprise, surprise: your technique fails to register any consciousness in the amoeba. But there is a blatant epistemic circularity here, because you designed your measurement technique according to certain theoretical assumptions, starting with the “fixed point” that consciousness requires a nervous system. But why make that assumption? Why not assume instead that subjectivity starts with life itself and is progressively modified as nervous systems are introduced? Moreover, they assume that

Conscious experience is integrated (conscious states cannot be subdivided into independent components) and, at the same time, is highly differentiated (one can experience billions of different conscious states).

Why can’t conscious states be subdivided? Why assume that? What does that even mean? Divided from what into what? Take the sleeping-at-.01%-consciousness example. Why not think wakeful “unified” consciousness at 100% is the result of 1,000 tiny microconsciousnesses “singing” side by side, such that the total choir of microconsciousnesses gives rise to the illusion of a single large singer? When E&T say “one” can experience billions of states, who is this “one”? Why one, and not many? Their assumption of conscious unity is another “fixed point”, but it’s just an assumption. Granted, it’s an assumption that stems from introspective experience, but why trust introspection here? Introspection also says consciousness completely goes away during sleep, but as we’ve seen, it might be wrong about that.

3 Comments

Filed under Consciousness

My Biggest Pet Peeve in Consciousness Research

 

Boy, was I excited to read that new Nature Neuroscience paper where scientists report experimentally inducing lucid dreaming in people. Pretty cool, right? But then right in the abstract I run across my biggest pet peeve whenever people use the dreaded c-word: blatant terminological inconsistency. Not just an inconsistency across different papers, or buried in a footnote, but between a title and an abstract, and within the abstract itself. Consider the title of the paper:

Induction of self awareness in dreams through frontal low current stimulation of gamma activity

The term “self-awareness” makes sense here because if normal dream awareness is environmentally decoupled 1st-order awareness, then lucid dreaming is a 2nd-order awareness, because you become meta-aware of the fact that you are first-order dream-aware. So far so good. Now consider the abstract:

 Recent findings link fronto-temporal gamma electroencephalographic (EEG) activity to conscious awareness in dreams, but a causal relationship has not yet been established. We found that current stimulation in the lower gamma band during REM sleep influences ongoing brain activity and induces self-reflective awareness in dreams. Other stimulation frequencies were not effective, suggesting that higher order consciousness is indeed related to synchronous oscillations around 25 and 40 Hz.

Gah! What a confusing mess of conflicting concepts. The title says “self-awareness” but the first sentence talks instead about “conscious awareness”. It’s an elementary mistake to confuse consciousness with self-consciousness, or at least to conflate them without immediately qualifying why you are violating standard practice in doing so. While there are certainly theorists out there who are skeptical about the very idea of “1st-order” awareness being cleanly demarcated from “2nd-order” awareness (Dan Dennett comes to mind), it goes without saying that this is a highly controversial position that cannot just be assumed without begging the question. Immediate red flag.

The first sentence also references previous findings linking the neural correlates of “conscious awareness” to specific gamma frequencies of neural activity in fronto-temporal networks. The authors note, though, that correlation is not causation. The next sentence then leads us to believe the study will provide that missing causal evidence about conscious awareness and gamma frequencies.

Yet the authors don’t say that. What they say instead is that they’ve found evidence that gamma frequencies are linked to “self-reflective awareness” and “higher-order consciousness”, which again are concepts theoretically distinct from “conscious awareness”, unless you are pretheoretically committed to a kind of higher-order theory of consciousness. But even that wouldn’t be quite right, because on, e.g., Rosenthal’s HOT theory, a higher-order thought would give rise to first-order awareness, not lucid dreaming, which is about self-awareness. On higher-order views, you would technically need a 3rd-order awareness to count as lucid dreaming.

So please, if you are writing about consciousness, remember that consciousness is distinct from self-consciousness and keep your terms straight.

1 Comment

Filed under Academia, Consciousness, Random

Quote for the Day – The Lake Wobegon Effect – We Are All Above-Average

When drivers rated their ability behind the wheel, about three-quarters thought they were better than average. Strangely, those who had been in an auto accident were more likely to rate themselves as better drivers than did those whose driving record was accident-free.

Even stranger: In general, most people rate themselves as being less likely than others to overrate their abilities. These inflated self-ratings reflect the ‘better-than-average’ effect, which has been found for just about any positive trait, from competence and creativity to friendliness and honesty.

~ Daniel Goleman, Focus: The Hidden Driver of Excellence (2013), p. 74

See: http://en.wikipedia.org/wiki/Illusory_superiority

Leave a comment

Filed under Books, Psychology

Quote for the Day – The Attention Schema Theory of Consciousness

One way to approach the theory is through social perception. If you notice Harry paying attention to the coffee stain on his shirt, when you see the direction of Harry’s gaze, the expression on his face, and his gestures as he touches the stain, and when you put all those clues into context, your brain does something quite specific: it attributes awareness to Harry. Harry is aware of the stain on his shirt. Machinery in your brain, in the circuitry that participates in social perception, is expert at this task of attributing awareness to other people. It sees another brain-controlled creature focusing its computing resources on an item and generates the construct that person Y is aware of thing X. In the theory proposed in this book, the same machinery is engaged in attributing awareness to yourself: computing that you are aware of thing X.

~Michael Graziano, Consciousness and the Social Brain

I’m planning on doing a write-up on this book soon. I could not put the book down and read it in a few days. Compared to most books on consciousness, Graziano’s central thesis is clearly stated, suitably modest in ambition, neurologically plausible, and theoretically compelling. I was impressed that Graziano applied his theory to explain “weird” aspects of human experience like out-of-body experiences, Mesmerism, religion, etc. I predict Graziano is going to be a big player in the consciousness debates from here on out. That I am really drawn to the theory is not surprising given its affinities with some things Julian Jaynes said, e.g., “It is thus a possibility that before an individual man had an interior self, he unconsciously first posited it in others, particularly contradictory strangers, as the thing that caused their different and bewildering behavior…We may first unconsciously (sic) suppose other consciousnesses, and then infer our own by generalization” (Origin, p. 217). Jaynes also explicitly proposed that some features of consciousness are analogs (models) of sensory attention, which is at the heart of Graziano’s theory, albeit not worked out as rigorously.

3 Comments

Filed under Books, Consciousness, Psychology

Quote of the Day – The Code of Consciousness

The code used to register information in the brain is of little importance in determining what we perceive, so long as the code is used appropriately by the brain to determine our actions. For example, no one today thinks that in order to perceive redness some kind of red fluid must ooze out of neurons in the brain. Similarly, to perceive the world as right side up, the retinal image need not be right side up.

~ J. Kevin O’Regan, Why Red Doesn’t Sound Like a Bell, p. 6

Leave a comment

Filed under Consciousness, Psychology

New Paper: In Defense of the Extraordinary in Religious Belief

Read it here: In Defense of the Extraordinary in Religious Belief

So this is a paper I wrote for Ron Mallon’s Culture and Evolution seminar. I’m really happy with how the paper turned out, and I believe this is the direction I want to go for my future dissertation project. The paper is really a response to some of Pascal Boyer’s claims downplaying the importance of extraordinary religious experience in explaining the origins and cultural success of religious belief. For example, Boyer says:

Even if prophets were the main source of new religious information, that information would still require ordinary nonprophets’ minds to turn it into some particular form of religion…This is why we will probably not understand the diffusion of religion by studying exceptional people, but we may well have a better grasp of religion in general, including that of prophets and other virtuosos, by considering how it is derived from ordinary cognitive capacities. (Boyer, 2001, pp. 310-311)

This is a standard thing to say in the literature on the evolutionary origins of religion. Most psychologists who are trying to explain religious belief do so in terms of the operation of various ordinary cognitive mechanisms like the Agency Detection Device or our theory-of-mind capacities. The basic idea, then, is that we don’t need to posit any sort of “special” religious mechanism that serves as the generator of religious belief. According to what I am calling the Standard Cognitive Model (SCM) of religious belief, religious thoughts are really not that different from any other kind of cognitive operation. Crucially, the SCM is committed to an order of explanation on which both religion in general and extraordinary experience are explained in terms of the ordinary, and not the other way around.

It’s this emphasis on the “ordinary” that I am arguing against in the paper. My argument is basically this: we cannot use contemporary ratios of ordinary to extraordinary experience as a mirror of what that ratio might have been like in ancient times. Borrowing heavily from Jaynesian theory, I provide several lines of evidence for thinking that what we now consider extraordinary might have actually been quite ordinary in ancient times. If this is right, then we don’t need to think about extraordinary experience as being the exclusive domain of “religious specialists”, as Boyer is prone to think. Instead, we can think about extraordinary experiences such as hearing the voice of a god or demigod talk to you as being quite ordinary.

In the paper, I look at contemporary research on both the incidence of auditory hallucination in children and the factors that lead to the persistence of such hallucinations. What the research shows is that the best predictor of the persistence of voice hearing in children is whether they assign the voices to external sources. And prior to the recent invention of the concept of “hallucination”, all ancient voice hearers (like Socrates) would have automatically interpreted their experience as a communication from an external agent, namely, a god or demigod. Since such attributions are the key predictors of persistence, we can now imagine a society where upwards of 25% of adults are actively experiencing auditory hallucinations and interpreting them as messages from gods or demigods. Accordingly, would we still want to say that “extraordinary experience” is exceptional and the exclusive domain of religious specialists?

If this is at all historically accurate, then it looks like we can reverse the explanatory arrow of the SCM. Rather than extraordinary experiences being on the sidelines in determining the cultural success of religion, the familiar experience of auditory hallucination and the shared cultural narratives for interpreting such experiences would have played a much greater role in the spread of religion than the SCM allows. To respond to Boyer then, we can say that perhaps the reason why the “insights” of holy persons were widely accepted is because the ordinary population was already quite familiar with what-it-is-like to hear the voice of a god or demigod commanding you to do something.

1 Comment

Filed under Psychology, Theology