Tag Archives: cognitive science

Man in Vegetative State Shows Brain Activity to Movie: What Does It Mean?

In a recent study, Naci et al. investigated how the brain responds to an 8-minute Alfred Hitchcock movie. In healthy subjects they found that frontal and parietal areas indicative of executive functioning were active during the most suspenseful parts of the movie. They then showed the same movie to two patients diagnosed as being in a vegetative state, one of whom had been in that state for 16 years. In one of the patients they found that “activity in a network of frontal and parietal regions that are known to support executive processing significantly synchronized to that of healthy participants”. In other words, the vegetative man’s brain “tracked” the suspense-points of the movie in the same way that healthy controls did. They reasoned that the patient was therefore consciously aware of the video, despite being behaviorally unresponsive:

The patient’s brain activity in frontal and parietal regions was tightly synchronized with the healthy participants’ over time, and, crucially, it reflected the executive demands of specific events in the movie, as measured both qualitatively and quantitatively in healthy individuals. This suggested that the patient had a conscious cognitive experience highly similar to that of each and every healthy participant, while watching the same movie.

But what’s the connection between executive functioning and conscious experience? The authors write:

The “executive” function of the brain refers to those processes that coordinate and schedule a host of other more basic cognitive operations, such as monitoring and analyzing information from the environment and integrating it with internally generated goals, as well as planning and adapting new behavioral schemas to take account of this information. As such, executive function is integral to our conscious experience of the world as prior knowledge is integrated into the current “state of play” to make predictions about likely future events.

Does this mean that executive functioning is always conscious? Is the unconscious brain incapable of “monitoring and analyzing information from the environment” and “integrating” that information with goals? Color me skeptical, but I believe in the power of the unconscious mind to perform these functions without the input of conscious awareness.

Several examples come to mind. In the “long-distance truck driver” phenomenon, people can drive automobiles for minutes if not hours without the input of conscious awareness. Surely driving requires “monitoring and analyzing information from the environment”, in addition to integrating that information with goals and adapting new behaviors to deal with novel road conditions.

Another example is automatic writing, where people can write whole intelligent paragraphs without the input of conscious attention, and the “voice” of the writing is distinct from the person’s normal personality, channeling the personalities of deceased persons or famous literary figures. People would hold conversations with their automatic writing, indicating that the unconscious writer was responding to the environment and surely “monitoring and analyzing information”. I’m not aware of any brain imaging studies of automatic writing, but I would not be surprised if frontal and parietal regions were active, given the complexity of handwriting as a cognitive task. The same goes for long-distance truck driving.

My point is simply to raise the question: Can executive function happen unconsciously? Naci et al. say that executive function is “integral” to conscious experience. That might be true. But is conscious experience integral to executive functioning? Maybe not. There is a litany of complex behaviors that can be performed unconsciously, all of which likely recruit frontal and parietal networks of the brain. We can’t simply assume that just because information integration occurred that conscious awareness was involved. To make that inference would require us to think that the unconscious mind is “dumb” and incapable of integrating information. But there is plenty of reason to think that what Timothy Wilson calls the “adaptive unconscious” is highly intelligent and capable of many “higher-order” cognitive functions including monitoring, integrating, planning, reasoning, etc.

2 Comments

Filed under Consciousness, Psychology

Some Comments on Edelman and Tononi’s book A Universe of Consciousness

I started reading Edelman and Tononi’s book A Universe of Consciousness and I wanted to offer some skeptical comments. I’m generally skeptical about any theorizing about consciousness these days, not because I’m against theorizing in science but because I have been leaning more Mysterian in my epistemology towards “consciousness”, where “consciousness” refers to subjective experience. I think any fundamental theory of consciousness is doomed to fail because it will run into vicious circularity, as I will explain below. Take this seemingly innocuous statement offered at the beginning of chapter 1:

Everyone knows what consciousness is: It is what abandons you every evening when you fall asleep and reappears the next morning when you wake up.

Already E&T are helping themselves to some heavy-duty, theoretically loaded assumptions. E&T are talking about consciousness as subjectivity, so why assume subjectivity goes away completely during dreamless sleep? How do we know there isn’t something-it-is-like to be asleep and we just don’t remember what-it’s-like? If subjectivity is at 100% during wakefulness, why not think it goes down to 1% or .05% while sleeping instead of 0%? Perhaps what-it-is-like for humans to be asleep is analogous in subjective intensity to what-it-is-like to be a bee or a lizard when awake.

By helping themselves to the assumption that consciousness goes away completely during sleep, E&T allow themselves a “starting point” or “fixed point” from which to begin their theorizing. It becomes their rock-solid assumption against which they can begin doing experimental work. But from a fundamental point of view, it is an unargued-for assumption. Where’s the evidence for it? Introspective evidence is not enough, because introspection is turned off during sleep. And empirical evidence? How are you going to measure it? With a consciousness-meter? Well, how are you going to validate that it’s calibrated properly? Say you build one, point it at a sleeping brain, and it registers “0”. How do you know the measurement is correct? What’s the calibration method?

They also assume that consciousness is a “relatively recent development”, evolutionarily speaking. If we were talking about self-consciousness this would make sense, but they are not. They are talking about subjectivity, the having of a “point-of-view”. But why not think a bee has a point of view on the world? Or why assume you need a brain or nervous system at all? For all we know there is something-it-is-like to be an amoeba. E&T want this to be another “fixed point”, because if you assume that subjectivity requires a brain or nervous system it gives you a starting place scientifically. It tells you where to look. But again, it’s never argued for, simply assumed. Yet it’s not logically incoherent to think a creature without a nervous system has a dim phenomenology.

Suppose you assumed that only brained creatures have consciousness and you devised a theory accordingly. Having made your theory, you devise a series of experimental techniques and measurements and then apply them to brained creatures. You “confirm” that, yes indeed, brained creatures are conscious all right. What happens when you apply the same technique to a non-brained creature like an amoeba, testing for whether the amoeba has consciousness? Surprise, surprise: your technique fails to register any consciousness in the amoeba. But there is a blatant epistemic circularity here, because you designed your measurement technique according to certain theoretical assumptions, starting with the “fixed point” that consciousness requires a nervous system. But why make that assumption? Why not assume instead that subjectivity starts with life itself and is progressively modified as nervous systems are introduced? Moreover, they assume that

Conscious experience is integrated (conscious states cannot be subdivided into independent components) and, at the same time, is highly differentiated (one can experience billions of different conscious states).

Why can’t conscious states be subdivided? Why assume that? What does that even mean? Divided from what into what? Take the sleeping-at-.01%-consciousness example. Why not think wakeful “unified” consciousness at 100% is the result of 1,000 tiny microconsciousnesses “singing” side by side, such that the total choir of microconsciousnesses gives rise to the illusion of a single large singer? When E&T say “one” can experience billions of states, who is this “one”? Why one, and not many? Their assumption of conscious unity is another “fixed point”, but it’s just an assumption. Granted, it’s an assumption that stems from introspective experience, but why trust introspection here? Introspection also says consciousness completely goes away during sleep, but as we’ve seen it might be wrong about that.

3 Comments

Filed under Consciousness

My Biggest Pet Peeve in Consciousness Research


Boy, was I excited to read that new Nature paper where scientists report experimentally inducing lucid dreaming in people. Pretty cool, right? But then, right in the abstract, I run across my biggest pet peeve whenever people use the dreaded c-word: blatant terminological inconsistency. Not just an inconsistency across different papers, or buried in a footnote, but between a title and an abstract, and within the abstract itself. Consider the title of the paper:

Induction of self awareness in dreams through frontal low current stimulation of gamma activity

The term “self-awareness” makes sense here because if normal dream awareness is environmentally decoupled 1st-order awareness, then lucid dreaming is a 2nd-order awareness, because you become meta-aware of the fact that you are first-order dream-aware. So far so good. Now consider the abstract:

Recent findings link fronto-temporal gamma electroencephalographic (EEG) activity to conscious awareness in dreams, but a causal relationship has not yet been established. We found that current stimulation in the lower gamma band during REM sleep influences ongoing brain activity and induces self-reflective awareness in dreams. Other stimulation frequencies were not effective, suggesting that higher order consciousness is indeed related to synchronous oscillations around 25 and 40 Hz.

Gah! What a confusing mess of conflicting concepts. The title says “self-awareness” but the first sentence talks instead about “conscious awareness”. It’s an elementary mistake to confuse consciousness with self-consciousness, or at least to conflate them without making an immediate qualification of why you are violating standard practice in so doing. While there are certainly theorists out there who are skeptical about the very idea of “1st-order” awareness being cleanly demarcated from “2nd-order” awareness (Dan Dennett comes to mind), it goes without saying this is a highly controversial position that cannot just be assumed without begging the question. Immediate red flag.

The first sentence also references previous findings about the neural correlates of “conscious awareness” being linked to specific gamma frequencies of neural activity in fronto-temporal networks. The authors say though that correlation is not causation. The next sentence then makes us believe the study will provide that missing causal evidence about conscious awareness and gamma frequencies.

Yet the authors don’t say that. What they say instead is that they’ve found evidence that gamma frequencies are linked to “self-reflective awareness” and “higher-order consciousness”, which again are concepts theoretically distinct from “conscious awareness”, unless you are pretheoretically committed to a kind of higher-order theory of consciousness. But even that wouldn’t be quite right, because on, e.g., Rosenthal’s HOT theory, a higher-order thought would give rise to first-order awareness, not lucid dreaming, which is about self-awareness. On higher-order views, you would technically need a 3rd-order awareness to count as lucid dreaming.

So please, if you are writing about consciousness, remember that consciousness is distinct from self-consciousness and keep your terms straight.

1 Comment

Filed under Academia, Consciousness, Random

Quote for the Day – The Lake Wobegon Effect – We Are All Above-Average

When drivers rated their ability behind the wheel, about three-quarters thought they were better than average. Strangely, those who had been in an auto accident were more likely to rate themselves as better drivers than did those whose driving record was accident-free.

Even stranger: In general, most people rate themselves as being less likely than others to overrate their abilities. These inflated self-ratings reflect the ‘better-than-average’ effect, which has been found for just about any positive trait, from competence and creativity to friendliness and honesty.

~ Daniel Goleman, Focus: The Hidden Driver of Excellence (2013), p. 74

See: http://en.wikipedia.org/wiki/Illusory_superiority

Leave a comment

Filed under Books, Psychology

Quote for the Day – The Attention Schema Theory of Consciousness

One way to approach the theory is through social perception. If you notice Harry paying attention to the coffee stain on his shirt, when you see the direction of Harry’s gaze, the expression on his face, and his gestures as he touches the stain, and when you put all those clues into context, your brain does something quite specific: it attributes awareness to Harry. Harry is aware of the stain on his shirt. Machinery in your brain, in the circuitry that participates in social perception, is expert at this task of attributing awareness to other people. It sees another brain-controlled creature focusing its computing resources on an item and generates the construct that person Y is aware of thing X. In the theory proposed in this book, the same machinery is engaged in attributing awareness to yourself: computing that you are aware of thing X.

~ Michael Graziano, Consciousness and the Social Brain

I’m planning on doing a write-up on this book soon. I could not put the book down and read it in a few days. Compared to most books on consciousness, Graziano’s central thesis is clearly stated, suitably modest in ambition, neurologically plausible, and theoretically compelling. I was impressed that Graziano applied his theory to explain “weird” aspects of human experience like out-of-body experiences, Mesmerism, religion, etc. I predict Graziano is going to be a big player in the consciousness debates from here on out. That I am really drawn to the theory is not surprising given its affinities with some things Julian Jaynes said, e.g. “It is thus a possibility that before an individual man had an interior self, he unconsciously first posited it in others, particularly contradictory strangers, as the thing that caused their different and bewildering behavior…We may first unconsciously (sic) suppose other consciousnesses, and then infer our own by generalization” (Origin, p. 217). Jaynes also explicitly proposed that some features of consciousness are analogs (models) of sensory attention, which is at the heart of Graziano’s theory, albeit not worked out as rigorously.

3 Comments

Filed under Books, Consciousness, Psychology

Quote of the Day – The Code of Consciousness

The code used to register information in the brain is of little importance in determining what we perceive, so long as the code is used appropriately by the brain to determine our actions. For example, no one today thinks that in order to perceive redness some kind of red fluid must ooze out of neurons in the brain. Similarly, to perceive the world as right side up, the retinal image need not be right side up.

~ J. Kevin O’Regan, Why Red Doesn’t Sound Like a Bell, p. 6

Leave a comment

Filed under Consciousness, Psychology

New Paper: In Defense of the Extraordinary in Religious Belief

Read it here: In Defense of the Extraordinary in Religious Belief

So this is a paper I wrote for Ron Mallon’s Culture and Evolution seminar. I’m really happy with how the paper turned out, and I believe this is the direction I want to go for my future dissertation project. The paper is really a response to some of Pascal Boyer’s claims about the importance of extraordinary religious experience in explaining the origins and cultural success of religious belief. For example, Boyer says:

Even if prophets were the main source of new religious information, that information would still require ordinary nonprophets’ minds to turn it into some particular form of religion…This is why we will probably not understand the diffusion of religion by studying exceptional people, but we may well have a better grasp of religion in general, including that of prophets and other virtuosos, by considering how it is derived from ordinary cognitive capacities. (Boyer, 2001, pp. 310-311)

This is a standard thing to say in the literature on the evolutionary origins of religion. Most psychologists who are trying to explain religious belief do so in terms of the operation of various ordinary cognitive mechanisms like the Agency Detection Device or our theory of mind capacities. The basic idea, then, is that we don’t need to posit any sort of “special” religious mechanism that serves as the generator of religious belief. According to what I am calling the Standard Cognitive Model (SCM) of religious belief, religious thoughts are really not that different from any other kind of cognitive operation. Crucially, the SCM is committed to the idea that the order of explanation is that you explain both religion in general and extraordinary experience in terms of the ordinary, and not the other way around.

It’s this emphasis on the “ordinary” that I am arguing against in the paper. My argument is basically this: we cannot use contemporary ratios of ordinary to extraordinary experience as a mirror of what that ratio might have been like in ancient times. Borrowing heavily from Jaynesian theory, I provide several lines of evidence for thinking that what we now consider extraordinary might have actually been quite ordinary in ancient times. If this is right, then we don’t need to think about extraordinary experience as being the exclusive domain of “religious specialists”, as Boyer is prone to think. Instead, we can think about extraordinary experiences such as hearing the voice of a god or demigod talk to you as being quite ordinary.

In the paper, I look at contemporary research on both the incidence of auditory hallucination in children and the factors that lead to the persistence of such hallucinations. What the research shows is that the best predictor of the persistence of voice hearing in children is whether they assign the voices to external sources. And prior to the recent invention of the concept of “hallucination”, all ancient voice hearers (like Socrates) would have automatically interpreted their experience as a communication from an external agent, namely, a god or demigod. Since such attributions are the key predictors of persistence, we can now imagine a society where upwards of 25% or more of adults are actively experiencing auditory hallucinations and interpreting them as messages from gods or demigods. Accordingly, would we still want to say that “extraordinary experience” is exceptional and the exclusive domain of religious specialists?

If this is at all historically accurate, then it looks like we can reverse the explanatory arrow of the SCM. Rather than extraordinary experiences being on the sidelines in determining the cultural success of religion, the familiar experience of auditory hallucination and the shared cultural narratives for interpreting such experiences would have played a much greater role in the spread of religion than the SCM allows. To respond to Boyer then, we can say that perhaps the reason why the “insights” of holy persons were widely accepted is because the ordinary population was already quite familiar with what-it-is-like to hear the voice of a god or demigod commanding you to do something.

1 Comment

Filed under Psychology, Theology

Strong and Weak Modularity

When evaluating the truth of the modularity thesis about the brain, it’s important to distinguish between two forms modularity can take: a strong form and a weak form. The strong form is the view that the brain is organized along the lines of a Swiss Army knife, with hundreds or thousands of modules like the “mate selection module”, “food detection module”, or “cheater detection module”, with each module running a dedicated task. The weak form is simply the thesis that you can turn off or take out some parts of the brain without shutting down the whole system. For example, weak modularity is the idea that if you removed the auditory cortex, your visual system would not completely crash, and vice versa.

The strong form is usually committed to things like “information encapsulation”. But there are two forms encapsulation might take: strong and weak. The stronger form says that any given module runs completely independently of other modules and, when running its processes, draws only on its own internal store of knowledge. This is supposed to be why the Müller-Lyer illusion can’t be turned off even when you know it’s an illusion. The weak form views encapsulation a little differently. On the weak view, each module is “talking” to a lot of other modules, and the idea is that when you have different modules talking to each other, new functions arise. The weak form thus sees modules built out of other modules, like a nested hierarchy. On this view, “encapsulation” has the wrong metaphorical connotations. Encapsulated seems to mean something like “isolated”. But on the weak interpretation, modules are not isolated at all; they are situated in a complex causal network of different modules. Moreover, the stronger form usually says that each module only really runs one process, e.g. the cheater detection module only detects cheating. On the weak view, however, it’s theoretically possible that a module could do more than one thing.

So when we look at task-based fMRI data using subtraction logic and are tempted to talk about a “theory of mind module” at one particular locus, we need to think about both the weak and strong forms of modularity and the weak and strong forms of information encapsulation. For the weak view of modularity, the theory of mind module is only modular because you could lesion it without shutting down the rest of the brain. And on the weak view of encapsulation, it’s more likely that theory of mind capacity stems from the powers of a distributed network of modules, with the one particular locus that is “subtracted” out also being capable of helping out in other things besides theory of mind. The strong view of modularity and encapsulation would say the particular locus that is “most active” is the place where theory of mind happens. Michael Anderson has recently done meta-analyses of fMRI data and concluded that what’s often going on is that cortical areas are redeployed to perform new tasks, so the idea that any given brain locus does just one thing is mistaken. Since the brain constantly recruits old circuits to do new tasks, the strong form of encapsulation is going to be wrong: each locus can participate in different tasks in a slightly different way.
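The subtraction logic mentioned above can be illustrated with a toy sketch. This is a deliberately simplified simulation with fabricated numbers, not a real fMRI pipeline: it generates noisy voxel signals under a task condition and a control condition, takes the voxelwise difference of means, and thresholds it. The resulting map of “most active” voxels is exactly the kind of contrast that tempts the strong-modularity reading of one locus, one function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated signal for 100 voxels over 20 trials each of a hypothetical
# theory-of-mind task and a matched control task (arbitrary units).
n_trials, n_voxels = 20, 100
control = rng.normal(0.0, 1.0, (n_trials, n_voxels))
task = rng.normal(0.0, 1.0, (n_trials, n_voxels))
task[:, 40:45] += 1.5  # pretend voxels 40-44 respond more during the task

# Subtraction logic: mean task signal minus mean control signal, per voxel.
contrast = task.mean(axis=0) - control.mean(axis=0)

# Thresholding the contrast yields the voxels "most active" for the task.
# The modularity debate is about what this localization actually licenses.
active = np.flatnonzero(contrast > 1.0)
print(active)
```

On the weak reading, nothing in this contrast shows that the suprathreshold voxels do *only* this task; it shows only that they do more of it here than in the control condition.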

Leave a comment

Filed under Philosophy, Psychology

New paper published: Consciousness, Plasticity, and Connectomics

The paper I co-authored with Micah Allen is finally out! It is published in the open-access journal Frontiers in Psychology (special topic issue on neural plasticity and consciousness in the subsection Frontiers in Consciousness Research). Download it for free here:

Consciousness, Plasticity, and Connectomics: The Role of Intersubjectivity in Human Cognition

The paper is a hypothesis and theory article, meaning that we develop a new operational definition of consciousness in addition to postulating novel hypotheses about the neural substrate of consciousness. The paper is a synthesis of diverse research traditions in the field of consciousness studies. We borrow equally from sensorimotor enactivists like Alva Noë and Evan Thompson, “global workspace” theorists like Bernard Baars, higher-order theorists like Rosenthal, Lycan, and Armstrong, social constructivists in the tradition of Vygotsky, and recent developments in the study of “mind wandering” or “meta-awareness” in the cognitive neurosciences. We take the best of all approaches and discard the worst.

How is this paper different from all the other articles on consciousness being published today? Besides our novel theoretical synthesis of diverse research traditions, we also take the time to map out a comprehensive mental taxonomy based on both phenomenological and empirical evidence. We also take the time to define exactly what we mean by the term “consciousness”. Our most basic idea is that there is a difference between prereflective and reflective consciousness. We claim that almost all animals are restricted to prereflective consciousness, whereas language-using adult humans are capable of this mentality plus reflective consciousness. Here is a table showing the qualitative differences between prereflective and reflective consciousness:

[Table: qualitative differences between prereflective and reflective consciousness (image no longer available)]

We contend that the prevailing theoretical spectrum in consciousness studies has often conflated these two phenomena and/or focused on one at the expense of the other. For example, we think that the Higher-order Representation (HOR) theorists have been trying to use reflective consciousness to explain prereflective consciousness, the “what-it-is-like” of an organism. In contrast to the higher-order theorists, we think that there are phenomenal feels (“what-it-is-likeness” or “qualia”) independently of whether there are any higher-order representations active in the brain. So although the HOR people are definitely on the right track insofar as they are interested in meta-awareness (rather than just awareness), we think they have been barking up the wrong tree by explaining “what-it-is-likeness” in terms of HORs. Micah and I contend that what-it-is-likeness is shared by all living organisms insofar as they have organized and unitary bodies. This mind-in-life thesis is taken directly from the enactivist sensorimotor tradition.

However, in contrast to the enactivist tradition, we don’t think that sensorimotor connectivity exhausts the phenomenon of consciousness. In fact, we believe that an overemphasis on embodied sensorimotor connectivity is likely to overlook or downplay the significance of reflective consciousness, which we argue is grounded by language and learned through exposure to narrative practice in childhood. We contend that HORs, although not the origin of what-it-is-likeness, do significantly change the phenomenal quality of what-it-is-likeness, giving rise to new forms of narratological subjectivity. As I mentioned in a previous post, there is good reason to believe that reflective consciousness gives rise to entirely new forms of phenomenal feeling, such as sensory quales (e.g. the experience of gazing at a pure red patch). Conscious pain itself could plausibly be seen as a side effect of reflective consciousness feeding back into prereflective consciousness, allowing for conscious suffering (meta-awareness of pain). In this respect, we think that the HOR theorists are perfectly right to insist that meta-awareness or meta-consciousness of lower-order mental states allows for the emergence of special forms of subjectivity. However, we side with HOR theorists like Peter Carruthers (and against van Gulick) in arguing that this meta-consciousness is not widespread in the animal kingdom, and is perhaps restricted only to those animals capable of language. As Andy Clark says,

“[T]hinking about thinking” is a good candidate for a distinctively human capacity – one not evidently shared by the non-language using animals that share our planet. Thus, it is natural to wonder whether this might be an entire species of thought in which language plays the generative role – a species of thought that is not just reflected in (or extended by) our use of words but is directly dependent on language for its very existence. (1997, p. 209)

So the philosophical significance of our paper lies in our synthesis of Higher-order Representationalism and sensorimotor theorists of consciousness. Moreover, we synthesize HOR theory with Global Workspace Theory and Dan Hutto’s Narrative Practice Hypothesis, which emphasizes the importance of embodied narrative learning as the substrate for complex folk psychological attitudes and social cognitive processing.

But this is just the philosophical significance of the paper. There is also empirical significance. Micah developed a novel understanding of the “Default Mode Network” and synthesized a great deal of current data in the cognitive neurosciences in terms of our distinction between prereflective and reflective consciousness. The devil is in the details here, so I highly recommend reading the paper for a full overview of the empirical novelty of our paper. Needless to say, we feel like our paper marks a theoretical breakthrough on both philosophical and empirical fronts. Our theory of consciousness is complex and multifaceted, which is appropriate given the target of what we are trying to explain.

3 Comments

Filed under Consciousness

On the relevance of phenomenology to cognitive science

I just started reading Shaun Gallagher and Dan Zahavi’s textbook The Phenomenological Mind, and I thought this was a particularly clear paragraph on the relevance of phenomenology to cognitive science.

Compare two situations. In the first situation we, as scientists who are interested in explaining perception, have no phenomenological description of perceptual experience. How would we begin to develop our explanation? We would have to start somewhere. Perhaps we would start with a pre-established theory of perception, and begin by testing the various predictions this theory makes. Quite frequently this is the way that science is done. We may ask where this pre-established theory comes from, and find that in part it may be based on certain observations or assumptions about perception. We may question these observations or assumptions, and based on how we think perception actually works, formulate counter-arguments or alternative hypotheses to be tested out. This seems somewhat hit or miss, although science often makes progress in this way. In the second situation, we have a well-developed phenomenological description of perceptual experience as intentional, spatial, temporal, and phenomenal. We suggest that starting with this description, we already have a good idea of what we need to explain. If we know that perception is always perspectivally incomplete, and yet that we perceive objects as if they have volume, and other sides that we cannot see in the perceptual moment, then we know what we have to explain, and we may have good clues about how to design experiments to get to just this feature of perception. If the phenomenological description is systematic and detailed, then to start with this rich description seems a lot less hit or miss. So phenomenology and science may be aiming for different kinds of accounts, but it seems clear that phenomenology can be relevant and useful for scientific work.

~ The Phenomenological Mind, Shaun Gallagher and Dan Zahavi, pp. 9-10

This general idea is echoed in Julian Jaynes’ quip that the attempt to find consciousness in the brain will inevitably fail unless you know what you are looking for in the first place.

2 Comments

Filed under Phenomenology, Psychology