
Man in Vegetative State Shows Brain Activity to Movie: What Does It Mean?

In a recent study, Naci et al. investigated how the brain responds to an 8-minute Alfred Hitchcock movie. In healthy subjects they found that frontal and parietal areas associated with executive functioning were active during the most suspenseful parts of the movie. They then showed the same movie to two patients diagnosed as being in a vegetative state, one of whom had been in that state for 16 years. In one of the patients they found that “activity in a network of frontal and parietal regions that are known to support executive processing significantly synchronized to that of healthy participants”. In other words, the vegetative man’s brain “tracked” the suspense-points of the movie in the same way that the brains of healthy controls did. They reasoned that the patient was therefore consciously aware of the video, despite being behaviorally unresponsive:

The patient’s brain activity in frontal and parietal regions was tightly synchronized with the healthy participants’ over time, and, crucially, it reflected the executive demands of specific events in the movie, as measured both qualitatively and quantitatively in healthy individuals. This suggested that the patient had a conscious cognitive experience highly similar to that of each and every healthy participant, while watching the same movie.
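To make the notion of “synchronized” concrete, here is a toy sketch of an intersubject-correlation-style analysis. This is not Naci et al.’s actual pipeline (the subject count, sampling rate, and noise levels below are all invented for illustration); it just shows the basic logic of correlating a patient’s regional time course with the healthy group’s average:

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 240  # e.g., 8 minutes sampled every 2 seconds (assumed)

# A shared "suspense" signal that drives every viewer's executive regions.
suspense = np.sin(np.linspace(0, 8 * np.pi, n_timepoints))

# Simulated healthy controls: shared signal plus individual noise.
healthy = suspense + 0.5 * rng.standard_normal((12, n_timepoints))
group_mean = healthy.mean(axis=0)

# A "synchronized" patient also tracks the shared signal (plus noise).
patient = suspense + 0.5 * rng.standard_normal(n_timepoints)

r = np.corrcoef(patient, group_mean)[0, 1]
print(f"patient-to-group correlation: r = {r:.2f}")
```

A high correlation here means the patient’s regional activity rose and fell with the group’s, which is the sense in which the patient’s brain “tracked” the movie.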

But what’s the connection between executive functioning and conscious experience? The authors write:

The “executive” function of the brain refers to those processes that coordinate and schedule a host of other more basic cognitive operations, such as monitoring and analyzing information from the environment and integrating it with internally generated goals, as well as planning and adapting new behavioral schemas to take account of this information. As such, executive function is integral to our conscious experience of the world as prior knowledge is integrated into the current “state of play” to make predictions about likely future events.

Does this mean that executive functioning is always conscious? Is the unconscious brain incapable of “monitoring and analyzing information from the environment” and “integrating” that information with goals? Color me skeptical, but I believe in the power of the unconscious mind to perform these functions without the input of conscious awareness.

Several examples come to mind. In the “long-distance truck driver” phenomenon, people can drive automobiles for minutes if not hours without the input of conscious awareness. Surely driving requires “monitoring and analyzing information from the environment”, in addition to integrating that information with goals and adapting new behaviors to deal with novel road conditions.

Another example is automatic writing, in which people write whole intelligent paragraphs without the input of conscious attention, and the “voice” of the writing is distinct from the person’s normal personality, channeling the personalities of deceased persons or famous literary figures. People would hold conversations with their automatic writing, indicating that the unconscious writer was responding to the environment and surely “monitoring and analyzing information”. I’m not aware of any brain imaging studies of automatic writing, but I would not be surprised if frontal and parietal regions were active, given the complexity of handwriting as a cognitive task. The same goes for long-distance truck driving.

My point is simply to raise the question: can executive function happen unconsciously? Naci et al. say that executive function is “integral” to conscious experience. That might be true. But is conscious experience integral to executive functioning? Maybe not. There is a litany of complex behaviors that can be performed unconsciously, all of which likely recruit frontal and parietal networks of the brain. We can’t simply assume that because information integration occurred, conscious awareness was involved. To make that inference would require us to think that the unconscious mind is “dumb” and incapable of integrating information. But there is plenty of reason to think that what Timothy Wilson calls the “adaptive unconscious” is highly intelligent and capable of many “higher-order” cognitive functions, including monitoring, integrating, planning, and reasoning.



Thoughts on the fundamental problem of representation

I’ve been thinking a lot about the so-called “fundamental problem of representation”: what is a representation and how does it work as a representation? How do representations represent? We need to first answer what a representation is. Many philosophers seem to agree that it has something to do with “standing in”. The obvious example is a photograph. A photograph of a cat is a representation of a cat because the photograph “stands in” for the real cat. How does this work? Well, it seems to need an interpreter to interpret the photograph as a representation of a cat. But if we want to explain how the brain represents something, it obviously won’t do to posit an interpreter, for this is just a homuncular explanation.

So the photograph example is kind of a nonstarter when it comes to understanding how the brain represents something. Many philosophers believe that in order for the brain to perceive the world, it must form a representation of the world. In this way, perceiving the world is seen as forming a model of the world, which is used to compute action plans. But is this really a scientific explanation? When a brain perceives a cat, what does it mean for brain activity to “stand in” for that cat? This “standing in” function is obscure. For this reason, Eric Dietrich prefers to talk about representations as “mediators”. A representation is a mediator between a stimulus and behavior.

This makes sense to me, for I can imagine what it means for something to mediate between a physical perturbation and physical behavior. But doesn’t a thermometer also mediate between a stimulus and a behavior? What makes neurons different from thermometers? Aren’t neurons just complex bio-machines? And machines are machines. But I’m convinced there is a difference between a thermometer and a brain. I think representations in the brain are genuinely mental whereas I do not think this is true of the thermometer. Why? I haven’t quite worked this out, but I think the difference is that the mechanisms of mediation in the brain are responding to meaningful information, whereas the mechanisms of mediation in the thermometer are not responding to meaning at all. Meaning is mental. Mental mediations are mediations in response to meaning. We can thus make a fundamental distinction between nonmental representation and mental representation.
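To make the mediator idea concrete, here is a minimal sketch of a thermometer as a bare mediator: a mechanism that maps a stimulus onto a behavior with nothing in the loop responding to meaning. The names and thresholds are invented for illustration:

```python
# A thermometer as a bare mediator: stimulus in, behavior out.
# The mapping is purely causal; nothing in the loop cares what the
# reading means.

def thermometer(temperature_c: float) -> str:
    if temperature_c > 30.0:
        return "display: HOT"
    if temperature_c < 10.0:
        return "display: COLD"
    return "display: MILD"

print(thermometer(35.2))  # display: HOT
```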

But what is meaningful information? How can we understand meaning ontologically? I think the concept of affordances is useful here. Let’s start simple. Think of sucrose molecules. Now imagine a thermometer-like machine that has sensors designed to respond to sucrose, mechanisms of mediation (“processing”), and an output behavior (turning on a green light). We have no reason to think of this machine as instantiating any truly mental mediations. Its mediations respond to the sucrose purely causally. But now imagine a bacterium. It too has biochemical sensors designed to discriminate sucrose, mechanisms of mediation for processing it, and output behaviors. Some people think I’m crazy for holding this view, but I genuinely think that in the case of the bacterium, the mechanisms of mediation are mental. Why? Because the sucrose affords something to the bacterium, namely, nutrition. The sucrose is thus meaningful to the bacterium, whereas it is not meaningful to the machine. There is an affective valence even at the level of the bacterium; it is just hard to imagine. But put yourself in the “mind” of a bacterium. Its whole world has a valence. It is attracted to or repelled by physical perturbations. Unlike the machine, though, the bacterium perceives these perturbations as genuinely meaningful, for the sucrose affords an opportunity to help maintain a norm (the norm of survival).

I think this emphasis on survival and affective valence is important, because I think of it as a means to solve the frame problem. Having a norm which regulates behavior enables the mechanisms of mediation to be responsive to more than just brute causal information. It enables the perception of affordances. The norm of survival is the Ur-desire, the spark of mentation. Admittedly, the boundary between life and nonlife is fuzzy, so it’s not quite clear where to draw the line, but that there is a line is undeniable. I don’t doubt that robots could in principle instantiate their own norms to solve the frame problem, but I imagine those norms will look similar to biological ones.
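Here is the contrast as I picture it, in sketch form. Everything below is invented for illustration; the point is only that the machine’s mediation maps stimulus to behavior directly, while the bacterium’s mediation is regulated by an internal state against a survival norm, which is what gives the stimulus its valence:

```python
# The detector machine: a brute causal mediator, like the thermometer.

def sucrose_machine(sucrose_detected: bool) -> str:
    return "green light ON" if sucrose_detected else "green light OFF"

# The bacterium: the same kind of input, but the mediation is regulated
# by a norm (survival). The stimulus is evaluated for what it affords
# relative to the organism's internal state -- the valence in question.

class Bacterium:
    def __init__(self, energy: float = 0.3):
        self.energy = energy  # internal state the survival norm is defined over

    def valence(self, sucrose_detected: bool) -> float:
        if not sucrose_detected:
            return 0.0
        return 1.0 - self.energy  # hungrier -> sucrose affords more

    def behave(self, sucrose_detected: bool) -> str:
        return "swim toward" if self.valence(sucrose_detected) > 0.2 else "keep tumbling"

bug = Bacterium(energy=0.3)
print(sucrose_machine(True))  # green light ON
print(bug.behave(True))       # swim toward (sucrose affords nutrition)
```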

But what about neurons? Many contemporary philosophers of mind think that “mental stuff” only happens in sufficiently complex brains. I think this is a mistake, for any association of representation with strictly neural processes will fail to answer the fundamental problem of representation. I think Dietrich is right that the concept of mediation is the right way to understand representation, but neural mediation is just one form of mediation. Evolutionarily speaking, neural mediation was highly adaptive because it allowed organisms to increase the complexity of the mediation between stimulus and behavior. More complexity in mediation leads to “deeper” processing, i.e., more complex behavior. Neural processing allowed the mediations to become “abstract”. By this I mean that the mechanisms of response become sensitive to more “global” features of a stimulus profile. This is called an increase in invariance, for the “higher-order” circuits fire steadily across differences in low-level stimulus detail.

Think of perceiving a chair. We can recognize a chair from almost any viewing angle. As we change angles, the lower-level mechanisms of response fire only in response to very specific, low-level features of the chair, while at higher levels of processing the response stays steady regardless of those lower-level features. In this way, we say that the representations in the brain have become more “abstract”. Language is the ultimate in abstract mediation, for linguistic “tagging” of the world enables us to respond to very abstract kinds of information, particularly in social cognition, where we collapse human behavior into abstract folk-psychological categories. The term “mind” is one of these ultimate abstractions, for it abstracts over all physical behavior and gives us a new category of response: person. Such linguistic representations are meta-representational insofar as they allow organisms to represent representations, to mediate mediations. Many theorists, including myself, think that it is meta-representation which separates the mental life of humans from that of other animals.
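A toy sketch of stimulus-invariance may help (again, purely illustrative, with made-up “detectors”): the low-level responses below differ at every viewing angle, while the higher-level readout responds to a global property of the feature vector that stays constant across those variations:

```python
import math

def low_level_features(angle_deg: float) -> list[float]:
    # Stand-in for viewpoint-specific detectors: outputs change with angle.
    rad = math.radians(angle_deg)
    return [math.sin(rad), math.cos(rad)]

def high_level_response(features: list[float]) -> str:
    # Invariant readout: sin^2 + cos^2 == 1 at every angle.
    magnitude = sum(f * f for f in features)
    return "CHAIR" if abs(magnitude - 1.0) < 1e-9 else "UNKNOWN"

for angle in (0, 45, 170, 300):
    feats = low_level_features(angle)
    print(angle, [round(f, 2) for f in feats], high_level_response(feats))
# The low-level features differ at every angle; the readout does not.
```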

In summary, the fundamental problem of representation is to understand what a representation is and how it works as a representation. Representations are stand-ins for stimuli. A stand-in for a stimulus is a mechanism of mediation between stimulus and behavior. There are two fundamental types of mediation: mental and nonmental. Nonmental mediation is ubiquitous in the physical world, whereas mental mediation is rare. Mental mediation is mental because the mechanisms of mediation are sensitive to affordance information, which is grounded by norms, the most evolutionarily basic being the norm of survival. Mental representations thus form a continuum of possible abstraction, with neural representation being only one kind of mediation, one that enables deeper abstraction through stimulus-invariance. There is thus nothing mysterious about representation. The term itself is a shorthand description of the complex mechanisms of mediation intrinsic to an entity.
