Monthly Archives: June 2011

Steven Crowell defending phenomenology from the critique of Speculative Realism

From the Figure/Ground interview:

Let’s get technical. In one of his books, Guerrilla Metaphysics, Graham Harman, one of the co-founders of the philosophical movement known as Speculative Realism, makes a powerful critique of phenomenology. First, he identifies some inherent contradictions: “The cumulative lesson of this book so far is that phenomenology is caught at the midpoint of two intersections: (1) On the one hand, we deal only with objects, since sheer formless sense data are never encountered; on the other hand, an “objects-only” world could not be tangible or experienceable in any way, since objects always elude us. (2) On the one hand, phenomena are united with our consciousness in a single intentional act, while on the other hand they are clearly separate, since they fascinate us as end points of awareness rather than melting indistinguishably into us.” Second, he accuses phenomenology of remaining a “philosophy of access” and neglecting to recognize what his colleague Levi R. Bryant has called a “Democracy of Objects.” Harman writes: “Of any philosophy we encounter, it can be asked whether it has anything at all to tell us about the impact of inanimate objects upon one another, apart from any human awareness of this fact. If the answer is “yes,” then we have a philosophy of objects. This does not require a model of solid cinder blocks existing in a vacuum without context, but only a standpoint equally capable of treating human and inhuman entities on an equal footing. If the answer is “no,” then we have the philosophy of access, which for all practical purposes is idealism, even if no explicit denial is made of a world outside of human cognition.” What do you make of Harman’s critique of phenomenology and his new brand of realism?

Having not read this book (though a very good grad student in the English department who was taking my phenomenology seminar introduced me to some of its ideas), I don’t think I can comment responsibly on it, but the characterization of phenomenology seems insensitive to the crucial distinction between transcendental-phenomenological idealism and metaphysical or subjective idealism. In simplest terms: I reject the idea that phenomenology does not give us the world as it is. It is indeed a “philosophy of access,” but it is access to the world as it is. And I would also argue that it is a standpoint “equally capable of treating human and inhuman entities on an equal footing,” if by “equal footing” one means: attending to the things themselves, not setting up one entity as the measure of all the others, but letting entities show themselves as they are. However, I find the idea that one could do this without any concern for “access,” in a broad sense, very naive. For instance, it seems plausible to say that physics tells us about “the impact of inanimate objects upon one another, apart from any human awareness of this fact,” but presumably this is not what the author means. There are the standard examples from quantum mechanics about the influence of the observer, and the like. But beyond that, there is the fact that physics is a theory and a set of practices which provide normative conditions that allow for distinctions to be made between genuine interactions and mere “artefacts” of one’s standpoint, etc. Do these theories and practices count as a mode of “awareness”? If so, then physics must still be too idealistic. But I doubt that any scientific or philosophical position is conceivable that does not involve theories and practices that establish such normative conditions, and if that is so, then Speculative Realism will also involve some reference to conditions of our “awareness” of the objects it references. Transcendental phenomenology strives to do justice to this fact, and if that is a kind of “idealism,” it is one I can live with. As Husserl pointed out, the “transcendental subject” is not the “human being” as this is envisioned in the question, and I would argue that the same holds for Heidegger’s position. I am not impressed by positions that try to circumvent this point by appeal to primordial “events” or to a kind of post-humanism that most often merely borrows – very selectively – from biology and the like to answer philosophical questions. One does not need to make a fetish out of method to believe that certain questions need to be approached differently than others; in particular, philosophical questions have a reference to access built into them, and there’s nothing wrong with that. As for a “democracy of objects,” where does the “subject” fit in? If it is just another object, then we have lost our grip on the distinction.

I think Crowell presents a very nice reply to the critique Speculative Realists usually bring against “philosophies of access”. Do yourself a favor and read the full interview (although I disagree with his critique of information processing, and some of the things he says about naturalism are a little disappointing).


Filed under Phenomenology, Philosophy

Has philosophy made any progress at all in the last 3000 years?

Philosopher Eric Dietrich has recently written a paper called “There is no progress in philosophy”. I’m not even going to spend that much time talking about the various ironies in the paper, such as the fact that it’s a philosophy paper trying to make progress in philosophy by arguing that there has been no progress in philosophy. Actually, I do want to talk about one delicious irony in the paper. Dietrich says:

Philosophers…suffer from the Illusion of Explanatory Depth. IOED is the universal error that all of us make in believing that we know more about something than we actually do.

The irony is that Dietrich has not kept up to date with the latest developments in all philosophical fields, and thus cannot say with authority that “no real progress is made, none”. He claims to know more about the field of philosophy than he actually does, since he seems to speak on behalf of all fields, including my own, philosophy of mind. I think there have been great leaps and bounds of progress in the philosophy of mind/psychology. Anyone who knows anything about the history of psychology will tell you that the philosophy of psychology has advanced considerably since the field discovered the entire ocean of mental activity that is the unconscious mind. In terms of the philosophy of mind, this is the equivalent of discovering the New World. I thus think Dietrich has some serious catching up to do and is patently wrong when he says “Philosophy does not move forward at all. It is exactly the same today as it was 3000 years ago.” Dietrich seems to be under the mistaken belief that the only explanation of the mind that philosophy can give is something weak-kneed like supervenience. I daresay that my own work and that of others on the philosophy of the prereflective mind in the tradition of Julian Jaynes constitutes a considerable advance in the philosophy of mind by spelling out plausible functions and mechanisms of consciousness. So when Dietrich says “Philosophy is essentially destructive”, I say “Speak for yourself.”

ht: Pete Mandik


EDIT:

After looking at Dietrich’s bio, it appears that his main area of specialty is actually philosophy of mind. This attack on philosophy as a discipline really strikes me as odd, then. Does he really want to say that the philosophy of mind put forward by Plato is just as good as the philosophy of mind being put forward by himself? That seems absurd. If there were no room for improvement, what would the motivation be for even doing philosophy at all? Why is he a professor of philosophy? Progress in the generation of concepts seems not just possible, but something which has actually happened. Sure, there are some philosophers who might fall into Eric’s category of “nonprogressive”, but why throw the baby out with the bathwater? I guess the piece is really more a polemical exercise in metaphilosophy than anything else.


Filed under Philosophy

Review of Nicholas Humphrey's new book "Soul Dust: The Magic of Consciousness"

Nicholas Humphrey’s new book Soul Dust: The Magic of Consciousness is all about trying to solve the so-called “Hard problem of phenomenal consciousness”. And Humphrey does indeed take on an immense task. He says that “I felt challenged to have one more go at writing the earth-shattering book—or, at any rate, the book that shows the fly the way out of the fly bottle.” [I apologize for the lack of page numbers in this review, but I am reading the Kindle edition and no page numbers are provided – G.] So was my world shattered upon reading about his ideas? Not really. Which isn’t to say that I think he made no progress on solving the Hard problem. So what is the Hard problem exactly? As Humphrey puts it, “The hard problem is to explain how an entity made entirely of physical matter—such as a human being—can experience conscious feelings.” And what are conscious feelings? Humphrey sticks close to the philosophical orthodoxy in setting out the explanandum of phenomenal consciousness in terms of “what-it-is-like-ness”. He says that “A subject is “phenomenally conscious” (or plain “conscious”) when and if there is something it’s like to be him at this moment.”

So for Humphrey the big problem is to give a naturalistic account of why, for humans, there is “something it is like” to interact with the world when we are awake. I have expressed before in my blog posts and research papers my dissatisfaction with this way of setting up the problem of consciousness in terms of what-it-is-like-ness, primarily because I am convinced that it makes sense to say that there is something it is like to be entirely nonconscious. But this terminological quibble hasn’t prevented me from appreciating Humphrey’s approach to the problem. In fact, despite this confusion about defining phenomenal consciousness, I think Humphrey is on the right track as far as what consciousness “is” and what it “does”. The very fact that Humphrey thinks that consciousness “does” something at all makes me like him tremendously because coming up with a functional, adaptive benefit for consciousness is the most promising route for the naturalization of consciousness. So I have to give Humphrey props for not buying into the idea of a “philosophical zombie”, a being that is functionally identical to humans yet lacks consciousness. Humphrey rightly sees that this concept is “daft” because we actually can give an account of how having “conscious feelings” changes the functional/behavioral profile of any organism which has the capacity to be conscious.

So if Humphrey thinks that consciousness “does” something, what is it exactly and what does it do? As he says, “To experience [conscious] sensations “as having” these [phenomenal] features is to form a mental representation to that effect…Thus “consciousness” (or “being conscious”), as a state of mind, is the cognitive state of entertaining such mental representations.” As we can see from these quotes, Humphrey seems to be taking a kind of higher-order approach to consciousness in that he makes a distinction between creatures who merely respond to stimuli and creatures who respond to that response with a kind of higher-order mental representation. As he says, “In short, for the subject to have a sensory experience that is like something is just for him to experience it as what it is like.” Humphrey thus makes an important distinction between reactions to stimuli and the experience of neurally entertaining that reaction in a higher-order cognitive space. As he says, “In modern human beings, [conscious] sensation—for all its special phenomenal features—is still essentially the way in which you represent your interaction with the environmental stimuli that touch your body: red light at your eyes, sugar on your tongue, pressure on your skin, and so on.” So for Humphrey, it is possible to have nonconscious perception (what I have called “reactivity”). A unicellular organism, for instance, reacts to stimuli in a meaningful and coherent way, but it does not react to its own reaction in terms of creating an “as-if” representation of that experience.

But how do we represent our experience? As Humphrey puts it, “Consciousness is no more or less than a piece of magical ‘theater’.” So Humphrey thinks that conscious experience involves experiencing ourselves as being part of a nonphysical, magical “theater”. But it is crucial to realize that Humphrey thinks that, from the perspective of science, this theater is but a trick or illusion of the brain. Two points can then be made: “First, from the subject’s point of view, consciousness appears to be a gateway to a transcendental world of as-if entities. Second, from the point of view of theory, consciousness is the product of some kind of illusion chamber, a charade.” This is a classic Dennettian thesis, and Humphrey acknowledges that the philosopher he is closest to is Dennett. And this is why I like Humphrey’s approach to consciousness despite my hang-ups about his denying nonconscious animals a “what it is like”.

Perhaps my biggest problem with Humphrey’s book is not the “consciousness is a fiction” approach itself, but rather his air of originality when he rhetorically asks “Is this the clever new idea we need?” Indeed, Humphrey seems to be rather proud of his “new” idea that consciousness is essentially a cognitive phenomenon generated by higher-order representations which represent reactions to stimuli as being “magical” or “theater-like”. But as readers of my blog know, Julian Jaynes said it first, and better.

Indeed, Jaynes had the idea that consciousness is in a sense a “virtual illusion” that takes on the phenomenal properties of something like a theater. Jaynes called this theater the “analog mind-space”, and theorized that it was generated by an “as if” function. So when Humphrey says that our response to stimuli “has become a virtual expression occurring at the level of a virtual body, hidden inside your head”, he is essentially redescribing Jaynes’ theory that consciousness involves, in part, the creation of an “Analog I”, a virtual analog of the body, hidden “inside” the head in the form of an analog mind-space, which is itself an analog of the physical environment our senses have become familiar with. And when Humphrey says that “our ancestors were nonconscious before they were conscious”, this is not a new idea at all, but one that Jaynes argued for over 30 years ago. So when Humphrey says that “Thus, for you to have the [conscious] sensation of red means nothing other than for you to observe your own redding”, this is more or less the same distinction Jaynes developed between reactivity and consciousness of reactivity. Jaynes thought that most animals are capable of intelligently reacting to their environment without being conscious. This is the same idea Humphrey argues for.

And another area where I think Jaynes is superior and (still) cutting edge is that he isolated the evolution of this “theater function” as a product of language. The “as if” functions which generate a “virtual body” and a “virtual theater” are, for Jaynes, dependent on the special analogical capacities of lexical metaphors. Without metaphorical language, we would never have been able to generate the “as if” functions which integrate sensory reactions with higher-order representations. So not only does Jaynes’ theory have more flesh on it in terms of historical specificity and the concreteness of its empirical claims, it is also more cutting edge insofar as it treats consciousness as developmentally dependent on certain metaphorical capacities being in place (a hot topic right now in the cog sci world).

In sum, I highly recommend Soul Dust for anyone looking to get a better understanding of consciousness. But if you want an even better account of what consciousness is, how and when it evolved, and how it works, then you must do yourself a favor and read Jaynes’ magnum opus The Origin of Consciousness in the Breakdown of the Bicameral Mind.


Filed under Consciousness, Philosophy, Psychology

Speculations on the Neurocomputational Foundations of Consciousness

Take a typical example of conscious thought: imagining a Christmas tree planted on the moon. Take 5 seconds and do it now: imagine in your mind a tree on the moon. This is something you have never experienced, yet it is easily imaginable by your consciousness. What kinds of operations are involved in this conscious thought? Although it sounds strange at first, Julian Jaynes argued that the cognitive basis for this kind of thinking (as well as all the other instantiations of consciousness) is grounded in metaphor and metaphorical processes. Despite the common assumption that metaphor is limited to mere linguistic frills, like icing on the cognitive cake, metaphor is actually a deep principle of human cognition. It governs not just how we speak and write, but how we think and comprehend reality in a very primordial way. As James Geary puts it in his new book I Is an Other: The Secret Life of Metaphor and How It Shapes the Way We See the World,

We think metaphorically. Metaphorical thinking is the way we make sense of the world, and every individual metaphor is a specific instance of this imaginative process at work. Metaphors are therefore not confined to spoken or written language.

This is an essentially Jaynesian thesis. Jaynes thought that “[Consciousness] operates by way of analogy, by way of constructing an analog space with an analog “I” that can observe that space, and move metaphorically in it”. Let’s go back to our example of imagining a Christmas tree planted on the moon. When we execute this conscious operation, it involves several things of importance. First, there is the spatialization of the objects in the scene insofar as the tree is spatially separated from the lunar ground, the individual ornaments are spatially separated from each other, and so on. Moreover, the very fact that you are imagining a spatial arrangement indicates the importance of spatialization for consciousness. The space in our minds (what Jaynes called our “mind-space”) is not as detailed as the space we can perceive by opening our eyes. The conscious space-worlds are mere excerptions, as Jaynes called them. The visual details of the conscious excerption of our inner mind-space pale in comparison to looking at a real Christmas tree. Yet the conscious mind-world is there in our minds, with some detail, some specificity. For Jaynes, the real behavioral world of perception and action is a model or source for the construction of conscious imagery and thought. In a sense, then, Jaynes thought that all conscious operations are a form of modeling, or analogizing. We take something we know very well (the physical spatial environment), and based on our knowledge of this world the mind constructs an analogous space which is useful for higher-order cognitive operations such as the famous mental rotation task.

Another component of the conscious operation of imagination is the fact that you are imagining the Christmas tree from a particular perspective. This is the perspective of the “mind’s eye”, what Jaynes called the “Analog ‘I’”. When we imagine anything in our conscious mind-space, it is always done from the perspective of an “I” which is doing the imagining from a particular mental vantage point. The “model” or “source” for this analog I is of course our own bodies and the experience of our bodies interacting in a physical environment, an experience in which we are familiar with having a certain limited perspective on the space before us.

To explain the mechanisms of consciousness, then, we have to develop a theory of how analog spaces are constructed in the brain along with analog bodies to perceive these analog spaces. We would also have to develop a theory of how these analogical processes generate the phenomenal associations which Jaynes called “paraphrands”, and which we know of as “conscious feelings”. The mind-space world of the moon and Christmas tree is a paraphrand of the analogical construction of mind-space and the analog I. Explaining consciousness in this way would seem to involve a theory of how the brain uses metaphor at the neurocomputational level. Since metaphor is based on the recycling of basic perceptuo-motor schemas of familiar stimuli burnt into the neural circuitry for the purpose of comprehending unfamiliar stimuli and generating adaptive behavior, it seems like we could use the neuronal recycling hypothesis of Stanislas Dehaene to explain how metaphor works, and thus how consciousness constructs “analogs” of everything it has experienced. This might be related to the fundamentally “echo-y” or “loopy” nature of cognition that Hofstadter has emphasized (and it is telling that Hofstadter himself has claimed that analogy is the “core” of cognition). This would point to the “networkological” or “intrinsic” nature of brain activity, which only gets modified by exposure to the world rather than completely specified by it. The neurocomputational explanation of consciousness would then look like a neurocomputational explanation of how analogical thinking in the brain works, particularly the analogizing of things and events spatially, especially our experience of time and of our own autobiographical self. Much of this analogizing cognition is based on linguistic skills, but the underlying cognitive cross-modal mapping is probably prelinguistic in nature. By spatializing time, we can develop a narratively grounded, “story-like” understanding of the world which allows us to consciously assign causes and reasons to things, leading to theory of mind and the development of propositional-attitude thinking (ascribing beliefs, desires, intentions, etc. to yourself, others, or inanimate objects). This ability is of course dependent on the linguistic-analogical capacities of human articulatory cognition. The functions of consciousness to explain are excerption, narratization, spatialization, and conciliation (the putting of things into a unified object in your conscious mind-space, such as the unified mental image of a Christmas tree planted on the moon).

Jaynes says consciousness is a “metaphor-generated model”. In order to learn more about consciousness then, I need to learn more about metaphor, and how metaphor works neurocomputationally. It seems like the “mapping” of metaphor, of abstract (unknown) onto concrete (known), is the core process which allows for the “constructing” capacity of modern conscious thought (the ability to effectively close your eyes and consciously construct whole mental vistas). Could Andy Clark’s “epistemic actions” = Jaynes’ “metaphored actions”?

To speculate on the neurocomputational origins of analogical thinking, could there be a link between “convergence” or “association” areas in higher-cortical processing and the computational processing of metaphorical comprehension, which is essentially saying “X = Y”? This “crosstalk” between domain-specific modalities is crucial to the complex intelligence of typical human cognition, and now we might see a way to link such informational convergence to the very process of consciousness itself. This would fit with the original meaning of metaphor as “to carry across”. Metaphorical thinking “carries across” domain-specific schemas and integrates or “associates” (conciliates?) that information into another domain, allowing for novel comprehension of novel stimuli, which would have adaptive success and provide a scaffolding for the evolution of conscious operations in an unconscious world.
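To make the notion of “carrying across” a little more concrete, here is a minimal, purely illustrative sketch in Python (my own toy, not anything proposed by Jaynes or Dehaene; all the domain names and relations are invented for illustration). Relations encoded for a familiar source domain are mapped onto an unfamiliar target domain, and whatever structure has no counterpart in the target is projected as a candidate inference, roughly the way the imagined moon scene gets fleshed out from what we already know about ordinary physical scenes.

# Toy sketch of cross-domain "X = Y" mapping; all names are illustrative.
SOURCE = {  # familiar domain: a perceived physical scene
    ("tree", "stands-on", "ground"),
    ("ornament", "hangs-from", "tree"),
    ("body", "views", "scene"),
}
TARGET = {  # unfamiliar domain: the imagined scene on the moon
    ("imagined-tree", "stands-on", "moon-surface"),
    ("analog-I", "views", "mind-space"),
}
CORRESPONDENCES = {  # the metaphorical identifications, "X = Y"
    "tree": "imagined-tree",
    "ground": "moon-surface",
    "ornament": "imagined-ornament",
    "body": "analog-I",
    "scene": "mind-space",
}

def carry_across(source, target, mapping):
    """Project source relations into the target domain ("metaphor" literally
    means "to carry across") and return the ones the target does not yet
    contain: candidate inferences about the unfamiliar domain."""
    projected = {
        (mapping.get(a, a), relation, mapping.get(b, b))
        for (a, relation, b) in source
    }
    return projected - target

for inference in sorted(carry_across(SOURCE, TARGET, CORRESPONDENCES)):
    print("candidate inference:", inference)
# prints: candidate inference: ('imagined-ornament', 'hangs-from', 'imagined-tree')

Nothing in this toy says anything about how cortical convergence zones actually implement such mappings; it is only meant to show the bare logical skeleton of integrating structure from one domain into another.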

My thoughts on this subject are kind of scattered. I am unsure of where metaphor as cross-computational convergence and metaphor as linguistic mapping come apart. Perhaps the nonlinguistic “core analogy” processing was the neural scaffold for verbal analogy to take hold and become useful. The brain was already performing cross-modal convergence in a limited sense. Maybe language hijacked these processes and “recycled” the crossing-circuits for a new purpose: linguistic mapping and associating based on communal norms of symbolic information exchange.

p.s. An interesting game is to try and find all the metaphors I naturally used in this post (e.g. thoughts = scattered objects).


Filed under Consciousness, Psychology

On the Nature and Origin of Mental Content

In the philosophy of mind, the issue of “content” has always been a central and guiding concern. What is mental content? Colloquially, when we speak about mental contents, we are referring to “what’s inside” someone’s mind. Laypersons talk about content like beliefs, desires, perceptions, fantasies, intentions, etc. If someone is looking at a red apple, we might say they have the content of a red apple stored in their mind in a perceptual format. This can lead to the production of belief content, such as content about the location, color, and tastiness of the apple, as well as beliefs about its nutrition or domestic origins. If someone knows where the apples are in the kitchen, we might say that the person has mental content stored in their mind, which can be brought up and manipulated when it is appropriate to do so. Hence, the popularity of “information manipulation” accounts of mental content that use metaphors based on digital computing.

There are many philosophical puzzles associated with content. Is mental content a physical process? Or is it immaterial? If so, what is the relationship between immaterial content and physical brains? How is content structured? Is it structured in terms of symbolic patterns and meanings, or is it structured in some kind of causal-functional manner? Where does content come from? Is content manipulation based on semantic or syntactic properties? Are there different kinds of content? If so, how many and what is their evolutionary trajectory? Is there content unique to human minds? What human content is shared across species? What is the relationship between scientific accounts of content and “folk psychological” accounts? Is the folk psychological account conceptually structured in the same way as a physical account is? How does content “work”? Is content necessarily representational? How does content “refer” to things or how can it be “about” things, especially if it is just physical? These are but a few of the philosophical questions associated with the problem of content. The issue is deep and crisscrosses multiple philosophical disciplines.

Personally, I’m greatly interested in the problem of (1) the varieties of content in the animal kingdom, (2) the question of human-unique content (let’s call it H-content), and (3) the question of the evolution of H-content in relation to content that is shared across species.

Let’s approach issue (1) first: the varieties of content. This is the kind of inquiry where drawing distinctions is essential. Theorists draw the broadest distinction between kinds of content in different ways. Some make a distinction between nonconceptual and conceptual content, which looks rather like the distinction between nonpropositional and propositional content, which in turn is rather like the distinction between nonverbal and verbal content. You can also make similar distinctions between nonreflective and reflective content, automatic vs. controlled content, nonsymbolic vs. symbolic content, unconscious vs. conscious content, and so on.

If you extract a common core from all these broad distinctions, you start to see an answer to the question about H-content. If you can come up with a good argument to show how verbal, propositional, symbolic, reflective, and conscious content depends on certain cultural-linguistic scaffolding being in place, then it looks like H-content is somehow tied up with our ability to reflect and to verbally articulate propositional content.

This ability to articulate propositional attitudes grounds a distinction between two types of information: what we can call a more “causal-functional” type of information, related to the nonrandom covariation of physical matter (which leads to explanations in terms of causal functions), and a “symbolic-semantic” type of information, which depends on there being a communal practice of discursive communication through content-bearing symbols with conventional meaning. For example, the handsign for “distant food source” came to bear its symbolic content in virtue of a system of communal norms which makes the signer pragmatically responsible for adhering to a shared communicative system, one which has slowly accumulated through the ratcheting processes of cultural/behavioral evolution.

This kind of normative content, based on communitarian and conventional norms mediated through behavioral learning strategies like imitation, can be called semantic content, and it “works” in virtue of a kind of holistic, rational, “web-of-belief” normativity which seems to be the unique product of growing up in a linguistic-symbolic cognitive niche while possessing the appropriate biological learning dispositions for the acquisition of such symbolic content. This semantic content stands in contrast to a kind of nonsemantic content that is applicable to the perceptual-motor cognition we share with nonhuman animals. Thus, when we say that a frog has the mental content for “a fly”, we do not mean that it understands fly-content symbolically in the way a human would perceive/understand “a fly”. Rather, the mental content is purely functional-causal insofar as the frog’s content for a fly must be ontologically understood strictly in terms of how the stimulus-perturbation of a “small black dot”, or something like it, starts a causal chain reaction which eventually leads to evolutionarily adaptive behaviors like flicking out its tongue and capturing the object. It is difficult if not impossible to properly describe the mental content of the frog in the vocabulary of propositional attitudes such as beliefs, desires, intentions, etc. Although it kind of makes sense to us to say that the frog “believed” the fly was good for it, and that’s why it flicked its tongue out, there is another sense in which these ascriptions of semantic content to the frog are ontologically inappropriate. The frog didn’t flick its tongue out for a reason; rather, the flick was a reaction to a physical perturbation in the ambient energy field which surrounds the frog.
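As a purely illustrative caricature (my own toy code, not anything proposed in the post or the literature; the names and mappings are invented), the contrast might be put like this: causal-functional content is exhausted by a stimulus-response mapping, whereas symbolic-semantic content is fixed by a shared convention that could have been otherwise.

# Toy contrast between causal-functional and symbolic-semantic content.
# All names and mappings here are invented for illustration only.

def frog_reaction(stimulus: str) -> str:
    """Causal-functional content: a small dark moving dot simply triggers
    an adaptive motor response; no norms or reasons are involved."""
    return "flick tongue" if stimulus == "small black dot" else "do nothing"

# Symbolic-semantic content: the handsign means "distant food source"
# only relative to a communal convention, which could have been otherwise.
COMMUNAL_CONVENTION = {"handsign-A": "distant food source"}

def interpret(sign: str, convention: dict) -> str:
    """The sign's meaning is fixed by the shared convention the community
    holds signers responsible to, not by the sign's physical properties."""
    return convention.get(sign, "uninterpretable sign")

print(frog_reaction("small black dot"))              # flick tongue
print(interpret("handsign-A", COMMUNAL_CONVENTION))  # distant food source

The only point of the caricature is that swapping out the convention dictionary changes what the sign means, whereas nothing analogous holds for the frog’s wiring.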

Although it can be pragmatic and useful to ascribe semantic content to frogs, it is not appropriate if it is true that semantic content is generated by the linguistic function. If genuine semantic content is only generated in bio-cultural situations analogous to that of Homo sapiens sapiens, wherein there is a biological readiness for learning symbolic cognition and a community of systematic language users, then it is simply not appropriate to ascribe certain kinds of content to many animals (or humans). Perhaps dolphins or other animals possess content similar to that of humans, in which case we will have to modify our conception of H-content, allowing for convergent evolution or types of proto-H-content.

I should add that just because the distinction at hand is between a causal-functional type of content shared by nonhuman animals and a symbolic-semantic type of content specific to humans, it does not follow that symbolic-semantic content fails to play a functional role, or is not amenable to functional analysis. Although the explanation works differently, symbolic cognition is amenable to functional analysis, albeit an analysis which recognizes the unique way in which symbols operate in our cognitive economy. Such a functional analysis will probably look different from a functional analysis of a toaster, since “language functions” are incredibly complex and structured by equally complex information processing. The explanation of such functions will probably require norms of explanation which differ from those of physics; the relevant norms are more closely related to those of psychology and biology, where mechanisms are more important than universal covering laws and we recognize the ontological reality of normativity.


Filed under Consciousness, Philosophy, Psychology

Initial thoughts on David Eagleman's new book Incognito: The Secret Lives of the Brain

I just purchased David Eagleman’s new book Incognito: The Secret Lives of the Brain and I like what I’m reading so far based on the first chapter. What immediately strikes me about Eagleman’s ideas is that his understanding of consciousness is very Jaynesian. Compare these two paragraphs:

“Reactivity covers all stimuli my behavior takes account of in any way, while consciousness is something quite distinct and a far less ubiquitous phenomenon. We are conscious of what we are reacting to only from time to time. … We are continually reacting to things in ways that have no…component in consciousness whatever.”

“Brains are in the business of gathering information and steering behavior appropriately. It doesn’t matter whether consciousness is involved in the decision making. And most of the time, it’s not. Whether we’re talking about dilated eyes, jealousy, attraction, the love of fatty foods, or the great idea you had last week, consciousness is the smallest player in the operations of the brain. Our brains run mostly on autopilot, and the conscious mind has little access to the giant and mysterious factory that runs below it.”

Which quote is which? It’s hard to tell, isn’t it? (Answer: the first is Jaynes, the second is Eagleman.) Eagleman seems to be arguing for an essentially Jaynesian thesis: consciousness is not ubiquitous in the daily life of humans (and is almost entirely absent from many of our animal cousins, as well as newborn infants); it flickers in and out, hovering over the surface of the deep unconscious ocean, occasionally getting access to the abbreviated, filtered, narratized version of information that consciousness operates over and then feeds back into the unconscious system.

In one of his most lively metaphors, Eagleman likens our conscious mind to a newspaper. Imagine all the economic, social, and political activity that is going on in the world at any given time. It would be impossible for anyone to gather or comprehend all that information. So what do we do? We read a newspaper filled with headlines and articles that condense that mountain of information into digestible, easy-to-understand bites. The reader uses the newspaper to gather useful information without getting bogged down in the huge complexity of reality. But Eagleman points out that we are curious readers, for we read the headline and take credit for coming up with the thought ourselves. As Eagleman puts it, “You gleefully say, ‘I just thought of something!’, when in fact your brain performed an enormous amount of work before your moment of genius struck. When an idea is served up from behind the scenes, your neural circuitry has been working on it for hours or days or years, consolidating information and trying out new combinations. But you take credit without further wonderment at the vast, hidden machinery behind the scenes.”

Here’s another takeaway message from Eagleman: “One does not need to be consciously aware to perform sophisticated motor acts.” As I mentioned before, this is an essentially Jaynesian thesis. Just from reading the first chapter, I can already see that Eagleman understands perfectly what consciousness is not: it is not at the center of our mental lives when scaled against the entirety of the unconscious mind. It only appears to the conscious mind that it is at the center of the show. This is a firm and convincing neural trick, one very hard to overcome without deliberately or inadvertently tampering with the neural machinery through drugs or worse. The illusion of centrality and “inwards looking outwards” generates a phenomenon of unified experience along with a narrative, autobiographical identity over time. While it is true that we have unified conscious experience and a sense of conscious selfhood, especially when we reflect on that self, it is true only in the sense that we experience ourselves as having unified experience, not in the sense that our experience actually is consciously unified. A closer examination reveals that the unified experience is a neurological artifact of consciousness knitting itself over the unconscious mind in the form of “newspaper headlines”, i.e., narratives summarized and constructed after the fact. Eagleman uses the example of baseball. A fastball flies from the mound to the batter in four-tenths of a second. This is far too fast for a narratized headline to be useful in directing behavior. Luckily, the unconscious is quick enough to respond; otherwise no one could ever hit a fastball.

But does this mean that consciousness is a mere epiphenomenon, lagging behind but not exerting any influence of its own? No, not at all. Just because consciousness comes after the fact doesn’t mean that it has no causal effects. After all, newspaper headlines have causal force insofar as the consolidated information hits the brain and leads directly or indirectly to new behavior. Imagine you saw a newspaper headline announcing that the world was about to end. This would have instant behavioral effects. Although the headline was generated “after the fact”, it still has causal force insofar as the digestion of that headline by the conscious mind allows behavioral shortcuts to be made through higher-order categorization and planning. So the fact that consciousness “lags behind” and deludes itself into thinking it runs the whole show does not imply that it has no effect on the show at all. It does have an effect, a great one actually. But part of that effect is the feeling of being more than just a neurological newspaper. The newspaper wants to think that it is more than just a narratized summary in higher-order packaging.


Filed under Consciousness