Tag Archives: Dennett

Quote for the Day – Dennett on the Importance of Philosophers Knowing the History of Philosophy

The history of philosophy is in large measure the history of very smart people making very tempting mistakes, and if you don’t know the history, you are doomed to making the same darn mistakes all over again.

~Dan Dennett, Intuition Pumps and Other Tools for Thinking

Leave a comment

Filed under Books, Philosophy

Quote of the day 5-13-2012

“What convinces me that a cognitivistic theory could capture all the dear features I discover in my inner life is not any ‘argument’, and not just the programmatic appeal of thereby preserving something like ‘the unity of science’, but rather a detailed attempt to describe to myself exactly those features of my life and the nature of my acquaintance with me that I could cite as my ‘grounds’ for claiming that I am -and do not merely seem to be – conscious.” ~Dan Dennett, “Toward a Cognitive Theory of Consciousness”, in Brainstorms

Leave a comment

Filed under Random

Does a global workspace really exist?

Bernard Baars’ Global Workspace Theory (GWT) of consciousness is arguably the “hottest” theory of consciousness on the market right now. The essence of the theory is that most mental contents are nonconscious and localized to specific sensorimotor circuits; a nonconscious visual content, for example, is primarily localized to the visual cortex. For a mental content to become “conscious”, the GWT says that the localized nonconscious content must be made available to a globally distributed neuronal workspace, which integrates and associates that content with other localized contents to form the unified, multimodal representation that is our conscious experience, and which enables certain functions that can only operate on the basis of such global information. For example, when a human becomes conscious of a blue coffee mug, the GWT says that the localized circuits specific to the different sensory modalities must become “globally available” for use by a distributed network, such that the different sensory modalities become “unified” into a single conscious percept. The key idea of the GWT is thus the transition from nonconscious, local information to conscious, global information. The global information is said to now be in a “global workspace” whereby it can enable functions like (1) the integration of novel information into preexisting circuits, (2) working-memory functions such as inner speech and visual imagery, (3) diverse kinds of learning, (4) voluntary control enabled by conscious goals, and (5) access to the autobiographical self and self-referential articulation.
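The local-to-global transition at the heart of the GWT can be caricatured in a few lines of code. What follows is a toy sketch, not Baars’ actual model: the module names, salience values, and winner-take-all selection are all illustrative assumptions, meant only to show the structure of “local modules compete; the winner is broadcast to everyone”.

```python
# Toy sketch of the global-workspace idea: local modules hold nonconscious,
# module-private contents; one content wins a salience competition and is
# "broadcast" so that every module can access it. Purely illustrative.

class Module:
    def __init__(self, name):
        self.name = name
        self.local_content = None   # nonconscious, private to this module
        self.received = []          # contents broadcast to this module

    def perceive(self, content, salience):
        self.local_content = (content, salience)

def broadcast(modules):
    """Select the most salient local content and make it globally available."""
    candidates = [m.local_content for m in modules if m.local_content]
    if not candidates:
        return None
    winner, _ = max(candidates, key=lambda c: c[1])
    for m in modules:               # global availability stands in for "conscious" access
        m.received.append(winner)
    return winner

vision = Module("visual cortex")
touch = Module("somatosensory cortex")
vision.perceive("blue coffee mug", salience=0.9)
touch.perceive("warm ceramic", salience=0.4)

conscious_content = broadcast([vision, touch])
# The winning content is now available to every module, not just its source.
```

Note that in this sketch the “workspace” is not a place at all: it is just the fact that one content becomes available to all modules, which foreshadows the ontological worry raised below.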

The GWT has been borne out by numerous experiments in which consciously reported percepts are correlated with wide, globally distributed brain activation, whereas nonconscious percepts are correlated with less widely distributed activation in the localized sensory cortices. But as a philosopher, I am interested in questions of ontology. A question that interests me about the GWT, and which seems to have garnered little attention from its proponents, is whether or not the global workspace really exists. Now, in one sense of the term, the global neuronal workspace really exists insofar as there really is a “widely distributed network of brain circuitry” that has the functional properties associated with consciousness. But notice the terminology of the global workspace. What is the nature of this “space”? Is the space inside our skulls? Can we open up the brain and point to this workspace? Evidently not, for the “workspace” seems to refer to a functional space that is only experienced as a literally internal space. The only real “space” in the brain is the physiological space of brain tissue. But if you described to a layperson the nature of working memory, such as inner speech and visual imagery, and then asked them “where” in space such functions are located, they would likely tell you the workspace is inside their heads. Ask them how they know this and they will likely tell you they know it because they experience inner speech and visual imagery as taking place inside their heads. But as philosophers, we should be skeptical about deriving the reality of the workspace from our ordinary experience of the workspace. And it is this distinction between the “real” global distribution of neuronal circuitry and how we experience the workspace that captures my worry about the “reality” of the workspace.
For it is one thing to say that there is a global distribution of neuronal activity in the brain, and it is another to say that there really is a “spotlight” in our heads that “shines an attentional beam” onto a theater in our brains.

Proponents of the GWT far too often fall into the trap of conflating the metaphorical nature of such theater talk with the “globally distributed” talk of neuron populations. It is important here to keep distinct what we are trying to explain with the GWT and the mechanisms we use to explain that phenomenon. What we are trying to explain is the experience of ourselves as having a unified conscious experience that is coherent, stable, richly detailed, and continuous across time. This is what needs explaining. But a neuronal explanation of this phenomenon based on the global distribution of neural information is fully compatible with the unified conscious experience being an illusion. I think this is a point that Dennett has been trying to make for decades. But when most people hear the word “illusion”, they think that Dennett is trying to dismiss the phenomenon to be explained, namely, unified, narratologically continuous experience. What Dennett is actually trying to do is get people to realize that one of the capacities of the human brain is indeed the capacity to generate illusions. And the “unified” Cartesian theater “in our heads” is the biggest illusion of all. For it is equally possible that those globally distributed neural mechanisms could make us experience ourselves as being outside our heads.

Both clinical and experimental evidence bear this out. On the clinical side, we have reports of out-of-body experiences. Julian Jaynes recounts one such report:

That there is no phenomenal necessity in locating consciousness in the brain is further reinforced by various abnormal instances in which consciousness seems to be outside the body. A friend who received a left frontal brain injury in the war regained consciousness in the corner of the ceiling of a hospital ward looking down euphorically at himself on the cot swathed in bandages. Those who have taken lysergic acid diethylamide commonly report similar out-of-the-body or exosomatic experiences, as they are called. Such occurrences do not demonstrate anything metaphysical whatever; simply that locating consciousness can be an arbitrary matter.

On the experimental side, we have people like Thomas Metzinger who can induce out-of-body experiences through virtual-reality setups in which you are given visual feedback from a camera filming your own backside. After adjustment, your brain will make you feel like you really are floating outside your body. This tells us that the “location” of the experiential global workspace is more or less arbitrary. If the brain wanted to, it could make you feel like the global workspace exists three feet above your skull. Now, of course, there are good reasons why the brain generates the illusion as taking place inside your head, for this is tied into the closeness of volition and internal sensations with our bodily experience. But it is important to remember that thousands of years ago the “internal mind-space” was experienced as being in the heart, not the head.

So the moral of this post is that if we are going to develop an adequate scientific theory of consciousness like the GWT, we must be clear on the distinction between what is being explained and the mechanisms we posit to explain it. The phenomenon to be explained is the experience of a unified Cartesian theater in our heads. The explanation is a global distribution of localized, nonconscious information. But when asked whether the global workspace “really exists”, we have to distinguish between the workspace as experienced by us and the workspace as hypothesized by science. As we experience it, the “location” of the workspace inside our heads is arbitrary. As we explain it, the location of the workspace is the precise network of globally distributed brain* activity. So does a global workspace really exist? Yes. But it exists both as an illusion we experience and as a distributed pattern of brain activity.


*I say “brain” activity and not “neuronal” activity because there is growing evidence that astrocytes play a role in modulating neuronal information processing through modulation of glutamate uptake in the synaptic cleft.

1 Comment

Filed under Consciousness, Philosophy, Psychology

Thoughts on Dennett's distinction between personal and subpersonal levels of explanation

I recently purchased the anthology Philosophy of Psychology: Contemporary Readings, edited by José Bermúdez. The first article in the collection is by Dan Dennett and it’s called “Personal and Sub-personal Levels of Explanation”. It’s a classic Dennettian paper, both in style and content. His overall goal in the paper is to defend a sharp distinction between the personal and subpersonal levels of explanation. His primary example to illustrate the need for this distinction is the phenomenon of pain. For Dennett, the subpersonal level of explanation for pain is pretty obvious and straightforward: it involves a scientific account of the various neurophysiological activities triggered by afferent nerves responding to damage that would negatively affect the evolutionary fitness of an organism. The subpersonal LoE does not need to actually reference the phenomenon of “pain”. It merely explains the physical behavior of the system under the umbrella framework of evolutionary theory.

In contrast to the subpersonal LoE, the personal LoE for pain would explicitly use the word/concept “pain” in order to explain the phenomenon of pain. What does this involve? The personal LoE basically involves recognizing that for the person having the pain, the pain is simply picked up on, i.e., distinguished by acquaintance. If we ask a person to give a personal-level explanation of their pain, Dennett thinks that the best they can do is say “I just know I am in pain because I recognized that I was in pain because I had the sensation of pain because I just knew I was in pain because I was conscious of pain, and I just immediately know whether I am in pain or not, and so on.” It might seem like on this LoE there needs to be something additional, because the explanation seems strangely circular and nonexplanatory. Dennett thinks this is a feature, not a bug, of the personal level of pain, and that it absolutely cannot be avoided. Dennett thinks that if you are going to invoke the concept of pain at all in your explanation of a phenomenon, then you should automatically resign yourself to the fact that the explanation can never be in terms that violate the essential nature of pain as being something “you just know you have” without being able to give a mechanical account of how you know it. You just know.

Dennett thinks that if we are going to use/think about the concept “pain”, then we must be ready to make a sharp distinction between these two LoE. On the subpersonal level, you need not refer to the phenomenon of pain. You simply account for the physical behavior of the system in whatever scientific vocabulary is appropriate. On the personal level, you acknowledge that the term “pain” does not directly refer to any neurophysiological mechanism. In fact, it doesn’t refer at all. It picks out the phenomenon of “just knowing you are in pain”, in virtue of the immediate sensation of painfulness, which then produces “pain talk”. Of course, Dennett notes that we can sensibly inquire into the neural realizers for such “pain talk”, but for Dennett it is crucial to realize that on the personal LoE, pain-talk is not referential; rather, it only makes sense in terms of being the pain of a person (not a brain) who “just knows” they are in pain, when in pain.

My problem with Dennett’s sharp distinction is that he seems too ready to accept the personal-level phenomena as “brute facts”, not susceptible to further levels of mechanical/functional analysis. Take pain, for example. A.D. Craig has been developing a rather interesting view of pain as a homeostatic emotion, in the same way that hunger is a homeostatic emotion. The “feelings” of pain can then be likened to the “feelings” of hunger. On this account, human pain is both a sensation (based on ascending nerve signals) and a motivation (which leads to pain-avoidance behaviors). The sensory aspect of pain is clear enough, and no different from Dennett’s subpersonal account, but the motivational aspect of pain comes from the thalamocortical projections of the primate brain, which provide a sensory image of the physiological condition of the body and are more or less directly tied into limbic (i.e., motivational) pathways.

Crucially, this account of pain starts to provide an account of the personal feelings that goes beyond an acceptance of the “brute facts” of painfulness. The “just knowing” that you are in pain is analogous to the “just knowing” that you are hungry. The interoception of homeostatic indicators is reliable, since if it were not, it probably wouldn’t have evolved. Just as I “just know” I am perceiving/interacting with my laptop right now, if I were in pain, I would “just know” I am in pain. This is because pain is a homeostatic emotion generated by the interoception of homeostatic indicators, just like hunger is a feeling generated by the interoception of homeostatic indicators, and the feeling of knowing the laptop is there in front of me is generated by exteroception of the actual laptop. Think about the “pain” of being cold. The regulation of temperature in the body is obviously a homeostatic process, and the process of regulation includes both a sensory component (the feeling of being cold) and a homeostatic motivational state (the motivation to do something about being cold). Pain works the same way. It has both a sensory component (which we feel) and a motivational aspect (pain leads to avoidance behaviors). And here we can start to see what a functional explanation of the personal level would look like. As Craig says,

In humans, this interoceptive cortical image engenders discriminative sensations, and it is re-represented in the middle insula and then in the right (non-dominant) anterior insula. This seems to provide a meta-representation of the state of the body that is associated with subjective awareness of the material self as a feeling (sentient) entity – that is, emotional awareness – consistent with the ideas of James and Damasio.

It seems like this “meta-representation” which generates feelings of self-hood and associated cognitive processes of a self-referential nature could lead to the feelings of personhood referenced in the personal LoE. So although we might still be able to rescue the sharpness of Dennett’s distinction between the different LoE, it seems like the distinction gets blurred and becomes unhelpful when you start talking about the meta-representational functions which give rise to the associated mental phenomena of personal level pain-feelings and pain-talk for adult human beings.

Leave a comment

Filed under Consciousness, Philosophy, Psychology

Forget Quining Qualia; We need to start Jaynesing Qualia!

Yes, this post is about qualia, that oh-so-enigmatic subject of discussion in contemporary philosophy of mind. The debates are heated, positively oozing with philosophical deftness in argumentation, distinctions about distinctions upon distinctions, arguments for and against the existence of qualia, about the definition of qualia, about just about everything that can be said about qualia. So what are qualia? Do they exist? Is it a coherent concept? Where did the concept come from? Is it a piece of metaphysical baggage left over from prescientific soul theory, or does it have grounding in the world that can be experienced, introspectively or experimentally, and confirmed by the intersubjective community? These are tough questions.

First: definitions. Most philosophers today define qualia as the “subjective properties” of experience itself. Qualia are “intrinsic”, “internal” (most say internal, though some have recently said external; it turns out both are wrong, as you will see later), “private”, “subjective”, and so on. Philosophers report that when they introspect on their experience, they are aware of qualia, or that they are conscious of having qualia. They are aware that there is “something it is like” to have a particular experience while they are introspecting on that experience. Talk of sensations is a species of qualia metaphysics. A good way to get a sense of what philosophers are talking about is to think about what it would be like to consciously experience the sensation “red” as you introspect on your experience of gazing at a red apple on your desk under normal, well-lit conditions. Or imagine what it is like to have a toothache, or to feel the warmth of a fire on a cold night.

Next, some properties of qualia. The most enigmatic property of qualia is their privateness. It doesn’t seem like I can, in principle, know what it is like to have the experience of another person (barring some neurophysiological oddity like conjoined brains or something). I can certainly make educated guesses about what it is like to experience someone else’s perspective (novels and psychology are two good tools for this), but there is no way to really double-check or confirm what I think their experience is like. What’s curious about qualia privateness is that under normal social conditions, we use language to either express or hide what our experience is like: what kinds of sensations we are having (e.g., whether we are in pain), what we are feeling, what our emotional mood is, what we believe, desire, intend, and so on. Moreover, most people’s minds are amazingly adept at picking up nonverbal body cues about mental states, since much emotion gets directly “leaked” in complex feedback through external bodily components/schemas (especially the face, body posture, eyes, etc.). If someone is standing in the corner feeling uncomfortable, it is likely that everyone else, if they pay attention, will be able to perceive that he or she is uncomfortable, and be correct in their judgment.

So, do qualia exist? This seems like an obvious “Yes”. You would have to be clinically insane to “quine” (eliminate) qualia from your metaphysical baggage, right? Well, Daniel Dennett is a pretty smart fellow, and he seems to have a lot of good insights about consciousness and qualia, so let’s think about why he would say that qualia are simply a “trick” of the brain, an illusion we can’t really help but experience, and have false (or confused) beliefs about. How could the common person have false and confused beliefs about his or her own consciousness? Isn’t that what he or she is most familiar with? Absolutely not, as this long and insightful quote from Julian Jaynes illustrates perfectly (to me at least):

The final fallacy which I wish to discuss is both important and interesting, and I have left it for the last because I think it deals the coup de grâce to the everyman theory of consciousness. Where does consciousness take place?

Everyone, or almost everyone, immediately replies, in my head. This is because when we introspect, we seem to look inward on an inner space somewhere behind our eyes. But what on earth do we mean by “look”? We even close our eyes sometimes to introspect even more clearly. Upon what? Its spatial character seems unquestionable. Moreover we seem to move or at least “look” in different directions. And if we press ourselves too strongly to further characterize this space (apart from its imagined contents), we feel a vague irritation, as if there were something that did not want to be known, some quality which to question was somehow ungrateful, like rudeness in a friendly place.

We not only locate this space of consciousness inside our own heads. We also assume it is there in others’. In talking with a friend, maintaining periodic eye-to-eye contact (that remnant of our primate past when eye-to-eye contact was concerned in establishing tribal hierarchies), we are always assuming a space behind our companion’s eyes into which we are talking, similar to the space we imagine inside our own heads where we are talking from.

And this is the very heartbeat of the matter. For we know perfectly well that there is no such space in anyone’s head at all! There is nothing inside my head or yours except physiological tissue of one sort or another. And the fact that it is predominantly neurological tissue is irrelevant.

This thought takes a little thinking to get used to. It means that we are continually inventing these spaces in our own and other people’s heads, knowing perfectly well that they don’t exist anatomically; and the location of these “spaces” is indeed quite arbitrary. The Aristotelian writings, for example, located consciousness or the abode of thought in and just above the heart, believing the brain to be a mere cooling organ since it was insensitive to touch or injury. And some readers will not have found this discussion valid since they locate their thinking selves somewhere in the upper chest. For most of us, however, the habit of locating consciousness in the head is so ingrained that it is difficult to think otherwise. But, actually, you could, as you remain where you are, just as well locate your consciousness around the corner in the next room against the wall near the floor, and do your thinking there as well as in your head. Not really just as well. For there are very good reasons why it is better to imagine your mind-space inside of you, reasons to do with volition and internal sensations, with the relationship of your body and your “I” which will become apparent as we go on.

That there is no phenomenal necessity in locating consciousness in the brain is further reinforced by various abnormal instances in which consciousness seems to be outside the body. A friend who received a left frontal brain injury in the war regained consciousness in the corner of the ceiling of a hospital ward looking down euphorically at himself on the cot swathed in bandages. Those who have taken lysergic acid diethylamide commonly report similar out-of-the-body or exosomatic experiences, as they are called. Such occurrences do not demonstrate anything metaphysical whatever; simply that locating consciousness can be an arbitrary matter.

Let us not mistake. When I am conscious, I am always and definitely using certain parts of my brain inside my head. But so am I when riding a bicycle, and the bicycle riding does not go on inside my head. The cases are different of course, since bicycle riding has a definitely geographical location, while consciousness does not. In reality, consciousness has no location whatever except as we imagine it has.  ~ The Origin of Consciousness, p. 44-46

I hope this quote shows why, instead of Quining qualia, we need to Jaynes them! Consciousness is a complex mental phenomenon that greatly complicates matters when thinking about qualia. Take this having of a “mind-space”. Now subtract it. You are sleepwalking. A zombie. What is it like to be you without your mind-space? Clearly you can still perceive, but can you consciously feel those perceptions? It doesn’t seem like it. But should we say that the zombie doesn’t have qualia? I don’t think that follows, since it is intuitive to me that there is something it is like to lack a mind-space, and in fact I think experiencing the world without an actively imagined mind-space is the norm in the animal kingdom; it is humans and their mind-spaces that are rare. So it is not obvious that we need to Quine qualia. Dennett was wrong about this, because he didn’t see clearly enough how reflective consciousness changes the “what it is like” of prereflective consciousness. Introspecting on qualia is itself a rare cognitive feat, since the animal at the watering hole is not thinking to itself “Man, I better watch out for predators”; it simply is watching out for predators. Introspection on experience changes the what-it-is-like of experience, and many (but not all) philosophers who introspect on their experience and talk about qualia have not adequately addressed the difference made by the special act of introspection itself. Jaynesing qualia helps us see both what needs to be explained (the difference between having and not having a mind-space) and how to explain it.


Filed under Consciousness

The Myth of Double Transduction and some thoughts on Dennett

[The idea of information processing] sometimes leads to serious confusions. The most seductive confusion could be called the Myth of Double Transduction: first, the nervous system transduces light, sound, temperature, and so forth into neural signals (trains of impulses in nerve fibers) and second, in some special central place, it transduces these trains of impulses into some other medium, the medium of consciousness! That’s what Descartes thought, and he suggested that the pineal gland, right in the center of the brain, was the place where this second transduction took place–into the mysterious, nonphysical medium of the mind. Today almost no one working on the mind thinks there is any such nonphysical medium. Strangely enough, though, the idea of a second transduction into some special physical or material medium, in some yet-to-be-identified place in the brain, continues to beguile unwary theorists. It is as if they saw — or thought they saw — that since peripheral activity in the nervous system was mere sensitivity, there had to be some more central place where the sentience was created. After all, a live eyeball, disconnected from the rest of the brain, cannot see, has no conscious visual experience, so that must happen later, when the mysterious x is added to mere sensitivity to yield sentience.

~Daniel Dennett, Kinds of Minds, p. 72

Dennett’s Kinds of Minds has been on my to-read list for quite some time and I am glad that I am finally getting around to reading it. Although I am still on the fence about the philosophical utility of the so-called “Intentional stance” and the metaphysical agnosticism it seems to lead to, I am very much sympathetic to Dennett’s ideas on minds, especially his view of the difference between animal minds and human minds and his emphasis on the importance of language for transforming sensitive-reactive systems into minds proper. Dennett also seems to perfectly understand the looming threat of Cartesian dualism behind even the most hard-nosed scientific reductionisms. Understanding the Myth of Double Transduction is crucial for understanding why the Neural Correlates of Consciousness is a bankrupt research program that starts from the illicit assumption that phenomenal experience is somehow “produced” or “generated” in the brain like a special material substance.

Coming back to metaphysical agnosticism, though, I am troubled by Dennett’s willingness to call anything an intentional system so long as we can appropriately treat it as if it were an intentional system. This “stance view” seems to waver on the real metaphysical question of demarcating “true minds” from pseudominds. Presumably, Dennett holds onto the stance view because he thinks that robots could, in principle, have genuine minds, and anything except a stance-oriented, functionalist position would amount to some kind of biological chauvinism. However, I’m not sure that functionalism necessarily implies a stance-oriented view. It seems to me that we could use a kind of microfunctionalism to make a strong demarcation between real minds and pseudominds (like thermometers), while still preserving a sense of mind that an advanced robot could theoretically possess in the future. Dennett thinks this press for realism and philosophical clarity leads to all kinds of chauvinisms, but I don’t think such a chauvinism is at odds with functionalism, provided we are clear about the kinds of functions unique to biological systems, or at least very difficult to achieve artificially (autonomy, self-maintenance, homeostatic regulation, etc.). Instead of saying that an intentional system is merely whatever can be appropriately labeled as if it were a mind (while remaining agnostic about what it really is), we could instead offer a genuine demarcation for a mind, and say that robots or thermometers either fail or succeed in meeting this pattern, and their metaphysical status can thereby be secured (I think thermometers fail to qualify as minds and at best are pseudocognitive systems). However, we could still account for our propensity to overestimate the extent to which inanimate objects have minds, as well as for the explanatory utility of taking the “intentional stance” as a late-blooming evolutionary adaptation (or, most likely, an exaptation).
In my opinion, Dennett buys too much into Jamesian pragmatism, which seems to waver on metaphysical issues for the sake of philosophical productivity (“The intentional stance is such a useful way of talking!”). I want to know what minds really are, independently of any stance we might take towards them. But such a realism about minds certainly doesn’t necessitate dualism, nor does it necessitate an essentialism about minds, biological chauvinism, or abandonment of the functionalist position.

Just my thoughts.


Filed under Consciousness, Philosophy

Dreyfus Strikes Again

Heterophenomenology: Heavy-handed sleight-of-hand

Abstract: We argue that heterophenomenology both over- and under-populates the intentional realm. For example, when one is involved in coping, one’s mind does not contain beliefs. Since the heterophenomenologist interprets all intentional commitment as belief, he necessarily overgenerates the belief contents of the mind. Since beliefs cannot capture the normative aspect of coping and perceiving, any method, such as heterophenomenology, that allows for only beliefs is guaranteed not only to overgenerate beliefs but also to undergenerate other kinds of intentional phenomena.

I thought this was an interesting critique of Dennett’s heterophenomenology. If you don’t know, heterophenomenology is a research methodology that acts as “a bridge – the bridge – between the subjectivity of human consciousness and the natural sciences.” Essentially, the heterophenomenologist is an objective gatherer and interpreter of first-person subjective reports who doesn’t construe the reporter as completely authoritative.

What this interpersonal communication enables you, the investigator, to do is to compose a catalogue of what the subject believes to be true about his or her conscious experience.

So, the heterophenomenologist interprets all intentional phenomena as beliefs. This is a problem for Dreyfus and Kelly because it overgenerates mental content. They use the example of going out of a door to illustrate their point on overgeneration. If you ask someone going out of a door whether they “believed there was a chasm on the other side”, they might say yes, but in reality, as they were going out of the door, they were thinking no such thing but were merely responding to the “to-go-out” solicitation given by the door. No beliefs were involved in the act at all, just pure motor intentionality.

This last point on “motor intentionality” is crucial, because Dreyfus and company also accuse the heterophenomenologist of undergenerating intentional contents.

But to deny that skillful coping involves belief is not to deny that it lacks intentional content altogether. There is a form of motor-intentional content that is experienced as a solicitation to act. This content cannot be captured in the belief that I’m experiencing an affordance. Indeed, as soon as I step back from and reflect on an affordance, the experience of the current tension slips away. Since beliefs cannot capture this normative aspect of coping and perceiving, any method, such as heterophenomenology, that allows for only beliefs is guaranteed not only to overgenerate beliefs but also to undergenerate other kinds of intentional phenomena.

Leave a comment

Filed under Philosophy, Psychology