Monthly Archives: May 2011

Some thoughts on Weisberg's new paper "The Zombie's Cogito: Meditations on Type-Q Materialism"

“One line of defense for the materialist pursuing a reductive explanation of consciousness is to claim that, despite appearances, qualia can be functionally characterized. The antimaterialist rejection of a functional characterization is based on introspection and reflection on the results of introspection. We access qualia introspectively and then reflect on their functionality, concluding that qualia and function can come apart at least in principle. But must we accept all that introspective reflection has to say about this matter? Perhaps introspection fails to reveal the underlying functional nature of qualia and thus misleads us when we reason about function and conscious feel. So long as the materialist can produce a plausible functional account of the mechanisms of first-person access leading us astray, there is no need to accept an irreducibly nonfunctional element in conscious experience. A reductive explanation of consciousness, one that effectively “locates” consciousness in a physical world, can then proceed.” ~ Josh Weisberg, “The Zombie’s Cogito: Meditations on Type-Q Materialism”

I like this quote from Weisberg. He seems to be “Jaynesing” qualia in the way I have previously outlined on my blog. Crucially, Weisberg recognizes that introspection about qualia can mislead us about the actual nature of qualia in virtue of being theory-laden. If we introspect on qualia with the background theory that qualia are nonfunctional, then we will, unsurprisingly, discover in our introspection that qualia are nonfunctional. However, if we had a functional characterization of qualia from the get-go, then perhaps introspection on qualia would report that qualia do, in fact, have functional properties. As Weisberg says, “The materialist is claiming, on this line of argument, that qualia are functional even though they do not seem functional…Qualia, like other aspects of the mind, may not be as they appear.”

I really like this approach to the zombie argument. It basically argues that armchair introspection is not a reliable guide to discovering the essence of qualia. Just because, when we reflect on qualia, it seems that qualia are nonfunctional, that doesn’t mean that qualia actually are nonfunctional. I would probably nuance this a little, though, by specifying how biological functions generate experiential (qualia) properties differently from how a toaster generates experiential properties (if it generates any at all). Although both the organism and the toaster have functional properties, it seems like the organism would have a totally different type of qualia in virtue of its homeostatic and self-regulating nature. It seems, then, that biological autonomy is a great source of experiential distinction between ourselves and inanimate objects, so much so that it becomes questionable whether it is appropriate to even say that inanimate objects have any experience at all. I concede that it might be appropriate, but the gulf is so great as to be almost conceptually unbridgeable. We simply cannot imagine what it is like to be a toaster, since we have no experience of being so…stone-like.

In responding to the zombie argument, it is important to note that Weisberg does not deny that zombies are conceivable. Instead, he claims that those who conceive of zombies are conceiving of a world where a theory is false (the theory being that qualia are nonfunctional). But Weisberg claims that it is an empirical matter whether these conceived zombies are metaphysically possible. Weisberg, on solid ground, then claims that according to our best empirical science, zombies are not possible, since the zombie scenario is based on a false theory or belief (namely, that qualia are nonfunctional, which science rejects). Since empirical mind science tells us that qualia are functional, we can claim that zombies are conceivable, but not metaphysically possible. It only seems like zombies are possible if, prior to introspecting on qualia, you believe that qualia are nonfunctional. Or you could introspect on qualia, come to believe that qualia are nonfunctional, and then conclude that zombies are possible. But since we have no reason to believe that introspection is the most reliable way of determining the essence of qualia, it is quite possible, even likely, that such introspection is giving rise to a delusion or false belief.

Weisberg is thus committed to there being an appearance/reality distinction at the level of consciousness itself. In other words, instead of saying that it is impossible to be deluded about your own consciousness (since on this view consciousness just is whatever it appears to be), Weisberg wants to claim that qualia can appear to us one way and in fact be another way. That is, we can be deluded about our own consciousness. As Weisberg says,

“There seem to be coherent cases of both unconscious pains (pains lacking the appearance of pain) and “false positives” of pain (states appearing to be pain, but are not). We might have a headache all day that we fail to notice at times while we’re engrossed in work. Or we might suffer an injury in a dangerous, stressful situation, and only notice that we’ve been in pain at a later, calmer moment. And we might mistake another feeling for pain.”

Weisberg then conducts a thought experiment to prove his point. Imagine René Descartes’s zombie twin, RZ. Since zombies have all the same beliefs as their counterparts, we can suppose that RZ would be able to carry out the same process of methodological skepticism as the real Descartes. Remember, zombies are defined as having no seemings. The real Descartes’s line of reasoning ends up concluding that although it seems like he is perceiving, he might not be (he could be deluded by a demon). The important part is that he believes that it seems to him that he is perceiving. Accordingly, RZ would reach the same conclusion, namely, that something seems a certain way to him. But now we have the strange result that zombies have seemings as well, despite zombies being defined as having none. This leads to the conclusion that, for all we know, we might be zombies just like RZ, since the fact that we have seemings no longer rules this out. As Weisberg says, “We might have full empirical and rational confidence that we are not zombies, despite the fact that “all is dark inside” and we are not in fact phenomenally conscious!” Accordingly, “The “zombie’s cogito” shows us that we can conceive of beings that seem to be in pain but are not.”

There is more to Weisberg’s argument than I feel like writing about, but I think I have captured the gist. To me, the most important upshot of his paper is the defense of an appearance/reality distinction for our inner world of consciousness. He argues convincingly that we can have false beliefs about how things seem to us. It might seem that qualia are nonfunctional when we introspect, but we are quite wrong about this according to our best theoretical frameworks in the mind sciences.



Filed under Consciousness, Philosophy

A thought about memory and intention

Imagine that you are reading at 3pm and you suddenly remember that you have a pressing appointment at 4pm. The thought simply pops into your mind and completely surprises you. What’s curious about this example is that the process of remembering was carried out without your intending it to happen. You did not do the remembering, since your conscious mind was focused on reading and understanding the text. It was the nonconscious mind that did the remembering and then forced the results of its processing into the input-tray of consciousness. As you were reading, your nonconscious mind informed the conscious mind that it had just carried out a remembering process and that the result of that process was the content that you have a pressing appointment at 4pm. Your consciousness then completely reroutes itself and starts to focus on getting to the appointment on time.

I like this example because it illustrates how many mental processes are being carried out below the surface of consciousness. We usually think that remembering is something we consciously do. In a way this could be analytically true if we simply defined remembering as requiring our consciousness to be involved. But if we didn’t define remembering in this way, then we get the interesting idea that crucially important mental activities like remembering can happen below the surface of consciousness. Often, the results of these nonconscious processes are never even introduced into consciousness, and they directly influence the behavior of the body without our conscious awareness. But sometimes the end-products of nonconscious mental processes are loaded into consciousness. The conscious mind becomes aware of the content and can then start performing conscious operations on it, which often involve the narratization of the content into a folk psychological story (often causal, involving reasons). This narratization then has top-down effects on the entire behavior of the system. For example, narratizing the nonconscious remembering of the appointment enables the conscious rumination of behavioral possibilities and allows for a shortcut in the decision-making process through the higher-order linguistic categorization of the appointment in terms of simpler, more abstract categorical structures and schemas like “4:00pm”, “get documents”, “find keys”, “get in car”, “take highway”, “second floor”.

These abstract categorical structures allow for the construction of a mental narrative-schema through which consciousness acts and is able to influence the world. We become capable of consciously thinking thoughts like “Shoot! My appointment is at 4pm, I better get ready now and take the highway so I can make it on time.” Thoughts like these provide decision-making shortcuts and start a chain-reaction of reciprocal information exchange between the conscious and nonconscious systems. The tight functional loops between these systems give rise to complex and fluid human behaviors, such as scrambling to get ready and driving a car to make a pressing social appointment.


Filed under Consciousness

Thoughts on Dennett's distinction between personal and subpersonal levels of explanation

I recently purchased the anthology Philosophy of Psychology: Contemporary Readings, edited by José Bermúdez. The first article in the collection is by Dan Dennett and it’s called “Personal and Sub-personal Levels of Explanation”. It’s a classic Dennettian paper, both in style and content. His overall goal in the paper is to defend a sharp distinction between the personal and subpersonal levels of explanation. His primary example to illustrate the need for this distinction is the phenomenon of pain. For Dennett, the subpersonal level of explanation for pain is pretty obvious and straightforward: it involves a scientific account of the various neurophysiological activities triggered by afferent nerves responding to damage that would negatively affect the evolutionary fitness of an organism. The subpersonal LoE does not need to actually reference the phenomenon of “pain”. It merely explains the physical behavior of the system under the umbrella framework of evolutionary theory.

In contrast to the subpersonal LoE, the personal LoE for pain would explicitly use the word/concept “pain” in order to explain the phenomenon of pain. What does this involve? The personal LoE basically involves recognizing that, for the person having the pain, the pain is simply picked up on, i.e., distinguished by acquaintance. If we ask a person to give a personal-level explanation of their pain, Dennett thinks the best they can do is simply say “I just know I am in pain because I recognized that I was in pain because I had the sensation of pain because I just knew I was in pain because I was conscious of pain and I just immediately know whether I am in pain or not, and so on.” It might seem like on this LoE there needs to be something additional, because the explanation seems strangely circular and nonexplanatory. Dennett thinks this is a feature, not a bug, of the personal level of pain, and one that absolutely cannot be avoided. He thinks that if you are going to invoke the concept of pain at all in your explanation of a phenomenon, then you (should) automatically resign yourself to the fact that the explanation can never be given in terms that violate the essential nature of pain as something “you just know you have” without being able to give a mechanical account of how you know it. You just know.

Dennett thinks that if we are going to use or think about the concept “pain”, then we must be ready to make a sharp distinction between these two LoE. On the subpersonal level, you need not refer to the phenomenon of pain at all; you simply account for the physical behavior of the system in whatever scientific vocabulary is appropriate. On the personal level, you acknowledge that the term “pain” does not directly refer to any neurophysiological mechanism. In fact, it doesn’t refer at all; it expresses the phenomenon of “just knowing you are in pain”, in virtue of the immediate sensation of painfulness, which then produces “pain talk”. Of course, Dennett notes that we can sensibly inquire into the neural realizers of such “pain talk”, but for him it is crucial that on the personal LoE, pain-talk is not referential; rather, it only makes sense as the pain of a person (not a brain) who “just knows” they are in pain when they are in pain.

My problem with Dennett’s sharp distinction is that he seems too ready to accept personal-level phenomena as “brute facts” not susceptible to further mechanical/functional analysis. Take pain, for example. A. D. Craig has been developing a rather interesting view of pain as a homeostatic emotion, in the same way that hunger is a homeostatic emotion. The “feelings” of pain can then be likened to the “feelings” of hunger. On this account, human pain is both a sensation (based on ascending nerve signals) and a motivation (which leads to pain-avoidance behaviors). The sensory aspect of pain is clear enough, and no different from Dennett’s subpersonal account, but the motivational aspect of pain comes from thalamocortical projections in the primate brain, which provide a sensory image of the physiological condition of the body and are more or less directly tied into limbic (i.e., motivational) pathways.

Crucially, this account of pain starts to explain the personal feelings themselves, going beyond a mere acceptance of the “brute facts” of painfulness. The “just knowing” that you are in pain is analogous to the “just knowing” that you are hungry. The interoception of homeostatic indicators is reliable; if it were not, it probably wouldn’t have evolved. Just as I “just know” I am perceiving and interacting with my laptop right now, if I were in pain, I would “just know” I am in pain. This is because pain is a homeostatic emotion generated by the interoception of homeostatic indicators, just as hunger is a feeling generated by the interoception of homeostatic indicators, and the feeling of knowing the laptop is there in front of me is generated by exteroception of the actual laptop. Think about the “pain” of being cold. The regulation of body temperature is obviously a homeostatic process, and that regulation includes both a sensory component (the feeling of being cold) and a homeostatic motivational state (the motivation to do something about being cold). Pain works the same way: it has both a sensory component (which we feel) and a motivational aspect (pain leads to avoidance behaviors). And here we can start to see what a functional explanation of the personal level would look like. As Craig says,

In humans, this interoceptive cortical image engenders discriminative sensations, and it is re-represented in the middle insula and then in the right (non-dominant) anterior insula. This seems to provide a meta-representation of the state of the body that is associated with subjective awareness of the material self as a feeling (sentient) entity – that is, emotional awareness – consistent with the ideas of James and Damasio.

It seems like this “meta-representation”, which generates feelings of selfhood and associated self-referential cognitive processes, could give rise to the feelings of personhood referenced in the personal LoE. So although we might still be able to rescue the sharpness of Dennett’s distinction between the different LoE, the distinction gets blurred and becomes unhelpful once you start talking about the meta-representational functions that give rise to the personal-level pain-feelings and pain-talk of adult human beings.


Filed under Consciousness, Philosophy, Psychology

Throwing, Brain Evolution, Modularity, and the Origins of Human Language

I’m currently reading William H. Calvin’s eccentric little book The Throwing Madonna: Essays on the Brain. I picked it up on Kindle for $0.99 (a great deal). His basic theory seems to be that an adaptation for throwing rocks at small prey sparked the lateralization of the brain, which then drove the evolution of language (our most lateralized brain function). He thinks that brain lateralization began with the functionality of a rapid, fine-motor-skills module that plans and executes sequential motor actions, such as throwing. He hypothesizes (based on a few scattered lines of evidence) that the skills related to throwing changed the brain in such a way that language emerged as a kind of side-effect or offshoot of this process. The side-effect would have had enormous benefits given the growing communicative skills of the group, which were probably based primarily on gesture. The fine-sequence module is theorized to lead to better gestural communication, which provides the neurological grammatical base for more sophisticated linguistic utterances structured by a more complex syntax, which in turn would have taken advantage of the lateralization and developing skills of fine motor sequencing for moving the mouth and tongue in complex ways. Calvin doesn’t mention this possibility, but singing could have been the intermediary between gesture and the complex vocal utterances that depended on the lateralized fine-motor skills demonstrated in human handedness. It is rather curious that no other animal seems to be as handed as the human, and that handedness is more or less defined in terms of the right arm/hand being capable of executing fine-motor skills in a continuous and intelligent sequence.

I should add that talk of a fine-motor-skills “module” should not bring to mind a single component, as if the module were like a gear in a clock, with one unique function demarcated by its spatial structure. A neurological module could (1) have more than one function and (2) be spread out among distinct neurological spaces, perhaps even functionally distributed in populations of neural firing rates. A fine-sequence module could actually be subsumed by a wide variety of submodules acting together and sharing information in a looped hierarchy. The module could be distributed in space among multiple neural structures, each of which could play different functional roles depending on the immediate neurological context (i.e., the firing rates of surrounding populations). Since the functionality of the system is probably distributed, at least in some respects, over multiple neurological regions, we have no reason to suppose that the fine-sequence module is realized by a single clump of neural tissue. (Nor do we have reason to think that pretty much any large-scale behavior or function, like perception or language, is exhaustively realized by any one clump of tissue; although some clumps do have functional specializations, this doesn’t mean that a clump’s only function is that one specialized function.) In all likelihood, the module for just about anything as complex and multifaceted as language is realized by a variety of neural components distributed across many different areas of the brain. There are probably clusters of populations that are more or less crucial to the processing loops, without which the function would be completely lost, as compared to other, less essential components of the loop whose removal or malfunction causes only minor processing difficulties.

The modularity is simply a result of thinking about the skills in terms of functions, and of realizing that clumps of neural tissue can have multiple functions and be subsumed by multiple subcomponents, which can feed information toward the top, loop around, and have a feedback effect on the larger functional module. There is thus no reason to suppose that modularity requires one to imagine a phrenological layout of brain function, with each clump of neural tissue having one and only one function. In reality, different neural clumps realize multiple functions based on patterns of neural activity, which are computationally continuous and whose explanation thus requires a sui generis concept of computation, one grounded in the real-time constraints of neural population spiking codes.

The “fine-motor sequence” theory of lateralization and language origin seems compatible with the theory recently put forward in Michael Tomasello’s book Origins of Human Communication. Tomasello thinks that the grammar of modern human language got part of its developmental foundation from gesture rather than vocalization. The vocalizations seen in our ape cousins are rather preprogrammed and neurologically specific in what triggers them; they seem to be emotional expressions rather than acts of intentional communication. Ape gestures, on the other hand, have the intentional structure necessary to provide a simple grammar of requesting, which can incorporate attention-grabbing gestural signals like stomping on the ground, requests for food, or requests for play. It is this grammar of requesting that, when coupled with the development of shared communicative ground and joint attention, later leads to a more syntactically developed grammar of informing, which is based less on the individual selfishness of requesting than on a shared, communal reciprocity of information sharing grounded in a common attentional/motivational context. For apes who have been trained to communicate through sign language, 90% of their communicative intentions are requests, often for simple, immediate bodily desires like food or play. Humans, in contrast, seem to take an intrinsic pleasure in the act of social communication and in sharing helpful information for the sake of sociality. Of course, apes like to play, but human children seem to think it’s fun to communicate and share just for the sake of communicating and sharing (e.g., a child seeing a dog, pointing at it, and saying to their parents “Look! A doggie!”). This intrinsic shared communicative context is what really gets the process of language learning off the ground, in such a way as to develop the more syntactically complex grammars needed for providing “bird’s-eye view” information to members of the social community within a shared, normatively structured context.

 


Filed under Psychology

Some quick thoughts on Dewey

Lately, I have been reading John Dewey’s Democracy and Education: An Introduction to the Philosophy of Education. It’s absolutely riveting, considering it’s a work of philosophy of education. You’d think something with that title would be dull and dry, abstracted from anything concrete or interesting. But it’s much better phenomenology than anything in the Heideggerian tradition, and even clearer than Merleau-Ponty. And it’s actually psychologically astute, to a stunning degree. The more I read the American pragmatists, the more I think they are superior philosophers to the three H’s (Hegel, Husserl, Heidegger). The intellectual weight of Peirce, James, and Dewey is mighty indeed. Dewey is so clear and precise. He uses simple language but talks about big, important, and morally pressing ideas. He shows a deep understanding of the need for education, not just training. There is a difference, manifested in the social community and the propagation of socially important norms, ideas, beliefs, and habits of action and thought. Every philosopher should aim to emulate his succinctness and simultaneous depth of thought. And I think his psychological theories are spot on, considering their date. I cannot really fault him on anything other than not being neurologically specific, but since brain scanners weren’t available as a tool, I really don’t blame him. His philosophy of psychology is accurate, as far as I am concerned. While I might talk about things differently, I think the gist of his ideas is very close to the truth. He had such keen insights into the nature of experience, the role of consciousness and nonconsciousness in everyday life, and many other important phenomenological ideas, many of which he articulated more clearly than Heidegger ever managed. Dewey is particularly good with respect to the social nature of humanity and what it means to live in a community. Great stuff.


Filed under Philosophy, Psychology

Forget Quining Qualia; We need to start Jaynesing Qualia!

Yes, this post is about qualia, that oh-so-enigmatic subject of discussion in contemporary philosophy of mind. The debates are heated, positively oozing with philosophical deftness in argumentation: distinctions about distinctions upon distinctions, arguments for and against the existence of qualia, about what the definition of qualia is, about just about everything that can be said about qualia. So what are qualia? Do they exist? Is it a coherent concept? Where did the concept come from? Is it a piece of metaphysical baggage left over from prescientific soul theory, or does it have some grounding in the world that can be experienced, introspectively or experimentally, and confirmed by the intersubjective community? These are tough questions.

First: definitions. Most philosophers today define qualia as the “subjective properties” of experience itself. Qualia are said to be “intrinsic”, “internal” (most say internal, though some have recently said external; it turns out both are wrong, as you will see later), “private”, “subjective”, and so on. These philosophers report that when they introspect on their experience, they are aware of qualia, or that they are conscious of having qualia. They are aware that there is “something it is like” to have that particular experience while they are introspecting on it. Sensations are a species of qualia in this metaphysics. A good way to get a sense of what philosophers are talking about is to think about what it would be like to consciously experience the sensation “red” as you introspect on your experience of gazing at a red apple on your desk under normal, well-lit conditions. Or imagine what it is like to have a toothache, or to feel the warmth of a fire on a cold night.

Next, some properties of qualia. The most enigmatic property of qualia is their privateness. It doesn’t seem like I can, even in principle, know what it is like to have the experience of another person (barring some neurophysiological oddity like Siamese brains or something). I can certainly make educated guesses about what it is like to experience someone else’s perspective (novels and psychology are two good tools for this), but there is no way to really double-check or confirm what I think their experience is like. What’s curious about qualia privateness is that under normal social conditions, we use language to either express or hide what our experience is like: what kinds of sensations we are having (e.g., whether we are in pain), what we are feeling, what our emotional mood is, what we believe, desire, intend, and so on. Moreover, most people’s minds are amazingly adept at picking up nonverbal bodily cues about mental states, since much emotion gets directly “leaked” in complex feedback through external bodily components/schemas (especially the face, body posture, eyes, etc.). If someone is standing in the corner feeling uncomfortable, it is likely that everyone else, if they pay attention, will be able to perceive that he or she is uncomfortable, and be correct in that judgement.

So, do qualia exist? This seems like an obvious “Yes”. You would have to be clinically insane to “quine” (eliminate) qualia from your metaphysical baggage, right? Well, Daniel Dennett is a pretty smart fellow, and he seems to have a lot of good insights about consciousness and qualia, so let’s think about why he would say that qualia are simply a “trick” of the brain, an illusion we can’t really help but experience, and have false (or confused) beliefs about. How could the common person have false and confused beliefs about his or her own consciousness? Isn’t that what he or she is most familiar with? Absolutely not, as this long and insightful quote from Julian Jaynes illustrates perfectly (to me at least):

The final fallacy which I wish to discuss is both important and interesting, and I have left it for the last because I think it deals the coup de grâce to the everyman theory of consciousness. Where does consciousness take place?

Everyone, or almost everyone, immediately replies, in my head. This is because when we introspect, we seem to look inward on an inner space somewhere behind our eyes. But what on earth do we mean by “look”? We even close our eyes sometimes to introspect even more clearly. Upon what? Its spatial character seems unquestionable. Moreover we seem to move or at least “look” in different directions. And if we press ourselves too strongly to further characterize this space (apart from its imagined contents), we feel a vague irritation, as if there were something that did not want to be known, some quality which to question was somehow ungrateful, like rudeness in a friendly place.

We not only locate this space of consciousness inside our own heads. We also assume it is there in others’. In talking with a friend, maintaining periodic eye-to-eye contact (that remnant of our primate past when eye-to-eye contact was concerned in establishing tribal hierarchies), we are always assuming a space behind our companion’s eyes into which we are talking, similar to the space we imagine inside our own heads where we are talking from.

And this is the very heartbeat of the matter. For we know perfectly well that there is no such space in anyone’s head at all! There is nothing inside my head or yours except physiological tissue of one sort or another. And the fact that it is predominantly neurological tissue is irrelevant.

This thought takes a little thinking to get used to. It means that we are continually inventing these spaces in our own and other people’s heads, knowing perfectly well that they don’t exist anatomically; and the location of these “spaces” is indeed quite arbitrary. The Aristotelian writings, for example, located consciousness or the abode of thought in and just above the heart, believing the brain to be a mere cooling organ since it was insensitive to touch or injury. And some readers will not have found this discussion valid since they locate their thinking selves somewhere in the upper chest. For most of us, however, the habit of locating consciousness in the head is so ingrained that it is difficult to think otherwise. But, actually, you could, as you remain where you are, just as well locate your consciousness around the corner in the next room against the wall near the floor, and do your thinking there as well as in your head. Not really just as well. For there are very good reasons why it is better to imagine your mind-space inside of you, reasons to do with volition and internal sensations, with the relationship of your body and your “I” which will become apparent as we go on.

That there is no phenomenal necessity in locating consciousness in the brain is further reinforced by various abnormal instances in which consciousness seems to be outside the body. A friend who received a left frontal brain injury in the war regained consciousness in the corner of the ceiling of a hospital ward looking down euphorically at himself on the cot swathed in bandages. Those who have taken lysergic acid diethylamide commonly report similar out-of-the-body or exosomatic experiences, as they are called. Such occurrences do not demonstrate anything metaphysical whatever; simply that locating consciousness can be an arbitrary matter.

Let us not make a mistake. When I am conscious, I am always and definitely using certain parts of my brain inside my head. But so am I when riding a bicycle, and the bicycle riding does not go on inside my head. The cases are different, of course, since bicycle riding has a definite geographical location, while consciousness does not. In reality, consciousness has no location whatever except as we imagine it has. ~ The Origin of Consciousness, pp. 44-46

I hope this quote shows why, instead of Quining qualia, we need to Jaynes them! Consciousness is a complex mental phenomenon that greatly complicates matters when thinking about qualia. Take this having of a “mind-space”. Now subtract it. You are sleepwalking. A zombie. What is it like to be you without your mind-space? Clearly you can still perceive, but can you consciously feel those perceptions? It doesn’t seem like it. But should we say that the zombie doesn’t have qualia? I don’t think that follows, since it is intuitive to me that there is something it is like to lack a mind-space. In fact, I think experiencing the world without an actively imagined mind-space is the norm in the animal kingdom, and that it is humans, with their mind-spaces, that are the rarity. So it is not obvious that we need to Quine qualia. Dennett was wrong about this, because he didn’t see clearly enough how reflective consciousness changes the “what it is like” of prereflective consciousness. Introspecting on qualia is itself a rare cognitive feat; the animal at the watering hole is not thinking to itself “Man, I better watch out for predators”, it simply is watching out for predators. Introspection on experience changes the what-it-is-like of experience, and (many, but not all) philosophers who introspect on their experience and talk about qualia have not adequately addressed the difference made by this special act of introspection itself. Jaynesing qualia helps us see both what needs to be explained (the difference between having and not having a mind-space) and how to explain it.


Filed under Consciousness

Is the mind identical to the nervous system?

Some philosophers of mind once thought (and perhaps still think) that the best answer to the question “What is the mind?” is simply “The mind is the nervous system”. In defending this claim, these philosophers sometimes make an analogy between prescientific attempts to answer the question “What is lightning?” and the question “What is the mind?” Our ancestors once answered the lightning question by saying it was a manifestation of a god’s wrath or something. Modern science tells us, however, that lightning is some kind of electrical discharge. And our ancestors used to answer the question “What is the mind?” by saying that the mind was the soul, or some thinking substance detached from the body and brain. So the identity theorists want to claim that, just as with lightning, the modern scientific answer to the question “What is the mind?” is simply “the nervous system”. What else could it be?

One response to this argument is the idea that the mind is not identical with the nervous system but rather with the functioning of the nervous system. This response is designed to answer charges of biological chauvinism: if some entity does not have a human nervous system yet demonstrates intelligent behavior, are we to say that it does not have a mind simply because it doesn’t have a nervous system like ours? Thus, the strict identity thesis seems too restrictive.

But does identifying the mind with the function of the nervous system also exclude too much? What if we were to say that the mind is identical to the function of the entire body-plus-brain and not just the nervous system alone? After all, it seems like the “internal milieu” of the body might play a functional role of such importance that it would be problematic to identify the mind with the nervous system alone rather than with the total brain + body system; the diffusion of hormones in the bloodstream, for example, seems to play a functional role in mental processes. On this view, the nervous system is simply too entangled with the body for there to be a clear-cut psychological distinction between brain and body. To acknowledge the role of the body in cognition and in “what it is like” to be a human animal would be to emphasize an “embodied” perspective. So it seems we have grounds for saying that the mind is not identical to the nervous system, and that it might actually supervene on the total brain-body system, given the importance of the bodily milieu for determining what it is like to be human.

But is this the end of the issue? Certainly not. Even thornier problems can be raised with the question of “Is the mind identical to the nervous system?” For me, a critical problem with this question is that it does not distinguish between the unconscious mind and the conscious mind. It seems like the question of identity is different for these two types of minds. Given the importance of the unconscious mind for regulating behavior in response to the registration of homeostatic markers in the bodily milieu, it seems like we would be right to say that the unconscious mind is identical with the functioning of the brain-body system. But what about the conscious mind? It might seem plausible to entertain the hypothesis that the conscious mind is not identical with the functioning of the total brain-body system, but rather, is only identical with the functioning of certain parts of the nervous system, such as those which are involved in the “global broadcasting” and multimodal convergence of information in the higher cortical areas.

On this view, the unconscious mind would be identical to the functionality of the total brain-body system insofar as there isn’t an experiential distinction at this level between experiencing oneself as either being “in the head” or “in the body”. For the unconscious mind, the feeling of being is simply distributed throughout the total brain-body system (and perhaps also into the environment, as per the Extended Mind Thesis and “externalist” theories of perception). But for the conscious mind, there seems to be an experiential component wherein we consciously feel ourselves to exist “inside our heads”, looking out from behind our eyes, and capable of losing ourselves inside a detached mind-space in contemplation, remembrance, and deliberation. For the conscious mind, it seems problematic to identify it strictly with the functioning of the total brain-body system precisely because the conscious mind rarely, if ever, experiences itself as if it were the total brain-body system.

So what’s going on here? It seems like the functionality of certain recently evolved, human-specific neural circuits is such that it causes us to consciously experience our conscious mind as if it weren’t constrained by mere physical embodiment. After all, humans report instances wherein they seem to float outside their bodies (in surgery, astral projection, or near-death experiences, for example). Moreover, controlled experimental work can induce wild illusions, such as the feeling that our hand has been transported into a rubber hand, or the experience of looking at our own back through a clever virtual-reality setup. Experimental and clinical work has shown beyond a doubt that the conscious mind is capable of experiencing itself as if it weren’t identical to the brain-body system.

So it seems that if we want to answer the question “Is the mind identical to the nervous system?”, we first need to make a distinction between the unconscious mind and the conscious mind. For the unconscious mind, it seems plausible that it is identical to the functionality of the brain-body system plus certain features of the environment integrated functionally in the right way. For the conscious mind, it seems plausible to suppose that it is identical to the functionality of recently evolved, human-specific neural structures (which might even develop in ontogeny) that allow for the generation of a “virtual” level of experience, wherein the “mind-space” we experience ourselves as inhabiting is not constrained by any actual space but is itself a virtual construction, essentially a “functional delusion”. Our brain tricks our conscious mind into thinking that it isn’t identical to the functionality of the nervous system, but this is just a clever delusion. I thus think that Daniel Wegner, Benjamin Libet, Dan Dennett, and other thinkers in the “trick of the brain” tradition are right to postulate that, in some sense, the conscious mind is a brain-generated illusion. But it is a brain-generated illusion that allows us to do things that animals without the illusion cannot do.


Filed under Consciousness