The Zombie argument has always rubbed me the wrong way. This post will attempt to explain why. Let me first try and reconstruct the argument in my own words, with the intention of being fair to Chalmers’ underlying intuitions about consciousness.
For Chalmers, consciousness is the “what it is like” of an agent. He claims that he can only know for sure that he himself is conscious, that there is “something it is like” to be him. For every other conscious being, Chalmers thinks he can only infer that they are conscious on the basis of third-person evidence, as opposed to having self-evident first-person knowledge of their qualia states, the “qualitative” or “subjective” phenomena of having a perspective on the world, having a “phenomenal” world, etc. Chalmers thinks this phenomenon of “phenomenal consciousness” is philosophically very strange. It is the source of the famous mind-body problem, the basic idea that there are two general aspects known in experience: mental phenomena and physical phenomena. Mental phenomena are “things” like sensations, perceptions, beliefs, desires, imaginations, feelings, pains, and thoughts. For Chalmers it is important to distinguish two ways of understanding these mental phenomena. The first is in terms of their roles in a causal-functional economy, of how they “do stuff” that is useful. This is what Chalmers calls “access consciousness”. It is “easy” to explain access consciousness neurologically, because we can make sense of functions in terms of causes, and we know how neurons cause things to happen in the brain and body. The second is in terms of how there is a qualitative “something it is like” to be the subject of those sensations, beliefs, pains, etc. This is what Chalmers calls “phenomenal consciousness”. It is “hard” to explain psychologically in terms of functions or adaptive usefulness.
This is the central claim for Chalmers: he claims to be unable to conceive of any functional usefulness for this phenomenal consciousness. Imagine an atom-for-atom duplicate of your body in an alternate world. Chalmers thinks that he can coherently conceive of this duplicate as a “Zombie”, that is, a being who lacks phenomenal consciousness. If you prick a Zombie, he will yelp and pull his arm back. If you ask him whether it hurt, he can produce verbal behavior that describes in great detail what it is like to feel pain. He could even write philosophical essays about the distinction between phenomenal and access consciousness, and wax poetic about the great joy of sensing and experiencing the world from a first-person perspective. But the Zombie would not be conscious, in any way, even though it shared exactly the same set of 100 billion neurons, all bathed in the same chemical soup and organized in exactly the same way. Imagine hooking up a conscious human and his Zombie to a brain scanner, and asking the conscious subject to report when he starts to mind-wander and ruminate to himself about the past, present, or future. The scan of the Zombie would look exactly the same, of course. And if you asked the Zombie to report any mind-wandering or self-conscious thinking, his report of the timing of the conscious thought would be exactly the same as the conscious subject’s. In fact, if the conscious subject and the Zombie were physically identical, you could mix them up in the lab room and there would be, in principle, no way of telling one from the other.
This lack of coherent criteria for telling a Zombie from a conscious subject in a real-life setting should set off huge philosophical warning bells. Chalmers’ basic argument seems to be that since he cannot imagine any possible way of telling a functional story about the “qualitative” or “first-person” perspective, and thus about consciousness, physicalism must be false, since physicalism claims that mental phenomena are really just physical phenomena. Chalmers is a monist about substance, but he thinks that “conscious” things (i.e. qualitative properties) exist in a fundamentally different way than physical things like the brain. Hence Chalmers is an old-fashioned dualist wrapped up in modern garb. He thinks that physical matter gives rise to two types of properties: physical properties and mental properties. And since these two types of properties have to be explained in fundamentally different ways, physicalism (the idea that only “physical, causal-functional stories” suffice to explain mental phenomena) is necessarily false.
I think the problem here is as follows. Chalmers leads us astray from the start when he articulates what needs explaining and what is philosophically interesting. For Chalmers, what is philosophically interesting is first-person experience, “what it is likeness”. But this concept is never sufficiently defined or explained. He says something to the effect of “If you got to ask what it is, you ain’t never going to know.” This, of course, doesn’t satisfy me at all. First, I want to know just what this first-person experience is. What concept of person are we working under? Is a coma patient a subject of experience? Why not? It seems perfectly conceivable that there is “something it is like” to be a coma patient insofar as she is “living” in the world and interacting with it according to the individual idiosyncrasies of her still vegetatively functioning brain. Surely, on pain of violating conceptual parsimony, the coma patient’s unconscious mind has privileged, first-person access to unconscious mental phenomena such as the processing and manipulation of information streaming in from the total environmental envelope, even if on a dim and vegetative scale. It makes perfect sense to say that the coma patient’s brain is most assuredly processing auditory data unconsciously, and this constitutes an instance of a mental phenomenon under almost every modern definition of mental phenomena. And since mental phenomena are defined by Chalmers as having a qualitative component, then, on pain of contradiction, we are forced to conclude that “there is something it is like to be unconscious”.
And if this is the case, then the conceptual usefulness of defining what needs explaining about minds purely in terms of phenomenal consciousness looks doubtful, for the problem is this: we all know intuitively that there is a huge “mental” difference between a coma patient and a fully awake, linguistically competent human. Yet in terms of Chalmers’ own conceptual framework, there is no fundamental constitutive difference in phenomenal consciousness between the coma patient and the adult human, only a difference in degree of “phenomenal dimness”. If you doubt this, ask yourself: what is the difference in epistemic access between the mind of a bat and the mind of an unconscious coma patient? If we are allowed to posit that there is “something it is like” to be a bat even when we cannot ask it if it is conscious, then why are we not allowed to posit that there is “something it is like” to be a coma patient? In both cases, you cannot ask the agent if it is conscious. We can either claim that the criterion of consciousness is reportability, or we are stuck wondering whether the bat or the coma patient is conscious. But rather than claiming that the bat is probably conscious and we just can’t know it for sure, I think we should claim that the bat is not conscious simply because it doesn’t have the cognitive acumen to be meta-conscious of its first-order awareness in such a way as to be able to report on those states, either internally in thought or externally in verbal behavior.
So what is the fundamental difference between the coma patient and the adult human? I claim that it is the difference between nonconscious reactivity and the operation of consciousness proper. Phenomenal consciousness as a concept is less interesting to me precisely because it isn’t useful for distinguishing humans from nonhuman animals, nor for distinguishing infants and coma patients from linguistically competent and verbally/intentionally responsive mentalities. And as a researcher of the mind, I think the differences between humans and nonhumans are far more psychologically and philosophically interesting than the similarities.
For starters, humans are a literate species, and one immersed in language, symbols, culture, ritual, and artificial constructions to a degree off the charts in comparison to nonhuman animals. This has huge effects on the development of the brain and the potential for new forms of narratological subjectivity. Moreover, the long period of time before sexual maturation in humans allows for a greater plasticity and capacity for adapting to changing environments than in any other animal. And it is not the volume of frontal matter that distinguishes us from apes when body size is controlled for; it is the connective fibers, the neural tissue most open to the effects of plasticity.

There is “something it is like” to be transported into an imaginary world while reading a book. I would be greatly surprised if any nonhuman animal were capable of experiencing such forms of subjectivity. Chalmers greatly underestimates the qualitative differences between brains competent in language and those restricted to nonverbal mentalities (with verbal cognition referring to at least some natural capacity for understanding symbolic signs). There is also something it is like to “talk to oneself”, to tell oneself what to do, to initiate action through the slow deliberation of “inner speech”. These inner speech mechanisms are directly tied into our autobiographical memory, and help constitute our sense of conscious identity, our explicit knowledge about who we are, where we came from, what we believe, what we desire, what it is like to be us. Paraphrasing Julian Jaynes, you cannot be conscious of what you are not conscious of. For if you could not remember that you were conscious at time T, in what sense could you ever consciously know that you were conscious at time T? There is a fundamental link between autobiographical memory, the capacity to self-report, and consciousness that cannot be explained in terms of Chalmers’ distinction between “easy problems” and “hard problems”.
This is the only way to make sense of philosophers like Robert Brandom or Dan Dennett, who claim that consciousness constitutively depends on the capacity to report and to be meta-aware that you are conscious. In a very real sense, what we all intuitively understand to be philosophically interesting, that which separates conscious adults from coma patients, fetuses, and birds, is not the bare capacity to experience the world from a first-person view, since even coma patients are still persons in an absolutely minimal sense of bodily self-consciousness. No, what’s philosophically interesting is not awareness of the environment (something shared with earthworms, as Darwin demonstrated), but awareness of your awareness of the environment, in terms of inferentially linked concepts like “sensation”, “belief”, “desire”, “meta-awareness”, “representation”, “perception”, “memory”, “thinking”, “I”, “me”, “mine”, “Soul”, “mind”, “consciousness”.
As it turns out, then, consciousness depends on the concept of consciousness being active within the mental economy of the conscious subject. We can thus distinguish between the nonconscious first-person subjectivity shared by all organisms and a conscious first-person subjectivity that depends on the capacity to be meta-aware that you are aware, and to report on past instances of awareness in terms of narratologically structured and inferentially linked concepts learned throughout childhood through exposure to intersubjective linguistic stimulation.