Why we should disentangle "what-it-is-like-ness" from consciousness

If you ask almost any mainstream philosopher familiar with the problem of consciousness for a definition of consciousness, more often than not they will define it in terms of “what-it-is-like-ness”. For these philosophers, if there is “something it is like” to be an entity, then that entity is conscious, period. This works pretty well for most objects. Is there something it is like to be human? Most people would say yes. Therefore, humans are conscious. Is there something it is like to be a rock? Most people would say no. Therefore, rocks are not conscious. At first blush, then, it seems like “what-it-is-like-ness” is a good working definition of what consciousness “is”. But the rock and the human are the easy cases. What about an organism like an earthworm? Is there something it is like to be an earthworm? Whereas it is seemingly obvious that there is nothing it is like to be a rock, how can we answer this question about an earthworm? It seems somewhat intuitive to say that there is something it is like to be an earthworm. Therefore, it would seem that we must say that the earthworm is conscious. But in this post I want to press these intuitions. To me, it doesn’t seem immediately wrong to say that an earthworm lacks consciousness; it seems like a perfectly coherent thing to say. If it is, then we must either say that there is nothing it is like to be an earthworm, or that consciousness does not overlap with what-it-is-like-ness. Since it seems wrong to say that there is nothing it is like to be an earthworm, we are compelled to reexamine the mainstream definition of consciousness as “what-it-is-like-ness”.

But if consciousness is not what-it-is-like-ness, then what is it? Well, we seem to be pretty clear on the fact that humans are capable of being conscious and that rocks aren’t, so what is the difference between a rock and a human? The difference needs to be one that the earthworm lacks, so we will need to rule out candidates the earthworm shares with us, such as the capacity for perception and action, or the possession of a nervous system. A clue for homing in on this difference can be found in the case of the sleeping mother. Imagine a mother is asleep in one room and a newborn infant is asleep in another room. The mother is sound asleep and oblivious to sounds like the noisy air conditioner turning on, or the sound of traffic outside the window. But the slightest noise from the infant is enough to catch her attention and wake her. Now, it seems obvious to most people that the sleeping mother was capable of a complex perceptual act, since presumably the perception of the infant’s small cry against the background noise of the house is a case of genuine perception. So here is the million-dollar question: was the mother conscious of the baby’s cry while she was asleep?

The field of consciousness studies seems to be split down the middle when it comes to answering this question. On the one hand you have the first-order theorists, who claim that since the perception of the baby’s cry necessarily requires awareness of the cry, and since they define consciousness as first-order awareness, the mother was in fact conscious of the baby’s cry. On the other hand you have the second-order theorists, who claim that it is not enough for the mother to be simply aware of the cry in order to be conscious of it. Rather, they claim that in order to be conscious of the cry, the mother must be aware that she is aware of the cry. The awareness must be higher-order for the perception to count as conscious.

My intuitions lean towards the second-order theorists. I think that the mother is not conscious of the baby’s cry. Rather, her adaptive unconscious is aware of the baby’s cry, and upon perceiving the cry, this information is globally assembled and shunted into consciousness, where it shortcuts decision making. But the unconscious perception of the cry is genuine mental activity, and the unconscious awareness of the cry is genuine awareness. I find the second-order story of the sleeping mother much more intuitive, since it strikes me as patently misguided to say that someone could be conscious of something even when they are asleep and not aware that they are aware. Obviously while sleeping the mother’s mind is in some respect aware of what’s happening in the environment, otherwise she wouldn’t wake up upon hearing her baby stir. But I think it is misguided to define consciousness in terms of such simple awareness, for what follows from such a definition is the idea that earthworms are conscious, since they too possess the capacity for first-order awareness. And I think it is most sensible to restrict consciousness such that the earthworm and the sleeping mother lack it. But just to be clear, consciousness is also not to be confused with mere alertness, wakefulness, and awareness of events in either the body or the world. For the earthworm is aware of certain properties in the environment, yet it is not conscious (in my opinion).

A further question is whether there is something it is like to be a sleeping mother who becomes aware of her baby’s cry. The first-order theorists claim that, yes, there is something it is like to be the sleeping mother, insofar as the mother is aware of the baby and they hold that there is something it is like to be aware of things in the world. For most second-order theorists, there is not something it is like to be the sleeping mother, since the mother lacks second-order awareness. This is where my intuitions depart from the second-order theorists, for I think that even though the mother is not conscious, there is still something it is like to be asleep, just like there is something it is like to be an earthworm. Now, some theorists will immediately reply that it is absurd to claim that there is something it is like to be asleep. But we’ve already seen that the sleeping person is still capable of first-order awareness of the environment, and it does seem intuitive that there is something it is like to have first-order awareness; otherwise we are forced to claim that there is nothing it is like to be an earthworm, and this is an undesirable position.

So what are we left with in terms of a definition of consciousness? I think the second-order theorists are on the right track insofar as they emphasize that it is not enough for an entity to be aware of something in order to be conscious; one must also be aware that one is aware. However, I disagree with the second-order theorists insofar as I don’t think what needs explaining is what-it-is-like-ness, since I think there is something it is like to be an earthworm and obviously the earthworm is not aware of its own awareness. So to explain consciousness, we need to explain how it’s possible for an entity to be aware of its own awareness. While I won’t go into the details in this post, regular readers of this blog know that I take a Jaynesian approach to this question, and think that what makes it possible for us to be aware of our awareness is having a linguistic concept for “awareness”. New linguistic concepts enable us to pay attention to new aspects of reality. The linguistic concept of “awareness” allows us to pay attention to awareness qua awareness. So the hypothesis here is that unless you have a linguistic concept for awareness, you cannot be conscious, because you cannot pay attention to, and thus be aware of, your own awareness. So on this view, infants who lack the linguistic concept for awareness are not conscious. This also restricts the historicality of consciousness to those points in history where humans first started developing mentalistic concepts. This is in accord with Daniel Dennett’s famous analogy of baseball. Just like one cannot play baseball without the concept of baseball, one cannot be conscious unless one has the right concepts in place.



Filed under Consciousness, Philosophy

9 responses to “Why we should disentangle "what-it-is-like-ness" from consciousness”

  1. Hmm, I have no idea whether earthworms have what-it’s-likes. OTOH, I’m sure that earthworms have a kind of functional awareness, and that they lack higher order awareness, so insofar as I’m unsure whether they’re “conscious” the only thing I could sensibly be wondering about is whether they have what-it’s-likes.

    If the debate between first-order and higher-order theorists isn’t over whether higher-order awareness is necessary for the (distinct) phenomenon of what-it’s-likes, then doesn’t it just degrade into a merely verbal dispute? We can talk about functional awareness (“consciousness1”), and about functional meta-awareness (“consciousness2”), but all the really substantive questions seem to turn on how each of these relates to the distinct notion of *phenomenal* consciousness (or what-it’s-likes). No?

  2. Gary Williams


    I think you are right, and it has indeed come down to more or less a debate over definitions. I think now the real fight is going to be over which set of definitions is more productive in terms of gathering adherents and producing literature. But in a way, the downgrade of the debate into a verbal dispute between people who say the main distinction is nonconsciousness versus consciousness and the people who say the distinction is between consciousness and meta-consciousness is not really a bad thing. Both groups of people are talking about the importance of “higher-order” thinking, but differ in how they define it in respect to consciousness. This seems to be some kind of headway towards a broader consensus in the field. I prefer the nonconscious vs conscious distinction over the conscious vs meta-conscious distinction, but my friend Micah and I have set out the distinction in print in terms of prereflective consciousness vs reflective consciousness. I think there are compelling reasons for maintaining the nonconscious vs conscious distinction, but I am also willing to use the principle of charity and realize most people are talking about the same things but with different interpretational frameworks.

  3. “… it does seem intuitive that there is something it is like to have first-order awareness, otherwise we are forced to claim that there is nothing it is like to be an earthworm and this is an undesirable position.”

    Interesting. I don’t have the intuition that there’s something it’s like to be an earthworm, and in fact all the evidence thus far suggests that phenomenal experience is associated with fairly complex information integration processes in service to flexible behavior, which earthworms don’t exhibit; see http://www.naturalism.org/kto.htm#Neuroscience

    “So on this view, infants who lack the linguistic concept for awareness are not conscious.”

    On this view it would follow that since they aren’t conscious, infants don’t experience pain, so we needn’t worry about giving them anesthesia for surgical procedures. But maybe I’ve missed something…

    • Gary Williams

      “all the evidence thus far suggests that phenomenal experience is associated with fairly complex information integration processes in service to flexible behavior which earthworms don’t exhibit”

      How do you define “phenomenal experience” here? If you define it as “what-it-is-like-ness”, and then restrict this to complicated creatures, you beg the question against the claim that worms have a what-it-is-like. In my estimation, all lifeforms have phenomenal experience insofar as there is “something it is like” to be alive. The point of my post is to disentangle “what-it-is-like-ness” from the complicated cognitive operations that allow flexible behavior. People are free to disagree with my intuitions here, but there are problems with denying phenomenal experience to earthworms (which do have neural ganglia and thus process information), e.g., where do you draw the line along the phylogenetic continuum? Where does “phenomenality” begin and end? To draw a line above earthworms seems arbitrary to me.

      “On this view it would follow that since they aren’t conscious, infants don’t experience pain, so we needn’t worry about giving them anesthesia for surgical procedures. But maybe I’ve missed something…”

      You have missed the distinction between pain and suffering. On my view, pain can exist without consciousness insofar as it is tied into homeostatic regulation systems that detect cellular damage. Suffering is a metacognitive appraisal of the pain. So on my view, infants do experience pain insofar as their basic pain detection mechanisms are working, but do not “suffer” in the same way that an adult human does who is capable of ruminating on pain. I assume then that we give infants anesthesia because the negative biological effects of the basic pain system, and the anxiety and distress of such experience, are bad for the infant. So I don’t think consciousness is necessary for pain. There is something it is like to be in pain, but this doesn’t require consciousness of pain. A consequence of this view is that there are such things as “nonconscious pains”. For example, imagine you stub your toe and then a minute later you get an important phone call. While you are talking, you forget about the pain, but when you hang up, the pain returns. What happened to the pain while you were on the phone? Did it disappear? This is a prima facie reason for accepting the existence of nonconscious pains.

    • Gary Williams

      Tom, I read those sections you linked in your paper, and I think you are definitely on the right track. You say:

      “Once we discount the seeming observation of private phenomenal facts, we can begin to see that specifically phenomenal aspects of information – the highly integrated, self-and-objects-in-a-world aspect, and the resistance-to-further-representation aspect – involve nothing beyond the functioning of the global workspace in which representations of the self and environment dominate in controlling behavior. Such integrated representations dominate because we need a global, integrated, reality model to behave effectively…”

      I agree completely with this statement and much of what you say in regards to consciousness being a property of the functional operations of a global workspace. However, my claim is that such a theory does not explain phenomenal experience. What I think the global workspace theory does is explain consciousness. But I have attempted to argue in various places that consciousness and phenomenal experience can be separated in principle, i.e., one can have nonconscious phenomenal experience. I think the earthworm, the bat, the cat, and the infant are all having nonconscious phenomenal experience. So what is it that enables the development of a global workspace? Following Jaynes and Dennett (cf. Dennett’s book Kinds of Minds), I think it is the learning of certain kinds of complex language which enables the kind of global broadcasting of nonconscious information. So on my view, what the global workspace explains is not “phenomenal experience” but “conscious experience”. On this view, the newborn infant has phenomenal experience, but is not conscious, because it has not yet learned language, and thus is not capable of the kinds of multimodal global broadcasting typical of conscious operations. So the brain must learn how to globally broadcast information, and I think it is language which enables this process through its special properties of multimodal integration, particularly in respect to narratological and autobiographical representations.

      For me, “phenomenal experience” is not as theoretically interesting as consciousness, because I think phenomenal experience is basic and shared even by unicellulars, and it might have something to do with the dynamic and temporally extended autopoietic regulation of metabolism. My claim is that consciousness does not bestow phenomenal feelings, but rather, radically modifies feelings, giving rise to new forms of subjectivity that are based on the mechanisms of global broadcasting that you talk about in your paper. So I definitely think our positions are theoretically close insofar as we both agree on the functional architecture of consciousness as a global broadcasting system. I think the main difference is that I am explicit that it is language which enables the development of global broadcasting, and thus, such operations are unavailable to nonlinguistic creatures. I know people like Bernard Baars disagree about limiting consciousness to linguistic creatures, but I am pretty sure this is Dennett’s position (as well as Jaynes).

      • Hi Gary,

        I think of phenomenal experience as what consciousness is, and that consciousness involves there being something it is like, so it seems to me the idea of unconscious phenomenal experience (e.g., unconscious pain, where there is something it is like to be in pain) is an oxymoron. You ask:

        “What happened to the pain while you were on the phone? Did it disappear?”

        When we become unconscious of pain, I’d say the pain no longer *exists*, even though the locution “become unconscious of pain” suggests the pain persists unconsciously. If you put phenomenal experience, what it is like, etc. into the unconscious realm, it seems to me you’re subtracting the primary features of consciousness as we ordinarily think of it. What’s left of consciousness as an explanatory target on your account is the difference, for instance, between pain and suffering, where the latter isn’t anything phenomenal or what-it-is-like (since those can be unconscious), but a further thing produced by a “metacognitive appraisal.” But I’m not sure what this feature of consciousness is.

        What language permits us to do, seems to me, is to verbally report experiences that an infant and other non-linguistic creatures have but cannot verbally report. But they report it, that is, make it apparent to outsiders, in other behavioral ways. And the pain they undergo, seems to me, *is* conscious suffering, just like ours. As to where exactly on the phylogenetic continuum conscious suffering (pain) comes to exist for an organism, that’s a tough (and perhaps badly formulated) question, but it clearly has to do with the sorts of internal processing empirically discovered to correlate with conscious states.

  4. Good discussion, but don’t get too hung up on the terminology. Most people I know of (e.g. psychologists) have a two-tiered system. Skinner called the tiers ‘awareness’ and ‘consciousness’; others call them ‘consciousness’ and ‘self-consciousness’, but it doesn’t matter. If you agree there are two things worth talking about and you can keep them apart, the rest is just silly arguments over words.


    P.S. I like Hegel’s Master and Slave or “consciousness for itself” and “consciousness for another” if you want philosophy jargon with a good pedigree.

  5. John Kubie

    My sense is that you can’t put consciousness into either/or categories. In the particular example of the sleeping mother, she may be dreaming. Is that a conscious state? Her baby’s cry may enter the dream (sensory inputs do enter dreams). This may be the trigger that awakens her. My sense is that there are many discrete states of consciousness, and that self-awareness is only one, or can be differentiated into subcategories. For example, I can be aware of the state of my body, or I can be aware of where I am in time and space, or aware of my capabilities.

    On another note, what about animal consciousness? Is my dog (or a bat) conscious? I strongly believe that my dog has a strong form of consciousness, but I’ve been surprised to hear people like Ramachandran suggest that consciousness is uniquely human. The solution, I think, is to separate the human-type consciousness that we generally talk about from the more immediate, language-free consciousness of many animals.

  6. Pingback: The Meaning of Meaning | Minds and Brains
