Tag Archives: introspection

Quote of the Day – The Power and Limit of Introspection

Introspection is not fully inept. People do know the contents of their minds. That is, they do know what they think, what their attitudes are, what their emotions are. Introspection works fine as a way of knowing what the current contents of one’s mind are. But people do not know how they arrived at those contents. A sincere answer to “What are you feeling right now?” is likely to be accurate, whereas a sincere answer to “Why do you feel that way?” is much less reliable.

~Roy Baumeister, The Cultural Animal, pp. 220–221


Filed under Consciousness, Psychology

The Refrigerator Light Problem

1.0 The Problem of Phenomenal Consciousness

Phenomenal consciousness has a familiar guise but is frustratingly mysterious. Difficult to define (Goldman, 1993), it involves the sense of there being “something-it-is-like” for an entity to exist. Many theorists have studied phenomenal consciousness and concluded physicalism is false (Chalmers, 1995, 2003; Jackson, 1982; Kripke, 1972; Nagel, 1974). Other theorists defend physicalism on metaphysical grounds but argue there is an unbridgeable “explanatory gap” for phenomenal consciousness (Howell, 2009; Levine, 1983, 2001). “Mysterians” have argued the explanatory gap is intractable because of how the human mind works (McGinn, 1989, 1999). Whatever it is, phenomenal consciousness seems to lurk amidst biological processes but never plays a clearly identifiable causal role that couldn’t be performed nonconsciously (Flanagan & Polger, 1995). After all, some philosophers argue for the possibility of a “zombie” (Chalmers, 1996) physically identical to humans but entirely devoid of phenomenal consciousness.

Debates in the sprawling consciousness literature often come down to differences in intuition concerning the basic question of what consciousness actually is. One question we might have about its nature concerns its pervasiveness. First, is consciousness pervasive throughout our own waking life? Second, is it pervasive throughout the animal kingdom? We might be tempted to answer the first question by introspecting on our experience and hoping that will help us with the second question. However, introspecting on our experience generates a well-known puzzle called the “refrigerator light problem”.

2.0 The Refrigerator Light Problem
2.1 Thick vs thin

The refrigerator light problem is motivated by the question, “Consciousness seems pervasive in our waking life, but just how pervasive is it?” Analogously, we can ask whether the refrigerator light is always on. Naively, it seems like it’s on even when the door is closed, but is it really? The question is easily answered because we can investigate the design and function of refrigerators and conclude that the light is designed to turn off when the door is closed. We could even cut a hole in the door to see for ourselves. However, the functional approach won’t work with phenomenal consciousness because we currently lack a theory of how phenomenal consciousness works or any consensus on what its possible function might be, or whether it could even serve a function.

The refrigerator light problem is the problem of deciding between two mutually exclusive views of consciousness (Schwitzgebel, 2007):

The Thick View: Consciousness seems pervasive because it is pervasive, but we often cannot access or report this consciousness.
The Thin View: Consciousness seems pervasive, but this is just an illusion.

The thick view is straightforward to understand, but the thin view is prima facie counterintuitive. How could we be wrong about how our own consciousness seems to us? Many philosophers argue that a reality/appearance distinction for consciousness itself is nonsensical because consciousness just is how things seem. In other words, if consciousness seems pervasive, then it is pervasive.

On the thin view, however, the fact that it seems like consciousness is pervasive is a result of consciousness generating a false sense of pervasiveness. The thin theorist thinks that anytime we try to become aware of what-it-is-like to enjoy nonintrospective experience, we activate our introspection by inquiring and thereby corrupt the data. The thin theorist is, for methodological reasons, skeptical about the idea of phenomenal consciousness existing without our ability to access or attend to it. If phenomenal consciousness can exist without any ability to report it, then how can psychologists study it if subjects must issue a report that they are conscious? Anytime a subject reports they are conscious, you can’t rule out that it is the reporting doing all the work. The thin theorist challenges us to become aware of these nonintrospective experiences such that we can report on their existence and meaningfully theorize about them.

Philosophers might appeal to special phenomenological properties to falsify the thin view. This won’t work because, in principle, one could develop a thin view to accommodate any of the special phenomenological properties ascribed to phenomenal consciousness such as the pervasive “raw feeling” of redness when introspecting on what-it-is-like to look at a strawberry or the “painfulness” of pain. Thin theory can simply explain away the experience of pervasiveness as an illusion generated by a mechanism that itself isn’t pervasive. Julian Jaynes is famous for defending a strong thin view:

Consciousness is a much smaller part of our mental life than we are conscious of, because we cannot be conscious of what we are not conscious of…It is like asking a flashlight in a dark room to search around for something that doesn’t have any light shining on it. The flashlight, since there is light in whatever direction it turns, would have to conclude that there is light everywhere. And so consciousness can seem to pervade all mentality when actually it does not. (1976, p. 23)

Thin vs thick views represent the two most common interpretations of the refrigerator light problem, and both seem to account for the data equally well. The problem is that from the perspective of introspection, both theories are indistinguishable. The mere possibility of the thin view being true motivates the methodological dilemma of the refrigerator light problem. How do we rule out thin explanations of thick phenomenology?

2.2 The Difference Introspection Makes

The intractability of the refrigerator light problem depends on the inevitable influence introspection has on nonintrospective experience. Consider the following case. Jones loves strawberries. He eats one a day at 3:00 pm. All day, Jones looks forward to 3:00 pm because it’s the one time of the day when he can savor the moment and take a break from the hustle-and-bustle of work. When 3:00 pm arrives, he first gazes longingly at the strawberry, his eyes soaking up its patterns of texture and color while his reflective mind contemplates how it will taste. Now Jones reaches out for the strawberry, puts it up to his mouth, and bites into it slowly, savoring and paying attention to the sweetness and delicate fibrosity that is distinctive of strawberries. What’s crucial is that Jones is not just enjoying the strawberry, but introspecting on the fact that he is enjoying the strawberry. That is, he is aware of the strawberry but also meta-aware of his first-order awareness.

Suppose we ask Jones what it’s like for him to enjoy the strawberry when he is not introspecting. The refrigerator light problem will completely stump him. Moreover, suppose we want to ascribe consciousness to Jones (or Jones wants to ascribe it to himself). Should we ascribe it before he starts introspecting or after? Naturally, the answer depends on whether we accept a thin or thick view. According to a thin view, whatever is present in Jones’ experience prior to introspection does not warrant the label “consciousness”. The thin theorist might call this pervasive property “nonconscious qualia” (Rosenthal, 1997), but they reserve the term “consciousness” to describe Jones’ metarepresentational awareness of his perceiving. The thin theorist would agree with William Calvin when he says, in defining “consciousness”, “The term should capture something of our advanced abilities rather than covering the commonplace” (1989, p. 78).

What about nonhuman animals? Whereas a thin theorist would say there is a difference in kind between human and rat consciousness, the thick theorist is likely to say that both the rat and Jones share the most important kind of pervasive consciousness. Is this jostling a purely terminological squabble? Kriegel (2009) has argued that the debate is substantial because theorists have different intuitions about the source of mystery for consciousness. The thick theorist thinks the mystery originates with first-order pervasiveness; the thin theorist thinks it originates with second-order awareness. Unfortunately, a squabble over intuitions is just as stale as a terminological dispute.

3.0 The Generality of the Refrigerator Light Problem
3.1 Introducing the Stipulation Strategy

If you are a scientist wanting to tackle the Hard problem of phenomenal consciousness, how would you respond to the refrigerator light problem? If the debate between thin and thick theories is either terminological or based on conflicting intuitions, what do you do? The only strategy I can think of for circumventing the terminological arbitrariness is to embrace it using what I call the stipulation strategy. It works like this. You first agree that we cannot resolve the thin vs thick debate using introspection alone. Unfazed, you simply stipulate some criterion for pointing phenomenal consciousness out such that it can be detected with empirical methods.

Possible criteria are diverse and differ from scientist to scientist. Some theorists stipulate that you will find phenomenal consciousness anytime you can find first-order (FO) perceptual representations of the right kind (Baars, 1997; Block, 1995; Byrne, 1997; Dretske, 1993, 2006; Tye, 1997). This would allow us to find many instances of phenomenal consciousness throughout the biological world, especially in creatures with nervous systems. However, we might have a more restricted criterion that says you will find phenomenal consciousness anytime you have higher-order (HO) thoughts/perceptions (Gennaro, 2004; Lycan, 1997; Rosenthal, 2005), restricting the instantiations of phenomenal consciousness to mammals or maybe even primates depending on your understanding of higher-order cognition. Or, more controversially, you might have a panpsychist stipulation criterion that makes it possible to point out phenomenal consciousness in the inorganic world.

Once we understand how the stipulation strategy works, the significance of any possible reductive explanation becomes trivialized qua explanation of phenomenal consciousness. To apply this result to contemporary views, I will start with FO theory, apply the same argument to HO theory, and then discuss the more counterintuitive (but equally plausible) theory of panpsychism.

3.2 The First-order Gambit

FO theorists deny the transitivity principle and claim one does not need to be meta-aware in order for there to be something-it-is-like to exist. The idea is that we can be in genuine conscious states but completely unaware of being in them. That is, FO theorists think there can be something-it-is-like for S to exist without S being aware of what-it-is-like for S to exist, a possibility HO theorists think absurd if not downright incoherent because the phrase “for S” suggests meta-awareness.

FO approaches are characterized by their use of perceptual awareness as the stipulation criterion for consciousness. A representative example is Dretske, who says “Seeing, hearing, and smelling x are ways of being conscious of x. Seeing a tree, smelling a rose, and feeling a wrinkle is to be (perceptually) aware (conscious) of the tree, the rose, and the wrinkle” (1993, p. 265). Dretske argues that once you understand what consciousness is (perceptual awareness), you will realize that one can be pervasively conscious without being meta-aware that you are conscious.

However, there is a serious problem with trying to reconcile the implications of theoretical stipulation criteria with common intuitions about which creatures are conscious. The problem with using perceptual awareness as our criterion is that it casts its net widely, perhaps too widely if you think phenomenality is only realized in nervous systems. Since many FO theorists think that if we are going to have a scientific explanation of phenomenal consciousness at all it must be a neural explanation (Block, 2007; Koch, 2004), they will want to avoid ascribing consciousness to nonneural organisms. However, if we stipulate that a bat has phenomenal consciousness in virtue of its capacity for perceptual awareness, I see no principled way of looking at the phylogenetic timeline and marking the evolution of neural systems as the origin of perceptual awareness.

To see why, consider chemotaxis in unicellular bacteria (Kirby, 2009; Van Haastert & Devreotes, 2004). Recently chemotaxis has been modeled using informatic or computational theory rather than classical mechanistic biology (Bourret & Stock, 2002; Bray, 1995; Danchin, 2009; Shapiro, 2007). A simple demonstration of chemotaxis would occur if you stuck a bacterium in a petri dish that had a small concentration of sugar on one side. The bacterium would be able to intelligently discriminate the sugar side from the non-sugar side and regulate its swimming behavior to move up the gradient. Naturally we assume the bacterium is able to perceive the presence of sugar and respond appropriately. On this simplistic notion of perceiving, perceiving a stimulus is, roughly speaking, a matter of valenced behavioral discrimination of that stimulus. By valenced, I mean that the stimuli are valued as either attractive or aversive with respect to the goals of the organism (in this case, survival and homeostasis). If the bacterium simply moved around randomly when placed in a sugar gradient such that the sugar had no particular attractive or aversive force, we might conclude that the bacterium is not capable of perceiving sugar, or that sugar is not ecologically relevant to the goals of the organism. But if the bacterium always moved up the sugar gradient, it is natural to say that the bacterium is capable of perceiving the presence of sugar. Likewise, if there were a toxin placed in the petri dish, we would expect this to be valenced as aversive and the bacterium would react appropriately by avoiding it, with appropriateness understood in terms of the goal of survival.
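The valenced, temporally extended discrimination just described can be captured in a toy simulation. Everything here (the linear sugar field, step size, and the simple run-and-tumble rule) is my own illustrative assumption, not a model drawn from the chemotaxis literature cited above: keep the current heading while the sensed concentration rises, tumble when it falls.

```python
import random

def sugar(x):
    """Toy concentration field: sugar increases toward larger x."""
    return x

def chemotaxis(steps=1000, step_size=0.1, seed=0):
    """Biased random walk loosely modeled on run-and-tumble behavior:
    keep the current heading while the sampled concentration rises,
    tumble (pick a fresh random heading) when it falls."""
    rng = random.Random(seed)
    x = 0.0
    heading = rng.choice([-1, 1])
    last = sugar(x)
    for _ in range(steps):
        x += step_size * heading
        now = sugar(x)
        if now < last:                  # stimulus valenced as aversive: tumble
            heading = rng.choice([-1, 1])
        last = now                      # temporal comparison, not a "snapshot"
    return x

print(chemotaxis())  # drifts up the gradient (final position well into positive x)
```

Notice that the walker ends up far upstream even though each decision uses nothing but a comparison between the current and previous samples, i.e. the temporal component of stimulation discussed below.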

Described in this minimal way, perceptual awareness in its most basic form does not seem so special that only creatures with nerve cells are capable of it. Someone might object that this is not a case of genuine perceptual awareness because there is nothing-it-is-like for the bacterium to sense the sugar or that its goals are not genuine goals. But how do we actually know this? How could we know this? For all we know, there is something-it-is-like for the bacterium to perceive the sugar. If we use perceptual awareness as our stipulation criterion, then we are fully justified in ascribing consciousness to even unicellulars.

Furthermore, it is misleading to say bacteria only respond to “proximal” stimulation, and therefore are not truly perceiving. Proximal stimulation implies an implausible “snapshot” picture of stimulation where the stimulation happens instantaneously at a receptor surface. But if stimuli can have a spatial (adjacent) component why can they not also have a temporal (successive) component? As J.J. Gibson put it, “Transformations of pattern are just as [biologically] stimulating as patterns are” (Gibson, 1966). And this is what researchers studying chemotaxis actually find: “for optimal chemotactic sensitivity [cells] combine spatial and temporal information” (Van Haastert & Devreotes, 2004, p. 626). The distinction between proximal stimulation and distal perception rests on a misunderstanding of what actually stimulates organisms.

Interestingly, the FO gambit offers resources for responding to the zombie problem. Since we have independent reasons to think bacteria are entirely physical creatures, if perceptual awareness is used as a stipulation criterion then the idea of zombie bacteria is inconceivable. Because bacterial perception is biochemical in nature, a perfect physical duplicate of a bacterium would satisfy the stipulation criterion we apply to creatures in the actual world. The problem, however, is that we have no compelling reason to choose FO stipulation criteria over any other, including HO criteria.

3.3 The Higher-order Gambit

HO theories are reductive and emphasize some kind of metacognitive representation as a criterion for ascribing phenomenal consciousness to a creature (e.g. awareness that you are aware). These HO representations are postulated in order to capture the “transitivity principle” (Rosenthal, 1997), which says that a conscious state is a state whose subject is, in some way, aware of being in it. A controversial corollary of the transitivity principle is that there are some genuinely qualitative mental states that are nonconscious e.g. nonconscious pain.

Neurologically motivated HO theories like Baars’s Global Workspace model (1988; 1997) and Dehaene’s Global Neuronal Workspace model (Dehaene et al., 2006; Dehaene, Kerszberg, & Changeux, 1998; 2001; Gong et al., 2009) have had great empirical success, but they are deeply unsatisfying as explanations of phenomenal consciousness. HO theory can explain our ability to report on or monitor our experiences, but many philosophers wonder how it could provide an explanation for phenomenal consciousness (Chalmers, 1995). Ambitious HO theorists reply by insisting they do in fact have an explanation of how phenomenal consciousness arises from nonconscious mental states.

However, ambitious HO approaches suffer from the same problem of arbitrariness that FO approaches did. In order to decide between FO and HO stipulation criteria we need to first decide on either a thick or thin interpretation of the refrigerator light problem. Since introspection is no help, we are forced to use the stipulation strategy. But why choose a HO stipulation strategy over a FO one? If everyone had the same intuitions concerning which creatures were conscious we could generate stipulation criteria that perfectly match these intuitions. The problem is that theorists have different intuitions concerning which creatures (besides themselves) are in fact conscious. Surprisingly, some theorists might go beyond the biological world altogether and claim inorganic entities are conscious.

3.4 The Panpsychist Gambit

A more radical stipulation strategy is possible. If antiphysicalist arguments suggest that neurons and biology have nothing to do with phenomenal consciousness, we might think that phenomenal consciousness is a fundamental feature of reality. On this view, matter itself is intrinsically experiential. Another idea is that phenomenality is necessitated by an even more fundamental property, called a protophenomenal property (Chalmers, 2003).

Panpsychism is a less popular stipulation gambit, but at least one prominent scientist has recently used a stipulation criterion that leads to panpsychism (although he downplays this result). Giulio Tononi (2008) proposes integrated information as a promising stipulation criterion. The intellectual weight of the theory rests on a thought experiment involving a photodiode. A photodiode discriminates between light and no light. But does the photodiode see the light? Does it experience the light? Most people would think not. But the photodiode does integrate information (1 bit to be precise) and therefore, according to the theory of integrated information, has some experience, however dim. Whatever theoretical or practical benefits come with accepting the theory of integrated information, when it comes to the Hard problem of phenomenal consciousness we are left scratching our heads as to why integrated information is the best criterion for picking out phenomenal consciousness. Given that the criterion leads to ascriptions of phenomenality to a photodiode, many theorists will take this as good reason for thinking the criterion itself is wrong, given their pretheoretical intuitions about what entities are phenomenally conscious. But as we have learned, intuitions are as diverse as they are unreliable.
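For what it’s worth, the “1 bit” figure is just the Shannon information of a two-way discrimination between equally likely states. Here is a quick sketch; the 50/50 probabilities are my illustrative assumption, and plain entropy is of course not Tononi’s integrated-information measure Φ, which further quantifies how a system’s parts constrain one another:

```python
from math import log2

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A photodiode that merely discriminates light vs. no light, each
# equally likely, reduces uncertainty by exactly one bit.
print(shannon_entropy([0.5, 0.5]))  # 1.0
```

A detector that could distinguish four equally likely light levels would carry two bits, and so on; the point of the thought experiment is that nothing in this arithmetic obviously tracks experience.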


Unable to define phenomenal consciousness, theorists are tempted to use their introspection to “point out” the phenomenon. The refrigerator light problem is motivated by the problem of deciding between thin and thick views of your own phenomenal consciousness using introspection alone. If introspection is supposed to help us understand what phenomenal consciousness is, and the refrigerator light problem prevents introspection from deciding between thin and thick views, then we need some other methodological procedure. The only option available is the stipulation strategy, whereby we arbitrarily stipulate a criterion for pointing it out, e.g., integrated information or higher-order thoughts. The problem is that any proposed stipulation criterion is just as plausible as any other, given we lack a pretheoretical consensus on basic questions such as the function of phenomenal consciousness. Our only hope is to push for the standardization of stipulation criteria.

p.s. If anyone wants the full reference for a citation, just ask.


Filed under Consciousness, Philosophy, Psychology

New paper (comments and criticism welcome): Consciousness and the Indeterminacy of Introspection


This is the first draft of my qualifying paper for Wash U. I got a revise and resubmit, so I am looking for feedback on how I can improve. It was already suggested to me that I am trying to do two things in the paper: (1) create a narrow argument that has results for higher-order theorists in particular and (2) create a broad argument that has results for anyone trying to reductively explain nonintrospective phenomenal consciousness. I was told I should pick which path to take. Right now I am leaning towards the broad argument since it is much more interesting (and potentially significant) and it would allow me to engage more with panpsychist views (which I only talk about in footnotes), but I’d be interested in hearing what other people thought. Here’s the abstract (which will have to change once I revise the paper but it gives you a general sense of what I am doing in the paper):

“Since it is widely recognized to be difficult to define phenomenal consciousness, theorists might use introspection to “point” to the phenomenon in order to fix upon what most needs explaining. However, there is a well-known methodological problem built into introspection – the “refrigerator light problem” – that prevents us from gaining introspective access to what we most want to explain in some theories of consciousness. To deal with this, some theorists simply stipulate criteria for pointing out the phenomenon that needs explanation. However, I argue that the most common stipulation strategies pose problems for Higher-order theories of phenomenal consciousness because they inevitably cast their net wide in ascribing phenomenal consciousness to nonhuman organisms. If I am right, then there are repercussions for how we understand the phenomenon that needs explanation when setting up the problem of consciousness.”


Filed under Consciousness

Getting scooped: Derk Pereboom's Qualitative Inaccuracy Hypothesis

One thing I have learned in studying philosophy is that there is rarely anything new under the sun. I thought I had come up with an original idea for the paper I am currently working on, but yesterday I was wandering the library stacks and randomly pulled out Derk Pereboom’s book Consciousness and the Prospects of Physicalism. I read the first page of the introduction and realized I had been scooped by Pereboom’s “Qualitative Inaccuracy Hypothesis”. According to this hypothesis, when we introspect upon our phenomenal experience our introspection represents our experience as having qualitative features that it in fact does not have. For example, I might introspect on my phenomenal experience and represent it as having special qualitative features that generate the Knowledge or Conceivability arguments against physicalism. Pereboom’s idea is that our introspection systematically misrepresents our phenomenal experience such that we are deluded into thinking our phenomenal experience is metaphysically “primitive” when in fact it is not. Although Pereboom only argues that the qualitative inaccuracy hypothesis is a live possibility, the mere possibility of it is enough to cause wrinkles in the Knowledge and Conceivability arguments. That is, if the hypothesis is correct, then Mary turns out to have a false belief upon stepping outside the room and introspecting upon her experience (since her introspection misrepresents, the belief she forms is false). Moreover, the conceivability and zombie argument doesn’t go through: if our phenomenal experience does not in fact have the special qualitative features we introspect it as having (primitiveness), then it becomes impossible to conceive of all physical truths being the same as they are now (P), a “that’s all” clause (T), and there not being phenomenal experience (~Q), for the same reason that it is impossible to conceive of PT holding and there not being any water.
That is, if our only evidence that phenomenality has the special features that make the zombie argument go through comes from introspection, and if introspection can get the data wrong, then the zombie argument does not work without the (questionable) assumption that our introspection is necessarily accurate.

However, despite getting scooped on this, I believe my paper is still an original contribution to the literature. For one, I give a more empirically plausible model of how our introspection works as well as more elaborate details on how it misrepresents our experience. I also tie this introspective inaccuracy to the well-known “refrigerator light problem” in consciousness studies. I also develop a methodological strategy for getting around the introspective inaccuracy that I call the “stipulation strategy”. From this, I develop some implications for our ascription of phenomenality to nonhuman organisms and argue that the most common stipulation strategies end up ascribing phenomenality almost everywhere in the organic world (which contradicts central tenets of Higher-order theory). This is a surprising conclusion. My paper is also well-sourced in the empirical literature and, unlike Pereboom, I don’t spend much time dealing with Chalmers and all the intricate details of the Knowledge and Conceivability arguments. I spend much more time developing a model of how introspection works and how it could possibly be inaccurate with respect to our own phenomenal experience.

So although it’s nice to know I’m not alone in arguing for what I call the “Indeterminacy of Introspection”, it’s always a shock to spend so much time developing what you think of as an original idea only to discover that someone else already had the same idea. Luckily, my paper has a lot more going on in it, and I think it can still be published as an original contribution to the literature.


Filed under Consciousness

A quick thought on pain and suffering

It is common for theorists to distinguish between pain and suffering. Pain is generally associated with nociception, a very primitive chemical detection system that responds to cellular damage signals. Suffering, in contrast, is usually defined as the minding of pain, sometimes called the “affectivity” or “unpleasantness” of pain. In humans and monkeys, the pain system and the minding system can be teased apart. Such a distinction has considerable moral implications for how we treat nonhuman animals. Many philosophers think that it is only the minding of pain, and not pain itself, that deserves moral consideration. Thus, any creature who only has nociception but does not mind pain will not fall under the full umbrella of moral consideration. Moreover, the minding system has been associated with the anterior cingulate cortex (ACC). All mammals have an ACC. Therefore, this seems like a good reason to grant all mammals moral status.

But I propose to make a further distinction between the minding of pain and the introspective awareness that you mind pain. It is unfortunate that the term “minding pain” seems to imply a kind of higher-order awareness, since “minding” sounds like a cognitively sophisticated capacity reminiscent of introspection. But if a rat can mind pain, how complex could it really be? Such a capacity doesn’t strike me as all that fancy. And I am skeptical that in humans we have really teased apart minding from introspective awareness of minding. Do we really know that what “bothers” humans is the minding shared with rats or the introspective awareness of minding? More experimentation will be needed to tease this apart, but it is difficult because the verbal reports necessary to determine minding levels seem to be confounded by introspective awareness.

Don’t take me the wrong way. I’m not arguing that only introspective awareness of minding is deserving of moral consideration. Otherwise, I’d be left with the conclusion that we can treat newborn babies as mere objects, a conclusion I obviously reject. It seems plausible that the ability to merely mind pain deserves some moral consideration. But the crucial question is, how much? It seems plausible to me that we have good reason to want to reduce all instances of minding pain in the universe. But it also seems plausible to me that we have good reason to prioritize the reduction of the introspective awareness of minding over the mere minding. This line of reasoning includes nonhuman mammals into the moral sphere, but does not place them on an equal status with well-developed human beings capable of introspective minding.


Filed under Consciousness, Philosophy

Is Higher-order Theory Really Defunct?

Last year Ned Block published a paper in Analysis called “The higher-order approach to consciousness is defunct”. In it, he offers a very simple and compelling argument that is supposed to expose the incoherence of both Higher-order Thought theory (HOT) and Higher-order Perception theory (HOP). Block first distinguishes modest and ambitious versions of these theories. The modest view is simply an account of “Higher-order consciousness” as distinct from what-it-is-likeness, while the ambitious view is designed to be a theory of what-it-is-likeness itself. According to Block, the Higher-order view is as follows:

The higher order theory: A mental state is conscious if and only if the state is the object of a certain kind of representation arrived at non-inferentially.

Block’s argument against the ambitious view rests on the possibility of radical misrepresentation, something acknowledged by all HO theorists. More specifically, Block has in mind the possibility of a “targetless” Higher-order representation. Block formulates his argument in terms of HOT, but since I am more interested in HOP, I will formulate it in terms of HOP instead. Suppose that Jones has a HOP that says “I am now having a red sensory experience” when in fact there is no first-order representation of redness. The HOP is in this case “empty”. But according to ambitious Higher-order theory, it is sufficient for there to be what-it-is-likeness so long as there is a HOP, since it is the HOP that generates what-it-is-likeness. But notice how the Higher-order theory is formulated. A mental state is conscious IFF the state is the object of a HOP. But there is no first-order mental state! As Block says, “Thus, the sufficient condition and the necessary condition are incompatible in a situation in which there is only one non-self-referential higher order representation.” Block (rightly) thinks this is incoherent.

According to ambitious Higher-order theorists, the targetless HO representation is enough to generate what-it-is-likeness. But the theory seems to require there to be a first-order state, since it is designed to show how first-order states become conscious. So the HO theorist seems to be stuck: the theory is supposed to explain how first-order states become conscious, yet it is committed to the idea that HO representations all by themselves can generate what-it-is-likeness, completely independently of the existence of any first-order state.

To be honest, I actually think Block has a nice argument here. But this is because I have always thought the ambitious version of HO theory is confused (see my paper “What is it like to be nonconscious?“). I don’t think higher-order theory is a theory of the origin of what-it-is-likeness, but rather, a theory of introspection. This is what William Lycan has supposedly claimed all along: that he is only offering a theory of introspective awareness. But wouldn’t it just be trivial to develop a “higher-order theory” of higher-order introspection? Well, it’s not trivial so long as we are trying to decide between HOP and HOT as an account of higher-order consciousness. Personally, I think HOP is better suited neurologically to explain higher-order introspective awareness.

But I am also skeptical of the very possibility of a truly targetless HOP. I just can’t make much neurological sense of such a possibility. Let’s assume an overly simplistic neural theory of introspection such that introspection is neurally realized in the frontal cortex. On this simplistic view, the frontal cortex is constantly receiving input from the other areas of the brain and introspecting upon that content. It seems to me that in order for there to be a truly targetless HOP, either the frontal cortex would have to be completely isolated from the rest of the brain, or the rest of the brain would have to be turned off. In the latter case, it seems like the person would simply be brain dead. And the former case seems just as unrealistic, since the idea of the frontal cortex having zero synaptic connections to any other area of the brain seems too incredible. So long as the rest of the brain is working, and there is at least one synaptic connection to the frontal cortex, then the frontal cortex will have something to “work with” in performing its introspective monitoring function.

Consider Damasio’s theory of primal background feelings arising in the brain stem and other primitive circuitry. Presumably these kinds of first-order mental states can’t just be “turned off” without severely incapacitating the subject. And if these background feelings can make their way to the frontal cortex (as seems plausible), the introspective machinery will always have something to work with. So the case of a truly targetless HOP seems unrealistic to me. However, it seems more realistic to assume that radical misrepresentation of first-order states is possible. This seems like what’s going on when people are on psychedelic drugs or hallucinating. But it’s never the case that the frontal cortex is completely spinning in the void, without having any input from first-order systems. We can then reformulate the higher-order theory to coherently (and perhaps trivially) say “a mental state is the object of introspective awareness just when it is accompanied by a higher-order representation”. No surprises there. The only thing that’s left is just to develop a theoretical model of the evolutionary and ontogenetic origins of such introspective awareness (no easy feat, as Jaynes shows).

Where does this leave us then in terms of Block’s attack on HO theory? Well, I believe the attack is successful against ambitious HO views, since it seems entirely plausible to me that there is something-it-is-like for first-order sensorimotor systems to be operative. But so long as we are sufficiently modest in our ambitions about what HO theory can explain, then it seems like HOP theory is on solid ground for making sense of our human powers of introspection. Where I disagree with Lycan, however, is that he thinks the introspective machinery of HOP is simplistic enough to be shared by many nonhuman mammals. My own research has led me to conclude that the introspective machinery of HOP is unique to humans, and that such introspective machinery is what accounts for the great cognitive differences between humans and nonhuman animals. If HOP is a theory of higher-order consciousness, then I believe that HOP is also a theory of what makes humans cognitively unique. While there are likely simpler homologues of introspective machinery in other primates, it seems to me that human introspection is at a much higher level of sophistication. Following Julian Jaynes, I believe this sophistication stems from our linguistic mastery. More specifically, learning linguistic concepts related to psychological functions allows us to think about thinking. This linguistically mediated recursion seems to allow for an “intentional ascension” whereby we engage in truly metarepresentational cognition. This allows us to think about the fact that we are thinking about the fact that we are thinking, and so on.

So, I don’t think Higher-order theory is really defunct. It’s defunct as a theory of what-it-is-likeness, but that’s not really all that surprising, given that the usual criteria for ascribing what-it-is-likeness are cases where we think there is simple sensation going on. And it’s just absurd to suppose that sensation requires the existence of metarepresentation. So that alone gives us good reason to make a phenomenological distinction between what-it-is-like to be a simple sensing creature and what-it-is-like to be a creature with both sensation and the capacity for higher-order representation. Where I disagree with Block is his view that what-it-is-likeness is a property generated in neural systems, since I think there is good reason to ascribe phenomenality to creatures lacking nervous systems. And unlike Block, I also don’t think what-it-is-likeness generates an epistemic or explanatory gap once we understand what exactly it is we are referring to when we use such a term.

1 Comment

Filed under Consciousness

Too HOT to Tell: The Failure of Introspection

I’m working on a new paper that will probably be used as my first Qualifying Paper for the Wash U PhD program to be turned in at the beginning of the Fall semester (the program requires the submission of 3 Qualifying Papers instead of comps). There is a central argument in the paper that I wanted to hopefully get some feedback on and see what people think. I call it the Failure of Introspection Argument. It goes something like this:

  1. When philosophers set up the “hard problem of phenomenal consciousness”, they often point out the phenomenon of phenomenal consciousness by asking you to imagine the “raw feel” of, e.g., “the juiciness of a strawberry”, the “raw feel” of the “redness” of looking at a red color patch, or the “raw feel” of pain.
  2. Often what philosophers think of as their own “raw” experiences, such as the experience of “juiciness”, are not in fact “raw”, if by raw we mean unfiltered by higher-order conceptual machinery. Philosophers have insufficiently demonstrated that their own introspection gives them access to truly raw feelings. What their introspection actually gives access to is very conceptually loaded experiences.
  3. To address (2), philosophers might simply stipulate that what they’re interested in are the raw feels that exist independently of complex higher-order machinery, such as those of a bat, a newborn baby, or a global aphasic.
  4. But without a definite criterion to determine whether an entity does in fact have phenomenal consciousness, the stipulation approach fails to stop the threat of ascribing phenomenal consciousness to entities like single-celled organisms (are you sure there is nothing-it-is-like to be an amoeba?).
  5. Philosophers should therefore reconsider the project of offering a higher-order explanation of phenomenal consciousness.

The idea behind premise (1) is that when philosophers talk about phenomenal consciousness they don’t define it so much as attempt to point out the phenomenon. Perhaps the most common way to point out phenomenal consciousness is to say things like “Imagine the raw feelings of juiciness as you bite into a strawberry”, or “Imagine the raw visual experience of redness when looking at a red color patch”. So whenever philosophers try to point out the phenomenon of consciousness within their own phenomenology, they point to these “raw feelings” discovered through introspection.

Premise (2) is controversial in one way and uncontroversial in another. It’s relatively uncontroversial that introspection itself is a higher-order operation, so it’s trivial to say that introspection involves conceptually loaded experience. But what’s controversial is to say that, when introspecting on their raw feelings, philosophers have no principled way to determine which experiential properties are raw and which aren’t. So, for example, in the case of experiencing a “raw feel” of redness when looking at a color patch, my basic hypothesis is that the “redness quale” is a product of higher-order brain operations and is not itself an experiential primitive.

But it is important to realize that I am not claiming that phenomenal consciousness itself is a product of higher-order operations. I think phenomenal consciousness and higher-order operations directed towards phenomenal consciousness are two entirely different things. But where I differ from most same-order theorists is that I think the appeal to “raw feelings” discovered in human introspection is unable to deliver the goods in terms of demonstrating that the “redness” of the color patch is in fact a primitive experiential property. My claim is that human higher-order machinery generates specific sensory “gazing” qualities that are only present when we step back and reflect on what it is exactly that we see. But in accordance with versions of affordance theory, my claim is that when a mouse perceives a red color patch, it does not perceive the redness qua redness, but rather, purely as a means to some behavioral end. So if the red color patch was a sign for where cheese is located, the mouse’s perceptual content would not be “raw redness” but “sign-of-cheese”. That is, it would be cashed out in terms of what Heidegger called something’s “in-order-to”.

For example, let’s imagine a carpenter who lacked all higher-order thoughts but was still capable of basic sensorimotor skills. I would say that the carpenter’s perception of a hammer would not be akin to how a philosopher might introspect on what it is like to perceive a hammer. Instead, the carpenter would perceive the hammer as something-for-hammering. The “raw sensory qualia” such as the hammer’s “brownness” are mental contents only available to creatures capable of non-affordance perception. I personally think that such an ability partially stems from complex linguistic skills, but that’s another story. The point is that based on the concept of affordance perception and notions of ecologically relevant perception, it becomes psychologically unrealistic to posit the content of “raw feels” in non-human animals. And since human introspection is unable to tell “from within” whether the experiential content is a product of raw feels or tinged by higher-order machinery, the only way to reliably “point out” the phenomenon of phenomenal consciousness is to stipulate it into existence.

This brings me to premise (3). Since it becomes difficult to use human introspection to point out raw feels, philosophers might simply stipulate that they are interested in the experiential properties that exist independently of higher-order thought, such as those experiential properties had by, say, a mouse, a bat, a newborn baby, or perhaps a global aphasic. The problem with the stipulation approach, however, is this: if you are going to say a bat has phenomenally conscious states in virtue of its echolocation, on a suitably mechanistic account of echolocation, it’s going to turn out that echolocation is not all that different from the type of perception a single-celled organism is capable of. If all we mean by perception is the discrimination of stimuli, then it’s clear that single-celled organisms are capable of a very rudimentary type of perception. But since most philosophers who talk about phenomenal consciousness seem to think it’s a property of the brain, this broad-brushed ascription to lowly single-celled organisms is problematic. Worse, it starts to look like phenomenal consciousness is not that interesting a property, given that it’s shared by a bacterium, a mouse, and a human.

There is plenty of room for disagreement about whether bacteria are in fact phenomenally conscious. (It might be argued that phenomenal perceptions require the possibility of misrepresentation and bacteria can’t misrepresent. I personally think the appeal to representation doesn’t work, given William Ramsey’s arguments about the “job description” challenge and the fundamental problem of representation.) But even if you were to offer a plausible and rigorous definition of phenomenal consciousness that somehow excludes single-celled organisms, you will still run into a sorites paradox when trying to figure out just when in the phylogenetic timeline phenomenal consciousness arose. Since it’s not a well-defined property, this seems like a difficult if not impossible task. Or worse, it seems at least possible to argue for panpsychism with respect to phenomenal consciousness. Can we really just rule it out a priori? I don’t think so.

For these reasons amongst others, I think higher-order theory should give up on trying to account for phenomenal consciousness. What I think HOT is best suited to explain is not phenomenal consciousness but higher-order introspection upon first-order sensory contents. I think it is a mistake to think that phenomenal consciousness itself is generated by higher-order representations. But since phenomenal consciousness is really just a property that we stipulate into existence, it doesn’t seem all that important to attempt a scientific explanation of how it arises out of neural tissue. We should give up on using HOT to explain phenomenal consciousness and stick to something more scientifically tractable: giving a functional account of just how it is that philosophers are capable of introspecting on their experience and then thinking and talking about it.



Filed under Consciousness, Philosophy

Some Conscious Thoughts on Consciousness


What is consciousness? A perennial question, no doubt. How do we start to answer this question? Let me begin by saying that I think Socrates was entirely misguided in the Theaetetus when he forbade Theaetetus from using examples to help answer the question, “What is knowledge?” Similarly, I think the best way to start answering the question of consciousness is to know what needs explaining through real-life examples or analogies. What is conscious thought? Plato thought (no pun intended) that conscious thought was essentially talking to oneself. Contemporary thinkers probably hold this to be laughably naive, but I beg to differ.

In fact, I think Plato had some significant insight into the nature of consciousness but little to no insight into the nature of nonconscious or “online” aptic structures. Let me explain. As Julian Jaynes defined them, “Aptic structures are the neurological basis of aptitudes that are composed of an innate evolved aptic paradigm plus the results of experience in development…They are organizations of the brain, always partially innate, that make the organism apt to behave in a certain way under certain conditions”. The Ancient Greeks simply did not have a contemporary understanding or mental taxonomy of nonconscious processing, what cognitive scientists refer to as the “cognitive unconscious”. The concept of the unconscious mind would not be thought of until centuries later (think about what that entails for a second). The cognitive unconscious is the system that beats our heart, controls our hormone levels, makes us breathe, helps us walk and ride bikes, enables saccadic motion, moves our tongues when we speak, etc. Why was Plato not conscious of the unconscious? Because, as Julian Jaynes says,

Consciousness is a much smaller part of our mental life than we are conscious of, because we cannot be conscious of what we are not conscious of. How simple that is to say; how difficult to appreciate! It is like asking a flashlight in a dark room to search around for something that does not have any light shining upon it. The flashlight, since there is light in whatever direction it turns, would have to conclude that there is light everywhere. And so consciousness can seem to pervade all mentality when actually it does not.

This paradox of consciousness puts the Theaetetus into a lot of perspective. Plato understood thought to be “talking to oneself” because when he retired into himself to think thoughts, all he discovered was analog talking. Analog talking is made possible by our ability to really talk. Try it yourself. Close your eyes and consciously think to yourself the thought “I am now thinking a conscious thought. It sure is fun to think conscious thoughts.” (If you do not know how to think conscious thoughts, then I am amazed you are reading this post.) As you repeat this thought experiment, pay attention to the thought as a phenomenon. This requires taking what Husserl called the “reductive stance”, namely, a perspective of inquiry upon the nature of your own experience.

I hope this exercise demonstrated the intuitive plausibility of Plato’s thesis about conscious thought as talking to oneself. But you might have noticed that when you close your eyes, there are more possibilities of imaginative-reconstruction than mere verbal play. You can also imagine colors and vibrant patterns as you think thoughts, and you can also think of places, conversations, faces, lovers, sex, porn, relationships, patterns, puzzles, social dilemmas, tragedy, death, mortality, future pleasure, past pain, compassion, books, music, melodies, television, projects, papers, ideas, inventions, theology, scientific problems, metaphysics, epistemology, philosophy of mind, and writing blog posts! Indeed, I was lying in bed tonight thinking and after trying the above thought experiment, this very post sprang into my head and here I am writing it at 11:04pm.

Now we know what consciousness is. Or we should at least have a general sense or intuitive feel for what needs explaining. Basically, consciousness is a functional operation that allows us to do certain things, what Andy Clark calls “epistemic actions”. These actions involve the manipulation of information (not in Shannon’s sense) in virtual “workspaces”. The phonological loop and visual-spatial sketchpad are familiar examples of such a workspace. This operation runs somewhat “off-line” from the “online” bodily control loops which constitute our cognitive unconscious. Julian Jaynes describes offline consciousness (what I have called “J-consciousness”, for Jaynesian consciousness) eloquently and precisely. He says

[Consciousness] is an operation rather than a thing, a repository, or a function. It operates by way of analogy, by way of constructing an analog space with an analog “I” that can observe that space, and move metaphorically in it. It operates on any reactivity, [consciously selects] relevant aspects, narratizes and [assimilates] them together in a metaphorical space where such meanings can be manipulated like things in space. Conscious mind is a spatial analog of the world and mental acts are analogs of bodily acts.

[Consciousness] is an analog of what is called the real world. It is built up with a vocabulary or lexical field whose terms are all metaphors or analogs of behavior in the physical world. Its reality is of the same order as mathematics. It allows us to shortcut behavioral processes and arrive at more adequate decisions. Like mathematics, it is an operator rather than a thing or repository. And it is intimately bound up with volition and decision.

Now, with Chalmers and friends in mind, where is the problem? Does the concept of “virtual workspace” really invite spooky questions of dualism and explanatory skepticism? If so, why? No doubt, this is one of the greatest intellectual challenges that humanity has faced. No one said understanding consciousness would be easy. But hard with a capital “H”? That doesn’t follow.

1 Comment

Filed under Philosophy, Psychology