Tag Archives: phenomenal consciousness

The Refrigerator Light Problem

1.0 The Problem of Phenomenal Consciousness

Phenomenal consciousness has a familiar guise but is frustratingly mysterious. Difficult to define (Goldman, 1993), it involves the sense of there being “something-it-is-like” for an entity to exist. Many theorists have studied phenomenal consciousness and concluded physicalism is false (Chalmers, 1995, 2003; Jackson, 1982; Kripke, 1972; Nagel, 1974). Other theorists defend physicalism on metaphysical grounds but argue there is an unbridgeable “explanatory gap” for phenomenal consciousness (Howell, 2009; Levine, 1983, 2001). “Mysterians” have argued the explanatory gap is intractable because of how the human mind works (McGinn, 1989; 1999). Whatever it is, phenomenal consciousness seems to lurk amidst biological processes but never plays a clearly identifiable causal role that couldn’t be performed nonconsciously (Flanagan & Polger, 1995). After all, some philosophers argue for the possibility of a “zombie” (Chalmers, 1996) physically identical to humans but entirely devoid of phenomenal consciousness.

Debates in the sprawling consciousness literature often come down to differences in intuition concerning the basic question of what consciousness actually is. One question we might have about its nature concerns its pervasiveness. First, is consciousness pervasive throughout our own waking life? Second, is it pervasive throughout the animal kingdom? We might be tempted to answer the first question by introspecting on our experience and hoping that will help us with the second question. However, introspecting on our experience generates a well-known puzzle: the “refrigerator light problem”.

2.0 The Refrigerator Light Problem
2.1 Thick vs thin

The refrigerator light problem is motivated by the question, “Consciousness seems pervasive in our waking life, but just how pervasive is it?” Analogously, we can ask whether the refrigerator light is always on. Naively, it seems like it’s on even when the door is closed, but is it really? The question is easily answered because we can investigate the design and function of refrigerators and conclude that the light is designed to turn off when the door is closed. We could even cut a hole in the door to see for ourselves. However, the functional approach won’t work with phenomenal consciousness because we currently lack a theory of how phenomenal consciousness works or any consensus on what its possible function might be, or whether it could even serve a function.

The refrigerator light problem is the problem of deciding between two mutually exclusive views of consciousness (Schwitzgebel, 2007):

The Thick View: Consciousness seems pervasive because it is pervasive, but we often cannot access or report this consciousness.
The Thin View: Consciousness seems pervasive, but this is just an illusion.

The thick view is straightforward to understand, but the thin view is prima facie counterintuitive. How could we be wrong about how our own consciousness seems to us? Many philosophers argue that a reality/appearance distinction for consciousness itself is nonsensical because consciousness just is how things seem. In other words, if consciousness seems pervasive, then it is pervasive.

On the thin view, however, the fact that it seems like consciousness is pervasive is a result of consciousness generating a false sense of pervasiveness. The thin theorist thinks that anytime we try to become aware of what-it-is-like to enjoy nonintrospective experience, we activate our introspection by inquiring and thereby corrupt the data. For methodological reasons, the thin theorist is skeptical about the idea of phenomenal consciousness existing without our ability to access or attend to it. If phenomenal consciousness can exist without any ability to report it, how can psychologists study it if subjects must issue a report that they are conscious? Anytime a subject reports they are conscious, you can’t rule out that it is the reporting doing all the work. The thin theorist challenges us to become aware of these nonintrospective experiences such that we can report on their existence and meaningfully theorize about them.

Philosophers might appeal to special phenomenological properties to falsify the thin view. This won’t work because, in principle, one could develop a thin view to accommodate any of the special phenomenological properties ascribed to phenomenal consciousness such as the pervasive “raw feeling” of redness when introspecting on what-it-is-like to look at a strawberry or the “painfulness” of pain. Thin theory can simply explain away the experience of pervasiveness as an illusion generated by a mechanism that itself isn’t pervasive. Julian Jaynes is famous for defending a strong thin view:

Consciousness is a much smaller part of our mental life than we are conscious of, because we cannot be conscious of what we are not conscious of…It is like asking a flashlight in a dark room to search around for something that doesn’t have any light shining on it. The flashlight, since there is light in whatever direction it turns, would have to conclude that there is light everywhere. And so consciousness can seem to pervade all mentality when actually it does not. (1976, p. 23)

Thin vs thick views represent the two most common interpretations of the refrigerator light problem, and both seem to account for the data equally well. The problem is that from the perspective of introspection, both theories are indistinguishable. The mere possibility of the thin view being true motivates the methodological dilemma of the refrigerator light problem. How do we rule out thin explanations of thick phenomenology?

2.2 The Difference Introspection Makes

The intractability of the refrigerator light depends on the inevitable influence introspection has on nonintrospective experience. Consider the following case. Jones loves strawberries. He eats one a day at 3:00 pm. All day, Jones looks forward to 3:00 pm because it’s the one time of the day when he can savor the moment and take a break from the hustle-and-bustle of work. When 3:00 pm arrives, he first gazes longingly at the strawberry, his eyes soaking up its patterns of texture and color while his reflective mind contemplates how it will taste. Now Jones reaches out for the strawberry, puts it up to his mouth, and bites into it slowly, savoring and paying attention to the sweetness and delicate fibrosity that is distinctive of strawberries. What’s crucial is that Jones is not just enjoying the strawberry, but introspecting on the fact that he is enjoying the strawberry. That is, he is aware of the strawberry but also meta-aware of his first-order awareness.

Suppose we ask Jones what it’s like for him to enjoy the strawberry when he is not introspecting. The refrigerator light problem will completely stump him. Moreover, suppose we want to ascribe consciousness to Jones (or Jones wants to ascribe it to himself). Should we ascribe it before he starts introspecting or after? Naturally, the answer depends on whether we accept a thin or thick view. According to a thin view, whatever is present in Jones’ experience prior to introspection does not warrant the label “consciousness”. The thin theorist might call this pervasive property “nonconscious qualia” (Rosenthal, 1997), but they reserve the term “consciousness” to describe Jones’ metarepresentational awareness that he is perceiving. The thin theorist would agree with William Calvin when he says, in defining “consciousness”, “The term should capture something of our advanced abilities rather than covering the commonplace” (1989, p. 78).

What about nonhuman animals? Whereas a thin theorist would say there is a difference in kind between human and rat consciousness, the thick theorist is likely to say that both the rat and Jones share the most important kind of pervasive consciousness. Is this just a terminological squabble? Kriegel (2009) has argued that the debate is substantial because theorists have different intuitions about the source of mystery for consciousness. The thick theorist thinks the mystery originates with first-order pervasiveness; the thin theorist thinks it originates with second-order awareness. Unfortunately, a squabble over intuitions is just as stale as a terminological dispute.

3.0 The Generality of the Refrigerator Light Problem
3.1 Introducing the Stipulation Strategy

If you are a scientist wanting to tackle the Hard problem of phenomenal consciousness, how would you respond to the refrigerator light problem? If the debate between thin and thick theories is either terminological or based on conflicting intuitions, what do you do? The only strategy I can think of for circumventing the terminological arbitrariness is to embrace it using what I call the stipulation strategy. It works like this. You first agree that we cannot resolve the thin vs thick debate using introspection alone. Unfazed, you simply stipulate some criterion for pointing phenomenal consciousness out such that it can be detected with empirical methods.

Possible criteria are diverse and differ from scientist to scientist. Some theorists stipulate that you will find phenomenal consciousness anytime you can find first-order (FO) perceptual representations of the right kind (Baars, 1997; Block, 1995; Byrne, 1997; Dretske, 1993, 2006; Tye, 1997). This would allow us to find many instances of phenomenal consciousness throughout the biological world, especially in creatures with nervous systems. However, we might have a more restricted criterion that says you will find phenomenal consciousness anytime you have higher-order (HO) thoughts/perceptions (Gennaro, 2004; Lycan, 1997; Rosenthal, 2005), restricting the instantiations of phenomenal consciousness to mammals or maybe even primates depending on your understanding of higher-order cognition. Or, more controversially, you might have a panpsychist stipulation criterion that makes it possible to point out phenomenal consciousness in the inorganic world.

Once we understand how the stipulation strategy works, the significance of any possible reductive explanation becomes trivialized qua explanation of phenomenal consciousness. To apply this result to contemporary views, I will start with FO theory, apply the same argument to HO theory, and then discuss the more counterintuitive (but equally plausible) theory of panpsychism.

3.2 The First-order Gambit

FO theorists deny the transitivity principle and claim one does not need to be meta-aware in order for there to be something-it-is-like to exist. The idea is that we can be in genuine conscious states but completely unaware of being in them. That is, FO theorists think there can be something-it-is-like for S to exist without S being aware of what-it-is-like for S to exist, a possibility HO theorists think absurd if not downright incoherent because the phrase “for S” suggests meta-awareness.

FO approaches are characterized by their use of perceptual awareness as the stipulation criterion for consciousness. A representative example is Dretske, who says “Seeing, hearing, and smelling x are ways of being conscious of x. Seeing a tree, smelling a rose, and feeling a wrinkle is to be (perceptually) aware (conscious) of the tree, the rose, and the wrinkle” (1993, p. 265). Dretske argues that once you understand what consciousness is (perceptual awareness), you will realize that one can be pervasively conscious without being meta-aware that you are conscious.

However, there is a serious problem with trying to reconcile the implications of theoretical stipulation criteria with common intuitions about which creatures are conscious. The problem with using perceptual awareness as our criterion is that it casts its net widely, perhaps too widely if you think phenomenality is only realized in nervous systems. Since many FO theorists think that if we are going to have a scientific explanation of phenomenal consciousness at all it must be a neural explanation (Block, 2007; Koch, 2004), they will want to avoid ascribing consciousness to nonneural organisms. However, if we stipulate that a bat has phenomenal consciousness in virtue of its capacity for perceptual awareness, I see no principled way of looking at the phylogenetic timeline and marking the evolution of neural systems as the origin of perceptual awareness.

To see why, consider chemotaxis in unicellular bacteria (Kirby, 2009; Van Haastert & Devreotes, 2004). Recently chemotaxis has been modeled using informatic or computational theory rather than classical mechanistic biology (Bourret & Stock, 2002; Bray, 1995; Danchin, 2009; Shapiro, 2007). A simple demonstration of chemotaxis would occur if you stuck a bacterium in a petri dish that had a small concentration of sugar on one side. The bacterium would be able to intelligently discriminate the sugar side from the non-sugar side and regulate its swimming behavior to move up the gradient. Naturally we assume the bacterium is able to perceive the presence of sugar and respond appropriately. On this simplistic notion of perceiving, perceiving a stimulus is, roughly speaking, a matter of valenced behavioral discrimination of that stimulus. By valenced, I mean that the stimuli are valued as either attractive or aversive with respect to the goals of the organism (in this case, survival and homeostasis). If the bacterium simply moved around randomly when placed in a sugar gradient such that the sugar had no particular attractive or aversive force, we might conclude that the bacterium is not capable of perceiving sugar, or that sugar is not ecologically relevant to the goals of the organism. But if the bacterium always moved up the sugar gradient, it is natural to say that the bacterium is capable of perceiving the presence of sugar. Likewise, if there were a toxin placed in the petri dish, we would expect this to be valenced as aversive and the bacterium would react appropriately by avoiding it, with appropriateness understood in terms of the goal of survival.
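The notion of valenced behavioral discrimination can be sketched computationally. The following is a toy model of my own (it is not taken from the cited chemotaxis literature, and the gradient and probabilities are invented for illustration): a simulated bacterium performs a one-dimensional run-and-tumble walk, reversing direction only rarely when the attractant concentration is rising and often when it is not, which is enough to produce reliable movement up the gradient.

```python
import random

def concentration(x):
    """A hypothetical sugar gradient: concentration rises toward the x = 100 side of the dish."""
    return max(0.0, x) / 100.0

def run_and_tumble(steps=2000, seed=0):
    """Biased random walk: tumble (reverse direction) rarely when concentration is increasing."""
    rng = random.Random(seed)
    x, direction = 0.0, 1
    prev = concentration(x)
    for _ in range(steps):
        x += direction                      # "run" one unit in the current direction
        now = concentration(x)
        # Valenced discrimination: sugar is attractive, so a rising
        # concentration suppresses tumbling.
        p_tumble = 0.05 if now > prev else 0.5
        if rng.random() < p_tumble:
            direction = -direction          # "tumble"
        prev = now
    return x
```

Nothing in this sketch requires a nervous system, which is the point: valenced discrimination of a stimulus can be realized by a purely biochemical feedback loop.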

Described in this minimal way, perceptual awareness in its most basic form does not seem so special that only creatures with nerve cells are capable of it. Someone might object that this is not a case of genuine perceptual awareness because there is nothing-it-is-like for the bacterium to sense the sugar or that its goals are not genuine goals. But how do we actually know this? How could we know this? For all we know, there is something-it-is-like for the bacterium to perceive the sugar. If we use perceptual awareness as our stipulation criterion, then we are fully justified in ascribing consciousness to even unicellulars.

Furthermore, it is misleading to say bacteria only respond to “proximal” stimulation, and therefore are not truly perceiving. Proximal stimulation implies an implausible “snapshot” picture of stimulation where the stimulation happens instantaneously at a receptor surface. But if stimuli can have a spatial (adjacent) component why can they not also have a temporal (successive) component? As J.J. Gibson put it, “Transformations of pattern are just as [biologically] stimulating as patterns are” (Gibson, 1966). And this is what researchers studying chemotaxis actually find: “for optimal chemotactic sensitivity [cells] combine spatial and temporal information” (Van Haastert & Devreotes, 2004, p. 626). The distinction between proximal stimulation and distal perception rests on a misunderstanding of what actually stimulates organisms.

Interestingly, the FO gambit offers resources for responding to the zombie problem. Since we have independent reasons to think bacteria are entirely physical creatures, if perceptual awareness is used as a stipulation criterion then the idea of zombie bacteria is inconceivable. Because bacterial perception is biochemical in nature, a perfect physical duplicate of a bacterium would satisfy the stipulation criterion we apply to creatures in the actual world. The problem, however, is that we have no compelling reason to choose FO stipulation criteria over any other, including HO criteria.

3.3 The Higher-order Gambit

HO theories are reductive and emphasize some kind of metacognitive representation as a criterion for ascribing phenomenal consciousness to a creature (e.g. awareness that you are aware). These HO representations are postulated in order to capture the “transitivity principle” (Rosenthal, 1997), which says that a conscious state is a state whose subject is, in some way, aware of being in it. A controversial corollary of the transitivity principle is that there are some genuinely qualitative mental states that are nonconscious e.g. nonconscious pain.

Neurologically motivated HO theories like Baars’s Global Workspace model (1988; 1997) and Dehaene’s Global Neuronal Workspace model (Dehaene et al., 2006; Dehaene, Kerszberg, & Changeux, 1998; 2001; Gong et al., 2009) have had great empirical success, but they are deeply unsatisfying as explanations of phenomenal consciousness. HO theory can explain our ability to report on or monitor our experiences, but many philosophers wonder how it could provide an explanation for phenomenal consciousness (Chalmers, 1995). Ambitious HO theorists reply by insisting they do in fact have an explanation of how phenomenal consciousness arises from nonconscious mental states.

However, ambitious HO approaches suffer from the same problem of arbitrariness that FO approaches did. In order to decide between FO and HO stipulation criteria we need to first decide on either a thick or thin interpretation of the refrigerator light problem. Since introspection is no help, we are forced to use the stipulation strategy. But why choose a HO stipulation strategy over a FO one? If everyone had the same intuitions concerning which creatures were conscious, we could generate stipulation criteria that perfectly match these intuitions. The problem is that theorists have different intuitions concerning which creatures (besides themselves) are in fact conscious. Surprisingly, some theorists might go beyond the biological world altogether and claim inorganic entities are conscious.

3.4 The Panpsychist Gambit

A more radical stipulation strategy is possible. If antiphysicalist arguments suggest that neurons and biology have nothing to do with phenomenal consciousness, we might think that phenomenal consciousness is a fundamental feature of reality. On this view, matter itself is intrinsically experiential. Another idea is that phenomenality is necessitated by an even more fundamental property, called a protophenomenal property (Chalmers, 2003).

Panpsychism is a less popular stipulation gambit, but at least one prominent scientist has recently used a stipulation criterion that leads to panpsychism (although he downplays this result). Giulio Tononi (2008) proposes integrated information as a promising stipulation criterion. The intellectual weight of the theory rests on a thought experiment involving a photodiode. A photodiode discriminates between light and no light. But does the photodiode see the light? Does it experience the light? Most people would think not. But the photodiode does integrate information (1 bit to be precise) and therefore, according to the theory of integrated information, has some experience, however dim. Whatever theoretical or practical benefits come with accepting the theory of integrated information, when it comes to the Hard problem of phenomenal consciousness we are left scratching our heads as to why integrated information is the best criterion for picking out phenomenal consciousness. Since the criterion leads to ascriptions of phenomenality to a photodiode, many theorists will take this as good reason for thinking the criterion itself is wrong, given their pretheoretical intuitions about which entities are phenomenally conscious. But as we have learned, intuitions are as diverse as they are unreliable.

Conclusion

Unable to define phenomenal consciousness, theorists are tempted to use their introspection to “point out” the phenomenon. The refrigerator light problem is motivated by the problem of deciding between thin and thick views of your own phenomenal consciousness using introspection alone. If introspection is supposed to help us understand what phenomenal consciousness is, and the refrigerator light problem prevents introspection from deciding between thin and thick views, then we need some other methodological procedure. The only option available is the stipulation strategy, whereby we arbitrarily stipulate a criterion for pointing it out, e.g. integrated information or higher-order thoughts. The problem is that any proposed stipulation criterion is just as plausible as any other, given that we lack a pretheoretical consensus on basic questions such as the function of phenomenal consciousness. Our only hope is to push for the standardization of stipulation criteria.

p.s. If anyone wants the full reference for a citation, just ask.


Filed under Consciousness, Philosophy, Psychology

Book review: Giulio Tononi's Phi: A Voyage from the Brain to the Soul

Phi is easily the most unusual book on consciousness I have read in a while. It’s hard to describe, but Tononi makes his case for “integrated information” using poetry, art, metaphor, and fiction. Each chapter is a fictional vignette or dialogue between characters inspired by famous scientists like Galileo, Darwin, or Francis Crick. At the end of every chapter is a “note” written in normal academic language explaining the context of the stories. On just about every page there are huge full-color glossy pictures of famous art. The book is simply beautiful as a physical object in an attempt, I suspect, to convince qualiaphiles that Tononi is “one of them”.

The theory of integrated information itself, however, is less appealing. Here is how integrated information is defined:

Integrated information measures how much can be distinguished by the whole above and beyond its parts, and Phi is its symbol. A complex is where Phi reaches its maximum, and therein lives one consciousness - a single entity of experience.

And with that Tononi hopes the “hard” problem of consciousness is solved. However, the intellectual weight of Phi rests on a thought experiment involving a photodiode. A photodiode discriminates between light and no light. But does the photodiode see the light? Does it experience the light? Most people would think not. But the photodiode does integrate information (1 bit to be precise) and therefore, according to the theory of integrated information, has some experience, however dim. The theory of integrated information is therefore a modern form of panpsychism based on the informational axiom of “it from bit”. For obvious reasons Tononi downplays the panpsychist implications of his theory, but he does admit it. Consider this quote:

“Compared to [a camera], even a photodiode is richer, it owns a wisp of consciousness, the dimmest of experiences, one bit, because each of its states is one of two, not one of trillions” (p. 162)

The reason the camera is not rich is that it can be broken down into a million individual photodiodes. According to Tononi, the reason why the camera has a low level of Phi compared to a brain is that the brain integrates information between all its specialized processors and the camera does not. But nevertheless, each photodiode has a “wisp of consciousness”.
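Tononi’s measure itself is mathematically involved, but the bookkeeping behind the “1 bit” claim and the camera comparison can be illustrated with plain Shannon entropy. This is a simplification of my own, not Tononi’s actual Phi calculation:

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# A photodiode has two equally likely states (light / no light): 1 bit.
photodiode_bits = entropy([0.5, 0.5])

# A camera sensor is a million photodiodes, but they are causally
# independent, so its repertoire factorizes into that of its parts.
camera_bits = 1_000_000 * photodiode_bits

# The "whole above and beyond its parts" -- the spirit of Phi -- is zero
# for the camera, since the parts alone account for all the information.
whole_minus_parts = camera_bits - 1_000_000 * photodiode_bits
```

On this toy accounting, the camera carries vastly more raw information than a single diode but no information integrated across its parts, which mirrors Tononi’s claim that the camera’s Phi is low while each diode retains its single bit.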

Tononi also uses a thought experiment involving a “qualiascope”, a hypothetical device that measures integrated information and can therefore be used to detect consciousness in the world around us. In the vignettes, Tononi writes that when you use the qualiascope:

“‘You’ll look in vain at rocks and rivers, clouds and mountains,’ said the old woman. ‘The highest peak is small when you compare it to the tiny moth'” (p. 222).

This is how he downplays his panpsychism. Notice how he doesn’t say that rocks and clouds altogether lack consciousness. It’s just that their “highest peak” of Phi is low compared to a moth. The important part however is that the Phi of rocks and clouds is low but not nonexistent.

Why is this important? Because Tononi wants to have his cake and eat it too. To see why just look at some of his chapter subtitles:

Chapter 3 “In which is shown that the corticothalamic system generates consciousness”
Chapter 4 “In which is shown that the cerebellum, while having more neurons than the cerebrum, does not generate consciousness.”

This is because Tononi admires the Neural Correlates of Consciousness methodology founded by none other than Francis Crick, who has a strong intellectual presence throughout the book. According to most NCC approaches, consciousness seems to depend on “corticothalamic” loops and not just specialized processors alone (like the cerebellum). This finding comes from research correlating behavioral reports of consciousness with activity of the brain. When most people report being conscious, higher-order system loops are activated. And in monkey experiments the “report” is a judgement about whether they see a stimulus, which can be made by pressing a lever. What they find in the NCC approach is that consciousness seems to depend on more than just specialized processors operating alone. It requires a kind of globalized network of communicating modules to “generate” consciousness.

It should now be plain as day why Tononi is inconsistent in trying to have his cake and eat it too. If a lowly inorganic photodiode has a “wisp of consciousness”, then clearly, by any standard, a single neuron also has a wisp of consciousness, as well as the entire cerebellum. Tononi acknowledges this:

“Perhaps a whiff of consciousness still breathes inside your sleeping brain but is so feeble that with discretion it makes itself unnoticed. Perhaps inside your brain asleep the repertoire is so reduced that it’s no richer than in a waking ant, or not by much. Your sleeping Phi would be much less than when your brain is fast awake, but still not nil” (p. 275).

“Early on, an embryo’s consciousness – the value of its Phi – may be less than a fly’s. The shapes of its qualia will be less formed than its unformed body, and less human than that: featureless, undistinguished, undifferentiated lumps that do not bear the shape of sight and sound and smell” (p. 281)

“Phi may be low for individual neurons” (p. 344)

But if a single neuron has a wisp of consciousness, then clearly consciousness is not “generated” by the corticothalamic system. It is instead a fundamental property of matter itself. It from bit. What Tononi means to say with his chapter subtitles is that “The corticothalamic system generates the right amount of Phi to make consciousness interesting and precious to humans”. The difference between the photodiode and the corticothalamic system is a difference of degree. The corticothalamic system has a high enough level of Phi to make an interesting difference to human experience, one we can report or notice, distinguishing coma patients (very low Phi) from awake alert adults (very high Phi).

But now there is an interesting tension in Tononi’s theory. If there is a low but nonnegligible amount of Phi in a human embryo, Tononi’s theory must now specify a cut-off point for the lowest amount of Phi we actually care about, so we can figure out, for example, at what point in development abortion becomes morally problematic. Until Tononi answers that question, his “solution” to the hard problem of consciousness is fairly disappointing. He came up with the notion of integrated information to explain qualia, but now we are faced with the difficult question of “How much Phi is necessary for us to care about?” Clearly no one really cares about the “wisp of consciousness” in a photodiode. So having solved the “hard” problem of qualia, Tononi just creates an equally difficult problem: how to figure out the amount of Phi worth caring about from a moral perspective. And he plainly admits he hasn’t solved these problems.

But for me this is a huge problem. You can’t have your cake and eat it too if you are a panpsychist. You can’t say that photodiodes are conscious but then say the only interesting consciousness is that of corticothalamic systems. This seems rather ad hoc to me; a solution meant to fit into preexisting research trends. If you are a panpsychist you should embrace the radical conclusion. According to Phi theory, consciousness is everywhere. It is not “generated” in the brain. It only reaches a high level of Phi in the brain. And if that’s the case, then the entire methodology of NCC is mistaken. NCC is not a true NCC but rather the “Neural Correlates of the Amount of Consciousness Humans Actually Care About”.

Overall conclusion: Phi is an interesting book and worth borrowing from the library. But I wouldn’t say it adequately solves the hard problem of consciousness. Not even close. What it does is arbitrarily stipulate criteria for pointing out consciousness in nonhuman entities. But Tononi never makes a real argument beyond appeals to intuition for why we should accept a definition of consciousness such that the ascriptions come out with photodiodes having a “wisp” of consciousness. I think most people will want to define stipulation criteria such that the ascriptions come out with only biological creatures having consciousness. Panpsychism is just too radical for most. So while I applaud Tononi for exploring this ancient idea from a modern perspective, I ultimately think that when people truly understand that Tononi is a panpsychist they will be less attracted to it despite its close relationship to Francis Crick and the wildly popular NCC approach.


Filed under Consciousness, Philosophy, Psychology

Getting scooped: Derk Pereboom's Qualitative Inaccuracy Hypothesis

One thing I have learned in studying philosophy is that there is rarely anything new under the sun. I thought I had come up with an original idea for the paper I am currently working on, but yesterday I was wandering the library stacks and randomly pulled out Derk Pereboom’s book Consciousness and the Prospects of Physicalism. I read the first page of the introduction and realized I had been scooped by Pereboom’s “Qualitative Inaccuracy Hypothesis”. According to this hypothesis, when we introspect upon our phenomenal experience our introspection represents our experience as having qualitative features that it in fact does not have. For example, I might introspect on my phenomenal experience and represent it as having special qualitative features that generate the Knowledge or Conceivability arguments against physicalism. Pereboom’s idea is that our introspection systematically misrepresents our phenomenal experience such that we are deluded into thinking our phenomenal experience is metaphysically “primitive” when in fact it is not. Although Pereboom only argues that the qualitative inaccuracy hypothesis is a live possibility, the mere possibility of it is enough to cause wrinkles in the Knowledge and Conceivability arguments. That is, if the hypothesis is correct, then Mary turns out to have a false belief upon stepping outside the room and introspecting upon her experience (since her introspection misrepresents it, her resulting belief is false and so fails to be knowledge). Moreover, the conceivability (zombie) argument doesn’t go through, because if our phenomenal experience does not in fact have the special qualitative features we introspect it as having (primitiveness), then it becomes impossible to conceive of all physical truths being the same as they are now (P), a “that’s all” clause (T), and there not being phenomenal experience (~Q), for the same reason that it’s impossible to conceive of PT and there not being any water.
That is, if our only evidence for phenomenality having the special features that make the zombie argument go through comes from introspection, and it is possible for introspection to get the data wrong, then the zombie argument does not work without the (questionable) assumption that our introspection is necessarily accurate.

However, despite getting scooped on this, I believe my paper is still an original contribution to the literature. For one, I give a more empirically plausible model of how our introspection works, as well as more elaborate details on how it misrepresents our experience. I also tie this introspective inaccuracy to the well-known “refrigerator light problem” in consciousness studies, and I develop a methodological strategy for getting around the introspective inaccuracy that I call the “stipulation strategy”. From this, I develop some implications for our ascription of phenomenality to nonhuman organisms and argue that the most common stipulation strategies end up ascribing phenomenality almost everywhere in the organic world (which contradicts central tenets of Higher-order theory). This is a surprising conclusion. My paper is also well-sourced in the empirical literature, and unlike Pereboom, I don’t spend much time dealing with Chalmers and all the intricate details of the Knowledge and Conceivability arguments. I spend much more time developing a model of how introspection works and how it could possibly be inaccurate with respect to our own phenomenal experience.

So although it’s nice to know I’m not alone in arguing for what I call the “Indeterminacy of Introspection”, it’s always a shock to spend so much time developing what you think of as an original idea and then discover that someone else already had it. Luckily, my paper has a lot more going on in it, and I think it can still be published as an original contribution to the literature.


Filed under Consciousness

Some Thoughts on Christof Koch's New Book and the Neuronal Correlates of Consciousness

I’m reading Christof Koch’s new book Consciousness: Confessions of a Romantic Reductionist and wanted to put some thoughts down in writing in order to get clearer about what exactly is going on with Koch’s understanding of consciousness. Koch is famously interested in the neuronal correlates of consciousness. First, what does Koch mean by consciousness? He uses a mix of four different definitions:

1. “A commonsense definition equates consciousness with our inner, mental life.”

2. “A behavioral definition of consciousness is a checklist of actions or behaviors that would certify as conscious any organism that could do one or more of them.”

3. “A neuronal definition of consciousness specifies the minimal physiologic mechanisms required for any one conscious sensation.”

4. A philosophical definition, “consciousness is what it is like to feel something.”

I have the sneaking suspicion that Koch can’t possibly be talking about the last definition, phenomenal consciousness. Why? Because he says things like “The neural correlates of consciousness must include neurons in the prefrontal cortex”. So on Koch’s view, phenomenal content is a high-level phenomenon that is not produced when there is only lower-level activity in the primary visual cortex.

To support this view, Koch describes the work of Logothetis and the binocular rivalry experiments in monkeys. In these experiments, monkeys are trained to pull a different lever depending on whether they see a starburst pattern or a flag pattern. The researchers then projected a different image to each eye to induce binocular rivalry.

“Logothetis then lowered fine wires into the monkey’s cortex while the trained animal was in the binocular rivalry setup. In the primary visual cortex and nearby regions, he found only a handful of neurons that weakly modulated their response in keeping with the monkey’s percept. The majority fired with little regard to the image the animal saw. When the monkey signaled one percept, legions of neurons in the primary visual cortex responded strongly to the suppressed image that the animal was not seeing. This result is fully in line with Francis’s and my hypothesis that the primary visual cortex is not accessible to consciousness.”

 I think this line of thinking is greatly confused if it is supposed to be an account of the origin of qualia or phenomenal content. First of all, it’s not clear that we can rule out the existence of phenomenal content in very simple organisms that lack nervous systems, let alone prefrontal cortices. Is there something-it-is-like to be a slug, or an amoeba? I don’t see how we can rule this out a priori. This puts pressure on Koch’s claim that what he is talking about is the origin of qualia. I think Koch is talking about something else. What I actually think the Logothetis experiments are getting at is the neural correlates of complex discrimination and reporting, which produce new forms of (reportable) subjectivity.

For example, let’s imagine that we remove the monkey’s higher-order regions so that there is just the primary visual cortex responding to the stimuli. How can we rule out the possibility that there is something-it-is-like for the monkey to have its primary visual cortex respond? I don’t see how we can possibly do this. Notice that in the original training scenario, the only way to know for sure that the monkeys see the different images is for the monkeys to “report” by pulling a lever. This is a kind of behavioral discrimination. But how do we know there is nothing-it-is-like to “see” the stimuli without reporting? This is why I don’t think Koch should be appealing to the philosophical definition of phenomenal consciousness. It’s too slippery a concept and can be applied to a creature even in the absence of behavioral discrimination, for we can always coherently ask, “How do you know for sure that there is nothing-it-is-like for the monkey when it does not behaviorally discriminate the stimuli?”

The fact that Koch relies so closely on the possibility of reporting conscious percepts indicates he cannot be talking about phenomenal consciousness, because we have no principled way to rule out the presence of phenomenal consciousness in the absence of reporting. And this is especially true if we are willing to ascribe phenomenal consciousness to very simple creatures that don’t have the kind of higher-order cortical capacities that Koch thinks are necessary for consciousness. Koch seems to admit this, because he very briefly mentions the possibility of there being “protoconsciousness” in single-celled bacteria, but he doesn’t dwell on the implications this would have for his quest to find the “origin of qualia” in higher-order neuronal processes. If there is protoconsciousness or protoqualia in single-celled bacteria, then the brain would not be the producer of qualia, but only the great modifier of qualia. If bacteria are phenomenally conscious, then the brain cannot be the origin of phenomenal content, but only a way to produce ever more complex phenomenal content. Accordingly, the Logothetis experiments don’t show that higher-order brain areas are necessary for phenomenal content, but only for phenomenal content of a particular kind. The experiments show instead that higher-order brain regions are necessary for the phenomenal content of complex behavioral discrimination.

Let me explain. A bacterium is capable of very basic perceptual discrimination. For example, it can discriminate the presence of sugar in a petri dish. But this is not a very complex kind of discrimination compared to the discrimination being done by the monkey when it pulls a lever in the presence of a flag stimulus. The causal chain of mediation is much more complex in the monkey than in the bacterium. On this view, phenomenal content comes in degrees. It is present in bacteria to a very low degree. It is present to a higher degree in flies, worms, and monkeys. I believe it is even present in completely comatose patients (I at least see no way to rule this possibility out), but to a very low degree. And it’s higher in vegetative patients, higher still in minimally conscious patients, of course super-high in fully awake mammals like primates, and extraordinarily high in fully awake adult humans.

So what I think Koch’s NCC approach is doing is finding the neural correlates of highly complex forms of discrimination and reporting. Koch and Crick define the neural correlates of consciousness as “the minimal neural mechanisms jointly sufficient for any one specific conscious percept”. If we understand “conscious” here in terms of phenomenal consciousness, then I think the NCC approach does no such thing. Rather, the NCC specifies the minimal neural mechanisms for a conscious percept that is reportable. These are hugely different things. But this doesn’t mean that Koch is completely misguided in his quest to find the NCC for conscious percepts that are reportable (Bernard Baars actually defines consciousness in exactly this way). Since the ability to intelligently report is critical to our ability to act in the world, finding the NCC of percepts that can be reported will still be highly useful in coming up with diagnostic criteria for minimally conscious patients. Except, on my terminology, “minimally conscious patients” cannot really mean minimally phenomenally conscious, since that would imply there is nothing-it-is-like to be in a vegetative state (which we can’t conclusively rule out). Instead, we should understand it as “minimally capable of high-level report”, with report understood very broadly to mean not just verbal report, but any kind of meaningful discrimination and responsiveness. And as I tried to make clear in my last post, the ability to report on your phenomenal states is very much capable of modifying phenomenality in such a way as to give rise to new forms of subjectivity, what I call “sensory gazing”.

I therefore think we should drop the quest to find the neural correlates of phenomenal consciousness. Of the four definitions that Koch uses, he should give up on the fourth, because phenomenal consciousness is just too slippery to be useful in distinguishing coma patients from minimally responsive patients, or in understanding what’s going on in the binocular rivalry cases. So when Koch says “Francis and I proposed that a critical component of any neural correlate of consciousness is the long-distance, reciprocal connections between higher-order sensory regions, located in the back of the cerebral cortex, and the planning and decision-making regions of the prefrontal cortex, located in the front”, he can’t possibly be talking about phenomenal consciousness so long as we cannot conclusively rule out the possibility of protoconsciousness in bacteria. What I actually think Koch is homing in on is the neural correlates of reflective consciousness. And it’s perfectly coherent to talk about simple forms of reflective consciousness that are present in monkeys and other mammals. Reflective here could simply mean “downstream from primary sensorimotor processing”. Uniquely human self-reflection and mind-wandering could then be understood in terms of an amplification and redeployment of these reflective circuits for new, culturally modified purposes (think of how reading circuitry in humans is built out of other more basic circuits). It would make sense that any human-unique circuitry would be built out of preexisting circuitry that we share with other primates (cf. Michael Anderson’s massive redeployment hypothesis). And the impact of language on these reflective circuits would certainly modify them enough to account for human-typical cognitive capacities. The point then is that we can account for Koch’s findings without supposing that he is talking about the origin of qualia.

UPDATE:

Having read more of the book, it’s only fair that I amend my interpretation of Koch’s theory. Following Giulio Tononi’s theory of Integrated Information, Koch seems to espouse a kind of panpsychism, and admits that even bacteria might have a very, very dim kind of phenomenal experience. So he doesn’t seem to ultimately think that higher-order brain processes are the origin of qualia, which directly contradicts some of the things he says earlier in the book. This is very confusing in light of what he says about binocular rivalry and other phenomena. He seems to think that even a mote of dust or a piece of dirt has a dim sliver of phenomenal experience. Although this is an intriguing hypothesis (and it seems to be at least logically possible), it only confirms my opinion that if phenomenal consciousness is an intelligible property at all, it is not a very useful one for doing cognitive science, since on certain definitions it can be applied to almost anything. Personally, I think that if we are going to make sense of qualia at all (and I’m not sure we ever will), it will have to be the type of property that “arises” (whatever that means) in living organisms, but not in inorganic entities.


Filed under Consciousness, Philosophy, Psychology

Too HOT to Tell: The Failure of Introspection

I’m working on a new paper that will probably be used as my first Qualifying Paper for the Wash U PhD program to be turned in at the beginning of the Fall semester (the program requires the submission of 3 Qualifying Papers instead of comps). There is a central argument in the paper that I wanted to hopefully get some feedback on and see what people think. I call it the Failure of Introspection Argument. It goes something like this:

  1. When philosophers set up the “hard problem of phenomenal consciousness”, they often point out the phenomenon of phenomenal consciousness by asking you to imagine the “raw feel” of, e.g., the “juiciness” of a strawberry, the “raw feel” of the “redness” of looking at a red color patch, or the “raw feel” of pain.
  2. Often what philosophers think of as their own “raw” experiences, such as the experience of “juiciness”, are not in fact “raw”, if by raw we mean unfiltered by higher-order conceptual machinery. Philosophers have insufficiently demonstrated that their own introspection gives them access to truly raw feelings. What their introspection actually gives them access to are conceptually loaded experiences.
  3. To address (2), philosophers might simply stipulate that what they’re interested in are the raw feels that exist independently of complex higher-order machinery, such as those of a bat, a newborn baby, or a global aphasic.
  4. But without a definite criterion to determine whether an entity does in fact have phenomenal consciousness, the stipulation approach fails to stop the threat of ascribing phenomenal consciousness to entities like single-celled organisms (are you sure there is nothing-it-is-like to be an amoeba?).
  5. Philosophers should therefore reconsider the project of offering a higher-order explanation of phenomenal consciousness.

 The idea behind premise (1) is that when philosophers talk about phenomenal consciousness they don’t define it so much as attempt to point out the phenomenon. Perhaps the most common way to point out phenomenal consciousness is to say things like “Imagine the raw feelings of juiciness as you bite into a strawberry”, or “Imagine the raw visual experience of redness when looking at a red color patch”. So whenever philosophers try to point out the phenomenon of consciousness within their own phenomenology, they point to these “raw feelings” discovered in their phenomenology through introspection.

Premise (2) is controversial in one way and uncontroversial in another. It’s relatively uncontroversial that introspection itself is a higher-order operation, so it’s trivial to say that introspection involves conceptually loaded experience. What’s controversial is to say that, when introspecting on their raw feelings, philosophers have no principled way to determine which experiential properties are raw and which aren’t. So, for example, in the case of experiencing a “raw feel” of redness when looking at a color patch, my basic hypothesis is that the “redness quale” is a product of higher-order brain operations and is not itself an experiential primitive.

But it is important to realize that I am not claiming that phenomenal consciousness itself is a product of higher-order operations. I think phenomenal consciousness and higher-order operations directed towards phenomenal consciousness are two entirely different things. But where I differ from most same-order theorists is that I think the appeal to “raw feelings” discovered in human introspection is unable to deliver the goods in terms of demonstrating that the “redness” of the color patch is in fact a primitive experiential property. My claim is that human higher-order machinery generates specific sensory “gazing” qualities that are only present when we step back and reflect on what it is exactly that we see. But in accordance with versions of affordance theory, my claim is that when a mouse perceives a red color patch, it does not perceive the redness qua redness, but rather, purely as a means to some behavioral end. So if the red color patch was a sign for where cheese is located, the mouse’s perceptual content would not be “raw redness” but “sign-of-cheese”. That is, it would be cashed out in terms of what Heidegger called something’s “in-order-to”.

For example, let’s imagine a carpenter who lacked all higher-order thoughts but was still capable of basic sensorimotor skills. I would say that the carpenter’s perception of a hammer would not be akin to how a philosopher might introspect on what it is like to perceive a hammer. Instead, the carpenter would perceive the hammer as something-for-hammering. The “raw sensory qualia” such as the hammer’s “brownness” are mental contents available only to creatures capable of non-affordance perception. I personally think that such an ability partially stems from complex linguistic skills, but that’s another story. The point is that based on the concept of affordance perception and notions of ecologically relevant perception, it becomes psychologically unrealistic to posit the content of “raw feels” in non-human animals. And since human introspection is unable to tell “from within” whether the experiential content is a product of raw feels or is tinged by higher-order machinery, the only way to reliably “point out” the phenomenon of phenomenal consciousness is to stipulate it into existence.

This brings me to premise (3). Since it becomes difficult to use human introspection to point out raw feels, philosophers might simply stipulate that they are interested in the experiential properties that exist independently of higher-order thought, such as those experiential properties had by, say, a mouse, a bat, a newborn baby, or perhaps a global aphasic. The problem with the stipulation approach, however, is this: if you are going to say a bat has phenomenally conscious states in virtue of its echolocation, then, on a suitably mechanistic account of echolocation, it’s going to turn out that echolocation is not all that different from the type of perception a single-celled organism is capable of. If all we mean by perception is the discrimination of stimuli, then it’s clear that single-celled organisms are capable of a very rudimentary type of perception. But since most philosophers who talk about phenomenal consciousness seem to think it’s a property of the brain, this broad-brushed ascription to lowly single-celled organisms is problematic. Worse, it starts to look like phenomenal consciousness is not that interesting a property, given that it’s shared by a bacterium, a mouse, and a human.

There is plenty of room for disagreement about whether bacteria are in fact phenomenally conscious. (It might be argued that phenomenal perceptions require the possibility of misrepresentation and bacteria can’t misrepresent; I personally think the appeal to representation doesn’t work, given William Ramsey’s arguments about the “job description” challenge and the fundamental problem of representation.) But even if you were to offer a plausible and rigorous definition of phenomenal consciousness that somehow excludes single-celled organisms, you will still run into a sorites paradox when trying to figure out just when in the phylogenetic timeline phenomenal consciousness arose. Since it’s not a well-defined property, this seems like a difficult if not impossible task. Or worse, it seems at least possible to argue for panpsychism with respect to phenomenal consciousness. Can we really just rule it out a priori? I don’t think so.

For these reasons, amongst others, I think higher-order theory should give up on trying to account for phenomenal consciousness. What I think HOT is best suited to explain is not phenomenal consciousness but higher-order introspection upon first-order sensory contents. I think it is a mistake to think that phenomenal consciousness itself is generated by higher-order representations. And since phenomenal consciousness is really just a property that we stipulate into existence, it doesn’t seem all that important to attempt a scientific explanation of how it arises out of neural tissue. We should give up on using HOT to explain phenomenal consciousness and stick to something more scientifically tractable: giving a functional account of just how it is that philosophers are capable of introspecting on their experience and then thinking and talking about their experience.



Filed under Consciousness, Philosophy

On the Alleged Failure of Behaviorism, a Defense of Gilbert Ryle

After having heard so much about Gilbert Ryle’s magnum opus The Concept of Mind, but never having read it myself, I was very pleased to find a used copy in a bookstore for $5. I have of course heard other people’s comments on The Concept of Mind, but I have only recently come to realize that I have been hearing strawmen. By my estimation, most people think that Ryle gave the best defense of philosophical behaviorism possible, but that the book is still a failure because, e.g., it fails to account for subjectivity, phenomenal consciousness, etc. It seems to me that many philosophers are liable to write Ryle off as a simple-minded behaviorist who likened all mental activity to dispositional properties like “the glass is brittle because it will shatter in the right conditions”. Likewise, the accusation against Ryle is that he fails to capture the “inner life” of phenomenal consciousness because everything “mental” is just a behavioral disposition and talk of “inner life” is but a category mistake.

This criticism of Ryle seems misguided, in that Ryle was far from denying the reality of inner conscious mentality. Indeed, Ryle spends a great amount of time talking about silent monologues “in the head”, imaginings, fancies, conjuring images in the “mind’s eye”, episodic remembering, etc. It seems then that Ryle has a good grasp on the explanandum of consciousness, namely, the internal processes which generate the illusion of having a “mind space” in the head wherein one can carry out activities like silent speech, imagination, rumination, etc. Where Ryle differs from the dualist is not so much in denying that “inner activity” happens (which would be absurd), but rather in denying that the “inner mind space” refers to a literal place in the head that is somehow nonphysical. When dualists claim that imagining goes on “in the mind”, they are usually unconsciously adopting the “in the head” metaphor. Ryle was ahead of his time in pointing out the metaphorical character of expressions like “I am having a silent monologue in my head or in my mind”. So the difference between Ryle and the dualist is that whereas the dualist thinks that mental activity literally takes place in a nonphysical “location” (the mind), Ryle recognizes that when someone is having an inner monologue there is only one activity happening, namely, the inner monologue, which is a skill.

So although many philosophers are liable to lump Ryle in with eliminativists who deny that mental activity takes place at all, Ryle fully concedes that we do sometimes do things “in our heads” (such as daydream); he argues only that this does not mean there is a “ghost in the machine”, a secret theater of consciousness that is fully private, inaccessible to others, and wholly mysterious. Ryle claims that we only metaphorically think that there is such a secret theater. We are deluded and misled by inside/outside metaphors into thinking that when we perform inner monologues there is both physical activity (brain processing, etc.) and a happening “in the mind”, which is nonphysical. Instead of two processes happening (one physical, one mental), there is only one process, but it is explainable in multiple ways.

To show this, let’s borrow an example from Wittgenstein and imagine an experimenter who hooks himself up to an fMRI machine so that he can look at his brain activity in real time as he thinks various thoughts. Let’s say he has an inner speech thought T, namely, “This is so cool that I can see my brain”. The critical question is: how many things are happening when thought T happens? Is there just one thing, or two? The dualist is forced to say there are two things: the thought T and the brain activity correlated with the thought. The physicalist says there is only one thing. In concluding that there is only one thing, does the physicalist then deny that thought T happened? Hardly. As Wittgenstein claims, it is legitimate to conclude that both the brain activity and the actual thought T can be seen as “expressions of thought”.

Are we then justified in claiming that thought takes place “in the head”? Yes, but only as a hypothesis. The dualist wants to claim that we are justified in claiming that conscious thought happens “in the head” because it seems to them that their thinking is happening in an “internal mindspace”. The physicalist claims that we are justified in claiming that conscious thought happens “in the head” because the hypothesis “thought takes place in the head” is in principle testable through fMRI. So which is the superior position? I think the Rylean physicalist is in better shape, because the claim about thinking happening in the head/brain is not made on the basis of infallible first-person knowledge, but is arrived at through a public process of reasoning about publicly available data, and is falsifiable (we could, for example, discover that thought is actually beamed into our brains from an alien overlord hovering over Earth).

But what about phenomenal consciousness? Isn’t Rylean behaviorism missing something? I think Ryle has plenty of means to account for the subjective experience of animals, including humans. Since Ryle claims the seat of confusion regarding “inner life” is the mistake of taking metaphorical expressions literally, it seems we could develop a psychological account of how metaphorical cognition generates the qualitative feeling of “insideness”. This would be similar to Julian Jaynes’ approach to consciousness, which sees it as a function operating on the basis of lexical metaphors that generates conscious experiences of “mind space”. When we have an inner speech episode and conclude that, because we have such an experience, there is a literal space inside our heads where such speech takes place, we are being fooled by the illusion of insideness generated by inside/outside metaphors. The illusion persists even once you are aware that it is an illusion.

Consequently, “phenomenal consciousness” can be understood in two ways. The first way is in terms of “what it is like” to be an organism in general. The “who” of this subjectivity is the unconscious self (which might be better thought of as a bundle of selves). The unconscious self is simply that self which reacts to incoming stimuli in such a way as to maintain the autonomy of organismic life. This is why the unconscious is called the adaptive unconscious: it helps all animals stay alive. I believe that the unconscious self is not a special emergent feature of brain activity, but can be found even in organisms that lack a nervous system. The nervous system is merely an evolutionary development of the reactive mind which allows for increasingly adaptive behavior. Of course, the development of the nervous system gives rise to “new phenomenal feels”, but I don’t believe these feels are enough to mark any sharp evolutionary cut-off point for when “phenomenal consciousness” arose. So in the first sense of phenomenal consciousness, if you are a living organism with a body, then you are phenomenally conscious insofar as there is “something it is like” to react to stimuli in such a way as to maintain your metabolism.

The second sense of phenomenal consciousness can be understood in terms of the “phenomenal difference” of human-specific cognition, e.g. rumination, articulation, inner speech, contemplation, imagining, mental time travel, propositional reasoning, full-blown theory of mind, etc. I thus claim that “what it is like” to be human is radically different from “what it is like” to be a bat, and that the phenomenal difference is so great as to necessitate a new, evolutionarily recent sense of “phenomenal consciousness” specific to humans. Why is this distinction needed? Because part of the functional profile of human-specific cognition is to form meta-cognitive acts of amplification and modulation on first-order sensory networks. In essence, human-specific consciousness (which I have called “Jaynesian consciousness”) operates on the information embedded in the adaptive unconscious and generates “higher-order” mental states that give rise to new forms of subjective feeling. We can now make distinctions between things like pain and suffering, where pain is a first-order adaptive process and suffering is the conscious rumination on pain. One of the most interesting “side effects” of higher-order phenomenal consciousness is the generation of “sensations”. Conscious sensations are different from the classic psychological distinction between sensation and perception, where sensation is the mere transduction of energy at receptor sites and perception is the extraction of meaning from the stimulation. On my account, “conscious sensation” is closer to perception than it is to sensation. In other words, conscious sensation is the meta-cognitive act of introspecting on first-order perceptual activity. This meta-cognitive act generates “feelings” of privacy, inwardness, ineffability, wonder, magic, etc. To introspect on your sensory stream is more than just paying attention to something. It is to experience your own sensations in terms of various mental constructs that are evolutionarily recent and socially mediated. Following Dennett and Jaynes, I claim that one cannot have experiences of “inwardness” until there is a social construct available which makes an inside/outside psychological distinction. Such a distinction is evolutionarily recent (perhaps less than 10,000 years old). Once the inside/outside metaphor is available in the community, the brain is able to use it to generate new experiences, such as the phenomenal sensation that you are peering out at the world from behind your eyes. Of course, in ancient times, the “inside” metaphor located the mind inside the heart, not the head. It is only with the advent of neurological science that the social construction of “inside the mind” has come to mean “inside the head”.


Filed under Consciousness, Philosophy, Psychology