
The Distressing Swiftness of Contemporary Philosophical Argumentation

David Chalmers recently posted a paper about panpsychism to his blog. Like an addict returning to the source of their troubles, I can’t help but read almost everything Chalmers writes about consciousness. He calls his argument for panpsychism “Hegelian” because it works through a thesis, antithesis, and synthesis structure. The thesis is materialism, the antithesis is the conceivability argument against materialism, and the synthesis is panpsychism. Because the paper is focused on panpsychism, Chalmers sets up the thesis and antithesis quickly. Using his finely honed but slightly worn stockpile of arguments against materialism, Chalmers deftly dismisses his opponents in a single sentence! Consider this paragraph, which comes right after he presents the antithesis:

Materialists do not just curl up and die when confronted with the conceivability argument and its cousins. Type-A materialists reject the epistemic premise, holding for example that zombies are not conceivable. Type-B materialists reject the step from an epistemic premise to an ontological conclusion, holding for example that conceivability does not entail possibility. Still, there are significant costs to both of these views. Type-A materialism seems to require something akin to an analytic functionalist view of consciousness, which most philosophers find too deflationary to be plausible.

For those not acquainted with Chalmers’ neat taxonomy of everyone who disagrees with him, “Type-A materialism” is the view that zombies are not conceivable. Chalmers created the Type-A category basically as an honorary one reserved especially for Dan Dennett’s writings on qualia. Crudely stated, Dennett’s Type-A materialism amounts to the view that serious scientific (or philosophical) theorizing about qualia is misguided and confused for innumerable reasons; that people who use the term the way Chalmers does generally don’t know what they are talking about, or if they do, can’t explain it to anyone else; and that we’re better off denying that qualia exist or replacing the qualia concept with some better, more fruitful way of thinking about minds.

But notice the incredible swiftness of Chalmers’ dismissal of Type-A materialism, highlighted in the final sentence of the quote above. He says Type-A materialism is not worth our time because “most philosophers find it too deflationary to be plausible.” But Type-A materialism is a minority position in consciousness studies precisely because its proponents are equivalent to phlogiston naysayers arguing that the concept “phlogiston” is an empty symbol, like “the present king of France”. So of course most philosophers are going to “find it too deflationary”! But that’s not an argument! That’s just citing the sociological fact that, as a matter of course, most people who study qualia disagree with the people who say it’s a bad idea to try to study qualia. The dismissal amounts to nothing more than doing philosophy by survey. Because “most philosophers” find it implausible, it can be dismissed in a single sentence, which is equivalent to saying “A minority view is not held by a majority of philosophers, therefore the minority view is not worth our time.”

This curtness of dialectical engagement with critics who are skeptical of the basic presuppositions surrounding talk of qualia highlights what I see as a critical weakness in the “normal science” of qualia studies: insufficiently precise definitions of concepts. For example, look at how Chalmers sets up the theory of panpsychism:

I will understand panpsychism as the thesis that some fundamental physical entities are conscious: that is, that there is something it is like to be a quark or a photon or a member of some other fundamental physical type.

In defining what it means to call quarks conscious, he appeals to another concept, what-it-is-likeness, which is left completely undefined under the tacit assumption that we know perfectly well what it means. But what exactly does it mean? I have no idea. No one who seriously uses the concept has ever given me a satisfactory answer when I press them to define it without appealing to concepts that are equally mysterious, e.g. “awareness”, “experience”, “phenomenal”, etc. At this point my interlocutors will just try to make me sound “weird” and ask, “C’mon Gary, are you seriously denying there is something it is like to drink that beer you’re sipping?” And yes, I will deny it, but only because I am unclear what the term means and don’t wish to say nonsensical things; thumping the table and appealing to crass intuitions is unlikely to convince me that our discussion is on firm ground.

P.W. Bridgman anticipated this problem when he wrote in his 1927 book The Logic of Modern Physics that:

It is a task for experiment to discover whether concepts so defined correspond to anything in nature, and we must always be prepared to find that the concepts correspond to nothing or only partially correspond. In particular, if we examine the definition of absolute time in the light of experiment, we find nothing in nature with such properties.

Bridgman’s diagnosis is that these “empty concepts” are often not defined in a sufficiently operational manner to be amenable to empirical inquiry, the heart and soul of science. If you cannot devise or imagine an experiment that would determine whether anything in nature corresponds to your proposed theoretical entity, then your concept is unfruitful for scientific progress in the highest degree. Bridgman cites the following as a good example of a “meaningless” question, i.e. a question that cannot be operationally defined so as to be resolvable by the physical measuring instruments science uses to conduct experiments:

Is the sensation which I call blue really the same as that which my neighbor calls blue? Is it possible that a blue object may arouse in him the same sensation a red object does in me and vice versa? 

Bridgman doesn’t actually claim this question is meaningless, but suggests “The reader may amuse himself by finding whether [it has] meaning or not”. My guess would be no.

Bridgman’s work is like a breath of fresh air after wading through the foggy mires of qualia studies. I am intent on studying Bridgman more, so don’t be surprised to see his name mentioned on this blog more frequently henceforth.



A Skeptical Response to that Cat on Youtube “Seeing visual illusions”

This video was brought to my attention last night, and it seems to have gone viral, with everyone getting excited that it demonstrates that cats are fooled by illusions just like we are. A common thing people say is that we can reasonably infer that the cat is “seeing things”. The most glaring problem with this “demonstration” is that the paper is placed on a soft couch. Notice that as the cat paws the paper, it makes the paper move. This self-induced movement not only changes the lighting patterns on the paper but makes the patterns themselves move, which is obviously attention-grabbing. As the cat bats down one “hill” on the paper, another “hill” pops up, which immediately attracts attention. I’ve seen my own cat do this with blank pieces of paper or newspaper. Because it’s impossible from this video alone to determine whether the cat was reacting to self-induced movements or “illusory” movements, it’s completely inconclusive whether or not this cat is really seeing things. A better demonstration would be if the printed illusion were laminated flat against a hard and smooth surface, so the cat would not be able to self-deform the pattern and induce movement. My guess is that the experiments would be similarly inconclusive and difficult to interpret.

I am not aware of any scientific attempt to determine whether cats really see things. This is probably because most level-headed experimentalists understand there is a deep epistemological problem in trying to make inferences about the private mental states of animals that are incapable of giving verbal reports about their experience in terms we can make sense of. A scientist could only ever tentatively make such inferences on the basis of analogy, but since cats can’t talk to us, we must make these analogical inferences about their visual qualia from strictly physical cues as measured by physical measuring instruments. But therein lies the problem: how do we know we have made the right inference about “what-it-is-like” to be a cat based purely on the read-outs of our physical instruments, e.g. electrical recordings of neuronal activity? This problem of an “inferential gap” is similar to familiar philosophical chestnuts such as the “explanatory gap” or the “problem of inverted qualia”, which in turn are related to that much older chestnut: the “Problem of Other Minds”.

As far as I know, there is no solution to these problems that doesn’t involve some kind of handwaving appeal to intuition, circular reasoning, or wishful thinking. One thing to do is deny foundationalism and loosen our standards for what counts as knowledge, such that our blind inference about the cat’s visual qualia becomes more secure and less troublesome when we ask the pesky skeptical questions. There is nothing wrong in principle with inferential reasoning and analogical bootstrapping, because we will always run into these sorts of worries when trying to make sense of the unknown in terms of the known through an iterated extension of our properly basic knowledge. But some bootstrapping extensions are more reasonable than others. In terms of Otto Neurath’s analogy of repairing a boat while out at sea, some repairs will keep us afloat but others will sink us. A good extension is when scientists turn their newly calibrated instruments on these unknown domains and can make sense of the unfamiliar readings in terms that overlap with familiar domains of extension where the experimental results are robust and reliable.

So why can’t we “extend” our knowledge to the unknown domain of visual qualia in nonhuman animals? The crucial disanalogy is that in the natural sciences the successful extension of a concept is done using reliable instruments that work by known means and provide reliable, replicable data in familiar domains. Moreover, if different versions of the same instrument made by different scientists gave similar data, we would have good reason to be confident that this instrument would be a good “base” upon which to extend our knowledge. But as far as I’m aware, we haven’t got a clue how to build a “qualia-scope”. What materials would such a device be made of? Why those materials and not others? What physical quantities would it be designed to respond to? Why those quantities and not others? What theory can we appeal to in order to justify a decision to use some quantities over others?



The Refrigerator Light Problem

1.0 The Problem of Phenomenal Consciousness

Phenomenal consciousness has a familiar guise but is frustratingly mysterious. Difficult to define (Goldman, 1993), it involves the sense of there being “something-it-is-like” for an entity to exist. Many theorists have studied phenomenal consciousness and concluded physicalism is false (Chalmers, 1995, 2003; Jackson, 1982; Kripke, 1972; Nagel, 1974). Other theorists defend physicalism on metaphysical grounds but argue there is an unbridgeable “explanatory gap” for phenomenal consciousness (Howell, 2009; Levine, 1983, 2001). “Mysterians” have argued the explanatory gap is intractable because of how the human mind works (McGinn, 1989; 1999). Whatever it is, phenomenal consciousness seems to lurk amidst biological processes but never plays a clearly identifiable causal role that couldn’t be performed nonconsciously (Flanagan & Polger, 1995). After all, some philosophers argue for the possibility of a “zombie” (Chalmers, 1996) physically identical to humans but entirely devoid of phenomenal consciousness.

Debates in the sprawling consciousness literature often come down to differences in intuition concerning the basic question of what consciousness actually is. One question we might have about its nature concerns its pervasiveness. First, is consciousness pervasive throughout our own waking life? Second, is it pervasive throughout the animal kingdom? We might be tempted to answer the first question by introspecting on our experience and hoping that will help us with the second question. However, introspecting on our experience generates a well known puzzle known as the “refrigerator light problem”.

2.0 The Refrigerator Light Problem
2.1 Thick vs thin

The refrigerator light problem is motivated by the question, “Consciousness seems pervasive in our waking life, but just how pervasive is it?” Analogously, we can ask whether the refrigerator light is always on. Naively, it seems like it’s on even when the door is closed, but is it really? The question is easily answered because we can investigate the design and function of refrigerators and conclude that the light is designed to turn off when the door is closed. We could even cut a hole in the door to see for ourselves. However, the functional approach won’t work with phenomenal consciousness because we currently lack a theory of how phenomenal consciousness works or any consensus on what its possible function might be, or whether it could even serve a function.

The refrigerator light problem is the problem of deciding between two mutually exclusive views of consciousness (Schwitzgebel, 2007):

The Thick View: Consciousness seems pervasive because it is pervasive, but we often cannot access or report this consciousness.
The Thin View: Consciousness seems pervasive, but this is just an illusion.

The thick view is straightforward to understand, but the thin view is prima facie counterintuitive. How could we be wrong about how our own consciousness seems to us? Many philosophers argue that a reality/appearance distinction for consciousness itself is nonsensical because consciousness just is how things seem. In other words, if consciousness seems pervasive, then it is pervasive.

On the thin view, however, the fact that it seems like consciousness is pervasive is a result of consciousness generating a false sense of pervasiveness. The thin theorist thinks that anytime we try to become aware of what-it-is-like to enjoy nonintrospective experience, we activate our introspection by inquiring and corrupt the data. The thin theorist is skeptical, for methodological reasons, about the idea of phenomenal consciousness existing without our ability to access or attend to it. If phenomenal consciousness can exist without any ability to report it, then how can psychologists study it, given that subjects must issue a report that they are conscious? Anytime a subject reports they are conscious, you can’t rule out that the reporting is doing all the work. The thin theorist challenges us to become aware of these nonintrospective experiences such that we can report on their existence and meaningfully theorize about them.

Philosophers might appeal to special phenomenological properties to falsify the thin view. This won’t work because, in principle, one could develop a thin view to accommodate any of the special phenomenological properties ascribed to phenomenal consciousness such as the pervasive “raw feeling” of redness when introspecting on what-it-is-like to look at a strawberry or the “painfulness” of pain. Thin theory can simply explain away the experience of pervasiveness as an illusion generated by a mechanism that itself isn’t pervasive. Julian Jaynes is famous for defending a strong thin view:

Consciousness is a much smaller part of our mental life than we are conscious of, because we cannot be conscious of what we are not conscious of…It is like asking a flashlight in a dark room to search around for something that doesn’t have any light shining on it. The flashlight, since there is light in whatever direction it turns, would have to conclude that there is light everywhere. And so consciousness can seem to pervade all mentality when actually it does not. (1976, p. 23)

Thin vs thick views represent the two most common interpretations of the refrigerator light problem, and both seem to account for the data equally well. The problem is that from the perspective of introspection, both theories are indistinguishable. The mere possibility of the thin view being true motivates the methodological dilemma of the refrigerator light problem. How do we rule out thin explanations of thick phenomenology?

2.2 The Difference Introspection Makes

The intractability of the refrigerator light depends on the inevitable influence introspection has on nonintrospective experience. Consider the following case. Jones loves strawberries. He eats one a day at 3:00 pm. All day, Jones looks forward to 3:00 pm because it’s the one time of the day when he can savor the moment and take a break from the hustle-and-bustle of work. When 3:00 pm arrives, he first gazes longingly at the strawberry, his eyes soaking up its patterns of texture and color while his reflective mind contemplates how it will taste. Now Jones reaches out for the strawberry, puts it up to his mouth, and bites into it slowly, savoring and paying attention to the sweetness and delicate fibrosity that is distinctive of strawberries. What’s crucial is that Jones is not just enjoying the strawberry, but introspecting on the fact that he is enjoying the strawberry. That is, he is aware of the strawberry but also meta-aware of his first-order awareness.

Suppose we ask Jones what it’s like for him to enjoy the strawberry when he is not introspecting. The refrigerator light problem will completely stump him. Moreover, suppose we want to ascribe consciousness to Jones (or Jones wants to ascribe it to himself). Should we ascribe it before he starts introspecting or after? Naturally, the answer depends on whether we accept a thin or thick view. According to a thin view, whatever is present in Jones’ experience prior to introspection does not warrant the label “consciousness”. The thin theorist might call this pervasive property “nonconscious qualia” (Rosenthal, 1997), but they reserve the term “consciousness” for Jones’ metarepresentational awareness that he is perceiving. The thin theorist would agree with William Calvin when he says, in defining “consciousness”, “The term should capture something of our advanced abilities rather than covering the commonplace” (1989, p. 78).

What about nonhuman animals? Whereas a thin theorist would say there is a difference in kind between human and rat consciousness, the thick theorist is likely to say that both the rat and Jones share the most important kind of pervasive consciousness. Is this jostling a purely terminological squabble? Kriegel (2009) has argued that the debate is substantial because theorists have different intuitions about the source of mystery for consciousness. The thick theorist thinks the mystery originates with first-order pervasiveness; the thin theorist thinks it originates with second-order awareness. Unfortunately, a squabble over intuitions is just as stale as a terminological dispute.

3.0 The Generality of the Refrigerator Light Problem
3.1 Introducing the Stipulation Strategy

If you are a scientist wanting to tackle the Hard problem of phenomenal consciousness, how would you respond to the refrigerator light problem? If the debate between thin and thick theories is either terminological or based on conflicting intuitions, what do you do? The only strategy I can think of for circumventing the terminological arbitrariness is to embrace it using what I call the stipulation strategy. It works like this. You first agree that we cannot resolve the thin vs thick debate using introspection alone. Unfazed, you simply stipulate some criterion for pointing phenomenal consciousness out such that it can be detected with empirical methods.

Possible criteria are diverse and differ from scientist to scientist. Some theorists stipulate that you will find phenomenal consciousness anytime you can find first-order (FO) perceptual representations of the right kind (Baars, 1997; Block, 1995; Byrne, 1997; Dretske, 1993, 2006; Tye, 1997). This would allow us to find many instances of phenomenal consciousness throughout the biological world, especially in creatures with nervous systems. However, we might have a more restricted criterion that says you will find phenomenal consciousness anytime you have higher-order (HO) thoughts/perceptions (Gennaro, 2004; Lycan, 1997; Rosenthal, 2005), restricting the instantiations of phenomenal consciousness to mammals or maybe even primates depending on your understanding of higher-order cognition. Or, more controversially, you might have a panpsychist stipulation criterion that makes it possible to point out phenomenal consciousness in the inorganic world.

Once we understand how the stipulation strategy works, the significance of any possible reductive explanation becomes trivialized qua explanation of phenomenal consciousness. To apply this result to contemporary views, I will start with FO theory, apply the same argument to HO theory, and then discuss the more counterintuitive (but equally plausible) theory of panpsychism.

3.2 The First-order Gambit

FO theorists deny the transitivity principle and claim one does not need to be meta-aware in order for there to be something-it-is-like to exist. The idea is that we can be in genuine conscious states but completely unaware of being in them. That is, FO theorists think there can be something-it-is-like for S to exist without S being aware of what-it-is-like for S to exist, a possibility HO theorists think absurd if not downright incoherent because the phrase “for S” suggests meta-awareness.

FO approaches are characterized by their use of perceptual awareness as the stipulation criterion for consciousness. A representative example is Dretske, who says “Seeing, hearing, and smelling x are ways of being conscious of x. Seeing a tree, smelling a rose, and feeling a wrinkle is to be (perceptually) aware (conscious) of the tree, the rose, and the wrinkle” (1993, p. 265). Dretske argues that once you understand what consciousness is (perceptual awareness), you will realize that one can be pervasively conscious without being meta-aware that you are conscious.

However, there is a serious problem with trying to reconcile the implications of theoretical stipulation criteria with common intuitions about which creatures are conscious. The problem with using perceptual awareness as our criterion is that it casts its net widely, perhaps too widely if you think phenomenality is only realized in nervous systems. Since many FO theorists think that a scientific explanation of phenomenal consciousness, if we are to have one at all, must be a neural explanation (Block, 2007; Koch, 2004), they will want to avoid ascribing consciousness to nonneural organisms. However, if we stipulate that a bat has phenomenal consciousness in virtue of its capacity for perceptual awareness, I see no principled way of looking at the phylogenetic timeline and marking the evolution of neural systems as the origin of perceptual awareness.

To see why, consider chemotaxis in unicellular bacteria (Kirby, 2009; Van Haastert & Devreotes, 2004). Recently chemotaxis has been modeled using informatic or computational theory rather than classical mechanistic biology (Bourret & Stock, 2002; Bray, 1995; Danchin, 2009; Shapiro, 2007). A simple demonstration of chemotaxis would occur if you stuck a bacterium in a petri dish that had a small concentration of sugar on one side. The bacterium would be able to intelligently discriminate the sugar side from the non-sugar side and regulate its swimming behavior to move up the gradient. Naturally we assume the bacterium is able to perceive the presence of sugar and respond appropriately. On this simplistic notion of perceiving, perceiving a stimulus is, roughly speaking, a matter of valenced behavioral discrimination of that stimulus. By valenced, I mean that the stimuli are valued as either attractive or aversive with respect to the goals of the organism (in this case, survival and homeostasis). If the bacterium simply moved around randomly when placed in a sugar gradient, such that the sugar had no particular attractive or aversive force, we might conclude that the bacterium is not capable of perceiving sugar, or that sugar is not ecologically relevant to the goals of the organism. But if the bacterium always moved up the sugar gradient, it is natural to say that the bacterium is capable of perceiving the presence of sugar. Likewise, if a toxin were placed in the petri dish, we would expect it to be valenced as aversive, and the bacterium would react appropriately by avoiding it, with appropriateness understood in terms of the goal of survival.
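The valenced-discrimination idea can be captured in a toy run-and-tumble simulation. This is my own illustrative sketch, not a model from the chemotaxis literature; the gradient function and parameters are made up for the example:

```python
import random

def sugar(x):
    """Made-up concentration gradient peaking at x = 100."""
    return 1.0 / (1.0 + abs(x - 100))

def chemotaxis(steps=2000, seed=42):
    """Run-and-tumble walker: keep the current heading while the
    concentration rises, tumble (pick a fresh random direction)
    when it falls. This is the minimal 'valenced discrimination'
    described above: sugar acts as an attractor."""
    rng = random.Random(seed)
    x = 0.0
    heading = rng.choice([-1, 1])
    last = sugar(x)
    for _ in range(steps):
        x += heading
        now = sugar(x)
        if now < last:               # moving down-gradient: tumble
            heading = rng.choice([-1, 1])
        last = now
    return x

# The walker ends up hovering near the sugar peak at x = 100. A walker
# with no tumble rule (pure random walk) would show no such bias, which
# is the contrast drawn in the paragraph above.
print(chemotaxis())
```

The point of the sketch is how little machinery "perceiving the presence of sugar", in this behavioral sense, requires: a single comparison between successive samples is enough to produce reliable gradient-climbing.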

Described in this minimal way, perceptual awareness in its most basic form does not seem so special that only creatures with nerve cells are capable of it. Someone might object that this is not a case of genuine perceptual awareness because there is nothing-it-is-like for the bacterium to sense the sugar or that its goals are not genuine goals. But how do we actually know this? How could we know this? For all we know, there is something-it-is-like for the bacterium to perceive the sugar. If we use perceptual awareness as our stipulation criterion, then we are fully justified in ascribing consciousness to even unicellulars.

Furthermore, it is misleading to say bacteria only respond to “proximal” stimulation, and therefore are not truly perceiving. Proximal stimulation implies an implausible “snapshot” picture of stimulation where the stimulation happens instantaneously at a receptor surface. But if stimuli can have a spatial (adjacent) component why can they not also have a temporal (successive) component? As J.J. Gibson put it, “Transformations of pattern are just as [biologically] stimulating as patterns are” (Gibson, 1966). And this is what researchers studying chemotaxis actually find: “for optimal chemotactic sensitivity [cells] combine spatial and temporal information” (Van Haastert & Devreotes, 2004, p. 626). The distinction between proximal stimulation and distal perception rests on a misunderstanding of what actually stimulates organisms.

Interestingly, the FO gambit offers resources for responding to the zombie problem. Since we have independent reasons to think bacteria are entirely physical creatures, if perceptual awareness is used as a stipulation criterion then the idea of zombie bacteria is inconceivable. Because bacterial perception is biochemical in nature, a perfect physical duplicate of a bacterium would satisfy the stipulation criterion we apply to creatures in the actual world. The problem, however, is that we have no compelling reason to choose FO stipulation criteria over any other, including HO criteria.

3.3 The Higher-order Gambit

HO theories are reductive and emphasize some kind of metacognitive representation as a criterion for ascribing phenomenal consciousness to a creature (e.g. awareness that you are aware). These HO representations are postulated in order to capture the “transitivity principle” (Rosenthal, 1997), which says that a conscious state is a state whose subject is, in some way, aware of being in it. A controversial corollary of the transitivity principle is that there are some genuinely qualitative mental states that are nonconscious, e.g. nonconscious pain.

Neurologically motivated HO theories like Baars’ Global Workspace model (1988; 1997) and Dehaene’s Global Neuronal Workspace model (Dehaene et al., 2006; Dehaene, Kerszberg, & Changeux, 1998; 2001; Gong et al., 2009) have had great empirical success, but they are deeply unsatisfying as explanations of phenomenal consciousness. HO theory can explain our ability to report on or monitor our experiences, but many philosophers wonder how it could provide an explanation of phenomenal consciousness (Chalmers, 1995). Ambitious HO theorists reply by insisting they do in fact have an explanation of how phenomenal consciousness arises from nonconscious mental states.

However, ambitious HO approaches suffer from the same problem of arbitrariness that FO approaches did. In order to decide between FO and HO stipulation criteria, we need first to decide on either a thick or thin interpretation of the refrigerator light problem. Since introspection is no help, we are forced to use the stipulation strategy. But why choose a HO stipulation strategy over a FO one? If everyone had the same intuitions concerning which creatures were conscious, we could generate stipulation criteria that perfectly match these intuitions. The problem is that theorists have different intuitions concerning which creatures (besides themselves) are in fact conscious. Surprisingly, some theorists might go beyond the biological world altogether and claim inorganic entities are conscious.

3.4 The Panpsychist Gambit

A more radical stipulation strategy is possible. If antiphysicalist arguments suggest that neurons and biology have nothing to do with phenomenal consciousness, we might think that phenomenal consciousness is a fundamental feature of reality. On this view, matter itself is intrinsically experiential. Another idea is that phenomenality is necessitated by an even more fundamental property, called a protophenomenal property (Chalmers, 2003).

Panpsychism is a less popular stipulation gambit, but at least one prominent scientist has recently used a stipulation criterion that leads to panpsychism (although he downplays this result). Giulio Tononi (2008) proposes integrated information as a promising stipulation criterion. The intellectual weight of the theory rests on a thought experiment involving a photodiode. A photodiode discriminates between light and no light. But does the photodiode see the light? Does it experience the light? Most people would think no. But the photodiode does integrate information (1 bit to be precise) and therefore, according to the theory of integrated information, has some experience, however dim. Whatever theoretical or practical benefits come with accepting the theory of integrated information, when it comes to the Hard problem of phenomenal consciousness we are left scratching our heads as to why integrated information is the best criterion for picking out phenomenal consciousness. Given that the criterion leads to ascriptions of phenomenality to a photodiode, many theorists will take this as good reason to think the criterion itself is wrong, given their pretheoretical intuitions about which entities are phenomenally conscious. But as we have learned, intuitions are as diverse as they are unreliable.
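To make the photodiode arithmetic concrete: the “1 bit” is just the information generated by discriminating between two equally likely states. The sketch below computes only that Shannon figure; it is emphatically not Tononi’s full Phi measure, which additionally requires partitioning a system and comparing the whole against its parts. It merely shows where the number comes from:

```python
from math import log2

def repertoire_bits(n_states):
    """Information generated by discriminating one state out of n
    equally likely alternatives: log2(n). (Shannon information only;
    Tononi's Phi also measures how irreducible the discrimination
    is to the system's parts.)"""
    return log2(n_states)

print(repertoire_bits(2))      # photodiode: light vs. no light -> 1.0 bit
print(repertoire_bits(2**24))  # a sensor discriminating ~16.7M states -> 24.0 bits
```

The comparison motivates Tononi’s claim quoted in the book review below that a photodiode’s experience is “the dimmest” possible: on the stipulated criterion, the quantity of experience scales with the size of the discriminated repertoire.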


Unable to define phenomenal consciousness, theorists are tempted to use their introspection to “point out” the phenomenon. The refrigerator light problem is motivated by the problem of deciding between thin and thick views of your own phenomenal consciousness using introspection alone. If introspection is supposed to help us understand what phenomenal consciousness is, and the refrigerator light problem prevents introspection from deciding between thin and thick views, then we need some other methodological procedure. The only option available is the stipulation strategy, whereby we arbitrarily stipulate a criterion for pointing it out, e.g. integrated information or higher-order thoughts. The problem is that any proposed stipulation criterion is just as plausible as any other, given that we lack a pretheoretical consensus on basic questions such as the function of phenomenal consciousness. Our only hope is to push for the standardization of stipulation criteria.

p.s. If anyone wants the full reference for a citation, just ask.


Filed under Consciousness, Philosophy, Psychology

Book review: Giulio Tononi's Phi: A Voyage from the Brain to the Soul

Phi is easily the most unusual book on consciousness I have read in a while. It’s hard to describe, but Tononi makes his case for “integrated information” using poetry, art, metaphor, and fiction. Each chapter is a fictional vignette or dialogue between characters inspired by famous scientists like Galileo, Darwin, or Francis Crick. At the end of every chapter is a “note” written in normal academic language explaining the context of the stories. On just about every page there are huge full-color glossy pictures of famous art. The book is simply beautiful as a physical object, in an attempt, I suspect, to convince qualiaphiles that Tononi is “one of them”.

The theory of integrated information itself, however, is less appealing. Here is how integrated information is defined:

Integrated information measures how much can be distinguished by the whole above and beyond its parts, and Phi is its symbol. A complex is where Phi reaches its maximum, and therein lives one consciousness: a single entity of experience.

And with that Tononi hopes the “hard” problem of consciousness is solved. However, the intellectual weight of Phi rests on a thought experiment involving a photodiode. A photodiode discriminates between light and no light. But does the photodiode see the light? Does it experience the light? Most people would think not. But the photodiode does integrate information (1 bit, to be precise) and therefore, according to the theory of integrated information, has some experience, however dim. The theory of integrated information is therefore a modern form of panpsychism based on the informational axiom of “it from bit”. For obvious reasons Tononi downplays the panpsychist implications of his theory, but he does admit them. Consider this quote:

“Compared to [a camera], even a photodiode is richer, it owns a wisp of consciousness, the dimmest of experiences, one bit, because each of its states is one of two, not one of trillions” (p. 162)

The reason the camera is not rich is that it can be broken down into a million individual photodiodes. According to Tononi, the reason the camera has a low level of Phi compared to a brain is that the brain integrates information between all its specialized processors and the camera does not. Nevertheless, each photodiode has a “wisp of consciousness”.
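The arithmetic behind the camera example can be made concrete. The following is my own toy illustration, not Tononi’s actual Phi calculus (which is considerably more involved): a camera of independent photodiodes has an enormous state repertoire, but since the parts taken separately already account for all of it, the whole distinguishes nothing above and beyond its parts.

```python
import math

def repertoire_bits(num_states):
    """Shannon entropy (in bits) of a uniform repertoire over num_states."""
    return math.log2(num_states)

# A single photodiode discriminates two states (light / no light): 1 bit.
photodiode = repertoire_bits(2)

# A camera sensor of n independent photodiodes has 2**n joint states,
# but the parts taken separately already account for all n bits, so the
# whole carries nothing "above and beyond" its parts.
n = 1_000_000
whole = n * repertoire_bits(2)              # log2(2**n), computed without overflow
parts = sum(repertoire_bits(2) for _ in range(n))
integration = whole - parts                 # zero: no integration, hence low Phi
```

On this crude picture the camera’s integration is exactly zero, while the lone photodiode still registers its single bit, which is all Tononi needs for the “wisp of consciousness” claim.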

Tononi also uses a thought experiment involving a “qualiascope”, a hypothetical device that measures integrated information and can therefore be used to detect consciousness in the world around us. In the vignettes, Tononi writes that when you use the qualiascope:

“‘You’ll look in vain at rocks and rivers, clouds and mountains,’ said the old woman. ‘The highest peak is small when you compare it to the tiny moth'” (p. 222).

This is how he downplays his panpsychism. Notice that he doesn’t say rocks and clouds altogether lack consciousness. It’s just that their “highest peak” of Phi is low compared to a moth’s. The important part, however, is that the Phi of rocks and clouds is low but not nonexistent.

Why is this important? Because Tononi wants to have his cake and eat it too. To see why just look at some of his chapter subtitles:

Chapter 3 “In which is shown that the corticothalamic system generates consciousness”
Chapter 4 “In which is shown that the cerebellum, while having more neurons than the cerebrum, does not generate consciousness.”

This is because Tononi admires the Neural Correlates of Consciousness (NCC) methodology founded by none other than Francis Crick, who has a strong intellectual presence throughout the book. According to most NCC approaches, consciousness seems to depend on corticothalamic loops and not just specialized processors alone (like the cerebellum). This finding comes from research correlating behavioral reports of consciousness with brain activity. When most people report being conscious, higher-order system loops are activated. And in monkey experiments the “report” is a judgment about whether they see a stimulus, which can be made by pressing a lever. What the NCC approach finds is that consciousness seems to depend on more than just specialized processors operating alone. It requires a kind of globalized network of communicating modules to “generate” consciousness.

It should now be plain as day why Tononi is inconsistent in trying to have his cake and eat it too. If a lowly inorganic photodiode has a “wisp of consciousness”, then clearly, by any standard, a single neuron also has a wisp of consciousness, as well as the entire cerebellum. Tononi acknowledges this:

“Perhaps a whiff of consciousness still breathes inside your sleeping brain but is so feeble that with discretion it makes itself unnoticed. Perhaps inside your brain asleep the repertoire is so reduced that it’s no richer than in a waking ant, or not by much. Your sleeping Phi would be much less than when your brain is fast awake, but still not nil” (p. 275).

“Early on, an embryo’s consciousness – the value of its Phi – may be less than a fly’s. The shapes of its qualia will be less formed than its unformed body, and less human than that: featureless, undistinguished, undifferentiated lumps that do not bear the shape of sight and sound and smell” (p. 281)

“Phi may be low for individual neurons” (p. 344)

But if a single neuron has a wisp of consciousness, then clearly consciousness is not “generated” by the corticothalamic system. It is instead a fundamental property of matter itself. It from bit. What Tononi means to say with his chapter subtitles is that “the corticothalamic system generates the right amount of Phi to make consciousness interesting and precious to humans”. The difference between the photodiode and the corticothalamic system is a difference of degree. The corticothalamic system has a high enough level of Phi to make an interesting difference to human experience, one we can report or notice, distinguishing coma patients (very low Phi) from awake, alert adults (very high Phi).

But now there is an interesting tension in Tononi’s theory. If there is a low but nonnegligible amount of Phi in a human embryo, the theory owes us a cut-off point: the lowest amount of Phi we actually care about, which would tell us, for instance, at what point in development abortion becomes morally problematic. Until Tononi answers that question, his “solution” to the hard problem of consciousness is fairly disappointing. He came up with the notion of integrated information to explain qualia, but now we are faced with the difficult question “How much Phi is necessary for us to care?” Clearly no one really cares about the “wisp of consciousness” in a photodiode. So having solved the “hard” problem of qualia, Tononi just creates an equally difficult problem: figuring out the amount of Phi worth caring about from a moral perspective. And he plainly admits he hasn’t solved these problems.

But for me this is a huge problem. You can’t have your cake and eat it too if you are a panpsychist. You can’t say that photodiodes are conscious but then say the only interesting consciousness is that of corticothalamic systems. This seems rather ad hoc to me: a solution meant to fit into preexisting research trends. If you are a panpsychist you should embrace the radical conclusion. According to Phi theory, consciousness is everywhere. It is not “generated” in the brain; it only reaches a high level of Phi in the brain. And if that’s the case, then the entire methodology of NCC is mistaken. NCC is not a true NCC but rather the “Neural Correlates of the Amount of Consciousness Humans Actually Care About”.

Overall conclusion: Phi is an interesting book and worth borrowing from the library. But I wouldn’t say it adequately solves the hard problem of consciousness. Not even close. What it does is arbitrarily stipulate criteria for pointing out consciousness in nonhuman entities. But Tononi never makes a real argument, beyond appeals to intuition, for why we should accept a definition of consciousness on which photodiodes come out having a “wisp” of consciousness. I think most people will want to define stipulation criteria such that only biological creatures come out as conscious. Panpsychism is just too radical for most. So while I applaud Tononi for exploring this ancient idea from a modern perspective, I ultimately think that when people truly understand that Tononi is a panpsychist, they will be less attracted to the theory, despite its close relationship to Francis Crick and the wildly popular NCC approach.


Filed under Consciousness, Philosophy, Psychology

Nonconscious Qualia?

Here’s a strange idea: nonconscious qualia. Absurd, you might say? Well, many proponents of the so-called Higher-order approach to consciousness believe they not only exist, but are quite routine and omnipresent in our mental lives. Peter Carruthers, Uriah Kriegel, and David Rosenthal are three theorists who have openly talked about nonconscious qualia. Examples of nonconscious qualia include sensing redness, loudness, roughness, sweetness, etc. The idea is that there can be genuinely nonconscious sensory qualities. The absent-minded driver is a common case used to support the idea of nonconscious qualia. The only difference between conscious and nonconscious qualia is that, obviously, the conscious qualia are conscious.

More specifically, these theorists claim that there is nothing-it-is-like to have nonconscious qualia. That is the big difference: there is something-it-is-like to have conscious qualia but nothing-it-is-like to have nonconscious qualia. Why is there something-it-is-like to have conscious qualia? Because the presence of a higher-order mental state is what generates what-it-is-likeness. It is easy to see why people find higher-order theory absurd. After all, most people associate qualia with what-it-is-likeness, so talk of qualia that there is nothing-it-is-like to have seems absurd.

My own position is that there is something-it-is-like to have nonconscious qualia. This puts me at odds with both First-order and Higher-order theory. Higher-order consciousness, in my view, is much closer to a kind of self-conscious introspection than any kind of “noninferential higher-order thought” (granted that the objects of such self-consciousness don’t have to be just the self). And if I were to think that only conscious qualia have what-it-is-likeness, I would have to conclude that there is nothing-it-is-like to be a cat or a mouse, since cats and mice obviously aren’t capable of entertaining complex introspection. Some theorists like Peter Carruthers simply bite the bullet and deny there is anything-it-is-like to be a nonhuman animal. But I think that if what-it-is-likeness is going to be a coherent property at all, it will have to be a property shared by pretty much all lifeforms.

I think one reason why higher-order theorists associate what-it-is-likeness with higher-order awareness is that Nagel’s original formulation was in terms of what-it-is-like for a subject, not just what-it-is-likeness. So the idea is that it is absurd to suppose there is something-it-is-like for Jones to exist without Jones being aware of what-it-is-like. But I fail to see why this is absurd. If we distinguish between what-it-is-likeness and our introspective awareness of what-it-is-like, then there seems to be no difficulty in thinking there is something-it-is-like to lack a meta-awareness of what-it-is-like. The phrase “for a subject” seems to suggest the presence of higher-order awareness, but this is because we are conflating the minimal subject with the conscious subject. If we thought the only legitimate type of subject was a conscious subject, then the idea of what-it-is-likeness without consciousness would be absurd. But if we thought there was a kind of minimal prereflective subjectivity intrinsic to being an embodied creature, then the idea of there being something “for a subject” without that subject being meta-aware is perfectly coherent.


Filed under Consciousness, Philosophy

The Nature of Visual Experience


Many philosophers have used visual illusions as support for a representational theory of visual experience. The basic idea is that sensory input from the environment is too ambiguous for the brain to really figure out anything on the basis of sensory evidence alone. To deal with this ambiguity, theorists have conjectured that the brain generates a series of predictions or hypotheses about the world based on the continuously incoming evidence and its accumulated knowledge (known as “priors”). On this theory, the nature of visual experience is explained by saying that what we experience is really just the prediction. So in the visual illusion above, the brain guesses that the B square is a lighter color and therefore we experience it as lighter. The brain guesses this because its stored memory contains information about typical configurations of checkered squares under typical kinds of illumination. On this standard view, all of visual experience is a big illusion, like a virtual-reality-type Matrix.
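The prediction-from-priors story can be written down as a one-line Bayesian update. The numbers below are illustrative assumptions of mine, not measured values, but they show the structure of the “guess”: given a layout prior and a shadow model, a mid-grey patch is best explained as a light square in shadow.

```python
# Toy Bayesian "prediction" of square B's surface shade. Two hypotheses,
# a flat prior from the checkerboard layout, and a likelihood that says
# a light square in cast shadow reflects mid-grey far more often than a
# dark square does. All numbers are illustrative assumptions.

prior = {"light": 0.5, "dark": 0.5}
likelihood = {"light": 0.9, "dark": 0.2}  # P(mid-grey luminance | hypothesis, shadow)

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

# The posterior heavily favors "light" even though the measured luminance
# is identical to square A's: on this story, that posterior just is the
# brain's prediction, and hence (the theory claims) what we experience.
```

With these made-up numbers the posterior for “light” comes out around 0.82, which is the sense in which the brain “predicts” a lighter square from identical sensory input.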

Lately I have been deeply interested in thinking about these notions of “guessing” and “prediction”. What does it mean to say that a collection of neurons predicts something? How is this possible? What does it mean for a collection of neurons to make a hypothesis? I am worried that in using these notions as our explanatory principle, we risk the possibility that we are simply trading in metaphors instead of gaining true explanatory power. So let’s examine this notion of prediction further and see if we can make sense of it in light of what we know about how the brain works.

One thought might be that predictions or guesses are really just kinds of representations. To perceive the B square as lighter is just for your brain to represent it as lighter. But what could we mean by representation? One idea comes from Jeff Hawkins’s book On Intelligence. He talks about representations in terms of invariancy. For Hawkins, the concepts of representation and prediction are inevitably tied to memory. To see why, consider my perception of my computer chair. I can see and recognize that my chair is my chair from a variety of visual angles. I have a memory of what my chair looks like, and the different visual angles provide evidence that matches that stored memory. The key is that my high-level memory of my chair is invariant with respect to its visual features. But at lower levels of visual processing, the neurons are tuned to respond only to low-level visual features. Some low-level neurons fire only in response to certain angles or edge configurations, so from different visual angles these low-level neurons might not respond. But at higher levels of visual processing, there must be some neurons that are always firing regardless of the visual angle, because their level of response invariancy is higher. So my memory of the chair really spans a hierarchy of levels of invariancy. At the highest levels of invariancy, I can even predict the chair when I am not in the room. So if I am about to walk into my office, I can predict that my chair will be on the right side of the room. If I walked in and my chair was not on the right side, I would be surprised and I’d have to update my memory with a new pattern.
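The invariancy hierarchy can be caricatured in a few lines of code. This is my own sketch of the idea, not anything from On Intelligence: low-level units fire only for particular views, while a higher-level unit pools over them and so responds invariantly across views.

```python
# Toy two-level invariancy hierarchy. Low-level units are tuned to a
# single viewing angle; the high-level "chair" unit fires whenever any
# of its lower-level inputs fires, so its response is view-invariant.

def low_level_unit(preferred_angle):
    # Fires only when the stimulus angle matches this unit's tuning.
    return lambda stimulus_angle: stimulus_angle == preferred_angle

view_detectors = [low_level_unit(a) for a in (0, 45, 90, 135)]

def chair_unit(stimulus_angle):
    # Invariant response: fires if ANY lower-level detector fires.
    return any(detect(stimulus_angle) for detect in view_detectors)

# A unit tuned to 0 degrees stays silent at 45 degrees...
assert low_level_unit(0)(45) is False
# ...but the high-level unit still recognizes the chair from that view.
assert chair_unit(45) is True
```

The pooling step is what buys invariancy: the higher you go, the less the response varies with the fine-grained details of the input, which is the sense in which the high-level memory “spans” the lower levels.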

On this account, representation and prediction is intimately tied into our memory, our stored knowledge of reality that helps us make predictions to better cope with our lives. But what is memory really? If we are going to be neurally realistic, it seems like it is going to have to be cashed out in terms of various dispositions of brain cells to react in certain ways. So memory is the collective dispositions of many different circuits of brain cells, particularly their synaptic activities. Dispositions can be thought of as mechanical mediations between input and output. Invariancies can thus be thought of as invariancies in mediation. Low-level mediation is variant with respect to the fine-grained features of the input. High-level mediation is less variant with respect to fine-grain detail. What does this tell us about visual experience? I believe the mediational view of representation offers an alternative account of illusions.

I am still working out the details of this idea, so bear with me. My current thought is that the brain’s “guess” that square B is lighter can be understood dispositionally rather than intentionally. Let’s imagine that we reconstruct the 2D visual illusion in the real world, so that we experience the same illusion that the B square is lighter. What would it mean for my brain to make this prediction? Well, on the dispositional view, it would mean that in making such a prediction my brain is essentially saying “If I go over and inspect that square some more I should expect it to be lighter”. If you actually did go inspect the square and found it is not a light square, you would have to update your memory store. However, visual illusions are persistent despite high-level prediction. This is because the entirety of the memory store for low-level visual processing overrides the meager alternate prediction generated at higher levels.

What about qualia? The representational view says that the qualitative features of the B square result from the square being represented as lighter. But if we understand representations as mediations, we see that representations don’t have to be these spooky things with strange properties like “aboutness”. Aboutness is just cashed out in terms of specificity of response. But the problem of qualia is tricky. In a way I kind of think the “lightness” of the B square is just an illusion added “on top” of a more or less veridical acquaintance. So I feel like I should resist inferring from this minor illusional augmentation that all of my visual experience is massively illusory in this way. Instead, I think we could see the “prediction” of the B square as lighter as a kind of augmentation of mediation. The brain augments the flow of mediations such that if this illusion were a real scene and someone asked you to “go step on all the light squares”, you would step on the B square. For this reason, I think the phenomenal impressiveness of the illusion is amplified by its 2Dness. If it were a 3D scene, the “prediction” would take the form of possible continuations of mediated behavior in response to a task demand (e.g. finding light squares). But because it’s a 2D image, the “qualia” of the B square being light takes on a special form, pressing itself upon us as a “raw visual feel” of lightness that on the surface doesn’t seem to be linked to behavior. But I think if we understand the visual hierarchy of invariant mediation, and the ways in which the higher and lower levels influence each other, we don’t need to conclude that all visual experience is massively illusory because we live behind a Kantian screen of representation. Understanding brain representations as mediational rather than intentional helps us strip the Kantian image of its persuasive power.


Filed under Consciousness, Philosophy

Does Mary the Neuroscientist Learn Anything New?

I was thinking about the famous Mary the Neuroscientist thought experiment today, and had a few thoughts I’d like to write down and try to make clear in my head. I’m not sure what follows is perfectly coherent, but here goes. In case you haven’t heard of it, the thought experiment goes something like this. Mary is a super scientist. So super that she has theoretical knowledge of all physical facts (emphasis on theoretical). She has theoretical knowledge of a complete physics, biology, chemistry, and neuroscience. This sounds great, but there is a catch: Mary has been confined to a black-and-white room her entire life. For perhaps obvious reasons, Mary is very interested in scientifically explaining color vision. She knows every physical fact relevant to color vision. She knows, theoretically, down to the quarks, exactly how any brain physically responds when its owner steps in front of a colored object. Now suppose Mary’s cruel captors finally let her out of her black-and-white room and she sees a red rose for the first time. Here’s the big question: does she learn anything new upon seeing the red rose?

Many philosophers find it intuitive that she does learn something new. What does she learn according to these philosophers? Well, she learns what-it-is-like to see red. She knew all the relevant physical facts about how her brain would react to a red rose, but upon actually seeing one, she learns what-it-is-like to have red experiences. This thought experiment was originally designed to show that physicalism is false (although the creator, Frank Jackson, no longer thinks the argument shows physicalism to be false). But why conclude that physicalism is false from the thought experiment? The argument goes something like this. If physicalism is true then all facts are physical facts, including facts about consciousness. Since Mary by hypothesis knows all physical facts, there shouldn’t be any information about consciousness that she isn’t already privy to. But our intuitions strongly suggest that she learns something new upon stepping outside the room. If physicalism is true, and Mary knew all physical facts, then it seems like she wouldn’t learn anything new. There would be no epiphany. Mary would be like “Yep, already knew it.” But since most people think Mary does learn something new, physicalism can’t be right because there is nonphysical information to be learned, namely, information about what-it-is-like to have certain experiences. Physicalists have responded to this thought experiment in many ways. Some have suggested that Mary doesn’t learn any new fact, but rather, gains a new ability of some sort. Or some have suggested that Mary doesn’t learn any new fact, but rather, learns about these same facts from a different perspective.

As of right now I lean towards the idea that Mary does learn something new, but I don’t think it’s necessary to talk about her new knowledge as being about what-it-is-likeness. And I don’t really think Mary was surprised in any way either. Rather, what I think Mary learns is that her color discriminatory capacities are in fact working. Having been confined to a black-and-white room all her life, Mary never got a chance to put her color discrimination skills to the test. Theoretically, she knew, given the state of her brain compared to other people’s, that her visual capacities do work, but when she stepped out into the real world she got actual confirmation of her theoretical guess. Using her theoretical knowledge of science, she had previously hypothesized that if she stepped outside and looked at a rose, she would be able to discriminate the redness of the rose from the greenness of the grass behind the flower. She also obviously wasn’t surprised by how her brain reacted. In fact, Mary had rigged up a portable brain monitoring device such that when she stepped outside to see the rose her brain was completely monitored. Prior to stepping outside, she had made predictions about what her brain would do. And of course, checking the data later, Mary was not surprised at all. The brain data came out precisely as she predicted. After all, she has near God-like theoretical knowledge of science. So I don’t think she had any sort of epiphany when stepping outside. All she learned was the fact that her visual discriminatory capacities do in fact work. Prior to stepping outside, she had only hypothesized that they worked based on good scientific guesswork. But when she stepped outside, the fact that she could see the redness of the rose as against the greenness of the grass confirmed her hypothesis.

On my story, we can talk about Mary learning something new without positing talk about what-it-is-likeness. But I suppose, based on how it’s defined, there would have been something-it-is-like for Mary to have confirmed her theory about her visual system working. But what does what-it-is-likeness really mean anyway? I have written before on how I think the term is vague, ambiguous, and poorly defined. Usually people use it to talk about “phenomenal feels” like the feeling of redness when looking at a flower. But I have argued before that in talking about properties like the “sensation of redness” we need to be careful. We can’t be talking about the redness of the rose when we are introspectively aware of our looking at a rose, because the introspection severely distorts the mental content. But if we are talking about nonintrospective redness, then it’s unclear to me that the mental content is anything but purely discriminatory capacities. Imagine how a mouse looks at a rose. It doesn’t see redness qua redness but rather redness qua some affordance. Seeing “pure” sensory qualities is something humans do in virtue of our introspective capacities. Otherwise we get absorbed into the affordances of things, like the hammerability of a nail when we have a hammer in our hands. If all that what-it-is-likeness refers to is these kinds of affordance-style mental content, then I’m not sure that Mary would be incapable of learning about this content from a theoretical perspective. What you couldn’t learn about affordance-style mental content in other creatures is what-it-is-like from the inside to discriminate information. But we shouldn’t be confused by metaphors like “from the inside” into thinking that there actually is some inside distinct from gushy brain bits. The “insideness” of cognition stems from facts about the individuality of being embodied creatures.
But the fact that we can’t know for ourselves what-it-is-like for a bat to perceptually discriminate should not lead one to think physicalism is false, because surely discrimination is a purely physical process, and there is nothing “nonphysical” involved when a bat discriminates flies from nonflies.

So although we could translate what Mary learns about her own capacities into talk about what-it-is-likeness, I don’t see how this shows physicalism to be false. We might say Mary learned what-it-is-like to discover that her visual capacities for discrimination do in fact work, in addition to learning that her ability to be introspectively aware of first-order color content was also working. But her inability to learn these facts in her black-and-white room is not a limitation of complete scientific knowledge. It’s a limitation in confirming a hypothesis. Obviously, Mary had pretty good confidence that her hypothesis was right given her knowledge of her own brain. But she was never sure it worked until she stepped outside. Stepping outside allowed her to experimentally confirm her prior hypothesis. But I don’t see why we should conclude physicalism is false just because there are limitations to what theoretical knowledge of science is capable of providing. Any hypotheses she made while in the room about her own capacities outside the room would never translate into confirmed or corroborated knowledge until she stepped outside and made the relevant tests. So on my reading, the limitations of what Mary can know are really limitations of testing. Obviously, if she is confined to the room she is unable to carry out certain tests related to her own person.


Filed under Consciousness, Philosophy