Tag Archives: qualia

The Distressing Swiftness of Contemporary Philosophical Argumentation

David Chalmers recently posted a paper about panpsychism to his blog. Like an addict returning to the source of their troubles, I can’t help but read almost everything Chalmers writes when it comes to consciousness. He calls his argument for panpsychism “Hegelian” because it works using a thesis, antithesis, and synthesis structure. The thesis is materialism, the antithesis is the conceivability argument against materialism, and the synthesis is panpsychism. Because the paper is focused on panpsychism, Chalmers sets up the thesis and antithesis quickly. Using his finely honed but slightly worn stockpile of arguments against materialism, Chalmers is able to deftly dismiss his opponents in a single sentence! Consider this paragraph, which follows his presentation of the antithesis:

Materialists do not just curl up and die when confronted with the conceivability argument and its cousins. Type-A materialists reject the epistemic premise, holding for example that zombies are not conceivable. Type-B materialists reject the step from an epistemic premise to an ontological conclusion, holding for example that conceivability does not entail possibility. Still, there are significant costs to both of these views. Type-A materialism seems to require something akin to an analytic functionalist view of consciousness, which most philosophers find too deflationary to be plausible.

For those not acquainted with Chalmers’s neat taxonomy of everyone who disagrees with him, “Type-A materialism” is the view that zombies are not conceivable. Chalmers created the Type-A concept basically as an honorary category reserved especially for Dan Dennett’s writings on qualia. Crudely stated, Dennett’s Type-A materialism amounts to the view that serious scientific (or philosophical) theorizing about qualia is misguided and confused for innumerable reasons; that people who use the term the way Chalmers does generally don’t know what they are talking about, or if they do, can’t explain it to anyone else; and that we’re better off denying qualia exist or replacing the qualia concept with some better, more fruitful way of thinking about minds.

But notice the incredible swiftness of Chalmers’s dismissal of Type-A materialism, highlighted in the final sentence of the passage quoted above. He says Type-A materialism is not worth our time because “most philosophers find it too deflationary to be plausible.” However, Type-A materialism is a minority position in consciousness studies precisely because its proponents are the equivalent of the phlogiston naysayers who argued that the concept “phlogiston” is an empty symbol, like “the present king of France”. So of course most philosophers are going to “find it too deflationary”! But that’s not an argument! That’s just citing the sociological fact that, as a matter of course, most people who study qualia disagree with the people who say it’s a bad idea to try to study qualia! The dismissal amounts to nothing more than doing philosophy by survey. Because “most philosophers” find it implausible, it can be dismissed in a single sentence, which is equivalent to saying “A minority view is not held by a majority of philosophers, therefore the minority view is not worth our time.”

This curtness of dialectical engagement with critics who are skeptical of the basic presuppositions surrounding talk of qualia highlights what I see as a critical weakness in the “normal science” of qualia studies: insufficiently precise definitions of concepts. For example, look at how Chalmers sets up the theory of panpsychism:

I will understand panpsychism as the thesis that some fundamental physical entities are conscious: that is, that there is something it is like to be a quark or a photon or a member of some other fundamental physical type.

In defining what it means to call quarks or photons conscious, he appeals to another concept: what-it-is-likeness, which is left completely undefined under the tacit assumption that we all know perfectly well what it means. But what exactly does it mean? I have no idea. No one who seriously uses the concept has ever given me a satisfactory answer when I press them to define it without appeal to concepts that are equally mysterious, e.g. “awareness”, “experience”, “phenomenal”, etc. At this point my interlocutors will just try to make my position sound “weird” and ask, “C’mon Gary, are you seriously denying there is something it is like to drink that beer you’re sipping?” And yes, I will deny it, but only because I am unclear what that term means and don’t wish to say nonsensical things; thumping the table and appealing to crass intuitions is unlikely to convince me that our discussion is on firm ground.

P.W. Bridgman anticipated this problem when he wrote in his 1927 book The Logic of Modern Physics that:

It is a task for experiment to discover whether concepts so defined correspond to anything in nature, and we must always be prepared to find that the concepts correspond to nothing or only partially correspond. In particular, if we examine the definition of absolute time in the light of experiment, we find nothing in nature with such properties.

Bridgman’s diagnosis is that these “empty concepts” are often not defined in a sufficiently operational manner to be amenable to empirical inquiry, the heart and soul of science. If you cannot devise or imagine an experiment that would determine whether there is anything in nature corresponding to your proposed theoretical entity, then your theoretical concept is unfruitful for scientific progress in the highest degree. Bridgman cites the following as a good example of a “meaningless” question, i.e. a question that cannot be operationally defined so as to be resolvable by means of the physical measuring instruments used in science to conduct experimentation:

Is the sensation which I call blue really the same as that which my neighbor calls blue? Is it possible that a blue object may arouse in him the same sensation a red object does in me and vice versa? 

Bridgman doesn’t actually claim this question is meaningless, but suggests “The reader may amuse himself by finding whether [it has] meaning or not”. My guess would be no.

Bridgman’s work is like a breath of fresh air after wading through the foggy mires of qualia studies. I am intent on studying Bridgman more, so don’t be surprised to see his name mentioned on this blog more frequently henceforth.


Filed under Consciousness, Philosophy of science

A Skeptical Response to that Cat on YouTube “Seeing Visual Illusions”

This video was brought to my attention last night, and it seems to have gone viral, with everyone getting excited that it demonstrates that cats are fooled by illusions just like we are. A common thing people say is that we can reasonably infer that the cat is “seeing things”. The most glaring problem with this “demonstration” is that the paper is placed on a soft couch. If you watch closely, as the cat paws the paper, it makes the paper move. This self-induced movement changes not only the lighting patterns on the paper but makes the patterns themselves move, which is obviously attention-grabbing. As the cat bats down one “hill” on the paper, another “hill” pops up which immediately attracts attention. I’ve seen my own cat do this with blank pieces of paper or newspaper. Because it’s impossible from this video alone to determine whether the cat was reacting to self-induced movements or “illusory” movements, it’s completely inconclusive whether or not this cat is really seeing things. A better demonstration would be if the printed illusion were laminated flat against a hard, smooth surface, so the cat could not self-deform the pattern and induce movement. My guess is that the experiments would be similarly inconclusive and difficult to interpret.

I am not aware of any scientific attempt to determine whether cats really see things. This is probably because most level-headed experimentalists understand there is a deep epistemological problem in trying to make inferences about the private mental states of animals that are incapable of giving verbal reports about their experience in terms we can make sense of. A scientist could only ever tentatively make such inferences on the basis of analogy, but since cats can’t talk to us, we must make these analogical inferences about their visual qualia from strictly physical cues as measured by physical measuring instruments. But therein lies the problem: how do we know we made the right inference about “what-it-is-like” to be a cat based purely on the read-outs of our physical instruments, e.g. electrical recordings of neuronal activity? This problem about an “inferential gap” is similar to familiar philosophical chestnuts such as the “explanatory gap” or the “problem of inverted qualia”, which in turn are related to that much older chestnut: the “Problem of Other Minds”.

As far as I know, there is no solution to these problems that doesn’t involve some kind of handwaving appeal to intuition, circular reasoning, or wishful thinking. One thing to do is deny foundationalism and loosen our standards for what counts as knowledge such that our blind inference about the cat’s visual qualia becomes something more secure and less troublesome when we ask the pesky skeptical questions. There is nothing wrong in principle with inferential reasoning and analogical bootstrapping, because we will always run into these sorts of worries when trying to make sense of the unknown in terms of the known through an iterated extension of our properly basic knowledge. But some bootstrapping extensions are more reasonable than others. In terms of Otto Neurath’s analogy of repairing a boat while out at sea, some repairs will keep us afloat but others will sink us. A good extension is when scientists turn their newly calibrated instruments on these unknown domains and can make sense of the unfamiliar readings in terms that overlap with familiar domains of extension where the experimental results are robust and reliable.

So why can’t we “extend” our knowledge to the unknown domain of visual qualia in nonhuman animals? The crucial disanalogy is that in the natural sciences the successful extension of a concept is done by using reliable instruments that work by known means and provide reliable, replicable data in familiar domains. Moreover, if different versions of the same instrument made by different scientists gave similar data, we would have good reason to be confident that this instrument would be a good “base” upon which to extend our knowledge. But as far as I’m aware, we haven’t got a clue how to build a “qualia-scope”. What materials would such a device be made of? Why those materials and not others? What physical quantities would it be designed to respond to? Why those quantities and not others? What theory could we appeal to in order to justify a decision to use some quantities over others?


Filed under Consciousness, Philosophy

The Refrigerator Light Problem

1.0 The Problem of Phenomenal Consciousness

Phenomenal consciousness has a familiar guise but is frustratingly mysterious. Difficult to define (Goldman, 1993), it involves the sense of there being “something-it-is-like” for an entity to exist. Many theorists have studied phenomenal consciousness and concluded physicalism is false (Chalmers, 1995, 2003; Jackson, 1982; Kripke, 1972; Nagel, 1974). Other theorists defend physicalism on metaphysical grounds but argue there is an unbridgeable “explanatory gap” for phenomenal consciousness (Howell, 2009; Levine, 1983, 2001). “Mysterians” have argued the explanatory gap is intractable because of how the human mind works (McGinn, 1989; 1999). Whatever it is, phenomenal consciousness seems to lurk amidst biological processes but never plays a clearly identifiable causal role that couldn’t be performed nonconsciously (Flanagan & Polger, 1995). After all, some philosophers argue for the possibility of a “zombie” (Chalmers, 1996) physically identical to humans but entirely devoid of phenomenal consciousness.

Debates in the sprawling consciousness literature often come down to differences in intuition concerning the basic question of what consciousness actually is. One question we might have about its nature concerns its pervasiveness. First, is consciousness pervasive throughout our own waking life? Second, is it pervasive throughout the animal kingdom? We might be tempted to answer the first question by introspecting on our experience and hoping that will help us with the second question. However, introspecting on our experience generates a well known puzzle known as the “refrigerator light problem”.

2.0 The Refrigerator Light Problem
2.1 Thick vs thin

The refrigerator light problem is motivated by the question, “Consciousness seems pervasive in our waking life, but just how pervasive is it?” Analogously, we can ask whether the refrigerator light is always on. Naively, it seems like it’s on even when the door is closed, but is it really? The question is easily answered because we can investigate the design and function of refrigerators and conclude that the light is designed to turn off when the door is closed. We could even cut a hole in the door to see for ourselves. However, the functional approach won’t work with phenomenal consciousness because we currently lack a theory of how phenomenal consciousness works or any consensus on what its possible function might be, or whether it could even serve a function.

The refrigerator light problem is the problem of deciding between two mutually exclusive views of consciousness (Schwitzgebel, 2007):

The Thick View: Consciousness seems pervasive because it is pervasive, but we often cannot access or report this consciousness.
The Thin View: Consciousness seems pervasive, but this is just an illusion.

The thick view is straightforward to understand, but the thin view is prima facie counterintuitive. How could we be wrong about how our own consciousness seems to us? Many philosophers argue that a reality/appearance distinction for consciousness itself is nonsensical because consciousness just is how things seem. In other words, if consciousness seems pervasive, then it is pervasive.

On the thin view, however, the fact that it seems like consciousness is pervasive is a result of consciousness generating a false sense of pervasiveness. The thin theorist thinks that anytime we try to become aware of what-it-is-like to enjoy nonintrospective experience, we activate our introspection by inquiring and thereby corrupt the data. The thin theorist is, for methodological reasons, skeptical about the idea of phenomenal consciousness existing without our ability to access or attend to it. If phenomenal consciousness can exist without any ability to report it, then how can psychologists study it, given that subjects must issue a report that they are conscious? Anytime a subject reports they are conscious, you can’t rule out that it is the reporting doing all the work. The thin theorist challenges us to become aware of these nonintrospective experiences such that we can report on their existence and meaningfully theorize about them.

Philosophers might appeal to special phenomenological properties to falsify the thin view. This won’t work because, in principle, one could develop a thin view to accommodate any of the special phenomenological properties ascribed to phenomenal consciousness such as the pervasive “raw feeling” of redness when introspecting on what-it-is-like to look at a strawberry or the “painfulness” of pain. Thin theory can simply explain away the experience of pervasiveness as an illusion generated by a mechanism that itself isn’t pervasive. Julian Jaynes is famous for defending a strong thin view:

Consciousness is a much smaller part of our mental life than we are conscious of, because we cannot be conscious of what we are not conscious of…It is like asking a flashlight in a dark room to search around for something that doesn’t have any light shining on it. The flashlight, since there is light in whatever direction it turns, would have to conclude that there is light everywhere. And so consciousness can seem to pervade all mentality when actually it does not. (1976, p. 23)

Thin vs thick views represent the two most common interpretations of the refrigerator light problem, and both seem to account for the data equally well. The problem is that from the perspective of introspection, both theories are indistinguishable. The mere possibility of the thin view being true motivates the methodological dilemma of the refrigerator light problem. How do we rule out thin explanations of thick phenomenology?

2.2 The Difference Introspection Makes

The intractability of the refrigerator light problem depends on the inevitable influence introspection has on nonintrospective experience. Consider the following case. Jones loves strawberries. He eats one a day at 3:00 pm. All day, Jones looks forward to 3:00 pm because it’s the one time of the day when he can savor the moment and take a break from the hustle-and-bustle of work. When 3:00 pm arrives, he first gazes longingly at the strawberry, his eyes soaking up its patterns of texture and color while his reflective mind contemplates how it will taste. Now Jones reaches out for the strawberry, puts it up to his mouth, and bites into it slowly, savoring and paying attention to the sweetness and delicate fibrosity that is distinctive of strawberries. What’s crucial is that Jones is not just enjoying the strawberry, but introspecting on the fact that he is enjoying the strawberry. That is, he is aware of the strawberry but also meta-aware of his first-order awareness.

Suppose we ask Jones what it’s like for him to enjoy the strawberry when he is not introspecting. The refrigerator light problem will completely stump him. Moreover, suppose we want to ascribe consciousness to Jones (or Jones wants to ascribe it to himself). Should we ascribe it before he starts introspecting or after? Naturally, the answer depends on whether we accept a thin or thick view. According to a thin view, whatever is present in Jones’ experience prior to introspection does not warrant the label “consciousness”. The thin theorist might call this pervasive property “nonconscious qualia” (Rosenthal, 1997), but they reserve the term “consciousness” for Jones’ metarepresentational awareness of his own perceiving. The thin theorist would agree with William Calvin when he says, in defining “consciousness”, “The term should capture something of our advanced abilities rather than covering the commonplace” (1989, p. 78).

What about nonhuman animals? Whereas a thin theorist would say there is a difference in kind between human and rat consciousness, the thick theorist is likely to say that both the rat and Jones share the most important kind of pervasive consciousness. Is this jostling a purely terminological squabble? Kriegel (2009) has argued that the debate is substantial because theorists have different intuitions about the source of mystery for consciousness. The thick theorist thinks the mystery originates with first-order pervasiveness; the thin theorist thinks it originates with second-order awareness. Unfortunately, a squabble over intuitions is just as stale as a terminological dispute.

3.0 The Generality of the Refrigerator Light Problem
3.1 Introducing the Stipulation Strategy

If you are a scientist wanting to tackle the Hard problem of phenomenal consciousness, how would you respond to the refrigerator light problem? If the debate between thin and thick theories is either terminological or based on conflicting intuitions, what do you do? The only strategy I can think of for circumventing the terminological arbitrariness is to embrace it using what I call the stipulation strategy. It works like this. You first agree that we cannot resolve the thin vs thick debate using introspection alone. Unfazed, you simply stipulate some criterion for pointing phenomenal consciousness out such that it can be detected with empirical methods.

Possible criteria are diverse and differ from scientist to scientist. Some theorists stipulate that you will find phenomenal consciousness anytime you can find first-order (FO) perceptual representations of the right kind (Baars, 1997; Block, 1995; Byrne, 1997; Dretske, 1993, 2006; Tye, 1997). This would allow us to find many instances of phenomenal consciousness throughout the biological world, especially in creatures with nervous systems. However, we might have a more restricted criterion that says you will find phenomenal consciousness anytime you have higher-order (HO) thoughts/perceptions (Gennaro, 2004; Lycan, 1997; Rosenthal, 2005), restricting the instantiations of phenomenal consciousness to mammals or maybe even primates depending on your understanding of higher-order cognition. Or, more controversially, you might have a panpsychist stipulation criterion that makes it possible to point out phenomenal consciousness in the inorganic world.

Once we understand how the stipulation strategy works, the significance of any possible reductive explanation becomes trivialized qua explanation of phenomenal consciousness. To apply this result to contemporary views, I will start with FO theory, apply the same argument to HO theory, and then discuss the more counterintuitive (but equally plausible) theory of panpsychism.

3.2 The First-order Gambit

FO theorists deny the transitivity principle and claim one does not need to be meta-aware in order for there to be something-it-is-like to exist. The idea is that we can be in genuine conscious states but completely unaware of being in them. That is, FO theorists think there can be something-it-is-like for S to exist without S being aware of what-it-is-like for S to exist, a possibility HO theorists think absurd if not downright incoherent because the phrase “for S” suggests meta-awareness.

FO approaches are characterized by their use of perceptual awareness as the stipulation criterion for consciousness. A representative example is Dretske, who says “Seeing, hearing, and smelling x are ways of being conscious of x. Seeing a tree, smelling a rose, and feeling a wrinkle is to be (perceptually) aware (conscious) of the tree, the rose, and the wrinkle” (1993, p. 265). Dretske argues that once you understand what consciousness is (perceptual awareness), you will realize that one can be pervasively conscious without being meta-aware that you are conscious.

However, there is a serious problem with trying to reconcile the implications of theoretical stipulation criteria with common intuitions about which creatures are conscious. The problem with using perceptual awareness as our criterion is that it casts its net widely, perhaps too widely if you think phenomenality is only realized in nervous systems. Since many FO theorists think that if we are going to have a scientific explanation of phenomenal consciousness at all it must be a neural explanation (Block, 2007; Koch, 2004), they will want to avoid ascribing consciousness to nonneural organisms. However, if we stipulate that a bat has phenomenal consciousness in virtue of its capacity for perceptual awareness, I see no principled way of looking at the phylogenetic timeline and marking the evolution of neural systems as the origin of perceptual awareness.

To see why, consider chemotaxis in unicellular bacteria (Kirby, 2009; Van Haastert & Devreotes, 2004). Recently chemotaxis has been modeled using informatic or computational theory rather than classical mechanistic biology (Bourret & Stock, 2002; Bray, 1995; Danchin, 2009; Shapiro, 2007). A simple demonstration of chemotaxis would occur if you stuck a bacterium in a petri dish that had a small concentration of sugar on one side. The bacterium would be able to intelligently discriminate the sugar side from the non-sugar side and regulate its swimming behavior to move up the gradient. Naturally we assume the bacterium is able to perceive the presence of sugar and respond appropriately. On this simplistic notion of perceiving, perceiving a stimulus is, roughly speaking, a matter of valenced behavioral discrimination of that stimulus. By valenced, I mean that the stimuli are valued as either attractive or aversive with respect to the goals of the organism (in this case, survival and homeostasis). If the bacterium simply moved around randomly when placed in a sugar gradient such that the sugar had no particular attractive or aversive force, we might conclude that the bacterium is not capable of perceiving sugar, or that sugar is not ecologically relevant to the goals of the organism. But if the bacterium reliably moved up the sugar gradient, it is natural to say that the bacterium is capable of perceiving the presence of sugar. Likewise, if there were a toxin placed in the petri dish, we would expect this to be valenced as aversive and the bacterium would react appropriately by avoiding it, with appropriateness understood in terms of the goal of survival.
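To make the idea of valenced behavioral discrimination concrete, here is a toy run-and-tumble simulation in the spirit of the chemotaxis literature. It is a caricature, not a biochemical model: the concentration field, step size, and tumble probabilities are all made-up illustrative values. The point is only that a memoryless comparison of successive samples (temporal information) is enough to reliably climb a gradient:

```python
import random

def sugar(x):
    """Toy sugar concentration field: rises linearly toward x = 100."""
    return max(0.0, min(x, 100.0)) / 100.0

def run_and_tumble(steps=2000, seed=1):
    """1-D run-and-tumble: the cell compares the concentration it samples
    now with the one it sampled a moment ago, and tumbles (reverses
    direction) less often when the concentration is rising."""
    random.seed(seed)
    x, direction = 0.0, 1
    previous = sugar(x)
    for _ in range(steps):
        x += direction * 0.5          # "run" in the current direction
        current = sugar(x)
        # Valenced discrimination: sugar is attractive, so improvement
        # suppresses tumbling and deterioration promotes it.
        p_tumble = 0.1 if current > previous else 0.5
        if random.random() < p_tumble:
            direction = -direction    # "tumble": pick a new heading
        previous = current
    return x

print(run_and_tumble())  # drifts far up the gradient despite having no map of it
```

Nothing in the loop stores a map of the dish; the upstream drift falls out of the asymmetry between the two tumble probabilities.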

Described in this minimal way, perceptual awareness in its most basic form does not seem so special that only creatures with nerve cells are capable of it. Someone might object that this is not a case of genuine perceptual awareness because there is nothing-it-is-like for the bacterium to sense the sugar or that its goals are not genuine goals. But how do we actually know this? How could we know this? For all we know, there is something-it-is-like for the bacterium to perceive the sugar. If we use perceptual awareness as our stipulation criterion, then we are fully justified in ascribing consciousness to even unicellulars.

Furthermore, it is misleading to say bacteria only respond to “proximal” stimulation, and therefore are not truly perceiving. Proximal stimulation implies an implausible “snapshot” picture of stimulation where the stimulation happens instantaneously at a receptor surface. But if stimuli can have a spatial (adjacent) component why can they not also have a temporal (successive) component? As J.J. Gibson put it, “Transformations of pattern are just as [biologically] stimulating as patterns are” (Gibson, 1966). And this is what researchers studying chemotaxis actually find: “for optimal chemotactic sensitivity [cells] combine spatial and temporal information” (Van Haastert & Devreotes, 2004, p. 626). The distinction between proximal stimulation and distal perception rests on a misunderstanding of what actually stimulates organisms.

Interestingly, the FO gambit offers resources for responding to the zombie problem. Since we have independent reasons to think bacteria are entirely physical creatures, if perceptual awareness is used as a stipulation criterion then the idea of zombie bacteria is inconceivable. Because bacterial perception is biochemical in nature, a perfect physical duplicate of a bacterium would satisfy the stipulation criterion we apply to creatures in the actual world. The problem, however, is that we have no compelling reason to choose FO stipulation criteria over any other, including HO criteria.

3.3 The Higher-order Gambit

HO theories are reductive and emphasize some kind of metacognitive representation as a criterion for ascribing phenomenal consciousness to a creature (e.g. awareness that you are aware). These HO representations are postulated in order to capture the “transitivity principle” (Rosenthal, 1997), which says that a conscious state is a state whose subject is, in some way, aware of being in it. A controversial corollary of the transitivity principle is that there are some genuinely qualitative mental states that are nonconscious e.g. nonconscious pain.
Neurologically motivated HO theories like Baars’s Global Workspace model (1988; 1997) and Dehaene’s Global Neuronal Workspace model (Dehaene et al., 2006; Dehaene, Kerszberg, & Changeux, 1998; 2001; Gong et al., 2009) have had great empirical success, but they are deeply unsatisfying as explanations of phenomenal consciousness. HO theory can explain our ability to report on or monitor our experiences, but many philosophers wonder how it could provide an explanation for phenomenal consciousness (Chalmers, 1995). Ambitious HO theorists reply by insisting they do in fact have an explanation of how phenomenal consciousness arises from nonconscious mental states.

However, ambitious HO approaches suffer from the same problem of arbitrariness that FO approaches did. In order to decide between FO and HO stipulation criteria we need to first decide on either a thick or thin interpretation of the refrigerator light problem. Since introspection is no help, we are forced to use the stipulation strategy. But why choose a HO stipulation strategy over a FO one? If everyone had the same intuitions concerning which creatures were conscious, we could generate stipulation criteria that perfectly match these intuitions. The problem is that theorists have different intuitions concerning which creatures (besides themselves) are in fact conscious. Surprisingly, some theorists might go beyond the biological world altogether and claim inorganic entities are conscious.

3.4 The Panpsychist Gambit

A more radical stipulation strategy is possible. If antiphysicalist arguments suggest that neurons and biology have nothing to do with phenomenal consciousness, we might think that phenomenal consciousness is a fundamental feature of reality. On this view, matter itself is intrinsically experiential. Another idea is that phenomenality is necessitated by an even more fundamental property, called a protophenomenal property (Chalmers, 2003).

Panpsychism is a less popular stipulation gambit, but at least one prominent scientist has recently used a stipulation criterion that leads to panpsychism (although he downplays this result). Giulio Tononi (2008) proposes integrated information as a promising stipulation criterion. The intellectual weight of the theory rests on a thought experiment involving a photodiode. A photodiode discriminates between light and no light. But does the photodiode see the light? Does it experience the light? Most people would think no. But the photodiode does integrate information (1 bit, to be precise) and therefore, according to the theory of integrated information, has some experience, however dim. Whatever theoretical or practical benefits come with accepting the theory of integrated information, when it comes to the Hard problem of phenomenal consciousness we are left scratching our heads as to why integrated information is the best criterion for picking out phenomenal consciousness. Given that the criterion leads to ascriptions of phenomenality to a photodiode, many theorists will take this as good reason for thinking the criterion itself is wrong, given their pretheoretical intuitions about what entities are phenomenally conscious. But as we have learned, intuitions are as diverse as they are unreliable.
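For readers wondering where the figure of “1 bit” comes from, here is a back-of-the-envelope sketch. It is not Tononi’s actual Phi calculation (which involves partitioning a system and measuring what the whole specifies above and beyond its parts), just the standard information-theoretic point that settling a two-way discrimination reduces uncertainty by exactly one bit:

```python
from math import log2

def entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Before a reading, the photodiode's state (light / no light) is
# maximally uncertain: two equally likely alternatives.
prior = [0.5, 0.5]

# After it settles into one of its two states, the uncertainty is gone.
posterior = [1.0, 0.0]

# Information generated = uncertainty reduced.
print(entropy(prior) - entropy(posterior))  # 1.0 bit
```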

Conclusion

Unable to define phenomenal consciousness, theorists are tempted to use their introspection to “point out” the phenomenon. The refrigerator light problem is motivated by the problem of deciding between thin and thick views of your own phenomenal consciousness using introspection alone. If introspection is supposed to help us understand what phenomenal consciousness is, and the refrigerator light problem prevents introspection from deciding between thin and thick views, then we need some other methodological procedure. The only option available is the stipulation strategy, whereby we arbitrarily stipulate a criterion for pointing it out, e.g. integrated information or higher-order thoughts. The problem is that any proposed stipulation criterion is just as plausible as any other, given that we lack a pretheoretical consensus on basic questions such as the function of phenomenal consciousness. Our only hope is to push for the standardization of stipulation criteria.

p.s. If anyone wants the full reference for a citation, just ask.


Filed under Consciousness, Philosophy, Psychology

Book review: Giulio Tononi's Phi: A Voyage from the Brain to the Soul

Phi is easily the most unusual book on consciousness I have read in a while. It’s hard to describe, but Tononi makes his case for “integrated information” using poetry, art, metaphor, and fiction. Each chapter is a fictional vignette or dialogue between characters inspired by famous scientists like Galileo, Darwin, or Francis Crick. At the end of every chapter is a “note” written in normal academic language explaining the context of the stories. On just about every page there are huge full-color glossy pictures of famous art. The book is simply beautiful as a physical object, in an attempt, I suspect, to convince qualiaphiles that Tononi is “one of them”.

The theory of integrated information itself, however, is less appealing. Here is how integrated information is defined:

Integrated information measures how much can be distinguished by the whole above and beyond its parts, and Phi is its symbol. A complex is where Phi reaches its maximum, and therein lives one consciousness: a single entity of experience.

And with that, Tononi hopes the “hard” problem of consciousness is solved. However, the intellectual weight of Phi rests on a thought experiment involving a photodiode. A photodiode discriminates between light and no light. But does the photodiode see the light? Does it experience the light? Most people would think no. But the photodiode does integrate information (1 bit, to be precise) and therefore, according to the theory of integrated information, has some experience, however dim. The theory of integrated information is therefore a modern form of panpsychism based on the informational axiom of “it from bit”. For obvious reasons Tononi downplays the panpsychist implications of his theory, but he does admit them. Consider this quote:

“Compared to [a camera], even a photodiode is richer, it owns a wisp of consciousness, the dimmest of experiences, one bit, because each of its states is one of two, not one of trillions” (p. 162)

The reason the camera is not rich is that it can be broken down into a million individual photodiodes. According to Tononi, the reason the camera has a low level of Phi compared to a brain is that the brain integrates information between all its specialized processors and the camera does not. But nevertheless, each photodiode has a “wisp of consciousness”.
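Here is a crude caricature (mine, not Tononi’s formal measure) of the whole-above-parts idea behind that verdict. In a sensor whose elements never interact, the repertoire of the whole is fully accounted for by the repertoires of the parts, so nothing is left over to count as integration:

```python
# Toy whole-vs-parts bookkeeping for an idealized 1-megapixel camera.
# Each photodiode independently distinguishes 2 states: 1 bit apiece.
n_photodiodes = 1_000_000

whole_repertoire_bits = n_photodiodes * 1   # log2(2**n): n bits for the whole
sum_of_parts_bits = n_photodiodes * 1       # 1 bit per independent photodiode

# With no interaction between parts, the whole distinguishes nothing
# "above and beyond" its parts taken separately.
integration = whole_repertoire_bits - sum_of_parts_bits
print(integration)  # 0 -- a vast repertoire, but no integration
```

On this toy reading, the camera’s million bits sit side by side, while the brain’s specialized processors constrain one another, which is where its high Phi is supposed to come from.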

Tononi also uses a thought experiment involving a “qualiascope”, a hypothetical device that measures integrated information and can therefore be used to detect consciousness in the world around us. In the vignettes, Tononi writes that when you use the qualiascope:

“‘You’ll look in vain at rocks and rivers, clouds and mountains,’ said the old woman. ‘The highest peak is small when you compare it to the tiny moth'” (p. 222).

This is how he downplays his panpsychism. Notice how he doesn’t say that rocks and clouds altogether lack consciousness. It’s just that their “highest peak” of Phi is low compared to a moth’s. The important part, however, is that the Phi of rocks and clouds is low but not nonexistent.

Why is this important? Because Tononi wants to have his cake and eat it too. To see why just look at some of his chapter subtitles:

Chapter 3 “In which is shown that the corticothalamic system generates consciousness”
Chapter 4 “In which is shown that the cerebellum, while having more neurons than the cerebrum, does not generate consciousness.”

This is because Tononi admires the Neural Correlates of Consciousness (NCC) methodology founded by none other than Francis Crick, who has a strong intellectual presence throughout the book. According to most NCC approaches, consciousness seems to depend on “corticothalamic” loops and not just specialized processors alone (like the cerebellum). This finding comes from research correlating behavioral reports of consciousness with activity of the brain. When most people report being conscious, higher-order system loops are activated. And in monkey experiments the “report” is a judgement about whether they see a stimulus, which can be made by pressing a lever. What researchers find in the NCC approach is that consciousness seems to depend on more than just specialized processors operating alone. It requires a kind of globalized network of communicating modules to “generate” consciousness.

It should now be plain as day why Tononi is inconsistent in trying to have his cake and eat it too. If a lowly inorganic photodiode has a “wisp of consciousness”, then clearly, by any standard, a single neuron also has a wisp of consciousness, as well as the entire cerebellum. Tononi acknowledges this:

“Perhaps a whiff of consciousness still breathes inside your sleeping brain but is so feeble that with discretion it makes itself unnoticed. Perhaps inside your brain asleep the repertoire is so reduced that it’s no richer than in a waking ant, or not by much. Your sleeping Phi would be much less than when your brain is fast awake, but still not nil” (p. 275).

“Early on, an embryo’s consciousness – the value of its Phi – may be less than a fly’s. The shapes of its qualia will be less formed than its unformed body, and less human than that: featureless, undistinguished, undifferentiated lumps that do not bear the shape of sight and sound and smell” (p. 281).

“Phi may be low for individual neurons” (p. 344).

But if a single neuron has a wisp of consciousness, then clearly consciousness is not “generated” by the corticothalamic system. It is instead a fundamental property of matter itself. It from bit. What Tononi means to say with his chapter subtitles is that the corticothalamic system generates the right amount of Phi to make consciousness interesting and precious to humans. The difference between the photodiode and the corticothalamic system is a difference of degree. The corticothalamic system has a high enough level of Phi that it makes an interesting difference to human experience, such that we can report or notice it, distinguishing coma patients (very low Phi) from awake, alert adults (very high Phi).

But now there is an interesting tension in Tononi’s theory. If there is a low but nonnegligible amount of Phi in a human embryo, Tononi’s theory must now fix a cut-off point for the lowest amount of Phi we actually care about, if it is to tell us, for example, where to draw moral lines in debates about abortion. Until Tononi answers that question, his “solution” to the hard problem of consciousness is fairly disappointing. He came up with the notion of integrated information to explain qualia, but now we are faced with the difficult question of “How much Phi is necessary for us to care about?” Clearly no one really cares about the “wisp of consciousness” in a photodiode. So having solved the “hard” problem of qualia, Tononi just creates an equally difficult problem: figuring out the amount of Phi worth caring about from a moral perspective. And he plainly admits he hasn’t solved these problems.

But for me this is a huge problem. You can’t have your cake and eat it too if you are a panpsychist. You can’t say that photodiodes are conscious but then say the only interesting consciousness is that of corticothalamic systems. This seems rather ad hoc to me: a solution meant to fit into preexisting research trends. If you are a panpsychist you should embrace the radical conclusion. According to Phi theory, consciousness is everywhere. It is not “generated” in the brain. It only reaches a high level of Phi in the brain. And if that’s the case, then the entire methodology of NCC is mistaken. NCC is not a true NCC but rather the “Neural Correlates of the Amount of Consciousness Humans Actually Care About”.

Overall conclusion: Phi is an interesting book and worth borrowing from the library. But I wouldn’t say it adequately solves the hard problem of consciousness. Not even close. What it does is arbitrarily stipulate criteria for pointing out consciousness in nonhuman entities. But Tononi never makes a real argument, beyond appeals to intuition, for why we should accept a definition of consciousness on which the ascriptions come out with photodiodes having a “wisp” of consciousness. I think most people will want to define stipulation criteria such that the ascriptions come out with only biological creatures having consciousness. Panpsychism is just too radical for most. So while I applaud Tononi for exploring this ancient idea from a modern perspective, I ultimately think that when people truly understand that Tononi is a panpsychist they will be less attracted to his theory, despite its close relationship to Francis Crick and the wildly popular NCC approach.


Filed under Consciousness, Philosophy, Psychology

Nonconscious Qualia?

Here’s a strange idea: nonconscious qualia. Absurd, you might say? Well, many proponents of the so-called Higher-order approach to consciousness believe not only that they exist, but that they are quite routine and omnipresent in our mental lives. Peter Carruthers, Uriah Kriegel, and David Rosenthal are three theorists who have openly talked about nonconscious qualia. Examples of nonconscious qualia include sensing redness, loudness, roughness, sweetness, etc. The idea is that there can be genuinely nonconscious sensory qualities. The absent-minded driver is a common case used to support the idea of nonconscious qualia. The only difference between conscious and nonconscious qualia is that, obviously, the conscious qualia are conscious.

More specifically, these theorists claim that there is nothing-it-is-like to have nonconscious qualia. That is the big difference: there is something-it-is-like to have conscious qualia but there is nothing-it-is-like to have nonconscious qualia. Why is there something-it-is-like to have conscious qualia? Because the presence of a higher-order mental state is what generates what-it-is-likeness. It is easy to see why people find higher-order theory absurd. After all, most people associate qualia with what-it-is-likeness, so talk of qualia that there is nothing-it-is-like to have seems absurd.

My own position is that there is something-it-is-like to have nonconscious qualia. This puts me at odds with both First-order and Higher-order theory. Higher-order consciousness, in my view, is much closer to a kind of self-conscious introspection than to any kind of “noninferential higher-order thought” (granted that the objects of such self-consciousness don’t have to be just the self). And if I were to think that only conscious qualia have what-it-is-likeness, I would have to conclude that there is nothing-it-is-like to be a cat or a mouse, since cats and mice obviously aren’t capable of entertaining complex introspection. Some theorists like Peter Carruthers simply bite the bullet and deny there is anything-it-is-like to be a nonhuman animal. But I think that if what-it-is-likeness is going to be a coherent property at all, it will have to be a property shared by pretty much all lifeforms.

I think one reason why higher-order theorists think that what-it-is-likeness is associated with higher-order awareness is that Nagel’s original formulation was in terms of what-it-is-like for a subject, not just what-it-is-likeness. So the idea is that it is absurd to suppose there is something-it-is-like for Jones if Jones is not aware of what-it-is-like to exist. But I fail to see why this is absurd. If we distinguish between what-it-is-likeness and our introspective awareness of what-it-is-like, then there seems to be no difficulty in thinking there is something-it-is-like to lack a meta-awareness of what-it-is-like. The phrase “for a subject” seems to suggest the presence of higher-order awareness, but this is because we are conflating the minimal subject with the conscious subject. If we thought the only legitimate type of subject was a conscious subject, then the idea of what-it-is-likeness without consciousness would be absurd. But if we thought there was a kind of minimal prereflective subjectivity intrinsic to being an embodied creature, then the idea of there being something “for a subject” without that subject being meta-aware is perfectly coherent.


Filed under Consciousness, Philosophy

The Nature of Visual Experience

[Image: a checker-shadow visual illusion, in which square B sits in shadow and appears lighter than it is]

Many philosophers have used visual illusions as support for a representational theory of visual experience. The basic idea is that sensory input from the environment is too ambiguous for the brain to really figure out anything on the basis of sensory evidence alone. To deal with this ambiguity, theorists have conjectured that the brain generates a series of predictions or hypotheses about the world based on the continuously incoming evidence and its accumulated knowledge (known as “priors”). On this theory, the nature of visual experience is explained by saying that what we experience is really just the prediction. So in the visual illusion above, the brain guesses that the B square is a lighter color and therefore we experience it as lighter. The brain guesses this because in its stored memory is information about typical configurations of checkered squares under typical kinds of illumination. On this standard view, all of visual experience is a big illusion, like a virtual-reality-type Matrix.
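A toy Bayesian calculation can make the “prediction from priors” talk concrete. The numbers below are invented for illustration: the measured gray of square B is roughly equally consistent with “light square in shadow” and “dark square in plain light”, so the percept is settled almost entirely by the prior that the checkerboard context supplies:

```python
def posterior_light(prior_light, lik_light, lik_dark):
    """Bayes' rule: P(surface is light | measured luminance)."""
    numerator = prior_light * lik_light
    return numerator / (numerator + (1 - prior_light) * lik_dark)

# Both hypotheses explain the same retinal gray about equally well:
# a light square in shadow and a dark square in full light can send
# the same luminance to the eye.
lik_light_in_shadow = 0.5
lik_dark_in_light = 0.5

# Context cues (checkerboard layout, visible cast shadow) give a strong
# prior that B is a "light" square sitting in shadow.
print(posterior_light(0.9, lik_light_in_shadow, lik_dark_in_light))  # 0.9
```

With the evidence neutral between hypotheses, the posterior simply echoes the prior, which is the representationalist’s point: the experienced lightness is the brain’s guess, not the measured luminance.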

Lately I have been deeply interested in thinking about these notions of “guessing” and “prediction”. What does it mean to say that a collection of neurons predicts something? How is this possible? What does it mean for a collection of neurons to make a hypothesis? I am worried that in using these notions as our explanatory principle, we risk the possibility that we are simply trading in metaphors instead of gaining true explanatory power. So let’s examine this notion of prediction further and see if we can make sense of it in light of what we know about how the brain works.

One thought might be that predictions or guesses are really just kinds of representations. To perceive the B square as lighter is just for your brain to represent it as lighter. But what could we mean by representation? One idea comes from Jeff Hawkins’s book On Intelligence. He talks about representations in terms of invariancy. For Hawkins, the concept of representation and prediction is inevitably tied into memory. To see why, consider my perception of my computer chair. I can see and recognize that my chair is my chair from a variety of visual angles. I have a memory of what my chair looks like in my brain, and the different visual angles provide evidence that matches my stored memory of my chair. The key is that my high-level memory of my chair is invariant with respect to its visual features. But at lower levels of visual processing, the neurons are tuned to respond only to low-level visual features. So some low-level neurons only fire in response to certain angles or edge configurations. From different visual angles these low-level neurons might not respond. But at higher levels of visual processing, there must be some neurons that are always firing regardless of the visual angle because their level of response invariancy is higher. So my memory of the chair really spans a hierarchy of levels of invariancy. At the highest levels of invariancy, I can even predict the chair when I am not in the room. So if I am about to walk into my office, I can predict that my chair will be on the right side of the room. If I walked in and my chair was not on the right side, I would be surprised and I’d have to update my memory with a new pattern.
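One standard toy way to get this kind of response invariance, in the spirit of Hawkins’s hierarchy (the detectors and tuning widths here are my own invented stand-ins), is to pool over a bank of feature-specific low-level units. Each low-level unit is variant with respect to angle; the pooled unit responds whenever any of them does:

```python
def low_level_unit(angle, preferred, width=15.0):
    """A low-level detector: fires only for edges near its preferred angle."""
    return 1.0 if abs(angle - preferred) < width else 0.0

def high_level_unit(angle):
    """A higher-level unit pools over detectors tuned to every orientation,
    so its response is invariant to the particular angle presented."""
    return max(low_level_unit(angle, p) for p in range(0, 180, 15))

for a in (3.0, 47.0, 122.0):
    # Each angle drives a different subset of low-level units...
    active = [p for p in range(0, 180, 15) if low_level_unit(a, p)]
    # ...but the pooled unit fires every time.
    print(a, active, high_level_unit(a))
```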

On this account, representation and prediction is intimately tied into our memory, our stored knowledge of reality that helps us make predictions to better cope with our lives. But what is memory really? If we are going to be neurally realistic, it seems like it is going to have to be cashed out in terms of various dispositions of brain cells to react in certain ways. So memory is the collective dispositions of many different circuits of brain cells, particularly their synaptic activities. Dispositions can be thought of as mechanical mediations between input and output. Invariancies can thus be thought of as invariancies in mediation. Low-level mediation is variant with respect to the fine-grained features of the input. High-level mediation is less variant with respect to fine-grain detail. What does this tell us about visual experience? I believe the mediational view of representation offers an alternative account of illusions.

I am still working out the details of this idea, so bear with me. My current thought is that the brain’s “guess” that square B is lighter can be understood dispositionally rather than intentionally. Let’s imagine that we reconstruct the 2D visual illusion in the real world, so that we experience the same illusion that the B square is lighter. What would it mean for my brain to make this prediction? Well, on the dispositional view, it would mean that in making such a prediction my brain is essentially saying “If I go over and inspect that square some more, I should expect it to be lighter”. If you actually did go inspect the square and found that it is not a light square, you would have to update your memory store. However, visual illusions are persistent despite high-level prediction. This is because the entirety of the memory store for low-level visual processing overrides the meager alternate prediction generated at higher levels.

What about qualia? The representational view says that the qualitative features of the B square result from the square being represented as lighter. But if we understand representations as mediations, we see that representations don’t have to be these spooky things with strange properties like “aboutness”. Aboutness is just cashed out in terms of specificity of response. But the problem of qualia is tricky. In a way I think the “lightness” of the B square is just an illusion added “on top” of a more or less veridical acquaintance. So I feel like I should resist inferring from this minor illusional augmentation that all of my visual experience is massively illusory in this way. Instead, I think we could see the “prediction” of the B square as lighter as a kind of augmentation of mediation. The brain augments the flow of mediations such that if this illusion were a real scene and someone asked you to “go step on all the light squares”, you would step on the B square. For this reason, I think the phenomenal impressiveness of the illusion is amplified by its 2Dness. If it were a 3D scene, the “prediction” would take the form of possible continuations of mediated behavior in response to a task demand (e.g. finding light squares). But because it’s a 2D image, the “qualia” of the B square being light takes on a special form, pressing itself upon us as a “raw visual feel” of lightness that on the surface doesn’t seem to be linked to behavior. But I think if we understand the visual hierarchy of invariant mediation, and the ways in which the higher and lower levels influence each other, we don’t need to conclude that all visual experience is massively illusory because we live behind a Kantian screen of representation. Understanding brain representations as mediational rather than intentional helps us strip the Kantian image of its persuasive power.


Filed under Consciousness, Philosophy

Does Mary the Neuroscientist Learn Anything New?

I was thinking about the famous Mary the Neuroscientist thought experiment today, and had a few thoughts I’d like to write down and try to make clear in my head. I’m not sure what follows is perfectly coherent, but here goes. In case you haven’t heard of it, the thought experiment goes something like this. Mary is a super scientist. So super that she has theoretical knowledge of all physical facts (emphasis on theoretical). She has the theoretical knowledge of a complete physics, biology, chemistry, and neuroscience. This sounds great, but there is a catch: Mary has been confined to a black-and-white room her entire life. For perhaps obvious reasons, Mary is very interested in scientifically explaining color vision. She knows every physical fact relevant to color vision. She knows, theoretically, down to the quarks, exactly how any brain physically responds when its owner steps in front of a colored object. Now suppose Mary’s cruel captors finally let her out of her black-and-white room such that she sees a red rose for the first time. Here’s the big question: does she learn anything new upon seeing the red rose?

Many philosophers find it intuitive that she does learn something new. What does she learn according to these philosophers? Well, she learns what-it-is-like to see red. She knew all the relevant physical facts about how her brain would react to a red rose, but upon actually seeing one, she learns what-it-is-like to have red experiences. This thought experiment was originally designed to show that physicalism is false (although the creator, Frank Jackson, no longer thinks the argument shows physicalism to be false). But why conclude that physicalism is false from the thought experiment? The argument goes something like this. If physicalism is true then all facts are physical facts, including facts about consciousness. Since Mary by hypothesis knows all physical facts, there shouldn’t be any information about consciousness that she isn’t already privy to. But our intuitions strongly suggest that she learns something new upon stepping outside the room. If physicalism is true, and Mary knew all physical facts, then it seems like she wouldn’t learn anything new. There would be no epiphany. Mary would be like “Yep, already knew it.” But since most people think Mary does learn something new, physicalism can’t be right because there is nonphysical information to be learned, namely, information about what-it-is-like to have certain experiences. Physicalists have responded to this thought experiment in many ways. Some have suggested that Mary doesn’t learn any new fact, but rather, gains a new ability of some sort. Or some have suggested that Mary doesn’t learn any new fact, but rather, learns about these same facts from a different perspective.

As of right now I lean towards the idea that Mary does learn something new, but I don’t think it’s necessary to talk about her new knowledge as being about what-it-is-likeness. And I don’t really think Mary was surprised in any way either. Rather, what I think Mary learns is that her color discriminatory capacities are in fact working. Having been confined to a black-and-white room all her life, Mary never got a chance to put her color discrimination skills to the test. Theoretically, she knew, given the state of her brain compared to other people’s, that her visual capacities should work, but when she stepped out into the real world she got actual confirmation of her theoretical guess. Using her theoretical knowledge of science, she had previously hypothesized that if she stepped outside and looked at a rose, she would be able to discriminate the redness of the rose from the greenness of the grass behind the flower. She also obviously wasn’t surprised by how her brain reacted. In fact, Mary had rigged up a portable brain monitoring device such that when she stepped outside to see the rose her brain was completely monitored. Prior to stepping outside, she had made predictions about what her brain would do. And of course, checking the data later, Mary was not surprised at all. The brain data came out precisely as she predicted. After all, she has near God-like theoretical knowledge of science. So I don’t think she had any sort of epiphany when stepping outside. All she learned was the fact that her visual discriminatory capacities do in fact work. Prior to stepping outside, she had only hypothesized that they worked based on good scientific guesswork. But when she stepped outside, the fact that she could see the redness of the rose as against the greenness of the grass confirmed her hypothesis.

On my story, we can talk about Mary learning something new without positing talk about what-it-is-likeness. But I suppose, based on how it’s defined, there would have been something-it-is-like for Mary to have confirmed her theory about her visual system working. But what does what-it-is-likeness really mean anyway? I have written before on how I think the term is vague, ambiguous, and poorly defined. Usually people use it to talk about “phenomenal feels” like the feeling of redness when looking at a flower. But I have argued before that in talking about properties like the “sensation of redness” we need to be careful. We can’t be talking about the redness of the rose when we are introspectively aware of our looking at a rose, because the introspection severely distorts the mental content. But if we are talking about nonintrospective redness, then it’s unclear to me that the mental content is anything but purely discriminatory capacities. Imagine how a mouse looks at a rose. It doesn’t see redness qua redness but rather redness qua some affordance. Seeing “pure” sensory qualities is something humans do in virtue of our introspective capacities. Otherwise we get absorbed into the affordances of things, like the hammerability of a nail when we have a hammer in our hands. If all what-it-is-likeness refers to is these kinds of affordance-style mental content, then I’m not sure that Mary would be incapable of learning about this content from a theoretical perspective. What you couldn’t learn about affordance-style mental content in other creatures is what-it-is-like from the inside to discriminate information. But we shouldn’t be confused by metaphors like “from the inside” into thinking that there actually is some inside distinct from gushy brain bits. The “insideness” of cognition stems from facts about the individuality of being embodied creatures. But the fact that you can’t know for yourself what-it-is-like for a bat to perceptually discriminate should not lead one to think physicalism is false, because surely discrimination is a purely physical process, and there is nothing “nonphysical” involved when a bat discriminates flies from nonflies.

So although we could translate what Mary learns about her own capacities into talk about what-it-is-likeness, I don’t see how this shows physicalism to be false. We might say Mary learned what-it-is-like to discover that her visual capacities for discrimination do in fact work, in addition to learning that her ability to be introspectively aware of first-order color content was also working. But her inability to learn these facts in her black-and-white room is not a limitation of complete scientific knowledge. It’s a limitation in confirming a hypothesis. Obviously, Mary had pretty good confidence that her hypothesis was right given her knowledge of her own brain. But she was never sure it worked until she stepped outside. Stepping outside allowed her to experimentally confirm her prior hypothesis. But I don’t see why we should conclude physicalism is false just because there are limitations to what theoretical knowledge of science is capable of providing. Any hypotheses she made while in the room about her own capacities outside the room could never translate into confirmed or corroborated knowledge until she stepped outside and ran the relevant tests. So on my reading, the limitations of what Mary can know are really limitations of testing. Obviously, if she is confined to the room she is unable to carry out certain tests related to her own person.

1 Comment

Filed under Consciousness, Philosophy

Some Thoughts on Christof Koch's New Book and the Neuronal Correlates of Consciousness

I’m reading Christof Koch’s new book Consciousness: Confessions of a Romantic Reductionist and wanted to put some thoughts down in writing in order to get clearer about what exactly is going on with Koch’s understanding of consciousness. Koch is famously interested in the neuronal correlates of consciousness. First, what does Koch mean by consciousness? He uses a mix of four different definitions:

1. “A commonsense definition equates consciousness with our inner, mental life.”

2. “A behavioral definition of consciousness is a checklist of actions or behaviors that would certify as conscious any organism that could do one or more of them.”

3. “A neuronal definition of consciousness specifies the minimal physiologic mechanisms required for any one conscious sensation.”

4. A philosophical definition, “consciousness is what it is like to feel something.”

I have the sneaking suspicion that Koch can’t possibly be talking about the last definition, phenomenal consciousness. Why? Because he says things like “The neural correlates of consciousness must include neurons in the prefrontal cortex”. So on Koch’s view, phenomenal content is a high-level phenomenon that is not produced when there is just lower-level activity in the primary visual cortex.

To support this view, Koch describes the work of Logothetis and the binocular rivalry experiments in monkeys. In these experiments, monkeys are trained to pull a different lever depending on whether they see a starburst pattern or a flag pattern. The researchers then projected a different image to each eye to induce binocular rivalry.

“Logothetis then lowered fine wires into the monkey’s cortex while the trained animal was in the binocular rivalry setup. In the primary visual cortex and nearby regions, he found only a handful of neurons that weakly modulated their response in keeping with the monkey’s percept. The majority fired with little regard to the image the animal saw. When the monkey signaled one percept, legions of neurons in the primary visual cortex responded strongly to the suppressed image that the animal was not seeing. This result is fully in line with Francis’s and my hypothesis that the primary visual cortex is not accessible to consciousness.”

 I think this line of thinking is greatly confused if it is supposed to be an account of the origin of qualia or phenomenal content. First of all, it’s not clear that we can rule out the existence of phenomenal content in very simple organisms that lack nervous systems, let alone prefrontal cortices. Is there something-it-is-like to be a slug, or an amoeba? I don’t see how we can rule this out a priori. This puts pressure on Koch’s claim that what he is talking about is the origin of qualia. I think Koch is talking about something else. What I actually think the Logothetis experiments are getting at is the neural correlates of complex discrimination and reporting, which produce new forms of (reportable) subjectivity.

For example, let’s imagine that we remove the monkey’s higher-order regions so that there is just the primary visual cortex responding to the stimuli. How can we rule out the possibility that there is something-it-is-like for the monkey to have its primary visual cortex respond? I don’t see how we can possibly do this. Notice that in the original training scenario the only way to know for sure that the monkeys see the different images is for the monkeys to “report” by pulling a lever. This is a kind of behavioral discrimination. But how do we know there is nothing-it-is-like to “see” the stimuli but not report? This is why I don’t think Koch should be appealing to the philosophical definition of phenomenal consciousness. It’s too slippery of a concept and can be applied to a creature even in the absence of behavioral discrimination, for we can always coherently ask, “How do you know for sure that there is nothing-it-is-like for the monkey when it does not behaviorally discriminate the stimuli?”

The fact that Koch so closely relies on the possibility of reporting conscious percepts indicates he cannot be talking about phenomenal consciousness, because we have no principled way to rule out the presence of phenomenal consciousness in the absence of reporting. And this is especially true if we are willing to ascribe phenomenal consciousness to very simplistic creatures that don’t have the kind of higher-order cortical capacities that Koch thinks are necessary for consciousness. Koch seems to admit this, because he very briefly mentions the possibility of there being “protoconsciousness” in single-celled bacteria, but doesn’t dwell on the implications this would have for his quest to find the “origin of qualia” in higher-order neuronal processes. If there is protoconsciousness or protoqualia in single-celled bacteria, then the brain would not be the producer of qualia, but only the great modifier of qualia. If bacteria are phenomenally conscious, then the brain cannot be the origin of phenomenal content, but only a way to produce ever more complex phenomenal content. Accordingly, the Logothetis experiments don’t show that higher-order brain areas are necessary for phenomenal content, but only for phenomenal content of a particular kind. The experiments show instead that higher-order brain regions are necessary for the phenomenal content of complex behavioral discrimination.

Let me explain. A bacterium is capable of very basic perceptual discrimination. For example, it can discriminate the presence of sugar in a petri dish. But this is not a very complex kind of discrimination in comparison to the discrimination being done by the monkey when it pulls a lever in the presence of a flag stimulus. The causal chain of mediation is much more complex in the monkey than it is for the bacterium. On this view, phenomenal content comes in degrees. It is present in bacteria to a very low degree. It is present to a higher degree in flies, worms, and monkeys. I believe it is even present in completely comatose patients (I at least see no way to rule this possibility out), but to a very low degree. And it’s higher in vegetative patients, even higher in minimally conscious patients, of course super-high in fully awake mammals like primates, and extraordinarily high in fully awake adult humans.

So what I think Koch’s NCC approach is doing is finding the neural correlates of highly complex forms of discrimination and reporting. Koch and Crick define the neural correlates of consciousness as “the minimal neural mechanisms jointly sufficient for any one specific conscious percept”. If we understand “conscious” here in terms of phenomenal consciousness, then I think that the NCC approach does no such thing. Rather, the NCC specifies the minimal neural mechanisms for a conscious percept that is reportable. These are hugely different things. But this doesn’t mean that Koch is completely misguided in his quest to find the NCC for conscious percepts that are reportable (Bernard Baars actually defines consciousness in exactly this way). Since the ability to intelligently report is critical to our ability to act in the world, finding the NCC of percepts that can be reported will still be highly useful in coming up with diagnostic criteria for minimally conscious patients. Except that on my terminology, “minimally conscious patients” cannot really mean minimally phenomenally conscious, for that would imply that there is nothing-it-is-like to be in a vegetative state (which we can’t conclusively rule out). Instead, we should understand it as “minimally capable of high-level report”, with report being understood very broadly to mean not just verbal report, but any kind of meaningful discrimination and responsiveness. And as I tried to make clear in my last post, the ability to report on your phenomenal states is very much capable of modifying phenomenality in such a way as to give rise to new forms of subjectivity, what I call “sensory gazing”.

I therefore think we should drop the quest to find the neural correlates of phenomenal consciousness. Of the four definitions that Koch uses, he should give up on the fourth, because phenomenal consciousness is just too slippery to be useful in distinguishing coma patients from minimally responsive patients, or in understanding what’s going on in the binocular rivalry cases. So when Koch says “Francis and I proposed that a critical component of any neural correlate of consciousness is the long-distance, reciprocal connections between higher-order sensory regions, located in the back of the cerebral cortex, and the planning and decision-making regions of the prefrontal cortex, located in the front”, he can’t possibly be talking about phenomenal consciousness so long as we cannot conclusively rule out the possibility of protoconsciousness in bacteria. What I actually think Koch is homing in on is the neural correlates of reflective consciousness. And it’s perfectly coherent to talk about simple forms of reflective consciousness that are present in monkeys and other mammals. “Reflective” here could simply mean “downstream from primary sensorimotor processing”. Uniquely human self-reflection and mind-wandering could then be understood in terms of an amplification and redeployment of these reflective circuits for new, culturally modified purposes (think of how reading circuitry in humans is built out of other more basic circuits). It would make sense that any human-unique circuitry would be built out of preexisting circuitry that we share with other primates (cf. Michael Anderson’s massive redeployment hypothesis). And the impact of language on these reflective circuits would certainly modify them enough to account for human-typical cognitive capacities. The point, then, is that we can account for Koch’s findings without supposing that he is talking about the origin of qualia.

UPDATE:

Having read more of the book, it’s only fair that I amend my interpretation of Koch’s theory. Following Giulio Tononi’s theory of Integrated Information, Koch seems to espouse a kind of panpsychism, and admits that even bacteria might have a very, very dim kind of phenomenal experience. So he doesn’t seem to ultimately think that higher-order brain processes are the origin of qualia, which directly contradicts some of the things he says earlier in the book. This is very confusing in light of the things he says about binocular rivalry and other phenomena. He even seems to think that a mote of dust or a piece of dirt has a dim sliver of phenomenal experience. Although this is an intriguing hypothesis (and it seems to be at least logically possible), it only seems to confirm my opinion that if phenomenal consciousness is an intelligible property at all, it is not a very useful one for doing cognitive science, since it can be applied to almost anything on certain definitions. Personally, I think that if we are going to make sense of qualia at all (and I’m not sure we ever will), it will have to be the type of property that “arises” (whatever that means) in living organisms, but not in inorganic entities.
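For readers who want a concrete handle on what “integrated information” is even gesturing at, here is a deliberately toy sketch in Python. It is not Tononi’s actual Φ calculation (which involves minimizing over partitions of a system); it is just a minimal illustration, with numbers I made up, of the underlying intuition that an “integrated” system carries information in its joint state beyond what its parts carry separately.

    # Toy sketch only: NOT Tononi's phi. We measure the mutual information
    # between two binary "nodes": how much the joint state tells us beyond
    # what the two marginal states tell us separately. Numbers are invented.
    from math import log2

    def marginal(dist, index):
        """Marginal distribution of one node from a joint distribution."""
        m = {}
        for state, p in dist.items():
            m[state[index]] = m.get(state[index], 0.0) + p
        return m

    def mutual_information(dist):
        """I(A;B) in bits: zero exactly when the two nodes are independent."""
        pa, pb = marginal(dist, 0), marginal(dist, 1)
        return sum(p * log2(p / (pa[a] * pb[b]))
                   for (a, b), p in dist.items() if p > 0)

    # A coupled system: the two nodes tend to agree.
    coupled = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
    # An uncoupled system: all four joint states equally likely.
    uncoupled = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}

    print(mutual_information(coupled))    # about 0.278 bits
    print(mutual_information(uncoupled))  # 0.0 bits

The toy at least dramatizes why the panpsychist reading tempts Koch: on any measure in this family, almost any causally coupled system, organic or not, scores above zero, so having “some integration” is extraordinarily cheap.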

5 Comments

Filed under Consciousness, Philosophy, Psychology

Too HOT to Tell: The Failure of Introspection

I’m working on a new paper that will probably be used as my first Qualifying Paper for the Wash U PhD program to be turned in at the beginning of the Fall semester (the program requires the submission of 3 Qualifying Papers instead of comps). There is a central argument in the paper that I wanted to hopefully get some feedback on and see what people think. I call it the Failure of Introspection Argument. It goes something like this:

  1. When philosophers set up the “hard problem of phenomenal consciousness”, they often point out the phenomenon of phenomenal consciousness by asking you to imagine the “raw feel” of, e.g., “the juiciness of a strawberry”, the “raw feel” of the “redness” of looking at a red color patch, or the “raw feel” of pain.
  2. Often what philosophers think of as their own “raw” experiences such as the experience of “juiciness” are not in fact “raw”, if by raw we mean unfiltered by higher-order conceptual machinery.  Philosophers have insufficiently demonstrated that their own introspection gives them access to truly raw feelings. What their introspection actually gives access to is very conceptually loaded experiences.
  3. To address (2), philosophers might simply stipulate that what they’re interested in are the raw feels that exist independently of complex higher-order machinery, such as those of a bat, a newborn baby, or a global aphasic.
  4. But without a definite criterion to determine whether an entity does in fact have phenomenal consciousness, the stipulation approach fails to stop the threat of the ascription of phenomenal consciousness to entities like single-celled organisms (are you sure there is nothing-it-is-like to be an amoeba?).
  5. Philosophers should therefore reconsider the project of offering a higher-order explanation of phenomenal consciousness.

 The idea behind premise (1) is that when philosophers talk about phenomenal consciousness they don’t define it so much as attempt to point out the phenomenon. Perhaps the most common way to point out phenomenal consciousness is to say things like “Imagine the raw feelings of juiciness as you bite into a strawberry”, or “Imagine the raw visual experience of redness when looking at a red color patch”. So whenever philosophers try to point out the phenomenon of consciousness within their own phenomenology, they point to these “raw feelings” discovered in their phenomenology through introspection.

Premise (2) is controversial in one way and uncontroversial in another. It’s relatively uncontroversial that introspection itself is a higher-order operation, so it’s trivial to say that introspection involves conceptually loaded experience. But what’s controversial is to say that, when introspecting on their raw feelings, philosophers have no principled way to determine which experiential properties are raw and which aren’t. So, for example, in the case of experiencing a “raw feel” of redness when looking at a color patch, my basic hypothesis is that the “redness quale” is a product of higher-order brain operations and is not itself an experiential primitive.

But it is important to realize that I am not claiming that phenomenal consciousness itself is a product of higher-order operations. I think phenomenal consciousness and higher-order operations directed towards phenomenal consciousness are two entirely different things. But where I differ from most same-order theorists is that I think the appeal to “raw feelings” discovered in human introspection is unable to deliver the goods in terms of demonstrating that the “redness” of the color patch is in fact a primitive experiential property. My claim is that human higher-order machinery generates specific sensory “gazing” qualities that are only present when we step back and reflect on what it is exactly that we see. But in accordance with versions of affordance theory, my claim is that when a mouse perceives a red color patch, it does not perceive the redness qua redness, but rather, purely as a means to some behavioral end. So if the red color patch was a sign for where cheese is located, the mouse’s perceptual content would not be “raw redness” but “sign-of-cheese”. That is, it would be cashed out in terms of what Heidegger called something’s “in-order-to”.

For example, let’s imagine a carpenter who lacked all higher-order thoughts but was still capable of basic sensorimotor skills. I would say that the carpenter’s perception of a hammer would not be akin to how a philosopher might introspect on what it is like to perceive a hammer. Instead, the carpenter would perceive the hammer as something-for-hammering. The “raw sensory qualia” such as the hammer’s “brownness” are mental contents only available to creatures capable of non-affordance perception. I personally think that such an ability partially stems from complex linguistic skills, but that’s another story. The point is that, based on the concept of affordance perception and notions of ecologically relevant perception, it becomes psychologically unrealistic to posit the content of “raw feels” in non-human animals. And since human introspection is unable to tell “from within” whether the experiential content is a product of raw feels or tinged by higher-order machinery, the only way to reliably “point out” the phenomenon of phenomenal consciousness is to stipulate it into existence.

This brings me to premise (3). Since it becomes difficult to use human introspection to point out raw feels, philosophers might simply stipulate that they are interested in the experiential properties that exist independently of higher-order thought, such as those experiential properties had by, say, a mouse, a bat, a newborn baby, or perhaps a global aphasic. The problem with the stipulation approach, however, is this: if you are going to say a bat has phenomenally conscious states in virtue of its echolocation, then on a suitably mechanistic account of echolocation, it’s going to turn out that echolocation is not all that different from the type of perception a single-celled organism is capable of. If all we mean by perception is the discrimination of stimuli, then it’s clear that single-celled organisms are capable of a very rudimentary type of perception. Since most philosophers who talk about phenomenal consciousness seem to think it’s a property of the brain, this broad-brushed ascription to lowly single-celled organisms is problematic. Worse, it starts to look like phenomenal consciousness is not that interesting of a property, given it’s shared by a bacterium, a mouse, and a human.

There is plenty of room for disagreement about whether bacteria are in fact phenomenally conscious (it might be argued that phenomenal perceptions require the possibility of misrepresentation and bacteria can’t misrepresent; I personally think the appeal to representation doesn’t work given the arguments of William Ramsey about the “job description” challenge and the fundamental problem of representation). But even if you were to offer a plausible and rigorous definition of phenomenal consciousness that somehow excludes single-celled organisms, you will still run into a sorites paradox when trying to figure out just when in the phylogenetic timeline phenomenal consciousness arose. Since it’s not a well-defined property, this seems like a difficult if not impossible task. Or worse, it seems at least possible to argue for a panpsychism with respect to phenomenal consciousness. Can we really just rule it out a priori? I don’t think so.

For these reasons amongst others, I think higher-order theory should give up on trying to account for phenomenal consciousness. What I think HOT is best suited to explain is not phenomenal consciousness but the higher-order introspection upon first-order sensory contents. I think it is a mistake to think that phenomenal consciousness itself is generated by higher-order representations. But since phenomenal consciousness is really just a property that we stipulate into existence, it doesn’t seem all that important to attempt a scientific explanation of how it arises out of neural tissue. We should give up on using HOT to explain phenomenal consciousness and stick to something more scientifically tractable: giving a functional account of just how it is that philosophers are capable of introspecting on their experience and then thinking and talking about their experience.


6 Comments

Filed under Consciousness, Philosophy

The Myth of the Jaynesians

The Jaynesians are a mythical race of human-like creatures who lack all capacity for reflection. Never has a Jaynesian stopped to reflect on his or her experience. But they are definitely smart. Their adaptive unconscious makes all the important decisions for them: when to get up, when and what to eat, how to work, whom to sleep with, whom to fight, whom to fear, and so on. The Jaynesians are verbal, but their talk does not have mental concepts like “mind”, “reflection”, “consciousness”, or “self-knowledge”. They simply exchange information through speaking, but do so with vocalizations that are abstracted from their original sensory presentation. A farmer who needs a new hammer made by the blacksmith emits a series of vocalizations upon seeing the blacksmith and coming within hearing range, and the blacksmith responds to this vocalization with his own vocalization, until both are pleased. Each vocalization allows for the exchange of meaningful information. But when the farmer asked for a hammer, he did not say “I need a hammer for my project”. Instead it was more like “yo! give hammer receive food” or “receive food give hammer” or simply “hammer (points to shop then to himself)…food (points to food, then to the blacksmith)”. All communication is done without mentalistic metaphors. There is no concept for “mind” or “inner consciousness”, no distinction between things “inside” or “outside” the mind. The Jaynesians mainly order each other around based on social rankings but also exchange info about the weather, about food, about sex, about social events, about the gods, about harvest, about life lessons. These exchanges are products of the adaptive unconscious. There is no conscious intent in their speech, no mental deliberation and rehearsal of what to say, no contemplation of past conversations. The utterances and head nodding involved in day-to-day small talk better illustrate the kind of communication done by the Jaynesians than the nervous over-thinking of a typical first date does. It is reactive, not deliberate. The “islands” of speech stand in for different things, but are stored in the unconscious recesses of the mind and strung together into vocalizations without reflective oversight.

If you doubt the plausibility of symbolic communication without reflective oversight, consider the 19th century cases of automatic writing studied by people like William James and various psychical societies. In automatic writing, very intelligent and meaningful writing is produced entirely by the adaptive unconscious, with the conscious self having no clue what their hand is about to write. They have no reflective access to the decision making of the writing; it simply spills out of their hand fluidly, but demonstrates powerful cognitive skills, often of a creative and poetic nature. Many a poet has utilized this unconscious well as their Muse. Words come into their minds and they simply write them down. To imagine the Jaynesian race is to imagine a society of creatures who are always using the unconscious to speak, without any reflective oversight. The words simply come out in appropriate situations, guided by all the knowledge they have gained since birth about when it is appropriate to use what vocalizations.

Without any capacity for reflection, the mental lives of the Jaynesians are best described as “externally oriented” rather than “internally contemplative”. They are doers. Persons of action. Their adaptive unconscious guides them with great care, making decisions for them in such a way as to facilitate the development of civilization. They worship gods, and their worship takes the form of ritual, trance states, and hallucination. In the same way that the unconscious brings speech to their mouths, it brings speech to their ears, automatically generating hallucinations of ancestors, gods, demons, and angels talking to them. This is another way for the adaptive unconscious to exercise control over the individual Jaynesians. A voice that is experienced as your dead father is very effective at getting you to do something, especially if you don’t have the ability to rationally reflect and realize that you are hearing a hallucination. You simply hear the voice and believe it is as real as the ground you are standing on. After all, because the voice is a product of the unconscious mind, it demonstrates great wisdom and knowledge, impressing the Jaynesians with its near omniscience, convincing them that the gods they hear talking to them are in fact what they say they are: the all-powerful rulers of the cosmos who must be obeyed at all costs or ELSE. This is kind of like Achilles obeying Athena:

He was mulling it over, inching the great sword
From its sheath, when out of the blue
Athena came, sent by the white-armed Goddess
Hera, who loved and watched over both men.
She stood behind Achilles and grabbed his sandy hair,
Visible only to him: not another soul saw her.
Awestruck, Achilles turned around, recognizing
Pallas Athena at once – it was her eyes-…
[Athena gives her command]
…Achilles, the great runner, responded:
“When you two speak, Goddess, a man has to listen
No matter how angry. It’s better that way.
Obey the gods and they hear you when you pray.”

Achilles represents a more advanced state of consciousness than even the Jaynesians, for the Jaynesians would never have been able to respond to the hallucinations with a dialogue. They would have simply obeyed immediately, without hesitation. This was for the best, as strict obedience to the imagined gods held the society together. The temples that held the great icons of the gods were the most powerful inducers of hallucinated commands, with the Jaynesians’ own brains tricking them into obedience by projecting voices into the statues of the gods. We can infer the ancient hallucinatory function of idols from the statues of the god Abu at Tell Asmar:

[Image: statues of the god Abu at Tell Asmar]

Notice the size of the eyes. For many mammals, the “eye staredown” is a way to assert dominance. Whoever lowers their eyes first submits to the mammal with the more powerful stare. Staring is thus a signal for dominance and control, a signal to obey. Now imagine a Jaynesian fasting for a week to prepare for the religious spiritual quest he is about to embark on. As he ingests a powerful substance he walks into the temple chamber and falls under the glance of the imposing statue of the god. He looks into the statue’s eyes and a hallucination is easily induced, since the ritualistic preparation has greatly lowered the threshold for the induction of hallucinations. The Jaynesian experiences the god as literally talking to him, giving him orders and commands. Some of the most common commands were probably orders to bring burial goods. As the Wikipedia article on ancient Egyptian burial customs says, “From the earliest periods of Egyptian history, all Egyptians were buried with at least some burial goods that they thought were necessary after death. At a minimum, these usually consisted of everyday objects such as bowls, combs, and other trinkets, along with food.” Why was this? I think it was because the god’s orders took the neural form of a human projection experienced as a hallucination, which is unconsciously understood to need food and drink and other goods. This makes sense because the first gods were just powerful dead ancestors, eventually ending up with human god-Kings. When the god-King died, the hallucinations were “copies” in the brain of the personality matrix of the King. As a mortal, the King needed food and drink and pleasures, so it is no surprise that the hallucinated form of the King after his death commanded his followers to bring him food and drink and other daily goods, and these were brought in great loads, introducing the concept of the altar and sacrifice to our ancient ancestors.

One of the more curious features of the Jaynesians’ experience is their visual experience. Without the capacity for reflection, the Jaynesians are unable to step back and ask themselves what they just saw, or what they are currently seeing. This experience is almost impossible to imagine for modern conscious humans. It is hard to reflectively imagine what it is like to not be able to ask yourself what you are currently seeing, because right now, as you are reading this, your brain is asking itself what it is seeing. This reflection of the brain onto its own incoming visual data stream is what generates “sensations”, which are feelings of seeing. Most animals do not need to feel what they see, as this is extraneous information, unnecessary for the adaptive unconscious to make motor decisions. However, we conscious humans do ask ourselves what we see. Our brains are constantly doing this. Modern human adult brains perceive their own perception, and are also capable of perceiving their perception of their perception, or possibly perceiving the perception of perceiving their perception. This ability to mentally travel around your own head, consciously perceiving old memories, current data, or future simulations, is essential to the mental toolkit of the modern conscious human. In his recent book The Recursive Mind, Michael Corballis argues that it is the capacity for deeply recursive thought and mental time travel that separates humans from nonhuman animals. He argues that the gestural grammars of referring to noncurrent times and places necessitated the development of recursive thinking, and this in turn allowed for the development of mental time travel (inserting past or future experience into present experience, or injecting near-present experience into present experience, generating feelings of sensation). I think Corballis makes a compelling case.
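To make the idea of recursive embedding slightly more concrete, here is a toy sketch in Python (my own illustration, not Corballis’s formalism; the function name and the strings are made up). The only point is that one and the same operation, applied to its own output, generates arbitrarily deep levels of perception-of-perception.

    # Toy sketch: recursively wrap a first-order perception in
    # higher-order "perceivings of". My illustration, not Corballis's.
    def embed(perception: str, depth: int) -> str:
        if depth == 0:
            return perception  # the first-order content itself
        return f"perceiving that ({embed(perception, depth - 1)})"

    print(embed("red patch", 0))  # red patch
    print(embed("red patch", 1))  # perceiving that (red patch)
    print(embed("red patch", 3))  # three levels of embedding

On the view sketched in this post, “sensations” live at depth one and above, not at depth zero; the Jaynesians, on this toy picture, are stuck at depth zero.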

So the Jaynesians are a race of creatures without such recursive embedding of perceptions into perceptions. Their visual consciousness is radically different from ours, and is almost impossible to imagine. I think this inability to consciously imagine what it’s like to not be able to have such recursive qualia is what leads many philosophers of mind astray. They experience their own experience and think that the qualia associated with experiencing experience are essential to all experiences, when really they are of course essential only to the experience of experience, and not to experience itself. Because they are unfamiliar with the unique phenomenological characteristics of experiencing experience, many philosophers are left to wonder about how “special”, “ineffable”, or “immaterial” their experiences are. They delight in the pure perception of a red patch, or of the juice of a strawberry, or the painfulness of pain. They mistakenly think that the painfulness of pain is intrinsic to all pain experiences when in fact it is intrinsic only to experiences of pain, which are higher-order.

There are a lot of important philosophical lessons to be had from contemplating the possibility of the Jaynesian race. I have self-consciously styled this post after Wilfrid Sellars’ thought experiment about the mythical race of the Ryleans. I think the two cases are similar, but mine is actually historically plausible and fits in with what we know of ancient neolithic experience (cf. Inside the Neolithic Mind). The case of the Jaynesians also illustrates the differences between myself and higher-order theorists like David Rosenthal. Rosenthal, from what I understand, wants to deny that it is reflection which generates the specialness of qualia. He claims it is a higher-order thought, which can be prereflective. So Rosenthal thinks we don’t need to be deliberately introspecting to have conscious qualia. Whereas I agree that we don’t need to deliberate in order to have visual qualia based on experience of experience (i.e. higher-order thought), I do think that, evolutionarily speaking, it is the development of the capacity for reflection that eventually leads to the automatic and prereflective higher-order thoughts which generate conscious what-it-is-likeness. So we agree that you don’t need to reflectively deliberate to presently have conscious qualia, but we disagree because I think it is the phylogenetic and ontogenetic development of reflection which enables the prereflective higher-order thoughts to get started in the first place. I’m still not sure how major a disagreement this actually is between us. I think we could actually be quite theoretically close, and only differ in terms of evolutionary implementation details.

3 Comments

Filed under Consciousness, Philosophy, Psychology