I’m working on a new paper that will probably be used as my first Qualifying Paper for the Wash U PhD program, to be turned in at the beginning of the Fall semester (the program requires the submission of 3 Qualifying Papers instead of comps). There is a central argument in the paper that I’m hoping to get some feedback on. I call it the Failure of Introspection Argument. It goes something like this:
- When philosophers set up the “hard problem of phenomenal consciousness”, they often point out the phenomenon of phenomenal consciousness by asking you to imagine the “raw feel” of, e.g., “the juiciness of a strawberry”, the “raw feel” of the “redness” of looking at a red color patch, or the “raw feel” of pain.
- Often what philosophers think of as their own “raw” experiences such as the experience of “juiciness” are not in fact “raw”, if by raw we mean unfiltered by higher-order conceptual machinery. Philosophers have insufficiently demonstrated that their own introspection gives them access to truly raw feelings. What their introspection actually gives access to is very conceptually loaded experiences.
- To address (2), philosophers might simply stipulate that what they’re interested in are the raw feels that exist independently of complex higher-order machinery, such as those of a bat, a newborn baby, or a global aphasic.
- But without a definite criterion to determine whether an entity does in fact have phenomenal consciousness, the stipulation approach fails to stop the threat of ascribing phenomenal consciousness to entities like single-celled organisms (are you sure there is nothing-it-is-like to be an amoeba?).
- Philosophers should therefore reconsider the project of offering a higher-order explanation of phenomenal consciousness.
The idea behind premise (1) is that when philosophers talk about phenomenal consciousness they don’t define it so much as attempt to point out the phenomenon. Perhaps the most common way to point out phenomenal consciousness is to say things like “Imagine the raw feelings of juiciness as you bite into a strawberry”, or “Imagine the raw visual experience of redness when looking at a red color patch”. So whenever philosophers try to point out the phenomenon of consciousness within their own phenomenology, they point to these “raw feelings” discovered in their phenomenology through introspection.
Premise (2) is controversial in one way and uncontroversial in another. It’s relatively uncontroversial that introspection itself is a higher-order operation, so it’s trivial to say that introspection involves conceptually loaded experience. But what’s controversial is to say that, when introspecting on their raw feelings, philosophers have no principled way to determine which experiential properties are raw and which aren’t. So, for example, in the case of experiencing a “raw feel” of redness when looking at a color patch, my basic hypothesis is that the “redness quale” is a product of higher-order brain operations and is not itself an experiential primitive.
But it is important to realize that I am not claiming that phenomenal consciousness itself is a product of higher-order operations. I think phenomenal consciousness and higher-order operations directed towards phenomenal consciousness are two entirely different things. But where I differ from most same-order theorists is that I think the appeal to “raw feelings” discovered in human introspection is unable to deliver the goods in terms of demonstrating that the “redness” of the color patch is in fact a primitive experiential property. My claim is that human higher-order machinery generates specific sensory “gazing” qualities that are only present when we step back and reflect on what it is exactly that we see. In line with versions of affordance theory, I claim that when a mouse perceives a red color patch, it does not perceive the redness qua redness, but rather, purely as a means to some behavioral end. So if the red color patch were a sign for where cheese is located, the mouse’s perceptual content would not be “raw redness” but “sign-of-cheese”. That is, it would be cashed out in terms of what Heidegger called something’s “in-order-to”.
For example, let’s imagine a carpenter who lacked all higher-order thoughts but was still capable of basic sensorimotor skills. I would say that the carpenter’s perception of a hammer would not be akin to how a philosopher might introspect on what it is like to perceive a hammer. Instead, the carpenter would perceive the hammer as something-for-hammering. The “raw sensory qualia” such as the hammer’s “brownness” are mental contents only available to creatures capable of non-affordance perception. I personally think that such an ability partially stems from complex linguistic skills, but that’s another story. The point is that, based on the concept of affordance perception and notions of ecologically relevant perception, it becomes psychologically unrealistic to posit the content of “raw feels” in non-human animals. And since human introspection is unable to tell “from within” whether the experiential content is a product of raw feels or tinged by higher-order machinery, the only way to reliably “point out” the phenomenon of phenomenal consciousness is to stipulate it into existence.
This brings me to premise (3). Since it becomes difficult to use human introspection to point out raw feels, philosophers might simply stipulate that they are interested in the experiential properties that exist independently of higher-order thought, such as those experiential properties had by, say, a mouse, a bat, a newborn baby, or perhaps a global aphasic. The problem with the stipulation approach, however, is this: if you are going to say a bat has phenomenally conscious states in virtue of its echolocation, then on a suitably mechanistic account of echolocation, it’s going to turn out that echolocation is not all that different from the type of perception a single-celled organism is capable of. If all we mean by perception is the discrimination of stimuli, then it’s clear that single-celled organisms are capable of a very rudimentary type of perception. But since most philosophers who talk about phenomenal consciousness seem to think it’s a property of the brain, this broad-brushed ascription to lowly single-celled organisms is problematic. Moreover, it starts to look like phenomenal consciousness is not that interesting of a property, given that it’s shared by a bacterium, a mouse, and a human.
There is plenty of room for disagreement about whether bacteria are in fact phenomenally conscious (it might be argued that phenomenal perceptions require the possibility of misrepresentation, and bacteria can’t misrepresent; I personally think the appeal to representation doesn’t work, given William Ramsey’s arguments about the “job description” challenge and the fundamental problem of representation). But even if you were to offer a plausible and rigorous definition of phenomenal consciousness that somehow excludes single-celled organisms, you will still run into a sorites paradox when trying to figure out just when in the phylogenetic timeline phenomenal consciousness arose. Since it’s not a well-defined property, this seems like a difficult if not impossible task. Or worse, it seems at least possible to argue for a panpsychism with respect to phenomenal consciousness. Can we really just rule it out a priori? I don’t think so.
For these reasons amongst others, I think higher-order theory should give up on trying to account for phenomenal consciousness. What I think HOT is best suited to explain is not phenomenal consciousness but the higher-order introspection upon first-order sensory contents. I think it is a mistake to think that phenomenal consciousness itself is generated by higher-order representations. And since phenomenal consciousness is really just a property that we stipulate into existence, it doesn’t seem all that important to attempt a scientific explanation of how it arises out of neural tissue. We should give up on using HOT to explain phenomenal consciousness and stick to something more scientifically tractable: giving a functional account of just how it is that philosophers are capable of introspecting on their experience and then thinking and talking about their experience.