Sid Kouider et al. recently published a paper claiming to have found a “neural marker of perceptual consciousness” in babies too young to verbally report their awareness, a finding that would be a significant achievement if it actually meant anything. But I will argue that it doesn’t signify any progress at all in the science of consciousness. I have a single overarching complaint about this paper, one that generalizes to the entire neural correlation approach: a lack of operational precision in defining what they are trying to detect with their measuring instruments. The title says they are looking for a neural marker of “perceptual consciousness”. But in the abstract and paper they use a confusing mixture of different words such as “conscious reflection”, “conscious access”, “conscious perception”, “conscious experience”, “awareness”, and “subjectivity”.
This is ridiculous. Concepts that play a role in scientific thinking should not be so ambiguous. Are reflection and perception the same thing? How is perception defined? Is perception the same as perceptual consciousness? If not, what’s the difference? Is perception the same as access? Are awareness and subjectivity identical? Can you have awareness without reflection? Is all subjectivity reflective? How is experience defined? Do creatures incapable of reflection have sensory experiences? Is experience the same as having awareness? The slipperiness of these words is paralyzing because you can never pin them down; every time someone claims to have a firm grasp of these terms, their meaning vanishes into a vapor of further undefinitions and hand-waving.
How can the science of consciousness ever be taken seriously if it never escapes from the morass of undefinitions and ambiguous synonyms? Are we detecting qualia, phenomenology, reflection, awareness, or experience? What do these terms mean? Do they all mean the same thing? How can we measure them? Can “experiences” be directly measured? If not, how do we justify our indirect measurement? How can we be sure that our measuring instruments are accurately measuring the things we say they are? These studies are built on a foundation of verbal sand, a tangled, confused mass of open-ended verbal definitions that are chained to nothing but other verbal definitions, with no clear sense of how these concepts can be measured by standard scientific instrumentation.
Measurement verification circularity is not unique to the “mental sciences”; the mental sciences are just not as well equipped to deal with the problem in practice. The concept of “temperature” is also circularly defined, but unlike consciousness, we have a consensus by convention that if you hold a mercury thermometer in the steam above boiling water, the mercury will expand to the point on the thermometer marked “100” on an arbitrarily defined numerical scale under standard conditions (normal atmospheric pressure, pure water, etc.). The problem with consciousness studies is that there is no consensus on how to operationally define our concepts in terms of classes of operations that can be uniquely defined and carried out by independent scientists with measuring instruments calibrated to a conventionally agreed standard.
Take Kouider et al.’s operational definition for detecting consciousness in babies with EEG. They first use EEG on adults and classify perceptual processing as a two-stage process, taking the second stage to be a neural marker for consciousness because the adults report that they have “seen” something:
During the first ~200 to 300 ms of processing, brain responses increase linearly with the stimulus energy or duration. This early linear stage can be observed even on subliminal trials in which the stimulus is subjectively invisible. By contrast, the second stage, which starts after ~300 ms, is characterized by a nonlinear, essentially all-or-none change in brain activity detectable with event-related potentials (ERPs) and intracranial recordings. Note that this second stage occurs specifically on trials reported as consciously seen.
But how do they know there is no consciousness during stage 1, where there is no report? How do we rigorously make the inference from “There is no report” to “There is no consciousness”? It’s certainly not an analytic truth, so there must be some empirical justification. But what is it? Suppose some kind of consciousness exists in stage 1 but we haven’t figured out how to measure it. How do we rule out the possibility that our measuring instruments have missed something? If you define consciousness as “the act of reporting, and/or the contents of what is reported”, then the inference is on firmer ground, but the firmness is purely conceptual. To see the verbal nature of this inference, suppose you define consciousness as “the subjectivity that can occur independently of any possibility of reporting it” (setting aside for now that this isn’t actually a meaningful definition without also defining “subjectivity”). Then clearly the inference from lack of report of consciousness to lack of consciousness doesn’t follow.
I see the problem here as a simple terminological disagreement. But terminological disputes are not innocent; they have a tendency to pollute the entire downstream scientific process. If competing labs have a terminological dispute but claim to be studying the same thing (“consciousness”), then they will be talking past each other in the most wearisome and unproductive manner. No progress will be made. Sure, there will be progress within the theoretical frameworks of each competing lab. But in Kuhnian language, this will be akin to there being not a single “normal paradigm” of consciousness but dozens of rival paradigms, each with its own disciplinary matrix of terminology, definitions, preferred measurement protocols, and standards for measurement verification.
As I see it, the science of consciousness has two futures. In the first future, the dozens of competing definitions and concepts of consciousness will undergo a process of artificial selection, and in, say, 100 years all scientists who call themselves consciousness researchers will have reached a consensus on how to operationally define the concept, much like the current field of thermometry. This wouldn’t mean that the science of consciousness was “complete”; it would simply have become a single “normal science”, so that any future conceptual or experimental revolution would be a revolution against a single well-established paradigm. Right now all we have are micro-revolutions that are of no general significance. The victories ring hollow because there is no consensus on how to evaluate the standards of success.
John Dorris called this line of thinking dangerously akin to “scorched Earth skepticism”. But I’m okay with that. To twist the metaphor, I see it as “Forest fire skepticism” because some plant species have adapted to local “fire regimes” such that the fire kills off half the species but triggers seed formation that secures population recovery. That’s my purpose in being skeptical of consciousness studies: to thin the field via negativa.
The second future is grimmer: the science of consciousness will simply be abandoned. Either that, or, what amounts to the same thing, the science of consciousness will be fractured into dozens of distinct, hyper-specialized subdisciplines that are effectively separate academic pursuits, and only historians will remember that they were once all trying to study the same thing.
p.s. I don’t think talk of “perceptual representations” or “neural representations” is on any firmer ground than “perceptual consciousness”.