Can We Detect Consciousness in Babies? A Skeptical Reply to Sid Kouider et al.


Sid Kouider et al. recently published a paper claiming to have found a “neural marker of perceptual consciousness” in babies too young to verbally report their awareness, a finding that would be a significant achievement if it actually meant anything. But I will argue that it doesn’t signify any progress at all in the science of consciousness. I have a single overarching complaint about this paper, one that generalizes to the entire neural-correlation approach: a lack of operational precision in defining what they are trying to detect with their measuring instruments. The title says they are looking for a neural marker of “perceptual consciousness”. But in the abstract and paper they use a confusing mixture of different words such as “conscious reflection”, “conscious access”, “conscious perception”, “conscious experience”, “awareness”, and “subjectivity”.

This is ridiculous. Concepts that play a role in scientific thinking should not be so ambiguous. Are reflection and perception the same thing? How is perception defined? Is perception the same as perceptual consciousness? If not, what’s the difference? Is perception the same as access? Are awareness and subjectivity identical? Can you have awareness without reflection? Is all subjectivity reflective? How is experience defined? Do creatures incapable of reflection have sensory experiences? Is experience the same as having awareness? The slipperiness of these words is paralyzing because you can never pin them down; every time someone claims to have a firm grasp of these terms their meaning vanishes into a vapor of further undefinitions and hand-waving.

How can the science of consciousness ever be taken seriously if it never escapes from the morass of undefinitions and ambiguous synonyms? Are we detecting qualia, phenomenology, reflection, awareness, or experience? What do these terms mean? Do they all mean the same thing? How can we measure them? Can “experiences” be directly measured? If not, how do we justify our indirect measurement? How can we be sure that our measuring instruments are accurately measuring the things we say they are? These studies are built on a foundation of verbal sand, a tangled, confused mass of open-ended verbal definitions that are chained to nothing but other verbal definitions, with no clear sense of how these concepts can be measured by standard scientific instrumentation.

Measurement-verification circularity is not unique to the “mental sciences”; the mental sciences are just less well equipped to deal with the problem in practice. The concept of “temperature” is also circularly defined, but unlike consciousness, we have a consensus by convention: if you stick a mercury thermometer into the steam of boiling water, the mercury will expand to the point on the thermometer marked “100” on an arbitrarily defined numerical scale, under standard conditions (e.g., normal atmospheric pressure, pure water). The problem with consciousness studies is that there is no consensus on how to operationally define our concepts in terms of classes of operations that can be uniquely defined and carried out by independent scientists with measuring instruments calibrated to a conventionally agreed standard.

Take Kouider et al.’s operational definition for detecting consciousness in babies with EEG. They first use EEG on adults and classify perceptual processing as a two-stage process; the second stage they take to be a neural marker for consciousness because the adults report that they have “seen” something:

During the first ~200 to 300 ms of processing, brain responses increase linearly with the stimulus energy or duration. This early linear stage can be observed even on subliminal trials in which the stimulus is subjectively invisible. By contrast, the second stage, which starts after ~300 ms, is characterized by a nonlinear, essentially all-or-none change in brain activity detectable with event-related potentials (ERPs) and intracranial recordings. Note that this second stage occurs specifically on trials reported as consciously seen.

But how do they know there is no consciousness during stage 1, where there is no report? How do we rigorously make the inference from “There is no report” to “There is no consciousness”? It’s certainly not an analytic truth, so there must be some empirical justification. But what is it? Suppose some kind of consciousness exists in stage 1 but we haven’t figured out how to measure it. How do we rule out the possibility that our measuring instruments have missed something? If you define consciousness as “the act of reporting, and/or the contents of what is reported”, then the inference is on firmer ground, but the firmness is purely conceptual. To see the verbal nature of this inference, suppose you define consciousness as “the subjectivity that can occur independently of any possibility of reporting it” (setting aside for now that this isn’t actually a meaningful definition without also defining “subjectivity”). Then clearly the inference from lack of report of consciousness to lack of consciousness doesn’t follow.

I see the problem here as a simple terminological disagreement. But terminological disputes are not innocent; they have a tendency to pollute the entire downstream scientific process. If competing labs have a terminological dispute but claim to be studying the same thing (“consciousness”), then they will be talking past each other in the most wearisome and unproductive manner. No progress will be made. Sure, there will be progress within the theoretical frameworks of each competing lab. But in Kuhnian language this will be akin to there being not a single “normal paradigm” of consciousness but dozens of rival paradigms, each with its own disciplinary matrix of terminology, definitions, preferred measurement protocols, and standards for measurement verification.

As I see it, the science of consciousness has two futures. In the first future, the dozens of competing definitions and concepts of consciousness will undergo a process of artificial selection, and in, say, 100 years all scientists who call themselves consciousness researchers will have reached a consensus on how to operationally define the concept, much like the current field of thermometry. This wouldn’t mean that the science of consciousness would be “complete”; it would simply have become a single “normal science”, such that if it undergoes a conceptual or experimental revolution, the revolution will be against a single well-established paradigm. Right now all we have are micro-revolutions that are of no general significance. The victories ring hollow because there is no consensus on how to evaluate the standards of success.

John Dorris called this line of thinking dangerously akin to “scorched-earth skepticism”. But I’m okay with that. To twist the metaphor, I see it as “forest-fire skepticism”: some plant species have adapted to local “fire regimes” such that the fire kills off half the population but triggers the seed release that secures the population’s recovery. That’s my purpose in being skeptical of consciousness studies: to thin the field via negativa.

The second future is more grim: the science of consciousness will simply be abandoned. Either that, or what amounts to the same: the science of consciousness will be fractured into dozens of distinct, hyper-specialized subdisciplines that are effectively distinct academic pursuits, and only historians will remember that they were once all trying to study the same thing.

p.s. I don’t think talk of “perceptual representations” or “neural representations” is on any firmer ground than “perceptual consciousness”.



Filed under Consciousness, Philosophy of science, Psychology

8 responses to “Can We Detect Consciousness in Babies? A Skeptical Reply to Sid Kouider et al.”

  1. For a useful definition of consciousness see “Where Am I? Redux”, here:

    http://theassc.org/documents/where_am_i_redux

    • Hi Arnold, is this the definition?

      “from the subjective point of view, as a transparent phenomenal representation of the world from a privileged egocentric perspective”

      If so, then I have some follow-up questions:

      How is “subjective” defined? How is “phenomenal” defined? How is “representation” defined?

      It seems to me that vague and amorphous terms like “consciousness” can only be defined verbally in terms of other vague and amorphous terms like “awareness”, “experience”, or “subjective”, which are of course left undefined, or also defined verbally in an endless chain of open-ended definitions.

  2. Gary, that is not my working definition. You are right, the subjective point of view goes round-and-round through the same revolving door. My working definition of consciousness is given on page 4 of my ASSC draft and on page 211 of the published JCS article. It is this:

    *Consciousness is a transparent brain representation of the world from a privileged egocentric perspective*

    The biophysical nature of this special kind of brain representation is realized in the neuronal structure and dynamics of what I call retinoid space. More details about this are given in my paper *Space, self, and the theater of consciousness*, here:

    http://people.umass.edu/trehub/YCCOG828%20copy.pdf

    You might also be interested in my reply to one who doubts the possibility of a scientific explanation of consciousness, in PhilPapers here:

    http://philpapers.org/post/7771

    • Hi Arnold,

      If we imagine ourselves traveling backwards through evolutionary time we will eventually find a living creature who does not have “transparent brain representations of the world from a privileged egocentric perspective”. So here’s my question: is there something-it-is-like to be that creature? If you say no, on what empirical evidence do you base that claim? Does something-it-is-likeness register on any measuring instruments such that we can detect its presence or absence in this brainless creature?

      Suppose you designed a measuring instrument to look for egocentric brain representations and found no such things in this primitive creature. Can you thereby assume you have conclusively established the absence of there being something-it-is-like in that creature without presupposing your theory is true (which is what we are trying to use a measuring instrument to verify)? Because if your definition holds, this imagined creature does not have consciousness. But on what empirical grounds can you make this conclusion? What measuring instrument will you use to detect the presence or absence of consciousness to confirm your theory is right in saying there is not even a “wisp” of consciousness in these brainless creatures?

      Cheers,

      Gary

  3. Gary, you wrote:

    “So here’s my question: is there something-it-is-like to be that creature?”

    This is the question that Nagel asked, but I think that it doesn’t quite hit the mark. There must be something it is like *for a conscious creature* to be that creature but, because each creature has a different history of experience, there is no way that one could possibly say what it is like for any *particular* conscious creature. The key question is “What is it like for *any creature* to be *conscious*?”. My answer to this question is that a creature is conscious if and only if the creature has a transparent brain representation of *something somewhere* in perspectival relation to its self. So there is nothing it is like for the living creature who does not have “transparent brain representations of the world from a privileged egocentric perspective”.

    Gary: “Suppose you designed a measuring instrument to look for egocentric brain representations and found no such things in this primitive creature. Can you thereby assume you have conclusively established the absence of there being something-it-is-like in that creature without presupposing your theory is true (which is what we are trying to use a measuring instrument to verify)?”

    Does science *conclusively* establish the presence of any theoretical entity based on any measurement? I don’t think so. Science is a pragmatic enterprise that arrives at a provisional canon on the basis of the weight of current evidence. Also, I don’t see how one could even propose a measure of “something-it-is-like” without a theoretical model to justify the measurement protocol. Failure to find what the theory says should be found counts against the validity of the theory. Finding what the theory predicts lends support to the theory. What else would you look for?

  4. Hi Arnold,
    You said:
    “I don’t see how one could even propose a measure of ‘something-it-is-like’ without a theoretical model to justify the measurement protocol. Failure to find what the theory says should be found counts against the validity of the theory. Finding what the theory predicts lends support to the theory. What else would you look for?”

    Your statement captures my own sentiments perfectly, but I guess I draw a more skeptical conclusion than you. The inescapable theory-ladenness worries me from a philosophical and epistemic point of view, especially when there are thousands of contesting consciousness researchers, each with their own competing definitions, methods, etc. I don’t see any current trendlines towards conventional consensus either. If anything, the field seems to be getting larger and more fractured.


      Gary,

      Sure, there are many different competing ideas about consciousness. But how many of these are well-developed theories with supporting empirical evidence? For example, the retinoid theory of consciousness details a particular system of brain mechanisms with a neuronal structure and dynamics that represents a volumetric space including a “point” of perspectival origin (the theoretical locus of the core *self* — I!). The causal properties of the retinoid system have been tested and have successfully explained/predicted many previously inexplicable conscious phenomena. See, for example:
      http://people.umass.edu/trehub/YCCOG828%20copy.pdf
      and
      http://theassc.org/documents/where_am_i_redux

      What do you think are the two most important competing theories of consciousness?

