
My Biggest Pet Peeve in Consciousness Research

 

Boy, was I excited to read that new Nature paper in which scientists report experimentally inducing lucid dreaming in people. Pretty cool, right? But then, right in the abstract, I ran across my biggest pet peeve whenever people use the dreaded c-word: blatant terminological inconsistency. Not just an inconsistency across different papers, or one buried in a footnote, but an inconsistency between the title and the abstract, and within the abstract itself. Consider the title of the paper:

Induction of self awareness in dreams through frontal low current stimulation of gamma activity

The term “self-awareness” makes sense here because if normal dream awareness is environmentally-decoupled 1st order awareness, then lucid dreaming is a 2nd order awareness, because you become meta-aware of the fact that you are first-order dream-aware. So far so good. Now consider the abstract:

 Recent findings link fronto-temporal gamma electroencephalographic (EEG) activity to conscious awareness in dreams, but a causal relationship has not yet been established. We found that current stimulation in the lower gamma band during REM sleep influences ongoing brain activity and induces self-reflective awareness in dreams. Other stimulation frequencies were not effective, suggesting that higher order consciousness is indeed related to synchronous oscillations around 25 and 40 Hz.

Gah! What a confusing mess of conflicting concepts. The title says “self-awareness” but the first sentence talks instead about “conscious awareness”. It’s an elementary mistake to confuse consciousness with self-consciousness, or at least to conflate them without an immediate qualification of why you are violating standard practice in doing so. While there are certainly theorists out there who are skeptical about the very idea of “1st order” awareness being cleanly demarcated from “2nd order” awareness (Dan Dennett comes to mind), it goes without saying that this is a highly controversial position that cannot just be assumed without begging the question. Immediate red flag.

The first sentence also references previous findings linking the neural correlates of “conscious awareness” to specific gamma frequencies of neural activity in fronto-temporal networks. The authors note, however, that correlation is not causation. The next sentence then leads us to expect that the study will provide the missing causal evidence connecting conscious awareness and gamma frequencies.

Yet the authors don’t say that. What they say instead is that they’ve found evidence that gamma frequencies are linked to “self-reflective awareness” and “higher-order consciousness”, which again are concepts theoretically distinct from “conscious awareness”, unless you are pretheoretically committed to some kind of higher-order theory of consciousness. But even that wouldn’t be quite right, because on, e.g., Rosenthal’s HOT theory, a higher-order thought would give rise to first-order awareness, not lucid dreaming, which is about self-awareness. On higher-order views, you would technically need a 3rd order awareness to count as lucid dreaming.

So please, if you are writing about consciousness, remember that consciousness is distinct from self-consciousness and keep your terms straight.



Filed under Academia, Consciousness, Random

Can We Detect Consciousness in Babies? A Skeptical Reply to Sid Kouider et al.


Sid Kouider et al. recently published a paper claiming to have found a “neural marker of perceptual consciousness” in babies too young to verbally report their awareness, a finding that would be a significant achievement if it actually meant anything. But I will argue that it doesn’t signify any progress at all in the science of consciousness. I have a single overarching complaint about this paper, one that generalizes to the entire neural-correlation approach: lack of operational precision in defining what they are trying to detect with their measuring instruments. The title says they are looking for a neural marker of “perceptual consciousness”. But in the abstract and paper they use a confusing mixture of different words such as “conscious reflection”, “conscious access”, “conscious perception”, “conscious experience”, “awareness”, and “subjectivity”.

This is ridiculous. Concepts that play a role in scientific thinking should not be so ambiguous. Are reflection and perception the same thing? How is perception defined? Is perception the same as perceptual consciousness? If not, what’s the difference? Is perception the same as access? Are awareness and subjectivity identical? Can you have awareness without reflection? Is all subjectivity reflective? How is experience defined? Do creatures incapable of reflection have sensory experiences? Is experience the same as having awareness? The slipperiness of these words is paralyzing because you can never pin them down; every time someone claims to have a firm grasp of these terms their meaning vanishes into a vapor of further undefinitions and hand-waving.

How can the science of consciousness ever be taken seriously if it never escapes from the morass of undefinitions and ambiguous synonyms? Are we detecting qualia, phenomenology, reflection, awareness, or experience? What do these terms mean? Do they all mean the same thing? How can we measure them? Can “experiences” be directly measured? If not, how do we justify our indirect measurement? How can we be sure that our measuring instruments are accurately measuring the things we say they are? These studies are built on a foundation of verbal sand, a tangled, confused mass of open-ended verbal definitions that are chained to nothing but other verbal definitions, with no clear sense of how these concepts can be measured by standard scientific instrumentation.

Measurement verification circularity is not unique to the “mental sciences”; the mental sciences are just not as well equipped to deal with the problem in practice. The concept of “temperature” is also circularly defined, but unlike consciousness, we have a consensus by convention that if you stick a mercury thermometer into the steam of boiling water, the mercury will expand to the point on the thermometer marked “100” on an arbitrarily defined numerical scale, under standard conditions, e.g. normal atmospheric pressure, pure water, etc. The problem with consciousness studies is that there is no consensus on how to operationally define our concepts in terms of classes of operations that can be uniquely defined and carried out by independent scientists with measuring instruments calibrated to a conventionally agreed standard.

Take Kouider et al.’s operational definition for detecting consciousness in babies with EEG. They first use EEG on adults and classify perceptual processing as a two-stage process, the second stage of which they take to be a neural marker for consciousness because the adults report they have “seen” something:

During the first ~200 to 300 ms of processing, brain responses increase linearly with the stimulus energy or duration. This early linear stage can be observed even on subliminal trials in which the stimulus is subjectively invisible. By contrast, the second stage, which starts after ~300 ms, is characterized by a nonlinear, essentially all-or-none change in brain activity detectable with event-related potentials (ERPs) and intracranial recordings. Note that this second stage occurs specifically on trials reported as consciously seen.

But how do they know there is no consciousness during stage 1, where there is no report? How do we rigorously make the inference from “There is no report” to “There is no consciousness”? It’s certainly not an analytic truth, so there must be some empirical justification. But what is it? Suppose some kind of consciousness exists in stage 1 but we haven’t figured out how to measure it. How do we rule out the possibility that our measuring instruments have missed something? If you define consciousness as “The act of reporting, and/or the contents of what are reported”, then the inference is on firmer ground, but the firmness is purely conceptual. To see the verbal nature of this inference, suppose you define consciousness as “The subjectivity that can occur independently of any possibility of reporting it” (setting aside for now that this isn’t actually a meaningful definition without also defining ‘subjectivity’). Then clearly the inference from lack of report of consciousness to lack of consciousness doesn’t follow.

I see the problem here as simple terminological disagreement. But terminological disputes are not innocent; they have a tendency to pollute the entire downstream scientific process. If competing labs have a terminological dispute but claim to be studying the same thing (“consciousness”) then they will be talking past each other in the most wearisome and unproductive manner. No progress will be made. Sure, there will be progress within the theoretical frameworks of each competing lab. But in Kuhnian language this will be akin to there not being a single “normal paradigm” of consciousness but dozens of rival paradigms, each with their own disciplinary matrix of terminology, definitions, preferred measurement protocols, and standards for measurement verification.

As I see it, the science of consciousness has two futures. In the first future, the dozens of competing definitions and concepts of consciousness will undergo a process of artificial selection, and in, say, 100 years all scientists who call themselves consciousness researchers will have reached a consensus on how to operationally define the concept, much like the current field of thermometry. This wouldn’t mean that the science of consciousness would be “complete”; it’s just that it would turn into a single “normal science”, so that if it undergoes a conceptual or experimental revolution, the revolution will be against a single well-established paradigm. Right now all we have are micro-revolutions that are of no general significance. The victories ring hollow because there is no consensus on how to evaluate the standards of success.

John Doris called this line of thinking dangerously akin to “scorched Earth skepticism”. But I’m okay with that. To twist the metaphor, I see it as “forest fire skepticism”: some plant species have adapted to local “fire regimes” such that the fire kills off half the population but triggers seed formation that secures the population’s recovery. That’s my purpose in being skeptical of consciousness studies: to thin the field via negativa.

The second future is more grim: the science of consciousness will simply be abandoned. Either that, or what amounts to the same: the science of consciousness will be fractured into dozens of distinct, hyper-specialized subdisciplines that are effectively distinct academic pursuits, and only historians will remember that they were once all trying to study the same thing.

p.s. I don’t think talk of “perceptual representations” or “neural representations” is on any firmer ground than “perceptual consciousness”.


Filed under Consciousness, Philosophy of science, Psychology

The Distressing Swiftness of Contemporary Philosophical Argumentation

David Chalmers recently posted a paper about panpsychism to his blog. Like an addict returning to the source of their troubles, I can’t help but read almost everything Chalmers writes when it comes to consciousness. He calls his argument for panpsychism “Hegelian” because it works using a thesis, antithesis, and synthesis structure. The thesis is materialism, the antithesis is the conceivability argument against materialism, and the synthesis is panpsychism. Because the paper is focused on panpsychism, Chalmers sets up the thesis and antithesis quickly. Using his finely honed but slightly worn stockpile of arguments against materialism, Chalmers is deftly able to dismiss his opponents in a single sentence! Consider this paragraph, which follows his presentation of the antithesis:

Materialists do not just curl up and die when confronted with the conceivability argument and its cousins. Type-A materialists reject the epistemic premise, holding for example that zombies are not conceivable. Type-B materialists reject the step from an epistemic premise to an ontological conclusion, holding for example that conceivability does not entail possibility. Still, there are significant costs to both of these views. Type-A materialism seems to require something akin to an analytic functionalist view of consciousness, which most philosophers find too deflationary to be plausible.

For those not acquainted with Chalmers’ neat taxonomy of everyone who disagrees with him, “Type-A materialism” is the view that zombies are not conceivable. Chalmers created the Type-A concept basically as an honorary category reserved especially for Dan Dennett’s writings on qualia. Crudely stated, Dennett’s Type-A materialism amounts to the view that serious scientific (or philosophical) theorizing about qualia is misguided and confused for innumerable reasons; that people who use the term in the way Chalmers does generally don’t know what they are talking about, or if they do, they can’t explain it to anyone else; and that we’re better off denying qualia exist or replacing the qualia concept with some better, more fruitful way of thinking about minds.

But notice the incredible swiftness of Chalmers’ dismissal of Type-A materialism, highlighted by the final sentence of the passage quoted above. He says Type-A materialism is not worth our time because “most philosophers find it too deflationary to be plausible.” However, Type-A materialists are a minority position in consciousness studies precisely because they are the equivalent of the phlogiston naysayers who argued that the concept “phlogiston” is an empty symbol, like “the present king of France”. So of course most philosophers are going to “find it too deflationary”! But that’s not an argument! That’s just citing the sociological fact that, as a matter of course, most people who study qualia disagree with the people who say it’s a bad idea to try to study qualia! The dismissal amounts to nothing more than doing philosophy by survey. Because “most philosophers” find it implausible, it can be dismissed in a single sentence, which is equivalent to saying “A minority view is not held by a majority of philosophers; therefore the minority view is not worth our time.”

This curtness of dialectical engagement with critics who are skeptical of the basic presuppositions surrounding talk of qualia highlights what I see as a critical weakness in the “normal science” of qualia studies: insufficiently precise definitions of concepts. For example, look at how Chalmers sets up the theory of panpsychism:

I will understand panpsychism as the thesis that some fundamental physical entities are conscious: that is, that there is something it is like to be a quark or a photon or a member of some other fundamental physical type.

In defining what it means to call quarks or photons conscious, he appeals to another concept: what-it-is-likeness, which is left completely undefined under the tacit assumption that we know perfectly well what it means. But what exactly does it mean? I have no idea. No one who seriously uses the concept has ever given me a satisfactory answer when I press them to define it without appeal to concepts that are equally mysterious, e.g. “awareness”, “experience”, “phenomenal”, etc. At this point my interlocutors will just try to make me sound “weird” and ask “C’mon Gary, are you seriously denying there is something it is like to drink that beer you’re sipping?” And yes, I will deny it, but only because I am unclear what that term means and don’t wish to say nonsensical things; thumping the table and appealing to crass intuitions is unlikely to convince me that our discussion is on firm ground.

P.W. Bridgman anticipated this problem when he wrote in his 1927 book The Logic of Modern Physics that:

It is a task for experiment to discover whether concepts so defined correspond to anything in nature, and we must always be prepared to find that the concepts correspond to nothing or only partially correspond. In particular, if we examine the definition of absolute time in the light of experiment, we find nothing in nature with such properties.

Bridgman’s diagnosis is that these “empty concepts” are often not defined in a sufficiently operational manner to be amenable to empirical inquiry, the heart and soul of science. If you cannot devise or imagine an experiment that would determine whether there is anything in nature corresponding to your proposed theoretical entity, then your theoretical concept is unfruitful to scientific progress in the highest degree. Bridgman cites the following as a good example of a “meaningless” question, i.e. a question that cannot be operationally defined so as to be resolvable by means of the physical measurement instruments used in science to conduct experimentation:

Is the sensation which I call blue really the same as that which my neighbor calls blue? Is it possible that a blue object may arouse in him the same sensation a red object does in me and vice versa? 

Bridgman doesn’t actually claim this question is meaningless, but suggests “The reader may amuse himself by finding whether [it has] meaning or not”. My guess would be no.

Bridgman’s work is like a breath of fresh air after wading through the foggy mires of qualia studies. I am intent on studying Bridgman more, so don’t be surprised to see his name mentioned on this blog more frequently henceforth.


Filed under Consciousness, Philosophy of science