Monthly Archives: March 2011

Why I Think Pragmatism Fails

My intellectual history with pragmatism goes way back. My first real exposure to pragmatism was through Richard Rorty’s masterpiece Philosophy and the Mirror of Nature. In this work Rorty attempted to argue for a position that would do away with the dogmas and philosophical problems associated with either realism or idealism. Rorty’s own position was heavily inspired by James, Dewey, Heidegger, and the later Wittgenstein. He saw in this tradition a way to avoid the problematic claims of realism and idealism, both of which he saw as relying on a kind of epistemic foundationalism to get off the ground. Foundationalism is the idea that we can secure a solid foundation for building up philosophy and science, one that rests on self-evident epistemic principles. The most obvious epistemic foundation for Modern Philosophy is subjective experience. This was Descartes’s position in a nutshell. From the indubitably self-evident principles of subjective consciousness, Descartes wanted to provide a solid footing for the entirety of human knowledge, including science. For foundationalism, the essential project is to build an edifice of knowledge that rests on the epistemic security of our own experience of the world. From our own experience, we can provide a foundation for the truth-claims of the sciences.

Rorty found this problematic because it assumed that our essential self was, at bottom, this self-conscious “Glassy Essence” that mirrors the world through representational mentation. Because the mind represents the world, the most secure path to knowledge for foundationalism is to determine whether the contents of the mind match up or correspond to the external world. Since the mind is the foundation for our knowledge, if we can develop a method for determining which mental contents accurately correspond to the world, we can arrive at a concept of truth. True mental states are those that correspond to the external world.

But what is this external world? Kant eventually forbade philosophy from talking about mental states corresponding to the external world. For Kant, the method for securing our epistemic foundations is equally subjective, since the essential task for philosophy is to inspect the mind to make sure its representational mechanisms are working properly. On the Kantian schema, the path to objective, grounded knowledge goes through the self and never really leaks out to the external world. The world we think is external is actually internal to our minds, since our experience is but a representation of the noumenal realm. To make sure our mirror is working properly, Kant wants us to polish the mirror rather than inspect the actual world, since we can never get out of our heads.

For Rorty, this whole project of grounding knowledge is doomed to fail from the beginning because it presupposes an awful lot about the nature of the mind, the self, and knowledge. Rorty’s essential philosophical move is to externalize the self such that there is no “inner core”, no real “Glassy Essence” except the one we invent for ourselves through cultural accumulation. Rather than starting with the inner world and moving outward toward the world, Rorty, like Sellars, wants to start with the outer world and move inward. For Rorty we are first and foremost social creatures inhabiting a public sphere with a public language. Following Heidegger’s move of externalization, Rorty thinks we are first “outside” the mind, in-the-world, and it is only a theoretical move that brings us to the inner realm of subjectivity. Once we as a culture have played this “subjectivity game” for long enough, we actually become convinced that we do indeed have a Glassy Essence that is the foundation for all our experience, along with the appropriate cultural mechanisms for acknowledging our own authority on subjective matters. Rorty thinks this is a delusion generated by philosophical language games. In this respect, we can see how Rorty took up the project of the later Wittgenstein, who thought philosophical problems about the mind and body are mere tricks generated by our use of language games.

When I first read Rorty, I bought this hook, line, and sinker. The demolition of the Glassy Essence seems right to me, even to this day. If there is no Glassy Essence at our core grounding knowledge, then the truth-claims of both realism and idealism are groundless, since they are both founded on the core self (which we now know is a mere delusion). In realism, the glassy foundation allows us to make truth-claims about the world insofar as we can represent the objective world in our mind. In idealism, the glassy foundation allows us to make truth-claims about the human-world correlate. On Rorty’s reading, both positions are problematic since they start off with the isolated, representing self.

So if objective truth-claims are groundless for Rorty, how does he avoid a radical relativism where anything goes? Since Rorty moves the “foundation” for knowledge from the inner self toward the outer community, does this not relegate truth to the community? What is to stop a community of flat-Earthers from saying it is “true for them” that the Earth is flat because they have a long communal history of talking about the Earth as if it were flat? Nothing. Rorty cannot avoid this relativism. But he can attempt to rob it of its essential force. How does Rorty do this? By recognizing that one of the most dominant and “useful” communal language games is science itself. Science is nothing but a sophisticated communal practice that has developed its own norms of subjectivity and objectivity, and science tells us that the Earth is not flat.

So although Rorty thinks that it is impossible to ground or provide an absolute foundation for truth claims which separate appearance from reality, he does think that science has invented a language game for distinguishing appearance from reality. This is how Rorty responds to the critics who claim that he is a relativist where “anything goes”. Rorty doesn’t think that we can have absolute knowledge of what’s mere appearance versus what’s reality, but he does think that we have highly developed language games for separating appearance from reality. Science is exactly such a game. It’s just that the truth-claims of science are grounded, not by the self, but by the standards and norms of the scientific community. So we are still able to make truth-claims that separate appearance from reality, it’s just that this ability to make claims is itself just a language game, albeit an absurdly successful one.

So why does pragmatism fail? Very simply, it fails because, no matter how hard he tries, Rorty is unable to stop religious fundamentalists from hijacking this exact argument to show the rationality of faith-based knowledge claims. Reformed Epistemologists like Plantinga want to use this exact same anti-foundationalist argument to bolster the claim that it is rational to believe in God even if there is no evidence or rational argument for his existence. Just so long as there is a religious community with shared communal norms and standards, it is perfectly rational for someone growing up in that community to accept its truth-claims without rational evidence or argumentation. God becomes “properly basic”, i.e. not believed on the basis of any epistemic foundations. The Christian community grounds the truth-claims of Christianity, and Christians are excused from providing evidence or arguments for their position.

This is unacceptable to me. I discovered this “quirk” of pragmatism when I took an undergrad class on Reformed Epistemology. That class made me realize that pragmatism makes it too easy to bolster the “subjective” truth-claims of religion as being perfectly rational because there are religious communities in which those claims make sense. If science is groundless but ok because it’s useful, then religion can be ok too so long as it is useful to a community of believers.

So what’s the solution? How do you avoid the relativism of pragmatism without collapsing back to a problematic foundationalism wherein truth-claims are grounded by the subject, which always seems to lead to problems of skepticism? I’m not totally sure. I’m still working out my critique of pragmatism and Reformed Epistemology. I certainly don’t want to return to foundationalism. I do think we need to demolish the “Glassy Essence” and acknowledge that we all start off embedded in a community of pragmatic norms. But perhaps we need to rehabilitate the position of naturalistic realism to be compatible with the demolition of the self. Can we develop an ecological realism that acknowledges both the reality of the mind-independent world and the ideality of our embeddedness in a community? Moreover, if the scientific language game is able to give a plausible explanation for how religion evolved in the first place, then we would have rational recourse for rejecting the truth-claims of religion without necessarily collapsing to a dogmatic foundationalism. If we can show that religion evolved as a method of social control based on the hallucination of divine beings, we could actually explain religion without merely claiming it is “false”, for obviously someone hallucinating believes with all their mind that their hallucinations correspond to reality. We could acknowledge that religious people think their claims are true while still having a plausible explanation for how these feelings of certainty are generated by neurological activity in the brain, which has an evolutionary and developmental history. When placed side by side in the intellectual arena, the truth-claims of religion and the naturalistic explanation for how religion contingently developed don’t seem to be on equal footing.
If a schizophrenic were convinced that aliens had implanted a device in his brain, the pragmatist would be forced to say it’s “true for him”, especially if the schizophrenic started a cult of followers who developed communal norms of truth based on the reality of alien abductions. The pragmatist could only say “that idea is false from the perspective of the scientific language game but true in respect to the standards of the cult”. The naturalistic realist would be able to, in principle, trace the origin of the belief in aliens to an evolutionary or developmental neurological fact and claim it to be in all likelihood false (although it’s, of course, possible that the schizophrenic is right).

Have I really escaped pragmatism? It’s hard to see how I am avoiding it if I accept anti-foundationalism. It might seem like I am accepting anti-foundationalism but just adding dogma. But I think realism might have a way out of this, and that’s through the method of approximation by guessing. If we want to answer the question of where religion came from, we have two competing hypotheses. The religious hypothesis is that religion developed because God actually exists. The naturalistic hypothesis is that religion developed as a contingent fact of evolutionary and cultural development. Now which is the better hypothesis? That is, which hypothesis is most likely to be accepted by a community of genuine, truth-seeking inquirers after a million years of sustained inquiry? Given the overwhelming acceptance of naturalism amongst the educated and scientifically literate, we could extrapolate and determine that naturalism’s hypothesis about religion is approximating the truth. Given that the God-hypothesis cannot actually generate any predictions about the natural world (for it is one thing to say God exists, it is another to say what he is going to do), it seems that naturalism is superior as a method of inquiry. And the hypothesis for why that method is superior is that naturalistic realism is actually true. Note how this claim is not presupposed at the beginning of the investigation, but rather is something that is generated after genuine inquiry into the probability of either hypothesis being true. Naturalism is the result of a long process of thinking and examining the world, not a dogma presupposed on the basis of self-evident knowledge. It seems, then, that we can accept anti-foundationalism while still being naturalists and realists.


Filed under Atheism, Philosophy

On the idea of massive modularity, or, coming around to computationalism

I feel weird saying this, but I am actually coming around to the idea of “modularity”, particularly the “massive” kind argued for by people like Peter Carruthers. Last week I started reading Carruthers’ highly ambitious 2006 book The Architecture of the Mind. As someone who has resisted representationalism, computationalism, and modularity for many years, I find myself agreeing with Carruthers more often than not, which is a novel experience for me, since usually such language strikes me as problematic and I am constantly thinking “No!”. Granted, I still have to perform a mental substitution for some of his terminological preferences in order to read his claims without thinking them vacuous, but the fact that I am always able to make a plausible interpretation of his claims speaks to the power of his overall vision and the depth of encyclopedic knowledge on display.

First, what does Carruthers mean by “modularity”? In general, modularity refers to the way a functional system can be broken down into dissociable components and subcomponents. For example, you can exchange the tires on a car without affecting the functionality of the engine, or you can replace a speaker in a Hi-Fi system without damaging the rest of the system. The car is thus modular in the sense that it is made out of exchangeable parts that can break down independently of the functionality of other parts of the system. Crucially, modules must be understood in terms of their functionality, not in respect to their anatomical or physiological structure (although knowing that structure is of course helpful for understanding the function, and vice versa). In the case of brain modules, we can’t simply point to one clump of neural tissue and say that’s a module; we have to examine the function of that tissue to determine where the modular components come apart, since they are defined along functional, not anatomical, lines. It is also crucially important to note that for Carruthers, “modular” doesn’t necessarily mean “innate” or “genetically determined”, since the functionality of any module can be changed by development, and development itself can lead to the learning of new functional capabilities (especially with the imitative abilities of humans). Moreover, an important part of a modular functional system is that it can be understood in terms of input/output, with particular kinds of computations done on the input in order to generate output. And as Carruthers defines it, “The input to a system [is] the set of items of information that can turn the system on.”

Normally, I am quite opposed to the idea of using a computational “input/output” framework to explain the mind because it ends up falling prey to the Myth of the Given, whereby the “input” is raw and meaningless, leading to passive forms of linear processing chains that miss the action-perception cycling that makes perception fundamentally meaningful all the way down at the input level. But Carruthers’ definition of input avoids these problematic passive-Cartesian assumptions and is in fact compatible with my own preferred mental metaphysics of “reactivity”. My basic idea is that the organic system is reactive, with the nervous system realizing a particular kind of reactivity. The organism reacts to the environment, reacting to its own reactions, with reactivity all the way down.

Accordingly, Carruthers’ definition of input is compatible with a metaphysics of reactivity in the following way. We can understand computations in terms of the chains of neural reactivity cascades in response to a perturbation of the system from either an external or internal source, with external and internal understood, not epistemologically, but in terms of the boundary of the organism’s membranes. The input to a module is simply that set of information that causes the module to “turn on”, i.e., to start reacting in particular and functionally specific ways. The reaction to the input is the “computation” that is carried out by the module, and the end-result of the reaction is the output, which can act as input to other modules, i.e. it can cause patterns of reactivity in other parts of the brain. Hence, the output of some modules can actually come back around and influence the reactivity of modules that are causally closer to the source of the perturbation, allowing for “top-down” effects. I think that this definition of input/output and computation is perfectly compatible with the “enactivist” tradition, which has traditionally been critical of the input/output paradigm on account of it missing the circular nature of action/perception cycles.
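This picture of modules as reactive units can be sketched in code. The following is my own toy illustration, not anything from Carruthers: the module names, the “kinds” of information, and the transformations are all invented for the example. The point is only the structure: a module “turns on” just in case the input is the kind of information it is specialized for, its reaction is the “computation”, and its output can serve as input to further modules.

```python
# Toy sketch of reactive modules (my illustration, not Carruthers').
# A signal is a (kind, payload) pair; a module reacts only to certain kinds.

class Module:
    def __init__(self, name, accepts, transform):
        self.name = name
        self.accepts = accepts      # kinds of information that "turn it on"
        self.transform = transform  # the "computation": reaction to input

    def react(self, signal):
        kind, payload = signal
        if kind not in self.accepts:
            return None             # module stays off: wrong kind of input
        return self.transform(payload)

# Two hypothetical modules: an "edge detector" feeding a "shape classifier".
edge = Module("edge-detector", {"retinal"}, lambda x: ("edges", x.upper()))
shape = Module("shape-classifier", {"edges"}, lambda x: ("shape", f"shape<{x}>"))

signal = ("retinal", "contour")
out1 = edge.react(signal)   # edge module turns on: ('edges', 'CONTOUR')
out2 = shape.react(out1)    # its output is the next module's input
print(out1, out2)
```

Note that nothing here forces a strictly linear chain: a later module’s output is itself just a signal, so it could be fed back to a module earlier in the cascade, which is the “top-down” influence described above.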

On my reading, Carruthers avoids these problems by defining the input as that which turns the system on, which can be cashed out in a biologically plausible way. Moreover, since Carruthers defines input as the kind of information contained in the stimulus which turns the module on, this is also compatible with a Gibsonian affordance ontology wherein it is the information about affordance-properties contained in the raw stimulus which actually affects the perceiver in such a way as to constitute perception (as opposed to mere sensation, which is noninformative). Hence, we could say that information about affordances in the ambient optic array turns on the modules that are evolutionarily designed to react to that information in adaptive ways. This avoids the Myth of the Given, since that affordance information isn’t necessarily raw. And since the response to affordances is cashed out in an ontology of reactivity, we avoid the internalism and foundationalism of traditional computational approaches inspired by Locke.

So when applied to the brain, Carruthers’ thesis is that the brain (and hence the mind) is massively modular. How is this different from the classic modularity thesis put forward by people like Jerry Fodor? Carruthers radically differs from Fodor in that Fodor thought only the shallow perceptual processes, such as vision, were modular. When it comes to “general” cognitive systems like reasoning or believing, Fodor thought that these processes were not modular, but general. Carruthers’ thesis is radical in the sense that he thinks that even the most abstract, general, multi-modal, and intellectual of human cognitive processes are modular, i.e. capable of being broken down into dissociable functional components. I read this thesis as compatible with a kind of Dennettian theory wherein there is no “general” place where it “all comes together”. There is simply a complex and messy “kludge” of functional components and subcomponents, which run their functions more or less independently of other processes (although, as I mentioned above, the output of one particular module can be the input to another, so there is still communication and interaction between different modules rather than the complete encapsulation normally assumed by modular stereotypes). However, it is important to note that Carruthers, as he should, argues that there does seem to be an exception to the normal independence of modules in the functioning of narratological and reflective consciousness in human adults. In this case, it seems necessary to talk about a more “global” neural interactivity (probably realized over the default mode network). But this is compatible with the overall thesis of massive modularity, since there is still an awful lot of domain-specific reactivity in the brain, particularly for prereflective cognition.
Even if a global consciousness function is not modular in the sense that mouse vision is modular, it doesn’t follow that there isn’t a massive amount of modularity in all animals, including humans.

I like the massive modularity thesis because it seems in accordance with the Jaynesian principle that what is to be found in higher-order cognitive processes must first be found in the lower-order cognitive processes, and the functionality of those higher-order cognitive processes doesn’t require a general theater where it “all comes together” to slowly evolve as a distinct neurological center. Rather, the higher-order processes come into being through exaptations and readaptations of previous modules, often buffered by mechanisms of neural plasticity. It is the multiple and widely distributed functionally reactive/modular networks of neurons that realize the higher-order processes, rather than some general-purpose CPU that does all the higher-order work in a fashion completely different from the lower-order networks, which make up the vast majority of neural tissue in the brain. As Jaynes says, there is nothing in reflective consciousness that was not first found in behavior. And as Carruthers argues, rather than suppose that the human mind is becoming less modular and “more general” as we increase our cognitive powers across evolutionary time, we should instead see the human mind as becoming more modular as it evolves, corresponding with the increase in the functional specificity of modern living in a complex social-political-technological world. The number of modules and submodules we need to automatically cope with everything from driving a car, navigating websites, taking tests, playing sports, constructing a skyscraper, programming a computer, farming, and hunting is truly astronomical in comparison with the functional specificity and developmental “niches” of other species. So, instead of massive modularity indicating biological primitiveness, supermassive modularity indicates supreme functional development on both the biological and sociological scale.


Filed under Philosophy, Psychology

Some thoughts on the conceptual coherence of "philosophical zombies"

The Zombie argument has always rubbed me the wrong way. This post will attempt to explain why. Let me first try and reconstruct the argument in my own words, with the intention of being fair to Chalmers’ underlying intuitions about consciousness.

For Chalmers, consciousness is the “what it is like” of an agent. He claims that he can only know for sure that he is conscious, that there is “something it is like” to be him. For every other conscious being, Chalmers thinks he can only infer that they are conscious based on third-person evidence, as opposed to having self-evident first-person knowledge of qualia states, the “qualitative” or “subjective” phenomena of having a perspective on the world, having a “phenomenal” world, etc. Chalmers thinks this phenomenon of “phenomenal consciousness” is philosophically very strange. It is the source of the famous mind-body problem, the basic idea that there are two general aspects known in experience, mental phenomena and physical phenomena. Mental phenomena are “things” like sensations, perceptions, beliefs, desires, imaginations, feelings, pains, and thoughts. For Chalmers it is important to distinguish two ways of understanding these mental phenomena. The first way is in terms of their roles in a causal-functional economy, of how they “do stuff” that is useful. This is what Chalmers calls “access consciousness”. It is “easy” to explain access consciousness neurologically, because we can make sense of functions in terms of causes, and we know how neurons cause things to happen in the brain and body. The second way of understanding mental phenomena is in terms of how there is a qualitative “something it is like” to be the subject of those sensations, beliefs, pains, etc. This is what Chalmers calls “phenomenal consciousness”. It is “hard” to explain psychologically in terms of functions or adaptive usefulness.

This is the central claim for Chalmers: he claims not to be able to conceive of any functional usefulness for this phenomenal consciousness. Imagine an atom-for-atom duplicate of your body in an alternate world. Chalmers thinks that he can coherently conceive of this duplicate as a “Zombie”, that is, a being who lacks phenomenal consciousness. If you prick a Zombie, he will yelp and move his arm back. If you asked him if it hurt, he could produce verbal behavior that describes in great detail what it is like to feel pain. He could even write philosophical essays about the distinction between phenomenal and access consciousness, and wax poetic about the great joy of sensing and experiencing the world from a first-person perspective. But the Zombie would not be conscious, in any way, even though it shared exactly the same set of 100 billion neurons, all arranged in the same chemical soup and organized in the exact same way. Imagine hooking up a conscious human and his Zombie to a brain scanner, and asking the conscious subject to report when he starts to mind-wander and ruminate to himself, about either the past, present, or future. The scan for the Zombie would look exactly the same, of course. And if you asked the Zombie to report any mind-wandering or self-conscious thinking, his report of the time of the conscious thought would be exactly the same as the conscious subject’s. In fact, if the conscious subject and the Zombie were physically identical, you could mix them up in the lab room and there would be, in principle, no way of telling one from the other.

This lack of coherent criteria for telling a Zombie from a conscious subject in a real-life setting should set off huge philosophical warning bells. Chalmers’ basic argument seems to be that since he cannot imagine any possible way of telling a functional story about the “qualitative” or “first-person” perspective, and thus about consciousness, it is necessary that physicalism is not true, since physicalism claims that mental phenomena are really just physical phenomena. Chalmers is a monist, but he thinks that “conscious” things (i.e. qualitative properties) exist in a fundamentally different way than physical things like the brain. Hence Chalmers is an old-fashioned dualist wrapped up in modern garb. He thinks that physical matter gives rise to two types of properties: physical properties and mental properties. And since these two properties have to be explained in fundamentally different ways, physicalism (the idea that only “physical, causal-functional stories” suffice to explain mental phenomena) is necessarily false.

I think the problem here is as follows. Chalmers leads us astray from the start when he articulates what needs explaining and what is philosophically interesting. For Chalmers, what is philosophically interesting is first-person experience, “what-it-is-likeness”. But this concept is never sufficiently defined or explained. He says something to the effect of “If you got to ask what it is, you ain’t never going to know.” This, of course, doesn’t satisfy me at all. First, I want to know just what this first-person experience is. What concept of person are we working under? Is a coma patient a subject of experience? Why not? It seems perfectly conceivable that there is “something it is like” to be a coma patient insofar as she is “living” in the world and interacting with it according to the individual idiosyncrasies of her still vegetatively working brain. Surely, on pains of conceptual parsimony, the coma patient’s unconscious mind has privileged, first-person access to the mental content that is the unconscious mental phenomena, such as the processing and manipulation of information streaming in from the total environmental envelope, even if on a dim and vegetative scale. It makes perfect sense to say that the coma patient’s brain is most assuredly processing auditory data unconsciously, and this constitutes an instance of a mental phenomenon under almost every modern definition of mental phenomena. And since mental phenomena are defined by Chalmers as having a qualitative component, then, on pains of contradiction, we are forced to conclude that “there is something it is like to be unconscious”.

And if this is the case, then the conceptual usefulness of defining what needs explaining about minds purely in terms of phenomenal consciousness looks doubtful, for the problem is this: we all know intuitively that there is a huge “mental” difference between a coma patient and a fully awake, linguistically competent human. Yet in terms of Chalmers’ own conceptual framework, there is not a fundamental constitutive difference in phenomenal consciousness between the coma patient and the adult human, only a difference in degree of “phenomenal dimness”. If you doubt this, ask yourself: what is the difference in epistemic access between the mind of a bat and the mind of an unconscious coma patient? If we are allowed to posit that there is “something it is like” to be a bat even when we cannot ask it if it is conscious, then why are we not allowed to posit that there is “something it is like” to be a coma patient? In both cases, you cannot ask the agent if it is conscious. We can either claim that the criterion of consciousness is reportability, or we are stuck wondering whether the bat or the coma patient is conscious. But rather than claiming the bat is probably conscious and we just can’t know it for sure, I think we should claim that the bat is not conscious simply because it doesn’t have the cognitive acumen to be meta-conscious of its first-order awareness in such a way as to be able to report on those states, either internally in thought or externally in verbal behavior.

So what is the fundamental difference between the coma patient and adult human? I claim that it is the difference between nonconscious reactivity and the operation of consciousness proper. Phenomenal consciousness as a concept is less interesting to me precisely because it isn’t useful for distinguishing humans from nonhuman animals, nor infants and coma patients from linguistically competent and verbally/intentionally responsive mentalities. And as a researcher of the mind, I think the differences between humans and nonhumans are far more psychologically and philosophically interesting than the similarities.

For starters, humans are a literate species, and one immersed in language, symbols, culture, ritual, and artificial constructions to a degree off the charts in comparison to nonhuman animals. This has huge effects on the development of the brain and the potentiality for new forms of narratological subjectivity. Moreover, the long period of time before sexual maturation in humans allows for a greater plasticity and capacity for adapting to changing environments than in any other animal. And it is not the volume of frontal matter that distinguishes us from apes when body size is controlled for; it is connective fibers, the neural tissue most open to the effects of plasticity.

There is “something it is like” to be transported into an imaginary world while reading a book. I would be greatly surprised if any nonhuman animal were capable of experiencing such forms of subjectivity. Chalmers greatly underestimates the qualitative differences between brains competent in language and those restricted only to nonverbal mentalities (with verbal cognition referring to at least some natural capacity for understanding symbolic signs). There is also something it is like to “talk to oneself”, to tell oneself what to do, to initiate action through the slow deliberation of “inner speech”. These inner-speech mechanisms are directly tied into our autobiographical memory, and help constitute our sense of conscious identity, our explicit knowledge about who we are, where we came from, what we believe, what we desire, what it is like to be us. Paraphrasing Julian Jaynes, you cannot be conscious of what you are not conscious of. For if you could not remember that you were conscious at time T, in what sense could you ever consciously know that you were conscious at time T? There is a fundamental link between autobiographical memory, the capacity to self-report, and consciousness that cannot be explained in terms of Chalmers’ distinction between “easy problems” and “hard problems”.

This is the only way to make sense of philosophers like Robert Brandom or Dan Dennett, who claim that consciousness constitutively depends on the capacity to report and to be meta-aware that you are conscious. In a very real sense, what we all intuitively understand to be philosophically interesting, that which separates conscious adults from coma patients, fetuses, and birds, is not the bare capacity to experience the world from a first-person view, since even coma patients are still persons in an absolutely minimal sense of bodily self-consciousness. No, what’s philosophically interesting is not awareness of the environment — something shared with earthworms, as Darwin demonstrated — but awareness of your awareness of the environment, in terms of inferentially linked concepts like “sensation”, “belief”, “desire”, “meta-awareness”, “representation”, “perception”, “memory”, “thinking”, “I”, “me”, “mine”, “soul”, “mind”, “consciousness”.

As it turns out then, consciousness depends on the concept of consciousness being active within the mental economy of the conscious subject. We can thus distinguish between the nonconscious first-person subjectivity shared by all organisms and a conscious first-person subjectivity dependent on the capacity to be meta-aware that you are aware, and capable of reporting on past instances of awareness in terms of narratologically structured and inferentially linked concepts learned in childhood through exposure to intersubjective linguistic stimulation.


Filed under Consciousness, Philosophy, Psychology

New paper published: Consciousness, Plasticity, and Connectomics

The paper I co-authored with Micah Allen is finally out! It is published in the open-access journal Frontiers in Psychology (special topic issue on neural plasticity and consciousness, in the subsection Frontiers in Consciousness Research). Download it for free here:

Consciousness, Plasticity, and Connectomics: The Role of Intersubjectivity in Human Cognition

The paper is a hypothesis and theory article, meaning that we develop a new operational definition of consciousness in addition to postulating novel hypotheses about the neural substrate of consciousness. The paper is a synthesis of diverse research traditions in the field of consciousness studies. We borrow equally from sensorimotor enactivists like Alva Noe and Evan Thompson, “Global workspace” theorists like Bernard Baars, higher-order theorists like Rosenthal, Lycan, and Armstrong, social constructivists in the tradition of Vygotsky, and recent developments in the study of “mind wandering” and “meta-awareness” in the cognitive neurosciences. We take the best of all approaches and discard the worst.

How is this paper different from all the other articles on consciousness being published today? Besides our novel theoretical synthesis of diverse research traditions, we also take the time to map out a comprehensive mental taxonomy based on both phenomenological and empirical evidence. We also take the time to define exactly what we mean by the term “consciousness”. Our most basic idea is that there is a difference between prereflective and reflective consciousness. We claim that almost all animals are restricted to prereflective consciousness, whereas language-using adult humans are capable of this mentality plus reflective consciousness. Here is a table showing the qualitative differences between prereflective and reflective consciousness:


[Table: qualitative differences between prereflective and reflective consciousness]

We contend that the prevailing theoretical spectrum in consciousness studies has often conflated these two phenomena and/or focused on one at the expense of the other. For example, we think that the Higher-order Representation (HOR) theorists have been trying to use reflective consciousness to explain prereflective consciousness, the “what-it-is-like” of an organism. In contrast to the higher-order theorists, we think that there are phenomenal feels (“what-it-is-likeness” or “qualia”) independently of whether any higher-order representations are active in the brain. So although the HOR people are definitely on the right track insofar as they are interested in meta-awareness (rather than just awareness), we think they have been barking up the wrong tree in explaining “what-it-is-likeness” in terms of HORs. Micah and I contend that what-it-is-likeness is shared by all living organisms insofar as they have organized and unitary bodies. This mind-in-life thesis is taken directly from the enactivist sensorimotor tradition.

However, in contrast to the enactivist tradition, we don’t think that sensorimotor connectivity exhausts the phenomenon of consciousness. In fact, we believe that an overemphasis on embodied sensorimotor connectivity is likely to overlook or downplay the significance of reflective consciousness, which we argue is grounded in language and learned through exposure to narrative practice in childhood. We contend that HORs, although not the origin of what-it-is-likeness, do significantly change the phenomenal quality of what-it-is-likeness, giving rise to new forms of narratological subjectivity. As I mentioned in a previous post, there is good reason to believe that reflective consciousness gives rise to entirely new forms of phenomenal feeling, such as sensory qualia (e.g. the experience of gazing at a pure red patch). Conscious pain itself could plausibly be seen as a side effect of reflective consciousness feeding back into prereflective consciousness, allowing for conscious suffering (meta-awareness of pain). In this respect, we think that the HOR theorists are perfectly right to insist that meta-awareness or meta-consciousness of lower-order mental states allows for the emergence of special forms of subjectivity. However, we side with HOR theorists like Peter Carruthers (and against van Gulick) in arguing that this meta-consciousness is not widespread in the animal kingdom, and is perhaps restricted to those animals capable of language. As Andy Clark says,

“[T]hinking about thinking” is a good candidate for a distinctively human capacity – one not evidently shared by the non-language using animals that share our planet. Thus, it is natural to wonder whether this might be an entire species of thought in which language plays the generative role – a species of thought that is not just reflected in (or extended by) our use of words but is directly dependent on language for its very existence. (1997, p. 209)

So the philosophical significance of our paper lies in our synthesis of Higher-order Representationalism with sensorimotor theories of consciousness. Moreover, we synthesize HOR theory with Global Workspace Theory and Dan Hutto’s Narrative Practice Hypothesis, which emphasizes the importance of embodied narrative learning as the substrate for complex folk-psychological attitudes and social cognitive processing.

But this is just the philosophical significance of the paper. There is also empirical significance. Micah developed a novel understanding of the “Default Mode Network” and synthesized a great deal of current data in the cognitive neurosciences in terms of our distinction between prereflective and reflective consciousness. The devil is in the details here, so I highly recommend reading the paper for a full overview of its empirical novelty. Needless to say, we feel that our paper marks a theoretical breakthrough on both philosophical and empirical fronts. Our theory of consciousness is complex and multifaceted, which is appropriate given the target of what we are trying to explain.


Filed under Consciousness