Monthly Archives: March 2012

Reason Is a Tool

I’m taking a seminar on Derek Parfit’s new book On What Matters this semester. In it, he defends an Objectivist account of reasons. Roughly stated, this view claims that the normative force of reasons concerning our attitudes towards an object stems not from our subjective desires concerning that object, but from the nature of the object itself. In contrast, a Subjectivist account of reasons says that the normative force of reasoning comes from our subjective preferences. The way Parfit sets up the debate, most metaphysical naturalists and empirically minded moral psychologists essentially accept a Subjectivist account of reasons, and they do this based on evolutionary considerations. This empirically minded Subjectivist tradition stretches back at least to Hume, who said that “Reason is, and ought only to be the slave of the passions”. This tradition has tried to argue that pretty much all reasoning is a matter of post-hoc rationalization for prior emotional convictions. It has recently been taken up by people like Jonathan Haidt, who has argued that Reason is the tail being wagged by the emotional dog, and not the other way around.

Most people in the seminar are very skeptical of Parfit’s Non-Naturalist Objectivism, which makes Reason out to be a very mysterious and spooky thing that seems to have magical normative powers to compel us to act in certain ways. Most people in the seminar, including the professor (Julia Driver), seem to basically accept the standard Humean account of Reason as the best and least mysterious account on the market.

Personally, I do not believe the Humean story. I do not think that Reason is the tail and the emotions are the dog wagging that tail. I think the slave-master metaphor is a bad one, because both Reason and Emotion are masters in their own way. Let me explain. First of all, in order to tell my story of Reason, I need to make a distinction between what I will call instrumental rationality and Human rationality (for lack of a better term, though I think you could substitute conscious here just fine). Instrumental rationality is the rationality we share with all nonhuman animals in virtue of being the types of entities who have survival programmed into their genome. So if I am starving, it is instrumentally rational to eat some food. If I am being attacked by a wild boar, it is instrumentally rational to try to defend myself. So far, so good. Instrumental rationality is not very mysterious, and its normativity is fully compatible with a Subjectivist, Humean account.

However, I do not think Human rationality (or at least human-typical rationality) operates according to the same normative logic. I think there is a different normative structure in operation that governs the rationality of Human Reason. So this leads to a natural and obvious question: What is Human Reason? I propose an answer: a tool. Human Reason is a tool that is a product of cultural evolution, in the same exact way that Dan Everett has recently (and convincingly, imo) argued that language is a tool, in the same way that a bow and arrow is a tool. We do not grow the ability to make bows and arrows, we learn how to make them. Likewise, we learn language. And similarly, I am claiming, we learn to be Rational.

If Human Reason is a cultural tool, then it is going to operate according to a different evolutionary logic than instrumental rationality. I see no reason why we should apply the Subjectivist story about instrumental rationality to Human Reason. They are simply very different things, although of course Human Reason bidirectionally interacts with instrumental rationality in very complex ways. I believe the story I am telling about Human Reason vs instrumental rationality is more or less compatible with modern dual-process accounts of reason. On dual-process theory, there are basically two different reasoning systems in humans: System 1, which is evolutionarily ancient and shared with nonhuman animals, and System 2, which is evolutionarily recent and likely unique to humans. My particular claim is that the reason why System 2 is evolutionarily recent is that it is a product of cultural evolution. Being a Jaynesian, I believe that Human Reason was “invented” through the mechanisms of cultural evolution very recently, perhaps within the last 10,000 years.

Moreover, I believe that philosophy as a cultural practice represents the loftiest instantiation of Reason as a tool. When humans invented the practice of philosophy, we developed a cognitive toolbox that opened up new vistas for human development. Indeed, natural philosophy itself eventually transformed into perhaps the most powerful tool of all: modern science. Science is the ultimate extension of Human Reason as a toolkit. It allows us unprecedented control over our environment. It allows us to, for example, surf the internet on our tablet computers while someone else (hopefully) is driving a car that is being guided by GPS satellites. Science as a tool also allows complex feedback loops with instrumental rationality in virtue of the development of medicine as a means to prolong and maintain our biological health.


I started this post with a brief overview of the debate between Objectivists and Subjectivists about Reason. I rejected Parfit’s Non-Naturalist Objectivism because it makes Reason out to be a spooky, magical thing. But I also rejected Subjectivism for inappropriately applying the normative logic of instrumental rationality to Human Rationality. The normative structure of Human Rationality is closer to Objectivism. However, I offered a cultural explanation for the origin of Human Rationality. Human Reason is a tool, in the same way that a bow and arrow is a tool. Just as there is (probably) no unique gene for making a bow and arrow, there is no unique gene for Human Reason. It is a social construction. Which isn’t to say that there are no particular neural dispositions underlying our capacity to learn Human Reason that have a definite genetic basis. To say that Human Reason is a tool is to say that our brains do not grow the capacity for Human Reason, but learn it. For me this is essentially an optimistic picture, for it flips the depressing story about the dog and its tail around. Although emotion is certainly a force to be reckoned with, so is Human Reason when properly wielded. Not constrained to the evolutionary logic of spreading genes, Human Reason can allow humans to rise above the selfish programming of genetic evolution and strive for decision making based on the application of principles that we have given to ourselves in virtue of our capacity to step back and think about what we ought to do. This gives me great hope, for it means essentially that Reason is not and ought not to be the slave of the passions; they are both masters in their own way.


Filed under Consciousness, Philosophy, Psychology

Some Thoughts on Christof Koch's New Book and the Neuronal Correlates of Consciousness

I’m reading Christof Koch’s new book Consciousness: Confessions of a Romantic Reductionist and wanted to put some thoughts down in writing in order to get clearer about what exactly is going on with Koch’s understanding of consciousness. Koch is famously interested in the neuronal correlates of consciousness. First, what does Koch mean by consciousness? He uses a mix of four different definitions:

1. “A commonsense definition equates consciousness with our inner, mental life.”

2. “A behavioral definition of consciousness is a checklist of actions or behaviors that would certify as conscious any organism that could do one or more of them.”

3. “A neuronal definition of consciousness specifies the minimal physiologic mechanisms required for any one conscious sensation.”

4. A philosophical definition, “consciousness is what it is like to feel something.”

I have the sneaking suspicion that Koch can’t possibly be talking about the last definition, phenomenal consciousness. Why? Because he says things like “The neural correlates of consciousness must include neurons in the prefrontal cortex”. So on Koch’s view, phenomenal content is a high-level phenomenon that is not produced when there is only lower-level activity in the primary visual cortex.

To support this view, Koch describes the work of Logothetis and the binocular rivalry experiments in monkeys. In these experiments, monkeys are trained to pull a different lever depending on whether they see a starburst pattern or a flag pattern. The researchers then projected a different image to each eye to induce binocular rivalry.

“Logothetis then lowered fine wires into the monkey’s cortex while the trained animal was in the binocular rivalry setup. In the primary visual cortex and nearby regions, he found only a handful of neurons that weakly modulated their response in keeping with the monkey’s percept. The majority fired with little regard to the image the animal saw. When the monkey signaled one percept, legions of neurons in the primary visual cortex responded strongly to the suppressed image that the animal was not seeing. This result is fully in line with Francis’s and my hypothesis that the primary visual cortex is not accessible to consciousness.”

I think this line of thinking is greatly confused if it is supposed to be an account of the origin of qualia or phenomenal content. First of all, it’s not clear that we can rule out the existence of phenomenal content in very simple organisms that lack nervous systems, let alone prefrontal cortices. Is there something-it-is-like to be a slug, or an amoeba? I don’t see how we can rule this out a priori. This puts pressure on Koch’s claim that what he is talking about is the origin of qualia. I think Koch is talking about something else. What I actually think the Logothetis experiments are getting at is the neural correlates of complex discrimination and reporting, which produce new forms of (reportable) subjectivity.

For example, let’s imagine that we remove the monkey’s higher-order regions so that there is just the primary visual cortex responding to the stimuli. How can we rule out the possibility that there is something-it-is-like for the monkey to have its primary visual cortex respond? I don’t see how we can possibly do this. Notice that in the original training scenario the only way to know for sure that the monkeys see the different images is for the monkeys to “report” by pulling a lever. This is a kind of behavioral discrimination. But how do we know there is nothing-it-is-like to “see” the stimuli but not report? This is why I don’t think Koch should be appealing to the philosophical definition of phenomenal consciousness. It’s too slippery a concept and can be applied to a creature even in the absence of behavioral discrimination, for we can always coherently ask, “How do you know for sure that there is nothing-it-is-like for the monkey when it does not behaviorally discriminate the stimuli?”

The fact that Koch relies so closely on the possibility of reporting conscious percepts indicates he cannot be talking about phenomenal consciousness, because we have no principled way to rule out the presence of phenomenal consciousness in the absence of reporting. And this is especially true if we are willing to ascribe phenomenal consciousness to very simple creatures that don’t have the kind of higher-order cortical capacities that Koch thinks are necessary for consciousness. Koch seems to admit this, because he very briefly mentions the possibility of there being “protoconsciousness” in single-celled bacteria, but doesn’t dwell on the implications this would have for his quest to find the “origin of qualia” in higher-order neuronal processes. If there is protoconsciousness or protoqualia in single-celled bacteria, then the brain would not be the producer of qualia, but only the great modifier of qualia. If bacteria are phenomenally conscious, then the brain cannot be the origin of phenomenal content, but only a way to produce ever more complex phenomenal content. Accordingly, the Logothetis experiments don’t show that higher-order brain areas are necessary for phenomenal content, but only for phenomenal content of a particular kind. The experiments show instead that higher-order brain regions are necessary for the phenomenal content of complex behavioral discrimination.

Let me explain. A bacterium is capable of very basic perceptual discrimination. For example, it can discriminate the presence of sugar in a petri dish. But this is not a very complex kind of discrimination in comparison to the discrimination being done by the monkey when it pulls a lever in the presence of a flag stimulus. The causal chain of mediation is much more complex in the monkey than it is in the bacterium. On this view, phenomenal content comes in degrees. It is present in bacteria to a very low degree. It is present to a higher degree in flies, worms, and monkeys. I believe it is even present in completely comatose patients (I at least see no way to rule this possibility out), but to a very low degree. And it’s higher in vegetative patients, higher still in minimally conscious patients, of course super-high in fully awake mammals like primates, and extraordinarily high in fully awake adult humans.

So what I think Koch’s NCC approach is doing is finding the neural correlates of highly complex forms of discrimination and reporting. Koch and Crick define the neural correlates of consciousness as “the minimal neural mechanisms jointly sufficient for any one specific conscious percept”. If we understand “conscious” here in terms of phenomenal consciousness, then I think the NCC approach does no such thing. Rather, the NCC specifies the minimal neural mechanisms for a conscious percept that is reportable. These are hugely different things. But this doesn’t mean that Koch is completely misguided in his quest to find the NCC for conscious percepts that are reportable (Bernard Baars actually defines consciousness in exactly this way). Since the ability to intelligently report is critical to our ability to act in the world, finding the NCC of percepts that can be reported will still be highly useful in coming up with diagnostic criteria for minimally conscious patients. On my terminology, though, “minimally conscious patients” cannot really mean minimally phenomenally conscious, since that would imply that there is nothing-it-is-like to be in a vegetative state (which we can’t conclusively rule out). Instead, we should understand it as “minimally capable of high-level report”, with report being understood very broadly to mean not just verbal report, but any kind of meaningful discrimination and responsiveness. And as I tried to make clear in my last post, the ability to report on your phenomenal states is very much capable of modifying phenomenality in such a way as to give rise to new forms of subjectivity, what I call “sensory gazing”.

I therefore think we should drop the quest to find the neural correlates of phenomenal consciousness. Of the four definitions that Koch uses, he should give up on the fourth, because phenomenal consciousness is just too slippery to be useful in distinguishing coma patients from minimally responsive patients, or in understanding what’s going on in the binocular rivalry cases. So when Koch says “Francis and I proposed that a critical component of any neural correlate of consciousness is the long-distance, reciprocal connections between higher-order sensory regions, located in the back of the cerebral cortex, and the planning and decision-making regions of the prefrontal cortex, located in the front”, he can’t possibly be talking about phenomenal consciousness so long as we cannot conclusively rule out the possibility of protoconsciousness in bacteria. What I actually think Koch is homing in on is the neural correlates of reflective consciousness. And it’s perfectly coherent to talk about simple forms of reflective consciousness that are present in monkeys and other mammals. Reflective here could simply mean “downstream from primary sensorimotor processing”. Uniquely human self-reflection and mind-wandering could then be understood in terms of an amplification and redeployment of these reflective circuits for new, culturally modified purposes (think of how reading circuitry in humans is built out of other more basic circuits). It would make sense that any human-unique circuitry would be built out of preexisting circuitry that we share with other primates (cf. Michael Anderson’s massive redeployment hypothesis). And the impact of language on these reflective circuits would certainly modify them enough to account for human-typical cognitive capacities. The point then is that we can account for Koch’s findings without supposing that he is talking about the origin of qualia.


Having read more of the book, it’s only fair that I amend my interpretation of Koch’s theory. Following Giulio Tononi’s theory of Integrated Information, Koch seems to espouse a kind of panpsychism, and admits that even bacteria might have a very, very dim kind of phenomenal experience. So he doesn’t seem to ultimately think that higher-order brain processes are the origin of qualia, which directly contradicts some of the things he says earlier in the book. This is very confusing in light of what he says about binocular rivalry and other phenomena. He even seems to think that a mote of dust or a piece of dirt has a dim sliver of phenomenal experience. Although this is an intriguing hypothesis (and it seems to be at least logically possible), it only confirms my opinion that if phenomenal consciousness is an intelligible property at all, it is not a very useful one for doing cognitive science, since it can be applied to almost anything on certain definitions. Personally, I think that if we are going to make sense of qualia at all (and I’m not sure we ever will), it will have to be the type of property that “arises” (whatever that means) in living organisms, but not in inorganic entities.


Filed under Consciousness, Philosophy, Psychology