Monthly Archives: August 2011

Classes I'm taking at Wash U this Fall, and also some thoughts on why humans are so damned religious

School starts on Tuesday and my levels of anticipation for this semester are highly elevated. I’m on fellowship for the first year, so I don’t have any TA responsibilities, but I do have to take four classes a semester. This Fall I am taking:

  • Required proseminar for first-years with Gillian Russell
    • We are going to be reading Scott Soames’ two-volume Philosophical Analysis in the Twentieth Century. Based on my brief reading, the text seems like very high-quality philosophy. I’m a little intimidated by the mathematical logic, though.
  • Advanced Metaphysics with Roy Sorensen
    • Course description: “Through readings from both classical and contemporary sources, a single traditional metaphysical concern will be made the subject of careful and detailed analytic attention. Possible topics include such concepts as substance, category, cause, identity, reality, and possibility, and such positions as metaphysical realism, idealism, materialism, relativism, and irrealism.”
  • Agency, Metacognition, and Control: a PNP seminar with Larry Jacoby and Carl Craver
    • Course description: “This seminar will be organized around philosophical and psychological readings pertaining to agency, intentional action, and metacognition. The philosophical readings will be concerned with the nature of human agency, self-knowledge, and the capacity to form second-order desires. The psychological readings will be drawn from research concerned with the distinction between automatic and controlled behavior, illusions of conscious will, attribution, and metacognition (or higher order thought). The goal of the seminar is to promote interdisciplinary communication.”
  • Varieties of dissociation: a PNP seminar with Liz Schechter
    • Course description: “In this course we will examine some varieties of dissociation, as they occur in syndromes and disorders like dissociative identity disorder, schizophrenia, the split-brain phenomenon, anarchic and alien hand syndromes, and blindsight, to see what such phenomena can tell us about the architecture of mind and its unity (or disunity). We will also look at issues surrounding dissociations in cognitive neuropsychology: the role of double dissociations in particular, and whether evidence of different lesion sites is necessary to infer them, and, more generally, what findings of specific impairments following brain injury can tell us about the unimpaired brain.”

    So yeah, this semester is going to be awesome. I look forward to working on full-length research papers again. I haven’t really done any major philosophical thinking since completing my Master’s thesis, so it will be nice to produce some real papers for conferences and possibly more publications. I will need to start thinking about my first qualifying paper next summer. I still enjoy writing blog posts more than writing papers though. I hate citing stuff because I am lazy and just like writing from memory. I also like the length constraints of a blog post. I usually try to stick to around 1,000-1,500 words for my blog posts, since I think that’s enough room to make one simple point without losing people’s attention. Also, being able to write 1,000 words in one sitting is a good skill to have, since it turns a 3,000-word research paper into three good sit-downs plus time for major editing. Breaking a research paper into 1,000-word segments helps me not feel anxious when I first open a blank document. This is why I highly recommend blogging for all academics. It’s a cliché, but the more you write, the easier writing becomes. Stringing sentences together by tapping on a keyboard is a skill like any other, and it improves with practice. That ease applies to the act of writing, not necessarily to the intellectual content, though that too should, ideally, rise steadily in quality over time as you grow as a scholar.

    The one downside to starting classes is that I will no longer be able to fully control what I read, an immense pleasure for me. Though I will undoubtedly be reading some cool stuff, I imagine I will be finishing books at a slower pace now that I have required reading. I have a contest with myself for reading books each year. This year I’m already at 58. I hope to get to at least 75 by the New Year, which I believe will be a personal best. This year I’ve read some awesome books, both fiction and nonfiction. For fiction, the highlight was definitely DFW’s Infinite Jest. For nonfiction, it’s hard to say; nothing really stands out in the way Infinite Jest does. But I have read some really interesting psychology so far this year. Nothing has been mindblowing or paradigm-shifting in the way Jaynes’ Origin of Consciousness was back in the summer of 2009.

    I remember distinctly when I became a Jaynesian. I was on a cruise with Katie, and had brought along Origin of Consciousness, and was reading it on the pool-deck and white-sand beaches. I was highly skeptical going into the book, since I had read over and over that Jaynes was considered a crackpot and outside the mainstream of academic opinion. By the time I was halfway through, I was a total convert. Almost at once I saw the stunning theoretical elegance of his theory of how religion got started, and my mind started reeling. It united in my head so many disparate strands of research, both philosophical and scientific. It is easily the most important intellectual synthesis since Darwin. Darwin showed us where our bodies came from, but Jaynes showed us where the human mind came from, religious quirks and all. Why does our species hallucinate a spiritual realm filled with authoritative entities? Why do we bury the dead in the way we do, and why do we sacrifice to the gods? Why did humans once have a more direct line to the spiritual world, but eventually lose contact except through singular hyperreligious individuals? Why do normal people pray to gods indirectly while hyperreligious people hear the gods talking to them directly, as if thoughts were being beamed into their brains? Why did almost all ancient humans treat the newly dead as if they were still in need of items only useful to the living? Why did idolatry become so rampant after humans lost direct contact with gods? Why did oracles and prophets arise in the wake of our losing contact with the will of the gods?

    Jaynes’ theory powerfully accounts for all of these phenomena and more. Other theories of religion are far too simplistic in their proposed mechanisms, e.g. an “over-active agency detector”. Jaynes’ theory is much more concrete in its explanation of why humans have such a thick religious history. An agency detector? That only explains seeing faces in the clouds or getting spooked by the wind whipping up a tree. It does not explain the hallucinations. Religious scholars are reluctant to call experiences of gods what they really are, and instead say only that religious people suffer from a delusion of belief. But where did this delusion come from? Who was the first person to have such a delusion? For Jaynesian theory, the root of all religious delusion is the hallucination of voices speaking to you. We know this is a vestigial feature of something that was once beneficial because classic schizophrenia has a strong genetic component, and yet it is highly damaging to reproductive fitness and has not been bred out of the population. So either hallucinations were an adaptive trait or a side-effect of something else that was adaptive. Jaynes thought it was a side-effect of humans gaining verbal communication.

    But once in place, the side-effect turned out to have great adaptive benefit. Scholars are now forming a consensus that religiosity was in place long before the rise of civilization and was its impetus, so we must conclude that highly religious communities of hominids were more successful than nonreligious communities. The success comes partly from the fact that the religious humans were verbal humans, and language use vastly increases intelligence, for it aids in the categorization and thus understanding of reality. With better understanding comes better control and flexibility, and with better control comes the ability to adapt to novel environments. But the “side-effect” of religion tapped into the powerful cognitive algorithms of the temporal cortex. The “bicameral” mind is something like the unconscious ancestor of the modern savants: amazing calendrical skills, literally god-like synthesis of novel information, far-flung predictions of seasons and other rhythmical patterns. The bicameral mind, in other words, gave birth to civilization. This is why ancient Neolithic communities were all centrally organized around the temples, the houses of god. It’s why the god-kings and gods held absolute sway over the people’s minds. The god-inspired despot truly dominated. This was because humans had not yet developed the self-consciousness necessary to have a rational dialogue with the gods that controlled society through hierarchically structured hallucinations. But with self-consciousness came philosophy, and with philosophy came reflection, and with reflection humans realized that the gods were projections of human cognitive machinery, a literal remnant of our ancient and primitive past.


    Filed under Consciousness, Random, Theology

    Noncomputational Representation

    I’ve been thinking about representations a lot lately. More specifically, I have been thinking about the possibility of noncomputational representation. At first blush, this sounds strange, because representationalism has long been intimately connected with the Computational Theory of Mind, which basically says that the brain is some kind of computer, and that cognition is most basically the manipulation of abstract quasi-linguaform representations by means of a low-level syntactic realizer base. I’ve never been quite sure how this is supposed to work, but the gist of it is captured by the software/hardware distinction. The mind is the software of the computing brain. Representations, in virtue of their supposed quasi-linguaform nature, are often thought of in terms of propositions. For a brain to know that P, it must have a representation or belief to the effect that P. As it commonly goes: computation is knowledge, knowledge is representational, the brain represents, the brain is a computer.

    But in this post I want to explore the idea of noncomputational representation. The basic question is whether we can say that the brain traffics in representations even though it is not a computer: if the brain is not a computer, does it still represent things? If so, how, and in what sense? Following Jeff Hawkins, I think it is plausible to suppose that the brain is not a digital computer. But if the brain is not computing like a computer in order to be so intelligent, what is it doing? Hawkins thinks that the secret of the brain’s intelligence is the neocortex. He thinks that the neocortex is basically a massive memory-prediction machine. Through experience, patterns and regularities in the world flow into the nervous system and are registered as neural patterns and regularities. These patterns are then stored in the brain’s neocortex as memory. It is a well-known fact that cortical memories are “stored” in the same place where they were originally taken in and processed.

    How is this possible? Hawkins’ idea is that the reason why we see memory as being “stored” in the original cortical areas is that the function of storing patterns is to aid in the prediction of future patterns. As we experience the world, the sensory details change based on things like our perspective. Take my knowledge of where my chair is in my office. After experiencing this chair from various positions in the room, I now have a memory of where the chair is in relation to the room, and I have a memory of where the room is in relation to the house, and the house in relation to the neighborhood, and the neighborhood to the city, and so on. In terms of the chair, what the memory allows me to do is to “know” things about the chair which are independent of my perspective. I can look at the chair from any perspective and recognize that it is my chair, despite each sensory profile displaying totally different patterns. How is this possible? Hawkins’ idea is that the neocortex creates an invariant representation of the chair which is based on the integration of lower-level information into a higher-order representation.

    What does it mean to create an invariant representation? The basic idea here can be illustrated in terms of how information flows into and around the cortex. At the lowest levels, the patterns and regularities of my sensory experience of the chair are broken up into scattered and modality-specific information. The processing at the lowest levels is carried out by the lowest neocortical layers. Each small region in these layers has a receptive field that is very narrow and specific, such as firing only when a line sweeps across a tiny upper-right quadrant in the visual field. And of course, when the information comes into the brain it is processed by contralateral cortical areas, with the right lower cortical layers only responding to information streaming in from the left visual field, and vice-versa. As the modality-specific and feature-oriented information flows up the cortical hierarchy, the receptive fields of the cells become broader and their firing patterns more stable. Whereas the lower cortical areas only respond to low-level details of the chair, the higher cortical areas stay active while in the presence of the chair under any experiential condition. These higher cortical areas can thus be said to have created an invariant representation of the patterns and regularities which are specific to the chair. The brain is able to create these representations because the world actually is patterned and regular, and the brain is responding to this.
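To make the idea of broadening receptive fields concrete, here is a toy sketch in Python. It is my own illustration, not Hawkins’ actual model: each layer simply pools over a small window of the layer below, so a unit at the top stays active across many different low-level input patterns.

```python
def pool_layer(pattern, width=2):
    """A higher-level unit fires if any unit in its window of the
    layer below fires, so its receptive field is broader."""
    return [int(any(pattern[i:i + width]))
            for i in range(0, len(pattern), width)]

def hierarchy(pattern, levels=3):
    """Send the input up the hierarchy; return activity at every level."""
    activity = [pattern]
    for _ in range(levels):
        activity.append(pool_layer(activity[-1]))
    return activity

# Two very different low-level "views" of the same chair:
view_a = [1, 0, 0, 0, 0, 0, 0, 0]
view_b = [0, 0, 0, 0, 0, 0, 1, 0]
# Low in the hierarchy the activity patterns differ entirely,
# but both drive the same top-level unit: an "invariant representation".
```

In this sketch, `hierarchy(view_a)` and `hierarchy(view_b)` disagree at every lower level yet converge to the same single active unit at the top, which is the sense in which the higher cortical areas stay active for the chair under any experiential condition.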

    So what is the cash value of these invariant representations? To understand this, you have to understand how once the information flows to the “top” of the hierarchy (ending in the hippocampus, forming long-term memories), it flows back down to the bottom. Neuroanatomists have long known that 90% of the connections at the lower cortical layers are streaming in from the “top”, and not the “bottom”. In other words, there is a massive amount of feedback from the higher levels into the lower levels. Hawkins’ idea is that this feedback is the physical instantiation of the invariant representations aiding in prediction. Because my brain has stored a memory/representation of what the chair is “really” like abstracted from particular sensory presentations, I am able to predict where the chair will be before I even walk into the room. However, if I walked into the room and the chair was on the ceiling, I would be shocked, because I have nothing in my memory about my chair, or any chair, ever being on the ceiling. Except I might have a memory about people pulling pranks by nailing furniture to ceilings, so after some shock, I would “re-understand” my expectations about future perceptions of chairs, being less surprised next time I see my chair on the ceiling.

    Hawkins thinks that it is this relation between having a good memory and the ability to predict the future based on that memory which is at the heart of intelligence. In the case of memories flowing down to the sensory cortices, the “prediction” concerns what future patterns of sensory activity will be like. For example, the brain learns Sensory Pattern A and creates a memory of this pattern throughout the cortical hierarchy. The most invariant representation in the hierarchy flows down to the lower sensory areas and fires Pattern A again, based on the memory-based prediction about when the brain will experience Pattern A again. If the memory-based prediction was accurate, the incoming pattern will match Pattern A, and the memory will be confirmed and strengthened. If the pattern that comes in is actually Pattern B, then the prediction will be incongruous with the incoming information. This will cause the new pattern to shoot up the hierarchy to form a new memory, which then feeds back down to make predictions about future sensory experience. In the case of predictions flowing down into the motor cortices, the “predictions” are really motor commands. If I predict that when I walk into my office and turn right I will see my chair, and the prediction is in the form of a motor command, the prediction will actually make itself come true, provided the chair is where the brain predicted it would be. Predictive motor commands are confirmed when the prediction is accurate, and disconfirmed when it is not.
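The confirm-or-revise loop described above can be sketched in a few lines of Python. This is a deliberately minimal toy of my own, not Hawkins’ implementation: a stored memory serves as the prediction for a context, a match strengthens it, and a mismatch sends the surprising pattern up to be stored as the new memory.

```python
class MemoryPrediction:
    """Toy memory-prediction loop: predict from memory, then
    strengthen on a match or revise on a surprise."""

    def __init__(self):
        self.memory = {}  # context -> (predicted pattern, strength)

    def step(self, context, incoming):
        predicted, strength = self.memory.get(context, (None, 0))
        if predicted == incoming:
            # Prediction confirmed: the memory is strengthened.
            self.memory[context] = (predicted, strength + 1)
            return "confirmed"
        # Prediction violated (or no memory yet): the novel pattern
        # "shoots up the hierarchy" and becomes the new memory.
        self.memory[context] = (incoming, 1)
        return "surprise"

mp = MemoryPrediction()
mp.step("my office", "chair on floor")    # first visit: surprise
mp.step("my office", "chair on floor")    # confirmed, memory strengthened
mp.step("my office", "chair on ceiling")  # prank: surprise, memory revised
mp.step("my office", "chair on ceiling")  # now predicted, so confirmed
```

The chair-on-the-ceiling example plays out in the last two calls: the first sighting is a surprise that revises the memory, and the next sighting is predicted.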

    So, a noncomputational representation is based on the fact that the brain (particularly the neocortex) is organized as a hierarchical memory system built on neuronal patterns and regularities, which in turn are composed of synaptic mechanisms like long-term potentiation. According to Hawkins, it is the hierarchy from bottom to top and back which gives the brain its remarkable powers of intelligence. Human intelligence, for Hawkins, is really a product of having a very good memory and being able to anticipate, and hence understand, the future in incredibly complex ways. If you understand a situation, you will not be surprised, because your memory is accurate. If you do not understand it, you cannot predict what will happen next.

    An interesting feature of Hawkins’ theory is that it predicts that the neocortex is fundamentally running a single algorithm: memory-prediction. So what gives the brain its adult modularity and specialization? It is the specific nature of the patterns and regularities of each sensory modality flowing into the brain. The common currency of the brain is patterns of neuronal activity, so every area of the cortex could, in principle, “handle” any other neuronal pattern. Paul Bach-y-Rita’s research on sensory substitution is highly relevant here. Bach-y-Rita’s research has shown that the common currency of perception is the detection and learning of sensory regularities. His research has, for example, allowed blind patients to “see” by routing the output of a camera to an array of stimulators on their tongues. This is to be expected if the neocortex is running a single type of algorithm. So what actually “wires” a cortical subregion is the type of information which streams in. Because auditory data and visual data always enter the brain from unique points, it is not surprising that specialized regions of the cortex “handle” this information. But the science shows that if any region is damaged, a certain amount of plasticity allows other areas to “take over” the input. This is especially true in childhood. What Micah Allen and I have tried to show in our recent paper is that the higher-order functions of humans are based on the kinds of information available for humans to “work with”, namely, social-linguistic information. So the key to uniquely human cognitive control is not having an evolutionarily unique executive controller in the brain. Rather, the difference is in what kinds of information can be funneled into the executive controller. For humans, a huge amount of the data streaming in is social-linguistic. Our memory-prediction systems thus operate with more complexity and specialization because of the unique social-linguistic nature of the patterns which stream into the executive. So to answer Daniel Wegner’s question of “Who is the controller of controlled processes?“, the uniqueness of “voluntary” control is based on the higher-level invariant memories being social-linguistic in nature. The uniqueness of the information guarantees that the predictions, and thus the behavior, of the human cognitive control system will be unique. So we are not different from chimps insofar as we have executive control. The difference lies in what kinds of information that control has to work with in terms of its memory and predictive capacities.


    Filed under Philosophy, Psychology

    The Meaning of Meaning

    What is meaning? This simple question is at the heart of philosophy of mind. Mentality and meaning have always gone hand in hand, and philosophers have tried to give an account of meaning for thousands of years. Despite the many spirited attempts, a concrete understanding of meaning has been elusive in philosophy, and a broad consensus is nowhere to be seen. It’s a devilishly complicated question to ask what the meaning of meaning is; it boggles the mind as to how to even go about answering the question. Although it is difficult to give a straightforward definition of meaning, I do believe, contra Socrates, that giving examples is helpful in the art of producing a rigorous definition for a concept.

    Philosophers have often focused on meaning at the linguistic level, wondering how the phrase “the cat is on the mat” means that the cat is actually on the mat. Moreover, what is the practical import of the statement? What does it mean to tell someone the cat is on the mat? If the cat belongs to no one, the import is probably zilch. But if the owner of the cat has been looking for it for days, then stating where the cat is carries a great deal of import. From an evolutionary perspective, it seems like the practical import of a linguistic statement is more developmentally basic, in both a phylogenetic and an ontogenetic sense. In other words, meaning comes first, then language. But this pushes back the question: what is nonlinguistic meaning?

    The question of nonlinguistic meaning is tied to the question of nonverbal mental content. Linguistic meaning is usually talked about in terms of propositional content, e.g. the content of the statement “the cat is on the mat” is the state of affairs of a cat being on a mat. So verbal content is relatively easy to make sense of, because we can understand the conceptual content in terms of the implied propositional content, which can be spelled out in terms of beliefs and desires. If I don’t know where the cat is and I am looking for the cat, then someone telling me that the cat is on the mat will update my belief system such that I will, ceteris paribus, be motivated to go look on the mat, and will actually look. This is a fairly orthodox way of accounting for linguistic content. But what about nonverbal mental content? How can we make sense of that?

    The question is philosophically vexing in that it’s difficult to use language (the medium of philosophy) to talk about mental content that exists independently of language. One way to get a better sense of nonverbal mental content, and thus nonverbal meaning, is to ask which creatures “have” nonverbal mental content. Let’s start with unicellulars like bacteria. Does a bacterium have a “mental life”? Not in the traditional sense of the term, since it seems strained to say that a bacterium believes anything, and having beliefs has long been a traditional criterion for distinguishing creatures with mentality from those without. We could, if we wanted to, adopt the intentional stance and say that when the bacterium senses a sucrose gradient, it forms the belief that this is indeed sucrose it is encountering. But we know deep down that the “sensing” of the sucrose is entirely constituted by the physical-chemical nature of the bacterium. The sensing and digestion of the sucrose is entirely reactive and mechanistic. The bacterium’s “decision” to devour the sucrose based on its “belief” is entirely mechanical. The belief-forming talk is just that, talk. We do not really think that the intracellular machinery’s job is to form beliefs; its job is to perform biochemical functions that aid in the continuation of the bacterium’s metabolic existence.

    But although the bacterium does not have beliefs, and thus does not “have” propositional attitudes except those we ascribe to it, it still makes sense to say that the bacterium has a mental life, however dim compared to more complex creatures. For what is mental life? I claim a creature has a mental life just insofar as there is something it is like to be that creature. And, following Heidegger, I claim there is something it is like to be a creature just insofar as that creature “lives in” a phenomenal world. Living “in” a phenomenal world is not “in” in a spatial sense, as in the case of the pencil being “in” the box. Living “in” a phenomenal world is more like being-in-the-world, where being-in-the-world is a matter of (1) having concerns and (2) living in an environmental niche. A bacterium has concerns insofar as it is “concerned” about its own survival. Its whole existence is constituted by a desire to stay alive, to maintain its autonomous living. It “does” this in virtue of its complete biochemical nature. But its biochemical nature is organized in such a way as to constitute a machine which has a homeostatic equilibrium and the means by which to maintain that equilibrium despite perturbations from a changing environment and breakdowns in the stability of the internal mechanisms. So because the bacterium is “concerned” about itself in virtue of having its physical structure, the bacterium therefore lives in a phenomenal world insofar as it lives in an environment. The bacterium’s world is such that what is meaningful to the bacterium is that which enables it to keep on living. Thus, sucrose is meaningful to the bacterium because it affords the possibility of digestion for the maintenance of its homeostatic equilibrium.

    We have, then, a foundation of meaning upon which to build more complex types of meaning. Basic nonverbal mental content, and thus basic nonverbal meaning, is based around autonomy. The bacterium is an autonomous machine because it gives itself its own principles for behavior based on its nature. These principles are properties of its organization as a physical object. One of the principles is concern-oriented insofar as the maintenance of a dynamic nonlinear homeostatic equilibrium is the fundamental concern. And as we said, if you are concerned about something, then you live in a phenomenal world. If you live in a phenomenal world, you “have” phenomenal experience (where “having” is understood as a metaphor, and not a literal having of an object, like having a hammer in your hand). And if you have phenomenal experience, there is something it is like to be you. Thus, there is something it is like to be a bacterium.

    But notice that the bacterium has no nervous system. If my argument goes through, then we can conclude that looking for the neural correlates of phenomenal experience is a completely misguided enterprise that is bound to fail. However, since I have been trying to argue that phenomenal experience and consciousness do not overlap, we can still coherently look for the neural correlates of consciousness. But the search for the NCs of phenomenal experience is completely misguided because, as I have tried to establish, there is something it is like to be a bacterium, and bacteria do not have nervous systems. If I am right, then neurophilosophers trying to pinpoint the NCs of phenomenal experience have been barking up the wrong tree. For the fundamental principle of mental life is not consciousness but living in a phenomenal world, i.e. a world of real value and meaning, where entities are encountered as significant. Rocks do not live in a phenomenal world. There is nothing a rock is concerned about. It does not care if you break it in two. There is nothing it is like to be a rock. A rock has no mental life. But what a world of difference in the bacterium! The bacterium is alive. It has concerns. It lives in an ecological (i.e. phenomenal) niche. Whereas the rock does not strive to stay together in a particular organizational pattern, the bacterium does. Sucrose means nothing to a rock, for nothing means anything to a rock, but things matter to bacteria. Sucrose is meaningful to bacteria.

    And that is the meaning of meaning in its most basic form. Of course, I am glossing over the complexity of both primordial meaning and linguistic meaning. Linguistic meaning, though grounded in primordial meaning, takes on a life of its own once established in a population. This is why Heidegger took pains to distinguish between being-in-the-environment and being-in-a-linguistic-world, with the latter reserved for those humans who have learned a language and grown up in a social-linguistic community.


    Filed under Consciousness, Philosophy

    Does a global workspace really exist?

    Bernard Baars’ Global Workspace Theory (GWT) of consciousness is arguably the “hottest” theory of consciousness on the market right now. The essence of the theory is that most mental contents are nonconscious and localized to specific sensorimotor circuits, e.g. a nonconscious visual content is primarily localized to the visual cortex. In order for a mental content to become “conscious”, the GWT says that the localized nonconscious content must be made available to a globally distributed neuronal workspace that integrates and associates that content with other localized content to form a unified, multimodal representation, which is our conscious experience and which enables certain functions that can only operate on the basis of such global information. For example, when a human becomes conscious of a blue coffee mug, the GWT says that the localized circuits specific to different sensory modalities must become “globally available” for use by a distributed network such that the different sensory modalities become “unified” into a conscious percept. The key idea of the GWT is thus the transition from nonconscious, local information to conscious, global information. The global information is said to now be in a “global workspace” whereby it can enable functions like (1) the integration of novel information into preexisting circuits; (2) working memory functions such as inner speech and visual imagery; (3) diverse kinds of learning; (4) voluntary control enabled by conscious goals; and (5) access to the autobiographical self and self-referential articulation.
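The local-to-global transition at the heart of the theory can be illustrated with a toy sketch. The class and names here are my own illustration, not Baars’ formal model: contents processed locally stay within their module, while broadcast contents become available to every other module.

```python
class GlobalWorkspace:
    """Toy GWT sketch: local contents stay in their module;
    broadcast contents become globally available."""

    def __init__(self, module_names):
        # What each module has received from the workspace.
        self.received = {name: [] for name in module_names}

    def local_process(self, module, content):
        # Nonconscious processing: nothing leaves the module.
        return (module, content)

    def broadcast(self, module, content):
        # "Conscious" access: the content is distributed to all
        # other modules via the workspace.
        for name in self.received:
            if name != module:
                self.received[name].append((module, content))

gw = GlobalWorkspace(["vision", "audition", "motor"])
gw.local_process("vision", "blue mug edge detection")  # stays local
gw.broadcast("vision", "blue coffee mug")              # made global
# Now audition and motor can use the visual content, e.g. for
# naming the mug aloud or reaching for it.
```

Nothing in this sketch captures the competition for workspace access or the neuronal dynamics, but it marks the distinction the theory trades on: the same content is nonconscious when local and “conscious” when globally available.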

    The GWT has been borne out by numerous experiments in which consciously reported percepts are correlated with wider, globally distributed brain activation, whereas nonconscious percepts are correlated with less widely distributed activation in the localized sensory cortices. But as a philosopher, I am interested in questions of ontology. A question that interests me about the GWT, and which seems to have garnered little attention from its proponents, is whether or not the global workspace really exists. Now, in one sense of the term, the global neuronal workspace really exists insofar as there really is a widely distributed network of brain circuitry that has the functional properties associated with consciousness. But notice the terminology of the global workspace. What is the nature of this “space”? Is the space inside our skulls? Can we open up the brain and point to this workspace? Evidently not, for the “workspace” seems to refer to a functional space that is only experienced as a literally internal space. The only real “space” in the brain is the physiological space of brain tissue. But if you described to a layperson the nature of working memory, such as inner speech and visual imagery, and then asked them “where” in space such functions are located, they would likely tell you the workspace is inside their heads. Now ask them how they know this, and they will likely tell you they know it because they experience inner speech and visual imagery as taking place inside their heads. But as philosophers, we should be skeptical about deriving the reality of the workspace from our ordinary experience of it. It is this distinction between the “real” global distribution of neuronal circuitry and how we experience the workspace that captures my worry about the “reality” of the workspace. For it is one thing to say that there is a global distribution of neuronal activity in the brain, and it is another to say that there really is a “spotlight” in our heads that “shines an attentional beam” onto a theater in our brains.

    Proponents of the GWT far too often fall into the trap of confusing the metaphorical nature of such theater talk with the “globally distributed” talk of neuron populations. It is important here to keep distinct what we are trying to explain with the GWT from the mechanisms we use to explain that phenomenon. What we are trying to explain is the experience of ourselves as having a unified conscious experience that is coherent, stable, richly detailed, and continuous across time. This is what needs explaining. But a neuronal explanation of this phenomenon based on the global distribution of neural information is fully compatible with the unified conscious experience being an illusion. I think this is a point that Dennett has been trying to make for decades. But when most people hear the word “illusion” they think that Dennett is trying to dismiss the phenomenon to be explained, namely, unified, narratologically continuous experience. What Dennett is actually trying to do is get people to realize that one of the capacities of the human brain is indeed the capacity to generate illusions. And the “unified” Cartesian theater “in our heads” is the biggest illusion of all. For it is equally possible that those globally distributed neural mechanisms could make us experience ourselves as being outside our heads.

    That this is true is evidenced by both clinical and experimental evidence. On the clinical side, we have reports of out of body experiences. Julian Jaynes recounts such a report:

    That there is no phenomenal necessity in locating consciousness in the brain is further reinforced by various abnormal instances in which consciousness seems to be outside the body. A friend who received a left frontal brain injury in the war regained consciousness in the corner of the ceiling of a hospital ward looking down euphorically at himself on the cot swathed in bandages. Those who have taken lysergic acid diethylamide commonly report similar out-of-the-body or exosomatic experiences, as they are called. Such occurrences do not demonstrate anything metaphysical whatever; simply that locating consciousness can be an arbitrary matter.

    On the experimental side, we have people like Thomas Metzinger who can induce out-of-body experiences through virtual reality setups in which you are given visual feedback, through a camera, of your own back. After adjustment, your brain will make you feel like you really are floating outside your body. This tells us that the “location” of the experiential global workspace is more or less arbitrary. If the brain wanted to, it could make you feel like the global workspace exists 3 feet above your skull. Now, of course, there are good reasons why the brain generates the illusion as taking place in your head, for this is tied into the closeness of volition and internal sensations with our bodily experience. But it is important to remember that thousands of years ago the “internal mind-space” was experienced as being in the heart, not the head.

    So the moral of this post is that if we are going to develop an adequate scientific theory of consciousness like GWT, we must be clear on the distinction between what is being explained and the mechanisms we posit to explain the phenomenon. The phenomenon to be explained is the experience of a unified Cartesian theater in our heads. The explanation is a global distribution of localized, nonconscious information. But when asked whether the global workspace “really exists”, we have to distinguish between the workspace as experienced by us and the workspace as hypothesized by science. As we experience it, the “location” of the workspace inside our heads is arbitrary. As we explain it, the location of the workspace is the precise network of globally distributed brain* activity. So does a global workspace really exist? Yes. But it exists both as an illusion we experience, and as a brain distribution.


    *I say “brain” activity and not “neuronal” activity because there is growing evidence that astrocytes modulate neuronal information processing by regulating glutamate uptake in the synaptic cleft.


    Filed under Consciousness, Philosophy, Psychology

    Why we should disentangle "what-it-is-like-ness" from consciousness

    If you ask almost any mainstream philosopher familiar with the problem of consciousness for a definition of consciousness, more often than not they will define it in terms of “what-it-is-like-ness”. For these philosophers, if there is “something it is like” to be an entity, then that entity is conscious, period. This works pretty well for most objects. Is there something it is like to be human? Most people would say yes. Therefore, humans are conscious. Is there something it is like to be a rock? Most people would say no. Therefore, rocks are not conscious. At first blush, then, it seems like “what-it-is-like-ness” is a good working definition of what consciousness “is”. But the rock and the human are the easy cases. What about an earthworm? Is there something it is like to be an earthworm? Whereas it is seemingly obvious that there is nothing it is like to be a rock, how can we answer this question about an earthworm? It seems somewhat intuitive to say that there is something it is like to be an earthworm. Therefore, it would seem that we must say that the earthworm is conscious. But in this post I want to press these intuitions. For me, it doesn’t seem immediately wrong to say that an earthworm lacks consciousness. This seems like a perfectly coherent thing to say. If it is, then we must either say that there is nothing it is like to be an earthworm, or that consciousness does not overlap with what-it-is-like-ness. Since it seems wrong to say that there is nothing it is like to be an earthworm, we are compelled to reexamine the mainstream definition of consciousness as “what-it-is-like-ness”.

    But if consciousness is not what-it-is-like-ness, then what is it? Well, we seem to be pretty clear on the fact that humans are capable of being conscious, and that rocks aren’t, so what is the difference between a rock and a human? The difference needs to be such that it isn’t shared by the earthworm and the human, so we will need to rule out the capacity for perception and action, or the possession of a nervous system. A clue for homing in on this difference can be found in the case of the sleeping mother. Imagine a mother is asleep in one room and a newborn infant is asleep in another room. The mother is sound asleep and oblivious to sounds like the noisy air conditioner turning on, or the sound of traffic outside the window. But the slightest noise from the infant is enough to catch her attention and wake her. Now, it seems obvious to most people that the sleeping mother was capable of a complex perceptual act, since presumably the perception of the infant’s small cry against the background noise of the house is a case of genuine perception. So here is the million-dollar question: was the mother conscious of the baby’s cry while she was asleep?

    The field of consciousness studies seems to be split down the middle when it comes to answering this question. On the one hand you have the first-order theorists who claim that since the perception of the baby’s cry necessarily requires awareness of the cry, and since they define consciousness as first-order awareness, then the mother was in fact conscious of the baby’s cry. On the other hand you have the second-order theorists who claim that it is not enough for the mother to be simply aware of the cry to be conscious. Rather, they claim that in order to be conscious of the cry, the mother must be aware that she is aware of the cry. The awareness must be higher-order in order to be conscious.

    My intuitions lean towards the second-order theorists. I think that the mother is not conscious of the baby’s cry. Rather, her adaptive unconscious is aware of the baby’s cry, and upon perceiving the cry, this information is globally assembled and shunted into consciousness where it shortcuts decision making. But the unconscious perception of the cry is genuine mental activity and the unconscious awareness of the cry is genuine awareness. I find the second-order story of the sleeping mother much more intuitive, since it strikes me as patently misguided to say that someone could be conscious of something even when they are asleep and not aware that they are aware. Obviously, while sleeping, the mother’s mind is in some respect aware of what’s happening in the environment, otherwise she wouldn’t wake up upon hearing her baby stir. But I think it is misguided to define consciousness in terms of such simple awareness, for what follows from such a definition is the idea that earthworms are conscious, since they too possess the capacity for first-order awareness. And I think it is most sensible to restrict consciousness such that the earthworm and the sleeping mother lack it. But just to be clear, consciousness is also not to be confused with mere alertness, wakefulness, and awareness of events in either the body or the world. For the earthworm is aware of certain properties in the environment, yet it is not conscious (in my opinion).

    A further question is whether there is something it is like to be a sleeping mother who becomes aware of her baby’s cry. The first-order theorists claim that yes, there is something it is like to be the sleeping mother, insofar as the mother is aware of the baby and they think there is something it is like to be aware of things in the world. For most second-order theorists, there is not something it is like to be the sleeping mother, since the mother lacks second-order awareness. This is where my intuitions depart from the second-order theorists, for I think that even though the mother is not conscious, there is still something it is like to be asleep, just like there is something it is like to be an earthworm. Now, some theorists will immediately reply that it is absurd to claim that there is something it is like to be asleep. But we’ve already seen that the sleeping person is still capable of first-order awareness of the environment, and it does seem intuitive that there is something it is like to have first-order awareness; otherwise we are forced to claim that there is nothing it is like to be an earthworm, and this is an undesirable position.

    So what are we left with in terms of a definition of consciousness? I think the second-order theorists are on the right track insofar as they emphasize that it is not enough for an entity to be aware of something in order to be conscious; one must also be aware that one is aware. However, I disagree with the second-order theorists insofar as I don’t think what needs explaining is what-it-is-like-ness, since I think there is something it is like to be an earthworm and obviously the earthworm is not aware of its own awareness. So to explain consciousness, we need to explain how it’s possible for an entity to be aware of its own awareness. While I won’t go into the details in this post, regular readers of this blog know that I take a Jaynesian approach to this question, and think that what makes it possible for us to be aware of our awareness is having a linguistic concept for “awareness”. New linguistic concepts enable us to pay attention to new aspects of reality. The linguistic concept of “awareness” allows us to pay attention to awareness qua awareness. So the hypothesis here is that unless you have a linguistic concept for awareness, you cannot be conscious, because you cannot pay attention to, and thus be aware of, your own awareness. On this view, infants who lack the linguistic concept for awareness are not conscious. This also restricts the historicality of consciousness to those points in history where humans first started developing mentalistic concepts. This is in accord with Daniel Dennett’s famous analogy of baseball. Just as one cannot play baseball without the concept of baseball, one cannot be conscious unless one has the right concepts in place.


    Filed under Consciousness, Philosophy

    A crude theory of perception: thoughts on affordances, information, and the explanatory role of representations

    Perception is the reaction to meaningful information, inside or outside the body. The most basic information is information specific to affordances. An affordance is a part of reality which, in virtue of its objective structure, offers itself as affording the possibility of some reaction (usually fitness-enhancing, but not necessarily so). A reaction can be understood at multiple levels of complexity and mechanism. Sucrose, in virtue of its objective structure, affords the possibility of maintaining metabolic equilibrium to a bacterium. Water, in virtue of its objective structure, affords the possibility of stable ground for the water strider. Water, in virtue of its objective structure, does not afford the possibility of stable ground for a human being unless it is frozen. An affordance then is, as J.J. Gibson said, both subjective and objective at the same time. Objective, because what something affords is directly related to its objective structure; subjective, because what something affords depends on how the organism reacts to it (e.g. human vs. water strider).

    The objective structure of a proximal stimulus can only be considered informationally meaningful if that stimulus is structured so as to be specific to an affordance property. If a human is walking on the beach towards the ocean, the ocean will have the affordance property it has regardless of whether the human is there to perceive information specific to it. The “success” or meaningfulness of the human’s perception of the ocean is determined by whether the proximal stimulus contains information specific to that affordance property. A possible affordance property might be “getting you wet”, which is usually not useful, but can be extremely useful if you suddenly catch on fire. Under normal viewing conditions, the objective structure of the ambient array of light in front of the human contains information specific to the ocean’s affordance properties, in virtue of its reflective spectra off the water and through the airspace. But if the beach were shrouded in a very thick fog, the ambient optic array would still stimulate the human’s senses, but the stimulus wouldn’t be meaningful because it conveys no usable information about the ocean, even though that information is potentially there for the taking once the fog clears. An extreme version of “meaningless stimulus without perception” is the Ganzfeld effect. On these grounds, we can recreate, without appealing to any kind of representational theory, the famous distinction between primary and secondary qualities, i.e. the distinction between mere sensory transduction of meaningless stimuli and meaningful perception.

    Note too how perception is most basically “looking ahead” to the future since the affordance property specifies the possibility of a future reaction. This can be seen in how higher animals can “scan” the environment for information specific to affordances, but restrain themselves from acting on that information until the moment is right. This requires inhibition of basic action schemas either learned or hardwired genetically as instinctual. In humans, the “range” of futural cognition is uniquely enhanced by our technology of symbols and linguistic metaphor. For instance, a human can look at a flat sheet of colored paper stuck to a refrigerator and meaningfully think about a wedding to attend one year in the future. A scientist can start a project and think about consequences ten years down the road. Humans can use metaphors like “down the road” because we have advanced spatial analogs which allow us to consciously link disparate bits of neural information specific to sensorimotor pathways into a more cohesive, narratological whole so as to assert “top-down” control by a globally distributed executive function sensitive to social-cultural information.

    This is the function which enables humans to effortlessly “time travel” by inserting distant events into the present thought stream or simulating future scenarios through conscious imagination. We can study the book in our heads of what we have done and what we will do, rehearse speech acts for a future occasion, think in our heads what we should have said to that one person, and use external symbolic graphs to radically extend our cognitive powers. Reading and writing, for example, have utterly changed the cognitive powers of humans. Math, scientific methodology, and computer theory have also catapulted humans into the next level of technological sophistication. In the last few decades, we have seen how the rise of the personal computer, the internet, and the cellphone has radically changed how humans cope in this world. We are, as Andy Clark said, natural-born cyborgs. Born into a social-linguistic milieu rich in tradition and preinstalled with wonderful learning mechanisms that soak up useful information like sponges, newborn humans effortlessly adapt to the affordances of the most simple environmental elements (like the ground) to the most advanced (the affordance of a book, or a website).

    So although representations are not necessary at the basic level of behavioral reaction shared by the unicellulars (bacteria reacting to sucrose by devouring it and using it metabolically), the addition of a central nervous system allows for the storage of affordance information in representational maps. A representational map is a distributed pattern of brain activity which allows for the storage of informational patterns that can be utilized independently of the stimulus event which first brought the organism into contact with that information. For example, when a bird is looking right at a food cache, it does not need its representational memory to be able to get at the food; it simply looks at the cache and then reacts by means of a motor program for getting at the food, sparked by a recognition sequence. However, when the cache is not in sight and the bird is hungry, how does the bird get itself to the location of the cache? By means of a re-presentation of the cache’s spatial location, which was originally stored in the brain’s memory upon first caching the food. By accessing stored memory-based information about a place even when not actually at that place, the bird is utilizing representations to boost the cognitive prowess of its nonrepresentational affordance-reaction programs. Representations are thus a form of brain-based cognitive enhancement which allows for reaction to information stored within the brain itself, rather than just contained in the external proximal stimulus data. By developing the capacity to react to information stored within itself, the brain gains the capacity to organize reactions into more complicated sequences of steps, delaying and modifying reactions, allowing for the storage of information for later retrieval, and enabling better prediction of events farther into the future (like the bird predicting food will be at its cache even though it is miles away).
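    The contrast between stimulus-driven reaction and representation-driven reaction can be made concrete in a toy sketch. The Python below is purely hypothetical (the function names, locations, and the lookup rule are invented for illustration): when food is in sight, the reaction runs off the stimulus alone; when it is out of sight, behavior falls back on a stored map, i.e. information carried in the system itself rather than in the current proximal stimulus.

```python
# Hypothetical sketch of stimulus-driven vs. representation-driven
# reaction (all names and values invented for illustration).

cache_map = {}  # the bird's stored "re-presentations" of cache sites

def store_cache(label, location):
    """Lay down a memory trace at caching time."""
    cache_map[label] = location

def react(visible_food_location=None, hungry=False):
    """If food is in sight, react to the stimulus directly (no stored
    representation needed); otherwise fall back on the stored map."""
    if visible_food_location is not None:
        return ("approach", visible_food_location)   # stimulus-driven
    if hungry and cache_map:
        # Representation-driven: the information guiding behavior is
        # stored internally, not present in the current stimulus.
        label, location = next(iter(cache_map.items()))
        return ("travel_to", location)
    return ("forage", None)

store_cache("pine seeds", (12, 7))
print(react(visible_food_location=(3, 3)))  # ('approach', (3, 3))
print(react(hungry=True))                   # ('travel_to', (12, 7))
```

    The design point is that the second branch is what the representation buys the organism: a way to organize behavior toward something absent, which is exactly the "looking ahead" function described above.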


    Filed under Consciousness, Philosophy