Monthly Archives: May 2012

Partial Epiphenomenalism?

The theory of epiphenomenalism states that mental phenomena are causally inert by-products. Like the steam of a train whistle, mental phenomena are merely side-effects of an underlying physical/functional process. It seems to me that the question of epiphenomenalism is usually framed in terms of whether all mental phenomena are epiphenomenal. However, it has just struck me that this might be the wrong question. Maybe we should ask whether only some mental phenomena are causally inert by-products.

Let’s define pain as the brain’s nonconscious detection of cell damage signals. Let’s define the feeling of pain as the brain’s metacognitive detection of its nonconscious detection of cell damage signals. I assume, unlike David Rosenthal, that there is something-it-is-like for a creature’s brain to nonconsciously detect cell damage. I also think there is something-it-is-like for the brain to metacognitively detect nonconscious detection of cell damage. I think that it is these metacognitive feelings of pain that are conscious. I want to say that these feelings of pain only (actually) exist in the functional sense. This functional sense runs “automatically”, just like the nonconscious detection of cell damage. However, feelings of pain also have a subjective sense. It is this sense that is our conscious experience, which we can introspect and report on. This is the feeling of pain as opposed to just the pain itself. It is the type of feeling that makes pain unbearable and unpleasant. The idea I am beginning to develop is that the subjective component of this metacognitive system is epiphenomenal, in the sense that it is a by-product. The subjective component of conscious feelings is what-it-is-like to be a metacognitive brain system.

This conscious/nonconscious distinction can be made for more modalities than pain. We can define “feelings of vision”, “feelings of touch”, “feelings of taste”, “feelings of body”, “feelings of smell”, and “feelings of hearing” all in the same way. Partial epiphenomenalism is the view that only some aspects of mentality are causally inert. Since the conscious feelings are realized by metacognitive neural assemblies, and all neural assemblies deterministically “run” automatically, the conscious feelings themselves are epiphenomenal. Partial epiphenomenalism is partial because it says that these nonconscious first-order systems are genuinely mental, and obviously any nonconscious system is causally efficacious. After all, rocks do not detect cellular damage at all, nor do they have to stay in homeostasis. And I have been arguing for a while now that there is something-it-is-like for these nonconscious organisms to exist. But now take a nonconscious brain and provide it with a Global Neuronal Workspace that can metacognitively synthesize disparate specialized nonconscious processors and broadcast its information throughout the brain. There is going to be something-it-is-like for that GW to process its information. That something-it-is-likeness for the GW to run is what I mean by feelings of consciousness. Strictly speaking, the subjective component does nothing causally. So why is it there in the first place? Because it’s the inevitable result of what-it-seems-like to be an embodied organism. This might just be a brute fact of organic life. Either that, or panpsychism is true.

Regardless, why are only the metacognitive systems conscious? Because of how the GW is organized, it is tied into “executive” decision making, which I think is what ordinary people mean when they talk about “voluntary” actions, the actions we are responsible for because in some sense we consciously willed them. But I am in agreement with Libet and Dan Wegner: the feeling of conscious will is epiphenomenal. But it would be rash to therefore conclude that consciousness itself is causally inert. Because the feelings of will are realized by fully functional causal systems like the GW, and these systems are very much causally active, it would be a mistake to conclude that consciousness itself is an illusion simply because the feelings of consciousness are illusions. The illusory feelings are the what-it-is-like for a GW to operate. Another way to say it is that consciousness is in the business of producing illusions.

Part and parcel of the content that guides the priors of the GW operations is an internalization of learned linguistic concepts. And as Julian Jaynes said:

Let no one think these are just word changes. Word changes are concept changes and concept changes are behavioral changes. The entire history of religions and of politics and even of science stands shrill witness to that. Without words like soul, liberty, or truth, the pageant of this human condition would have been filled with different roles, different climaxes. And so with the words we have designated as preconscious hypostases, which by the generating process of metaphor through these few centuries unite into the operator of consciousness.

Compare two creatures, equally intelligent. One has been taught how to use psychological words like “mind”, “belief”, “desire”, “consciousness”, “will”, “reason”, and the other has not. Who is going to have the kind of intellectual life we most associate with conscious thinking adults? Obviously the one in possession of the right concepts. These concepts operate as the contextual priors of the metacognitive GW system. Micah Allen and I have called this “preloading”, the idea being that socio-cultural factors “load up” the default mode network with social information, which then disseminates to the lower systems, influencing their processing, which in turn feeds back into the DMN for more processing. This is the continual loop of nonconscious states being operationalized into consciousness by metacognitive systems. It enables humans to do things like reminisce about a lover, to imagine what the future will be like, to imagine unseen vistas in your mind, to contemplate your past or future actions, to muse on an idea, to use inner speech as a cognitive aid, to project categorical structure onto the world, to narratize the world into stories, to think about your own life story, to tell stories, to articulate, to reason, to think about your own thinking, to reflect, deliberate, pontificate, to wonder, to gaze, to savor. You get the picture.

Of course what I have just said is not a full theory. It is only a sketch of a theory. But I think the overall picture is becoming clearer.

Leave a comment

Filed under Consciousness

A Question For Dispositionalist Higher-order Thought Theory

According to Peter Carruthers’ Dispositionalist Higher-order Thought Theory, “phenomenally conscious experiences are analog/non-conceptual states that are immediately and non-inferentially available to a faculty of higher-order thought; and that by virtue of such availability (together with the truth of some sort of consumer semantics) the states in question possess dual analog/non-conceptual contents, both first-order and higher-order.”

The Dispositionalist theory is supposed to be superior to Armstrong and Lycan’s “inner sense” higher-order theory, which says that the relevant higher-order function is a kind of internal perception-like scanner or monitor of first-order states. Carruthers says:

Dispositionalist higher-order thought theory has all the advantages of inner sense theory, then, yet it has none of the associated costs. No “inner scanners” or organs of higher-order perception need to be proposed. Rather (in common with the actualist version of higher-order thought theory) all that needs to be postulated is some sort of mind-reading or theory-of-mind faculty, which has available to it concepts of experience, and which can access perceptual input.

My question is simple: why can’t we rephrase the mind-reading faculty’s “access to perceptual input” as saying the “mind-reading faculty scans perceptual input”? The difference between “having access to” perceptual input and being able to “scan” perceptual input seems to me to be trivial. The mind-reading faculty is obviously internal, and it can also scan or “access” perceptual input. Therefore, it can be understood as a kind of inner scanner. So on the face of it, I don’t see how Carruthers has done anything except provide a further mechanical specification of what an “inner scanner” might look like.

Of course, Carruthers might reply by saying that the mind-reading faculty’s knowledge of the “concept of experience” is more of a conceptual (thought-like) process than a perceptual process. First of all, I am of the opinion that the notion of a “concept” applies even to simple perceptual motor systems (given Gibsonian assumptions about affordance ontology), so I don’t see any problem in talking about a “conceptualized” perceptual process. I think it’s concepts all the way down. Second, there is no reason to suppose that “inner sense” works the same way as first-order perceptual systems, although it is grounded in such systems. After all, can’t we talk about first-order processes being redeployed for new functional purposes? However, it seems more neurally realistic to me to think that the kinds of micro-operations of neurons involved in first-order perception are going to be similar to the micro-operations of neurons involved in any kind of higher-order process, even a “mind-reading faculty” (since there are only so many ways to propagate neural information throughout the brain). The inner sense theory therefore captures the sense in which it becomes neurally unrealistic to posit radically distinct kinds of mental operations such as “thoughts” as opposed to just more abstract, invariant, and offline perceptual processes. Perceptions can be abstract too, not just “thoughts”.

But don’t get me wrong. I like Carruthers’ idea that consciousness in some way involves the mind-reading faculty “turning upon itself”. I also like the idea of consciousness depending in some way on having a concept of experience. But I’m just not sure how this is supposed to be radically incompatible with “inner sense” theory so long as we understand that (1) perceptual processes can be conceptual too and (2) whatever form the inner sense takes, it won’t be like a “sense organ”. My guess is that part of the neural realizer for inner sense involves the default mode network, and all the systems associated with “mind-wandering” and other off-line imagination processes. A good chunk of the default mode network includes frontal regions, which makes architectural sense of the inner sense theory since the frontal lobes evolved last and take input from practically every other part of the brain. So long as we have a simplistic notion of what an “inner sense faculty” might look like, it will be easy to construct strawman arguments against it. But since it’s perfectly plausible that high-level cortico-cortical connectivity to the frontal areas would allow for the influence of language and narrative processes on our inner sense, the notion of an “inner sense faculty” can be as complex as we need to account for the rarefied psychological processes of higher-order cognition.

This was Julian Jaynes’ idea. He thought “inner sense” depended in some way on special kinds of language and narrative learning. More specifically, he thought that inner sense depended on having linguistic concepts of psychological processes. In ancient peoples these psychological concepts were rather concrete and tightly attached to bodily states, such as the ancient Greek concept of “thumos”. But eventually the concepts grew more abstract, and concepts like “mind”, “consciousness”, and “soul” were developed. Jaynes thus predates Carruthers in supposing that the concept of psychological experience is somehow necessary for the occurrence of consciousness. But Jaynes didn’t take himself to be explaining anything like “phenomenal consciousness” as it is normally understood. He would have thought such notions were too amorphous to capture the uniquely human capacities for introspective consciousness, which were his main explanatory target. Jaynes’ emphasis on introspection makes him an early precursor to modern “inner sense” theories of consciousness, but I think Jaynes’ theory is more complex and nuanced than any simple “inner sense organ” given his emphasis on the importance of language, narrative, and conceptual complexity.

Leave a comment

Filed under Consciousness, Philosophy

Defining Consciousness

Many cognitive neuroscientists interested in explaining consciousness often define it as that which distinguishes an awake and alert mammal from a mammal in a coma. This is essentially what Bernard Baars takes himself to be explaining with his Global Workspace model. The idea is that being awake and alert with an active global workspace is what accounts for the intelligence and vigor of awake humans as opposed to sleeping humans or humans in comas. More specifically, Baars operationalizes consciousness such that “We will consider people to be conscious of an event if (1) they can say immediately afterwards that they were conscious of it and (2) we can independently verify the accuracy of the report”. It must be understood that “report” doesn’t necessarily mean verbal report either, otherwise the Global Workspace model couldn’t be applied to nonhuman mammals (which Baars clearly thinks it can). Of course, there are good methodological reasons for operationalizing consciousness, and Baars does admit that the operational definition might miss some unreportable experiences.

On the GW model, the ability to nonverbally “report” on a conscious experience is dependent on there being a global broadcasting system in the brain. I’m simplifying the complexity of the model greatly, but the basic idea is that behind such an ability to report is a GW that enables complex, intelligent, goal-directed behavior that just isn’t there when we are sleeping or in a coma. Without a GW there is just a diverse network of specialized processors that operate quickly, automatically, and in a massively parallel fashion. Such specialized processors can accomplish quite a bit on their own, but without the GW there is a lack of coherence that is the sign of awake, intelligent mammalian behavior.
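The competition-and-broadcast structure just described can be caricatured in a few lines of Python. This is only my illustrative toy sketch, not Baars’ actual model: the processor names and activation values are made up for the example. The point is simply that specialized processors compete, one wins access to the workspace, and its content then becomes globally available to the rest.

```python
# Toy global-workspace-style broadcast (illustrative only).
# Each specialized processor has a made-up activation level; the most
# active processor wins access to the workspace, and its content is
# then made available to every processor, producing global coherence.
processors = {"vision": 0.7, "audition": 0.3, "touch": 0.9}

def broadcast(processors):
    winner = max(processors, key=processors.get)   # competition for access
    return {name: winner for name in processors}   # global availability

received = broadcast(processors)  # every processor now "hears" the winner
```

Without the broadcast step, each processor would only ever see its own local activity, which is one way of picturing the lack of coherence in the workspace-free case.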

With that said, I’m just not sure that it’s best to define consciousness as the difference maker between sleep and wakefulness in mammals. While I believe there is indeed a phenomenological difference between sleep and wakefulness, I’m not sure the phenomenological difference is a difference in consciousness since I don’t define consciousness in terms of whether it bestows phenomenology or not. It seems to me that what Baars is really getting at with his GW model is an explanation of how it’s possible that awake mammals are so intelligent compared to sleeping mammals, mammals in a coma, or creatures with much less complicated nervous systems. What I think Baars is getting at then is an explanation of the high end of a spectrum of intelligent reactivity. All organisms react to their environments in an appropriate way. This can be seen as a kind of intelligence. But with a mammalian brain outfitted with a GW, the complexity of intelligent reactivity is far greater because the GW allows for a higher-order level of coordination between disparate specialized processors. And there is undoubtedly a phenomenological difference when such a GW is active compared to when it’s not. But do we really need to invoke consciousness to explain this change in phenomenology and behavior when you have a GW? Since I have argued elsewhere that even nonneural organisms possess phenomenal consciousness, the GW model cannot be a reductive account of the origin of what-it-is-likeness. What other concepts of consciousness are left? Well, there is the introspective consciousness of humans. But Baars is explicit that his GW model is not a model of introspection. So where does consciousness come into the picture?

In my preferred phenomenological taxonomy the greatest qualitative shift in phenomenology comes when you possess the capacity for introspective consciousness. If you don’t have introspective consciousness, then I believe the differences in phenomenology from simple to complex creatures and from sleep to wakefulness are matters of degree. But with introspective consciousness I believe there is a radical qualitative shift in what-it-is-likeness. For this reason, I prefer to follow Julian Jaynes in restricting the definition of consciousness to introspective consciousness, dropping “consciousness” from phenomenal consciousness and just calling it phenomenality.

So on my preferred definition I don’t think the GW model is a model of consciousness. However, I do believe that the shared architecture of GW in humans might provide a neurological scaffold upon which to build the uniquely human capacity for introspective consciousness (I think you will need some kind of linguistic input for such a construction). So the GW model might still be important for accounting for what I am calling consciousness, provided that it is only a foundation akin to Michael Anderson’s “massive redeployment hypothesis”. The idea then is that in humans the GW gets redeployed to help instantiate complex introspective activities, which then in turn greatly change what-it-is-like to be us.

I take Baars and the GW model very seriously. Baars is a brilliant scientist and knows his science very well. But I just do not agree that it’s best to define consciousness such that it distinguishes wakefulness from coma. I believe we already have a useful vocabulary for describing what the GW model accounts for: greater degrees of intelligent reactivity in virtue of higher levels of coordination between specialized processors. Obviously such coordination will enable complex intelligent behavior that seems to warrant ascriptions of phenomenal experience. But since GW is not an account of either phenomenal or introspective consciousness, I don’t think we really need to talk about consciousness when talking about the GW model. As Julian Jaynes said,

“Reactivity covers all stimuli my behavior takes account of in any way, while consciousness is something quite distinct and a far less ubiquitous phenomenon. We are conscious of what we are reacting to only from time to time.”

In my view, GW theory is capable of explaining certain kinds of reactivity unique to mammals with special types of globally connected brain architectures. But unless we use the foundations of GW theory to account for introspective consciousness, I do not see GW theory as offering an explanation of consciousness. But since this is really a terminological quibble, don’t interpret me as believing that GW theory is false. If we properly understand what GW theory is in fact trying to explain with its operational definition, then it is an enormous empirical success and should be praised as such. But as a philosopher I am very particular about terminology. Of course, Baars is just following the current mainstream in defining consciousness such that it’s evolutionarily ancient and shared with mammals. And he’s perfectly justified in defining his terms however he wants. But I guess I am stubborn about defining consciousness such that it ends up being relatively rare in the animal world.

1 Comment

Filed under Consciousness, Philosophy

Quote of the day 5-22-12

“The true direction of the development of thinking is not from the individual to the social, but from the social to the individual.” ~ Lev Vygotsky, Thought and Language

Leave a comment

Filed under Random

Review of Stuff: Compulsive Hoarding and the Meaning of Things by Randy Frost

Hoarding is a fascinating psychological malady in which the compulsion to hoard things becomes so strong that it eventually starts interfering with a person’s well-being. Randy Frost’s recent book Stuff: Compulsive Hoarding and the Meaning of Things is a riveting look into the lives of hoarders and what drives them to manifest such seemingly irrational behavior. The book is chock full of curious anecdotes and interviews of hoarders that help you get a sense of what it is like to be so absorbed in the life of things. Hoarding is interesting because we all remember traces of it in our own childhood collecting fads. When I was young I collected everything from soda tabs to Pokémon cards. But it never became obsessive. That’s the difference with hoarders: they take a normal childhood tendency to collect things and go completely overboard, to the point where they can no longer live safely in their own homes.

Frost goes into some detail outlining possible causes of hoarding, and he finds that many hoarders suffered from some kind of emotional trauma early in life, such as distant parents. He speculates that this lack of affection might have driven them to find comfort in the world of things. Hoarding also has a lot of commonalities with OCD. But the disease is complex and multifaceted and shouldn’t be reduced to just a few factors. There also might be interesting evolutionary reasons behind the hoarding instinct, but how deep into our history it goes is unknown, since there are really no close analogues in animals: most animals hoard food, not objects.

Hoarders are an interesting bunch. They are often highly intelligent with a good memory for details and a knack for telling stories about the histories of their objects. But their minds are so disorganized that they are unable to use their intelligence for much good. Their involvement in their things prevents them from leading a normal life, and maintaining relationships becomes difficult when your homespace is completely unlivable. Hoarding places great burdens on children and spouses who have to live with it.

What I found really interesting about Frost’s account of hoarding is that it is very compatible with current research on the extended mind hypothesis. Hoarders often use their collection of stuff as an external memory source. They can remember the details of when they brought each object into their home. To throw away these objects would be tantamount to throwing away their own memories. Moreover, it is not just their memory that is externalized but their very personal identity. William James thought we all had a “material self” that bleeds into our personal possessions, but with hoarders this sense of self extends into ALL their objects, and not just special ones. They feel like their objects are part of their basic self-hood, to the point that it becomes emotionally traumatic to throw away a piece of useless trash. Hoarders often have deep personal histories with each of their objects, and what might look like junk to an outsider could be to the hoarder a treasure worth cherishing. Hoarders are also interesting because they seem to enjoy aesthetic qualities in everyday objects that normal people might only experience on psychedelic drugs. The stained pattern on an old milk carton might be beautiful to a hoarder and they just can’t imagine throwing it away.

What’s also interesting is the commonalities of objects collected by hoarders. One of the most common items is newspapers and magazines. Apparently many hoarders think of themselves as information junkies, to the point of saving every scrap of information they have ever come across. What’s interesting from an extended mind perspective is that these hoarders often don’t even read the newspapers or magazines. It’s just enough to possess that information, “just in case” they might need it in the future. They feel like just owning the information makes it “theirs” despite not reading it. In effect, these hoarders have externalized their knowledge into their collections of newspapers, and they have accepted the externality of that information as a replacement for actually reading it. This kind of “just in case” mentality is extremely common in hoarding. Many hoarders see potential uses in objects that most people would simply discard. This “just in case” mentality leads many hoarders to buy multiples of items even if they don’t need them, like having 36 bottles of the same shampoo. They feel great anxiety if they are not prepared for the worst case scenario. But while some hoarders can’t throw away things because of a perceived potential, others can’t throw away things because they feel anxious at the thought of wasting something.

Hoarding is a complex and interesting affliction that affects millions of people around the world. Randy Frost’s book Stuff is an excellent introduction to the phenomenon that’s easy to read and filled with interesting stories and anecdotes. Frost also reports on the latest research designed to help hoarders with their problems. Unfortunately, hoarding is known to be extremely difficult to cure. Cities waste millions of dollars cleaning out the apartments of hoarders only to have them filled back up in a matter of months. By investigating effective treatment programs, researchers will hopefully be able to help hoarders beyond the quick fix of heavy-duty cleaning. All in all, I highly recommend Stuff.

Overall rating: 4.8/5 stars.


Filed under Psychology, Random

Nonconscious Qualia?

Here’s a strange idea: nonconscious qualia. Absurd, you might say? Well, many proponents of the so-called Higher-order approach to consciousness believe they not only exist, but are quite routine and omnipresent in our mental lives. Peter Carruthers, Uriah Kriegel, and David Rosenthal are three theorists who have openly talked about nonconscious qualia. Examples of nonconscious qualia include sensing redness, loudness, roughness, sweetness, etc. The idea is that there can be genuinely nonconscious sensory qualities. The absent-minded driver is a common case used to support the idea of nonconscious qualia. The only difference between conscious and nonconscious qualia is that, obviously, the conscious qualia are conscious.

More specifically, these theorists claim that there is nothing-it-is-like to have nonconscious qualia. That is the big difference: there is something-it-is-like to have conscious qualia but there is nothing-it-is-like to have nonconscious qualia. Why is there something-it-is-like to have conscious qualia? Because the presence of a higher-order mental state is what generates what-it-is-likeness. It is easy to see why people find higher-order theory to be absurd. After all, most people associate qualia with what-it-is-likeness, so to talk about qualia that there is nothing-it-is-like to be in seems absurd.

My own position is that there is something-it-is-like to have nonconscious qualia. This puts me at odds with both First-order and Higher-order theory. Higher-order consciousness, in my view, is much closer to a kind of self-conscious introspection than any kind of “noninferential higher-order thought” (granted that the objects of such self-consciousness don’t have to be just the self). And if I were to think that only conscious qualia have what-it-is-likeness, I would have to conclude that there is nothing-it-is-like to be a cat or a mouse, since cats and mice obviously aren’t capable of entertaining complex introspection. Some theorists like Peter Carruthers simply bite the bullet and deny there is anything-it-is-like to be a nonhuman animal. But I think that if what-it-is-likeness is going to be a coherent property at all, it will have to be a property shared by pretty much all lifeforms.

I think one reason why higher-order theorists think that what-it-is-likeness is associated with higher-order awareness is that Nagel’s original formulation was in terms of what-it-is-like for a subject and not just what-it-is-likeness. So the idea is that it is absurd to suppose there is something-it-is-like for Jones to not be aware of what-it-is-like to exist. But I fail to see why this is absurd. If we distinguish between what-it-is-likeness and our introspective awareness of what-it-is-like, then there seems to be no difficulties in thinking there is something-it-is-like to lack a meta-awareness of what-it-is-like. The phrase “for a subject” seems to suggest the presence of higher-order awareness, but this is because we are conflating the minimal subject with the conscious subject. If we thought the only legitimate type of subject was a conscious subject, then the idea of what-it-is-likeness without consciousness would be absurd. But if we thought there was a kind of minimal prereflective subjectivity intrinsic to being an embodied creature, then the idea of there being something “for a subject” without that subject being meta-aware is perfectly coherent.

1 Comment

Filed under Consciousness, Philosophy

Quote of the Day 5-15-2012

“Traditionally, the problem of existence has been most directly confronted through religion, and an increasing number of the disillusioned are turning back to it, choosing either one of the standard creeds or a more esoteric Eastern variety. But religions are only temporarily successful attempts to cope with the lack of meaning in life; they are not permanent answers. At some moments in history, they have explained convincingly what was wrong with human existence and have given credible answers. From the fourth to the eighth century of our era Christianity spread throughout Europe, Islam arose in the Middle East, and Buddhism conquered Asia. For hundreds of years these religions provided satisfying goals for people to spend their lives pursuing. But today it is more difficult to accept their worldviews as definitive. The form in which religions have presented their truths – myths, revelations, holy texts – no longer compels belief in an era of scientific rationality, even though the substance of the truths may have remained unchanged. A vital new religion may one day arise again. In the meantime, those who seek consolation in existing churches often pay for their peace of mind with a tacit agreement to ignore a great deal of what is known about the way the world works.” ~ Mihaly Csikszentmihalyi, Flow: The Psychology of Optimal Experience

Leave a comment

Filed under Random

The Nature of Visual Experience

[Image: Adelson’s checker-shadow illusion, in which squares A and B are printed in the same shade of gray yet square B appears lighter.]
Many philosophers have used visual illusions as support for a representational theory of visual experience. The basic idea is that sensory input in the environment is too ambiguous for the brain to really figure out anything on the basis of sensory evidence alone. To deal with this ambiguity, theorists have conjectured that the brain generates a series of predictions or hypotheses about the world based on the continuously incoming evidence and its accumulated knowledge (known as “priors”). On this theory, the nature of visual experience is explained by saying that what we experience is really just the prediction. So in the visual illusion above, the brain guesses that the B square is a lighter color and therefore we experience it as lighter. The brain guesses this because in its stored memory is information about typical configurations of checkered squares under typical kinds of illumination. On this standard view, all of visual experience is a big illusion, like a virtual-reality type Matrix.
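The prediction-from-priors idea admits a minimal numerical sketch. This is a hypothetical toy example of my own, not any theorist’s actual model, and the hypothesis names and probabilities are invented: when the raw luminance evidence is perfectly ambiguous between two hypotheses, the posterior just reproduces the learned prior, which is one way of putting the claim that what we experience is “really just the prediction”.

```python
# Toy Bayesian reading of the brain's "guess" (illustrative numbers only).
# Two hypotheses about square B's surface: a light square in shadow, or a
# dark square in plain light. The pixel luminance is identical either way,
# so the likelihoods are equal and the learned prior decides the percept.
priors = {"light_in_shadow": 0.8, "dark_in_light": 0.2}      # stored knowledge
likelihood = {"light_in_shadow": 0.5, "dark_in_light": 0.5}  # ambiguous pixels

unnormalized = {h: priors[h] * likelihood[h] for h in priors}
evidence = sum(unnormalized.values())
posterior = {h: unnormalized[h] / evidence for h in unnormalized}
# With ambiguous evidence the posterior equals the prior, so the winning
# "prediction" is light-in-shadow: square B is experienced as lighter.
```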

Lately I have been deeply interested in thinking about these notions of “guessing” and “prediction”. What does it mean to say that a collection of neurons predicts something? How is this possible? What does it mean for a collection of neurons to make a hypothesis? I am worried that in using these notions as our explanatory principle, we risk the possibility that we are simply trading in metaphors instead of gaining true explanatory power. So let’s examine this notion of prediction further and see if we can make sense of it in light of what we know about how the brain works.

One thought might be that predictions or guesses are really just kinds of representations. To perceive the B square as lighter is just for your brain to represent it as lighter. But what could we mean by representation? One idea comes from Jeff Hawkins’ book On Intelligence. He talks about representations in terms of invariancy. For Hawkins, the concept of representation and prediction is inevitably tied into memory. To see why, consider my perception of my computer chair. I can see and recognize that my chair is my chair from a variety of visual angles. I have a memory of what my chair looks like in my brain, and the different visual angles provide evidence that matches my stored memory of my chair. The key is that my high-level memory of my chair is invariant with respect to its visual features. But at lower levels of visual processing, the neurons are tuned to respond only to low-level visual features. So some low-level neurons only fire in response to certain angles or edge configurations. So from different visual angles these low-level neurons might not respond. But at higher levels of visual processing, there must be some neurons that are always firing regardless of the visual angle, because their level of response invariancy is higher. So my memory of the chair really spans a hierarchy of levels of invariancy. At the highest levels of invariancy, I can even predict the chair when I am not in the room. So if I am about to walk into my office, I can predict that my chair will be on the right side of the room. If I walked in and my chair was not on the right side, I would be surprised and I’d have to update my memory with a new pattern.
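The hierarchy-of-invariance idea can be sketched schematically. The two-layer structure and the view names below are my own illustrative assumptions, not Hawkins’ implementation: low-level units are variant (each fires only for its one preferred input pattern), while the high-level unit fires invariantly whenever any stored view of the chair matches.

```python
# Toy two-level hierarchy of invariance (names are made up).
stored_views = {"chair_front", "chair_side", "chair_back"}  # memory patterns

def low_level_unit(view, tuned_to):
    # Variant response: fires only for its single preferred pattern.
    return view == tuned_to

def high_level_unit(view):
    # Invariant response: fires if ANY stored low-level pattern matches,
    # so recognition survives changes in visual angle.
    return any(low_level_unit(view, v) for v in stored_views)
```

A unit tuned to the front view stays silent for the side view, yet the high-level unit still recognizes the chair, which is the sense in which the high-level memory is invariant with respect to visual features.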

On this account, representation and prediction are intimately tied to memory, our stored knowledge of reality that helps us make predictions so we can better cope with our lives. But what is memory, really? If we are going to be neurally realistic, it will have to be cashed out in terms of the various dispositions of brain cells to react in certain ways. So memory is the collective disposition of many different circuits of brain cells, particularly their synaptic activities. Dispositions can be thought of as mechanical mediations between input and output, and invariance can thus be thought of as invariance in mediation. Low-level mediation varies with the fine-grained features of the input; high-level mediation is less sensitive to fine-grained detail. What does this tell us about visual experience? I believe this mediational view of representation offers an alternative account of illusions.

I am still working out the details of this idea, so bear with me. My current thought is that the brain's "guess" that square B is lighter can be understood dispositionally rather than intentionally. Imagine we reconstruct the 2D visual illusion in the real world, so that we experience the same illusion that the B square is lighter. What would it mean for my brain to make this prediction? On the dispositional view, in making such a prediction my brain is essentially saying, "If I go over and inspect that square some more, I should expect it to be lighter." If you actually did go inspect the square and found that it is not a light square, you would have to update your memory store. Visual illusions, however, persist despite high-level prediction. This is because the entire memory store for low-level visual processing overrides the meager alternative prediction generated at higher levels.

What about qualia? The representational view says that the qualitative features of the B square result from the square being represented as lighter. But if we understand representations as mediations, we see that representations don't have to be spooky things with strange properties like "aboutness". Aboutness is just cashed out in terms of specificity of response. But the problem of qualia is tricky. In a way, I think the "lightness" of the B square is just an illusion added "on top" of a more or less veridical acquaintance. So I feel like I should resist inferring from this minor illusory augmentation that all of my visual experience is massively illusory in this way. Instead, I think we could see the "prediction" of the B square as lighter as a kind of augmentation of mediation. The brain augments the flow of mediations such that if this illusion were a real scene and someone asked you to "go step on all the light squares", you would step on the B square. For this reason, I think the phenomenal impressiveness of these illusions is amplified by their 2Dness. If it were a 3D scene, the "prediction" would take the form of possible continuations of mediated behavior in response to a task demand (e.g. finding light squares). But because it's a 2D image, the "qualia" of the B square being light takes on a special form, pressing itself upon us as a "raw visual feel" of lightness that on the surface doesn't seem to be linked to behavior. But if we understand the visual hierarchy of invariant mediation, and the ways in which the higher and lower levels influence each other, we don't need to conclude that all visual experience is massively illusory because we live behind a Kantian screen of representation. Understanding brain representations as mediational rather than intentional helps strip the Kantian image of its persuasive power.


Filed under Consciousness, Philosophy

Quote of the day 5-13-2012

“What convinces me that a cognitivistic theory could capture all the dear features I discover in my inner life is not any ‘argument’, and not just the programmatic appeal of thereby preserving something like ‘the unity of science’, but rather a detailed attempt to describe to myself exactly those features of my life and the nature of my acquaintance with me that I could cite as my ‘grounds’ for claiming that I am – and do not merely seem to be – conscious.” ~Dan Dennett, “Toward a Cognitive Theory of Consciousness”, in Brainstorms


Filed under Random

Are Bacteria Capable of Caring?

At a conference on consciousness I attended recently, I suggested that bacteria are capable of care, but that rocks aren't. Several people disagreed with me vehemently on this point. They said that it's an obvious anthropomorphization to say that bacteria care. Their argument was that bacteria are just fully mechanical biochemical systems. To say a bacterium is capable of care is to speak metaphorically, or something; it can't be literally true.

I don't know about this. It seems true to me that bacteria are capable of caring but rocks aren't. And you can't just say bacteria are biochemical machines, because under the right description, so are humans. Moreover, seen through the lens of physics, humans are really no different from any other physical system, including rocks and bacteria: it's all just fermions and bosons at the bottom anyway. So the argument that bacteria can't care because they are mechanical or fully physical doesn't work, because under the right description humans look the same as bacteria, and we all agree it's appropriate to say humans care.

So the difference between the bacterium and the rock is not going to be a matter of being a physical system obeying physical law. Where I think the difference lies is in the way a bacterium's physical matter is organized. It is at the level of organization that we see differences between rocks and bacteria. Bacteria, like all lifeforms, are balanced at the edge of thermodynamic disequilibrium. They are unstable in their organization, always ready to break down, but somehow they keep going (until death, at least). Their instability is characteristically stable, like a whirlpool in a river.

Moreover, there is something unique about the activities of a bacterium compared to other mechanical systems. The bacterium's activities are continuously involved in producing the very physical structures that constitute it. When the bacterium digests nutrients, it processes that matter in order to rebuild the membrane that distinguishes it from the environment. The bacterium is thus continuously self-producing, always taking in nutrients to maintain the membrane that defines it against its environment. Theorists have called this kind of dynamic organization autopoietic. Whether or not autopoiesis alone is sufficient to demarcate life from nonlife (some think we also need to add notions of adaptivity), it is uncontroversial that organic lifeforms have a unique kind of organizational structure in virtue of something like autopoiesis.

But why should we think such an organizational structure warrants the claim that bacteria care about things? Well, I admit that this gloss takes advantage of metaphors to some extent, and all metaphors are in some sense literally false. But I still think it's true to say that bacteria care about things while rocks and other inorganic entities don't. Imagine you place some sugar in front of a rolling boulder or a moving bacterium. On one level of description, we could talk about the rock encountering the sugar in its path in input/output computational terms: the lump of sugar is an input into the system, the rock "computes" its response, and then generates an output, a slight change in behavior.

Similarly, we could use the same input/output description for the bacterium encountering the lump: the sugar is an input into the system, the bacterium "computes" its response, and the output is a new set of behaviors. But just because we can apply this abstract characterization to both systems doesn't mean that the rock and the bacterium are doing the same thing when they encounter the sugar. The difference, I think, lies in the way the two entities "experience" the sugar. The rock does not really experience the sugar in anything like the same way, because the bacterium is on the lookout for sugar. It is attuned to sugar, as opposed to other nutrients. It desires sugar. It seeks out sugar. Its perception is valenced. It lives in a small lifeworld where all that matters is finding nutrients. None of this is true of the rock. If the rock sees the world through a valence at all, it valences everything equally. It has no preferences. No affectivity. As Heidegger said,

A stone never finds itself but is simply present-at-hand. A very primitive unicellular form of life, on the contrary, will already find itself, where this affectivity can be the greatest and darkest dullness, but for all that it is in its structure of being essentially distinct from merely being present-at-hand like a thing. (History of the Concept of Time, p. 255)

I think this is a very insightful remark from Heidegger. He recognizes that there is something unique about the organizational structure of a bacterium when compared to a rock. When I say a bacterium "cares" about the world, I am really referencing Heidegger's technical notion of "affectivity". I talked about this a lot in my Master's Thesis. The key idea is the bacterium's "finding itself". This kind of self-reflexive organizational structure is, I think, a nontechnical precursor to the concept of autopoiesis. This is pretty speculative, but bear with me. The idea is that rocks and stones don't see the world as ready-to-hand; that is, they don't see the world in terms of what it affords the possibility of doing. In other words, it is appropriate to think of bacteria as organized with respect to the future. This is a potentially mystifying claim, but it's not that complex. From the perspective of physics, it's still all just fermions and bosons obeying the laws of physics. But when dealing with lifeforms, the concept of valence is necessarily tied to the concept of a creature lacking something. The bacterium lacks the nutrients necessary to construct its membrane, so it seeks them out. Lack in organisms is always defined with respect to the future, what some ecological psychologists have called prospectivity. This type of absential, future-oriented organization is what Terrence Deacon has called ententional phenomena in his new book Incomplete Nature. I haven't finished the book yet, but what I have read so far is quite brilliant.


Filed under Consciousness, Heidegger, Philosophy