Tag Archives: representation

Quote of the Day – The Code of Consciousness

The code used to register information in the brain is of little importance in determining what we perceive, so long as the code is used appropriately by the brain to determine our actions. For example, no one today thinks that in order to perceive redness some kind of red fluid must ooze out of neurons in the brain. Similarly, to perceive the world as right side up, the retinal image need not be right side up.


J. Kevin O’Regan, Why Red Doesn’t Sound Like a Bell, p. 6


Filed under Consciousness, Psychology

The Nature of Visual Experience


Many philosophers have used visual illusions as support for a representational theory of visual experience. The basic idea is that sensory input from the environment is too ambiguous for the brain to figure out much on the basis of sensory evidence alone. To deal with this ambiguity, theorists have conjectured that the brain generates a series of predictions or hypotheses about the world based on the continuously incoming evidence and its accumulated knowledge (known as “priors”). On this theory, the nature of visual experience is explained by saying that what we experience is really just the prediction. So in the visual illusion above (Adelson’s checker-shadow illusion), the brain guesses that the B square is a lighter color and therefore we experience it as lighter. The brain guesses this because its stored memory contains information about typical configurations of checkered squares under typical kinds of illumination. On this standard view, all of visual experience is a big illusion, like a virtual-reality-style Matrix.

Lately I have been deeply interested in thinking about these notions of “guessing” and “prediction”. What does it mean to say that a collection of neurons predicts something? How is this possible? What does it mean for a collection of neurons to make a hypothesis? I am worried that in using these notions as our explanatory principle, we risk the possibility that we are simply trading in metaphors instead of gaining true explanatory power. So let’s examine this notion of prediction further and see if we can make sense of it in light of what we know about how the brain works.

One thought might be that predictions or guesses are really just kinds of representations. To perceive the B square as lighter is just for your brain to represent it as lighter. But what could we mean by representation? One idea comes from Jeff Hawkins’ book On Intelligence. He talks about representations in terms of invariance. For Hawkins, the concepts of representation and prediction are inevitably tied to memory. To see why, consider my perception of my computer chair. I can see and recognize that my chair is my chair from a variety of visual angles. I have a memory of what my chair looks like, and the different visual angles provide evidence that matches that stored memory. The key is that my high-level memory of my chair is invariant with respect to its visual features. At lower levels of visual processing, however, neurons are tuned to respond only to low-level visual features. Some low-level neurons fire only in response to certain angles or edge configurations, so from different visual angles these neurons might not respond at all. But at higher levels of visual processing, there must be some neurons that keep firing regardless of the visual angle, because their level of response invariance is higher. My memory of the chair thus spans a hierarchy of levels of invariance. At the highest levels of invariance, I can even predict the chair when I am not in the room: if I am about to walk into my office, I can predict that my chair will be on the right side of the room. If I walked in and my chair was not on the right side, I would be surprised and I’d have to update my memory with a new pattern.
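The hierarchy of invariance can be caricatured in code. A minimal sketch, assuming made-up unit names and a trivial pooling rule (nothing here is from Hawkins or the post): low-level units fire only for specific views, while a high-level unit pools over them and stays active for any view of the same object.

```python
# Hypothetical low-level feature detectors, each tuned to one viewing angle.
LOW_LEVEL_TUNING = {
    "edge_unit_front": {"front"},
    "edge_unit_side":  {"side"},
    "edge_unit_back":  {"back"},
}

def low_level_responses(view):
    """Each low-level unit fires (1) only when the view matches its tuning."""
    return {unit: int(view in tuned) for unit, tuned in LOW_LEVEL_TUNING.items()}

def high_level_response(view):
    """An invariant 'chair' unit fires if ANY low-level unit it pools over fires."""
    return int(any(low_level_responses(view).values()))

for view in ["front", "side", "back"]:
    print(view, low_level_responses(view), "chair:", high_level_response(view))
```

Which low-level unit fires changes with every viewing angle, but the pooled “chair” unit responds identically across all of them: that steadiness is the toy analogue of response invariance.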

On this account, representation and prediction are intimately tied to memory, our stored knowledge of reality that helps us make predictions to better cope with our lives. But what is memory, really? If we are going to be neurally realistic, it seems it will have to be cashed out in terms of the dispositions of brain cells to react in certain ways. Memory, then, is the collective dispositions of many different circuits of brain cells, particularly their synaptic activities. Dispositions can be thought of as mechanical mediations between input and output, and invariances as invariances in mediation. Low-level mediation varies with the fine-grained features of the input; high-level mediation varies less with fine-grained detail. What does this tell us about visual experience? I believe this mediational view of representation offers an alternative account of illusions.

I am still working out the details of this idea, so bear with me. My current thought is that the brain’s “guess” that square B is lighter can be understood dispositionally rather than intentionally. Let’s imagine that we reconstruct the 2D visual illusion in the real world, so that we experience the same illusion that the B square is lighter. What would it mean for my brain to make this prediction? On the dispositional view, in making such a prediction my brain is essentially saying “If I go over and inspect that square some more, I should expect it to be lighter.” If you actually did go inspect the square and found it is not a light square, you would have to update your memory store. Visual illusions, however, persist despite high-level prediction, because the entirety of the memory store for low-level visual processing overrides the meager alternate prediction generated at higher levels.

What about qualia? The representational view says that the qualitative features of the B square result from the square being represented as lighter. But if we understand representations as mediations, we see that representations don’t have to be these spooky things with strange properties like “aboutness”. Aboutness is just cashed out in terms of specificity of response. But the problem of qualia is tricky. In a way I think the “lightness” of the B square is just an illusion added “on top” of a more or less veridical acquaintance. So I feel I should resist inferring from this minor illusory augmentation that all of my visual experience is massively illusory in this way. Instead, I think we could see the “prediction” of the B square as lighter as a kind of augmentation of mediation. The brain augments the flow of mediations such that if this illusion were a real scene and someone asked you to “go step on all the light squares”, you would step on the B square. For this reason, I think the phenomenal impressiveness of the illusion is amplified by its 2Dness. If it were a 3D scene, the “prediction” would take the form of possible continuations of mediated behavior in response to a task demand (e.g. finding light squares). But because it’s a 2D image, the “quale” of the B square being light takes on a special form, pressing itself upon us as a “raw visual feel” of lightness that on the surface doesn’t seem to be linked to behavior. But if we understand the visual hierarchy of invariant mediation, and the ways in which the higher and lower levels influence each other, we don’t need to conclude that all visual experience is massively illusory because we live behind a Kantian screen of representation. Understanding brain representations as mediational rather than intentional helps us strip the Kantian image of its persuasive power.


Filed under Consciousness, Philosophy

Thoughts on the fundamental problem of representation

I’ve been thinking a lot about the so-called “fundamental problem of representation”: what is a representation and how does it work as a representation? How do representations represent? We need to first answer what a representation is. Many philosophers seem to agree that it has something to do with “standing in”. The obvious example is a photograph. A photograph of a cat is a representation of a cat because the photograph “stands in” for the real cat. How does this work? Well, it seems to need an interpreter to interpret the photograph as a representation of a cat. But if we want to explain how the brain represents something, it obviously won’t do to posit an interpreter, for this is just a homuncular explanation.

So the photograph example is kind of a nonstarter when it comes to understanding how the brain represents something. Many philosophers believe that in order for the brain to perceive the world, it must form a representation of the world. In this way, perceiving the world is seen as forming a model of the world, which is used to compute action plans. But is this really a scientific explanation? When a brain perceives a cat, what does it mean for brain activity to “stand in” for that cat? This “standing in” function is obscure. For this reason, Eric Dietrich prefers to talk about representations as “mediators”. A representation is a mediator between a stimulus and behavior.

This makes sense to me, for I can imagine what it means for something to mediate between a physical perturbation and physical behavior. But doesn’t a thermometer also mediate between a stimulus and a behavior? What makes neurons different from thermometers? Aren’t neurons just complex bio-machines? And machines are machines. But I’m convinced there is a difference between a thermometer and a brain. I think representations in the brain are genuinely mental whereas I do not think this is true of the thermometer. Why? I haven’t quite worked this out, but I think the difference is that the mechanisms of mediation in the brain are responding to meaningful information, whereas the mechanisms of mediation in the thermometer are not responding to meaning at all. Meaning is mental. Mental mediations are mediations in response to meaning. We can thus make a fundamental distinction between nonmental representation and mental representation.

But what is meaningful information? How can we understand meaning ontologically? I think the concept of affordances is useful here. Let’s start simple. Think of sucrose molecules. Now imagine a thermometer-like machine with sensors designed to respond to sucrose, mechanisms of mediation (“processing”), and an output behavior (turning on a green light). We have no reason to think of this machine as instantiating any truly mental mediations; its mediations respond to the sucrose purely causally. But now imagine a bacterium. It too has biochemical sensors designed to discriminate sucrose, mechanisms of mediation for processing it, and output behaviors. Some people think I’m crazy for holding this view, but I genuinely think that in the case of the bacterium, the mechanisms of mediation are mental. Why? Because the sucrose affords something to the bacterium, namely, nutrition. The sucrose is thus meaningful to the bacterium, whereas it is not meaningful to the machine. There is an affective valence even at the level of the bacterium; it is just hard to imagine. But put yourself in the “mind” of a bacterium. Its whole world has a valence. It is attracted to or repelled by physical perturbations. Unlike the machine, the bacterium perceives these perturbations as genuinely meaningful, for the sucrose affords an opportunity to help maintain a norm (the norm of survival). I think this emphasis on survival and affective valence is important, because I think of it as a means to solve the frame problem. Having a norm which regulates behavior enables the mechanisms of mediation to be responsive to more than just brute causal information. It enables the perception of affordances. The norm of survival is the Ur-desire, the spark of mentation. Arguably, the boundary between life and nonlife is fuzzy, so it’s not quite clear where to draw the line, but that there is a line is undeniable.
I don’t doubt that robots could in principle instantiate their own norms to solve the frame problem, but I imagine they will look similar to biological norms.
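The contrast between brute causal mediation and norm-regulated mediation might be caricatured like this. All class names, thresholds, and energy numbers below are my invention; this is a toy illustration under those assumptions, not a model of real chemotaxis:

```python
class SucroseDetectorMachine:
    """Brute causal mediation: an unconditional input -> output mapping."""

    def mediate(self, sucrose_present: bool) -> str:
        # No internal norm is at stake; the same input always yields the same output.
        return "green_light_on" if sucrose_present else "green_light_off"


class Bacterium:
    """Norm-regulated mediation: behavior is modulated by a maintained norm."""

    def __init__(self, energy: float = 1.0):
        self.energy = energy  # the norm: keep energy up to survive

    def mediate(self, sucrose_present: bool) -> str:
        self.energy -= 0.2  # metabolism: even inaction is costly
        if sucrose_present and self.energy < 1.0:
            self.energy += 0.5
            return "swim_toward"   # sucrose *affords* nutrition relative to the norm
        return "tumble"            # keep searching

machine = SucroseDetectorMachine()
bug = Bacterium(energy=0.4)
print(machine.mediate(True))   # prints "green_light_on": fixed causal response
print(bug.mediate(True))       # prints "swim_toward": response depends on the norm's state
```

The machine’s output is a pure function of its input, while the bacterium’s response to the very same stimulus depends on the state of something it is trying to maintain. That dependence is the toy analogue of the stimulus being “meaningful”.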

But what about neurons? Many contemporary philosophers of mind think that “mental stuff” only happens in sufficiently complex brains. I think this is a mistake, for any association of representation with strictly neural processes will fail to answer the fundamental problem of representation. I think Dietrich is right that the concept of mediation is the right way to understand representation. But neural mediation is just one form of mediation. Evolutionarily speaking, neural mediation was highly adaptive, for it allowed organisms to increase the complexity of the mediation between stimulus and behavior. More complexity in mediation leads to “deeper” processing, i.e. more complex behavior. Neural processing allowed the mediations to become “abstract”. By this, I mean that the mechanisms of response become sensitive to more “global” features of a stimulus profile. This is called an increase in invariance, for differences in low-level stimulus detail will leave the “higher-order” circuits firing steadily. Think of perceiving a chair. We can recognize a chair from almost any viewing angle. As we change angles, the lower-level mechanisms of response fire only in response to very specific, low-level features of the chair, while at higher levels of processing the response is steady regardless of the lower-level features. In this way, we say that the representations in the brain have become more “abstract”. Language is the ultimate in abstract mediation, for linguistic “tagging” of the world enables us to respond to very abstract kinds of information, particularly with respect to social cognition and our collapse of human behavior into abstract folk-psychological categories. The vocabulary term “mind” is one of these ultimate abstractions, for it abstracts over all physical behavior and gives us a new category of response: person.
Such linguistic representations are meta-representational insofar as they allow organisms to represent representations, to mediate mediations. Many theorists, including myself, think that it is meta-representation which separates the mental life of humans from that of other animals.

In summary, the fundamental problem of representation is to understand what a representation is and to answer how it works as a representation. Representations are stand-ins for stimuli; a stand-in for a stimulus is a mechanism of mediation between stimulus and behavior. There are two fundamental types of mediation: mental and nonmental. Nonmental mediation is ubiquitous in the physical world, whereas mental mediation is rare. Mental mediation is mental because the mechanisms of mediation are sensitive to affordance information, which is grounded by norms, the most evolutionarily basic being the norm of survival. Mental representations thus form a continuum of possible abstraction, with neural representation being only one kind of mediation, enabling deeper abstraction through stimulus-invariance. There is thus nothing mysterious about representation. The term itself is a shorthand description of the complex mechanisms of mediation intrinsic to an entity.


Filed under Philosophy, Psychology

Noncomputational Representation

I’ve been thinking about representations a lot lately. More specifically, I have been thinking about the possibility of noncomputational representation. On first blush, this sounds strange, because representationalism has long been intimately connected with the Computational Theory of Mind, which basically says that the brain is some kind of computer, and that cognition is most basically the manipulation of abstract quasi-linguaform representations by means of a low-level syntactic realizer base. I’ve never been quite sure how this is supposed to work, but the gist of it is captured by the software/hardware distinction. The mind is the software of the computing brain. Representations, in virtue of their supposed quasi-linguaform nature, are often thought of in terms of propositions. For a brain to know that P, it must have a representation or belief to the effect that P. As it commonly goes: computation is knowledge, knowledge is representational, the brain represents, the brain is a computer.

But in this post I want to explore the idea of noncomputational representation. The basic question is whether we can say that the brain traffics in representations even though it is not a computer; i.e., if the brain is not a computer, does it still represent things, and if so, how and in what sense? Following Jeff Hawkins, I think it is plausible to suppose that the brain is not a digital computer. But if the brain is not computing like a computer, how is it so intelligent? Hawkins thinks that the secret of the brain’s intelligence is the neocortex. He thinks that the neocortex is basically a massive memory-prediction machine. Through experience, patterns and regularities in the world flow into the nervous system as neural patterns and regularities, which are then stored in the neocortex as memory. It is a well-known fact that cortical memories are “stored” in the same areas where the information was originally taken in and processed.

How is this possible? Hawkins’ idea is that the reason we see memory as being “stored” in the original cortical areas is that the function of storing patterns is to aid in the prediction of future patterns. As we experience the world, the sensory details change based on things like our perspective. Take my knowledge of where my chair is in my office. After experiencing this chair from various positions in the room, I now have a memory of where the chair is in relation to the room, of where the room is in relation to the house, the house in relation to the neighborhood, the neighborhood to the city, and so on. In terms of the chair, what the memory allows me to do is to “know” things about the chair which are independent of my perspective. I can look at the chair from any perspective and recognize that it is my chair, despite each sensory profile displaying totally different patterns. How is this possible? Hawkins’ idea is that the neocortex creates an invariant representation of the chair, based on the integration of lower-level information into a higher-order representation.

What does it mean to create an invariant representation? The basic idea can be illustrated in terms of how information flows into and around the cortex. At the lowest levels, the patterns and regularities of my sensory experience of the chair are broken up into scattered and modality-specific information. The processing at the lowest levels is carried out by the lowest neocortical layers. Each small region in these layers has a receptive field that is very narrow and specific, such as firing only when a line sweeps across a tiny upper-right quadrant in the visual field. And of course, when the information comes into the brain it is processed by contralateral cortical areas, with the right lower cortical layers only responding to information streaming in from the left visual field, and vice versa. As the modality-specific and feature-oriented information flows up the cortical hierarchy, the receptive fields of the cells become broader and their firing patterns more stable. Whereas the lower cortical areas only respond to low-level details of the chair, the higher cortical areas stay active in the presence of the chair under any experiential condition. These higher cortical areas can thus be said to have created an invariant representation of the patterns and regularities which are specific to the chair. The brain is able to create these representations because the world actually is patterned and regular, and the brain is responding to this.

So what is the cash value of these invariant representations? To understand this, you have to understand how once the information flows to the “top” of the hierarchy (ending in the hippocampus, forming long-term memories), it flows back down to the bottom. Neuroanatomists have long known that 90% of the connections at the lower cortical layers are streaming in from the “top”, and not the “bottom”. In other words, there is a massive amount of feedback from the higher levels into the lower levels. Hawkins’ idea is that this feedback is the physical instantiation of the invariant representations aiding in prediction. Because my brain has stored a memory/representation of what the chair is “really” like abstracted from particular sensory presentations, I am able to predict where the chair will be before I even walk into the room. However, if I walked into the room and the chair was on the ceiling, I would be shocked, because I have nothing in my memory about my chair, or any chair, ever being on the ceiling. Except I might have a memory about people pulling pranks by nailing furniture to ceilings, so after some shock, I would “re-understand” my expectations about future perceptions of chairs, being less surprised next time I see my chair on the ceiling.

Hawkins thinks that it is this relation between having a good memory and the ability to predict the future based on that memory which is at the heart of intelligence. In the case of memories flowing down to the sensory cortices, the “prediction” concerns what future patterns of sensory activity will be like. For example, the brain learns Sensory Pattern A and creates a memory of this pattern throughout the cortical hierarchy. The most invariant representation in the hierarchy flows down to the lower sensory areas and fires Pattern A again, based on the memory-based prediction about when it will experience Pattern A again. If the memory-prediction was accurate, the incoming pattern will match Pattern A, and the memory will be confirmed and strengthened. If the pattern that comes in is actually Pattern B, the prediction will be incongruous with the incoming information. This will cause the new pattern to shoot up the hierarchy to form a new memory, which then feeds back down to make predictions about future sensory experience. In the case of predictions flowing down into the motor cortices, the “predictions” are really motor commands. If I predict that when I walk into my office and turn right I will see my chair, and if the prediction is in the form of a motor command, the prediction will actually make itself come true, provided the chair is where the brain predicted it would be. Predictive motor commands are confirmed when the prediction is accurate, and disconfirmed if inaccurate.
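The up-and-down flow just described can be sketched as a toy loop. This is my minimal rendering of the idea, not Hawkins’ actual model; collapsing the hierarchy to a single layer and tracking memories with a simple strength counter are simplifying assumptions:

```python
class MemoryPredictionLayer:
    """Toy memory-prediction loop: stored patterns flow 'down' as predictions;
    a mismatch propagates 'up' and is stored as a new memory."""

    def __init__(self):
        self.memories = {}  # pattern -> strength

    def predict(self):
        """Feed the strongest stored pattern back down as the prediction."""
        if not self.memories:
            return None
        return max(self.memories, key=self.memories.get)

    def perceive(self, pattern):
        prediction = self.predict()
        if pattern == prediction:
            self.memories[pattern] += 1              # confirmed: strengthen the memory
            return "confirmed"
        self.memories[pattern] = self.memories.get(pattern, 0) + 1
        return "surprise"                            # mismatch: a new memory is formed

layer = MemoryPredictionLayer()
print(layer.perceive("A"))  # prints "surprise": nothing was predicted yet
print(layer.perceive("A"))  # prints "confirmed": A is now the prediction
print(layer.perceive("B"))  # prints "surprise": prediction was A, input was B
```

The surprising input (Pattern B) is exactly the one that gets written into memory, so the next encounter with B will be less surprising: a cartoon of the “shoot up the hierarchy, then feed back down” cycle.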

So, noncomputational representation is based on the fact that the brain (particularly the neocortex) is organized as a hierarchical memory system built on neuronal patterns and regularities, which in turn are composed of synaptic mechanisms like long-term potentiation. According to Hawkins, it is the hierarchy from bottom to top and back which gives the brain its remarkable powers of intelligence. The intelligence of humans, for Hawkins, is really a product of having a very good memory and being able to anticipate, and hence understand, the future in incredibly complex ways. If you understand a situation, you will not be surprised, because your memory is so accurate. If you do not understand it, you cannot predict what will happen next.

An interesting feature of Hawkins’ theory is that it predicts that the neocortex is fundamentally running a single algorithm: memory-prediction. So what gives the brain its adult modularity and specialization? It is the specific nature of the patterns and regularities of each sensory modality flowing into the brain. The common currency of the brain is patterns of neuronal activity, so every area of the cortex could, in principle, “handle” any other neuronal pattern. Paul Bach-y-Rita’s research on sensory substitution is highly relevant here. His research has shown that the common currency of perception is the detection and learning of sensory regularities; it has, for example, allowed blind patients to “see” light by wiring a camera onto their tongues. This is to be expected if the neocortex is running a single type of algorithm. What actually “wires” a cortical subregion is the type of information which streams in. Because auditory and visual data always enter the brain from unique points, it is not surprising that specialized regions of the cortex “handle” this information. But the science shows that if any region is damaged, a certain amount of plasticity allows other areas to “take over” the input. This is especially true in childhood. What Micah Allen and I have tried to show in our recent paper is that the higher-order functions of humans are based on the kinds of information available for humans to “work with”, namely, social-linguistic information. So the key to humans’ unique cognitive control is not having an evolutionarily unique executive controller in the brain. Rather, the difference is in what kinds of information can be funneled into the executive controller. For humans, a huge amount of the data streaming in is social-linguistic.
Our memory-prediction systems thus operate with more complexity and specialization because of the unique social-linguistic nature of the patterns which stream into the executive. So to answer Daniel Wegner’s question of “Who is the controller of controlled processes?”, the uniqueness of “voluntary” control is based on the higher-level invariant memories being social-linguistic in nature. The uniqueness of the information guarantees that the predictions, and thus the behavior, of the human cognitive control system will be unique. So we are not different from chimps insofar as we have executive control; the difference lies in what kinds of information that control has to work with in terms of its memory and predictive capacities.


Filed under Philosophy, Psychology

Thoughts on Representation

I just read an interesting paper by Eric Dietrich and Arthur B. Markman entitled “Discrete Thoughts: Why Cognition Must Use Discrete Representations.” In the paper, they first give a definition of general mental representations and make a distinction between discrete and continuous representations. Then they outline seven arguments for why they think discrete representations are necessary for any system that discriminates between two or more states.

Their definition of general mental representation is, I think, robust and conceptually useful. They define a representation to be any internal state that mediates, or plays a mediating role, between a system’s inputs and outputs in virtue of that state’s semantic content. They define semantic content in terms of information that is causally efficacious and in terms of what that information is used for. What this means is that representations have to be a part of mental causation. This approach reminds me a lot of Hofstadter’s work, which I have talked about here. Hofstadter emphasizes that mental representations mediate between environmental stimulus and behavioral output by virtue of being causal at the appropriate level of analysis. I take Dietrich and Markman to mean the same thing when they say that mental representations must be “psychologically real”. In Hofstadter’s terminology, the symbols must be active.

Next, the authors offer a definition of discrete representation: “A system has discrete representations if and only if it can discriminate its inputs.” If a system categorizes, then it has discrete representations. In contrast, a continuous representation would be more tightly bound to its correspondence with the environment. It would be coupled in such a way that it wouldn’t have the ability to make distinctions among its inputs. This is illustrated by the examples of a Watt governor and a thermostat. In a Watt governor, the arm angle is a continuous representation of the speed of the flywheel; in contrast, a thermostat must make an on/off discrimination of the continuously varying bimetal strip. The discrete representation supervenes on the continuous representation.
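The thermostat example can be made concrete with a small sketch. The linear curvature function and the 20-degree setpoint are made-up assumptions; the point is only that a two-valued output supervenes on a continuously varying quantity:

```python
SETPOINT = 20.0  # hypothetical switching threshold, degrees C

def strip_curvature(temp_c: float) -> float:
    """Continuous representation: curvature covaries smoothly with temperature."""
    return 0.01 * (temp_c - SETPOINT)

def thermostat_output(temp_c: float) -> str:
    """Discrete representation: an on/off discrimination of the strip's curvature."""
    return "heat_off" if strip_curvature(temp_c) >= 0.0 else "heat_on"

for t in (15.0, 19.9, 20.1, 25.0):
    print(t, round(strip_curvature(t), 4), thermostat_output(t))
```

The curvature takes a different value for every temperature, but the thermostat collapses that continuum into exactly two states; in the authors’ terms, the discrete representation supervenes on the continuous one.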

Finally, the authors give seven arguments for why cognition requires discrete representations. I won’t go over the arguments in detail; I will just list a brief summary taken from the text.

1. Cognitive systems must discriminate among states in the represented world.
2. Cognitive systems are able to access specific properties of representations.
3. Cognitive systems must be able to combine representations.
4. Cognitive systems must have some compositional structure.
5. There are strong functional role connections among concepts in cognitive systems.
6. Cognitive systems contain abstractions.
7. Cognitive systems require non-nomic representations.

In their conclusion the authors discuss the claim that it follows from the presence of discrete representations in the cognitive system that the best paradigm for cognitive science must be computationalism. They argue that any system that utilizes discrete representations must be finite and have deterministic transitions between states, which can be constructed into an algorithm. Thus, the mind can be described as a computational system. I think this is a clever argument, and it places computationalism in its proper role as the dominant paradigm in cognitive science. Until conflicting evidence shows that there is a better methodological framework for general mental phenomena, we shouldn’t deny computationalism’s place as the best explanatory paradigm.



Filed under Philosophy, Psychology