
The Semantics of Brain Computations

In his paper “The Semantic Challenge to Computational Neuroscience”, Rick Grush claims that “there is a fact of the matter concerning whether or not a given neural state is representing something: a fact that is neither created nor imperiled by the interpretive whims of whomever might be examining the neural system.” However, Grush does not offer in this paper an objective, “whimless” criterion to determine whether or not some neural system is representing. Instead, he offers a critique of two attempts on the market: information semantics and biosemantics. So although Grush does not present his own view in this paper, he seems satisfied that such a criterion could, in principle, be given (and presumably thinks he could provide one himself).

I’m not so optimistic that the computationalist can ever escape from the imperilments of their “interpretive whims”. I believe that representationalism often takes advantage of the power of metaphors to make semantic interpretations of neural phenomena seem more philosophically substantive than they really are. Take away the power of metaphor, and the examples are left barren as exemplars of semantic interpretation. Take the example Grush uses of neurons in posterior parietal area 7a. Grush claims that a semantic interpretation would look like this: “The cell is processing information about retinal location and eye orientation to provide information about direction of stimulus relative to the head.”

For me, the crucial phrase is that the cell “provides information about”. Another term might be “carries information about”. Obviously, Grush can’t be talking about providing in the same way we would say “Bob provided food for his family”; neither could he be talking about carrying in the same way someone might carry a book. Neuronal cells can’t possibly be providing or carrying like that. So what does providing or carrying mean in this context? Perhaps Grush means that the products of the neuronal processing are “about” the direction of the stimulus, and when the cell “provides” or “carries” this information it simply passes it along to the next cell through its outputs. But here we are still dealing with metaphors. My claim is that if we really took a hard metaphysical look at the phenomena, we would discover that the only things happening are purely mechanical, not semantic. When we look at a posterior parietal cell firing, all we as scientists can observe is the physical operations of the mechanisms. We cannot observe a neuron “providing information” to another cell. We can merely interpret neuronal activity as if it were carrying information.

Although I can’t speak for Grush’s actual view, perhaps he would appeal to something like isomorphic representation, which is essentially a kind of preservation of structural features. Retinotopic maps in the visual cortex are an obvious example. In these cases, it seems like we might say that the visual cortex represents certain aspects of the retina in virtue of having these similar structural features. But a question arises: is the visual cortex causally effective in virtue of the “similarity” relation it bears to the retina, or in virtue of its own intrinsic physical powers, which just happen to be structured like the retina for whatever evolutionary reason? There’s no denying that there is, objectively speaking, an isomorphic representation in the visual cortex, but what work is that isomorphism doing that isn’t already being done by the intrinsic causal powers of the visual cortex?

It doesn’t look like the semantic interpretation gets us any real functional work above and beyond what we already know brain cells are doing. So what does it get us? I believe the representational view helps our minds more easily understand a complex causal process. It is just illuminating to interpret the neural system as if it was representing things in the external world. And it seems possible to operationalize representations such that we have a criterion for when to talk about representation. So for example, if we throw a stick into some tall grass and a dog goes chasing after it, it makes perfect sense to say that the dog has a representation in its mind that is about that stick. This is indeed the classic scenario in which representational semantics seems perfectly appropriate.

But there is another interpretation that makes sense of the same data without losing the explanatory power that representational semantics gives us. We could say that instead of the mental state in the dog’s head being about that stick out there in the world, that mental state merely mediates between the stimulus of first seeing that stick thrown, and the complex searching behavior which results. We might say that instead of the mental state representing that external piece of the world, the mental state keeps the dog in a state of searching behavior until the stimulus finally comes into view again. So it is not necessary to talk about the dog’s brain as being about the stick. We can simply say that the dog’s brain mediates between the stimulus and the behavior in such a way that the stimulus doesn’t have to be in continuous view in order to be effective in triggering complex behavior. In virtue of mediating representations, the dog’s brain can pursue prey that have run behind a boulder. And although it’s certainly possible to interpret the dog’s brain as if it had an internal representation of that prey, an alternative view is that what the neuronal representations are “about” is more related to their own intrinsic activity than to anything “out there” in the world.

One form of intrinsic processing that seems plausible for a neuron to do is a kind of prediction about the future states of its own activity. On this view, neuronal representational processing is not really “about” the external world, but rather, is directed towards the future state of its own activity. So the representations stay within the brain, semantically speaking. By reducing their own prediction error, neurons could form complex mediational pathways for reliably guiding actions in the face of a messy and variable world. For example, although my coffee cup is never quite in the same position when I go to reach for it, reducing prediction error helps my fingers readjust their dynamics on the fly in order to meet my invariant goal of bringing it to my lips. So having invariant representations coding for a goal, and using prediction errors to adjust motor dynamics to bring me closer to that goal in light of noisy variables, seems like a plausible semantic story. But notice how this semantic story is never really about some neuronal state “standing in for” or “carrying information about” some external state of affairs. The brain seems to be focused only on its own dynamics and its own goals.
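The coffee-cup story can be made concrete with a toy sketch. What follows is purely illustrative: the function name, the gain, and the noise level are all invented for the example, not drawn from any model in the literature. A single scalar “hand position” is repeatedly corrected toward an invariant goal using nothing but the error between the predicted and sensed state:

```python
import random

def reach(goal, steps=50, gain=0.5, noise=0.05):
    """Correct a scalar 'hand position' toward an invariant goal using
    only the prediction error between goal and sensed position."""
    position = 0.0
    for _ in range(steps):
        error = goal - position                    # prediction error
        position += gain * error                   # error-driven correction
        position += random.uniform(-noise, noise)  # the messy, variable world
    return position

final = reach(goal=1.0)
print(abs(final - 1.0) < 0.2)  # prints True: the hand ends near the cup despite noise
```

The point of the sketch is that the loop never consults any description of the cup “out there”; it only compares its own predicted state against its own sensed state, which is the sense in which the semantics stays within the system.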

The mechanist might still respond by saying that even the prediction-error story is still overly semantic and prone to worries about interpretational whims. The mechanist might say that it’s still just mechanisms all the way down, and that talking about neurons as if they are predicting their own future states of affairs is just a useful fiction to help us understand a complex system. This might be true. My only point was that a semantic story about the brain seems to be less mysterious if we make the “intentionality” or “directionality” of the representations less far-reaching. So rather than neurons being about something that’s way out there in the external world, it seems more tractable for neurons to be about their own activity.


Filed under Philosophy, Psychology

Tyler Burge begs the question against nonrepresentationalism

There is an interesting article by Tyler Burge in the NY Times philosophy blog called “A Real Science of Mind” that I happen to disagree with vehemently. He basically claims that representationalism is the only game in town when it comes to explaining visual perception. In fact, he doesn’t even hint that representationalism is but one theory, and one supported by philosophically ambiguous explanations of what it means to actually “represent” something. Indeed, he says:

Explanation in perceptual psychology is a sub-type of task-focused explanation. What makes it distinctively psychological is that it uses notions like representational accuracy, a specific type of correlation… Why are explanations in terms of representational accuracy needed? They explain perceptual constancies… Visual perception is getting the environment right — seeing it, representing it accurately… Perceptual psychology explains how perceptual states that represent environmental properties are formed.

Now, it seems to me that Burge has massively begged the question against nonrepresentational explanations of low-level visual perception.

In making this claim, I put myself in a precarious position. One of the main points of Burge’s article is that vision science is a highly developed and “mathematically rigorous” science. Burge is insistent that vision science is on solid explanatory ground, and I have no intention of challenging the mountain of empirical evidence gathered by orthodox representational visual science. No, the question is not about the facts, but rather, about the interpretation of the facts. It is my claim that representationalism is but one way of interpreting the empirical facts gathered by orthodox visual science.

My claim goes as follows: talk about the visual creature “accurately representing” the environment can be replaced, without losing any explanatory power, by talk of “discriminating information” in the environment. Some would say this is merely a matter of semantics, and in a way they would be right. But when it comes to philosophical explanations of visual perception, semantics are of the utmost importance. But why bother with this semantic triviality between “representation” and “discrimination”? Aren’t they the same thing? In a way, yes. But, as William Ramsey has argued in his important book Representation Reconsidered, this theoretical equivalency is actually the result of orthodox visual science moving away from classic forms of representationalism. For when a visual scientist claims that the organism “accurately” represents a feature of the environment in perception, all the explanatory work is being done by the neural workhorse that is the brain. And, naturally, this explanation is ultimately cashed out in physiological terms, against Burge’s claim that visual science is truly representational.

It is my contention that talk of “differentiation” or “discrimination” is just as psychological as talk of “representation”, but discrimination is more ontologically coherent. Take the example of a hungry primate perceiving a juicy red strawberry. Orthodox visual science would say that in successfully perceiving the strawberry the primate must have accurately represented the red strawberry as a red strawberry (and not, say, as a purple poisonberry). This is the classic representationalist explanation. On my view, it would be more philosophically parsimonious to say that in successfully perceiving the strawberry, the primate discriminated the strawberry from out of the ambient array of energy surrounding it. Another way of putting it would be that in perceiving the red strawberry the primate attended to the information specific to the features of the strawberry that were relevant to its internal needs, namely, hunger.

On this account, the primate can be said to perceive the red strawberry as nutritious, not as a strawberry. Notice how this is starkly different from the representationalist interpretation. For the representationalist, the primate’s perception of the strawberry is cashed out in terms of how accurate the internal representation is in comparison to the objective features of the strawberry. If the primate represents the strawberry as being red, and the strawberry really is red, then the primate’s perception of the strawberry is said to be “accurate”, and thus successful. It is then said that the brain consults the representation when forming its intentions to act. Orthodox visual theory is thus committed to what some philosophers have called the sense-represent-plan-act model. The primate receives proximal sense-data, tries to form an accurate representation of the distal stimulus, consults the representation to form a plan, and then executes a motor command to pluck the strawberry and bring it into its mouth.

On my interpretation, we can eliminate the “represent” and “plan” stages and replace them with a sensorimotor model. On this account, the task of the brain is to discriminate the meaningful information already in the environment by attending to it. Neurally speaking, the discrimination supervenes upon the neural patterns of activity. So how is this different from the representationalist story? Because unlike Burge, I think the behavioral nature of discriminatory perception is actually a plus, not a downside (and of course, behavioral explanations are a kind of psychological explanation unless we beg the question against behaviorism). So we shouldn’t expect visual neuroscience to engage in representational theorization when the proper explanatory level of description is behavioral, not representational. I have never seen a representational theory that avoided the homunculus problem without merely collapsing into descriptions of the behavior of neurons.

And for good reason. Although Burge claims that representation is well understood by visual science, he is only half-right. Representation is well understood if by that we mean that we understand the neural underpinning and physiological correlations of the representation. But as William Ramsey has argued, this is precisely the point. Orthodox visual science has never actually successfully explained how a representation actually functions as a representation, as opposed to being a merely physiological mediator in a long chain of neural activity that ultimately leads to effective motor behavior.

So while Burge is perfectly right to say that “neuralbabble” is nonexplanatory on the psychological level, I believe he is mistaken when he claims that representationalism offers a philosophically rigorous interpretative framework that explains the phenomena at hand. Burge recognizes this when he talks about “generic representations” that apply so widely to any causal correlation as to no longer be explanatorily useful in cognitive science. To make representation explanatorily worthwhile, he introduces the notion of “accuracy”. But as I attempted to explain above, there is an alternative interpretation of accuracy available that focuses on the accurate perception of an affordance. Crucially, though, the accurate perception of an affordance is entirely different from the accurate representation of an objective feature. This is because the affordance is more directly tied into the motivational circuits and can thus undercut the “represent” and “plan” stages of the sense-represent-plan-act model and jump right into the scientifically respectable arena of “sensation” and “action”. Hence, sensorimotor models of visual perception. The notion of accurately representing objective features of the environment is replaced by the accurate discrimination of information specific to invariant properties of objects which are themselves specific to affordances (opportunities for behavior). Perceiving the strawberry then becomes a matter of attending to those features of the strawberry which either past experience or innate knowledge has shown to be relevant to homeostatic needs.

Hence, we can account for the normative or “psychological” component of perception (its possible success or failure) in terms of how well the organism is capable of detecting information specific to properties that are themselves specific to affordances. And this offers us a path towards a “real science of mind”. Why? Because affordance perception is directly tied into those sensorimotor causal pathways that have been so successfully studied by orthodox visual science. And it does this without invoking a notion of one thing somehow “standing in for” something else.

Now, my representational critics will respond by saying that the discrimination of information specific to affordances is no better understood than the notion of accurately representing the environment. Point well taken. But it is my contention that orthodox visual science has been talking about discrimination all along. So I really don’t see myself as being a “revolutionary”. I contend that we could go into almost every single visual science article and replace “represents” with “discriminates” without losing any explanatory value. In fact, I think this semantic change would actually enhance the explanatory power of visual science precisely because “discrimination” is more ontologically tractable insofar as it doesn’t force a sharp distinction between the “merely mechanical” sensation of a bacterium and the “cognitive” perceptual capacities of “representing creatures”. One could say that my theory offers a “flat ontology” wherein all lifeforms are said to share in the capacity for discrimination of information and reactivity in direct response to that discrimination. Accordingly, my interpretation is immediately compatible with the advances being made in evolutionary biology.

Moreover, and most importantly for my purposes, the rejection of representationalism for an explanation of basic visual perception would leave room for those phenomena that truly deserve a representational explanation: human symbolic cognition. Indeed, in rejecting representationalism for the explanation of basic visual perception I do not reject all representational explanations like Anthony Chemero does. I thus think, following Clark and Toribio, that some phenomena are “representation hungry”, while others aren’t. Following Gibson, I do not think that basic visual perception as shared by most animals on this planet is representation hungry. What I do think absolutely requires a representational explanation is the symbolic and linguistic cognition of humans. For the referential system that is language absolutely requires an explanation of how one thing (a linguistic symbol) could “stand in for” something else. For example, the word “strawberry” cognitively stands in for a real strawberry. Now, I’m not claiming to have a complete theory worked out about symbolic cognition. But I think significant progress in the mind sciences would be made if we all recognized this demarcation between the nonrepresentational, sensorimotor cognition we share with nonhuman animals and the representational, symbolic cognition seemingly unique to humans.


Filed under Philosophy, Psychology

A Dialogue on Knowledge Between Two Philosophers

Martin: I ask you this then, what is knowledge?

John: Knowledge is justified true belief. For example, I know that I am seeing that tree over there. By all means, it is true that there is a tree over there. Accordingly, I have a belief that there is a tree over there. This belief is justified. Therefore, I know the tree.

M: You use the term “I” as if this term is not ambiguous. When you say “I know”, what is the nature of this “I”?

J: When I use the term “I”, I am referring to my self. This simply serves as an indexical reference. It points something out in the world, namely, myself.

M: Now you have connected the self to your answer of what knowledge is. Tell me, what is the nature of this self?

J: Simple. The self is an agent. An agent is one who acts under his own power and is the subject of experience.

M: Now you use the equally ambiguous concepts of agency, subjectivity, and experience. Tell me, what do you make of the cognitive unconscious?

J: Please, define how you are using that term. I am unfamiliar with the latest developments in the psychological sciences.

M: Of course. The cognitive unconscious is vast and intricately structured. It is emotional and speedy. It is the foundation of our perceptual systems. We are not metacognitively aware of how this network operates, but we are occasionally conscious of its results. We simply give this system instructions and the system executes them smoothly. For example, we are not conscious of how we move our mouth and lips when speaking. We simply get lost in the conversation, in the meaning, not the syntax.

J: I see where you are going with this. You want to know if I consider the unconscious mind as part of the agent. Yes and no. We can say that the unconscious mind is much like the external environment. It simply acts as an input into the self-conscious system. We could say that it “preprocesses” the input but then “presents” or “re-presents” the input to the conscious mind so that we can experience it consciously. This is the mechanism through which I gain knowledge about the tree. If the workings of the cognitive unconscious never reached into my conscious mind, I would never believe that its contents were true, and thus, according to my definition, I would never have knowledge. Consciousness is thus necessary for knowledge because consciousness is essential for believing.

M: Let me see if I understand what you are saying. There is a stimulus first and foremost which is strictly independent of our mind.  We can characterize this stimulus in terms of “primary” qualities such as length, extension, motion, etc. This stimulus impinges upon the receptors in our nervous system and becomes raw “sense-data”. The sense-data is then processed by the unconscious system in order to be presented to the conscious mind. Accordingly, the conscious mind does not experience the stimulus directly, but rather, it only experiences the re-presentation of the stimulus after it has been processed by the unconscious mind. We can say then that the unconscious system generates “conscious percepts” from raw sense-data and that these percepts are characterized in terms of “secondary” qualities, or “qualia”. Is this right?

J: Yes, that sounds more or less right. Knowledge is thus representational. When I see the tree, my belief that the tree is over there and has such-and-such properties is dependent on my having a belief about the tree. The mental content is thus intentional because it is about things “out there” in the world. I know that my belief is true because the properties are more-or-less preserved in the representation. We say then that the representation corresponds to the stimulus and that knowledge is justified true belief. The belief is true because it corresponds to the stimulus and it is justified because evolution usually produces systems which are more-or-less good at getting representational systems to properly correspond to the environment so as to successfully control behavior.

M: Tell me, what is the nature of this presentation to the conscious mind? To what is the presentation presented?

J: It is presented to me, the subject.

M: This term is as ambiguous as the “I”. What is the subject?

J: It is the self, the mind, the agent, the “I”. The agent is someone who has beliefs about the world, that is to say, who has knowledge and a subjective mental life. We call this “consciousness”.

M: You defined the self in terms of knowledge, you defined knowledge in terms of representations, and you defined representations in terms of a self! It feels like we are going in circles.

J: It does seem peculiar. But that’s why consciousness is so mysterious. We don’t quite know how to define it yet nor how it works. But once we get a better grasp on what consciousness is, we should have a better understanding of how re-presentation works and thus, a better understanding of knowledge. But we need to first update our metaphors. I agree with you that the term presentation is vague and ill-defined. Traditionally, it was understood in terms of a homunculus or rational Ego. Theater metaphors are prone to this homuncularity. This is why I like Thomas Metzinger’s notion of a self-viewing theater. The problem with the theater metaphor is that it presupposes an audience, and we then run into a problem of regress when trying to understand the homunculus. But if we say that the theater views itself, then we don’t actually need a conscious self for knowledge to occur. This is why Metzinger says that his theory of mind is selfless.

M: But the mystery of consciousness which generates these problems of selfhood is entirely of your own making! Because your definition of knowledge is circular when you don’t specify the ontological structure of the “I”, there seems to be this fundamental mystery in coming to terms with knowledge and what the mind is. But why should we define knowledge in terms of beliefs and representations? This is only dogma. You of all people should realize that Descartes himself simply assumed that the mind is set off against the environment in a distinct ontological sphere. You took this insight but naturalized it by assuming that the mind is a process not a distinct ontological substance. But because you assumed that the self is isolated from the world in the first place, you explained intentionality, the aboutness of knowledge, our contact with reality, in representational terms. This is because there has to be some mediation between the senseless primary properties and the sensible secondary qualities. But why should we assume that the primary qualities are meaningless?

J: What do you mean? The stimulus is just a big jumble!

M: On the contrary. Take the example of the ground. Is the ground a jumble? If we consider the objects which rest upon it, yes, the ground is (sometimes) a jumble. But take a flat grassy plain. Surely, if we consider the plain as a whole to be a stimulus, we can say that the stimulus is orderly and structured. Moreover, this plain as it exists in itself is not meaningless for an embodied creature. For one, the whole of it anchors us to it by means of gravity. Our entire bodily sense of reality is permeated by an unconscious knowledge that the ground swells beneath our feet and that it affords stability and locomotion. Even with my eyes closed, the ground primordially means something-to-stand-upon. This meaning is codetermined by the intrinsic rigidity of my own body and the rigidity of the ground itself. My ability to pick up and grasp this meaning is intrinsic to my being, spontaneous, and prereflective. And with my eyes open, I am able to receive stimulus information about the nature of the ground as a surface. Indeed, look out before you:

[Photograph of a grassy field, omitted.]
M: The field as a whole is reflecting ambient light towards us. The farther away the ground, the more compressed the light reflecting off it. There is thus a texture gradient in the field-as-a-stimulus. This gradient is determined by more or less objective, albeit receiver-relative, laws. I suppose that this stimulus is ordered and meaningful. It affords opportunities for behavior if we are running through it, or it simply stands before us as three-dimensional if we stare at it (a rare activity in the animal kingdom). Now, consider the question of intentionality and the structure of our knowledge of affordances. Surely, we do not need consciousness in order to gain knowledge of affordances. After all, affordances are simply classes of behaviorally similar things. The perceptual development of an organism can be more or less described in terms of learning what the environment affords. We learn that the ground is supportive, that mothers afford comfort and food, that chairs are for sitting, food is for eating, doors are for going-through, etc.

In such cases, the skill to be learned is that of discrimination, not inference. We do not need to infer secondary qualities from meaningless primary qualities. If visual perception were actually achieved by means of inferring depth and motion from single points of light intensity, vision would surely be miraculous. Instead, we need only suppose that the organism’s knowledge of the world is achieved by means of enaction. Enaction is the history of structural coupling with the environment. Our structural coupling with the environment is codetermined by the structure of the organism and the environment. This is intentionality. Our experience with the world is simultaneously about me and about the world. As I move through the environment, my vision gives me information both about the layout of the world and my own position in respect to that layout. This is why affordance perception cuts across the subject-object divide. Perceptions are both subjective and objective. We must reject a strict dualism between subject and object.

We do not need to add anything to the stimulus. We do not need to preprocess it for consciousness, for our minds. This is unnecessary. Our history of structural coupling guarantees that the environment is directly meaningful in terms of affording opportunities for behavior. Behavior is simply a way of being-in-the-world. It is a way to maintain the unity and structural organization of our bodies so as to maintain our continual rigidity in respect to the environment. Behavior is living.

Knowledge therefore cannot be described in representational terms without falling prey to ambiguity or vicious circularity. While there might be representations in the perceptual system, they are action-oriented, not symbolic. We are thus in the world directly. Our primary mode of access to the world is in behavioral terms. We can call this mode of coping circumspective concern. This view of knowledge indicates a fundamental shift in metaphysics, for metaphysics must include the whole of nature, and we are a part of this whole.

J: Yes, but what of consciousness?

M: That, my friend, is a conversation for another day!


Filed under Philosophy

Enrichment versus Differentiation: Two Theories of Perception

In their excellent book An Ecological Approach to Perceptual Learning and Development, Eleanor Gibson and Anne Pick distinguish between two broad approaches to visual perception: enrichment theories and differentiation theories. The first theory claims that the initial sensory stream needs to be enriched because the stimulus upon the eye is too poor for accurate perception of environmental structure. It was Bishop Berkeley who first argued that perception of space is impossible without enrichment. Indeed, he says that

It is, I think, agreed by all that distance, of itself and immediately, cannot be seen. For distance being a line directed end-wise to the eye, it projects only one point in the fund of the eye, which point remains invariably the same, whether the distance be longer or shorter.

Because Berkeley assumed that the retinal or “proximal” stimulus is indeterminate in respect to the “distal” stimulus, he thought that the brain needs to make some kind of probabilistic hypothesis or interpretation in order for there to be experience of distance. Thus, our experience in three dimensions is merely the result of our brain “guessing” that the earth is 3D based on the inadequate sensory reception. In this same respect, Helmholtz’s notion of unconscious inference has recently been refined into a computational theory based on the construction of representations, as with David Marr’s influential theory. There is also a rationalist variant of enrichment theories currently in vogue. These rationalists also emphasize inference in perception, but think that the major premises for inference are evolutionarily ancient. This strong nativist view is championed by people like Chomsky and Pinker.

In contrast with enrichment theory, differentiation theories emphasize the redundancy of information available in the environment regardless of whether the perceiver is there to pay attention to it. Accordingly, differentiation theorists take a different approach to Berkeley’s problem of distance. Consider the following diagram.


This picture represents two different formulations of the distance problem. The line with the points A, B, C, D is how Berkeley set up the problem: the points cannot be discriminated with respect to distance. As J.J. Gibson noted, however, the distance along this line is a fact of geometry, not one of optics or visual perception. The points W, X, Y, Z, by contrast, can be discriminated by the retina. As the observer moves through the ambient field of light that has settled in the environment, a pattern of stimulation transforms across the retina. Because the pattern is structured nomothetically (in a lawlike manner), it corresponds or “contains information” specific to events, objects, and layouts in the environment. Indeed, the nomothetic relation between distance and the density of optic information allows for the perception of texture gradients along the surface of the earth (notice how points Y and Z are closer together on the retina). In order to perceive accurately, then, the observer simply needs to learn to discriminate what J.J. Gibson called the variables “invariant over transformation”.
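The geometry behind the texture gradient can be made concrete with a small calculation. The sketch below is only illustrative (the eye height and distances are hypothetical values, not taken from the text): for an eye at a fixed height above the ground, equally spaced ground points subtend progressively smaller angular gaps the farther away they are, which is exactly why points like Y and Z crowd together on the retina.

```python
import math

EYE_HEIGHT = 1.6  # assumed eye height in meters (hypothetical value)

def depression_angle(d):
    """Angle below the horizon (radians) at which a ground point
    at horizontal distance d projects to the eye."""
    return math.atan2(EYE_HEIGHT, d)

# Equally spaced texture elements on the ground, like points W, X, Y, Z
distances = [2, 4, 6, 8, 10]
angles = [depression_angle(d) for d in distances]

# The angular separation between neighboring elements shrinks with
# distance: this lawful compression is the texture gradient.
separations = [a - b for a, b in zip(angles, angles[1:])]
for d, s in zip(distances, separations):
    print(f"gap starting at {d:2d} m subtends {math.degrees(s):.2f} deg")
```

Because the relation between ground distance and angular density is lawlike, the gradient itself carries the distance information that Berkeley’s single-line formulation leaves out.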

This theory is known as the “ecological” approach to visual perception. It emphasizes that information specific to the level of reality relevant to organisms is widely available and lawfully structured in ambient energy arrays. In order to perceive, the animal simply needs to discriminate the invariant patterns of transformation that arise through its movement within the ambient field of energy. This is called “sampling” the optic array. The development of perception is largely a matter of learning these discriminatory skills. Alva Noë has talked at length about such skills in terms of what he calls “sensorimotor knowledge”. Indeed, he says that

The basic claim of the enactive approach is that the perceiver’s ability to perceive is constituted (in part) by sensorimotor knowledge (i.e. by practical grasp of the way sensory stimulation varies as the perceiver moves).

Movement through the ambient array corresponds to a dynamic “optic flow field”. Transformations of this flow field contain information about both the perceiver and the environment. As E. Gibson and Pick write,

There is a second reciprocal relation implied by the affordance concept: a perception-action reciprocity. Perception guides action in accord with the environmental supports or impediments presented, and action in turn yields information for further guidance, resulting in a continuous perception-action cycle. Realization of an affordance, as this reciprocity implies, means that an animal must take into account the environment resources presented in relation to the capabilities and dimensions of its own body. Children begin learning to do this very early and continue to do so as their powers and dimensions increase and change.

As we can see, then, enrichment theories and differentiation theories begin with very different assumptions about the nature of the perceptual stimulus. Whereas differentiation theorists hold that the perceptual stimulus is sufficient for the guidance of action, enrichment theorists hold that the stimulus is impoverished. But as the diagram indicates, the stimulus only appears impoverished if we view it in terms of physiological optics rather than ecological optics. The British empiricists thought the retinal stimulus was poor because they failed to consider the problem of perception in terms relevant to the organism’s behavioral needs. This is what happens when mathematicians reason about visual perception from a priori principles of geometry: they wind up missing the abundance of information available for attentional discrimination.


Filed under Philosophy, Psychology

Meta-knowledge: Why Representationalism Is Unnecessary as an Explanation of Visual Consciousness

[What change blindness experiments suggest] is that the visual brain may have hit upon a very potent problem-solving strategy, one that we have already encountered in other areas of human thoughts and reason. It is the strategy of preferring meta-knowledge over baseline knowledge. Meta-knowledge is knowledge about how to acquire and exploit information, rather than basic knowledge about the world. It is not knowing so much as knowing how to find out. The distinction is real, but the effect is often identical. Having a super-rich, stable inner model of the scene could enable you to answer certain questions rapidly and fluently, but so could knowing how to rapidly retrieve the very same information as soon as the question is posed. The latter route may at times be preferable since it reduces the load on biological memory itself. Moreover, our daily talk and practice often blurs the line between the two, as when we (quite properly) expect others to know what is right in front of their eyes.

-Andy Clark, Natural-born Cyborgs

I really like this quote. I think it perfectly captures the evolutionary argument against representational internalism, which stipulates that the brain continuously generates an internal phenomenal model to compensate for imperfections in the retinal image, particularly with respect to “depth ambiguity” (since the retinal image is more or less 2D). That my current experiential content is the result of a compensatory brain simulation seems wildly unparsimonious. As for the computational problem of depth ambiguity, we can reasonably propose that ambient light in normal environments nomothetically reflects certain information concerning the surface layout. An important part of this information, directly relevant to spatial perception, is the texture gradient. Take this field:


The ambient light of the sun “settles” into a stable array wherein the visual angles meeting at your geometric point of view specify a “gradient” of texture density that conforms to the actual 3D layout of the environment. Because this information is reflected by the light and contained in the structure of the overlapping visual angles, we can say that information directly concerning 3D layout is “specified” by the ambient light. If we wanted access to spatial information for use in locomotion or hunting, how would Mother Nature accomplish the task? By developing a simulation system that literally constructs phenomenal visual experience from ambiguous retinal inputs through inferential reasoning? Or by developing an Andy Clark-style on-the-fly access system with meta-knowledge about how to pick up information specified in the ambient array (what Gibson called “sampling” the optic array)?

On this “externalist” view, additional information processing to jump from 2D to 3D is unnecessary, provided the brain-body system learns how the ambient optic array changes in response to bodily locomotion. By learning the relations between how our eyes move and how the visual angles transform (this might be a function of microsaccades), we can pick up information in those transformations that specifies the 3D layout of the environment (thanks to texture gradients and motion parallax). Accordingly, the experiential content of visual perception does not consist in experiencing a brain simulation but rather in the brain-body system behaviorally reacting or “resonating” to information specified in the environment that is relevant to our bodily concerns and projects. Such information is not just visual but tactile, gravitational, chemical, and aural. Behavioral resonance of course becomes more complicated once we realize that the human environment contains information relevant not just to navigating a 3D world, but also to social concerns and our higher-order narrative consciousness.
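Motion parallax, one of the transformations mentioned above, also has a simple lawful structure that can be sketched numerically. The values below are illustrative assumptions (the text gives no numbers): for an observer translating sideways at speed v, a point at depth d straight ahead sweeps across the visual field at an angular rate of roughly v / d, so nearer points drift faster. That depth-dependent drift is information in the transformation itself, available for pickup without any inferential reconstruction.

```python
# Minimal motion-parallax sketch (illustrative values, not a retinal model):
# a point at depth d directly ahead of a laterally moving observer
# sweeps across the visual field at angular rate v / d radians per second,
# so nearer points move faster -- a lawful pattern specifying depth.

v = 1.5  # assumed walking speed in m/s (hypothetical value)
depths = [2.0, 5.0, 10.0, 50.0]  # meters

angular_rates = [v / d for d in depths]
for d, w in zip(depths, angular_rates):
    print(f"point at {d:5.1f} m drifts at {w:.3f} rad/s")
```

The inverse relation means the gradient of drift rates across the scene orders surfaces by depth, which is the sense in which the transformation “specifies” 3D layout.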

Hopefully this brief essay has shown why representationalism is unnecessary and unparsimonious as an explanation of visual consciousness. It is also worth noting that this critique of internal representationalism does not rule out the usefulness of representations in theoretical explanation; for example, topographic or “isomorphic” representations in the cortex do not suffer from the ontological problems that “indicator” representations do.


Filed under Philosophy, Psychology

Coming Around on Representationalism

I have been reading a lot of Dreyfus and Heidegger lately, and naturally I have been slightly leaning toward the anti-representationalist camp. By anti-representationalism, I mean the school of thought that deemphasizes the importance of representations in cognition in favor of an embodied, enactive approach to traditional philosophical problems. Don’t get me wrong, I am still in favor of such approaches, but thanks to a discussion over at Pete Mandik’s blog, I have turned a more sympathetic ear to the representationalist camp.

Two papers linked in the blog discussion made me rethink my position. The first is a reply to Dreyfus by Rick Grush and Pete Mandik. In the paper they argue that representations have explanatory usefulness and, furthermore, that just because an action is context-dependent doesn’t mean the activity isn’t representational. They also defend representationalism on phenomenological grounds with examples such as the ability to represent alternative chess positions while playing. Dreyfus would counter that truly “skilled” grandmasters do not form such representations but rather engage the chessboard and “deal” with it non-representationally. I think Dreyfus would be right, but that would be an exceptional case; I imagine most people cannot cope with the chessboard in such a manner and have to consciously represent the board and alternative possibilities.

The second paper, posted by Eric Thomson, which pushed me further from the anti-representationalist camp, was by William Bechtel. In it, Bechtel discusses dynamical systems theory and the role of representations in explanatory models of cognition. He defuses the revolutionary character of dynamical systems theory and instead shows how such approaches can complement more traditional representational and mechanistic explanatory models.

So, while I still hold that for some cases, such as action, a minimally representational approach is superior, thanks to Mandik and Bechtel I have become much more sympathetic toward explanatory models of cognition that utilize representations.


Filed under Philosophy, Psychology