The Semantics of Brain Computations

In his paper “The Semantic Challenge to Computational Neuroscience”, Rick Grush claims that “there is a fact of the matter concerning whether or not a given neural state is representing something: a fact that is neither created nor imperiled by the interpretive whims of whomever might be examining the neural system.” However, Grush does not offer in this paper an objective, “whimless” criterion to determine whether or not some neural system is representing. Instead, he offers a critique of two attempts on the market: information semantics and biosemantics. So although Grush does not present his own view in this paper, he seems satisfied that such a criterion could, in principle, be given (and presumably thinks he could provide one himself).

I’m not so optimistic that the computationalist can ever escape the imperilments of their “interpretive whims”. I believe that representationalism often exploits the power of metaphor to make semantic interpretations of neural phenomena seem more philosophically substantive than they really are. Take away the metaphors, and the examples are left barren as exemplars of semantic interpretation. Take the example Grush uses of neurons in posterior parietal area 7a. Grush claims that a semantic interpretation would look like this: “The cell is processing information about retinal location and eye orientation to provide information about direction of stimulus relative to the head.”

For me, the crucial phrase is that the cell “provides information about”. Another term might be “carries information about”. Obviously, Grush can’t be talking about providing in the sense in which we would say “Bob provided food for his family”; nor can he be talking about carrying in the way someone might carry a book. Neuronal cells can’t possibly be providing or carrying like that. So what do providing and carrying mean in this context? Perhaps Grush means that the products of the neuronal processing are “about” the direction of the stimulus, and that when the cell “provides” or “carries” this information it simply passes it along to the next cell through its outputs. But here we are still dealing with metaphors. My claim is that if we took a hard metaphysical look at the phenomena, we would discover that the only things happening are purely mechanical, not semantic. When we look at a posterior parietal cell firing, all we as scientists can observe is the physical operation of the mechanisms. We cannot observe a neuron “providing information” to another cell. We can merely interpret neuronal activity as if it were carrying information.

Although I can’t speak for Grush’s actual view, perhaps he would appeal to something like isomorphic representation, which is essentially a kind of preservation of structural features. Retinotopic maps in the visual cortex are an obvious example. In these cases, it seems we might say that the visual cortex represents certain aspects of the retina in virtue of having these similar structural features. But a question arises: is the visual cortex causally effective in virtue of the “similarity” relation between it and the retina, or in virtue of its own intrinsic physical powers, which just happen to be shaped like the retina for whatever evolutionary reason? There’s no denying that there is, objectively speaking, an isomorphic representation in the visual cortex, but what work is that isomorphism doing that isn’t already being done by the intrinsic causal powers of the visual cortex?
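To make “preservation of structural features” concrete, here is a toy Python sketch of what the isomorphism claim amounts to: a mapping from retinal locations to cortical locations that preserves neighborhood relations. The grids, names, and numbers are my own illustrative assumptions, not anything from Grush or from visual neuroscience.

```python
# A toy sketch of isomorphic representation as structure preservation:
# a retina-to-cortex mapping that mirrors neighborhood relations
# (here, up to a doubling of scale). Purely illustrative.

retina = [(x, y) for x in range(4) for y in range(4)]   # retinal locations

def retinotopic_map(loc):
    x, y = loc
    return (2 * x, 2 * y)                # magnified, but structure-preserving

def adjacent(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

# Every adjacency on the retina is mirrored (at twice the distance) in the
# cortical image of the map; that mirroring is the "structural similarity".
for a in retina:
    for b in retina:
        if adjacent(a, b):
            ma, mb = retinotopic_map(a), retinotopic_map(b)
            assert abs(ma[0] - mb[0]) + abs(ma[1] - mb[1]) == 2

print("neighborhood structure preserved")
```

Notice that nothing physical hangs on whether anyone runs this check: each cortical location does whatever it does in virtue of its own activity, whether or not the mirroring is ever noticed.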

It doesn’t look like the semantic interpretation gets us any real functional work above and beyond what we already know brain cells are doing. So what does it get us? I believe the representational view helps our minds more easily understand a complex causal process. It is simply illuminating to interpret the neural system as if it were representing things in the external world. And it seems possible to operationalize representations such that we have a criterion for when to talk about representation. For example, if we throw a stick into some tall grass and a dog goes chasing after it, it makes perfect sense to say that the dog has a representation in its mind that is about that stick. This is the classic scenario in which representational semantics seems perfectly appropriate.

But there is another interpretation that makes sense of the same data without losing the explanatory power that representational semantics gives us. We could say that instead of the mental state in the dog’s head being about that stick out there in the world, the mental state merely mediates between the stimulus of first seeing the stick thrown and the complex searching behavior that results. We might say that instead of the mental state representing that external piece of the world, the mental state keeps the dog in a state of searching behavior until the stimulus finally comes into view again. So it is not necessary to talk about the dog’s brain as being about the stick. We can simply say that the dog’s brain mediates between the stimulus and the behavior in such a way that the stimulus doesn’t have to be in continuous view in order to trigger complex behavior. In virtue of mediating representations, the dog can pursue prey that has run behind a boulder. And although it’s certainly possible to interpret the dog’s brain as if it had an internal representation of that prey, an alternative view is that what the neuronal representations are “about” is more closely related to their own intrinsic activity than to anything “out there” in the world.
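If it helps, the mediating reading can be put in deliberately non-semantic terms. Here is a minimal Python sketch, with entirely made-up names, in which the internal state has no “aboutness” at all: it is just a switch that a transient stimulus turns on and that keeps search behavior running until the stimulus comes back into view.

```python
# A toy sketch of the mediating-state reading. The internal state carries no
# content "about" the stick; it only links a transient stimulus to sustained
# searching. All names here are illustrative assumptions.

class MediatingState:
    def __init__(self):
        self.active = False               # no semantic content, just on or off

    def trigger(self):
        self.active = True                # e.g., the stick was seen being thrown

    def behave(self, stimulus_visible: bool) -> str:
        if stimulus_visible:
            self.active = False           # stimulus back in view: state discharges
            return "orient-to-stimulus"
        if self.active:
            return "keep-searching"       # stimulus out of view, behavior persists
        return "idle"

dog = MediatingState()
dog.trigger()                              # stick disappears into the tall grass
print(dog.behave(stimulus_visible=False))  # keep-searching
print(dog.behave(stimulus_visible=False))  # keep-searching
print(dog.behave(stimulus_visible=True))   # orient-to-stimulus
```

Nothing in the sketch needs to stand in for the stick; the state is exhaustively described by how it links a past stimulus to ongoing behavior.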

One form of intrinsic processing that seems plausible for a neuron is a kind of prediction about the future states of its own activity. On this view, neuronal representational processing is not really “about” the external world but is directed towards the future state of the neuron’s own activity. So the representations stay within the brain, semantically speaking. By reducing their own prediction error, neurons could form complex mediational pathways for reliably guiding action in the face of a messy and variable world. For example, although my coffee cup is never quite in the same position when I go to reach for it, reducing prediction error lets my fingers readjust their dynamics on the fly in order to meet my invariant goal of bringing the cup to my lips. So having invariant representations coding for a goal, and using prediction error to adjust motor dynamics to bring me closer to that goal in light of noisy variables, seems like a plausible semantic story. But notice how this semantic story is never really about some neuronal state “standing in for” or “carrying information about” some external state of affairs. The brain seems to be focused only on its own dynamics and its own goals.
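The coffee-cup case can be sketched as a simple error-driven loop. The following Python toy is my own illustration, not anyone’s actual model of motor control: an invariant goal, a cup whose position varies from trial to trial, and an update that just keeps shrinking the prediction error. The function names and the learning rate are assumptions for illustration.

```python
import random

# A toy sketch of error-driven reaching: the goal is invariant, the cup's
# sensed position is noisy, and the motor state is adjusted on the fly to
# shrink the prediction error. Purely illustrative.

def reach_for_cup(sensed_cup: float, steps: int = 20, rate: float = 0.3) -> float:
    hand = 0.0                            # 1-D stand-in for the motor state
    for _ in range(steps):
        predicted = hand                  # where the system expects to feel the cup
        error = sensed_cup - predicted    # prediction error: sensed minus expected
        hand += rate * error              # readjust dynamics to reduce the error
    return hand

for trial in range(3):
    cup = 1.0 + random.uniform(-0.2, 0.2)  # the cup is never quite in the same place
    print(f"trial {trial}: cup at {cup:.3f}, hand ends at {reach_for_cup(cup):.3f}")
```

The loop converges on every trial even though the cup is never in the same place twice, and at no point does the story require more than the system reducing its own error signal.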

The mechanist might still respond that even the prediction-error story is overly semantic and prone to worries about interpretational whims. The mechanist might say that it’s still just mechanisms all the way down, and that talking about neurons as if they are predicting their own future states is just a useful fiction to help us understand a complex system. This might be true. My point is only that a semantic story about the brain seems less mysterious if we make the “intentionality” or “directedness” of the representations less far-reaching. Rather than neurons being about something way out there in the external world, it seems more tractable for neurons to be about their own activity.
