[What change blindness experiments suggest] is that the visual brain may have hit upon a very potent problem-solving strategy, one that we have already encountered in other areas of human thought and reason. It is the strategy of preferring meta-knowledge over baseline knowledge. Meta-knowledge is knowledge about how to acquire and exploit information, rather than basic knowledge about the world. It is not knowing so much as knowing how to find out. The distinction is real, but the effect is often identical. Having a super-rich, stable inner model of the scene could enable you to answer certain questions rapidly and fluently, but so could knowing how to rapidly retrieve the very same information as soon as the question is posed. The latter route may at times be preferable since it reduces the load on biological memory itself. Moreover, our daily talk and practice often blur the line between the two, as when we (quite properly) expect others to know what is right in front of their eyes.
-Andy Clark, Natural-Born Cyborgs
I really like this quote. I think it captures perfectly the evolutionary argument against representational internalism, which stipulates that the brain continuously generates an internal phenomenal model to compensate for imperfections in the retinal image, particularly with respect to “depth ambiguity” (since the retinal image is more or less 2D). That my current experiential content is the result of a compensatory brain simulation seems wildly unparsimonious. Regarding the computational problem of depth ambiguity, we can reasonably propose that ambient light in normal environments nomothetically reflects certain information concerning the surface layout. An important part of this information, directly relevant to spatial perception, is the texture gradient. Take this field:
The ambient light of the sun “settles” into a stable array wherein the visual angles meeting at your geometric point of view specify a “gradient” of texture density that conforms to the actual 3D layout of the environment. Because this information is reflected by the light and contained in the structure of the overlapping visual angles, we can say that information directly concerning 3D layout is “specified” by the ambient light. If we wanted to access spatial information for use in locomotion or hunting, how do you think Mother Nature would accomplish the task? By building a simulation system that literally constructs phenomenal visual experience from ambiguous retinal inputs through inferential reasoning? Or would evolution favor an Andy Clark-style on-the-fly access system that acquires meta-knowledge about how to pick up information specified in the ambient array (this is called “sampling” the optic array)?
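To make the texture-gradient point concrete, here is a minimal Python sketch of the underlying geometry (my own illustration, not anything from Clark or Gibson; the function name `texel_angles` is made up). It computes the visual angles subtended by equal-sized patches of ground texture at increasing distances from an observer of a given eye height. Both the angular width and the foreshortened angular depth of each patch shrink monotonically with distance, so the gradient of optical texture density itself carries the 3D layout information:

```python
import math

def texel_angles(eye_height, texel_size, distances):
    """Angular width and (foreshortened) angular depth, in radians,
    of equal-sized square ground texels seen from a given eye height.

    Width uses a small-angle approximation (texel_size / slant range).
    Depth is the difference in depression angle between the near and
    far edges of a texel spanning [d, d + texel_size] on the ground.
    """
    angles = []
    for d in distances:
        slant = math.hypot(d, eye_height)          # eye-to-texel range
        width = texel_size / slant                  # angular width
        depth = (math.atan(eye_height / d)
                 - math.atan(eye_height / (d + texel_size)))
        angles.append((width, depth))
    return angles
```

For example, `texel_angles(1.6, 0.5, [2, 4, 8, 16])` yields steadily shrinking projected sizes: equally spaced grass clumps pack ever more densely into the optic array with distance, and a flat field produces exactly this lawful gradient, so no inferential reconstruction from a "flat" image is required to specify the ground's slant and extent.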
On this “externalist” view, additional information processing to jump from 2D to 3D is unnecessary, provided that the brain-body system learns how the ambient optic array changes in response to bodily locomotion. By learning the rules relating how our eyes move to how the visual angles are transformed (this might be the function of microsaccades), we can pick up information in such transformations that specifies the 3D layout of the environment (thanks to texture gradients and motion parallax). Accordingly, the experiential content of visual perception does not consist in experiencing a brain simulation, but rather in experiencing the brain-body system behaviorally reacting or “resonating” to the information specified in the environment relevant to our bodily concerns and projects. Such information is not just visual but tactile, gravitational, chemical, and aural. Behavioral resonance of course becomes complicated when we realize that the human environment contains information relevant not just to navigating through a 3D world, but also to social concerns and our higher-order narrative consciousness.
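The motion-parallax part of this claim has equally simple geometry behind it. The sketch below (again my own illustration, with invented names `parallax_rate` and `relative_depth`) assumes the simplest case: an observer translating sideways past stationary points lying directly abeam. A point at distance d then sweeps across the optic array at angular rate v/d, so nearer points move faster, and the ratio of two points' rates directly specifies their relative depth — the transformation of the array under locomotion carries the 3D information:

```python
def parallax_rate(lateral_speed, distance):
    """Angular velocity (rad/s) of a stationary point directly abeam
    of an observer translating sideways at lateral_speed (m/s)."""
    return lateral_speed / distance

def relative_depth(rate_near, rate_far):
    """Ratio distance_near / distance_far, read straight off the optic
    flow: since rate = v / d, the ratio of rates inverts the ratio of
    distances. Faster angular motion means nearer."""
    return rate_far / rate_near
```

A point 2 m away sweeps by four times faster than one 8 m away, so their relative depth (1:4) is available in the flow itself, without the observer ever knowing its own walking speed — which is the kind of "metaknowledge about how the array transforms" the essay describes.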
Hopefully this brief essay has shown why representationalism is unnecessary and unparsimonious as an explanation of visual consciousness. It is also worth mentioning that this critique of internal representationalism does not rule out the usefulness of representations in theoretical explanation; e.g., topographic or “isomorphic” representations in the cortex do not suffer from the ontological problems that “indicator” representations do.