
Is Visual Perception Really a Nonstop Hallucination? A Plea for Conceptual Revision

Anyone who has taken Philosophy of Mind 101 will be familiar with the following claim: “We’re hallucinating reality all the time”. In this post, I will critically examine whether this statement should be taken as literally true. My intuition is that such claims are over-extended metaphors, and the true nature of visual perception is more complicated.

The popularity of The Matrix has provided a common conceptual framework for what philosophers and vision scientists have been claiming for many years, e.g., Helmholtz’s claim that perception is an “unconscious inference”. The original philosophical motivation can be traced to Descartes’ musings about whether we could ever distinguish reality from a dream. Nowadays, vision scientists frame these ideas in terms of vision being “representational”.

But is it true? The argument is prima facie convincing. Start with the phenomenon of visual illusions or visual hallucinations. For example, in Charles Bonnet Syndrome (CBS), people experience “complex” visual hallucinations in which figures and objects are hallucinated wholesale in dazzling detail. These fascinating cases clearly demonstrate that the brain is able to “represent” or “generate” non-existent objects in full phenomenological detail. But here’s the crucial move: if the brain can generate complex visual hallucinations, is it possible that ALL perception is a complex visual hallucination?

But as with all questions of possibility, we should be skeptical of any argument that jumps from possibility to actuality. Sure, it seems possible that ALL perception is a hallucination, but are we forced to draw this conclusion merely from knowing that complex visual hallucinations are possible? Not at all!

I’d like to suggest a different metaphor for understanding the relation between hallucinations and normal perception, one that preserves their essential difference rather than collapsing them into a single, continuous category. Instead of thinking that the existence of hallucinations forces us to conclude we are in the Matrix, I think it’s more useful to think of hallucinations as akin to augmented reality.


The idea is fairly straightforward: hallucinations such as those in CBS are analogous to a typical augmented-reality overlay. The basic idea is that there is a more-or-less continuous stream of “veridical” perception underlying our basic animal perception, and that complex hallucinations such as CBS are “projected upon” that stream just as an augmented-reality HUD is projected upon normal perception.

I think the AR metaphor for perception is more plausible than the wholesale Matrix hypothesis. My reasoning is grounded in an evolutionary thought experiment. Suppose for the sake of argument that the Matrix metaphor is correct and that ALL perception is a hallucination. Presumably, the brain is responsible for generating these representations. A further assumption is that more-or-less all mammalian brains have a similar hallucination-generating capacity. But how did such a capacity evolve over time? Take the earliest mammalian ancestor who lived “fully” in the Matrix of its brain. How did its parent’s brain work? Was the parent’s perception only 99% a hallucination? And its ancestors’ perception 98% hallucinatory? And so on.

As we imagine the slow evolution of Matrix-style perception, we are faced with a Sorites paradox of sorts. As nervous systems get simpler and simpler, it becomes implausible that a nervous system composed of only several hundred neurons is generating a completely hallucinatory inner model. Such neurons are more likely acting as a kind of complex “mediation” between stimulus and response rather than as a representational medium.

But if we go forward in evolutionary time, as nervous systems get more and more complicated, it seems wrong to me to think that the brain ever “gets rid of” that underlying non-representational form of perception. Rather, the brain “adds” onto that basic veridical perception. At no point does the nervous system switch from 50/50 veridical-hallucinatory to 100% hallucinatory such that we become fully immersed in the Matrix. Like augmented reality, the most evolutionarily recent brain developments, like the neocortex, “overlay” more basic forms of perception. We might think of hallucinations like CBS as neocortical memory-patterns that are projected upon the real-time dynamic stream of veridical perception.

Obviously this post represents a very rough-and-ready formulation of an alternative to the standard Matrix metaphor and will need much further development. But on the other hand, I am skeptical that the Matrix metaphor has ever been rigorously developed past the level of intuitive metaphor. It’s even possible that we can never move beyond metaphor in dealing with the most unknown and esoteric psychological phenomena. And if this is the case, we have a real imperative to reexamine popular metaphors such as the Matrix and replace them with new ones.



Filed under Consciousness, Philosophy, Psychology

A crude theory of perception: thoughts on affordances, information, and the explanatory role of representations

Perception is the reaction to meaningful information, inside or outside the body. The most basic information is information specific to affordances. An affordance is a part of reality which, in virtue of its objective structure, offers the possibility of some reaction (usually fitness-enhancing, but not necessarily so). A reaction can be understood at multiple levels of complexity and mechanism. Sucrose, in virtue of its objective structure, affords the possibility of maintaining metabolic equilibrium to a bacterium. Water, in virtue of its objective structure, affords the possibility of stable ground for the water strider. Water, in virtue of its objective structure, does not afford the possibility of stable ground for a human being unless it is frozen. An affordance, then, is, as J.J. Gibson said, both subjective and objective at the same time: objective, because what something affords is directly related to its objective structure; subjective, because what something affords depends on how the organism reacts to it (e.g. human vs. water strider).

The objective structure of a proximal stimulus can only be considered informationally meaningful if that stimulus is structured so as to be specific to an affordance property. If a human is walking on the beach towards the ocean, the ocean will have the affordance properties it has regardless of whether the human is there to perceive information specific to them. The “success” or meaningfulness of the human’s perception of the ocean is determined by whether the proximal stimulus contains information specific to those affordance properties. A possible affordance property might be “getting you wet”, which is usually not useful, but can be extremely useful if you are suddenly caught on fire. Under normal viewing conditions, the objective structure of the ambient array of light in front of the human contains information specific to the ocean’s affordance properties in virtue of the light’s reflective spectra off the water and through the airspace. But if the beach were shrouded in a very thick fog, the ambient optic array would still stimulate the human’s senses, yet the stimulus wouldn’t be meaningful because it no longer conveys information specific to the ocean’s affordance properties, even though that information would be there for the taking if the fog cleared. An extreme version of “meaningless stimulus without perception” is the Ganzfeld effect. On these grounds, we can recreate, without appealing to any kind of representational theory, the famous distinction between primary and secondary qualities, i.e., the distinction between mere sensory transduction of meaningless stimuli and meaningful perception.

Note too how perception is most basically “looking ahead” to the future, since the affordance property specifies the possibility of a future reaction. This can be seen in how higher animals can “scan” the environment for information specific to affordances but restrain themselves from acting on that information until the moment is right. This requires inhibition of basic action schemas, whether learned or genetically hardwired as instincts. In humans, the “range” of futural cognition is uniquely enhanced by our technology of symbols and linguistic metaphor. For instance, a human can look at a flat sheet of colored paper stuck to a refrigerator and meaningfully think about a wedding to attend one year in the future. A scientist can start a project and think about consequences ten years down the road. Humans can use metaphors like “down the road” because we have advanced spatial analogs which allow us to consciously link disparate bits of neural information specific to sensorimotor pathways into a more cohesive, narratological whole, so as to assert “top-down” control by a globally distributed executive function sensitive to social-cultural information.

This is the function which enables humans to effortlessly “time travel” by inserting distant events into the present thought stream or simulating future scenarios through conscious imagination. We can consult the book in our heads of what we have done and what we will do, rehearse speech acts for a future occasion, replay what we should have said to that one person, and use external symbolic graphs to radically extend our cognitive powers. Reading and writing, for example, have utterly changed the cognitive powers of humans. Math, scientific methodology, and computer theory have also catapulted humans into the next level of technological sophistication. In the last few decades, we have seen how the rise of the personal computer, the internet, and the cellphone has radically changed how humans cope in this world. We are, as Andy Clark said, natural-born cyborgs. Born into a social-linguistic milieu rich in tradition, and preinstalled with wonderful learning mechanisms that soak up useful information like sponges, newborn humans effortlessly adapt to the affordances of the simplest environmental elements (like the ground) through to the most advanced (the affordances of a book, or a website).

So although representations are not necessary at the basic level of behavioral reaction shared by unicellular organisms (bacteria reacting to sucrose by devouring it and using it metabolically), the addition of a central nervous system allows for the storage of affordance information in representational maps. A representational map is a distributed pattern of brain activity which allows for the storage of informational patterns that can be utilized independently of the stimulus event which first brought the organism into contact with that information. For example, when a bird is looking right at a food cache, it does not need its representational memory to get at the food; it simply looks at the cache and then reacts by means of a motor program for getting at the food, sparked by a recognition sequence. However, when the cache is not in sight and the bird is hungry, how does the bird get itself to the location of the cache? By means of a re-presentation of the cache’s spatial location, which was originally stored in the brain’s memory upon first caching the food. By accessing stored memory-based information about a place even when not actually at that place, the bird is utilizing representations to boost the cognitive prowess of its nonrepresentational affordance-reaction programs. Representations are thus a form of brain-based cognitive enhancement which allows for reaction to information stored within the brain itself, rather than just contained in the external proximal stimulus data. By developing the capacity to react to information stored within itself, the brain gains the capacity to organize reactions into more complicated sequences of steps, delaying and modifying reactions, storing information for later retrieval, and better predicting events farther into the future (like the bird predicting food will be at its cache even though it is miles away).
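The contrast drawn above, between a stimulus-driven reaction and a representation-based retrieval, can be sketched in a few lines of code. This is only a toy illustration of the distinction, not a model of avian cognition; all class, method, and variable names here are hypothetical.

```python
# Toy sketch: stimulus-driven reaction vs. representation-based retrieval.
# The "representational map" is just a dictionary standing in for stored
# affordance information that can be used when the stimulus is absent.

class Bird:
    def __init__(self):
        # cache id -> remembered location, usable without the stimulus
        self.cache_memory = {}

    def store_cache(self, cache_id, location):
        # Store affordance information at caching time for later retrieval
        self.cache_memory[cache_id] = location

    def react(self, visible_caches):
        # Stimulus-driven path: a cache in sight triggers a motor program
        # directly; no stored representation is consulted
        if visible_caches:
            return f"fly to visible cache at {visible_caches[0]}"
        return None

    def recall(self, cache_id):
        # Representation-driven path: retrieve the stored location even
        # though no cache is present in the proximal stimulus
        return self.cache_memory.get(cache_id)


bird = Bird()
bird.store_cache("oak", (12, 7))
print(bird.react([(3, 4)]))  # direct perception: reacts to what it sees
print(bird.recall("oak"))    # memory: returns the stored location
```

The design choice mirrors the paragraph’s point: `react` needs the stimulus present, while `recall` works in its absence, which is exactly the cognitive boost the representational map provides.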


Filed under Consciousness, Philosophy