Tag Archives: affordances

The Meaning of Meaning

What is meaning? This simple question is at the heart of philosophy of mind. Mentality and meaning have always gone hand in hand, and philosophers have tried to give an account of meaning for thousands of years. Despite the many spirited attempts, a concrete understanding of meaning has been elusive in philosophy, and a broad consensus is nowhere to be seen. It’s a devilishly complicated question to ask what the meaning of meaning is; it boggles the mind as to how to even go about answering it. Although it is difficult to give a straightforward definition of meaning, I do believe, contra Socrates, that giving examples is helpful in the art of producing a rigorous definition for a concept.

Philosophers have often focused on meaning at the linguistic level, wondering how the sentence “the cat is on the mat” means that the cat is actually on the mat. Moreover, what is the practical import of the statement? What does it mean to tell someone the cat is on the mat? If the cat belongs to no one, the import is probably zilch. But if the owner of the cat has been looking for it for days, then stating where the cat is carries a great deal of import. From an evolutionary perspective, the practical import of a linguistic statement seems more developmentally basic, in both a phylogenetic and an ontogenetic sense. In other words, meaning comes first, then language. But this pushes the question back: what is nonlinguistic meaning?

The question of nonlinguistic meaning is tied to the question of nonverbal mental content. Linguistic meaning is usually talked about in terms of propositional content, e.g., the content of the statement “the cat is on the mat” is the state of affairs of a cat being on a mat. Verbal content is thus relatively easy to make sense of, because we can understand the conceptual content in terms of the implied propositional content, which can be spelled out in terms of beliefs and desires. If I don’t know where the cat is and I am looking for it, then someone telling me that the cat is on the mat will update my belief system such that I will, ceteris paribus, be motivated to go look on the mat, and will actually look. This is a fairly orthodox way of accounting for linguistic content. But what about nonverbal mental content? How can we make sense of that?

The question is philosophically vexing in that it’s difficult to use language (the medium of philosophy) to talk about mental content that exists independently of language. One way to get a better sense of nonverbal mental content, and thus nonverbal meaning, is to ask which creatures “have” nonverbal mental content. Let’s start with unicellulars like bacteria. Does a bacterium have a “mental life”? Not in the traditional sense of the term, since it seems strained to say that a bacterium believes anything, and having beliefs has long been a traditional criterion for distinguishing creatures with mentality from those without. Granted, we could, if we wanted to, adopt the intentional stance and say that when the bacterium senses a sucrose gradient it forms the belief that this is indeed sucrose it is encountering. But we know deep down that the “sensing” of the sucrose is entirely constituted by the physical-chemical nature of the bacterium. The sensing and digestion of the sucrose is entirely reactive and mechanistic. The bacterium’s “decision” to devour the sucrose based on its “belief” is entirely mechanical. The belief-forming talk is just that, talk. We do not really think that the intracellular machinery’s job is to form beliefs; its job is to perform biochemical functions that aid in the continuation of the bacterium’s metabolic existence.

But although the bacterium does not have beliefs, and thus does not “have” propositional attitudes except those we ascribe to it, it still makes sense to say that the bacterium has a mental life, however dim compared to more complex creatures. For what is mental life? I claim a creature has a mental life just insofar as there is something it is like to be that creature. And, following Heidegger, I claim there is something it is like to be a creature just insofar as that creature “lives in” a phenomenal world. “Living in” a phenomenal world is not “in” in the spatial sense, as with the pencil being “in” the box. Living “in” a phenomenal world is more like being-in-the-world, where being-in-the-world is a matter of (1) having concerns and (2) living in an environmental niche. A bacterium has concerns insofar as it is “concerned” about its own survival. Its whole existence is constituted by a desire to stay alive, to maintain its autonomous living. It “does” this in virtue of its complete biochemical nature. But its biochemical nature is organized in such a way as to constitute a machine which has a homeostatic equilibrium and the means to maintain that equilibrium despite perturbations from a changing environment and breakdowns in the stability of its internal mechanisms. Because the bacterium is “concerned” about itself in virtue of its physical structure, it lives in a phenomenal world insofar as it lives in an environment. The bacterium’s world is such that what is meaningful to it is that which enables it to keep on living. Thus, sucrose is meaningful to the bacterium because it affords the possibility of digestion in the service of maintaining its homeostatic equilibrium.
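
To make the notion of concernful self-maintenance slightly more concrete, here is a minimal toy sketch in Python. It is purely illustrative: the variable names, the thresholds, and the `sucrose_gradient` function are invented for the example and are not a claim about real bacterial chemistry. The point is only that a system organized around keeping an internal variable within viable bounds thereby has a rudimentary “concern,” and something in its environment (sucrose) becomes significant relative to that concern.

```python
import random

# Toy "homeostat": an agent whose sole "concern" is keeping an internal
# variable (energy) inside a viable range. Nothing here is real biochemistry;
# it only illustrates self-maintenance under perturbation, with sucrose
# becoming significant only relative to that concern.

VIABLE_RANGE = (20.0, 100.0)   # outside this range the agent "dies"
METABOLIC_COST = 1.0           # energy lost per time step (a standing perturbation)


def sucrose_gradient(position: float) -> float:
    """Sucrose concentration at a position: a fixed, 'objective' field."""
    return max(0.0, 10.0 - abs(position - 50.0) * 0.2)


def step(position: float, energy: float) -> tuple:
    """One update: pay the metabolic cost, then climb the sucrose gradient
    (and 'digest' what is found) only when energy runs low."""
    energy -= METABOLIC_COST
    if energy < 60.0:   # sucrose matters only relative to the agent's concern
        here = sucrose_gradient(position)
        left = sucrose_gradient(position - 1)
        right = sucrose_gradient(position + 1)
        if left > max(here, right):
            position -= 1
        elif right > here:
            position += 1
        energy += sucrose_gradient(position)   # metabolize what is found here
    return position, energy


position, energy = random.uniform(0.0, 100.0), 80.0
for t in range(200):
    position, energy = step(position, energy)
    if not (VIABLE_RANGE[0] <= energy <= VIABLE_RANGE[1]):
        print(f"t={t}: homeostatic equilibrium lost")
        break
else:
    print(f"survived 200 steps: position={position:.1f}, energy={energy:.1f}")
```

Run repeatedly, the agent drifts toward the sucrose only when its energy budget demands it; nothing in the sketch requires beliefs or propositional attitudes.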

We have, then, a foundation of meaning upon which to build more complex types of meaning. Basic nonverbal mental content, and thus basic nonverbal meaning, is based around autonomy. The bacterium is an autonomous machine because it gives itself its own principles for behavior based on its nature. These principles are properties of its organization as a physical object. One of these principles is concern-oriented insofar as the maintenance of a dynamic, nonlinear homeostatic equilibrium is the fundamental concern. And as we said, if you are concerned about something, then you live in a phenomenal world. If you live in a phenomenal world, you “have” phenomenal experience (where “having” is understood as a metaphor, not a literal having of an object like having a hammer in your hand). And if you have phenomenal experience, there is something it is like to be you. Thus, there is something it is like to be a bacterium.

But notice that the bacterium has no nervous system. If my argument goes through, then we can conclude that looking for the neural correlates of phenomenal experience is a completely misguided enterprise that is bound to fail. However, since I have been trying to argue that phenomenal experience and consciousness do not overlap, we can still coherently look for the neural correlates of consciousness. But the search for the neural correlates of phenomenal experience is completely misguided because, as I have tried to establish, there is something it is like to be a bacterium, and bacteria do not have nervous systems. If I am right, then neurophilosophers trying to pinpoint the NCs of phenomenal experience have been barking up the wrong tree. For the fundamental principle of mental life is not consciousness but living in a phenomenal world, i.e., a world of real value and meaning, where entities are encountered as significant. Rocks do not live in a phenomenal world. There is nothing a rock is concerned about. It does not care if you break it in two. There is nothing it is like to be a rock. A rock has no mental life. But what a world of difference in the bacterium! The bacterium is alive. It has concerns. It lives in an ecological (i.e., phenomenal) niche. Whereas the rock does not strive to stay together in a particular organizational pattern, the bacterium does. Sucrose means nothing to a rock, for nothing means anything to a rock, but things matter to bacteria. Sucrose is meaningful to bacteria.

And that is the meaning of meaning in its most basic form. Of course, I am glossing over the complexity of both primordial meaning and linguistic meaning. Linguistic meaning, though grounded in primordial meaning, takes on a life of its own once established in a population. This is why Heidegger took pains to distinguish between being-in-the-environment and being-in-a-linguistic-world, with the latter reserved for those humans who have learned a language and grown up in a social-linguistic community.


A crude theory of perception: thoughts on affordances, information, and the explanatory role of representations

Perception is the reaction to meaningful information, inside or outside the body. The most basic information is information specific to affordances. An affordance is a part of reality which, in virtue of its objective structure, offers the possibility of some reaction (usually fitness-enhancing, but not necessarily so). A reaction can be understood at multiple levels of complexity and mechanism. Sucrose, in virtue of its objective structure, affords the possibility of maintaining metabolic equilibrium to a bacterial cell. Water, in virtue of its objective structure, affords the possibility of stable ground for the water strider. Water, in virtue of its objective structure, does not afford the possibility of stable ground for a human being unless it is frozen. An affordance, then, is, as J.J. Gibson said, both subjective and objective at the same time. Objective, because what something affords is directly related to its objective structure; subjective, because what something affords depends on how the organism reacts to it (e.g., human vs. water strider).
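
As a crude illustration of this dual character, here is a toy sketch in Python. The classes, property names, and the numerical threshold are all invented for the example and are not Gibson’s formalism; the point is only that whether a surface affords support is fixed jointly by an objective property of the surface and by a property of the organism.

```python
from dataclasses import dataclass

# Illustrative only: an affordance modeled as a relation that depends jointly
# on the objective structure of the environment and on the organism's makeup.


@dataclass
class Surface:
    name: str
    rigid: bool               # objective structural property
    surface_tension: float    # objective structural property (toy units)


@dataclass
class Organism:
    name: str
    mass: float               # property of the perceiver/actor (kg)


def affords_support(surface: Surface, organism: Organism) -> bool:
    """'Stand-on-able' is neither in the surface alone nor in the organism
    alone; it is fixed by both at once (the threshold below is made up)."""
    if surface.rigid:
        return True
    return organism.mass < surface.surface_tension * 100.0   # arbitrary toy rule


water = Surface("water", rigid=False, surface_tension=0.072)
ice = Surface("ice", rigid=True, surface_tension=0.0)
strider = Organism("water strider", mass=1e-5)
human = Organism("human", mass=70.0)

print(affords_support(water, strider))   # True: water affords support to the strider
print(affords_support(water, human))     # False: not to the human
print(affords_support(ice, human))       # True: frozen water affords support
```

The predicate `affords_support` belongs to neither relatum alone, which is the sense in which an affordance is objective and subjective at once.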

The objective structure of a proximal stimulus can only be considered informationally meaningful if that stimulus is structured so as to be specific to an affordance property. If a human is walking on the beach towards the ocean, the ocean will have the affordance properties it has regardless of whether the human is there to perceive information specific to them. The “success” or meaningfulness of the human’s perception of the ocean is determined by whether the proximal stimulus contains information specific to those affordance properties. A possible affordance property might be “getting you wet”, which is usually not useful, but can be extremely useful if you suddenly catch on fire. Under normal viewing conditions, the objective structure of the ambient array of light in front of the human contains information specific to the ocean’s affordance properties in virtue of the light reflected off the water and transmitted through the airspace. But if the beach were shrouded in a very thick fog, the ambient optic array would still stimulate the human’s senses, yet the stimulus wouldn’t be meaningful because it conveys no usable information about the ocean, even though that information is potentially there for the taking once the fog clears. An extreme version of meaningless stimulation without perception is the Ganzfeld effect. On these grounds we can recreate, without appealing to any kind of representational theory, the famous distinction between primary and secondary qualities, i.e., the distinction between mere sensory transduction of meaningless stimuli and meaningful perception.

Note too how perception is most basically “looking ahead” to the future, since an affordance property specifies the possibility of a future reaction. This can be seen in how higher animals can “scan” the environment for information specific to affordances, yet restrain themselves from acting on that information until the moment is right. This requires the inhibition of basic action schemas, whether learned or genetically hardwired as instincts. In humans, the “range” of futural cognition is uniquely enhanced by our technology of symbols and linguistic metaphor. For instance, a human can look at a flat sheet of colored paper stuck to a refrigerator and meaningfully think about a wedding to attend one year in the future. A scientist can start a project and think about consequences ten years down the road. Humans can use metaphors like “down the road” because we have advanced spatial analogs that allow us to consciously link disparate bits of neural information specific to sensorimotor pathways into a more cohesive, narratological whole, so as to assert “top-down” control by a globally distributed executive function sensitive to social-cultural information.

This is the function which enables humans to effortlessly “time travel” by inserting distant events into the present thought stream or simulating future scenarios through conscious imagination. We can consult the book in our heads of what we have done and what we will do, rehearse speech acts for a future occasion, replay in our heads what we should have said to that one person, and use external symbolic graphs to radically extend our cognitive powers. Reading and writing, for example, have utterly changed the cognitive powers of humans. Math, scientific methodology, and computer theory have also catapulted humans into the next level of technological sophistication. In the last few decades, we have seen how the rise of the personal computer, the internet, and the cellphone has radically changed how humans cope in this world. We are, as Andy Clark said, natural-born cyborgs. Born into a social-linguistic milieu rich in tradition, and preinstalled with wonderful learning mechanisms that soak up useful information like sponges, newborn humans effortlessly adapt to affordances ranging from the simplest environmental elements (like the ground) to the most advanced (the affordance of a book, or a website).

So although representations are not necessary at the basic level of behavioral reaction shared by the unicellulars (bacteria reacting to sucrose by devouring it and using it metabolically), the addition of a central nervous system allows for the storage of affordance information in representational maps. A representational map is a distributed pattern of brain activity which allows for the storage of informational patterns that can be utilized independently of the stimulus event which first brought the organism into contact with that information. For example, when a bird is looking right at a food cache, it does not need its representational memory to get at the food; it simply looks at the cache and then reacts by means of a motor program for getting at the food, triggered by a recognition sequence. However, when the cache is not in sight and the bird is hungry, how does the bird get itself to the location of the cache? By means of a re-presentation of the cache’s spatial location, which was originally stored in the brain’s memory upon first caching the food. By accessing stored memory-based information about a place even when not actually at that place, the bird is utilizing representations to boost the cognitive prowess of its nonrepresentational affordance-reaction programs. Representations are thus a form of brain-based cognitive enhancement which allows for reaction to information stored within the brain itself, rather than just contained in the external proximal stimulus. By developing the capacity to react to information stored within itself, the brain gains the capacity to organize reactions into more complicated sequences of steps, delaying and modifying reactions, storing information for later retrieval, and better predicting events farther into the future (like the bird predicting food will be at its cache even though it is miles away).
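
A crude way to picture the difference between stimulus-driven reaction and representation-driven reaction is the following toy sketch in Python (illustrative only; `CachingBird`, `forage`, and the grid coordinates are invented for the example and are obviously not a model of corvid memory):

```python
from typing import Optional, Tuple

# Illustrative sketch: a "representational map" as stored location information
# that can drive behavior even when the original stimulus is absent.

Location = Tuple[int, int]


class CachingBird:
    def __init__(self) -> None:
        self.cache_map: dict = {}   # label -> stored Location (the "map")

    def cache_food(self, label: str, location: Location) -> None:
        """Store the cache's location at the time of caching
        (the original stimulus event)."""
        self.cache_map[label] = location

    def forage(self, visible_cache: Optional[Location]) -> Location:
        """If a cache is in view, react to the present stimulus directly;
        otherwise fall back on the stored re-presentation."""
        if visible_cache is not None:
            return visible_cache                        # stimulus-driven reaction
        return next(iter(self.cache_map.values()))      # representation-driven reaction


bird = CachingBird()
bird.cache_food("pine seeds", (12, 7))

print(bird.forage(visible_cache=(3, 3)))   # cache in sight: go straight to it
print(bird.forage(visible_cache=None))     # out of sight: use the stored map -> (12, 7)
```

The stored map does nothing when the cache is in view; its cognitive payoff appears precisely when the original stimulus is absent.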


Response to Fred Adams' latest critique of "Embodied Cognition"

Fred Adams has a new article out online in Phenomenology and the Cognitive Sciences entitled “Embodied Cognition”. Adams is renowned for being skeptical of the 4E movement in philosophy of mind (embodied, embedded, extended, enacted). He wrote a book with Ken Aizawa called “The Bounds of Cognition” that challenges the core claims of embodied cognition. Given his familiarity with the literature, however, I am very puzzled by this paper. He starts off talking about Varela and Gallagher as exemplars of the embodied cognition thesis, but then spends most of the paper talking about how to reduce sentential belief-symbols to literal simulations of motor output. He writes as if sentential comprehension is the main explanatory target of EC theorists when they say “cognition is embodied”.

Anyone who has read Varela and Maturana’s work on autopoiesis would be very confused about this formulation of the problems that embodied cognition sets out to study. Varela says, for instance, that “Living systems are cognitive systems, and living as a process is a process of cognition. This statement is valid for all organisms, with and without a nervous system.” Varela thinks that even the unicellular organism “cognizes” in virtue of its emergent self-organization of autopoiesis. This is the actual claim of embodied dynamic systems approaches to cognition, a far cry from the thesis that:

In the embodiment literature, we find the empirical step consisting of empirical correlations between certain kinds of cognitive processing and sentence comprehension and certain kinds of perceptual/motor performance.

Gibson was never concerned with “sentence comprehension”. While that is an admirable explanandum, Gibson thought we first need to better understand the more basic cognitive processes before we attempt to theorize about higher cognitive processes. He was almost always concerned with the cognition we share with our animal cousins, not sentence comprehension or symbolic cognition. Many EC theorists actually propose a dual-level or dual-process model of reasoning wherein there exists a primordial, nonsymbolic level of cognitive processing shared by all animals (online processing) and an evolutionarily recent, sententially grounded level of rational, serial processing (offline processing). I don’t know of any serious theorist proposing this two-level distinction who makes the absurd claim that offline processing must be explained strictly in terms of online processing. Once external representations are taken up and integrated with the functioning of the cognitive system, there is no reason to suppose that the mechanism is only that of “simulation”. For example, Gibbs claims that representational (propositional) reasoning depends heavily upon analogical reasoning, which needs to be analyzed at the appropriate level of abstraction, not that of neurons firing. In all likelihood, it will require different explanatory tools and terminology to explain offline processing and online processing. Most EC theorists would simply emphasize the importance of recognizing that propositional reasoning comes after, or “out of”, online processing on both the phylogenetic and ontogenetic scales.

Accordingly, there seems to be a strange disconnect between Adams’ picture of EC and what the majority of serious theoreticians (that I know of) are proposing. The more I think about it, the more I think this is the result of a widespread misunderstanding of what EC is, particularly with respect to the original formulations of Merleau-Ponty and Gibson. Some EC critics think that when we say “cognition is embodied” we are claiming that their conception of “cognition” is embodied. In actuality, we are trying to redefine what we mean by “cognition” and move away from definitions of cognition focused on sentential understanding. This is why Evan Thompson follows Varela in saying that all lifeforms exhibit cognition. Cognition is no longer the manipulation of symbols, but the regulation and coordination of emergent autonomous animacy/agency. This forces us to think about representations in terms of the control and coordination of intrinsic movement rather than in terms of mirroring or “belief-formation”. Cognition is neither sentence comprehension nor mastery of propositional concepts. We need to come up with a different concept to capture such higher-level processes.

I follow Julian Jaynes in making a distinction between what we can call cognition and narrative-consciousness. Narrative-consciousness enables the type of sentential mastery and understanding that Adams spends most of the paper talking about. Given the unique representational medium of sentential symbols, I see no reason why there cannot be an abstract analysis of such narrative mastery in terms that do not reduce to “sensorimotor simulation”. This isn’t to say that we can make no progress in learning about the underlying functional circuitry which enables offline processing. Research into resting-state connectivity and anti-correlated functional networks is now opening up new vistas in understanding the neural distinction between online and offline processing.

This brings me to my next point: the misunderstanding of “meaning” and “affordances”. Adams follows Glenberg and Kaschak in defining affordances as “a set of actions available to the animal.” On this view, Adams seems to suggest that affordances are those cognitive systems which enable and support interaction between animal and environment. But this is exactly wrong. Affordances are not within the animal, and they do not “arise” or “emerge” out of the interaction or “relation” between the animal and the environment. Affordances are real and objective. Meaning is external to the animal. For example, the ground affords support to all animals whether or not any particular one of them utilizes it for support. The affordance-property of support is embedded in the actual nature of the ground. What it really is determines what it means for the animal.

Accordingly, meaning is not generated by the interaction between the animal and the environment; it is sought out and utilized. I get the feeling many EC supporters make this mistake as well. Meaning is external to the animal and needs to be found and used. For animals with the appropriate bodily capacities, then, the process of finding the affordances can be decoupled from the process of using the resource. I therefore have problems with Zwaan and Madden, whom Adams quotes as saying “…there are no clear demarcations between perception, action, and cognition.”

I think this is stated poorly. For many higher animals, there is a clear distinction between the process of detecting affordance-information (what Gibson calls “stimulus” or “ecological” information) and the utilization of that information for the purposes of adaptive behavior. This is the distinction Gibson makes between exploratory behavior and performatory behavior. However, it would be a mistake to conclude from this that the input-output model of perception is therefore right. The fact that the physical stimulus does not equate with the informational stimulus supports the idea that perception is but a perturbation upon an intrinsic dynamic network, not a specific input which is mechanically read off and used to send specific commands. As the frame problem indicates, any conception of the cognitive system which takes the input to be “raw” or “meaningless” is bound to fail to produce functional specificity across widely changing environmental demands. For embodied cognition, the given is already valenced in terms of what kind of information the animal is seeking in accordance with its internal dynamics and regulatory demands. This is the only way to avoid the input-output model. Doing so also allows us to escape from the Myth of the Raw Input, otherwise known as the Myth of the Given.
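
The contrast between the input-output picture and the perturbation picture can be sketched crudely in Python (illustrative only; the intrinsic oscillation, the coupling constant, and the update rule are made up and are not drawn from Gibson or from any particular dynamical-systems model): in the first, the response is a function of the stimulus alone; in the second, an identical stimulus merely deflects a system that already has its own ongoing dynamics, so its effect depends on the system’s current state.

```python
import math

# Illustrative contrast between two pictures of perception.
# (1) Input-output: the response is computed from the stimulus alone.
# (2) Perturbation: the stimulus only nudges a system with its own intrinsic,
#     ongoing dynamics, so an identical input has history-dependent effects.


def input_output_response(stimulus: float) -> float:
    """Input-output picture: the response is fully determined by the input."""
    return 2.0 * stimulus


def perturbed_state(stimulus: float, state: float, t: int) -> float:
    """Perturbation picture: intrinsic activity continues regardless of input;
    the stimulus only deflects it (decay and coupling constants are made up)."""
    intrinsic = 0.9 * state + 0.5 * math.sin(0.3 * t)   # ongoing activity
    return intrinsic + 0.1 * stimulus                    # input as mere perturbation


state = 0.0
for t in range(5):
    stimulus = 1.0                                   # the very same input each step
    response = input_output_response(stimulus)       # always 2.0
    state = perturbed_state(stimulus, state, t)      # depends on the system's history
    print(f"t={t}  input-output: {response:.1f}  perturbed state: {state:.3f}")
```

The identical stimulus produces the same output every time on the first picture, and a history-dependent output on the second.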
