Jake Young over at Pure Pedantry has an interesting article on cognition and emotion. He summarizes a review paper by Luiz Pessoa that argues that cognition and emotion are not separate. He goes through a number of different arguments in favor of this thesis, which I will not list here, but which seem quite compelling. However, since Antonio Damasio’s work in the 90s, I don’t think this is a very radical hypothesis (Damasio published Descartes’ Error in 1994). Fascinating article nevertheless.
Monthly Archives: January 2008
In my last post, I briefly discussed the most pertinent results from Benjamin Libet’s 1982 experiment and some of the implications. In this post, I would like to put on my speculative hat and talk about an alternative to the dichotomy I set up in the previous post I linked.
If you don’t recall, the dichotomy was set up between what appears to happen when we hear a noise behind us and what Libet thought was going on. What seems to happen is that we consciously hear a sound and then turn around, but Libet proposed that we unconsciously hear it first, turn around, and then our brain performs a “backwards subjective referral” of the event to make it seem like the first scenario occurred. In my last post I said this was a false dichotomy, and now I will speculate on what I think is really going on. Bear with me.
Imagine that a human body is sitting in a classroom, attending to the lecturer. All of a sudden this attentive human becomes aware of a door opening behind his back and cocks his head around accordingly in order to see what just interrupted the lecture. All of this activity, including the awareness of the noise of the door opening and the innervation of the appropriate neck and back muscles, happened within the more-or-less continuous 3D perceptual space that we are all familiar with.
So, while there are hundreds of potential things to attend to in any normal classroom (including the awareness of your own body sitting down), the available cognitive processing power responsible for “shifting the attentional spotlight” allocated almost the entirety of its capacity to the perception of the door opening. This was probably because of the neural contrast/distinction of such a sudden noise, but also because this human body knows from past experience that usually the only things that open doors are other humans, and it is this implicit social knowledge embedded in action-schemas that makes it hard not to notice sudden changes in the social environment.
This contextual social knowledge makes it obvious why more attentional capacity would be allocated to the perception of the door opening rather than, say, the perception of his body twisting around or the perception of practically anything else in the room. The overall body-system never stopped being aware of these things; it is just that the percentage of attentional capacity devoted to the door opening made that particular activity more vivid than anything else upon episodic recollection. With the situation set up in this way, let us go back to explaining Libet’s half-second delay.
Under my conceptualization, the reason the neural isomorphic representation of the door opening seems to “echo” (thanks, Dennett) around the brain for an “extra” half second past the “necessary” evoked motor potentials is that the brain is essentially “telling a story to itself” about the event. The functionality of this “after-the-fact” story comes from the fact that the door-opening event can now easily be fed into a variety of different cognitive systems, thanks to the considerably long (in brain terms) half second of processing necessary to turn such “noisy” sensory data into higher-level “conceptual” representations that implicitly include such linguistic conceptual distinctions as self/other, internal/external, etc., important for metaphor-based storytelling.
Furthermore, I speculate that, evolutionarily speaking, the most advantageous way of putting these high-level representations of low-level sensory-action data to use would be through an advanced memory/prediction/empathy system. This interrelated triad would be of great advantage in a social atmosphere. Perhaps I will elaborate on this triad in a later post, but for now I am done wildly speculating from my armchair.
In 1982, Benjamin Libet carried out a remarkable study on consciousness that is still being debated by contemporary philosophers and scientists. In today’s post I would like to briefly highlight the results and spell out some implications. Here are the most pertinent results as I see them:
- In order to “consciously experience” a sensation, it must apparently bounce around the somatosensory cortex, or some other “high-level” area of the cortex, for about half a second, probably isolated to the frontal areas (Libet, 1982)
- “A touch on the skin that the subjects would otherwise have reported feeling was retroactively masked up to half a second later by a stimulation to the cortex”(Blackmore, 2004)
Okay, so how can phenomenological consciousness “drag” half a second behind the real world when clearly we are able to react much faster than that? The most obvious idea is to say that consciousness has no causal power, that it is merely a resultant and not a force (in James’ terms). However, this is at odds with the “hard problem” of consciousness, because if our “unconscious” does all the important work, such as reacting to dangerous stimuli in split-second situations, there would have been no evolutionary pressure for phenomenal consciousness to tag along and “dangle” half a second behind the really important things going on in the world, such as stepping on a snake or braking for a red light.
I believe that I can sketch out a framework that can reasonably explain how consciousness could happen “after the fact”, yet still have enough function that it could easily have evolved in the way that it did given the close-knit social structures of our early hominid ancestors.
Let us look at Blackmore’s example of turning around to see who just opened a door while you are sitting in a classroom. This is what seems to happen (from Blackmore):
- Consciously hear sound
- Turn around to look
According to Libet, it should be more like this:
- Unconsciously “hear” sound
- Turn around to look
- Backwards subjective referral of consciousness to make it seem like Scenario 1 is what actually happened
So how do we extricate ourselves from this mess? I think the first step is to recognize that we are setting up a false dichotomy of sorts by treating Scenarios 1 and 2 as the only two options. Furthermore, we should follow Dennett’s advice and use extreme conceptual caution with the terms “conscious” and “unconscious”, because the nature of our language forces an implicit acceptance of the Cartesian Theater whenever we use the language of conscious/unconscious, and it is this intuitive dichotomy that makes it impossible to solve these kinds of philosophical problems with ordinary conceptual frameworks.
However, if we use the framework of enactive perception and attentional theories of consciousness, we will get a better understanding of why trying to decide between either Scenario 1 or 2 will only result in frustration and headaches. In my next post I will discuss an alternative way of looking at this problem. Stay tuned!
Libet, B. (1982). Brain stimulation in the study of neuronal functions for conscious sensory experiences. Human Neurobiology, 1, 235-242.
Blackmore, S. (2004). Consciousness: An Introduction.
Basically, Harvard researchers built a machine that slices thin layers of brain tissue and then takes high-resolution pictures with an electron microscope, hoping to form detailed diagrams of the actual circuitry of the brain i.e. “connectomes”. This process stands to generate a huge amount of data on how the brain is actually wired. Exciting times!
I am a big fan of Douglas Hofstadter, the author of the classic Gödel, Escher, Bach and, more recently, I Am a Strange Loop. Hofstadter is a master of metaphors, and today I would like to discuss one metaphor in particular, the Careenium.
Hofstadter asks you to imagine a frictionless billiards table with lots and lots of tiny, magnetic marbles, or “simms” (small interacting magnetic marbles), bouncing around, careening endlessly. Because these simms are slightly magnetic, they are apt to stick together into clusters called “simmballs” (see where this is going yet?). These simmballs are more or less stable, with simms transferring in and out endlessly. Furthermore, imagine that the walls of the billiard table are sensitive to the outside environment, flexing slightly inward in response to every outside force. Naturally, this flexing is reflected in the careening simms and ultimately in the large simmballs.
Thus the simmballs can be said to encode events in the environment, and in principle, someone well-versed in Careenium mechanics could interpret the simmballs as symbolic. In case you haven’t figured out the mappings of the metaphor yet, let me lay them out explicitly. The simms map onto neurons (small events) and the simmballs map onto patterns of neurons (larger events), and by virtue of encoding the environment, the simmballs (symbols) have representational qualities.
The point of Hofstadter’s metaphor is relatively simple. He wants you to imagine a scenario where the brain (Careenium) can be seen from two different perspectives. One perspective, which comes naturally to scientists, is reductionist: one could in principle view all the activities of the Careenium in terms of the tiny simms bouncing around, acting in accordance with well-known laws of physics. On the other hand, one could take the high road and view the system in terms of the larger simmballs and their macroscopic, representational properties.
In order to help you visualize the implications of the metaphor, Hofstadter asks you to imagine two perceptual shifts of the Careenium. The first shift is to speed everything up, so that the fast-moving simms become too fast to be seen by the naked eye and the larger, slower-moving simmball clusters become more active, bouncing around in a lively fashion and interacting with each other. The second perceptual shift involves zooming out so that the simmballs become the only thing one can attend to. With these two perceptual shifts in mind, Hofstadter asks the following question: who shoves whom around inside the Careenium?
On one hand, there is the view that the tiny, meaningless simms are the primary “shovers” and the simmballs are merely along for the ride. From the other perspective, zoomed out and sped up, the simmballs are the only interesting feature of the system, with a rich symbolic “logic” that corresponds to the environment being represented. Which perspective is the “truth”? Well, as Hofstadter says, it “all seems topsy-turvy.” I’ll leave you with a quote from the book:
From our higher-level macroscopic vantage point as we hover above the table, we can see ideas giving rise to other ideas, we can see one symbolic event reminding the system of another symbolic event, we can see elaborate patterns of simmballs coming together and forming even larger patterns that constitute analogies-in short, we can visually eavesdrop on the logic of a thinking mind taking place in the patterned dance of the simmballs. And in this latter view, it is the simmballs that shove each other about, at their own isolated symbolic level.
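The two levels of description can be caricatured in code. The toy simulation below is my own sketch, not anything from Hofstadter’s book; every name and parameter in it is invented for illustration. It drops “simms” onto a bounded table, gives each one random careening plus a weak pull toward its nearby neighbors, and then reads the same final state at both levels: the micro level (individual positions obeying simple local rules) and the macro level (proximity clusters standing in for “simmballs”).

```python
import random

random.seed(0)            # deterministic toy run

N = 60                    # number of simms
STEPS = 200               # careening iterations
SIZE = 100.0              # table is SIZE x SIZE
RANGE = 15.0              # a simm only "feels" neighbors this close
ATTRACT = 0.4             # strength of the pull toward nearby simms

# Micro level: every simm is just an (x, y) position on the table.
simms = [[random.uniform(0, SIZE), random.uniform(0, SIZE)] for _ in range(N)]

def step(simms):
    """One tick: random careening plus a weak magnetic drift toward neighbors."""
    for i, (x, y) in enumerate(simms):
        dx, dy = random.uniform(-1, 1), random.uniform(-1, 1)
        px = py = count = 0.0
        for j, (ox, oy) in enumerate(simms):
            if i != j and abs(ox - x) < RANGE and abs(oy - y) < RANGE:
                px += ox - x
                py += oy - y
                count += 1
        if count:  # drift toward the local center of mass
            dx += ATTRACT * px / count
            dy += ATTRACT * py / count
        simms[i][0] = min(SIZE, max(0.0, x + dx))  # walls contain the simms
        simms[i][1] = min(SIZE, max(0.0, y + dy))

def simmballs(simms, radius=8.0):
    """Macro level: read the very same positions as clusters ("simmballs"),
    grouping simms that sit within `radius` of each other (union-find)."""
    parent = list(range(len(simms)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(simms)):
        for j in range(i + 1, len(simms)):
            dist2 = (simms[i][0] - simms[j][0]) ** 2 + (simms[i][1] - simms[j][1]) ** 2
            if dist2 < radius ** 2:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    groups = {}
    for i in range(len(simms)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

for _ in range(STEPS):
    step(simms)

balls = simmballs(simms)
print(f"{N} simms at the micro level, {len(balls)} simmballs at the macro level")
```

Nothing in the update rule mentions clusters, yet after a few hundred ticks the macro reading reports far fewer simmballs than simms: the same state admits both descriptions, which is exactly the ambiguity behind Hofstadter’s “who shoves whom” question.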
I thought this article was an interesting insight into the thought process of those who disregard the relevance of neuroscience for understanding the mind. The terminology Lehrer uses gives him away:
Even our sense of consciousness is explained away with references to some obscure property of the frontal cortex.
[According to reductionism] The mind, in other words, is just a particular trick of matter, reducible to the callous laws of physics.
You are simply an elaborate cognitive illusion, an “epiphenomenon” of the cortex. Our mystery is denied.
All of these quotes highlight a curiosity in Lehrer’s phrasing. Does he really think that if neuroscience succeeds in explaining cognitive phenomena in mechanistic terms, the mind will be “explained away”? Was heat “explained away” when we reduced it to the movement of molecules? Were the properties of water “explained away” when we reduced it to H2O? On the contrary, the phenomena of both are still with us, and it is ridiculous to assume that a successful neuroscience would reduce the mind to “just a trick”. Rather, the mind will be seen as a complicated set of cognitive phenomena not just “reducible to” but explained by mechanisms in the brain/body system.
So, the question isn’t, as Lehrer says, whether or not neuroscience can move “beyond reductionism”, but rather, what can be successfully explained in mechanistic terms and what can’t. It is clear that there is useful phenomenological data to be had at the higher levels of abstraction that characterize our thoughts about the mind, but it should be said again that these abstractions aren’t “just” tricks but complicated phenomena in their own right that need explaining. Whether that explanation will be in the terms of neuroscience or at the higher level of cognitive psychology has yet to be determined, but it seems clear that the empirical method itself will give us a clearer picture of the mind.
This brings me to my last point, and that is whether or not neuroscience is capable, in principle, of explaining all cognitive phenomena. For me, the answer is a resolute yes, but I want to emphasize the term “in principle”, because explaining all cognitive phenomena at the molecular level may be pragmatically out of reach. We should be grateful that evolution has given us a language capable of discussing cognitive phenomena at a higher abstraction than that of science, but we should also learn to accept the fact that ultimately everything in the universe, including the mind, can be “reduced” to the physical motions of matter. It might seem like I am making a category mistake, but it seems intuitively plausible to me. I am not saying that all cognitive phenomena will be reduced to the physical level, but I think in principle, it can be. But I don’t think that is a very interesting idea. What is more interesting to me is the question of what will be explained in mechanistic terms and what won’t, and that is a pragmatic question of science that we will be continuously working on for what seems like an indefinite period of time.
In his Philosophical Investigations, Wittgenstein said ‘Don’t say “there must be something common or they would not be called ‘games'”- but look and see whether there is anything common to all.’
This is excellent advice, but Wittgenstein himself did not follow it. He famously declared that when it comes to games, instead of a definition, there is only a “complicated network of similarities overlapping and criss-crossing.” Thus, Wittgenstein used games as an example par excellence that there are at best “family resemblances” characterizing the definitions of most words, instead of necessary and sufficient conditions.
In his supremely witty and delightful book The Grasshopper: Games, Life, and Utopia, Bernard Suits takes up Wittgenstein’s advice and actually looks to see if it is possible to define games. Suits’ definition is as follows:
To play a game is to engage in activity directed towards bringing about a specific state of affairs, using only means permitted by rules, where the rules prohibit more efficient in favor of less efficient means, and where such rules are accepted just because they make possible such activity…playing a game is the voluntary attempt to overcome unnecessary obstacles.
By engaging in brilliant parodies of Platonic dialogues, Suits runs through many counter-examples and deftly defends his definition against the objection that it is too broad or too narrow. I would only be spoiling the book if I attempted to summarize some of the pithy and playful dialogues, so I can only suggest that you read it yourself! I leave you with a quote from Simon Blackburn, who says that Suits “engages not only Wittgenstein but human life itself at the highest level, in a book that challenges philosophical orthodoxies, while all the time flowing like honey.”
In The Ecological Approach to Visual Perception James Gibson poses the following question:
The essence of an environment is that it surrounds an individual…the term surroundings is nevertheless vague, and this vagueness has encouraged confusion of thought. One such is the question of how the surroundings of a single animal can also be the surroundings of all animals. If it is assumed that no two observers can be at the same place at the same time, then no two observers ever have the same surroundings. Hence, the environment of each observer is “private,” that is unique.
So, how does Gibson resolve such a philosophical “puzzle”? He first notes that one can consider the layout of the surrounding surfaces in terms of a stationary point of observation, or one can consider the surrounding surfaces in terms of a moving point of observation. This latter consideration is much more useful because animals typically move about. So, for Gibson, “the available paths of locomotion in a medium constitute the set of all possible points of observation.”
Thus, all animals have an equal opportunity to explore the “persisting substantial layout” of the environment and in this way it “surrounds all observers in the same way that it surrounds a single observer.” By reconceptualizing visual perception in ecological terms, Gibson is able to cast off the ancient tradition of treating observers as standing “at the center of his or her private world.”
For Gibson, this fact of a moving point of observation is central to his approach to perception and its implications are “far-reaching”.
I have been reading a lot of Dreyfus and Heidegger lately, and naturally, I have been slightly leaning towards the anti-representationalist camp. By anti-representationalism, I mean the school of thought that deemphasizes the importance of representations in cognition in favor of an embodied, enactive approach to the traditional philosophy problems. Don’t get me wrong, I am still in favor of such approaches, but thanks to a discussion over at Pete Mandik’s blog, I have turned a more sympathetic ear to the representationalist camp.
Two papers that were linked in the blog discussion made me re-think my position. The first is a reply to Dreyfus by Rick Grush and Pete Mandik. In the paper they argued that representations have explanatory usefulness and furthermore, that just because an action is context-dependent doesn’t mean that that activity isn’t representational. They also defend representationalism on phenomenological grounds with examples such as the ability to represent alternative chess-positions when playing. Dreyfus would counter by saying that truly “skilled” grand masters do not make such representations but rather engage the chessboard and “deal” with it non-representationally. I think Dreyfus would be right, but that would be an exceptional case. I imagine that most people are not able to cope with the chessboard in such a manner and have to consciously represent the board and alternate possibilities.
The second paper that pushed me further from the anti-representationalist camp, posted by Eric Thomson, was by William Bechtel. In this paper, Bechtel discusses dynamical systems theory and the role for representations and explanation in models of cognition. Bechtel defuses the revolutionary character of dynamic systems theory and instead discusses how such approaches can complement more traditional representational and mechanistic explanatory models.
So, while I still hold that for some cases, such as action, a minimal representational approach is superior, thanks to Mandik and Bechtel, I have become much more sympathetic towards explanatory models of cognition that utilize representations.
Abstract: We argue that heterophenomenology both over- and under-populates the intentional realm. For example, when one is involved in coping, one’s mind does not contain beliefs. Since the heterophenomenologist interprets all intentional commitment as belief, he necessarily overgenerates the belief contents of the mind. Since beliefs cannot capture the normative aspect of coping and perceiving, any method, such as heterophenomenology, that allows for only beliefs is guaranteed not only to overgenerate beliefs but also to undergenerate other kinds of intentional phenomena.
I thought this was an interesting critique of Dennett’s heterophenomenology. If you don’t know, heterophenomenology is a research methodology that acts as “a bridge – the bridge – between the subjectivity of human consciousness and the natural sciences.” Essentially, the heterophenomenologist is an objective gatherer and interpreter of first-person subjective reports who doesn’t construe the reporter as completely authoritative.
What this interpersonal communication enables you, the investigator, to do is to compose a catalogue of what the subject believes to be true about his or her conscious experience.
So, the heterophenomenologist interprets all intentional phenomena as beliefs. This is a problem for Dreyfus and Kelly because it overgenerates mental content. They use the example of going out of a door to illustrate their point on overgeneration. If you ask someone going out of a door whether they “believed there was a chasm on the other side”, they might say yes, but in reality, as they were going out of the door, they were thinking no such thing but were merely responding to the “to-go-out” solicitation given by the door. No beliefs were involved in the act at all, just pure motor intentionality.
This last point on “motor intentionality” is crucial, because Dreyfus and company also accuse the heterophenomenologist of undergenerating intentional contents.
But to deny that skillful coping involves belief is not to deny that it lacks intentional content altogether. There is a form of motorintentional content that is experienced as a solicitation to act. This content cannot be captured in the belief that I’m experiencing an affordance. Indeed, as soon as I step back from and reflect on an affordance, the experience of the current tension slips away. Since beliefs cannot capture this normative aspect of coping and perceiving, any method, such as heterophenomenology, that allows for only beliefs is guaranteed not only to overgenerate beliefs but also to undergenerate other kinds of intentional phenomena.