Monthly Archives: October 2010

Robert Sapolsky on Major Depression in U.S.

Great lecture; very informative. Sapolsky nicely compresses a lot of research and integrates it into a comprehensive bio-social-psychological model.

Leave a comment

Filed under Psychology

Heidegger's Insight Into the Dynamics of Consciousness

Heidegger is usually seen as arguing against all forms of “psychical” theorizing and introspectionist psychology, denying that the human mind is fundamentally a matter of self-consciousness, of peering inwards on its own mental states. For centuries, self-consciousness was said to be the foundation upon which we build our mental world. Heidegger clearly had problems with the introspectionist psychologies of his time, most of which were Cartesian in nature. Instead of grounding our mental states in self-consciousness, Heidegger grounded them in moods.

Heidegger calls mood-mentality “Befindlichkeit”, literally translated as “the state in which one may be found”. Macquarrie and Robinson translate Befindlichkeit as “state-of-mind”. For many Heideggerian scholars, this translation leaves a sour taste in their mouths because of its “cognitivist” flavor. I’m going to explain later why I think it is a good translation. But first, what does it mean to be in the “state in which one may be found”? Right away Heidegger is insistent that this “finding of oneself” is not self-reflexive in nature. Rather, “In a state-of-mind Dasein is always brought before itself, and has always found itself, not in the sense of coming across itself by perceiving itself, but in the sense of finding itself in the mood that it has” (SZ 135).

Many scholars take passages like these as definitive evidence that Heidegger was an anti-cognitivist thinker. Hubert Dreyfus is famous for claiming that Heidegger wanted to kill the “myth of the mental”. Dreyfus’s Heidegger downplayed all forms of mentalistic theorizing, including talk about beliefs and desires, rationality, intellectual judgments, etc. For Dreyfus, what does most of the work is “mindless absorbed coping”. Sure, Dreyfus admits that we can “step back” and rationally deliberate once in a while, but expert behavior is always a matter of “mindlessness”.

However, this “mindless” reading of Heidegger doesn’t make sense of passages like this one:

Factically, Dasein can, should, and must, through knowledge and will, become master of its moods; in certain possible ways of existing, this may signify a priority of volition and cognition. Only we must not be misled into denying that ontologically mood is a primordial kind of being of Dasein, in which Dasein is disclosed to itself prior to all cognition and volition, and beyond their range of disclosure. (SZ 136)

This is a really interesting passage (in a really interesting section: 29). It isn’t often you hear Heidegger talk about “mastering” yourself through knowledge and will. Heideggerian scholars would normally say the most important thing in this passage is how moods are prior to cognition. They emphasize the part of the section which says “a state-of-mind is very remote from anything like coming across a psychical condition by the kind of apprehending which first turns round and then back” (SZ 136).

But denying that cognition and volition are primordial is not to negate the higher-order reflective capacities of knowledge and will. Let us call these capacities for higher-order reflection consciousness. To say that moods are prior to consciousness is not to negate that consciousness occurs. It is only a matter of getting the phenomenology straight. For the most part, our decisions are not a matter of consciousness, but rather of being swept up in the attractive-repulsive forces in the world. Moods are what make possible being directed towards something, e.g., a goal, a person, an object, an event. Being directed towards the world is a matter of vital significance, of things mattering to us. “Existentially, a state-of-mind implies a disclosive submission to the world, out of which we can encounter something that matters to us” (SZ 137). Recognizing the phenomenological priority of moods, however, does not require denying that we are conscious creatures capable of stepping back, reflecting, and rationally deliberating about our moods and experiences so as to arrive at a better decision or a clearer understanding of the world. Personally, I think Heidegger’s discussion of “mastery” is almost certainly tied up with his conception of “authenticity”, but that is another post.

I’d like to come back to the concept of “encountering something that matters”. There are actually psychological models of decision-making based on the concept of “mattering”, although few of their proponents would recognize the Heideggerian roots. A popular model of drug addiction is called the “incentive salience” model. Robinson and Berridge say, for example, that

(1) Potentially addictive drugs share the ability to produce long-lasting changes in brain organization.
(2) The brain systems that are changed include those normally involved in the process of incentive motivation and reward.
(3) The critical neuroadaptations for addiction render these brain reward systems hypersensitive (“sensitized”) to drugs and drug-associated stimuli.
(4) The brain systems that are sensitized do not mediate the pleasurable or euphoric effects of drugs (drug “liking”), but instead they mediate a subcomponent of reward we have termed incentive salience or “wanting”. We posit the psychological process of incentive salience to be specifically responsible for instrumental drug-seeking and drug-taking behavior (drug “wanting”).

In other words, the drug addict’s “world” is valenced in such a way that drug-related stimuli trigger “wanting”, such that the addict engages in the various automatic subroutines of drug use. The addict may not want to shoot up one minute, but then he walks into the room and sees a needle on the table. Because he is “hypersensitized” to drug-related stimuli, the sight of the needle easily triggers a wave of neural activity that crosses the threshold inhibiting the drug-using behavior. Once the threshold is reached, the inhibition fails and the task of getting high is automatically carried out. “States-of-mind are so far from being reflected upon, that precisely what they do is to assail Dasein in its unreflecting devotion to the ‘world’ with which it is concerned and on which it expends itself” (SZ 136).

So I actually think “state-of-mind” is a good translation of Befindlichkeit. It captures the sense in which a drug addict is in a “junkie” state-of-mind. His junkie-moods valence the whole world such that everything pushes or pulls him towards the task of getting high. He discloses the world in accordance with his state-of-mind, which isn’t static, but rather constantly changing and modifying itself. These mood-mentalities are primordial insofar as they are the motivating force behind our most basic kinds of decision-making. Mood-based decision-making isn’t a matter of intellectual deliberation. Rather, as John Protevi says, “Decisions are precisely the brain’s falling into one pattern or another, a falling that is modeled as the settling into a basin of attraction that will constrain neural firing in a pattern.” Indeed, “Dasein has, in the first instance, fallen away from itself as an authentic being its Self, and has fallen into the ‘world’” (SZ 175).
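To make that dynamical picture a little more concrete, here is a minimal sketch of cue-driven settling into a basin of attraction. The double-well setup, the parameter values, and the “sensitization” bias are my own illustrative assumptions rather than anything drawn from Robinson and Berridge or from Protevi; the point is only to show how a sufficiently salient cue can tip a system out of an “abstain” basin and into a “drug-seeking” basin with no deliberation anywhere in the loop.

```python
# Toy basin-of-attraction decision model (illustrative assumptions throughout).
import random

def simulate(cue_salience, steps=5000, dt=0.01, noise=0.05, seed=0):
    """Integrate dx/dt = -(x^3 - x) + cue_salience + noise.

    x < 0 is the 'abstain' basin, x > 0 the 'drug-seeking' basin.
    cue_salience biases the landscape the way a sensitized cue
    (the needle on the table) is supposed to.
    """
    rng = random.Random(seed)
    x = -1.0                                  # start settled in the 'abstain' basin
    for _ in range(steps):
        drift = -(x**3 - x) + cue_salience    # gradient descent on a double-well potential
        x += drift * dt + noise * rng.gauss(0.0, 1.0) * dt**0.5
    return x

print("weak cue       ->", "use" if simulate(0.1) > 0 else "abstain")
print("sensitized cue ->", "use" if simulate(0.6) > 0 else "abstain")
```

With the weak cue the trajectory stays where it started; past a certain salience the “abstain” basin simply disappears and the system falls into the other attractor, which is one way of cashing out the threshold talk in the paragraphs above.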

3 Comments

Filed under Consciousness, Heidegger, Phenomenology

Keywords

1 Comment

October 21, 2010 · 1:04 pm

What is consciousness?

In the literature, there are roughly two ways to pin down the explanandum of phenomenal consciousness: first-order approaches and second-order approaches. The difference is simple enough. For first-order theories, phenomenal consciousness is synonymous with awareness; for second-order theories, phenomenal consciousness is associated with the awareness of awareness. Fred Dretske is well-known for defending a first-order definition. In his 1993 paper Conscious Experience, he says:

[The] distinction between a perceptual experience of x and a perceptual belief about x is, I hope, obvious enough. I will spend some time enlarging upon it, but only for the sake of sorting out relevant interconnections (or lack thereof). My primary interest is not this distinction, but, rather, in what it reveals about the nature of conscious experience, and thus, consciousness itself. For unless one understands the difference between a consciousness of things (Clyde playing the piano) and a consciousness of facts (that he is playing the piano), and the way this difference depends, in turn, on a difference between a concept-free mental state (e.g., an experience) and a concept-charged mental state (e.g., a belief), one will fail to understand how one can have conscious experiences without being aware that one is having them. One will fail to understand, therefore, how an experience can be conscious without anything – including the person having it – being conscious of having it. Failure to understand how this is possible constitutes a failure to understand what makes something conscious and, hence, what consciousness is.

For Dretske then, the explanandum of consciousness is simple: awareness. Take the famous truck-driver example from Armstrong:

After driving for long periods of time, particularly at night, it is possible to “come to” and realize that for some time past one has been driving without being aware of what one has been doing. The coming-to is an alarming experience. It is natural to describe what went on before one came to by saying that during that time one lacked consciousness.

Dretske thinks exactly the opposite. The truck-driver is conscious of the road the whole time; otherwise he wouldn’t be able to differentially respond to the road conditions. Dretske claims that in order to recognize differences (such as a road obstacle), we must be aware of both the road and the obstacle. If we weren’t aware that the obstacle was there, how would we be able to “see it” and then respond appropriately by driving around it? For first-order theorists, phenomenal consciousness is simply synonymous with awareness.

When asked to define awareness, first-order theorists often say that it means, roughly, “to experience”. But when asked what this means, they usually do not offer a robust definition. First-order theorists love to say that if you got to ask, you ain’t never going to know. In other words, they don’t provide arguments for this definition, they just claim it is obvious. Everyone knows what experience is, right? It’s that strange sense that it feels a certain way to be alive and perceive the world. There is “something it is like” to experience the world. It seems to be one way or another.

This is, of course, a complete circle of reasoning. But most first-order theorists acknowledge this; they just don’t think it’s a problem. They say that we can come up with a theory of experience later, but right now it is important to get our definitions straight: consciousness is awareness, and one doesn’t have to be aware that one is conscious in order to be conscious.

Second-order theorists deny this and claim that first-order experiences require a higher-order representational state in order to generate true “phenomenal feels”, or “what-it-is-likeness”. The most well-known second-order theorists in the analytic literature are David Armstrong, David Rosenthal, William Lycan, Peter Carruthers, Robert Van Gulick, Uriah Kriegel, Rocco Gennaro, and a couple of others. Second-order theorists are a fractious bunch. Armstrong and Lycan take what’s called a Higher-Order Perception theory (HOP). This is often called an “inner sense” theory because it posits an internal perceptual “spotlight” that scans the lower-order states, and this scanning generates phenomenal feels. Rosenthal and Carruthers take what’s called a Higher-Order Thought theory (HOT). This is pretty much the same as the HOP theory, except they don’t like the spotlight metaphor; instead of a spotlight, they talk about higher-order beliefs and representations. Kriegel takes what’s called a self-representational higher-order approach where phenomenal feels are generated when the system represents itself to itself in a particular way. The one thing they all agree on, though, is that it is conceptually plausible to suppose that an agent could have nonconscious experiences, something the first-order theorists flat out deny as violating basic intuitions.

Second-order theorists also divide on the question of whether animals have higher-order mental states, and thus phenomenal consciousness. Theorists like Van Gulick are reluctant to deny nonhuman animals phenomenal consciousness, and so they claim that higher-order representations aren’t all that cognitively sophisticated and are likely widespread in the animal world. Theorists like Carruthers bite the bullet and deny that nonhuman animals have phenomenal consciousness. Carruthers thus claims that there is nothing “it is like” to be an animal. Animals have experiences, but these experiences are nonconscious and don’t “feel” the way our experiences “feel”. There is something special – phenomenal – about our own experiential states.

Personally, I think this whole debate is terribly confused. Here’s my understanding:

All lifeforms possess “phenomenal consciousness”. There is something-it-is-like to be a bacterium just as there is something-it-is-like to be a bat. There can be degrees of experiential richness, but it defies common sense to suppose that there is nothing it is like to be an embodied, living organism. However, I do not think that phenomenal feels require higher-order representations in order to feel one way or another. Phenomenal feels are generated at the first level of experience.

But “generated” is precisely the wrong word. Phenomenal feels are not “generated” as if they were objects or things the brain was literally squirting out. That would be a homuncular theory right down to its core. We must be careful not to let our evolutionary disposition for object-oriented abstraction fool us into thinking that experiences are “generated” as if they were physical objects. Phenomenal feels are not generated; they are what-it-is-like to exist as a lived body. Existence is to be cashed out behaviorally. But not in terms of Skinner’s behaviorism, a dead theory based on antiquated notions of linear stimulus-response mechanics and simple associationist learning models. Behavioral models are now based on an understanding of dynamic systems theory and on complex categorization and pattern-recognition learning models. The concept of stimulus-response is replaced by the concept of self-determining behavior and attention-salience models of decision-making. The organism is a self-organized, self-determining, closed operational loop. The material products made by the organism are the components that play a role in making up the production factories that generate the very structural components of the organism. Organisms are organizationally closed but thermodynamically open. I think it is intuitive to understand these dynamic temporal processes as having the “right stuff” for phenomenal feeling. What-it-is-like to be a rock is radically different, and of a different register, from what-it-is-like to be an autonomous dynamic system.
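As a toy illustration of what “organizationally closed but thermodynamically open” can mean, here is a small sketch in which two components are each produced only by a reaction catalyzed by the other, using an externally supplied nutrient. The two-component loop, the saturation term, and all the rate constants are assumptions invented for the example, not a model taken from the autopoiesis literature.

```python
# Toy production loop: A is made only via B, B only via A (organizational
# closure), and both depend on an external nutrient flux (thermodynamic
# openness). All parameters are illustrative assumptions.

def run(nutrient, steps=20000, dt=0.01):
    """Euler-integrate the little network and return the final (A, B)."""
    A, B = 1.0, 1.0
    k, decay = 1.0, 0.3
    for _ in range(steps):
        dA = k * nutrient * B / (1.0 + A) - decay * A   # A produced only via B
        dB = k * nutrient * A / (1.0 + B) - decay * B   # B produced only via A
        A += dA * dt
        B += dB * dt
    return round(A, 3), round(B, 3)

print("open to energy flow:", run(nutrient=0.5))   # settles at a nonzero steady state
print("flow cut off       :", run(nutrient=0.0))   # the whole network decays away
```

The saturation term (the 1 + A in the denominator) is there only so the loop settles at a steady state instead of growing without bound; the point of the sketch is just that the loop maintains itself while energy flows through it and falls apart when the flow is cut.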

But here is where I disagree with contemporary higher-order approaches. While I do think consciousness requires a second-order explanation, I do not think that second-order theories of consciousness are supposed to be explaining phenomenal feeling. I think that phenomenal feels are a separate explanandum from consciousness. I thus take what’s called a narratological or social-constructivist approach to consciousness. Here, I follow Julian Jaynes in claiming that consciousness proper is “[T]he development on the basis of linguistic metaphors of an operation of space in which an ‘I’ could narratize out alternative actions to their consequences”. Recent defenders of social-constructivist approaches to conscious selfhood include Jaynes, Gilles Deleuze, Charles Taylor, Daniel Dennett, Tor Norretranders, J. D. Velleman, Daniel Hutto, John Protevi, and James Austin (and many others).

I would thus say that an earthworm is “aware” of certain properties in the environment, but that it is, strictly speaking, not “conscious” because it does not have the right sort of higher-order metacognitive awareness. There are strong theoretical and empirical reasons for denying nonhuman animals the capacity for second-order cognition. While it is certainly possible to use a second-order explanation for nonhuman animal behavior, for any given case I guarantee that there is a first-order explanation that is biologically plausible and theoretically adequate to account for all the facts. I also think that first-order explanations are more metaphysically parsimonious and have more predictive power precisely because they are more biologically realistic, given their dependence on dynamic systems theory and autopoietic, adaptive self-determination. So what is consciousness proper? Consciousness

…is an operation rather than a thing, a repository, or a function. It operates by way of analogy, by way of constructing an analog space with an analog “I” that can observe that space, and move metaphorically in it. It operates on any reactivity, [consciously selects] relevant aspects, narratizes and [assimilates] them together in a metaphorical space where such meanings can be manipulated like things in space. Conscious mind is a spatial analog of the world and mental acts are analogs of bodily acts. (Jaynes, 1976)

BONUS:

For a more systematic account of my theory of cognition and consciousness, check out my paper that was recently published in Phenomenology and the Cognitive Sciences, “What is it like to be nonconscious? A defense of Julian Jaynes”:

http://www.springerlink.com/content/e832238u36211688/

(for those without university access, the preprint copy can be found here)

2 Comments

Filed under Consciousness, Philosophy, Psychology

The pipedream of genetic astrology

The relationship between genes and visible traits is very different from the way in which it is usually presented to the public. The idea that a gene is a sequence of DNA that codes for a product, and variations in the DNA sequence can cause a difference in the product and hence in the phenotype, is just too simplistic. Coding sequences are only a small part of DNA, and DNA is just a part of the cellular network that determines which products are produced. When and where these products are produced depends on what goes on in other cells and what the environmental conditions are like. Cellular and development networks are so complicated that there is really no chance of predicting what a person will be like merely by looking at their DNA. Although it has considerable rhetorical and marketing power, the dream of genetic astrology is just that – a dream.

~Eva Jablonka and Marion J. Lamb, Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life, p. 67

2 Comments

Filed under Random

Most groundbreaking psychology text of the last decade?

Although I am only 110 pages in, I think the answer is Iain McGilchrist’s The Master and His Emissary: The Divided Brain and the Making of the Western World. The book is ridiculously well researched. The second chapter alone, where McGilchrist synthesizes an enormous amount of data concerning the functions of each respective hemisphere, has a staggering 525 endnotes, each citing one or more scientific studies. The scholarly work that went into this book is epic.

What drew me to the book was its Jaynesian thesis: the brain, and hence the mind, is fundamentally divided. McGilchrist basically argues that we can make sense of the history of human civilization in terms of how the left and right hemispheres functionally developed over time, with the right hemisphere being the “Master” and the left hemisphere being the “Emissary”. This idea of a master-slave relationship is more or less similar to the Jaynesian distinction between the god-complex and the human-complex, respectively. The rise of modernity occurred when the Emissary increasingly isolated itself from the Master, locking itself into a self-determined logical cage, viewing the world through an objective, mechanical lens.

Like I said, I am only 110 pages in. But this book has already stunned me in its scope and significance. We can no longer talk about brain function without recognizing the fundamental asymmetries between the left and right hemispheres.


4 Comments

Filed under Psychology

On the Direct Perception of a Property

The debate between direct perception and indirect perception has been going on for quite some time. Indirect theorists often point to anatomical facts, such as the existence of afferent and efferent nerves, as indicating that perception must be indirect: simple anatomy tells us that the stimulus has to be transduced and shuttled through various nervous channels before being cognitively processed and transformed into a genuine “perception”. But if we are going to make theoretical progress, we must realize that anatomical facts will not settle this debate.

Direct theorists have never denied that the perceptual process can be artificially decomposed into anatomical facts. Both sides can agree that a stimulus must pass through various nervous channels; it does not get a free pass straight to the mind, and the stimulus must be mediated by the brain. But what is the nature of this stimulus? Theorists on both sides rarely make their definition of the stimulus explicit. It is assumed that everyone knows what everyone else means when they talk about the perceptual stimulus. This is a mistake. The issue is much more complicated than it first seems.

Indirect theorists often start their psychologizing from the perspective of neuroanatomy and physiology. They first zoom in closely on the retina and attempt to build a psychological model of vision beginning with the meaningless physical intensities described by the physical sciences. It is usually assumed that any psychological theory of visual perception must explain how the brain interprets this raw physical data (“sense-data”) and converts it into a meaningful percept. Sometimes this transition from meaninglessness to meaning is talked about in terms of the generation of true beliefs or true representations. But the essential question is always: how do you go from raw physical data to meaningful perception when the meaningless physical intensities are highly ambiguous and often irrelevant?

Direct theorists also make a distinction between meaningless sensation and meaningful perception, but they reject the idea that the perceptual stimulus is meaningless. The classic example comes from the Ganzfeld experiments. Twentieth-century vision scientists discovered that if the physical stimulus is undifferentiated, meaningful perception fails to occur even though the visual system is being stimulated. Direct theorists thus make a distinction between sensory stimulation and stimulus information. Imagine standing in an open field on a bright, cloudless day. When you orient yourself such that the sky fills your entire visual field, your sensory receptors are being stimulated, but no meaningful perception occurs because there is no information to be differentiated: the stimulus is entirely homogeneous and undifferentiated. In other words, the undifferentiated sky contains no stimulus information, although it is stimulating.
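A crude way to see the stimulation/information distinction is to compare sheer intensity with differentiated structure. In the sketch below, mean intensity stands in for stimulation and local contrast stands in for stimulus information; treating local contrast as a proxy for “information” is my own simplification for the sake of the example, not Gibson’s definition.

```python
# Stimulation vs. stimulus information, crudely: a bright but uniform field
# stimulates the receptors yet contains no differentiated structure.

def mean_intensity(field):
    """Sheer stimulation: average intensity across the field."""
    return sum(sum(row) for row in field) / (len(field) * len(field[0]))

def local_contrast(field):
    """Differentiated structure: average difference between adjacent samples."""
    diffs = [abs(row[i + 1] - row[i]) for row in field for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

ganzfeld = [[0.8] * 8 for _ in range(8)]                             # uniform bright sky
scene = [[(x * y) % 3 / 2 for x in range(8)] for y in range(8)]      # differentiated layout

for name, field in [("ganzfeld", ganzfeld), ("scene", scene)]:
    print(name, "stimulation:", round(mean_intensity(field), 2),
          "structure:", round(local_contrast(field), 2))
```

The ganzfeld scores high on stimulation and zero on structure, which is the direct theorist’s point: being stimulated is not the same as being informed.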

Now here is the important point. The facts associated with sensory stimulation are facts of an anatomical or physiological nature. But they are not psychological facts. We cannot decide between an indirect theory and a direct theory on the basis of these physiological facts. We must focus on the perception of meaningful stimulus information.

Indirect theorists explain meaningful stimulus information with a mix of association psychology and computational representationalism. Meaningful percepts are generated whenever the cognitive system makes certain inferences (associations) from the raw stimulus, with premises that are either innate or learned through experience. Classic cognitive science talks about explicit symbol systems and generalized intelligence, but modern computational stories have become more and more complex. Almost all of them, however, assume that the quintessential problem for visual perception is turning meaningless data into meaningful perceptions. This is nothing less than the mind/body problem applied to visual science.
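To caricature that inferential story in a few lines: a percept is treated as an inference from ambiguous raw data plus premises (priors) that are either innate or learned. The two hypotheses and all the numbers below are invented purely for illustration.

```python
# Minimal caricature of the indirect, inferential story: ambiguous data plus
# premises (a prior) yield a percept via inference. Hypotheses and numbers
# are invented for illustration only.

def posterior(priors, likelihoods):
    """Bayes' rule over a dictionary of hypotheses."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: round(p / total, 3) for h, p in joint.items()}

# The same retinal image size could come from a small nearby object or a
# large distant one; the raw data alone are ambiguous.
likelihood_of_image = {"small and near": 0.5, "large and far": 0.5}

# A premise, innate or learned: large distant objects are more common here.
prior = {"small and near": 0.2, "large and far": 0.8}

print(posterior(prior, likelihood_of_image))
# The 'percept' favors "large and far", driven by the premise, not the data.
```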

But direct theorists reject this approach altogether. Although direct theorists admit that stimulation is sometimes meaningless (such as when we are looking at the undifferentiated sky or in a snow storm), they emphatically insist that, under normal circumstances, the immediate terrestrial environment is differentiated and highly organized. The differential structure of the ambient energy fields surrounding an organism is informationally rich. But not in the Shannon cybernetics sense of information (which was never meant to be a psychological theory). The environment is informationally rich insofar as it contains information specific to affordances.

Direct theorists claim that the ambient energy fields are filled to the brim with redundant information specific to affordance properties. Affordance properties are real, objective facts about the environment. I thus disagree with Chemero and side with Reed on the ontological status of affordances. On my view, affordances are real properties of the environment that persist through time. The fact that the ground will afford my locomotion upon it is a fact that is independent of whether I actually utilize the ground for the purpose of locomoting. But it would be a mistake to think that this fact about the world is a molecular or local fact. The fact that the ground surface supports locomotion is a molar fact.

If we look at the ground on the timescale of millions of years, the ground is but a ceaseless flow of energy, ever shifting and changing. On the ecological timescale, however, the ground is stratified, ossified, and stabilized. And since our perceptual systems are tuned into this ecological scale, we do not perceive the molecular flux of the ambient energy fields. The ground is perceived as a continuous rigid surface with the property of “supportability”. This is an affordance-property. The detection of such properties by the nervous system is highly useful. We can expect that evolved systems would be optimally tuned to detect these properties because they are facts about the environment most relevant to survival.

We can cash this out psychologically in terms of how the perceptual systems seek information specific to these affordance properties. The affordance-property of supportability is a persisting fact about the ground surface. Under normal evolutionary conditions, the perception of this affordance-property serves to coordinate the motor system and enable successful navigation through the terrestrial environment.

Take Herbert Simon’s example of an ant crawling along the beach surface. At first blush, its locomotive pathway seems highly complex and difficult to explain. The indirect theorist would attempt to explain its locomotive patterns in terms of internal control, wherein the motor system is totally in charge of directing where to place each leg. The direct theorist, however, would explain its locomotive patterns by saying that the ant is merely following the contours of the sand. Rather than the ant controlling itself from within, the environment is guiding the ant. Put another way, the ant is using the affordance-properties of the beach to coordinate and regulate its behavior. The pathway looks complex only because the sand surface is complex; the psychological control is actually quite simple.
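Here is a toy version of Simon’s ant. The terrain function and the steering rule are illustrative assumptions, not a model of real ant locomotion; the point is that the control rule is a single line (step onto whichever nearby patch of sand is lowest), yet the path it traces is as convoluted as the beach itself.

```python
# Simon's ant, toy version: a trivial rule plus a complex surface yields a
# complex path. Terrain and rule are illustrative assumptions.
import math

def sand_height(x, y):
    """A bumpy but deterministic 'beach' surface."""
    return math.sin(1.3 * x) * math.cos(1.7 * y) + 0.5 * math.sin(3.1 * x + 2.3 * y)

def walk(steps=40, step_size=0.3):
    x, y, path = 0.0, 0.0, []
    for _ in range(steps):
        # Candidate headings fanned out ahead of the ant; the whole
        # 'psychology' is picking the locally lowest patch of sand.
        candidates = [(x + step_size * math.cos(a), y + step_size * math.sin(a))
                      for a in (0.0, 0.6, -0.6, 1.2, -1.2)]
        x, y = min(candidates, key=lambda p: sand_height(*p))
        path.append((round(x, 2), round(y, 2)))
    return path

print(walk()[:10])   # a winding trajectory from a one-line control rule
```

Swap in a different beach and the “behavior” changes while the rule inside the ant stays the same, which is just the point about external control.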

On the neural level, we can say that there is an intrinsic flexibility and variability in the nervous system; otherwise the system would never be able to handle the complexity and novelty of the ever-changing environment. However, the persisting affordance properties of the environment are sought out and detected so as to help coordinate motor behavior. Rather than the perceptual stimulus being a raw mechanical instruction, the perceptual stimulus helps “select” or “trigger” useful patterns of neural activity from the intrinsic variability. Faced with the same tasks and problems over a developmental life cycle, certain patterns are going to be burnt in that help the animal cope with the environment. But it would be a mistake to decompose the task of action-coordination into purely internal neural circuitry. The affordance theory recognizes that animals use both internal and external means of coordinating behavior. The neural system readily uses information specific to affordances to regulate behavior. This means that some behavioral control is “external”. The problem, then, is not: how does the brain generate meaning from meaningless data? Rather, the problem is: how does the brain seek out meaningful information and then use it to regulate and coordinate its autonomous behavior?
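Finally, a sketch of the “selection from intrinsic variability” idea: the system maintains a repertoire of spontaneously generated activity patterns, a stimulus selects whichever pattern best resonates with it, and repeated selection strengthens (“burns in”) that pattern. The matching rule and the numbers are, again, my own illustrative assumptions.

```python
# Selectionist toy: stimuli do not instruct responses from scratch; they
# select and strengthen patterns already present in the system's intrinsic
# variability. All numbers and the matching rule are illustrative.
import random

random.seed(1)

repertoire = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(6)]
strengths = [1.0] * len(repertoire)   # how strongly each pattern is 'burnt in'

def respond(stimulus):
    """Select the pattern that best matches the stimulus, weighted by strength."""
    def score(i):
        match = sum(p * s for p, s in zip(repertoire[i], stimulus))
        return strengths[i] * match
    winner = max(range(len(repertoire)), key=score)
    strengths[winner] += 0.1          # repeated selection entrenches the pattern
    return winner

# A recurring environmental feature that happens to resonate with one
# of the intrinsic patterns.
stimulus = [p + 0.1 for p in repertoire[2]]
print([respond(stimulus) for _ in range(5)])   # the winning pattern keeps winning
```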

Leave a comment

Filed under Philosophy, Psychology