Tag Archives: Dreyfus

Coming Around on Representationalism

I have been reading a lot of Dreyfus and Heidegger lately, and naturally, I have been leaning slightly towards the anti-representationalist camp. By anti-representationalism, I mean the school of thought that de-emphasizes the role of representations in cognition in favor of an embodied, enactive approach to traditional problems in the philosophy of mind. Don’t get me wrong, I am still in favor of such approaches, but thanks to a discussion over at Pete Mandik’s blog, I have turned a more sympathetic ear to the representationalist camp.

Two papers linked in the blog discussion made me rethink my position. The first is a reply to Dreyfus by Rick Grush and Pete Mandik. In the paper they argue that representations have explanatory usefulness and, furthermore, that just because an action is context-dependent doesn’t mean it isn’t representational. They also defend representationalism on phenomenological grounds with examples such as the ability to represent alternative chess positions while playing. Dreyfus would counter that truly “skilled” grandmasters do not form such representations but rather engage the chessboard and “deal” with it non-representationally. I think Dreyfus would be right, but that would be an exceptional case. I imagine that most people cannot cope with the chessboard in such a manner and have to consciously represent the board and alternative possibilities.

The second paper that pushed me further from the anti-representationalist camp, posted by Eric Thomson, was by William Bechtel. In it, Bechtel discusses dynamical systems theory and the role of representations in explanatory models of cognition. He defuses the revolutionary character of dynamical systems theory and instead shows how such approaches can complement more traditional representational and mechanistic explanatory models.

So, while I still hold that for some cases, such as action, a minimal representational approach is superior, thanks to Mandik and Bechtel, I have become much more sympathetic towards explanatory models of cognition that utilize representations.


Filed under Philosophy, Psychology

Dreyfus Strikes Again

Heterophenomenology: Heavy-handed sleight-of-hand

Abstract: We argue that heterophenomenology both over- and under-populates the intentional realm. For example, when one is involved in coping, one’s mind does not contain beliefs. Since the heterophenomenologist interprets all intentional commitment as belief, he necessarily overgenerates the belief contents of the mind. Since beliefs cannot capture the normative aspect of coping and perceiving, any method, such as heterophenomenology, that allows for only beliefs is guaranteed not only to overgenerate beliefs but also to undergenerate other kinds of intentional phenomena.

I thought this was an interesting critique of Dennett’s heterophenomenology. If you don’t know, heterophenomenology is a research methodology that acts as “a bridge – the bridge – between the subjectivity of human consciousness and the natural sciences.” Essentially, the heterophenomenologist is an objective gatherer and interpreter of first-person subjective reports who doesn’t construe the reporter as completely authoritative.

What this interpersonal communication enables you, the investigator, to do is to compose a catalogue of what the subject believes to be true about his or her conscious experience.

So, the heterophenomenologist interprets all intentional phenomena as beliefs. This is a problem for Dreyfus and Kelly because it overgenerates mental content. They use the example of going out of a door to illustrate their point on overgeneration. If you ask someone going out of a door whether they “believed there was a chasm on the other side”, they might say yes, but in reality, as they were going out of the door, they were thinking no such thing but were merely responding to the “to-go-out” solicitation given by the door. No beliefs were involved in the act at all, just pure motor intentionality.

This last point on “motor intentionality” is crucial, because Dreyfus and company also accuse the heterophenomenologist of undergenerating intentional contents.

But to deny that skillful coping involves belief is not to deny that it lacks intentional content altogether. There is a form of motor-intentional content that is experienced as a solicitation to act. This content cannot be captured in the belief that I’m experiencing an affordance. Indeed, as soon as I step back from and reflect on an affordance, the experience of the current tension slips away. Since beliefs cannot capture this normative aspect of coping and perceiving, any method, such as heterophenomenology, that allows for only beliefs is guaranteed not only to overgenerate beliefs but also to undergenerate other kinds of intentional phenomena.



Heidegger and AI

Why Heideggerian AI Failed and How Fixing it Would Require Making it More Heideggerian

This is a really interesting paper. In it, Hubert Dreyfus, known for his book What Computers Can’t Do, goes over why some of the better-known AI projects have failed and explores some worthwhile avenues where AI can succeed.

[In the 1960s] AI researchers were hard at work turning rationalist philosophy into a research program.

Dreyfus is referring to the Physical Symbol System hypothesis of Newell and Simon, which strove to show empirically that what is “really going on” in minds is the systematic shuffling of symbols. By setting up the framework of AI in terms of this input>>processing>>output “boxology”, AI researchers attempted to demonstrate that the brain is really a very complicated information processor that could in principle be replicated in a silicon medium. After all, if all that matters is the “function” of information processing, then the actual substrate of the mind is irrelevant. All that matters is the algorithms, or “software”, running on top of the “hardware”. Notice that the entire research paradigm of AI, derived from cognitive science, is based on the metaphor of the computer. It is this metaphor that Dreyfus wants to combat, replacing it with a more phenomenologically accurate account of what goes on when humans with minds interact with their environment.

Dreyfus uses the “frame problem” as a prime example of why this traditional symbol-shunting, representationalist program was doomed from the beginning. The frame problem is simply the problem of knowing the relevant context for a particular situation. AI programs need to know what particular knowledge is relevant to the situation in order to realistically cope with the world. As Dreyfus is apt to point out, the human world of meaning is saturated with significance precisely because we are immersed in a “referential totality”. So, for example, modeling the human use of tools can’t be done by “brute force”, because whenever we use a hammer, the referential totality of nails and what-we-are-hammering-for comes into play. There is a particular way of being of hammers because they are embedded in a cultural “existential matrix” that is imparted onto the human world through the communal use of language.

Dreyfus concludes that for an AI to get past this crucial problem of contextual relevance, it would need to be imbued with particular “bodily needs” so that it could “cope” with the world. In other words, such an AI needs to be embodied and embedded in the world so that the world has a particular significance for the program; otherwise it will never be able to act intelligently. You can’t develop a truly artificial intelligence through pure symbol shunting, because the significance of the world stems not from our brains “processing” symbolically, but rather from the entire referential totality of culture. We can’t escape the fact that our intelligence results from persons coping with an environment.

