Heidegger and AI

Why Heideggerian AI Failed and How Fixing it Would Require Making it More Heideggerian

This is a really interesting paper. In it, Hubert Dreyfus, known for his book What Computers Can’t Do, goes over why some of the better-known AI projects have failed and explores some worthwhile avenues where AI might yet succeed.

[In the 1960s] AI researchers were hard at work turning rationalist philosophy into a research program.

Dreyfus is referring to the physical symbol system hypothesis of Newell and Simon, which strove to show empirically that what is “really going on” in minds is the systematic shuffling of symbols. By setting up the framework of AI in terms of this input>>processing>>output “boxology”, AI researchers attempted to demonstrate that the brain is really a very complicated information processor that could in principle be replicated in a silicon medium. After all, if all that matters is the “function” of information processing, then the actual substrate of the mind is irrelevant; all that matters is the algorithms, or “software”, running on top of the “hardware”. Notice that this entire research paradigm of AI, derived from cognitive science, is based on the metaphor of the computer. It is this metaphor that Dreyfus wants to combat and replace with a more phenomenologically accurate account of what goes on when humans with minds interact with their environment.
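To make the “boxology” concrete, here is a minimal sketch of symbol shuffling in the spirit of the physical symbol system hypothesis. The facts and rules are invented for illustration; this is not Newell and Simon’s actual Logic Theorist or General Problem Solver, just the bare input>>processing>>output shape of the idea.

```python
# A toy "physical symbol system": cognition modeled as rule-governed
# symbol manipulation, indifferent to the hardware it runs on.
# (Illustrative only -- the symbols and rules here are invented.)

RULES = {
    ("hungry", "has_food"): "eat",
    ("hungry", "no_food"): "seek_food",
    ("tired",): "sleep",
}

def process(symbols: tuple) -> str:
    """Map an input symbol structure to an output symbol by rule lookup."""
    return RULES.get(symbols, "do_nothing")

print(process(("hungry", "has_food")))  # -> "eat"
```

On this picture, the `process` step is all there is to intelligence; nothing about the body or the world enters except as more input symbols.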

Dreyfus uses the “frame problem” as a prime example of why this traditional symbol-shunting, representationalist program was doomed from the beginning. The frame problem is the problem of knowing the relevant context for a particular situation: an AI program needs to know which pieces of its knowledge are relevant to the situation at hand in order to realistically cope with the world. As Dreyfus is apt to point out, the human world of meaning is saturated with significance precisely because we are immersed in a “referential totality”. So, for example, the human use of tools can’t be modeled with “brute force”, because whenever we use a hammer, the referential totality of nails and what-we-are-hammering-for comes into play. Hammers have a particular way of being because they are embedded in a cultural “existential matrix” that is imparted onto the human world through the communal use of language.
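A small sketch can show where the frame problem bites. In a symbolic system, every action must explicitly state which facts it changes and, implicitly, which of the indefinitely many background facts stay the same; nothing in the formalism itself says which facts could ever be relevant. The predicates below are invented for illustration, not drawn from any actual planner.

```python
# Toy illustration of the frame problem in a symbolic world model.
# (All predicates are invented for this example.)

state = {
    "nail_in_wall": False,
    "picture_hung": False,
    "wall_color": "white",
    "room_temp": 21,
}

def hammer_nail(state: dict) -> dict:
    new_state = dict(state)           # copy everything forward...
    new_state["nail_in_wall"] = True  # ...the one fact the action changes
    # Everything else (wall_color, room_temp, ...) is simply assumed to
    # persist. The system has no way to tell which of the arbitrarily
    # many unchanged facts might matter -- that relevance has to be
    # hand-coded, and the hand-coding explodes as the world grows.
    return new_state
```

The copy-and-overwrite trick works for a four-fact world; Dreyfus’s point is that it cannot scale to a world whose significance is inexhaustible.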

Dreyfus concludes that for an AI to get past this crucial problem of contextual relevance, it would need to be imbued with particular “bodily needs” so that it could “cope” with the world. In other words, the AI needs to be embodied and embedded in the world so that things hold a particular significance for the program; otherwise it will never be able to act intelligently in the world. You can’t develop a truly artificial intelligence based on pure symbol shunting, because the significance of the world stems not from our brains “processing” symbols, but rather from the entire referential totality of culture. We can’t escape the fact that our intelligence results from persons coping with an environment.
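For contrast with the frame-problem sketch above, here is one crude way to picture the embodied alternative: relevance generated by the agent’s own needs rather than looked up in rules. This is a loose sketch of the idea only; the drives, objects, and numbers are invented, and Dreyfus does not propose any such concrete mechanism.

```python
# A toy "embodied" agent: what shows up as salient depends on the
# agent's current bodily needs, not on an explicit relevance rulebook.
# (Drives, affordances, and values are invented for illustration.)

class Agent:
    def __init__(self):
        self.hunger = 0.9   # a "bodily need" in [0, 1]
        self.fatigue = 0.2

    def salience(self, thing: str) -> float:
        """How strongly a thing solicits the agent, given its needs."""
        affordances = {
            "apple": ("hunger", 0.8),
            "bed": ("fatigue", 0.9),
            "hammer": ("hunger", 0.0),
        }
        need, fit = affordances.get(thing, (None, 0.0))
        return fit * getattr(self, need, 0.0) if need else 0.0

agent = Agent()
# The hungry agent's world reorganizes itself: the apple stands out,
# the hammer recedes. Relevance is generated by the situation.
print(sorted(["apple", "bed", "hammer"], key=agent.salience, reverse=True))
```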
