[The idea of information processing] sometimes leads to serious confusions. The most seductive confusion could be called the Myth of Double Transduction: first, the nervous system transduces light, sound, temperature, and so forth into neural signals (trains of impulses in nerve fibers) and second, in some special central place, it transduces these trains of impulses into some other medium, the medium of consciousness! That’s what Descartes thought, and he suggested that the pineal gland, right in the center of the brain, was the place where this second transduction took place — into the mysterious, nonphysical medium of the mind. Today almost no one working on the mind thinks there is any such nonphysical medium. Strangely enough, though, the idea of a second transduction into some special physical or material medium, in some yet-to-be-identified place in the brain, continues to beguile unwary theorists. It is as if they saw — or thought they saw — that since peripheral activity in the nervous system was mere sensitivity, there had to be some more central place where the sentience was created. After all, a live eyeball, disconnected from the rest of the brain, cannot see, has no conscious visual experience, so that must happen later, when the mysterious x is added to mere sensitivity to yield sentience.
~Daniel Dennett, Kinds of Minds, p. 72
Dennett’s Kinds of Minds has been on my to-read list for quite some time, and I am glad that I am finally getting around to reading it. Although I am still on the fence about the philosophical utility of the so-called “intentional stance” and the metaphysical agnosticism it seems to lead to, I am very much sympathetic to Dennett’s ideas on minds, especially his view of the difference between animal minds and human minds and his emphasis on the importance of language for transforming sensitive-reactive systems into minds proper. Dennett also seems to perfectly understand the looming threat of Cartesian dualism behind even the most hard-nosed scientific reductionisms. Understanding the Myth of Double Transduction is crucial for understanding why the search for the Neural Correlates of Consciousness is a bankrupt research program, one that starts from the illicit assumption that phenomenal experience is somehow “produced” or “generated” in the brain like a special material substance.
Coming back to metaphysical agnosticism, though, I am troubled by Dennett’s willingness to call anything an intentional system so long as we can appropriately treat it as if it were one. This “stance view” seems to waver on the real metaphysical question of demarcating “true minds” from pseudominds. Presumably, Dennett holds onto the stance view because he thinks that robots could, in principle, have genuine minds, and that anything except a stance-oriented, functionalist position would amount to some kind of biological chauvinism. However, I’m not sure that functionalism necessarily implies a stance-oriented view. It seems to me that we could use a kind of microfunctionalism to draw a strong demarcation between real minds and pseudominds (like thermometers), while still preserving a sense of mind that an advanced robot could theoretically possess in the future. Dennett thinks this push for realism and philosophical clarity leads to all kinds of chauvinisms, but I don’t think such a chauvinism is at odds with functionalism, provided we are clear about the kinds of functions unique to biological systems, or at least very difficult to achieve artificially (autonomy, self-maintenance, homeostatic regulation, etc.). Instead of saying that an intentional system is merely whatever can be appropriately treated as if it were a mind (while remaining agnostic about what it really is), we could offer a genuine demarcation criterion for minds and say that robots or thermometers either succeed or fail in meeting it, thereby securing their metaphysical status (I think thermometers fail to qualify as minds and are at best pseudocognitive systems). We could still account for our propensity to overestimate the extent to which inanimate objects have minds, and for the explanatory utility of taking the “intentional stance,” by treating that stance as a late-blooming evolutionary adaptation (or, most likely, an exaptation).
In my opinion, Dennett buys into too much of Jamesian pragmatism, which seems to waver on metaphysical issues for the sake of philosophical productivity (“The intentional stance is such a useful way of talking!”). I want to know what minds really are, independently of any stance we might take towards them. But such a realism about minds certainly doesn’t necessitate dualism, nor does it necessitate an essentialism about minds, biological chauvinism, or abandonment of the functionalist position.
Just my thoughts.