I was just reading the Stanford Encyclopedia of Philosophy article on Searle’s Chinese room and I wanted to share this great paragraph:
A further related complication is that it is not clear that computers perform syntactic operations in quite the same sense that a human does—it is not clear that a computer understands syntax or syntactic operations. A computer does not know that it is manipulating 1’s and 0’s. A computer does not recognize that its binary data strings have a certain form, and thus that certain syntactic rules may be applied to them, unlike the man inside the Chinese Room. Inside a computer, there is nothing that literally reads input data, or that “knows” what symbols are. Instead, there are millions of transistors that change states. A sequence of voltages causes operations to be performed. We may choose to interpret these voltages as binary numerals and the voltage changes as syntactic operations, but a computer does not interpret its operations as syntactic or any other way. So perhaps a computer does not need to make the move from syntax to semantics that Searle objects to; it needs to move from complex causal connections to semantics. Furthermore, any causal system is describable as performing syntactic operations—if we interpret a light square as logical “0” and a dark square as logical “1”, then a kitchen toaster may be described as a device that rewrites logical “0”s as logical “1”s.
This seems right to me. The question, then, is not how we get from syntax to semantics, but whether the MIND IS A COMPUTER metaphor is even worthwhile once we concede that we don’t know how the computer actually “computes” or “knows” anything, as opposed to simply changing physically in accordance with voltages. Accordingly, if the computer does not “know” or “read” syntax except metaphorically, then what is going on when an organism “knows” the world? It seems unlikely that a messy biological brain would be doing the same thing as a computer.
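The encyclopedia’s toaster point, that syntactic description is observer-relative, can be made concrete in a few lines of code. This is just my own sketch (the state names and mappings are invented for illustration, not drawn from the article): the same physical state sequence supports incompatible “syntactic” readings, so the syntax is in the interpreter, not the device.

```python
# The toaster's physics: bread goes from light to dark. That is all that
# physically happens; nothing below is "in" the toaster itself.
physical_states = ["light", "dark"]

# Interpretation A: light = "0", dark = "1".
# Under this mapping the toaster "rewrites 0s as 1s".
interp_a = {"light": "0", "dark": "1"}

# Interpretation B: the reverse mapping is physically just as legitimate.
# Under this one, the very same device "rewrites 1s as 0s".
interp_b = {"light": "1", "dark": "0"}

print([interp_a[s] for s in physical_states])  # ['0', '1']
print([interp_b[s] for s in physical_states])  # ['1', '0']
```

Nothing about the state transition picks out one mapping over the other; the “syntactic operation” exists only relative to our choice of interpretation, which is exactly the gap between causal connections and semantics that the quoted paragraph identifies.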
If computational cognitivism is based on a flawed analogy, then we need to reconsider why we abandoned behaviorist approaches to cognition. In other words, if the mind is not “computing” the world when it knows the world, then the most obvious alternative is that knowledge is for behavior: when the animal mind “knows” the world, it knows it in terms of the possibilities and affordances of physical behavior. Perceptual cognition then becomes reactive, organic behavior rather than representational computation. The ontological bifurcation between syntax and semantics is replaced by a form of being-in-the-world, and the question becomes how our physical bodies resonate to the environment so as to achieve an optimal bodily grip.
But wait, did we not already learn our lesson about behaviorism? Psychology shifted into the Cognitive Revolution because there are some human behaviors that cannot be explained in terms of a complex behavioral resonance. What are these behaviors? Introspection, internal workspaces with conscious content manipulation (working memory, visual sketchpads, phonological loops, etc.), narratization, advanced social cognition, conscious planning, episodic and autobiographical memory, executive impulse control and decision making, etc. Can these epistemic actions be intelligibly explained in terms of a complex behaviorism? It seems unlikely. But the takeaway message here is that we need not explain the basic biological coping of “knowing” the world with the same explanatory framework we use to explain the more recent, more advanced epistemic actions of conscious content manipulation.
With this distinction between online coping and offline thinking we can deal with many of the philosophical problems associated with theories of mind, including qualia, inverted spectra, and the explanatory gap. But that is for another post.