A Twist on Searle’s Chinese Room Argument: Why Rules Are Not Enough

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

Searle has cooked the books with this thought experiment by setting up the Chinese Room in an artificial manner. I contend that if we tweak the thought experiment slightly, we will get an entirely different result: no fluent Chinese speaker would believe the machine understands Chinese.

In Searle’s version, the only inputs are Chinese characters written neatly on a slip of paper and inserted through a slot. In my version, however, the Chinese speakers may write or draw anything on the slip of paper: pictures, graphs, arbitrary symbols, and so on. Furthermore, the piece of paper is simply held up to an optical scanner rather than being inserted through a slot.

The crucial twist is this: what happens when a Chinese speaker draws a picture that would be culturally relevant only to native Chinese persons (e.g. a cartoon character from a favorite TV show), draws an arrow to it, and writes in Chinese, “What is that? Explain its cultural relevance”?

For Searle, sitting in the Chinese Room, to answer this question, his “rule book” would need to be vastly more complicated: it would need to be capable of visual pattern recognition and of acquiring cultural knowledge. What if the instruction manual was written in 1999 but the cultural symbol dates from 2013? How could the manual recognize the drawing and answer the Chinese speakers’ question? The only way this could work is if the Chinese Room were continuously fed information from the outside world. But once you add this stipulation, the “understanding” of the Chinese Room begins to look more genuine, because this simple “instruction manual” is now capable of novel visual recognition and of processing complex cultural information, and it is hooked up in real time to a complex causal network.
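To make the contrast concrete, here is a minimal sketch in Python (all the names and toy data below are my own hypothetical illustrations, not anyone’s actual system). A Searle-style rule book is just a fixed table from input strings to output strings, so an input it was never written to cover, such as a drawing of a post-1999 cartoon character, simply falls through. Answering it requires two additions: something playing the role of visual recognition, which maps the drawing to a concept, and a knowledge store that is continuously updated from the outside world.

```python
# A purely illustrative sketch; every name and data item here is hypothetical.

class StaticRuleBook:
    """Searle-style rule book: a fixed table from input symbols to output symbols."""

    def __init__(self, rules):
        self.rules = rules

    def respond(self, symbols):
        # Anything not literally covered by the rules falls through.
        return self.rules.get(symbols, "???")


class SituatedRoom(StaticRuleBook):
    """The 'beefed up' room: adds perception and a continuously updated knowledge store."""

    def __init__(self, rules):
        super().__init__(rules)
        self.cultural_knowledge = {}   # kept current by a live feed from the outside world
        self.recognizer = {}           # toy stand-in for visual pattern recognition

    def ingest_update(self, image_signature, concept, explanation):
        # New cultural material, e.g. a cartoon that first aired after the rule book was written.
        self.recognizer[image_signature] = concept
        self.cultural_knowledge[concept] = explanation

    def respond(self, symbols, image_signature=None):
        if image_signature is not None:
            concept = self.recognizer.get(image_signature)
            return self.cultural_knowledge.get(concept, "???")
        return super().respond(symbols)


rules_1999 = {"你好吗？": "我很好。"}

rule_book = StaticRuleBook(rules_1999)
print(rule_book.respond("你好吗？"))          # covered by the table
print(rule_book.respond("那是什么？"))        # the picture question: falls through to "???"

room = SituatedRoom(rules_1999)
room.ingest_update("sketch-of-cartoon-character", "喜羊羊",
                   "A character from a popular Chinese cartoon.")
print(room.respond("那是什么？", image_signature="sketch-of-cartoon-character"))
```

The toy recognizer dictionary stands in for what would, in any real system, be genuine perceptual machinery; the point is only that a static table cannot cover inputs that did not exist when it was written.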

As I see it, Searle has two options, and neither works in his favor. First, he can admit that the Chinese Room could not answer questions about pictures drawn on the slips of paper. But conceding that this particular rule book fails is not an argument that building such a machine is impossible. Second, he could beef up the Chinese Room by hooking it up to the outside world, so that the rule book stays up to date on what’s going on in Chinese culture. But that kind of rich, ongoing causal interaction with the environment starts to look like the interaction typical of humans and their surroundings, undermining Searle’s intuition pump that it’s “obvious” the Chinese Room does not “really” understand.
