A Twist on Searle’s Chinese Room Argument: Why Rules Are Not Enough

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

Searle has cooked the books with this thought experiment by setting up the Chinese Room in an artificial manner. I contend that if we tweak the thought experiment slightly, we will get an entirely different result: no fluent Chinese speaker would believe the machine understands Chinese.

In Searle’s version, the only inputs are Chinese language characters written neatly on a slip of paper and inserted into a slot. In my version, however, the Chinese speakers are allowed to write or draw anything on the slip of paper, including pictures, graphs, arbitrary symbols, etc. Furthermore, the piece of paper is simply held up before an optical scanner rather than being inserted into a slot.

The crucial twist is this: what happens when the Chinese speakers draw a picture that would be culturally relevant only to native Chinese persons (e.g. a cartoon character from a favorite TV show), draw an arrow to the picture, and write in Chinese, “What is that? Explain its cultural relevance”?

In order for Searle, sitting in the Chinese Room, to answer this question, his “rule book” needs to be vastly more complicated, with capacities for visual pattern recognition and cultural knowledge acquisition. What if the instruction manual was written in 1999 but the cultural symbol dates from 2013? How would the instruction manual recognize the drawing in order to answer the Chinese speakers’ question? The only way this would work is if the Chinese Room were continuously fed information from the outside world. But once you add this stipulation, the “understanding” of the Chinese Room begins to look more genuine, because this simple “instruction manual” is now capable of novel visual recognition and of processing complex cultural information, and is hooked up in real time to a complex causal network.
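To make the contrast vivid, here is a minimal sketch (my own toy illustration, not anything Searle describes; every symbol and description in it is invented) of the difference between a rule book frozen at the moment it was written and a room that keeps ingesting new cultural material from outside:

    # Toy sketch only: a "rule book" compiled once in 1999 versus a room that is
    # continuously fed new cultural information. All entries are invented.

    FROZEN_RULEBOOK_1999 = {
        "dragon dance": "A traditional festival performance.",
        # ...compiled once; nothing added after 1999.
    }

    def answer_with_frozen_rules(symbol):
        # Pure lookup: anything absent from the book at compile time is unanswerable.
        return FROZEN_RULEBOOK_1999.get(symbol, "No rule matches this input.")

    class UpdatingRoom:
        # A room continuously fed descriptions from the outside world.
        def __init__(self, initial_rules):
            self.rules = dict(initial_rules)

        def ingest(self, symbol, description):
            # Real-time causal contact with the surrounding culture.
            self.rules[symbol] = description

        def answer(self, symbol):
            return self.rules.get(symbol, "No rule matches this input.")

    print(answer_with_frozen_rules("2013 cartoon character"))  # no matching rule

    room = UpdatingRoom(FROZEN_RULEBOOK_1999)
    room.ingest("2013 cartoon character",
                "A character popular with Chinese audiences since 2013.")
    print(room.answer("2013 cartoon character"))  # now answerable

The point of the sketch is only that the second system’s answers depend on an ongoing causal connection to the world, which is exactly what Searle’s static rule book lacks.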

As I see it, Searle has two options, but neither works in his favor. First, he can admit that the Chinese Room could not answer questions about pictures drawn on the slips of paper. But that is not an argument for the impossibility of constructing such a machine. Second, he could beef up the Chinese Room by hooking it up to the outside world so that the rule book stays up to date on what’s going on in Chinese culture. But this complex causal interaction starts to look like the kind of complex interaction typical of humans and their environment, undermining Searle’s intuition pump that it’s “obvious” that the Chinese Room does not “really” understand.


11 responses to “A Twist on Searle’s Chinese Room Argument: Why Rules Are Not Enough”

  1. Why should *complicating* the syntactic system have the effect of weakening the intuition that this can’t be understanding? It remains the case that rule-governed interactions are all we have, after all.

    I actually agree that it does weaken the intuition, but the important question is: why? Is it simply because it’s easier to anthropomorphize (adopt the intentional stance)? If so, then I’m not sure Searle would be all that concerned.

    Or is there something more elusive at work?

    • Hi rsbakker,

      Thanks for the comment.

      My opinion is pulled in several different directions. On the one hand, the tough-minded metaphysician in me wants to say the thought experiment only proves that the concept of “real understanding” is meaningless, because if the souped-up computer with human-like powers of visual perception and language use does not have understanding, then, from the third-person perspective, there is no reason to think humans “have” understanding either.

      The other angle is to argue that, based on its ability to solve the frame problem and achieve human-level powers of visual perception (no easy feat), the computer must not be engaging in mere rule-following after all, which is compatible with Searle’s original argument that formal-syntactic rule-following is not enough for genuine cognition.

      But then I start worrying about Wittgenstein’s distinction between acting in accordance with a rule and following a rule: perhaps both humans AND computers merely act in accordance with rules, and humans delude themselves into thinking we “follow” or “obey” or “understand” them. I see no non-question-begging way to rule this possibility out.

      Also, any distinction between mind and not-mind, understanding and not-understanding, cognition and not-cognition does not seem to me to be well-grounded, as it is always amenable to counterexamples and sorites paradoxes. To me this indicates there is something rotten in how we think and talk about mental concepts, not that there are higher-order emergent properties like aboutness that are special and distinct from physical matter.

      Sorry for not being clear. Still working these thoughts out.

  2. I never thought I would be caught dead writing as much, but I would trust the ‘tough-minded metaphysician’! I have about as much faith in the ‘feeling of understanding’ as I do in the ‘feeling of willing’ – or the ‘feeling of the apriori’ for that matter. Syntax doesn’t seem to give us semantics because semantics is what syntax looks like to a blind man, and we all suffer metacognitive Anton’s Syndrome.

    Check out http://rsbakker.wordpress.com/2013/09/24/cognition-obscura/. One way of looking at Leibniz’s Mill-type thought experiments is that they simply *add dimensions of information* missing from our intuitive conception. Since we have no way of intuiting that those dimensions are missing, we assume the sufficiency of our intuitive conception. Thus the ‘ghost-like’ character of the ‘mental.’ Thus the incompatibility. All you have to do is look at the difficulties the brain faces cognizing its environments at an everyday grain, and the notion of the brain cognizing its own astronomical complexities in anything other than an adventitious, radically heuristic way becomes out-and-out preposterous.

  3. Sergio Graziosi

    To me, Searle’s Chinese Room argument has always been a disappointment (I usually enjoy Searle’s arguments, even when I disagree); the whole thing (speaking as a programmer) is just silly. Consider this: in order to save time, Searle manages to learn by heart all of the instructions plus the database. He can now “follow the rules” without external help. Does he understand written Chinese now? We can’t be so sure of the answer, right? Moreover, what happens if, by learning the instructions by heart, he comes to understand the logic behind them? Will he know Chinese at this stage?
    What this extension of the thought experiment tells me is that the original formulation is inherently invalid: it breaks the equivalence between the computer and Searle. The first contains all the rules, but the second contains only the ability to find and follow them. Once you put the rules inside the second, you can say that Searle knows the rules, and you won’t be so sure that this doesn’t mean he understands Chinese.
    Has anybody proposed this counter-argument before? I couldn’t find it, which is peculiar, since it looks obvious to me (hence my disappointment). Or am I missing something?

  4. bjf

    i don’t think this twist is crucial. analogously, suppose you ask your grandma about a new TV character, and she has no good reply. that obviously doesn’t show that she fails to understand english.

    i get the sense that more generally, you think that *some* ability to adapt and learn the meanings of new symbols is crucial to understanding a language. you hint at this in saying that a room equipped with a constantly-updated rulebook “begins to look more genuine” because it “has the ability for novel visual recognition and processing complex cultural information”. it’s not a crazy thought, at first glance. but what is the underlying argument for this? you might appeal to causal (i.e. “externalist”; “wide”) theories of meaning/reference for support. on this view, meaning requires a causal connection between a symbol-system and the environment. but that route has already been explored in traditional replies like the ‘robot reply’, and searle will give his standard counters there. i’m not saying searle is right; just that i don’t see this twist as a game-changer.

    dennett, fodor, and rey have endorsed versions of the robot reply:
    http://plato.stanford.edu/entries/chinese-room/#4.2

    re: sorites sequences and mental concepts, nearly *every* concept can be used to construct a sorites sequence. to list just a few: bald, heap, red, tall, chicken, egg, life, death. why think this is a sign of ‘rottenness’? rottenness itself is susceptible to sorites arguments…

  5. “Suppose you ask your grandma about a new TV character, and she has no good reply. that obviously doesn’t show that she fails to understand english.”

    True, but if I handed my grandma a piece of paper with a drawing on it and an arrow pointing to it, she would at least know that I am asking what the symbol means, and I could also draw other abstract shapes/characters that she would recognize but that I bet a formal symbol cruncher wouldn’t. For me the interesting thing about the twist is the idea that you can write ANYTHING on the piece of paper in order to have a conversation with the Chinese Room, not just the “appropriate” syntactical form of correct written Chinese. This makes the problem significantly harder, because human symbol formation can be incredibly abstract and metaphorical, e.g. drawing a “cross” and asking the computer to explain how Christian atonement works.

    As for my argument about why cultural learning is super important, I don’t really have one (see my reply to rsbakker above). However, I’ve always been suspicious that a static and finite rule book could store enough information to deal with truly novel situations. In the paradigm example, Searle always seemed to talk about the rule book as just sitting on his desk, not changing or updating in any way.

    As for the rottenness of “bald, heap, red, tall, chicken, egg, life, death”: all these words could easily be operationalized without any real controversy if we wished to practically resolve real-world debates about who is bald and who isn’t, or what counts as a chicken and what doesn’t. Tallness is an obvious target for operationalization: if you want to know whether X is taller than Y, have both X and Y stand against a ruler and compare the markings on the ruler. Life and death are trickier, but I take it this is a reason why the “science of life” is on shakier ground than the science of who’s taller than whom. My working idea is that certain concepts are less rotten than others to the extent that they can be non-controversially operationalized.
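    To make that procedure concrete, here is a minimal sketch (mine, not part of the original exchange; the heights are invented) of operationalizing the relation “taller than” as a straightforward comparison of two ruler readings:

        # Toy sketch: operationalize "taller than" as a comparison of two
        # ruler readings, given in meters. The heights below are invented.

        def taller_than(height_x_m, height_y_m):
            # Operational procedure: stand X and Y against a ruler, compare markings.
            return height_x_m > height_y_m

        print(taller_than(1.85, 1.80))  # True, and uncontroversially so
        print(taller_than(1.70, 1.80))  # False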

    • bjf

      fair enough re: grandma. but the generalized point about infinitely-flexible learning goes pretty far beyond your example. it’s not clear that even humans have this capability, though obviously we approximate the ideal much better than a static rulebook. anyway, i think the spirit of this line is subsumed under relevant versions of the robot reply.

      another (related) way to think about your suggestion is as a way of saying, roughly: “the chinese room only passes a very circumscribed version of the turing test, one that *merely* involves performance with respect to a limited/fixed/dead language. but humans can do much more than that (including capacities that outstrip language), and some of these capacities matter for intelligence/mentality. so we need to broaden the scope of the test (and the example) to include other behavioral capacities.” again though, the same replies/counters will apply here.

      i want to re-emphasize that i’m not saying that searle is right. just that you can see how the moves in the debate would play out, given what’s been said before.

      can androids dream of electric sheep?

      you give a pretty good operational definition for the “taller than” relation, even though it is still fuzzy (because rulers do not yield perfectly precise measurements). but i want to know about the property “tall”, not the relation “taller than”. how would you operationalize “tall”? i doubt whether you can do it without drawing an arbitrary/stipulative distinction between things that are tall and things that are not-tall. but if you do draw an arbitrary distinction, that opens up your definition to ‘operational controversy’, which (it seems) you want to avoid. nonetheless, i don’t see how the impossibility of uncontroversially precisifying “tall” renders the concept ‘rotten’ in any normatively significant sense.

  6. ” i doubt whether you can do it without drawing an arbitrary/stipulative distinction between things that are tall and things that are not-tall.”

    I agree that the quest for increasingly precise operationalizations will eventually lead to convention, stipulation, or arbitrariness. I can live with stipulation. But this is why I made the point that the debate has to be real-world, with real-world consequences, not merely speculative. If, for example, a Supreme Court case hinged on ascribing the property “tall” to an object, I am almost positive that the Court would come up with an operational procedure that both sides would agree upon. If there were grant proposals for engineering projects that hinged on getting clear on what the word “tall” means, I bet all parties involved in the project could agree on a procedure for more or less reliably ascribing the predicate “tall”.

    In other words, the “operational controversy” has to be a real controversy in the real world, where actual people greatly care to resolve a debate about whether an object is “tall”. However, given that I am aware of no real intellectual controversy that hinges on ascribing the predicate “tall” to something, I am fine with making it more or less arbitrary, or basing it on conventional norms.
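    For contrast, here is an equally minimal sketch (again mine; the 1.80 m cutoff is a deliberately arbitrary stipulation, exactly the kind I say I can live with) of operationalizing the one-place predicate “tall”:

        # Toy sketch: "tall" can be operationalized only relative to a stipulated
        # cutoff; the measuring procedure itself does not supply the line.

        TALL_CUTOFF_M = 1.80  # arbitrary stipulation; a nearby value would do as well

        def is_tall(height_m):
            # Operational only given the stipulated cutoff above.
            return height_m > TALL_CUTOFF_M

        print(is_tall(1.85))   # True
        print(is_tall(1.799))  # False only because of where the line was drawn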

    Thus, when it comes to perfectly precisifying our concepts, I am apt to agree with the founder of operationalism, Bridgman, when he said: “In the end, when we come to the place where human weariness and the shortness of life forces us to stop analyzing our operations, we are pretty much driven to accept our primitive operations on the basis of a feeling in our bones that we know what we are doing.”

    So at the end of the day, human weariness prevents me from plugging up all the leftover holes in any operational definition. But since I am attracted to operationalism for purely pragmatic reasons, I am fine with giving a pragmatic justification for stopping the process of precisification at an arbitrary point, so long as doing so doesn’t land me in a host of trouble with my peers. But I take it that “tall” is a concept that few people get worked up about, which makes it non-rotten in my sense because it is an uncontroversial concept. Last I checked, no one was writing philosophical tomes undermining the legitimacy of the concept “tall”, or engaging in grand public debates about who’s tall and who’s not.

  7. bjf

    I’m not denying that you can stipulate definitions. Surely this is possible. But I think you overestimate the extent to which this can be done uncontroversially. To take your grant-proposal case, say that grants will be awarded to all and only tall engineers, and you decide that “tall” applies only to people greater than 1.854321 meters. The engineers who are merely 1.85420 meters tall are going to be pretty upset about this, and justifiably so, since that particular cutoff point for “tall” was arbitrary to begin with (ex hypothesi).

    You probably think, “Who cares about ‘tall’? It doesn’t really matter if we get precise about ‘tall’.” I confess I don’t really care about “tall” as such either. It’s true that this is a toy example, and that *usually* it doesn’t matter much whether we define some more precise notion of “tall”. But I didn’t ask about that particular example because I’m “worked up” about it. Rather, “tall” provides a clear illustration of a more general and important point: for any precisification of any vague concept, controversy over the new definition can be driven by practical consequences. So I don’t see how appealing to usefulness gets around the core issue.

    I feel like you’re not engaging the main thrust of what I’m saying though: fuzzy concepts are already perfectly useful for a great many purposes, and we use them all the time to get things done and to think/say true things about the world. This is why I don’t understand why you think all fuzzy concepts are rotten.

    I like the Bridgman quote. It suggests that no matter how hard you try to precisify your concepts (here, operationalized concepts), you’ll never achieve perfect precision because you’ll die first. Does this mean you ought to stop operationalizing your concepts? No! Less-than-fully-precise concepts are pretty good for many purposes, and we basically “know what we are doing” with them despite their having fuzzy edges or ‘holes’. But then, I don’t see why you should treat perfect precision as a requirement for the non-rottenness of operationalized concepts. And if it’s not a requirement for operationalized concepts, I don’t see why it should be a requirement on the non-rottenness of concepts more generally.

  8. “fuzzy concepts are already perfectly useful for a great many purposes, and we use them all the time to get things done and to think/say true things about the world. This is why I don’t understand why you think all fuzzy concepts are rotten.”

    The view I am developing is that although all concepts are fuzzy, some are less fuzzy than others. I propose, then, that there is a continuum of fuzziness, with some concepts at one extreme of the spectrum and some at the other (also, concepts can evolve over time and thus change their position on the spectrum). For example, the concept “meter” is on the least fuzzy end of the spectrum, and the concept “qualia” is on the other end (to use a cheap-shot example).

    Although there is some inherent fuzziness in how the meter is defined — eventually bottoming out in some conventional stipulation — the relevant point is that there is an international community of scholars who have accepted the standardized conventions surrounding not only what the concept “meter” means but also how to operationalize it in most practical situations. In contrast, there is no such consensus on what “qualia” means let alone how to operationalize it. Worse, some people think it cannot be operationalized in principle and others think it can.

    Everyday concepts like “tall” and “bald” are probably somewhere in between “meter” and “qualia”. Thus, my ascription of rottenness to concepts is not all or nothing. “Meter” is somewhat rotten, but not as rotten as “qualia”. Also, I agree wholeheartedly with you that fuzzy concepts are very useful in day-to-day life, but I take it the justification is more conversational than epistemic. If I’m talking to my grandparents, the pragmatics of conversation suggest that we don’t need to be super precise in our language, but the justification is purely pragmatic, i.e. it serves my desires as a social creature.

    However, once we start asking which concepts are most useful for scientists engaged in scientific inquiry, I propose that they will be best served by concepts that are minimally fuzzy, e.g. “meter”, “kelvin”, “pressure”, “gram”, “triple point of water”. All these concepts are fuzzy at their core, but to make up for this we have developed comprehensive systems of standardization that minimize the fuzziness to the extent our finite resources and willpower allow. To the extent that psychological concepts have not reached the kind of precision physical concepts have, they are relatively worse off, scientifically speaking, which is not to say they are useless in situations where great precision is not needed (everyday life).

    • You’re going to want this quote from Russell: “the method of ‘postulating’ what we want has many advantages; they are the same as the advantages of theft over honest toil.” 😉
