Tag Archives: artificial intelligence

A Twist on Searle’s Chinese Room Argument: Why Rules Are Not Enough

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.
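To keep the mechanics of the setup in view, here is a minimal sketch (my own toy illustration, not Searle's and not any real program) of what the room's "rule book" amounts to computationally: a lookup from input symbols to output symbols, with no understanding anywhere in the loop.

```python
# Toy sketch of the room's "rule book": match the incoming symbols, hand back
# the symbols the book dictates. The entries are invented placeholders.

RULE_BOOK = {
    "你叫什么名字？": "我没有名字。",   # "What is your name?" -> "I have no name."
    "你会说中文吗？": "会一点。",       # "Do you speak Chinese?" -> "A little."
}

def chinese_room(input_symbols: str) -> str:
    """Follow the instructions without understanding any of the symbols."""
    return RULE_BOOK.get(input_symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你叫什么名字？"))  # 我没有名字。
```

Whatever complexity a real conversational rule book would need, this lookup-table character of the procedure is what Searle's intuition trades on.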

Searle has cooked the books with this thought experiment by setting up the Chinese Room in an artificial manner. I contend that if we tweak the thought experiment slightly, we will get an entirely different result: no fluent Chinese speaker would believe the machine understands Chinese.

In Searle’s version, the only inputs are Chinese language characters written neatly on a slip of paper and inserted into a slot. In my version, however, the Chinese speakers are allowed to write or draw anything on the slip of paper, including pictures, graphs, arbitrary symbols, etc. Furthermore, the piece of paper is simply held up before an optical scanner rather than being inserted into a slot.

The crucial twist is this: what happens when the Chinese speakers draw a picture that would be culturally relevant only to native Chinese persons (e.g. a cartoon character from a popular TV show), draw an arrow to it, and write in Chinese, “What is that? Explain its cultural relevance”?

For Searle, sitting in the Chinese Room, to answer this question, his “rule book” needs to be vastly more complicated, with capacities for visual pattern recognition and the acquisition of cultural knowledge. What if the instruction manual was written in 1999 but the cultural symbol comes from 2013? How could the manual recognize the drawing and answer the Chinese speakers’ question? The only way this would work is if the Chinese Room were continuously fed information from the outside world. But once you add this stipulation, the “understanding” of the Chinese Room begins to look more genuine, because the once-simple “instruction manual” now performs novel visual recognition, processes complex cultural information, and is hooked up in real time to a complex causal network.

As I see it, Searle has two options, and neither works in his favor. First, he can admit that the Chinese Room could not answer questions about pictures drawn on the slips of paper. But that is not an argument for the impossibility of constructing such a machine. Second, he could beef up the Chinese Room by hooking it up to the outside world so that the rule book stays up to date on what’s going on in Chinese culture. But this complex causal interaction starts to look like the kind of complex interaction typical of humans and their environment, undermining Searle’s intuition pump that it is “obvious” the Chinese Room does not “really” understand.


Chess, Consciousness, and Computers


Philosophy has, believe it or not, dropped off my radar (for now). The school semester is over. Most of my philosophical work is done. I am no longer reading books and articles for hours a day. Why? Because I have chess on the brain! The game has somehow transformed my consciousness. I am actually getting less sleep because I go into a lighter sleep cycle in the early morning and my consciousness turns on and starts automatically thinking about chess tactics and moving pieces.

The game has absolutely intrigued me. It is a game of wit, cunning, logic, and creativity. And it is a game of real sport. There are attacks and defenses, thrusts and parries, traps and tactics, pins and skewers, bluffs and brilliance. The theoretical depth of the game is absolutely stunning. But it is balanced by a really interesting rating system, which is all about relative skill.

Let’s say I have a rating of 1000. Theoretically, I should be unlikely to beat a player rated 1500. If I do beat that player, however, my rating will rise dramatically and theirs will drop accordingly. The objectivity of the rating system makes competitive play very interesting. A grandmaster with a high rating would not waste his time playing a beginner: it would be no challenge, and his rating would barely budge if he won. Likewise, a beginner will probably want to play someone rated close to his own level. Accordingly, the better you get at chess the harder it becomes to win on a regular basis, because you start playing people with higher ratings. In this way, competitive chess is balanced wonderfully, so that the level of competition usually gives you interesting games.
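The arithmetic behind this is, roughly, the Elo rating model. Here is a minimal sketch using the standard textbook formula with a K-factor of 32 (federations and chess sites tweak the constants, so treat the exact numbers as illustrative):

```python
# Rough sketch of an Elo-style rating update; K-factor and constants vary in practice.

def expected_score(my_rating: float, their_rating: float) -> float:
    """Win expectancy against the opponent, between 0 and 1."""
    return 1 / (1 + 10 ** ((their_rating - my_rating) / 400))

def updated_rating(my_rating: float, their_rating: float,
                   result: float, k: float = 32) -> float:
    """result: 1 for a win, 0.5 for a draw, 0 for a loss."""
    return my_rating + k * (result - expected_score(my_rating, their_rating))

# A 1000-rated player who beats a 1500-rated player gains roughly 30 points...
print(round(updated_rating(1000, 1500, 1)))   # ~1030
# ...while a 2700-rated grandmaster beating a 1000-rated beginner gains almost nothing.
print(round(updated_rating(2700, 1000, 1)))   # ~2700
```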

And if you are a naturally competitive person like me, then the objectivity of the rating system is truly inspirational. It offers a rare quantification of intellectual skill, which in turn allows a measure of objectivity in self-evaluation (especially with the advent of computer analysis, as I explain below). As you play more opponents, and start beating people with higher and higher ratings, you get an objective sense of where you stand in relation to everyone else. It seems unlikely that I will ever reach the highest levels of competitive play, but I think there is intrinsic value in enriching one’s intellectual gumption. Even if you don’t become the best in the world, the practice of chess is truly an exercise in the radical augmentation of consciousness.

Indeed, the average chess player operates through sheer consciousness. They have to consciously imagine the various “if, then” conditionals associated with each possible move. At every step in the game, the player has to step offline and reflect on the various possibilities: “If I do that, then they will do this. And if they do that, then I can do this or that. And if I do this, then that might happen,” and so on. Casual players only need to think one move in advance, but the further into the future you are able to calculate, the higher your level of play and the more likely you are to develop devastating attacks on your opponent. Playing chess effectively therefore requires a highly developed capacity for conscious reflection. The ability to explicitly calculate the various future possibilities is critical for playing chess successfully. However, since even the best players can’t look too far ahead without overloading their working memory, they must also rely on intuition and creativity. The fact that you are unlikely to play the same chess game twice requires a smooth interplay between logical calculation and creative hypothesis testing.

This is especially true of blitz games, where you don’t have the luxury of deeply calculating every move. Good blitz players must operate through intuition. At this level of play, they “feel out” possibilities rather than rationally calculating every move. The fast pace requires that the unconscious mind be trained well enough that slow, deliberate calculation is replaced by speedy intuition. According to Hubert Dreyfus’ model of expertise [1], there are five stages of directed skill acquisition:

  1. Novice.  “Most beginners are notoriously slow players, as they attempt to remember all these rules and their priorities.”
  2. Advanced beginner. “With experience, the chess beginner learns to recognize overextended positions and how to avoid them. Similarly, she begins to recognize such situational aspects of positions as a weakened king’s side or a strong pawn structure despite the lack of precise and situation-free definition. The player can then follow maxims such as: Attack a weakened king’s side.”
  3. Competence. “The class A chess player, here classed as competent, may decide after studying a position that her opponent has weakened his king’s defenses so that an  attack against the king is a viable goal. If she chooses to attack, she can ignore features involving weaknesses in her own position created by the attack as well as the loss of pieces not essential to the attack. Pieces defending the enemy king become salient. Successful plans induce euphoria, while mistakes are felt in the pit of the stomach.”
  4. Proficient. “The proficient chess player, who is classed a master, can recognize almost immediately a large repertoire of types of positions. She then deliberates to determine which move will best achieve her goal. She may, for example, know that she should attack, but she must calculate how best to do so.”
  5. Expertise. “The expert chess player, classed as an international master or grandmaster, experiences a compelling sense of the issue and the best move. Excellent chess players can play at the rate of 5 to 10 seconds a move and even faster without any serious degradation in performance. At this speed they must depend almost entirely on intuition and hardly at all on analysis and comparison of alternatives. It has been estimated that a master chess player can distinguish roughly 50,000 types of positions. For much expert performance, the number of classes of discriminable situations, built up on the basis of experience, must be comparably large.”

When Artificial Intelligence was first dreamt up, chess represented one of the highest peaks of intelligence. The mixture of creativity, strategy, boldness, wit, deviousness, and logic was enough to convince many people that if computers could ever beat a human grandmaster, then they would be, without a doubt, truly intelligent. And now we have $10 iPhone apps with chess programs strong enough to beat almost any human player. But, obviously, iPhones are not intelligent in the way a human is intelligent. So what happened? How did chess programs become so good without also developing intellectual skills that are domain-general rather than radically domain-specific? In the 1950s, it was assumed that the exponential growth of possible continuations would bog down any computer that attempted to brute-force the game with an algorithmic analysis of each move, treated in isolation. Yet this is the only obvious way to program computers to play good chess. So if computers ever did beat humans, it would be pretty amazing.
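A bit of back-of-the-envelope arithmetic shows why that assumption seemed reasonable. Assuming the commonly cited average of about 35 legal moves per position, the number of positions to examine explodes within a few moves:

```python
# Rough game-tree arithmetic: positions to examine grows as
# (branching factor) ** (plies searched), assuming ~35 legal moves per position.
BRANCHING_FACTOR = 35

for plies in (2, 4, 6, 8, 10):
    print(f"{plies} plies ahead: ~{BRANCHING_FACTOR ** plies:,} positions")
# 2 plies ahead: ~1,225 positions
# 10 plies ahead: ~2,758,547,353,515,625 positions
```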

But with the widespread availability of cheap computing power, we now have grandmasters in the palm of our hands. This has radically changed the chess world, though not in the way you’d think. The game hasn’t been “solved”, unlike checkers; thankfully, chess is too complex for computers to calculate the outcome of perfect play from the starting position. But when you can evaluate on the order of 200 million positions a second, lack of intuition is no hindrance to complete domination of humans. Computers are now effectively chess gods, almost always playing the move with the best practical chance of winning.
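The core of that domination is still brute-force tree search. Here is a minimal sketch of plain minimax over a hypothetical `Game` interface (my own illustration, not any particular engine or library); real programs add alpha-beta pruning, move ordering, and sophisticated evaluation functions on top of this skeleton:

```python
# Plain minimax over an abstract two-player game; `Game` is a hypothetical
# interface with legal_moves(), play(move), is_over(), and evaluate().

def minimax(game, depth: int, maximizing: bool) -> float:
    """Best achievable evaluation for the side to move, searching `depth` plies."""
    if depth == 0 or game.is_over():
        return game.evaluate()  # static score of the position
    scores = [
        minimax(game.play(move), depth - 1, not maximizing)
        for move in game.legal_moves()
    ]
    return max(scores) if maximizing else min(scores)

def best_move(game, depth: int):
    """Choose the move whose subtree minimax scores highest for the mover."""
    return max(game.legal_moves(),
               key=lambda m: minimax(game.play(m), depth - 1, maximizing=False))
```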

I find this development in the chess world absolutely fascinating. With computer analysis now available, the object of human chess training is, in effect, to play like a computer. But since the human mind will never rival 200 million positions a second, we must consciously train our unconscious mind to play like a computer. The conscious mind is the worst tool for mimicking computer play: conscious access to working memory allows only a limited number of chunks of information to be weighed simultaneously, and conscious thinking is slow, linear, and clunky (although certainly capable of stunning feats of intelligence). But the unconscious mind is much faster, thanks to parallel processing and a deeper cognitive reservoir with what seems like unlimited memory, which always blows my mind a little when I think about it.

In this sense, there is a little bit of truth in the classic myth that we only use 10% of our brains. There is a lot of wisdom in this statement, but you have to break it down and look past its obvious falsity. When someone says “we only use 10% of our brains”, the “we” they are referring to is the autobiographical consciousness, not the unconscious mind. What they mean to say is that our consciousness only has access to a small fraction of the total cognitive reservoir. There is a good evolutionary reason for this. Consciousness is too slow to react to a bear in the woods or a snake beneath our feet. As the famous deafferentation case of IW demonstrates, if we had to use our slow consciousness to control our bodies, the results would be less than efficient.

There are, then, many reasons why I have suddenly become drawn to the world of chess. The game appeals to my mind in many ways. I like the idea of reshaping my brain through practice and training. With computer training, chess players teach themselves to think like computers so that when the pressure is on they don’t have to calculate like computers, and can instead draw on complex situational awareness (stages 3-5 of skill acquisition). I have been playing against this iPhone chess engine all the time. I train myself by trying to guess what the computer is going to do next, and if I can’t understand why the computer moved where it did, I will sit there and study the position until I can come up with a reason. The best chess players don’t just memorize patterns and blindly calculate: they have reasons and principles for what they do, and some real John Madden-style strategy. Watching chess masters play is amazing.

Well, that’s what has been on my mind lately. I should get back to practicing chess…

References:

[1] Dreyfus, H. L. (2002). Intelligence without representation – Merleau-Ponty’s critique of mental representation: The relevance of phenomenology to scientific explanation. Phenomenology and the Cognitive Sciences, 1(4), 367-383.


Heidegger and AI

Why Heideggerian AI Failed and How Fixing it Would Require Making it More Heideggerian

This is a really interesting paper. In it, Hubert Dreyfus, known for his book What Computers Can’t Do, goes over why some of the better-known AI projects have failed and also explores some worthwhile avenues where AI can succeed.

[In the 1960s] AI researchers were hard at work turning rationalist philosophy into a research program.

Dreyfus is referring to the Physical Symbol System hypothesis of Newell and Simon, which strove to show empirically that what is “really going on” in minds is the systematic shuffling of symbols. By setting up the framework of AI in terms of this input >> processing >> output “boxology”, AI researchers attempted to demonstrate that the brain is really a very complicated information processor that could in principle be replicated in a silicon medium. After all, if all that matters is the “function” of information processing, then the actual substrate of the mind is irrelevant; all that matters is the algorithms, the “software”, running on top of the “hardware”. Notice that the entire research paradigm of AI, derived from cognitive science, is based on the metaphor of the computer. It is this metaphor that Dreyfus wants to combat and replace with a more phenomenologically accurate account of what goes on when humans with minds interact with the environment.
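To make the “boxology” concrete, here is a toy caricature of a symbol-system agent (my own illustration with made-up condition-action rules, not Newell and Simon’s actual programs): perception reduced to a set of input symbols, cognition to rule matching, action to an output symbol.

```python
# Toy symbol-shunting agent: input symbols >> rule matching >> output symbol.
# The rules and symbols are invented for illustration.

RULES = [
    (lambda facts: "hungry" in facts and "food-visible" in facts, "eat"),
    (lambda facts: "hungry" in facts,                             "search-for-food"),
    (lambda facts: True,                                          "do-nothing"),
]

def process(facts: set) -> str:
    """Fire the first rule whose condition matches the current input symbols."""
    for condition, action in RULES:
        if condition(facts):
            return action

print(process({"hungry", "food-visible"}))  # eat
print(process({"hungry"}))                  # search-for-food
print(process(set()))                       # do-nothing
```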

Dreyfus uses the “frame problem” as a prime example of why this traditional symbol-shunting, representationalist program was doomed from the beginning. The frame problem is simply the problem of knowing the relevant context for a particular situation: an AI program needs to know which particular knowledge is relevant to the situation at hand in order to realistically cope with the world. As Dreyfus is apt to point out, the human world of meaning is saturated with significance precisely because we are immersed in a “referential totality”. So, for example, the human use of tools can’t be modeled with “brute force”, because whenever we use a hammer, the referential totality of nails and what-we-are-hammering-for comes into play. There is a particular way of being of hammers because they are embedded in a cultural “existential matrix” that is imparted onto the human world through the communal use of language.

Dreyfus concludes that in order for an AI to get past this crucial problem of contextual relevance, it would need to be imbued with particular “bodily needs” so that it could “cope” with the world. In other words, an AI needs to be embodied and embedded in the world so that things have a particular significance for it; otherwise it will never be able to act intelligently. You can’t build a true artificial intelligence out of pure symbol shunting, because the significance of the world stems not from our brain “processing” symbols but from the entire referential totality of culture. We can’t escape the fact that our intelligence results from persons coping with an environment.


The Turing Test


In 1950, Alan Turing published a landmark paper in the journal Mind entitled “Computing Machinery and Intelligence”. In this paper he asked the question “Can machines think?” and proposed a method for determining whether a machine thought intelligently or not. This method became known as the Turing Test.

The test runs as follows (from Wikipedia):

a human judge engages in a natural language conversation with one human and one machine, each of which try to appear human; if the judge cannot reliably tell which is which, then the machine is said to pass the test. In order to keep the test setting simple and universal (to explicitly test the linguistic capability of the machine instead of its ability to render words into audio), the conversation is usually limited to a text-only channel.

It is interesting to note that Turing himself thought the question itself (“Can machines think?”) was “too meaningless to deserve discussion”. By this he meant that the most common objections to the question were usually so drenched in emotional overtones as to be irrelevant. Nevertheless, Turing went on to discuss several objections to the idea that machines could ever properly be said to “think”.

Some of the objections he dismissed outright as ridiculous (such as the “heads in the sand” objection that it would simply be too dreadful if machines thought), but others he considered more carefully. The objection I would like to discuss in this post is the “Argument from Consciousness”, which denies the validity of the Turing Test because “No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.”

Turing counters this objection in the following way:

According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view.

Turing goes on to note that if you do not accept this extreme viewpoint, then you must accept the validity of the terms of his test: if a machine could fool you, through typed text alone, into thinking it was human, then for all intents and purposes that machine could be said to have thoughts. This might seem silly at first, because you might object, “Well, I can imagine a machine that thinks but doesn’t have any emotions. It doesn’t care about anything.” The Turing Test gets around this obvious objection because it postulates that what matters about minds is whether or not they can act in an intelligent way. Turing further argues that if a machine could act (type) in such a way as to convince any human observer that it was intelligent, then surely it simply is intelligent. Furthermore, an example of intelligent thinking that would necessarily include an understanding of emotional overtones is the reading of a good novel. This example illustrates that emotion and intelligence are interlinked in such a way as to make it impossible to separate the two.

One might still object that a machine could only “represent” intelligent thoughts, and representations are not the same thing as real thoughts. My favorite philosopher, Daniel Dennett, has a fascinating reply to this objection. He asks us to imagine a computer simulation of a mathematician. Would it not be silly to complain that this simulated mathematician gave only representations of mathematical proofs, not real proofs? Dennett, of course, says that representations of proofs simply are proofs: if the simulation produced them, would it not be valuable as a “colleague” to any proof-producing math department?
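A small illustration of the point (my example, using the sympy symbolic-algebra library, not Dennett’s): the program below only shuffles representations, yet what it checks is a perfectly good piece of mathematics.

```python
# A "simulated mathematician" verifying an algebraic identity symbolically,
# i.e. for all values of a and b, not just a sample of them.
import sympy as sp

a, b = sp.symbols("a b")
claim = sp.Eq((a + b) ** 2, a ** 2 + 2 * a * b + b ** 2)

# If the difference of the two sides simplifies to zero, the identity holds.
print(sp.simplify(claim.lhs - claim.rhs) == 0)  # True
```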

The moral of the simulated mathematician is that the criterion for what counts as a thought depends not on whether it is represented, but on its organizational pattern. Just as we would not care that a mathematical proof is “merely represented” so long as it is in fact a valid proof, the question of whether “represented thoughts” are really thoughts becomes moot. We should take the Zen approach and “unask” the question, because it only obscures the important quality of thought, namely its real-world effects.


Minds and machines


In this post, I want to give a brief overview of MIT’s exciting Cog project.

Simply stated, Cog is “a set of sensors and actuators which tries to approximate the sensory and motor dynamics of a human body.” So what is the point of trying to replicate the “sensory and motor dynamics” of humans? Basically, the Cog researchers are trying to create an Artificial Intelligence.

To understand why an AI seemingly must have a humanoid body in order to be intelligent, one needs a basic understanding of embodiment theory.

The main thesis behind embodiment theory can be found in Shaun Gallagher’s How the Body Shapes the Mind. In this seminal work, Gallagher precisely defines the vocabulary necessary to talk about the thesis stated in the title: how the body shapes and influences the mind. Another overview of the embodiment thesis comes from the important embodiment researcher Andy Clark.

So, what does all this have to do with Cog and artificial intelligence? The MIT webpage gives a nice overview and states: “If we are to build a robot with human like intelligence then it must have a human like body in order to be able to develop similar sorts of representations.” Thus, the morphological (form) as well as the functional characteristics of our body-brain system play a critical role in shaping the dynamics of intelligent human interaction with the environment. The Cog project is not trying to “simulate” human intelligence on a symbolic level, which has been the traditional approach of Good Old-Fashioned Artificial Intelligence (GOFAI), but rather is attempting to get human-level cognition to emerge from intermodal, dynamic interaction with the environment.

The justification for Cog’s humanoid facial features is that social interaction is perhaps the most important facet of human-environment reciprocity, the one that makes human intelligence uniquely human relative to the other great apes. It is early prenatal and postnatal social learning and development that gives rise to important relational constructs such as self/other, inside/outside, etc. If you are interested in a neurological discussion of how such concepts arise from our embodiment, see my paper Mirror Neurons and the Self.

If this brief discussion of Cog has piqued your interest, you will probably be interested in some of MIT’s video overviews.

Lastly, I will end this post with another quote from the MIT page:

In any case… it turns out to be easier to build real robots than to simulate complex interactions with the world, including perception and motor control. Leaving those things out would deprive us of key insights into the nature of human intelligence.
