In 1950, Alan Turing published a landmark paper in the journal Mind entitled “Computing Machinery and Intelligence”. In this paper he asked the question “Can machines think?” and proposed a method for determining whether a machine thought intelligently or not. This method became known as the Turing Test.
The test runs as follows (from Wikipedia):
a human judge engages in a natural language conversation with one human and one machine, each of which try to appear human; if the judge cannot reliably tell which is which, then the machine is said to pass the test. In order to keep the test setting simple and universal (to explicitly test the linguistic capability of the machine instead of its ability to render words into audio), the conversation is usually limited to a text-only channel.
It is interesting to note that Turing himself thought the question itself (“Can machines think?”) was “too meaningless to deserve discussion”. By this he meant that the most common objections to the question were usually drenched in emotional overtones to such a degree as to make them irrelevant. Nevertheless, Turing went on to discuss several objections to the idea that machines could ever properly be said to “think”.
Some of the objections he dismissed outright as ridiculous (such as the “head in the sand” objection that it would simply be too dreadful if machines thought), but others he considered more carefully. The objection that I would like to discuss in this post is the “Argument from Consciousness”, which denies the validity of the Turing Test because “No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.”
Turing counters this objection in the following way:
According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view.
Turing goes on to note that if you do not accept this extreme viewpoint, then you must accept the terms of his test: if a machine could convince you, through typed text alone, that it was human, then for all intents and purposes, that machine could be said to have thoughts. This might seem silly at first; you might object, “Well, I can imagine a machine that thinks but doesn’t have any emotions. It doesn’t care about anything.” The Turing Test gets around this objection because it postulates that what matters about minds is whether or not they can act in an intelligent way. Turing further argues that if any machine could act (type) in such a way as to convince any human observer that it was intelligent, then surely it simply is intelligent. Furthermore, an example of intelligent behavior that would necessarily include an understanding of emotional overtones is the reading of a good novel. This example illustrates that emotion and intelligence are interlinked in such a way as to make it impossible to separate the two.
One might still object that a machine could only “represent” intelligent thoughts, and representations are not the same thing as real thoughts. My favorite philosopher, Daniel Dennett, has a fascinating reply to this objection. He asks us to imagine a computer simulation of a mathematician. Would it not be silly to complain that this simulated mathematician gave only mere representations of mathematical proofs, but not real proofs? Dennett, of course, says that representations of proofs are proofs: if this simulated mathematician produced them, would it not be valuable as a “colleague” to any proof-producing math department?
The moral of the simulated mathematician is that the criteria for what counts as thought depend not on whether it is represented or not, but rather on its organizational pattern. In the same way that we would not care that a mathematical proof is “merely represented” if it is in fact a real proof, the question of whether “represented thoughts” are really thoughts becomes moot. We must take the Zen approach and “unask” the question, because it only obfuscates the important qualities of thought, namely its real-world effects.