The experience machine is a thought experiment in which you are jacked into a supercomputer and live in a Matrix-like virtual reality that is experientially identical to ordinary life here on Earth. Here’s the question: would you “plug in” to the experience machine and enjoy a perfectly pleasant existence despite the fact that it wouldn’t be “real”? From what I have gathered, most philosophers believe the most sensible answer is “No, I would not plug in; I prefer reality”. The reasoning behind this judgment is tied to the nature of achievement. Which is more satisfying: climbing Mount Everest in real life, or in an experience machine? Most philosophers argue that real achievement, as opposed to false achievement (“You didn’t actually climb Mount Everest”), is more meaningful, valuable, and satisfying.
There is also an epistemic worry about the experience machine: it would generate false beliefs. If you were in the experience machine climbing Mount Everest, your belief “I am at Mount Everest” would in fact be false. You would really be sitting in some darkened room, plugged into the experience machine, not at Mount Everest. Since it is rational to value true beliefs over false ones, we should not want to plug into the experience machine, because almost all of our beliefs would be false.
I think both of these objections fail to show the inferiority of the experience machine. Let me start with the epistemic worry. Suppose that the experience machine is designed so that when you first plug in you retain all the memories you acquired in real life. Moreover, suppose that before you plugged in, you asked the programmers to code a clear and definite signal to be delivered once you are plugged in: “Hello X, this is the programmers. We’re just letting you know that you are now in fact in the experience machine. Have fun!” Once you plug in and “wake up” in the indistinguishable virtual world, you hear a great booming omnipresent voice that says “Hello X, this is the programmers. We’re just letting you know that you are now in fact in the experience machine. Have fun!” Since you will have retained your memory of having arranged this very signal with the programmers, you can be reasonably confident (by inference to the best explanation) that the voice really does mean you are in the experience machine, and that you didn’t hallucinate the signal (which would have disastrous consequences if you wanted to be a daredevil in the experience machine).
Thus, when you climb the virtual Mount Everest, you will not believe “I am actually at Mount Everest”. Instead your belief will be “I am climbing Mount Everest inside the experience machine”. This conscious knowledge of being in the experience machine, so long as you continue to recall it, will inform every other belief you form. Thus, your beliefs will be by and large true. The objection that the experience machine leads to false beliefs therefore fails: so long as you are conscious of the fact that you are in the experience machine, you have meta-knowledge to the effect that “I am not actually climbing Mount Everest; I am just in the experience machine”.
Now, let me spell out how it’s possible to be genuinely successful in your achievements in the experience machine. I’ve been assuming that while in the experience machine you have genuine conscious choice; that is, you have the genuine ability to consciously direct your actions. If you consciously decide to drink a cup of coffee in the experience machine, it will be because the machine is responding to your genuine intentions (which are, of course, grounded in an objectively real neural substrate). So take the example of playing chess in the experience machine. If you decided to play someone a game of chess there, each and every one of your decisions about how to move the pawns and pieces would be a decision that you and you alone consciously made. No one forced you to make those moves, and they weren’t the result of some automatic mechanism (except to the extent that the fleshy brain processes realizing your conscious intentions are themselves “automatic”, which in this sense simply means causally deterministic rather than not consciously intentional). Moreover, since the only memories you would have access to are your original ones, any chess theory recalled during a decision would be the result of actual study, whether in real life or while in the machine (it seems perfectly possible to get better at chess while in the experience machine). I therefore think it would be false to say that winning a game of chess in the experience machine wouldn’t count as a genuine achievement. It’s intuitive that beating a programmed chess opponent counts as a genuine intellectual achievement, especially if the opponent isn’t a patzer.
Chess is a clear example because it seems intuitive that intellectual achievements are substrate neutral. It doesn’t matter whether you play chess in real life, over the internet, or in virtual reality: each decision is a product of your own conscious will just the same. A win is a win: a clear demonstration of your intellectual skill, an achievement if there ever was one. So that’s one way to generate “genuine” satisfactions in the experience machine. But I think that even something like climbing Mount Everest in the experience machine could still be considered a genuine achievement. True, you are not putting your life in jeopardy or exerting actual physical energy, but the programmers could be extremely clever. They could simulate difficulty breathing, feelings of fatigue, and so on, which you would have to mentally fight against. Moreover, it would take genuine climbing skill, knowledge, and effort to determine which route to take in the experience machine. A complete climbing novice could attempt the climb under realistic simulation conditions all they wanted, but the chance of their figuring out where to step and where to hold so as to reach the summit is slim. So it would in fact require genuine intelligence to consciously choose an ascent route.
Therefore, the claim that the experience machine is inferior to real life cannot be supported by the arguments that one will have primarily false beliefs or that one will be incapable of genuine achievement. With the right programming and the presence of genuine conscious belief and genuine conscious decision making, true belief and genuine achievement are possible in the experience machine. It might be objected that one would miss having “genuine” social encounters in the experience machine. But so long as we are discussing science fiction, there is no reason why different people in different experience machines couldn’t interact in a realistic version of Second Life. Now let’s fire up our imaginations and suppose that every person in the world was plugged into a separate experience machine so that they could all live together in a perfectly realistic version of Second Life. Let us also suppose that (1) the experience machine technology is eternally self-repairing and (2) it is eternally life-supporting.* Which would be the better possible world: a future “real” world, or a virtual utopia without any worry of death or suffering? If someone consciously tried to inflict evil while plugged in, the programming would simply prevent that person from interfering with the well-being of the other virtual persons. It seems obvious to me that the virtual utopia is far more valuable and genuinely optimific than our current reality as mortal beings on Earth.
*When the Sun eventually dies out billions of years from now, the robots will have to evacuate all the plugged-in humans to a safer system. I also assume that some kind of heat death wouldn’t be a problem. And even if it were, extending sentient pleasure to the farthest possible point in the universe’s history would still have been the best thing to do, regardless of whether it was eternally everlasting.