James Bernat writes that it’s built into the “paradigm” of death that “death is fundamentally a biological phenomenon.” But suppose humans in the distant future are successful in building an artificial intelligence that has person-level properties such as consciousness, memory, etc. And suppose this robot is destroyed. Would we not want to say that the robot died? What other concept would be appropriate for describing what happened to this artificial intelligence? Thus it seems like death is not a fundamentally biological phenomenon.
Imagine a future society with teleportation technology. Instead of having to spend all day traveling from Orlando to L.A., you can now step into a teleporter booth, hit a glowing green button, and be more or less instantly transported anywhere on the planet. Here’s how it works: the machine scans your full atomic structure, stores the pattern, then beams it to another teleporter, where a matter-assembler puts you back together again from a stockpile of atoms. You have used this machine many times with no qualms whatsoever. Now, imagine that one day you step into the booth and press the green button, but nothing happens. You are then told in a polite, robotic voice, “I’m sorry traveler, but something went wrong. Although we successfully scanned your body and reassembled you in L.A., the disintegration process failed. Would you please press the purple button in order to finish the disintegration process?”
Horrified, you run out of the booth, because there is no way you are going to commit suicide by pressing the purple button. And suicide is exactly what pressing the purple button would be. This twist in the teleporter story is meant to alert us to deep issues in the philosophy of personal identity. It is designed to pump our intuitions toward the conclusion that any sane person would refuse to use the teleporter at all. After all, the only difference between normal operation and the purple button case is that in the latter you have not yet died. This is supposed to show that there is no continuity between the original person and the duplicate made in the matter-assembler. For if there were continuity, why would you refuse to press the purple button? If you sincerely thought your personal identity would be preserved in the reassembly process, and you knew the machine had already reassembled you elsewhere, pressing it should cost you nothing. The upshot of this analysis is supposed to be that using a normal teleporter is akin to suicide: identity is not preserved.
I’m not convinced. I would use a teleporter if I knew there was a 99.999999999999999999…% chance of it working properly. But I would not press the purple button. This is not a contradiction; there is in fact a crucial difference between the green button and the purple button. Only a fool would press the purple button, but only a fool would refuse to press the green button if it were working properly. What’s the difference? The green button is useful. It does something, namely, allow you to travel efficiently from location A to location B. The purple button, however, does nothing for you. It kills you. Therefore, any rational agent should have no qualms pressing the green button but would be a fool to press the purple button (unless they wanted to commit suicide). But let’s say you are the traveler in the purple button situation. You know there is a near physical duplicate of you out there somewhere. You know that the duplicate will fulfill whatever responsibilities you have in L.A. What should you do? Well, whatever you want. If you want to travel to New Zealand, you can press the green button and use that amazing technology to achieve your desires. The fact that you have a duplicate does not matter. You have no obligation to kill yourself. Why commit suicide when you can travel the world by simply pressing a button? It would be foolish not to use something with a 99.999999999999999999…% success rate at doing something so incredibly useful.
Thus, I propose that any rational agent, knowing the extreme usefulness of the teleporter and its normal success rate, should use the teleporter but shouldn’t press the purple button (unless the agent actually wants to die). It would be quite irrational to refuse to use the teleporter out of fear that your identity wouldn’t be preserved. The reassembled duplicate is atom-by-atom identical to the you that pressed the green button, and you can’t do any better, in terms of continuity of identity, than atom-by-atom preservation. But suicide is irrational unless you want to die. If your atom-by-atom duplicate wanted to use the machine to travel, then presumably that person did not want to commit suicide (unless they were teleporting to an ideal place to commit suicide). Given that you would have identical desires, it would be strange to want to commit suicide by pressing the purple button when you had a traveling mindset just moments before. There is thus no contradiction in using a working teleporter while refusing to press the purple button.