
How to be a genuine achiever in the experience machine


The experience machine is a thought experiment, due to Robert Nozick, in which you are jacked into a supercomputer and live in a Matrix-like virtual reality that is experientially identical to normal life here on Earth. Here’s the question: would you “plug in” to the experience machine and enjoy a perfectly pleasant existence, despite the fact that it wouldn’t be “real”? From what I have gathered, most philosophers believe the sensible answer is “No, I would not plug in; I prefer reality”. The reasoning behind this judgment is tied to the nature of achievement. Which is more satisfying: climbing Mount Everest in real life, or in an experience machine? Most philosophers argue that real achievement, as opposed to false achievement (“You didn’t actually climb Mount Everest”), is more meaningful, valuable, and satisfying.

There is also another worry about the experience machine: it would generate false beliefs. If you were in the experience machine climbing Mount Everest, your belief “I am at Mount Everest” would in fact be false. You would really be sitting in some darkened room, plugged into the experience machine, not at Mount Everest. Since it is rational to value true beliefs over false ones, we should not want to plug in, because almost all of our beliefs would then be false.

I think both of these objections fail to show the inferiority of the experience machine. Let me start with the epistemic worry. Suppose that the experience machine is designed so that when you first plug in you retain all the memories you acquired in real life. Moreover, suppose that before you plugged in, you asked the programmers to code a clear and definite signal to be delivered once you are plugged in: “Hello X, this is the programmers. We’re just letting you know that you are now in fact in the experience machine. Have fun!” Once you plug in and “wake up” in the indistinguishable virtual world, you hear a great booming omnipresent voice say exactly that. Since you will have retained your memory of having talked to the programmers about this very signal, you can be reasonably confident (by inference to the best explanation) that the voice really does mean you are in the experience machine, and that you didn’t hallucinate the signal (a mistake that would have disastrous effects if you then decided to be a daredevil in the experience machine).

Thus, when you climb the virtual Mount Everest, you will not believe “I am actually at Mount Everest”. Instead your belief will be “I am climbing Mount Everest inside the experience machine”. This conscious knowledge of being in the experience machine, so long as you continue to recall it, will inevitably affect every other belief. Your beliefs will therefore be by and large true. The objection that the experience machine leads to false beliefs fails, because so long as you are conscious of the fact that you are in the experience machine, you have meta-knowledge to the effect that “I am not actually climbing Mount Everest; I am in the experience machine”.

Now, let me spell out how it’s possible to be genuinely successful in your achievements in the experience machine. I’ve been assuming that while in the experience machine you have genuine conscious choice; that is, the genuine ability to consciously direct your actions. If you consciously decide to drink a cup of coffee in the experience machine, it will be because the machine is responding to your genuine intentions (which are obviously grounded in an objectively real neural substrate). So take the example of playing chess in the experience machine. If you played someone at chess there, each and every decision about how to move the pawns and pieces would be one that you and you alone consciously made. No one forced you to make those moves, and they weren’t the result of some automatic mechanism (except to the extent that the fleshy brain processes realizing your conscious intentions are themselves “automatic”, which in this sense simply means causally deterministic, as opposed to not consciously intentional). Moreover, since the only memories you would have access to are your original memories, any chess theory recalled during a decision would be the result of actual study, either in real life or while in the experience machine (it seems perfectly possible to get better at chess while plugged in). I therefore think winning a game of chess in the experience machine would count as a genuine achievement. Indeed, it seems intuitive that beating a programmed chess opponent is a genuine intellectual achievement, especially if the programmed opponent isn’t a patzer.

Chess is a clear example because it seems intuitive that intellectual achievements are substrate neutral. It doesn’t matter whether you play chess in real life, over the internet, or in virtual reality: each decision is a result of your own conscious will just the same. A win is a win: a clear demonstration of your intellectual skill, an achievement if there ever was one. So that’s one way to generate “genuine” satisfactions in the experience machine. But I think that even something like climbing Mount Everest in the experience machine could count as a genuine achievement. Sure, you are not placing your life in jeopardy or exerting actual physical energy, but the programmers could be extremely clever. They could simulate difficulty of breathing, feelings of fatigue, etc., that you would have to mentally fight against. Moreover, it would take genuine climbing skill, knowledge, and effort to determine which route to take in the experience machine. A complete climbing novice could attempt the climb under realistic simulation conditions all they wanted, but the chances of them figuring out where to step and where to hold so as to reach the summit are slim. So it would in fact require genuine intelligence to consciously choose a route of ascent.

Therefore, the claim that the experience machine is inferior to real life cannot be supported by the arguments that one will hold primarily false beliefs or be incapable of genuine achievement. With the right programming and the presence of genuine conscious belief and genuine conscious decision making, true belief and genuine achievement are possible in the experience machine. It might be objected that one would miss out on “genuine” social encounters in the experience machine. But so long as we are discussing science fiction, there is no reason why different people in different experience machines couldn’t interact in a perfectly realistic version of Second Life. Now let’s fire up our imaginations and suppose that every person in the world was plugged into their own experience machine, so that everyone lived together in this shared virtual world. Let us also suppose that (1) the experience machine technology is eternally self-repairing and (2) it is eternally life-supporting.* Which would be the better possible world: a future “real” world, or a virtual utopia without any worry of death or suffering? If someone consciously tried to inflict evil while plugged in, the programming would simply prevent that person from interfering with the well-being of the other virtual persons. It seems obvious to me that the virtual utopia is far more valuable and genuinely optimific than our current reality as mortal beings on Earth.

*When the Sun eventually dies out billions of years from now, the robots will have to evacuate all the plugged-in humans to a safer system. I also assume that the heat death of the universe wouldn’t be a problem. And even if it were, extending sentient pleasure all the way to the farthest possible time in the universe’s history would still have been the best thing to do, even if it wasn’t eternally everlasting.


The Argument From Marginal Cases For Animal Rights

Lately, I’ve been getting really interested in animal rights philosophy, not because I’m close to turning into a vegan or anything, but because philosophical arguments that depend on comparative animal psychology fascinate me. And I’ve been interested in the philosophy of animal minds for a long time, so the connection to my research is obvious. In particular, the Argument from Marginal Cases (AMC) interests me. The AMC is one of the primary arguments used to support the idea that nonhuman animals have the same rights as humans. I found the following summary of the AMC in a paper by Daniel Dombrowski:

1. It is undeniable that [members of ] many species other than our own have ‘interests’ — at least in the minimal sense that they feel and try to avoid pain, and feel and seek various sorts of pleasure and satisfaction.
2. It is equally undeniable that human infants and some of the profoundly retarded have interests in only the sense that members of these other species have them — and not in the sense that normal adult humans have them. That is, human infants and some of the profoundly retarded [i.e. the marginal cases of humanity] lack the normal adult qualities of purposiveness, self-consciousness, memory, imagination, and anticipation to the same extent that [members of ] some other species of animals lack those qualities.
3. Thus, in terms of the morally relevant characteristic of having interests, some humans must be equated with members of other species rather than with normal adult human beings.
4. Yet predominant moral judgments about conduct toward these humans are dramatically different from judgments about conduct toward the comparable animals. It is customary to raise the animals for food, to subject them to lethal scientific experiments, to treat them as chattels, and so forth. It is not customary — indeed it is abhorrent to most people even to consider — the same practices for human infants and the [severely] retarded.
5. But absent a finding of some morally relevant characteristic (other than having interests) that distinguishes these humans and animals, we must conclude that the predominant moral judgments about them are inconsistent. To be consistent, and to that extent rational, we must either treat the humans the same way we now treat the animals, or treat the animals the same way we now treat the humans.
6. And there does not seem to be a morally relevant characteristic that distinguishes all humans from all other animals. Sentience, rationality, personhood, and so forth all fail. The relevant theological doctrines are correctly regarded as unverifiable and hence unacceptable as a basis for a philosophical morality. The assertion that the difference lies in the potential to develop interests analogous to those of normal adult humans is also correctly dismissed. After all, it is easily shown that some humans — whom we nonetheless refuse to treat as animals — lack the relevant potential. In short, the standard candidates for a morally relevant differentiating characteristic can be rejected.
7. The conclusion is, therefore, that we cannot give a reasoned justification for the differences in ordinary conduct toward some humans as against some animals.

So here’s why I think the AMC is rather weak.

I don’t have any problems with premise (1). Premise (2) is already problematic, though. The claim is that “human infants and some of the profoundly retarded [i.e. the marginal cases of humanity] lack the normal adult qualities of purposiveness, self-consciousness, memory, imagination, and anticipation to the same extent that [members of] some other species of animals lack those qualities.” While it is undoubtedly clear that a human baby possesses less self-consciousness, imagination, and anticipation than a human adult, there is a lot of evidence that human babies are remarkably well-developed cognitively; they just lack the capacity for expression. A human baby is certainly more intelligent than a chicken, and possibly more intelligent than a cow. The problem is that human babies have no way to express their intelligence, since they can’t yet speak or use their motor skills to communicate. But subtle experiments demonstrate the extent of their cognitive sophistication.

Moreover, the AMC ignores an obvious extension of the “marginal case” of the human baby: the human fetus. Many speciesists would not include human fetuses in the moral sphere precisely because of how marginal their cognition is. And the development of human-like cognition is one of the markers for where we start drawing the line on abortion: the more developed the brain becomes, the less we feel it’s right to abort. It could be said that birth itself is an arbitrary cut-off point. If a baby were born without any brain, we would likely not include that baby in the moral sphere, and would mercifully end its life without its explicit consent.

But what about mentally retarded people, or those with severe autism or Alzheimer’s? Clearly these individuals lack the uniquely human cognitive capacities that characterize a normal human adult, yet we don’t treat them like cattle. Isn’t this inconsistent? Hardly. In the case of most autistic children, I believe the evidence shows that they have either a reduced human cognitive skill set or a different one; it is rare that they have no skill set at all. I daresay the average autistic child is more cognitively sophisticated than a chicken. The same goes for the average Alzheimer’s patient, who, for the majority of the disease’s progression, has a reduced cognitive skill set rather than none at all. And when such persons do eventually lack consciousness entirely, why would a speciesist assume they retain full moral rights? Personally, if I ever developed Alzheimer’s, I would hope that my society permitted assisted suicide or mercy killing once I reached the most advanced stage of the disease. Likewise for vegetative coma patients. Humans who totally lack consciousness do not seem to be as fully included in the moral sphere as, say, a normal human adult. This explains our attitudes toward those in comas with no foreseeable chance of recovery.

Thus, I think premise (3) is wrong in almost all cases. Moreover, we can use a different strategy to show why it’s consistent for a speciesist to treat newborn infants differently from cattle: counterfactual biological development. Under normal healthy circumstances, a human infant will grow into a cognitively sophisticated adult. Under equally healthy circumstances, it is vanishingly unlikely that a cow will. And if a cow ever does mutate and develop the ability to talk rationally and engage humans in high-level moral conversation, then we should include that cow in the moral sphere. But what about someone with severe mental retardation who has no potential to grow into a normal adult? Well, as I said before, it’s doubtful that most retarded children are as cognitively limited as a cow or a chicken. Moreover, we can engage in counterfactual analysis: it would have taken a far smaller difference in genetic alignment for a retarded child to have been born with the potential to grow into a normal adult than it would for a chicken or a cow. A cow would need a total restructuring of its genome to produce a brain capable of learning human-like cognitive skills. So the counterfactuals are in fact quite different.

And there is another point where the AMC fails: it assumes that either animals have rights equivalent to adult human rights or they have no rights at all. This is a false dichotomy, because we can imagine a continuum of rights rather than an on/off switch. It makes sense to me that although a bonobo or a dolphin has fewer rights than a human adult, it has more rights than a chicken, and a chicken has more rights than an oyster. I would never treat a bonobo like I would a chicken or a mosquito, but neither would I treat a bonobo like a human child or adult. If there were a burning building, I would rescue a normal human adult or child over a bonobo, but I would rescue a bonobo over a chicken. Moreover, it’s false that this reasoning is arbitrarily speciesist, because I would rescue a bonobo over a vegetative coma patient or a human fetus.

Now I want to discuss premise (6): the denial of human uniqueness. It is claimed a lot in the animal rights literature that the attempt to find uniquely human cognitive attributes has failed. Oh yeah? What about the set of cognitive attributes that allows you to send a robot to Mars? Or write a philosophy book?* Although there are certainly many similarities between humans and nonhuman animals, I just can’t take seriously anyone who denies the obvious and vast differences. A robot to Mars! Seriously! For those skeptical of human uniqueness, I highly recommend Michael Gazzaniga’s excellent book Human: The Science Behind What Makes Us Unique. As evidenced by practically everything in our culture, as well as by particular neural structures and functions, we are different not just in degree but in kind. And even if the difference were only one of degree, its magnitude still warrants the conclusion of human cognitive uniqueness. See this post for more.

So yeah, in my opinion the AMC has so many problematic premises that it can barely get off the ground as a convincing argument.

*Edit: I’ve realized that someone might wonder why the ability to send a robot to Mars is morally relevant. I don’t think it is. But the type of creature capable of sending a robot to Mars is probably also capable of moral deliberation and reflection, which certainly seems to me like a candidate capacity for bestowing moral worth. Since I do in fact place some value on basic organic sentience, moral reflection is clearly not the source of all human worth, but I do think it grounds the majority of it. In fact, I think moral reflection (a skill enabled by reflective consciousness) is of such importance that it generates moral value even counterfactually, in terms of biological potential.


Just how far should we expand the circle of ethics?

Right now I am reading Peter Singer’s book The Expanding Circle. It’s a good book so far: clear, well-argued, and written with a sense of moral urgency. The central argument is that because ethical reasoning works on the basis of impartiality, it would be arbitrary to restrict the moral community to a single group, such as your own tribe, gender, or race. Hence the evolution of morality is moving (and will hopefully continue to move) in the direction of ever greater impartiality, as seen in societal advances like abolition and women’s rights. However, Singer also argues that we should expand the circle of ethics beyond the human realm to all other sentient creatures capable of feeling pleasure or pain. He argues that it would be just as arbitrary to restrict ethical consideration to humans as it would be to restrict it to a certain class of humans.

But then how far down the evolutionary continuum should we go? Singer thinks we should probably draw the line around oysters and the like, since it seems implausible that oysters can feel pleasure or pain. And he definitely thinks we should not expand the circle to include inanimate entities like mountains or streams. So what’s so special about the ability to feel pleasure or pain? Singer thinks this capacity is a nonarbitrary dividing line because it’s something humans can take into consideration. On what basis could we include mountains and streams in our moral deliberation? There seems to be none. But the fellow capacity to feel pleasure and pain seems like a good candidate.

This is where I must disagree with Singer. I simply don’t see what’s so morally special about the ability to detect cellular damage, and that is all pain perception really is: an evolved biological mechanism that registers damage to cells and relays that information to the appropriate motor centers, moving the creature out of harm’s way and thereby increasing its biological fitness by maintaining homeostasis and organizational structure. Vegetarians like Singer loathe this line of thinking because it brings to mind the old practice of torturing cats and dogs, justified by Descartes’ claim that animals are simply unfeeling mechanisms that can’t really feel pleasure or pain. But I don’t think the permissibility of wanton torture follows from the idea that pain perception is just a biological mechanism for damage detection. Even if it is permissible to use animals for food, it doesn’t follow that it’s permissible to torture them for fun. Even if it’s permissible to eat animals, we might still be obligated to treat them with respect and to reduce the pain they experience to its absolute minimum. But, personally, I believe that merely having the capacity to feel pain doesn’t launch you into the moral category where it becomes impermissible for humans to use you for food.

I’ve heard it claimed that this kind of speciesism is unjustifiable if we consider the cognitive capacities of humans who are extremely mentally handicapped or incapacitated. The thought is this: if speciesism is justified by humans’ cognitive superiority over nonhuman animals, then it should be OK to treat cognitively inferior humans just as we treat cattle; since we don’t think that’s OK, cognitive superiority can’t justify the way we treat nonhuman animals. My immediate response is that there is a difference between entities who, had everything been biologically optimal, could have developed to the human cognitive level, and entities who could never reach that level even under optimal biological conditions. This principle of potentiality is enough to show that it’s nonarbitrary to treat human invalids differently from nonhuman animals.

There’s another point I want to make about the moral worth of pain itself. How could it be of such importance when nonhuman animals themselves seem relatively indifferent to it compared with the typical human response to pain? I read in Euan MacPhail’s The Evolution of Consciousness that there have been field reports of chimps getting into fights with other males, having their testicles bitten off, and immediately afterwards being capable of having sex with a female. I doubt any human is horny enough to ignore the pain of genital mutilation just to have sex. On the basis of this observation, we can infer that chimp pain perception is different from the awareness of pain that humans possess. And since chimps are seen by people like Singer as the animals most worthy of our ethical consideration, what does this say about the pain capacities of animals even lower down the totem pole? Nonhuman animals don’t seem to “care” about their pain to the extent that humans do. Caring about pain, as opposed to pain itself, goes by another name: suffering, i.e., meta-consciousness of pain. While it is plausible that some nonhuman animals have the capacity for a kind of protosuffering, it seems clear to me that human suffering is of a sophistication far beyond that of any nonhuman animal. Now, I don’t have a knock-down argument for why human suffering is more morally valuable than the mere pain of nonhuman animals, but it is at least a nonarbitrary cutting-off point, and one with a kind of intuitive support.

However, I don’t think the moral worth of human suffering over nonhuman pain is enough to justify the claim that nonhuman pain has no moral worth at all. As a matter of fact, I agree with Singer that the pain of nonhuman sentient beings has some moral worth, and that we are ultimately obligated to reduce it. For this reason, if I were presented in a supermarket with the choice between real beef and artificial beef grown in a lab, I would choose the artificial beef. So the only reason I still eat meat is that the right technology has not been invented yet. As soon as that technology becomes available (and they are working on it), I will gladly give up the practice. But since I believe that eating meat is a very healthy way to get protein and animal fats into my diet, I do not think the current pains of nonhuman animals are enough to overcome the selfishness involved in maintaining my own health, for I value my own life over those of nonhuman animals. Again, this is not because I place no value on nonhuman life. In my ideal world, not a single sentient entity would ever feel unnecessary pain. I feel predation to be evil, but I nevertheless eat animals for health reasons. If I sincerely thought vegetarianism was healthier than an omnivorous diet, I would be a vegetarian (which would be nice, because it would line up with my beliefs about the evils of predation). But since I am a speciesist and value human life more than nonhuman life, I think it is permissible to continue my practice until artificial meat becomes widely available. I’m aware that this reasoning could be nothing more than a post-hoc rationalization of my comfortable meat-eating habits. But I do think there is a nonarbitrary argument for speciesism that makes the exclusion of nonhuman animals from the moral sphere far less arbitrary than the exclusion of subclasses of humans. Contra Singer, I don’t think speciesism is on a par with racism or sexism.
