This is a first draft of a paper I’m writing this semester for Gillian Russell’s proseminar on analytic philosophy. Feedback is welcome.
_________________________________________
I think it is uncontroversial that most philosophers believe mental events like sensations are private. In this paper I will investigate the extent to which this claim is true. Borrowing from Wittgenstein, I will show by way of thought experiment that sensations are not private in the sense usually reserved for the term by philosophers. If the thought experiment is conceivable and my interpretation of it is plausible, then the concept of absolute privacy will have to be rejected and replaced with the concept of practical privacy. Moreover, it is not just our folk concept of privacy that will come under scrutiny; the very existence of absolutely private sensations will be called into question. It is my view that the thought experiment establishes not just that the concept of absolutely private sensations is problematic, but that there actually are no such things as absolutely private sensations. The concept of absolute privacy is problematic precisely because it doesn’t correspond to anything in reality. This does not mean that laypersons will stop believing in the concept of privacy after hearing these arguments. Many generations will have to pass before our lay concepts of privacy implicitly and explicitly reflect this debunking of absolute privacy. But by showing that absolute privacy does not exist, I will argue that it is best if we try to reject the idea of absolute privacy. However, it is undeniable that we will, as a matter of convenience and habit, often slip back into familiar ways of thinking in terms of absolute privacy.
Absolute privacy: what is it?
Although it is ultimately an empirical question whether laypersons really believe this, I take it for granted that something like the concept of absolute privacy concerning sensations is well entrenched in how the folk think about their own mental lives, as well as reinforced by most philosophers. Absolute privacy is the idea that only I have access to my sensations and that it is impossible for someone else to share my sensations. When I burn my finger and feel the throbbing sensations of pain, the thesis of absolute privacy states that only I have access to the phenomenal content of painfulness. Although it is possible for other people to infer that I am in pain on the basis of publicly available data (such as my taking an aspirin or saying “Ouch!”), I do not have to infer that I am in pain; I simply know it noninferentially. The essential idea behind the concept of absolute privacy is that what-it-is-like for a subject to feel sensations can only be known by the individual subject, and no one else. As Hilary Putnam has argued [reference], there could be a race of Spartans who privately feel the sensation of pain while exercising great willpower in inhibiting all external behavioral indications that they are in pain.
The thought experiment
Now I will demonstrate why there are no absolutely private sensations. Imagine the human race has continued evolving at its current rate of technological acceleration for the next million years. Above all, these future humans have developed their techniques of robotic neurosurgery to an extraordinary degree of sophistication. One of the most popular recreational pursuits in this far-future society is neurosplicing. The basic idea can be illustrated as follows. Take Subjects A and B and place them side by side on operating tables. The robotic surgeons take Subject A’s wrist and open it up such that all the nerves are exposed. The surgeons then take specially designed wires and place splitters on each of A’s nerves such that the nerve signals going from wrist to brain are perfectly copied and sent down the wires. The wires are then attached to B’s nervous system in such a way as to mimic the input pattern of A’s hand nerves into A’s central nervous system. Once the operation is complete, the robots begin to stroke A’s hand with a feather. Here’s the crux: what does B feel when A’s hand is stroked? Is A’s sensation of being stroked shared by B? If so, what does this show about the nature of absolute privacy?
There are multiple ways to interpret the thought experiment. One way is to continue to insist that what A feels can only be felt by A and that A’s privacy has not been violated despite the neurosplicing. This interpretation is supported by the claim that in order for it to be the exact same sensation there would have to be not just an identical input pattern, but an identical way of processing that input. So it might be said that although B received a very similar input to his central nervous system, B doesn’t know what A actually felt because they don’t have identical central nervous systems. Accordingly, A and B bring all the weight of their differing neural histories to bear on their interpretation of the input of the feather stroke. So the mere fact of being spliced into A’s nerve inputs is not enough for B to know what-it-is-like for A to be tickled.
Another interpretation is to say that the thought experiment shows that sensations cannot be absolutely private. This is the interpretation I prefer. In order to show that A’s sensations are not absolutely private, we only need to tweak the parameters of the thought experiment. The wrist-nerve splicing case is rather simple compared to what the far-future robotic surgeons are really capable of. So whereas it might be thought that simple mental events like tickling sensations could be shared, more complex, global mental states like having a headache must be absolutely private. To show why this is not necessarily true, now consider that the robots are capable of not just mimicking peripheral nervous system patterns, but cortical activity itself. Assuming a weak modularity of the mind, it should be trivial for the robotic surgeons to implant artificial cortical modules that are capable of replicating the precise input-output activity of the real biological cortical modules. Now assume the module is a perceptual module. Stroking A’s hand now generates an identical cortical pattern in B’s head that corresponds to the module-activity in A’s head.
Are we still warranted in claiming that A’s tickling sensation is private? I believe that the similarity is enough to overcome absolute privacy, because the question of whether B’s sensation is identical to A’s sensation is irrelevant to the question of whether A’s sensation is absolutely private. It could be the case that precisely what-it-is-like to be A is different from precisely what-it-is-like to be B in virtue of idiosyncrasies in their central nervous systems. If A’s and B’s central nervous systems were exactly alike except for the difference of a single neuron, would what-it-is-like to be A be different from what-it-is-like to be B? If what-it-is-likeness supervenes on the physical components, then it seems that there is a difference in what-it-is-likeness despite there being a difference of only one neuron.
But is this difference enough to show that A’s sensation is absolutely private? I don’t think this follows. The concept of privacy is often discussed in terms of informational access. The idea is that if I have a headache, only I have direct access to that headache. Other people might be able to infer that I have a headache on the basis of my taking an aspirin or saying something like “I have a headache”. But if in the nerve-splicing scenario A’s cortex becomes wired into B’s cortex, it seems plausible that B could directly know whether A is having a headache without having to make an explicit inference. So the question of whether B’s experience of A’s headache is identical to A’s experience of their headache is irrelevant to the question of whether B has to explicitly infer that A is having a headache. I think it is plausible that, given enough time to adapt to A’s cortical patterns, B could noninferentially know that A is having a headache simply in virtue of being wired into A’s cortex in the right way.
Wittgenstein’s thought experiment
I propose that this anti-absolute privacy interpretation of the thought experiment is a good way of understanding some of the remarks Wittgenstein made regarding sensory privacy. In fact, a simpler version of the thought experiment can be found in the Blue Book:
One might in this case argue that the pains are mine because they are felt in my head; but suppose I and someone else had a part of our bodies in common, say a hand. Imagine the nerves and tendons of my arm and A’s connected to this hand by an operation. Now imagine the hand stung by a wasp. Both of us cry, contort our faces, give the same description of the pain, etc. Now are we to say we have the same pain or different ones? If in such a case you say: “We feel pain in the same place, in the same body, our descriptions tally, but still my pain can’t be his”, I suppose as a reason you will be inclined to say: “because my pain is my pain and his pain is his pain”. And here you are making a grammatical statement about the use of such a phrase as “the same pain”. You say that you don’t wish to apply the phrase, “he has got my pain” or “we both have the same pain”, and instead, perhaps, you will apply such a phrase as “his pain is exactly like mine”. (It would be no argument to say that the two couldn’t have the same pain because one might anaesthetize or kill one of them while the other still felt pain.) Of course, if we exclude the phrase “I have his toothache” from our language, we thereby also exclude “I have (or feel) my toothache”. Another form of our metaphysical statement is this: “A man’s sense data are private to himself”. And this way of expressing it is even more misleading because it looks still more like an experiential proposition; the philosopher who says this may well think that he is expressing a kind of scientific truth. (Wittgenstein, 1958, p. 54-55)
When Wittgenstein suggests that this thought experiment undermines the metaphysical statement “A man’s sense data are private to himself”, I suggest that Wittgenstein is talking about absolute privacy, not practical privacy. This interpretation also helps to make sense out of some cryptic remarks in the Philosophical Investigations. Consider ¶253:
In so far as it makes sense to say that my pain is the same as his, it is also possible for us both to have the same pain. (And it would also be imaginable for two people to feel pain in the same – not just the corresponding – place. That might be the case with Siamese twins, for instance.)
It is important that we distinguish two different interpretations of Wittgenstein’s remark about privacy “making sense”. On the stronger reading, we might see Wittgenstein as arguing that there actually isn’t any such phenomenon as a private sensation. On the weaker reading, we might see Wittgenstein as arguing that the concept of sensory privacy is somehow problematic or confused. I suggest that Wittgenstein makes the weaker claim about concepts, and that the concept is confused precisely because there is no such thing as absolute privacy, as established by considerations such as the nerve-splicing thought experiment. If the thought experiment shows that there is no such thing as absolute privacy, then it is reasonable to ask us to update our concept of privacy to account for this. The concept of absolute privacy needs to be rejected precisely because it does not latch onto any corresponding fact.
Accordingly, it would be wrong to interpret Wittgenstein as arguing that we don’t actually have a concept of absolute privacy. I believe Wittgenstein thinks we do have such a concept. What I think Wittgenstein is doing in Philosophical Investigations is trying to show that this concept is not based on any kind of corresponding metaphysical fact about the absolute privacy of sensations, but rather is only a product of a language game based on the realities of practical privacy. The story then goes like this: because of practical privacy, humans developed the language game of absolute privacy. Once the language game got going and became sufficiently established in our ways of speaking, philosophers became convinced of the truth of absolute privacy as a metaphysical statement. But once we realize that all we possess is practical privacy, we should no longer affirm the truth of metaphysical statements about absolute privacy.
It is an empirical question whether humans will ever be able to implicitly give up belief in the truth of absolute privacy. It might be a contingent fact that humans, in virtue of their cognitive machinery, are unable to stop implicitly believing in the truth of something like absolute privacy. But humans are capable of modifying their explicit, consciously held beliefs about absolute privacy. So although right now I have a conscious belief that, if surgical nerve-splicing technology ever advanced sufficiently, my sensations could be shared with others, I also have the conscious belief that since we don’t have such technology, my sensations are in fact private. As a matter of fact, I could walk up to my friends in great pain and they would never know it if I sufficiently suppressed my external pain behaviors. And my implicit beliefs reflect this knowledge of how and to what extent my sensations are private. But on the conscious level I also recognize that absolute privacy is an illusion fostered by the depth of practical privacy.
Thus, when Wittgenstein talks about sensory privacy as a grammatical fiction (¶307), what is fictional is absolute privacy. But the cognitive depth of practical privacy lent itself to the construction of myths of absolute privacy (“It’s impossible that my sensations could ever be experienced by someone else”). This interpretation also suggests a way to make sense of Wittgenstein’s famous remarks about the beetle in the box:
Suppose everyone had a box with something in it: we call it a “beetle”. No one can look into anyone else’s box, and everyone says he knows what a beetle is only by looking at his beetle. – Here it would be quite possible for everyone to have something different in his box. One might even imagine such a thing constantly changing [or empty]. (¶293)
I suggest it’s plausible to interpret the “beetle” as a stand-in for an “absolutely private sensation”. The point then is that we could all coherently and intelligibly talk about absolutely private sensations without there actually being any absolutely private sensations (the box could be empty). The key is to realize that just because we have a concept of absolutely private sensations does not mean that absolutely private sensations actually exist.
But it’s also important to realize how our beliefs in absolute privacy are not quite delusions. The reason they aren’t delusional or irrational is that the real existence of practical privacy is enough to underwrite the rationality of believing in absolute privacy. So although the concept of absolute privacy does not track metaphysical truth, it would be strange to say that someone is irrational because they assent to the truth of the statement “My sensations can only be experienced by me”.
The technological relativity of publicly observable behavior
Many philosophers are impressed enough with practical privacy that they assent to the truth of absolute privacy. The possibility of Spartans seems enough to conclusively demonstrate that there is more to pain than just pain behavior. In addition to publicly observable behavior, the Spartan case seems to suggest that there are also private sensations. What I suggest is that the concept of “publicly observable behavior” is relative to the technological sophistication of the society. What’s publicly observable for far-future societies is different from what’s publicly observable for us today, or for our ancient ancestors. With the invention of better brain imaging and surgical techniques, what becomes publicly observable changes. And if it were the case that the precise patterns of our central nervous system were publicly available, in the sense of anyone else being capable of “splicing” in, then the very data out of which our own brains generate sensations would be available for other brains to digest.
Coming back to the issue of whether B’s experience of A’s headache is identical to A’s headache, we can now see that the question of “direct vs indirect” access is also relative to the way in which B observes A. If B judged that A is having a headache simply by observing A take an aspirin, then we could say that B did not have direct access. If B judged that A is having a headache because scientists correlated headaches with certain kinds of neural activity and B is looking at a brain scan of A, then we would also say that B’s access is indirect. But if B’s cortex were directly wired into A’s cortex, is the judgment about A’s headache direct or indirect? It seems intuitive to me to say that B’s judgment is direct. But in this case what is the real difference between direct and indirect knowledge? It seems that the directness cannot simply be a matter of direct causal linkage, because in the case of looking at A’s brain scan there is a direct causal link between A’s brain activity, the image displayed on the computer, and B’s looking at the computer display of A’s brain activity. The question of direct or indirect seems then to be a matter of whether the judgment happens explicitly or tacitly. In the case of looking at A’s brain scan, the judgment is indirect because it has to be made on the basis of explicit scientific knowledge of various correlations between brain activity and headaches. But surely there is a difference between a novice interpreter of brain scanning images and an expert. Whereas the novice might make a slow, explicit judgment, the expert could directly know A is having a headache based on years of experience of looking at headache-brain correlations.
It seems then that the nerve-splicing case is more similar to the case of the expert than the novice, because once B’s cortical module has been exposed to A’s cortical activity for long enough, B’s cortical module would start to make judgments about A’s cortical activity in the same automatic way it makes judgments about other cortical modules in B’s brain.
Conclusion
In this paper I have argued that when it comes to investigations concerning whether or not sensations are private, it is crucial to distinguish between absolute and practical privacy. Based on the nerve-splicing thought experiment, I have tried to show that absolute privacy does not exist. In normal situations, what we have instead is practical privacy. It’s merely practical rather than absolute because the inability of other people to know what I am feeling is only a matter of those people not having access to the right technology. If we lived in a far-future society where nerve-splicing had become incredibly sophisticated, we would better understand why statements about absolute privacy are false. Because of the depth of practical privacy, we feel justified in talking about absolute privacy as if it corresponds to some metaphysical fact. But I have tried to argue that any facts of privacy are merely practical, not absolute. Accordingly, this suggests that we should revise our concepts of privacy to be about practical privacy. Although this conceptual revision can happen on the explicit, conscious level, the extensiveness of practical privacy suggests that it will take a long time before our implicit beliefs can catch up with any explicit denial of absolute privacy. And because we know that, practically speaking, our sensations will be private until the distant future, absolute privacy will always seem like an attractive thesis. But as the thought experiment suggests, this conviction of absolute privacy is mistaken.