Getting scooped: Derk Pereboom's Qualitative Inaccuracy Hypothesis

One thing I have learned in studying philosophy is that there is rarely anything new under the sun. I thought I had come up with an original idea for the paper I am currently working on, but yesterday I was wandering the library stacks and randomly pulled out Derk Pereboom’s book Consciousness and the Prospects of Physicalism. I read the first page of the introduction and realized I had been scooped by Pereboom’s “Qualitative Inaccuracy Hypothesis”. According to this hypothesis, when we introspect upon our phenomenal experience, our introspection represents that experience as having qualitative features it in fact does not have. For example, I might introspect on my phenomenal experience and represent it as having the special qualitative features that generate the Knowledge or Conceivability arguments against physicalism. Pereboom’s idea is that our introspection systematically misrepresents our phenomenal experience such that we are deluded into thinking our phenomenal experience is metaphysically “primitive” when in fact it is not. Although Pereboom only argues that the qualitative inaccuracy hypothesis is a live possibility, the mere possibility of it is enough to cause wrinkles in the Knowledge and Conceivability arguments. That is, if the hypothesis is correct, then Mary turns out to have a false belief upon stepping outside the room and introspecting upon her experience (since her introspection misrepresents, her resulting belief is false and so does not count as knowledge). Moreover, the conceivability (zombie) argument doesn’t go through: if our phenomenal experience does not in fact have the special qualitative features we introspect it as having (primitiveness), then it becomes impossible to conceive of all physical truths being as they are now (P), a “that’s all” clause (T), and there being no phenomenal experience (~Q), for the same reason that it’s impossible to conceive of PT holding and there not being any water.
That is, if our only evidence for phenomenality having the special features that make the zombie argument go through is to be found in our introspection, and if there is a possibility of our introspection getting the data wrong, then the zombie argument does not work without the (questionable) assumption that our introspection is necessarily accurate.

However, despite getting scooped on this, I believe my paper is still an original contribution to the literature. For one, I give a more empirically plausible model of how our introspection works, as well as more elaborate details on how it misrepresents our experience. I also tie this introspective inaccuracy to the well-known “refrigerator light problem” in consciousness studies. I also develop a methodological strategy for getting around the introspective inaccuracy that I call the “stipulation strategy”. From this, I develop some implications for our ascription of phenomenality to nonhuman organisms and argue that the most common stipulation strategies end up ascribing phenomenality almost everywhere in the organic world (which contradicts central tenets of Higher-order theory). This is a surprising conclusion. My paper is also well-sourced in the empirical literature, and unlike Pereboom, I don’t spend much time dealing with Chalmers and all the intricate details of the Knowledge and Conceivability arguments. I spend much more time developing a model of how introspection works and how it could possibly be inaccurate with respect to our own phenomenal experience.

So although it’s nice to know I’m not alone in arguing for what I call the “Indeterminacy of Introspection”, it’s always a shock when you spend so much time developing what you think of as an original idea and then discover that someone else already had it first. Luckily, my paper has a lot more going on in it, and I think it can still be published as an original contribution to the literature.



Filed under Consciousness

15 responses to “Getting scooped: Derk Pereboom's Qualitative Inaccuracy Hypothesis”

  1. Speaking as someone who has worked in the private sector my whole career, it is a known truth that it can be just as bad to have no competitors as to have too many.


  2. Charles Wolverton

    I recently had an exchange (at Splintered Mind) somewhat related to this post. It went nowhere, hopefully due to “failure to communicate” among the various participants rather than to failure to make any sense on my part alone. So, I’d like to see if I can fare any better here.

    To clarify the terminology, let’s define a scenario. A subject S is seated in a controlled environment and asked to stare straight ahead. Then S’s retinas are uniformly illuminated with colored light in the “red” part of the light spectrum. S is then asked two questions:

    1. How would you describe the external environment in this room based on your phenomenal experience?

    2. How would you describe your phenomenal experience?

    (The latter being my interpretation of the question implicit in the phrase “introspect upon our phenomenal experience”.)

    Now, I can easily imagine S’s answer to Q1 being in error, inaccurate, misrepresenting, etc. In response to Q1, S might answer:

    A1. There appears to be a very large, flat, and red surface illuminated by white light.

    If, for example, the phenomenal experience were the consequence of S’s retinas being directly irradiated by narrow beams from a red light source in an otherwise unlit room, the description would be in error on all counts.

    However, I can’t imagine S’s answer to Q2 being in error. Subjects typically might answer something like:

    A2. My phenomenal experience is of a uniform expanse of formless color, specifically of red.

    But suppose S’s answer were something like:

    A3. My phenomenal experience is of alternating uniform bands of formless color, specifically bands of red and blue, all undulating together.

    In what sense could either answer be “in error”? Ie, what would be the standard – the “reality”, if you will – relative to which S’s answer is to be evaluated?

    Even if there is a convincing answer to those questions, here are a couple more raised by the post:

    “Mary turns out to have a false belief upon stepping outside the room and introspecting upon her experience”

    Viewing “belief” as a propositional attitude, what sort of proposition is Mary implicitly assumed to now hold true? An obvious candidate is “Aha, now I believe that this is the phenomenal experience that corresponds to the word ‘red'”. But how would Mary make that association? Not having had the experience before, she can’t have learned to do so. She might infer the association from seeing a cherry-shaped object and knowing that ripe cherries are typically red, but that inference may be wrong. In any event, suppose her initial exposure is like the experience of S above, ie, to formless ambient light so that there is nothing from which to infer. Of course, in principle she might have learned to associate measurable neurological activity with exposure to light in the “red” part of the spectrum, but then she would already know that association and would learn nothing new.

    But suppose we assume that she somehow does acquire a new ability to assert a proposition associating “red” and her new phenomenal experience. If we take “knowledge” to be “belief plus something more”, what is the something? If we take it to be Sellarsian justification – entering the space of offering reasons to one’s peers – what reasons might she offer? Ie, how does she convert belief into knowledge?

    • Gary Williams

      Hi Charles,

      You say “However, I can’t imagine S’s answer to Q2 being in error.” Here is one way it could be in error: if S says “My experience of the red light as I am introspecting upon that experience is raw”. “Raw” is defined as nonintrospective experience, i.e., an experience that is in no way influenced or changed by the act of introspection. So if you are introspecting on your experience and you believe that what you are experiencing is raw, then you are mistaken (and often we are not introspectively aware of when introspection is having an influence), for the very act of introspecting on your experience makes your experience during the introspection non-raw. Thus, the relevant point for my purposes is that introspection gives us no access to “raw phenomenal experience” precisely because introspected experience always comes pre-interpreted.

      This point is relevant to setting up the explanandum. Take the assumed datum of the “red experience S is having when looking at a uniform red light source”. This assumed datum is ambiguous with respect to the contribution introspection makes to what it feels like to look at the red light. If you are having the red experience independently of introspective access then you cannot describe it. If you are not having the red experience independently of introspection then it is not a raw experience. There very well might be something-it-is-like to look at the red light when you are not introspecting. But you have no introspective access to that what-it-is-likeness. Thus, if you are not careful, you might make an error about the contribution introspection makes to what-it-is-likeness during acts of introspecting upon phenomenality for the sake of philosophizing about it. And this is how you can be in error about your own experience. If you think your introspected experience is raw, then you are mistaken because introspected experience is necessarily not-raw. I think this happens too often when philosophers sit around in their armchairs gazing on patches of color and thinking that their “gazing experience” captures the phenomenology of, say, a bat, or a rat.

  3. Charles Wolverton

    Gary –

    I’ve been trying to translate your response into my way of thinking about these issues, which is – as I recently discovered from reading Hylton’s book – Quinean. Whether or not you find my way convincing re phenomenal experience, you may find its relationship with blindsight interesting.

    A simplified model of my view is that we are basically organisms with very complex multimodal sensory receptors that are being excited by various inputs from the environment. We have some primitive innate responses, and over time we develop a large repertoire of more complex responses. In translating your comment into that model, I first needed to identify what is fundamental, and that seems to be a “raw feel”. I assume that the entity in my model that roughly corresponds is the neuronal activity consequent to sensory stimulation. Then your “raw experience” (RE) seems to be such activity plus what I take to be responses to the activity – although restricted to responses that are not linguistic (my need for that restriction will become clear).

    Now admittedly, that doesn’t seem like much to work with, but let’s see where it takes us. Think of babies, who add only innate primitive responses to raw feels – typically crying and squirming – to produce primitive REs. As time goes on, babies develop responses that are more complex but still non-linguistic – and therefore still result only in REs. At some point, babies develop a primitive linguistic ability. This leads us to the concept of “describing” those (still rather primitive) REs, and therefore to my first problem with the vocabulary of your post – and with the vocabulary of my first comment as well. Your “introspection” suggests the Cartesian Theater view, as does my “description” in its usual sense. You may mean “introspection” in some other sense, so I’ll just try to correct my use of “description”. In the case of an RE, by “description” I meant only a learned verbal response to that RE. So, I’ll subsequently call such a verbal response a “P-description”. For example, when a prelingual baby first acquires the word “red”, uttering that response to an RE is the baby’s P-description of the RE. But I’ll emphasize that a P-description is not a verbal representation of a visual mental image attendant to an RE, only a learned response to the RE.

    But what about the visual mental image that accompanies visual sensory stimulus, which I take “phenomenal experience” (PE) to mean? Well, PE has to be the P-description (or a phenomenon somehow derived from it) since in my simple stimulus-response model, if it is not a stimulus, a response is the only possibility. But this reverses the usual idea of a PE being a mental image that a linguistically capable organism can “describe” in the usual sense; in my model, the P-description comes first. If there is any phenomenon that might correspond to a visual mental image – ie, a PE as we usually think of that term – it must be consequent to the emergence of P-descriptions. (In this interpretation, PEs are indeed epiphenomenal, as some argue.)

    One may object to this, arguing that prelinguals presumably have PEs but of course haven’t yet acquired P-descriptions. But my impression is that people typically can’t describe much, if any, of their earliest – roughly prelingual – experiences notwithstanding that they no doubt did respond to stimuli in various ways. This suggests the possibility that prelinguals actually don’t have anything like what we usually have in mind by use of the term “PE”. In other words, they can’t recall P-descriptions because they didn’t have any and can’t recall images of events because they don’t have stored P-descriptions of the neuronal activity attendant to those events.

    I think of the process by which the language-capable produce a PE (in the sense of a visual mental image) as analogous to paint-by-numbers. There must be distinguishable neuronal activity patterns corresponding to different colors. If we imagine the neurons corresponding to the retinal field as being laid out on a grid like a canvas, the distinguishable patterns would identify P-description color words much as numbers identify paint colors. (Arnold Trehub’s retinoid theory of visual processing includes the idea of neuronal arrays somewhat like this.) Then the answer to the question “where are the colors that appear in mental images?” is “nowhere; there are only color words, ie, P-descriptions”. Although note that a P-description can be used to create – in a medium such as paint – a representation of a mental image, and the representation can be described (in the usual sense) as being, say, red (in its usual sense).

    In this view, the idea that a subject’s P-description of a PE can be in error isn’t meaningful since the latter is defined to be the former or derived from it. A subject’s P-description “red” in response to reflected light from an object may not be the “normal” one – almost everyone else responds with, say, “green” – but that would be a linguistic error, not an error of “introspection”.

    From this perspective, the phenomenon of “blindsight” is a failure of the “paint-by-numbers” process that converts a P-description into the illusion of a visual mental image – ie, into a PE (in the usual sense) – notwithstanding that the neuronal “canvas” may be largely intact. So, it seems that something like “PE-less sight” might be a more descriptive name. “Blind” suggests total loss of visual functionality, while a person with “blindsight” can have considerable visual functionality.

  4. I’m very interested in taking a looksee at your paper, Gary. For me, it’s the ‘Introspective Incompetence Thesis,’ and it’s something I’ve been thinking through for more than a decade now. I discuss it with reference to Schwitzgebel’s Perplexities at:

  5. Charles Wolverton

    Scott –

    I read your paper and really like your contrast between “intro-olfaction” and introspection. I have thought many times that part of the difficulty with these issues is that the vocabulary of vision is so familiar and information-rich that its often-careless use inevitably misleads. I’d like to use your analogy between the two sensory modes to rephrase my query to Gary (my first comment above). To do so, I need a little more vocabulary than you used.

    In any sensory mode there is a distinction between the entities that constitute the sensory input and their source. As you note, “We smell odours as readily as odorous things.” Which is fine as far as it goes, but it leaves implicit the process by which we identify (correctly or not) “odorous things” based on the odors they emit. Let me call that process “perception”. Then I’ll rephrase your quote as “we can speak of disembodied odors as easily as we can speak of odorous objects perceived based on those odors”.

    Turning to vision, I’d like to rephrase “we typically see things, not the light they reflect” in that vocabulary: “we typically speak of objects perceived but not of photons impinging on optical sensors”. This allows me to contrast two questions:

    1. How would you describe the object perceived on the basis of the impinging photons?

    2. How would you report the phenomenal experience attendant to the act of perception?

    Answers to the first question can obviously be in error: in the case of vision, hallucinations; in the case of olfaction, the analogous misidentification of an odor’s source. And possible errors can be readily identified by the absence of third party confirmation.

    My question then becomes: “what would it mean for answers to the second question to be in error?” There seems to be general agreement that such errors are possible – even common – so it would seem that simple examples would be abundant. However, the examples I’ve seen so far are either really answers to the first question or are IMO rather forced, such as Gary’s (an answer about “raw experience” seems unlikely since that response would be available at best to people knowledgeable in this field, and probably not even to many of them). And what process analogous to third party confirmation would be available for verification of “correct” answers?

    I should emphasize that I am not addressing “consciousness” (about which I’m skeptical, and hence agree with the gist of that part of your paper), just simple perception. When presented with a uniform monochromatic surface and asked question 2 above, I say “It’s the phenomenal experience I associate with the word ‘red’.” What does it mean for that answer to be in error and how might such errors be identified?

  6. Hi Charles. Eric’s strategy in Perplexities, as you know, is to simply accumulate reports from different subjects, point out all the (sometimes gobsmacking) inconsistencies, and say, ‘Given that we share the SAME consciousness, someone has to be wrong, don’t they?’ He fully acknowledges that there’s many aspects of introspection, such as colour perception, where the difficulties he considers do not arise. His answer to you should be: ‘Yeah, we seem pretty good when it comes to introspecting our conscious experience of colour.’

    In other words, it’s not an all or nothing affair, just as we should perhaps expect. Different modalities of conscious experience possess different bandwidth, so you should expect introspective competence to vary according to circumstance, just as you should expect perceptual competence to vary according to circumstance, as when trying to hear someone speak in a noisy bar, or guess the colour of a car in the dark. Some rooms of the brain are darker and noisier than others.

    I’m sure you agree this is well and fine – and that it actually doesn’t tackle the *import* of your question, which, as I take it, is simply ‘What on earth could ‘wrong’ mean in introspective contexts?’ Eric has no answer to this, but then he’s a skeptic–the difficulty of answering this question is part of his point.

    I do have an answer, but let’s make certain we’re clear on the issue. On the one hand, the inconsistent reports that occasion various kinds of introspective judgments suggest quite forcefully that we are often ‘wrong about conscious experiences.’ On the other hand, how do you deny those introspectors actually possessed the conscious experiences they report? We have the indications of systematic deception, but no clear way of conceiving what ‘deception’ could mean in these contexts.

    Part two (upcoming) of that post on Error Consciousness deals with precisely this issue. The key, IMO, lies in what might be called the ‘environmental prejudice’ that seems built into our cognitive systems. When we reflect on experience, we rely on machinery that is primarily adapted (likely to the tune of *hundreds of millions* of years) to cognizing external environments (social or natural). Aboutness works quite well in instances of environmental cognition. I was ‘wrong about that’ comes easy because there’s always more to ‘that,’ more information to be gleaned, thanks to locomotion and the way it allows us to sample our environments from variable positions. Not so with introspective cognition, where what you see is what you get, that’s it, that’s all! It’s not like we can stroll around ‘red’ and kick its tires.

    This is the point. The fact that we have such intuitive difficulty understanding what ‘wrong’ means vis a vis introspection does not so much argue its incorrigibility as demonstrate the severity of its informatic straits. There’s lies, damned lies, and lies that cannot be seen through or around – short of science.

  7. Gary Williams

    Here’s another way that our introspection can get things wrong: it can fall for the “refrigerator light illusion”. Assume that introspective consciousness is the type of consciousness that explicitly philosophizes about consciousness. Whenever that introspective consciousness asks itself “Am I conscious now? What is it like for me to be turned on?”, it is liable to confuse the what-it-is-likeness of introspective consciousness for the what-it-is-likeness of nonintrospective experience. This is how Julian Jaynes says it:

    “Consciousness is a much smaller part of our mental life than we are conscious of, because we cannot be conscious of what we are not conscious of…It is like asking a flashlight in a dark room to search around for something that doesn’t have any light shining on it. The flashlight, since there is light in whatever direction it turns, would have to conclude that there is light everywhere. And so [introspective] consciousness can seem to pervade all mentality when actually it does not.”

    Assuming Jaynes is right (which I think he is), introspecting philosophers (and amateur navel-gazers) are highly likely to make “introspective errors” about their experience by inadvertently extrapolating from their introspective consciousness qualitative features that don’t belong to nonintrospective experience.

    p.s. Scott, I will probably be posting the final draft of my paper on my site in the next week or so, so stay tuned.

  8. Cool beans, Gary! I look forward to it.

    In a strange way, I actually think the analogy does more explanatory work when turned upside down – the way I think Heidegger would have preferred! The lights are always on in the refrigerator, but go out whenever we open the door, forcing us to peer and make guesses. Since we never see the inside with the lights on, we have no way of distinguishing informatic poverty and ‘peering and making guesses’ from informatic sufficiency and incorrigible cognition, and so confuse the absence of light – which is to say, the absence of information sufficient for reliable cognition – FOR LIGHT.

    The very notion of ‘introspection,’ on this account, is an anosognosiac artifact. Given the developmental and structural constraints it faces, I think something like this (and the Blind Brain Theory) has to be the case.

  9. Charles Wolverton

    Scott –

    Thank you, thank you, thank you. I’m not entirely in tune with your reply, but at least I have the (novel) feeling that we’re communicating!

    I picked monochromatic color perception as an example because it indeed seemed a situation that shouldn’t be very controversial. Wrong!

    The problem I have in analogizing the refrigerator example with “introspective consciousness” (light on?) and “nonintrospective consciousness” (light off?) is that as noted earlier I find analogizing “introspection” to shining a light on the “inner workings of the mind” hopelessly Cartesian – plus, consistent with Gary’s 8-21 “quote of the day” I don’t see “consciousness” as referring. Which is why I consistently try to apply Quine’s straightforward model of sensory stimulation (from sources external or internal) and context-dependent responses (implemented with or without external manifestations). And in that model, although clearly a subject’s description of the external environment based on sensory stimulation from that environment can be in error vis-a-vis the “actual” environment (ie, a consensus description by others), it isn’t clear to me even what would constitute a subject’s “description” of the internal environment other than their response to some stimulation that is being called an “act of introspection”. And to say that such a “description” is either “correct” or “incorrect” seems like saying that someone’s reflexive response to a knee tap is either “correct” or “incorrect”. (And yes, I view even the most complex stimulus+environment+response scenario to be essentially reflexive.)

    As in my exchange at Eric’s blog, I need to emphasize that I am not arguing “introspective incorrigibility”, a phrase suggesting that given a stimulus and a context, there is a correct “description” of the internal environment (ie, response to a stimulus). I’m questioning whether assessing such a description/response as being “correct” or “incorrect” is meaningful.

    BTW, I haven’t read Eric’s book, hence am not familiar with his examples. However, in the exchange at his blog mentioned above in my first comment, he (I think) and his defenders (I’m sure) argued against even my simple monochromatic perception example.

    Finally, I agree with your evolutionary tack re “extrospection” vs introspection – and suspect it actually applies even on the time-scale of an individual life. Some are “better than others at introspection” – which is only to say that their responses to relevant stimuli are more nuanced, not, of course, that they are more often “correct”. Along these lines, in the unlikely event that you aren’t already familiar with it, Rorty’s evolutionary thought experiment re “Antipodeans” in Chap 2 of “Phil and the Mirror of Nature” might be of interest.

    Oh, and what’s the Blind Brain Theory?

  10. Hi Charles. I had tried posting this with a couple of links attached, but it vanished, so I’m assuming there’s some kind of anti-spambot thing going on. I have a paper on the Blind Brain stuff on my site if you’re interested.

    “I’m questioning whether assessing such a description/response as being “correct” or “incorrect” is meaningful.”

    PMN actually made a ‘pop’ noise when I pulled it off my shelf, it’s sat untouched for so long! While I appreciate contextualist critiques of representation (I was once, among other things, a Wittgensteinian for a time) I no longer think they offer a way out or around the impasses that face philosophy of mind. Rule and Context are as much intentional concepts as Representation, and as such carry the very presumptive ballast I think neuroscience needs to throw overboard to make real inroads explaining consciousness. ‘Information’ is my preferred unexplained explainer these days.

    So eliminativist critiques along Rorty’s or Dennett’s pragmatic lines strike me as ‘opportunistic’ in a ‘have their conceptual cake and eat it too’ sense. They use functional arguments to undermine ‘original intentionality,’ then draw *pragmatic* consequences from this. The result, I find anyway, is that they get to say both that meaning is a fiction *and* that meaning is autonomous – in some arcane sense.

    This strikes me as awfully convenient – hinky.

    From the Blind Brain Theory standpoint representation and context are of a piece, artifacts, you could say, of my Inverse Refrigerator Light metaphor, what happens when deliberative cognition tackles the woefully inadequate information attentional awareness has at its disposal when ‘introspecting.’ BBT presumes that ‘conscious experience’ is the product of brains once entirely dedicated to environmental tracking evolving to track themselves, and that ‘consciousness’ or first person tracking is a result of this relatively recent development. It then asks some simple questions: What information should we expect that this new ‘first person tracking’ system will and will not have access to? How might these informatic constraints find themselves expressed in conscious experience?

    The standpoint that falls out of this is brutally nihilistic, so much so I can’t say I actually believe it. ‘Correct/incorrect’ aren’t even ‘heuristic fictions’ (or stances) warranted by their predictive utility in BBT–they are simply artifacts of philosophers trapped in their reflective, second-order informatic bottleneck… a blinkered way for philosophers, not anyone else, to make theoretical sense of things. And what we call ‘consciousness’ is more like seeing Mary, Mother of God, in a waterstain than anything. As soon as our cognitive systems are primed, we cannot but manufacture what we see given the gappiness and paucity of the information available to first-person tracking. Our ‘brain blindness.’

    So, I partially agree with you: correct and incorrect (as philosophically thematized) do not apply, not because they are ‘meaningless’ in this context, but because they never applied in the first place. Information is all there is, and it becomes increasingly depleted and unreliable the more our attention is directed toward the first person. Since flagging informatic insufficiency actually requires more information, actual insufficiency generates the illusion of greater sufficiency.

    The intuition of introspective sufficiency, you could say, is an example of the Dunning-Kruger effect! As are the varieties of Error Consciousness that result from it.

  11. Charles Wolverton

    Scott –

    Sorry to be so slow in responding to your last comment, but formulating a response has turned out to be quite taxing owing to the fact that the attempt led me back to several Davidson essays in the collection “Subjective, Intersubjective, Objective”, and altho I had read them all one or more times before, I discovered (unsurprisingly) that I had barely understood them. But on this rereading, I think I finally “got it” and discovered that several (in the “Subjective” section) seemed either directly or tangentially relevant to this thread. I was going to try to summarize DD’s argument in favor of “first person authority” (his term, which I’m inclined to avoid since it seems to suggest “introspective incorrigibility”, which as I’ve said doesn’t make sense to me and therefore isn’t the way I’d describe his conclusion), but it is taking me a long time to understand it well enough to do so. So, for the moment I’ll just ask if you (or anyone else interested in the issue) are familiar with those essays or have access to them. If not, I’ll try to work up an outline of his (rather involved and somewhat convoluted) argument. In short, it seems to me to be a quite sophisticated extension of the simple “uniform color” argument against the meaningfulness of “introspective error”. (He even uses that simple scenario as an example at one point.)

    I may also have some more comments on your Schwitzgebel review (and possibly your DD paper), but again it will take a while longer to formulate them.

    Re Rorty, my only response is that although Rorty resonates with me, I don’t think of him as being much into consciousness per se. (And despite at one time being an enthusiastic denizen of Conscious Entities, now neither am I.) The word arises in the early chapters of PMN but I think not in quite the same context in which it is now commonly used. And it occurs only a few times in the index (actually under a heading “Mind as …” where “…” is various approaches to conceptualizing “mind”). I see Rorty in general and PMN in particular as more “philosophy of philosophy” than “philosophy of mind”. In the index to “Rorty and His Critics”, the word hardly appears at all. CIS has no index, but I’m pretty sure consciousness isn’t a focus there either. The only reason I referred to PMN was the Antipodean thought experiment wherein a hypothesized different (recent) evolutionary path leads to introspective insight into the details of neural activity that dramatically extends the “information horizon” (assuming I’m interpreting that term correctly).

  12. Charles Wolverton

    The SEP entry for contextualism starts with an historical survey in which appears this quote:

    “[while] man is a social animal…when it comes to the justification of beliefs philosophers have tended to ignore this fact” (David Annis, 1978)

    As best I recall, in EPM Sellars doesn’t explicitly describe functioning in the “logical space of reasons, of justifying … what one says” as a social practice, but that seems an obvious interpretation (in his introduction to the 1997 edition of EPM Rorty explicitly so describes it, and in PMN he says “our certainty will be a matter of conversation between persons … we shall be in what Sellars calls ‘the logical space of reasons'”).

    Also not explicit in EPM (IIRC) is that the “space of reasons” is populated by relevant peers, and that the objective is to achieve consensus within that peer group. But that also seems an obvious assumption (I take Rorty’s glib “truth is what your peers let you get away with saying” to be a reference to the Sellars quote). Understood that way, Sellars’ position seems to acknowledge context-dependence, specifically dependence on the specific peer group inhabiting the space of reasons. E.g., see the example addressing the audience-dependent adequacy of reasons (in the same SEP entry paragraph).

    If these observations are correct, it would seem that Sellars was one of the earliest contextualists. Yet, he is not cited in the SEP entry. And this seems a surprisingly frequent oversight. Am I missing something?

  13. Hi Charles. Sellars is a relative latecomer. Nietzsche and the early American pragmatists were clearly contextualists in the modern sense, I think. But (if you exclude certain ancient Greeks) Hegel is contextualism’s papa. I’m guessing you’ve read James’ paper on why consciousness doesn’t exist? Sellars’ essay on the Manifest and Scientific Images of Man is also a gem. Davidson makes my head hurt, but his arguments for anomalous monism are as strong as any. The fact is, no one knows what the fuck they’re talking about! I certainly don’t.

    Since there seems to be a strong correlation between the scarcity of information and the human ability to cognize, this is the approach I take (with BBT). Whatever consciousness, intentionality, normativity, meaning, qualia, and so on are, we possess only enough information to opine, and not nearly enough information to cognize. My approach simply asks why this might be. I formulate a number of possible constraints, then ask how these might be expressed in what we take experience to be.

    The contextualist approach, by my lights, simply begs the question. It presumes an interpretation of normativity and meaning which it (all too often) then uses to block inquiry into normativity and meaning, claiming that it is ‘wrong-headed’ in this or that respect. What can a naturalist do but shrug their shoulders and plug on?

    In my case, it amounts to little more than another philosophical attempt to conjure theoretical virtue out of ignorance.

  14. Charles Wolverton

    Scott –

    I’m guessing you’ve read James’ paper on why consciousness doesn’t exist?

    In my case, this is almost always a bad guess – I have no formal and only very narrow informal background in philosophy, although those things that I have read have been read and reread pretty carefully. So, I’ll appreciate any recommendations of things you consider must-reads in this arena.

    no one knows what the fuck they’re talking about

    I’m glad to hear someone well-informed voice this opinion. Even Davidson, who strikes me as being on the right track (presumptuous, of course, coming from a benchmark for ignorance in the relevant disciplines like me), seems to edge toward a more neurologically based analysis (what I assume you mean by “naturalism”) but ultimately veers off back into the referent-free vocabulary you highlight (“consciousness, intentionality”, et al.). In DD’s case, it seems that he actually knows WTF he’s talking about but is for some reason reluctant to express it in a better (more natural?) vocabulary. And that may contribute to his “hurting your head”. I worked on his “Indeterminism and Anti-realism” essay for a week, probably reading every paragraph half a dozen times on average. I don’t think the concepts are all that difficult (e.g., I’m reasonably good at abstract math), but his arguments seem very hard to follow. Makes me wonder if his preference for essays was due to some bad experiences with book editors who insisted on better organization!
