Some thoughts on logic, explanation, and the philosophy of mind.

Since I am starting a PhD program in philosophy at Wash U, I will be required to fulfill some logic requirements over the next few years. I have never taken a course on formal logic, except for a class on critical thinking during my undergrad, but I don’t think that actually counts. Although I am starting to get more interested in pure logic for logic’s sake, I have always been skeptical of the direct relevance of formal logic to my research. My primary research interest is to understand the mind. Some logicians might say that insofar as logic is the study of reasoning, and reasoning is a product of the mind, the study of logic will allow one to better understand the mind. But I’m not so sure how far this takes us. Logic is the study of reasoning at the most general level. When you study pure logic, you are not actually trying to produce a true idea about the world that might turn out to be wrong. In logic, the goal is not to make a substantive claim about reality, except insofar as logic itself is part of reality. Instead, you are trying to study the form of a valid argument. Frankly, this just doesn’t interest me. I am interested in producing true theories about how the mind actually works, which involves making substantive claims that might actually turn out to be wrong. The study of logic doesn’t produce true theories about the mind because that just isn’t what logic does. Does this mean that I am uninterested in using logic to produce truth? Hardly. Just as a jazz musician doesn’t need to know the physics of acoustics in order to play good music, a philosopher doesn’t need to know formal logic in order to produce logical arguments that lead to truth.

When I say I am interested in producing truth about the mind, what does this mean? What does an “explanation” of the mind look like? For some orthodox philosophers, an “explanation” or “account” of the mind might look like this: every mental state supervenes on the physical world; mental states cannot change unless there is a corresponding change in the physical world. For these orthodox philosophers, this is where their job of explaining the mind ends. This type of explanation is supposed to be an argument for a “materialistic theory” of mind. Of course, these philosophers produce crafty arguments in order to reach the conclusion that the mental supervenes on the physical. And these philosophers are probably also involved in the defense of their thesis against various counter-examples and thought experiments such as Mary the color scientist, zombies, etc. In order to defend their “materialistic theory” of the mind, these philosophers would spend a significant amount of time defending the supervenience thesis against these thought experiments. To successfully respond to the “zombie argument” against materialism would count as “progress” in the expansion of the materialistic theory of mind. Likewise, many orthodox philosophers of mind think they are making progress in the field by coming up with counter-examples and purported knockdown arguments against other philosophical “explanations” of the mind, without ever making a substantive claim about the world that may in fact turn out to be wrong.

But honestly, I am not very impressed by such “materialistic theories”. I even think it might be problematic to call such ideas “theories of mind”. So what does a real materialistic explanation of the mind look like? For one, it’s going to be incredibly complicated and not easily compressed into a neat claim like “the mind supervenes on the physical” given that the brain, the seat of the mind, is the most complicated three pounds of matter in the known universe. To be sure, the mind sciences are in their infancy. This is why I have a love/hate relationship with philosophers. An orthodox philosopher might be content with “explaining” the mind without once referencing the brain. To me this is totally unacceptable. An explanation of the human mind MUST involve some reference to the science of mind, not just the philosophy. Thus, I think philosophy of mind is simply the theoretical branch of psychology, much like theoretical physics and its relationship to experimental physics. Philosophy jumps ahead of the data and produces theories that unify data into a more explanatory framework, which leads to better experimentation, which leads to better theory, and so on.

Now, the orthodox philosophers will probably respond by saying that such a brain-based explanation of the mind is surely limited to the local domain of earth-bound creatures, but that’s not what they are interested in. Surely, they will say, if we met an alien entity who appeared to be intelligent but did not have a brain like ours, we would not say that it lacked a mind. Hence, these orthodox philosophers claim to be interested in explaining the mind at such a level of generality that it applies to ALL minds, including exotic aliens with strange nervous systems. So any explanation of the mind that references the human brain must not be a real explanation of the mind, because it cannot handle different kinds of exotic minds. So when philosophers come up with “theories” of mind like “everything mental supervenes on the physical”, this explanation is supposed to apply to all minds in the universe, and not just humans. Thus, these philosophers think that they have some deeper insight into the mind because their account is so general.

But I think this generality and lack of concreteness is precisely the weakness of such theories. Let’s grant that an alien species would have a radically different way of thinking. Now, if we wanted to theoretically study an alien mind, what would be the best way to do so? By coming up with a priori necessary truths like supervenience? Hardly. I think the best way to learn about possible alien minds would be to study something like xenobiology. Evolutionary theory would still apply to the aliens. So would other scientific theories. I thus think that the best way to learn about “minds in general” is to study science, not a priori philosophizing. If you understand a great deal about how biological organisms evolved on this planet, I think you would have a better chance of understanding what an alien mind might be like than if you were to simply sit in your armchair and try to come up with a priori necessary truths such as “the mental supervenes on the physical”. Now, don’t get me wrong. I actually do think that the mental supervenes on the physical. How could I not, being the materialist that I am? It’s just that I don’t think philosophy of mind should stop there and consider its job of explanation finished. And no, responding to endless counter-examples is not “progress”. Progress involves better understanding the biology and social conditioning of the mind, in all its glorious complexity. It involves at least making specific hypotheses locating mental functions in anatomy, and looking closely at the effects of development and the social milieu on mental function.

But isn’t this just going back to phrenology? I don’t think so. Phrenology was an unprincipled investigation into the location of brain function. It was based on a false belief, namely, that brain function can be understood by looking at bumps on the head. But “locating” mental processes in specific neural circuits (or distributions of circuitry, as is more likely) is vastly superior as an explanation of the mind than any kind of orthodox philosophical explanation. For example, my colleague Micah Allen and I have made concrete hypotheses about the default mode network’s involvement in reflective consciousness, and proposed a provisional model of how the DMN interacts with lower processes in the course of everyday human cognition. Our model is based on both phenomenological principles (i.e. that humans have both a prereflective and a reflective consciousness) and neurofunctional principles based on recent discoveries in cognitive neuroscience. Is our model the end of the story? No. The explanation of the mind is just getting started. The proper way to progress from here is to continue the interdisciplinary style of explanation wherein philosophy and science work in harmony to produce substantive claims about the mind that may or may not turn out to be false.



Filed under Consciousness, Philosophy

6 responses to “Some thoughts on logic, explanation, and the philosophy of mind.”

  1. Corey

    Hi Gary,

    I think you raise some interesting points, but I don’t think that many of the philosophers of mind you mention (the “orthodoxy”) would claim that they are offering explanations of the mind. Rather, they are attempting to offer explanations of how two seemingly very disparate kinds of things might be related. I don’t think any philosopher of mind, even one who thought she had solved the mind-body problem, would claim that she had offered an “explanation” of the mind, unless by “explanation” you only mean an account of how minds are related to physical things. No serious philosopher of mind would claim that the empirical sciences of the mind are useless in determining how the mind actually works, although a serious philosopher of mind might claim that even a detailed neuroscientific/psychological theory would leave out important philosophical questions.

    There is a separate question (something philosophers of psychology tend to worry about a bit more) about the degree to which neuroscientific details are relevant to psychological theory. Whether mentality can be studied independently of its “hardware” is a concern orthogonal to whether the mental supervenes on the physical.

    Finally, I take issue with your treating the responding to examples and counterexamples as something other than progress. This is precisely how progress is made when it comes to deep theoretical issues. If the examples and counterexamples come from experiment, great. But if they are hypothetical, that does not count against them (in any principled way I can see). What are the implications of general relativity for a faster-than-light particle? Well, we can say what they are whether we find any such particles or not. Or, if the theory has nothing to say about a hypothetical example, then we can illuminate the limits of that theory’s domain. The founders of quantum mechanics did this quite a bit; I think one would be hard pressed to say that this process did not count as progressing the theory.

    • Gary Williams

      Hi Corey,

      Thanks for the insightful comment; you have given me a lot to think about. You said that “No serious philosopher of mind would claim that the empirical sciences of the mind are useless in determining how the mind actually works.” I agree with you here, simply because the question of how the mind “works” is an empirical issue. The question though, is to what extent philosophy of mind should be tethered to the sciences of mind in figuring out anything about the mental world. I feel like the orthodox (or “traditional”) philosophical position is to claim that philosophy of mind can proceed more or less independently of the sciences of the mind. I do not feel like I am attacking a strawman. I didn’t want to name names, but look at this quote from Colin McGinn’s influential text on the philosophy of mind, The Character of Mind:

      “One influential contemporary approach to the mind urges that we pay special–even exclusive–attention to the results of the empirical sciences. As philosophers of mind, we should, on this view, see ourselves as commentators on what the scientists are up to. I have little sympathy for this point of view, then or now. Of course, we should be interested in empirical findings, but I believe that the real philosophical problems are not to be handled in this way. Indeed, I believe that scientists carry with them a good deal of tacit philosophical baggage, which conditions the work they do and their means of reporting it. Philosophy, for me, is still anterior to science, and largely independent of it. This book embodies that (unfashionable) point of view.”

      While McGinn thinks it is currently “unfashionable” to claim that philosophy of mind is “largely independent” of science, I think that this is actually the orthodox position in philosophy of mind, although there are of course many people like myself who disagree. The point of my post was to try and challenge the legitimacy of this autonomy.

      You said “I don’t think any philosopher of mind, even one who thought she had solved the mind-body problem, would claim that she had offered an “explanation” of the mind, unless by “explanation” you only mean an account of how minds are related to physical things.” Well sure, I grant you this, but this is exactly the problem: I don’t think it is possible to show how minds are “related to physical things” unless you first have a firm understanding of what a mind is, and I think the best way to know what a mind is requires a good understanding of both phenomenology and the cognitive sciences, not just one or the other. Philosophers like McGinn accept that one must pay attention to the phenomenology, but they strongly disagree with philosophers who think that philosophy of mind must be done closely alongside the empirical sciences, and I don’t think McGinn is the only one. In fact, I think his claim about philosophy of mind being “largely independent” of the sciences represents the current dogma of philosophy of mind, even though many people pay lip service to the importance of empirical work.

      Also, you said “But if they [thought experiments] are hypothetical, that does not count against them”. Yes, I agree. Thought experiments can be useful in some circumstances, but I deny that the whole cottage industry of responding, for example, to the numerous counter-examples and counter-counter-examples to the zombie argument represents progress in the philosophy of mind. In fact, I think the whole debate, and others like it, have gotten the field stuck in a deep rut, although of course this is just my personal opinion based on my reading of the literature (particularly on the philosophy of consciousness). This comes back to what I said above about getting the “data” right. If philosophers are concerned with figuring out the relationship of the mental to the physical, I think it is important to first get a firm grasp of what the mind is in the first place. And I deny that one can get the best grasp of what the mind is without having at least some grounding in the mind sciences.

  2. Gary,

    Before I continue, let me say what I should have said earlier: Congratulations on starting at WashU in the fall! There are some excellent people there, and I consider Carl Craver a mentor, and an excellent philosopher (not to mention a very nice guy). It sounds like the PNP program (which I’m assuming you’ll be a part of, given your interests) is the right place for you.

    I do agree with you: there is a large group of philosophers of mind who, in my opinion, underestimate the importance of caring about data. Personally I don’t see the point in drawing hard lines between disciplines, and it may be historical accident that what is called “philosophy of mind” isn’t, instead, “theoretical psychology” (I have in mind the difficulty, for me anyway, of distinguishing philosophy of physics from theoretical physics; whatever difference there is does not seem to be of great importance). The only thing I really take issue with, which may not be exactly what you’re saying, is the claim that there is no place for progress to be made without attending to the data. The best philosophy of mind/psychology, in my opinion, plays the role that one might imagine could/should be played by theoretical psychology.

    The larger methodological issue here is an interesting one: to what extent can we possibly make progress toward understanding the mind (in some sufficiently broad sense of “understanding”) in the absence of constraining data (which is what I take to be part of your concern with zombie-ish and Mary-ish arguments)? While I don’t, as they say, have a horse in this race (I gave up on consciousness myself), I think there is a place for this kind of theorizing in exploring (at least part of) the space of “how-possibly” explanations, and in sharpening up whatever pre-theoretical ideas we have about the mind. Getting a firm grasp of what the mind is does require attending to some data, but that’s not going to be sufficient: we have to interrogate both our concepts going into those investigations, and how our concepts might either change—or need to be replaced entirely—in light of new data. And those are tasks that are, in my opinion, what philosophers are often quite good at.

    In my own case, I’ve become interested in the nature of guilt and shame. Many philosophers have written about these emotions without attending to much data: they’ve relied on their own conceptions about what counts as guilt, and what counts as shame. But, it turns out that there is empirical data on these two subjects, and they can be distinguished in ways that some might find surprising. But that doesn’t settle the issue: the psychologists studying guilt and shame have supplied various operational definitions of the two, and insofar as those operational definitions correspond to the “outside the lab” notions, they are telling us something about guilt and shame. But how are we to decide that? There is no guilt-o-meter that has a bead on the “real” emotion (or mental state, or whatever) that we call guilt. Rather, we should do some theorizing, given the data, and given what we think guilt should be, what role it should play in our other theories, and so on.

    Now, I suppose it could be that some people think that something about mentality is so abstract that data really couldn’t speak to (certain kinds of) understanding of the mind. No data can settle the issue of whether modal realism is correct, or whether mathematical statements are true because mathematical objects exist outside of spacetime, or whether determinism is true. I suspect some people have this view about consciousness (like McGinn). It’s not what I want to work on, but I’m not sure that such work doesn’t amount to progress. Unless, of course, you think that there really is no such thing as philosophical progress, which is a much larger issue still.

    • Gary Williams

      Dear Corey,

      I absolutely agree with your statement that “there is… [a] place for progress to be made without attending to the data.” Being rather sympathetic to the phenomenological tradition, I definitely think that one can make great progress in understanding the mind by simply paying close attention to your own personal experience, a skill which I think needs to be trained and honed unless it comes naturally (which is rare, I think). I think that many of the more “neuro” oriented philosophers of mind suffer from lack of phenomenological detail in their writings. Their examples to illustrate a point are often stiff and dry and lacking the close phenomenological detail that brings explanations to life (I am thinking of examples like “S knows that P”, which just don’t do much for my imagination). Moreover, I think there is an important methodological point to be made about having the introspection right before attempting to neurologize. I have always liked how Julian Jaynes said it: “We first have to start from the top, from some conception of what consciousness is, from what our own introspection is. We have to be sure of that, before we can enter the nervous system and talk about its neurology”.

      As for philosophical progress, I definitely think that we have come a long way since Plato. I mean, post-Darwinian philosophy alone is a radical leap towards progress compared to accounts of mind which were written before the introduction of evolutionary theory. And there are probably hundreds of other examples that count as examples of philosophical progress insofar as we have refined our conceptions, introduced new distinctions, and integrated new empirical findings.

  3. Charles Wolverton

    I came to phil of mind from a comm system engineering background (unsullied by exposure to either phil or neuroscience) and tend to think in those terms. So, for me a person (actually, a community of persons à la Davidson and Sellars) is a comm system, which, like any complex system, can be viewed in terms of decreasing levels of integration (total system => subsystems => assemblies => components). There are issues specific to each such level, and different capabilities are required by those addressing issues relevant to a given level. However, doing well at one level typically requires at least some degree of familiarity with the issues relevant to the next lower level or two.

    From that perspective, I see philosophers of mind as addressing issues relevant to the highest levels of integration, ie, functioning in a way analogous to system or subsystem engineers. Child development psychologists (et al), physiologists, and neuroscientists address issues at the lower levels (what people like me refer to as “implementation details”). But that means that as good “system engineers”, philosophers of mind should follow Gary’s lead and keep abreast of developments in the relevant areas of those fields, if for no other reason than to be aware of “constraining data” per Corey, ie, to get reality checks on their speculations. I can’t imagine what benefits anyone would think accrue to studiously avoiding such inputs.

    Some neuroscientists claim that there remains no role for philosophers, that future results must come from the laboratory – a position Dennett may have had in mind when he described some relevant activity as “reverse engineering”. But that description seems not quite accurate. In reverse engineering as I understand it, one has a good grip on the system-level functionality and how lower level components work – what’s missing is the specific way the assemblies built up from the latter implement the former. In the case of human communication, there clearly is much yet to be learned about the “components” (neurology). But also, the dearth of references to Davidson, Sellars, Wittgenstein, et al, in my (admittedly quite limited) experience in online discussions of thought, language, learning, etc, suggests to me that the system and subsystem level functionality may not be adequately – or at least widely – understood, leaving plenty of room for additional philosophical consideration, ie, “system engineering”.

    This “top-down” systems engineering view seems consistent with Gary’s quote from Jaynes. And from that perspective, McGinn’s quote could perhaps be charitably interpreted as arguing against a bottom-up “reverse engineering” approach that starts with, say, cell biology. That does strike me as no more promising than trying to understand an electronic communication system’s functionality by examining the logic gates in its integrated circuits.

    Aside to Corey:

    FWIW, I also “gave up on consciousness”, although probably for somewhat different (and no doubt less sophisticated) reasons. In trying to learn about this area, I’ve found the word used so inconsistently as to call into question whether it even corresponds to a coherent concept. Also, there are claims that something like 98% of what we do is un- or sub-“conscious”, which suggests that with a little tweaking of whatever definition of “conscious” was used to get that number, the “concept” might be eliminated entirely. (And I’ve come to have much the same eliminativist attitude toward “the mind”.)

  4. Martin Mondello

    If you are going to study the mind from a materialistic science point of view, you will have to deal with the mind-body distinction held by virtually all science and philosophy. I personally follow Searle in treating the distinction as an option, not a necessity. Where mind and brain diverge is unspecifiable, it seems to me. One either reduces the mental to the physical or does a Berkeley and reduces the material to the mental.

    If you conk someone on the head and they can’t speak and look dead to the world, this is supposed to be an illustration of how the mind reduces to the brain. But in this is the assumption that awareness arises from the material. It could instead be that awareness is prior and gives rise to the world. In other words, that awareness projects and “sees” the content of sense and mind and yet is prior to and separate from them. So that getting knocked in the head and seeming dead to the world does not affect awareness, but only the content of awareness. Instead of seeing senses and mind, the awareness sees the body getting conked and the resulting absence of content, and upon the restoration of sense and mind, sees that too.

    I know that you will not entertain such a possibility in the realm of philosophy/science you are entering, and that is why I wanted to note that the scientific approach accepts without question the mind-body duality and the materiality of awareness. This comes of presuming that these are hard and fast distinctions denoting real entities instead of, say, presuming them optional abstractions from a broader experience. In other words, it treats them as real things.

    But the difficulty this creates is this: if you acknowledge there is such a thing as mind, and mind is not the same thing as brain, and it is acknowledged that reality is our subjective notion and the material brain itself, those firing neurons, is not the same thing as notions, then reality must be due to mind and not brain. Another way to get at it is to say that subjectivity must be an objective thing if it is scientifically real, and objectivity, given the subjective nature of thought, is confirmed subjectively. Brain is viewed subjectively. There is no escaping the fact that these two mutually exclusive things, mind and brain or mind and body, are defined in terms of each other; they are not separate entities at all.

    Better to acknowledge the mind-brain split as a premise of your research (“given the mind-brain and mind-body split”) and so qualify your research in that way than to hold the split as a real thing. Ultimately, you will not succeed in reducing mind to brain via experiment because there is nothing necessary about the split. One changes and there are correlated changes in the other; that is all. Having made the split, how can you say that one is primary when it means nothing without the other? Philosophically there is no necessity to do so. This is why I say, be aware of this premise and acknowledge it; then in your research you are on safe ground.
