Monthly Archives: April 2012

On Becoming an Expert

Grad school is providing me a great opportunity to become an expert in my chosen field: philosophy. But there is another field besides philosophy in which I am also interested in reaching expert level: chess. Before I die I would love to reach “Master” level in chess. However, this is no easy task. If I assume the 10,000 hour rule is roughly accurate, and I study an hour a day every day, it will take about another 30 years before I reach Master level. Obviously, if I spend some days studying more than an hour, I can speed this up, but there are also going to be days when I don’t study at all, or don’t study that seriously. So it looks like it really is going to take a long time for me to become a true chess expert. Probably the biggest factor in reaching expertise in chess will be maintaining motivation for study over the decades. If in a few years I get burned out on chess and take a long break, then obviously my path to expertise will be slower and less effective. Continuous motivation for decades without burning out is necessary to achieve expertise in just about any field.
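
For what it’s worth, the arithmetic behind that estimate is easy to sketch. Here is a back-of-the-envelope Python snippet (the 300-days-a-year figure is just my guess at how off-days stretch things out):

```python
# Back-of-the-envelope estimate of time to chess mastery,
# assuming the 10,000-hour rule holds.
TOTAL_HOURS = 10_000

def years_to_mastery(hours_per_day: float, days_per_year: int = 365) -> float:
    """Years needed to log TOTAL_HOURS at a given daily study rate."""
    return TOTAL_HOURS / (hours_per_day * days_per_year)

print(round(years_to_mastery(1.0), 1))       # an hour every single day -> 27.4
print(round(years_to_mastery(1.0, 300), 1))  # allowing ~65 off-days a year -> 33.3
```

So an hour a day with no days off comes to about 27 years, and a more realistic schedule with rest days pushes it past 30.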

When I think about being an expert in philosophy, though, it’s a little different. Chess expertise is easy to quantify because there is an objective ratings system. But how do you know when you have become an expert philosopher? Is it when you publish? Or when you’ve read enough literature? Or get a tenure-track job? Hard to say. There is only one way to be a good chess player (make better moves than your opponent), but there are probably thousands of different ways to be an expert in philosophy. So the question of expertise is a great deal more subjective than in chess. The objectivity of chess is in fact one of the reasons why I love it. When you play a game, there is only the board, with everything in plain sight. Philosophical expertise is more fluid and harder to pin down. It’s hard to “see” expertise in philosophy. Yet I strive for it every day. At the moment this involves just surviving grad school, reading as much as I can, and writing my thoughts down as often as possible while trying to stay sane. Doing these things will hopefully translate into philosophical expertise some day. In the meantime, I will also be working on my chess. The struggle, then, is to find a balance between these two tracks towards expertise. Every time I am reading I could be studying chess, and vice versa. The ideal balance is also not going to be equal, since I should take philosophy much more seriously, given that it is my future career track. I never plan on making money through chess playing, but this philosophy thing is supposed to turn into a future job (although one could see grad school as a job too). At the same time, I think it’s good to have an intellectual outlet that isn’t philosophy. When I get tired of reading and writing philosophy, chess is always there to keep my mind stimulated.


Filed under Random

New Paper: In Defense of the Extraordinary in Religious Belief

Read it here: In Defense of the Extraordinary in Religious Belief

So this is a paper I wrote for Ron Mallon’s Culture and Evolution seminar. I’m really happy with how the paper turned out, and I believe this is the direction I want to go for my future dissertation project. The paper is really a response to some of Pascal Boyer’s claims about the importance of extraordinary religious experience in explaining the origins and cultural success of religious belief. For example, Boyer says:

Even if prophets were the main source of new religious information, that information would still require ordinary nonprophets’ minds to turn it into some particular form of religion…This is why we will probably not understand the diffusion of religion by studying exceptional people, but we may well have a better grasp of religion in general, including that of prophets and other virtuosos, by considering how it is derived from ordinary cognitive capacities. (Boyer, 2001, pp. 310-311)

This is a standard thing to say in the literature on the evolutionary origins of religion. Most psychologists who try to explain religious belief do so in terms of the operation of ordinary cognitive mechanisms like the Agency Detection Device or our theory of mind capacities. The basic idea is that we don’t need to posit any sort of “special” religious mechanism that serves as the generator of religious belief. According to what I am calling the Standard Cognitive Model (SCM) of religious belief, religious thoughts are really not that different from any other kind of cognitive operation. Crucially, the SCM is committed to a particular order of explanation: you explain both religion in general and extraordinary experience in terms of the ordinary, and not the other way around.

It’s this emphasis on the “ordinary” that I am arguing against in the paper. My argument is basically this: we cannot use contemporary ratios of ordinary to extraordinary experience as a mirror of what that ratio might have been like in ancient times. Borrowing heavily from Jaynesian theory, I provide several lines of evidence for thinking that what we now consider extraordinary might have actually been quite ordinary in ancient times. If this is right, then we don’t need to think about extraordinary experience as being the exclusive domain of “religious specialists”, as Boyer is prone to think. Instead, we can think about extraordinary experiences such as hearing the voice of a god or demigod talk to you as being quite ordinary.

In the paper, I look at contemporary research on both the incidence of auditory hallucination in children and the factors that lead to the persistence of such hallucinations. What the research shows is that the best predictor of the persistence of voice hearing in children is whether they assign the voices to external sources. And prior to the recent invention of the concept of “hallucination”, ancient voice hearers (like Socrates) would have automatically interpreted their experience as a communication from an external agent, namely, a god or demigod. Since such attributions are the key predictors of persistence, we can now imagine a society where upwards of 25% of adults are actively experiencing auditory hallucinations and interpreting them as messages from gods or demigods. Would we still want to say that “extraordinary experience” is exceptional and the exclusive domain of religious specialists?

If this is at all historically accurate, then it looks like we can reverse the explanatory arrow of the SCM. Rather than extraordinary experiences being on the sidelines in determining the cultural success of religion, the familiar experience of auditory hallucination and the shared cultural narratives for interpreting such experiences would have played a much greater role in the spread of religion than the SCM allows. To respond to Boyer then, we can say that perhaps the reason why the “insights” of holy persons were widely accepted is because the ordinary population was already quite familiar with what-it-is-like to hear the voice of a god or demigod commanding you to do something.


Filed under Psychology, Theology

Counterfactuals as Commands

One of the reasons philosophers are prone to accept possible worlds into their metaphysical worldview is to find truthmakers for counterfactual claims. If I say “The cat is on the mat”, the truthmaker for the claim is the fact that the cat is on the mat. However, if I say “If I hadn’t caught that egg it would have hit the ground”, what is the truthmaker for this claim? Because the claim is about something that didn’t actually happen, we can’t just point to the truthmaker as we did with the cat being on the mat. So what grounds the truth of the counterfactual claim? Such questions have led philosophers to posit the existence of possible worlds to ground the truth of counterfactuals. The idea is that there is some possible world where I failed to catch the egg and it splattered. It is this possible world that grounds the truth of the counterfactual claim.

But before we accept the existence of possible worlds into our metaphysics, we should ask ourselves whether counterfactual claims even need truthmakers. After all, not every speech act needs to have a truthmaker. Take commands, for example. If I say to you “Pick up that pen”, what is the truthmaker for this claim? It’s not clear what the truthmaker could be, since in commanding you to do something, I am not describing how the world is, but rather, trying to get you to do something.

So here’s an idea I had last night: what if counterfactuals are a species of command? If counterfactual claims are really commands, then we wouldn’t need to find truthmakers for them, and we wouldn’t need to say possible worlds exist. So how could counterfactual claims be commands? Well, my idea is to think of them as commands to imagine or commands to conceive. When I say to someone “If I hadn’t stopped watering the plant it would still be alive”, what I am implicitly doing is giving instructions on how to imagine something. So the idea is that counterfactuals are implicit commands addressed to the imagination. When I start talking to someone about how things might be or how things might go, what I am doing is giving their brain instructions on how to build an imaginary construct similar to the one in my head. And in the same way that commanding someone to pick up a pen doesn’t need a truthmaker, neither does commanding or instructing someone to imagine something. So on this view counterfactual claims are not truth-apt.

My account of modality is not quite a Quinean skepticism about modal concepts. I think modal concepts are quite fine in philosophy, since it just seems self-evident that I am in fact capable of imagining how the past might have gone, how the future might go, or even imagining whole other realities. But I don’t think that our ability to think modally requires the inclusion of possible worlds in our metaphysics. I don’t think it’s quite right to say possible worlds exist. Processes of imagination exist in our heads. But the intentional objects of such processes don’t need to exist. The imagination doesn’t need truthmakers. The only truth here is that I am imagining some way the world could have been; the imaginary construct itself does not need truthmakers, for the same reason that commands don’t need truthmakers. And counterfactual claims are just implicit commands or instructions on how to guide your imagination.

So in everyday conversation if someone asks me to consider a counterfactual and I say in response “That’s true”, this need not commit me to any sort of possible worlds. It could just be a convenient shorthand for “Yes, it’s true that I am capable of imagining what you just instructed me to imagine”. But I do not think we need to invoke actual truthmakers to make sense of counterfactual language and thought. Let’s reserve truthmakers for things like cats on mats, not abstract imaginary constructs like possible worlds.

I’m quite open to the possibility that this idea of counterfactuals as commands is deeply confused because of some logical quirk about counterfactuals that I am not aware of. I’m also not sure if it’s an original idea or not. But I thought it was a neat idea, and it’s kind of inspired by some of the stuff I have been reading lately about language being first and foremost a tool, which has led me to think about all sorts of other cognitive activities as tools, including Reason. I think you could see counterfactual thinking as a kind of cognitive tool that enables humans to engage in activities that we would otherwise be incapable of.


Filed under Philosophy

Forty-hour Workweek in Academia?

Some good tips about productivity in academia from the Get a Life, PhD blog (h/t The Philosophy Smoker):

I tend to agree with the author and strive for a similar work/life balance in my own schedule. I almost never need to do anything school-related after I have been on campus all day. My evenings are usually spent with Katie: cooking, hanging out, relaxing, watching TV or movies, or goofing off on the internet. Part of it is just making sure I am really productive on campus. The other part is making sure I actually spend enough time on campus during the week so that I don’t have to bring my work home. But I’ve found that there is almost nothing I can’t accomplish so long as I break up a task into doable chunks that can be finished while I am on campus.


Filed under Random

The Struggle of Reason

In dealing with an overwhelming desire for cookies last night, I was struck by what I am calling the struggle of reason. Deep within my brain is some hardwired disposition to seek out sweets. This desire for sweets likely served an evolutionary purpose when food was scarce. But now I can simply walk to the grocery store and purchase premade cookies that I can just pop into the oven. Obviously, if I always indulged this desire for sweets, it would make me unhealthy in the long term, leading to obesity and diabetes and a host of other health issues. And since I have a strong desire for good health, I am in a dilemma. I could make myself happy in the short term by satisfying my desire for cookies. Or I could make myself happy 50 years from now when I am enjoying the fruits of good health. There are, then, two desires at work: the short-term desire for cookies, and the long-term desire for good health.

This can be understood as a competition. Using my powers of reasoning, I have concluded that my short-term desires do not know what’s best for me. So I use my reason to fight against my baser instincts. What’s interesting to me about this struggle of reason is where each desire came from. My desire for cookies obviously comes from my ancient evolutionary past. But where did my desire for health come from? In one sense my desire for cookies is also linked to a desire for health since it was once healthy to stock up on sweets in times of scarcity. But that cookie desiring system is incapable of understanding the complexities of a modern food system. And if I just always indulged my desire for cookies I would ultimately end up unhealthy. So what’s healthy for my cookie desiring system is really just healthy according to ancient standards of gene spreading. It once helped my ancestors to spread their genes to have a strong desire for sweets.

But what about my desire for health? It does not seem to be as closely tied into the basic circuitry for spreading genes. My reason operates at another level of objectivity that takes into account my consciously given values. For instance, I have consciously decided to marry my wife Katie. I desire to make myself healthy for as long as I can in order to be with her and provide for our future family. So if I was reasoning correctly from this desire, I would reason that I ought not to always eat cookies. So my consciously given desire trumps my evolutionarily given desire. This ability of conscious reason to trump baser desires is hugely important for understanding the modern human condition.

The struggle of reason is also interesting because it helps us better understand instrumental rationality. You are instrumentally rational if you make choices that help you satisfy your desires. Normally, instrumental rationality is associated with the reasoning systems of nonhuman animals, with perhaps the strongest desire simply being a desire to stay alive long enough to reproduce. But when we get to humans, instrumental rationality becomes more complex. Am I instrumentally rational to eat the cookies? In a way, yes, because there really are cookie desiring systems in my brain that would be satisfied if I ate the cookie. But eating the cookie would not satisfy my consciously given desire to stay healthy for the sake of my marriage. So there is a conflict of reason.

Some theorists have talked about this struggle in terms of there being two kinds of reasoning systems in the brain: System 1 and System 2. System 1 is the evolutionarily more basic reasoning system that gives me a nonconscious desire for cookies. System 2 is the reasoning system that allows me to inhibit my impulse to go to the grocery store and maintain rationality with respect to a higher system of norms, namely, norms and values that I have developed independently of any desire to spread my genes.

So next time you feel an intense desire to raid the kitchen in a late night munchie run, remember the struggle of reason. We are not predestined to give in to these baser desires. Although it might be difficult, we are capable of trumping these desires, barring any pathological breakdown in System 2 reasoning. It is my opinion that System 2 is ultimately the stronger of the two reasoning systems, which inverts the standard Humean story about reason being the slave of the passions. I believe in the power of the conscious mind to overcome the innate tendencies bestowed on us by evolution. Obviously there are limits to what exactly we can trump. But a healthy adult human with a working System 2 can, if they so choose, trump just about any evolutionarily given desire and act in accordance with whatever values they have worked out for themselves. Humans are not robots. Although we do come stocked with some innate programming, we are also programmed with the ability to re-program ourselves, to assign new values that provide the basis for instrumental rationality with respect to culturally generated values. In my own case, the value I have placed on making my marriage work allows me to overcome any desire for unhealthy living. Of course, I sometimes fail to live up to my own standards. But I know this failure is not inevitable. To end with a cliché: with enough willpower, just about anything is possible.


Filed under Consciousness

The Ideal State of Mind in Graduate School

Karen Kelsky’s article “Graduate School is a Means to a Job” was posted a few weeks ago, but I keep thinking about this one piece of advice in particular. She says that a graduate student should:

Cultivate a professional persona as a young scholar. That persona is separate from your previous identity as a graduate student and is, instead, confident, assertive, sophisticated, and outspoken.

Confident, assertive, sophisticated, and outspoken. This is the ideal persona to cultivate as a grad student. It’s interesting to me that this persona is supposed to be different from a “graduate student identity”. But I have found this to be largely true: many grad students are not very confident or outspoken, either in class or in their research. Part of being a “young scholar”, then, is developing ideas and opinions such that you have something to say about a wide variety of issues in your field. If you have nothing to say about any given subject, what will you be doing as an academic except spouting the ideas of the older generation of thinkers? As Kelsky has aptly noted, part of becoming an academic means being willing to step up to the table, make assertions, and be opinionated. Of course, it’s nice to back up these opinions with facts and arguments, but it’s challenging enough just to find something nontrivial yet interesting to say that hasn’t already been said before. And if you have well-developed ideas and opinions, then you can afford to be confident in your ability to engage with other academics. And the more you engage, the more sophisticated your ideas will become. This is all part of being a young scholar. I really like Kelsky’s article for how she shows that self-conception goes a long way towards laying the foundations of a career in academia. The article also has a bunch of other good practical advice well worth reading.


Filed under Random

Kindle Books I'm Reading

  • Stuff: Compulsive Hoarding and the Meaning of Things, by Gail Steketee and Randy Frost
  • Language: The Cultural Tool, by Daniel Everett
  • Neurophilosophy at Work, by Paul Churchland
  • Why We Get Fat, by Gary Taubes
  • A Cognitive Theory of Consciousness, by Bernard Baars
  • Bonk: The Curious Coupling of Science and Sex, by Mary Roach
  • Inference to the Best Explanation, by Peter Lipton
  • The Methods of Ethics, by Henry Sidgwick
  • The Fall of the Roman Empire, by Peter Heather
  • Friedrich Nietzsche, by Julian Young
  • The Information, by James Gleick
  • Genius: The Life and Science of Richard Feynman, by James Gleick
  • The Secret Life of Pronouns, by James Pennebaker
  • The Atheist's Guide to Reality, by Alex Rosenberg
  • The Greatest Show on Earth, by Richard Dawkins
  • Arguably, by Christopher Hitchens
  • The Better Angels of our Nature, by Steven Pinker
  • Mind of the Raven, by Bernd Heinrich

I plan on doing more short book reviews on this site, so as I finish them I should be writing up my impressions.


Filed under Random

Blindsight: A Case of Nonconscious Sensation

Blindsight is the curious neurological syndrome where someone is capable of “seeing” to a limited extent but claims to have no subjective experience of seeing. They could, for example, post cards into slits at the right orientation while verbally reporting they have no idea what the orientation is. Such patients have led cognitive scientists to talk about there being two different visual streams in the brain: the ventral “what” stream and the dorsal “where/how” stream. In blindsight patients, the hypothesis was that they had a functioning “where/how” stream but their “what” stream was damaged. The idea then was that only the “what” stream is conscious, and this explained why the blindsight patients lacked visual phenomenology.

But some philosophers have disputed the idea that the dorsal stream is nonconscious. These people have argued that just because someone can’t report or have access to phenomenology that doesn’t mean there is no phenomenology. So now there is a big debate about whether and to what extent the “where/how” stream is conscious.

In order to determine whether the “where/how” stream is conscious, we need to first define what we mean by consciousness. Most philosophers like to define it in terms of what-it-is-likeness, which is more or less synonymous with “awareness”. I like to define consciousness more precisely, because the term “awareness” is one of the vaguest and least helpful words in the vocabulary of philosophers. Does a moth have “awareness” of a light? Sure, but should we therefore think the moth is conscious? That doesn’t follow. I prefer to follow Julian Jaynes in defining consciousness narrowly as a kind of introspective power. To get an idea of what introspection means, close your eyes and imagine yourself sitting in a chair 50 years in the future. Such mental time travel is one of the functions of what I am calling consciousness. It allows you to reflect on what you have done or might do. Its content is varied: visual, auditory, gustatory, haptic, emotional, bodily, and linguistic. It allows you to have an inner monologue. As I define it, consciousness is a kind of reflective/introspective/metacognitive/higher-order monitoring system that takes as its content other brain representations.

But if we thought this kind of reflective consciousness were the generator of “phenomenology”, then we would have to conclude that a great many nonhuman animals have no phenomenology. This is a bad result, so we should not conflate reflective consciousness with what-it-is-likeness. Now, in the case of the blindsight patient, what should we say? On my account, the patient lacks reflective consciousness, but there is still something-it-is-like to have an operational “where/how” stream. Thus, we can distinguish nonconscious sensation from conscious sensation. The blindsight patient has nonconscious visual sensations in virtue of having an operational “where/how” stream that can accurately discriminate visual information so as to aid motor planning, but lacks conscious visual sensations. In virtue of being tied into higher-order monitoring systems that have access to linguistic content and global workspaces, consciousness is responsible for reportability. So only conscious sensations can be reported on, which explains why blindsight patients claim to have no visual experience. We can now clarify that they lack not nonconscious experience but conscious experience. And since conscious experience is by definition introspectable, we can also clarify how their phenomenology changes when blindsight patients lose the ability to introspect on their visual stream. They retain the ability to respond intelligently to the world, but lack the ability to be metacognitively aware of what’s going on in their visual experience.


Filed under Consciousness, Philosophy, Psychology

Strong and Weak Modularity

When evaluating the truth of the modularity thesis about the brain, it’s important to distinguish between two forms modularity can take: a strong form and a weak form. The strong form is the view that the brain is organized along the lines of a Swiss Army knife, with hundreds or thousands of modules like the “mate selection module”, “food detection module”, or “cheater detection module”, each running a dedicated task. The weak form is simply the thesis that you can turn off or take out some parts of the brain without shutting down the whole system. For example, weak modularity is the idea that if you removed the auditory cortex, your visual system would not completely crash, and vice versa.

The strong form is usually committed to things like “information encapsulation”. But there are two forms encapsulation might take: strong and weak. The strong form says that any given module runs completely independently of other modules, relying only on its own internal knowledge when carrying out its processing. This is supposed to be why the Müller-Lyer illusion can’t be turned off even if you know it’s an illusion. The weak form views encapsulation a little differently. On the weak view, each module is “talking” to many other modules, and when different modules talk to each other, new functions arise. The weak form thus sees modules built out of other modules, like a nested hierarchy. On this view, “encapsulation” has the wrong metaphorical connotations. “Encapsulation” seems to mean something like “isolation”. But on the weak interpretation, modules are not isolated at all; they are situated in a complex causal network of other modules. Moreover, the strong form usually says that each module runs only one process, e.g. the cheater detection module only detects cheating. On the weak view, however, it’s theoretically possible for a module to do more than one thing.

So when we look at task-based fMRI data using subtraction logic and are tempted to talk about a “theory of mind module” at one particular locus, we need to keep in mind both the weak and strong forms of modularity and the weak and strong forms of information encapsulation. On the weak view of modularity, the theory of mind module is modular only because you could lesion it without shutting down the rest of the brain. And on the weak view of encapsulation, it’s more likely that theory of mind capacity stems from the powers of a distributed network of modules, with the particular locus that is “subtracted” out also being capable of helping out in tasks besides theory of mind. The strong views of modularity and encapsulation would say that the locus that is “most active” is the place where theory of mind happens. Michael Anderson has recently done meta-analyses of fMRI data and concluded that cortical areas are often redeployed to perform new tasks, so the idea that any given brain locus does just one thing is mistaken. Since the brain constantly recruits old circuits for new tasks, the strong form of encapsulation is going to be wrong: each locus can participate in different tasks in slightly different ways.


Filed under Philosophy, Psychology

A Thought Experiment to Determine the Relative Worth of Different Species

Imagine that either all humans died tomorrow or some other nonhuman species died tomorrow. From God’s perspective, which is the more tragic loss? If the nonhuman species is worms or mosquitoes, it seems clear that the death of all humans would be the more tragic loss. This gets tricky when you run the thought experiment on higher mammals like dolphins and chimpanzees. But my own intuitions tell me that the loss of humans would still be the most tragic. Now, imagine that either all humans died or all nonhuman species died. Ignoring things like ecological stability, which is more tragic? Again, my intuitions tell me that it will always be more tragic if all humans die.

Why do I think this? Well, I think it has something to do with the loss of cumulative culture. If we stopped tending to our libraries or our computer databases, all that information would eventually crumble into the Earth. The thought of it is just so tragic to me, because it represents a massive loss of potential. Humans have only been seriously accumulating culture for the last 50,000 years. Philosophy has been around for several thousand years. Modern science has been around for several hundred years. Computers for less than a century. The internet for a few decades. All these cultural changes have resulted in massive changes in human understanding. But where will humans be in the year 3500 CE? Or 10,000 CE? Or 1,000,000 CE? If we manage to keep from killing ourselves in nuclear war or being killed by things like massive meteoroids or supervolcanoes or climate change, the possibilities for cumulative culture are mind-boggling to think about. And if Steven Pinker is right that modern civilization, as a form of social control, leads to reduced violence, then I have great hope for our species. Sure, it is still a fucked-up world by any measure. No theodicy has ever convinced me. But at the same time, I see so much potential for our species. The thought of suddenly losing a million years of potential cumulative culture saddens me. For this reason I greatly support the space program, as I would like to see our species ultimately leave our birth planet and spread throughout the stars, preserving our cultural heritage for as long as we can. That’s a beautiful vision to me.

I imagine that some people might think that the very attempt to “determine the relative worth of different species” is a fool’s errand, because it will only result in humans exercising their human bias. Of course a human would have the intuition that the loss of humanity would be more tragic. If a lion could ask the same question, it would have its own bias. But that’s the thing: only we can ask the question, or ask any question, really. But why does this bestow worth? What’s so special about cumulative culture? For one, its rarity. We are the only species with a cumulative culture that has been ratcheting up for thousands of years. But it also strikes me that when the tools of such a culture allow for the invention of things like philosophy and science, there is a kind of intrinsic worth in one part of the universe having the ability to reflect on the rest of the universe. We are the universe getting to know itself through itself. Maybe this whole thought experiment is hopeless because of these biases. But I like to think that if God existed She would agree with me.


Filed under Philosophy