A Quick and Dirty Argument for Behaviorism

1. All the evidence we have as scientific psychologists is publicly observable behavioral evidence.

2. The safest epistemic strategy is to limit, as much as possible, going “beyond” the evidence, which is an inevitably risky gambit.

3. “Subjectivity”, “mental states”, “cognition”, “representations”, “feelings”, “consciousness”, “awareness”, “experience”, etc. are not publicly observable; i.e., if you open up someone else’s skull you will not see cognitions or mental states, you will see a pulsating hunk of flesh.

4. If we value epistemic safety above all, we should never leap beyond behavioral evidence to talk about unobservable mental states, unless such talk is self-consciously understood to be an abbreviated paraphrase of a long conjunction of behavior reports. Therefore,

5. The safest epistemological stance in psychology is behaviorism.

But wait! Don’t scientists in other fields go “beyond” raw data by talking about “unobservable” theoretical entities like atoms and black holes? If it’s epistemically warranted for physicists to appeal to “unobservable” theoretical entities like atoms in order to explain the experimental data, then it should also be okay for psychologists to appeal to “unobservable” theoretical entities like “episodic memory” or “engrams” in order to explain the behavioral data.

Two things can be said in defense of behaviorism.

First, it’s an open question in the philosophy of science whether physicists are in fact epistemically warranted in going “beyond” the data. According to the physicist and philosopher of science Pierre Duhem, theories are only supposed to be tidy and convenient summaries, or compressed descriptions, of experimental findings, not statements literally describing an unobserved metaphysical reality. On this view, we do not use theoretical entities and equations to explain the data but rather use equations and theories to help us cope with the large and unwieldy collection of facts gathered by experimenters. Duhem argues that if humans didn’t have such finite memories, scientists would not find it necessary to tidily represent messy experimental findings in terms of neat equations and law-like statements.

Consider this: if a theory about domain X is true, then all possible experimental findings relevant to domain X deductively follow from the theory and thus have the same truth-value as a long conjunction of descriptive reports of real scientific experiments. But once you have all the experimental findings in your head, what need is there for the theory? The need is purely practical, a result of human finitude and our desire for convenience, simplicity, and genuine understanding.

Second, even if we grant physical scientists an epistemic license to go “beyond” the data and talk about theoretical entities, this practice only works well when there are widespread conventions in place for operationalizing theoretical terms (i.e. translating theory into real experimental operations) as well as standards for conducting and verifying results of experimental procedures (measurement verification procedures). It’s not clear to me that cognitive science has reached any widespread consensus on any of these issues.

Compared to “mature” sciences like thermometry with widespread industry standards, there seems to be little if any widespread consensus in the “mind sciences” about theoretical terminology let alone operational criteria for testing theoretical claims or even nailing down what exactly it is we are supposed to be studying in the first place.

Thus, the true problem with psychology is not that it talks about “unobservable” entities or employs theoretical jargon but rather that there is no widespread consensus on how to define our concepts and operationalize our methods for getting access to the unobservable phenomena.

The problem facing psychology is two-fold: (1) a lack of consensus on how to pick out the phenomena due to a lack of theoretical consensus in understanding the ostensive definition of a psychological concept and, (2) a lack of consensus on how to interpret the evidence once we have collected it.

Case in point: recent developments in the “science” of consciousness. First, there is little to no consensus on where to even look for consciousness to begin the process of measuring it and studying it as a natural phenomenon. Can any theorist answer this simple question: should we look for consciousness in insects?

Some theorists think that if we look for it in insects, we will find it, because on their definition “consciousness” is not that fancy a phenomenon (e.g. enactivists and neo-panpsychists would both predict that a consciousness-meter would register a small amount of consciousness in insects). According to other theorists, if you look for consciousness in insects you will not find it, because on their preferred definition “consciousness” is fancy and thus probably found only in “higher” animals like mammals. Who is right? No appeal to empirical facts will help in this debate, because the problem is fundamentally about how to interpret the evidence, given that all we can go on as psychologists is behavior, which is of course neutral between rival theories of consciousness.

Some might object that I have picked an easy target, and that the science of consciousness is a bad example of how psychology in general is done because it is the newest and most immature of the psychological sciences. But in my humble opinion, the science of consciousness is on no worse footing than most other subfields and niches of psychology, which are continually making progress “towards” various grand theories. However, insofar as another subfield of psychology is on firmer ground than consciousness studies, it will be because it has imitated the physical sciences by operationalizing its theoretical concepts in terms that can be directly measured by physical instruments. That is, a subfield of psychology is on firmer epistemic ground insofar as it sticks closer to physical, behavioral evidence, which is all any psychologist has to go on in the end. This is close enough to behaviorism for me.

I have much more to say on this topic, but I promised to be quick and dirty. Remaining questions include: how should we define the observable vs. unobservable distinction?



Filed under Philosophy, Philosophy of science, Psychology

5 responses to “A Quick and Dirty Argument for Behaviorism”

  1. Doesn’t it seem like we should probably be trying to explain more than mere behavior, since we seem to also do these other things, like cogitate about things and then talk about them? Consciousness probably isn’t measurable and may not even exist, but shouldn’t we be trying to explain why we think we have it? Just as we know that we perceive light, color and movement because of complex activity in our eyes and brains, shouldn’t we be trying to understand what causes our perception of consciousness?

    • “but shouldn’t we be trying to explain why we think we have it?”

      Don’t get me wrong, I am not suggesting that psychologists give up their day jobs and stop trying to explain things.

      However, how do you know it “seems” that people cogitate about things? Your own familiarity with these concepts comes through your own introspective perspective. But suppose, for the sake of argument, that you were in an atomic scanner that projected a complete read-out of the atomic states of your entire brain and body. You would watch the read-out scroll across a giant screen as it records the physical correlates of your “inner” phenomenology. What kind of meaning would that atomic read-out have for you?

      My point is this: if that atomic read-out would be psychologically opaque to you while you were “experiencing” what-it’s-like to be the atoms from the “inside”, as it were, it will be infinitely more opaque to other scientists looking at you from the “outside”. But just as staring at your own atomic read-out would not disabuse you of the reality of your own point of view, staring at someone else’s atomic read-out does not entail that they don’t have a point of view; the atomic read-out is just what you have epistemic access to.

      • It sounds like what you’re hoping for is the discovery of objective reality, but that’s unlikely to happen. That isn’t really what science is for. Science merely allows us to explain the reality we have access to well enough to manipulate our worlds.

  2. I think you’ve hit a number of crucial problems on the head. As I’m sure you know, the problem with programmes that opt for the relative epistemic safety of observations goes back to the famous debates between Boltzmann and Mach – a debate that Boltzmann won (albeit posthumously) by showing the way positing atoms provides physicists with a ‘guide to discovery.’ So if the question is, ‘Why do I have all these specific dispositions to behave?’ then descriptions of these dispositions (what Schwitzgebel would call (approvingly) a ‘superficial account’) don’t really seem to get us anywhere. What we generally want is the ‘deep account,’ some idea of the otherwise hidden mechanisms that explain what’s going on.
    The problem with 20th century behaviourism was simply that the science of the day was nowhere near up to the task of reverse-engineering the brain – it was doomed to be ‘superficial.’ The kind of new behaviourism you’re mulling here, however, doesn’t suffer this problem at all – and in this sense it isn’t so much ‘behaviouristic’ as it is anti-intentional.
    As such, it’s stranded with the elephantine problem all anti-intentionalisms are stranded with: the question of how meaning and normativity can be naturalized. Look at Rosenberg, for instance. In a sense he’s simply doomed to rehearse all the old incompatibilities between the natural and the intentional and to point out the theoretical primacy of the former. Lacking any way to naturally account for the intentional, the door remains wide open for the intentionalist. ‘What about formal semantics!’ they will shout. ‘Are you telling me that mathematics isn’t a science? Pluh-ease.’
    And if it’s a *real phenomenon* we’re talking about, it’s pretty damn hard to argue against attempts to understand it scientifically. Epistemological risk is every bit as important to science as epistemological security. What else can we do but throw ourselves at the problem and trust that the requisite operationalizations will arise in due course?
    Their accusation will be that you are simply mistaking the nascency of cognitive science for its errancy… or to put it in the terms I hear all the time, ‘throwing the baby out with the bathwater.’

  3. So those theorists not partial to the REC-ist vantage might succumb to the following (assuming there is nothing beyond ‘mere talk’ that counts as an operationalisation of consciousness or: does talking about e count as an operationalisation of e?):

    L- Routines vs. Non-L Routines:

    1. For agents x and y there is a non-language (non-representational) routine A such that either agent has demonstrably more competency at it than the other. There is some phenomenon e that is operand in A.

    2. For agents x and y there is a language (representational) routine L such that either agent has demonstrably more competency at it than the other. L purports to be about A; thus L purports to be about e.

    (By ‘language routine’ I mean generally communication-inscriptive/enunciative: writing, scrawling, speech, talk, discussion, debate, lecturing, etc. By ‘operand’ I mean e is included in the execution of A, or A is an interaction between agent and e, or there are constituent actions in A that are operant upon e, etc.)

    If x is more competent at L than y should this mean that x has more agency with respect to e?

    At the Cider Saloon:
    I enter a cider bar and sit next to a bearded gentleman resembling Dan Dennett in every way. A few pints later our conversation turns to the hot topic of table tennis. The man’s eloquence, perspicuity and seamless comprehension leave my humble words for dead on the matter. Am I to assume that Dan’s doppelganger is better than I at the game? Is his eloquence a fair index of his otherwise situated and enactive sportsmanship?

    Alternatively, is there a non-L routine that Dopple-Dan can execute involving the phenomenon of consciousness, say, such that he is clearly more competent at it, or perhaps one at which I have no competency at all? What would this non-L routine consist in? To what extent is 2 really a guarantee of 1? And, if one’s general competency with respect to any e is exhausted by language routines, then what are we to say honestly regarding one’s agency with respect to e? Clearly x can have more armchair knowledge purporting to be about e than y, but both can be equally competent or incompetent in non-L routines where e is genuinely operand. Perhaps I was hoping, in some opaque and under-qualified way, to state that “I know as much as Dan Dennett does about consciousness” without that statement being completely false, since, eloquent speech activities aside, there is nothing beyond ‘mere talk’ that either of us does consisting of ‘operations upon consciousness’ that one is clearly more competent at.
    “The true meaning of a term is to be found by what a man does with it, not by what he says about it” –P. W. Bridgman.
