An example of "extended cognition" for Ken Aizawa

4EA skeptic Ken Aizawa is always asking for clear examples of extended cognition that do not violate the coupling-constitution fallacy. In a recent post, he challenges the following premise from Wilson:

(e) External cognitive resources often play the same or similar functional roles in the detection and creation of meaning as do internal cognitive resources, or complement, compensate for, or enhance those roles.

Ken’s claim is that the evidence shows external resources play only a causal role rather than a constitutive one. In other words, external resources are merely causal inputs into the cognitive system and do not themselves play a functional or constitutive role. In Clark and Chalmers’s well-known thought experiment, in which a man with Alzheimer’s uses a notepad to aid his impaired navigation skills, Ken famously rebutted that the notepad is not literally a part of the cognitive system but merely a causal resource to lean on. If it really were a part of the cognitive system, Ken thinks it would be impossible to stop a “cognitive bloat” wherein the cognitive system gets extended into everything that cognition causally depends on. At best, Ken responds, we are entitled to say that the notepad is causally coupled to the cognitive system; we cannot conclude from such coupling that the notepad is literally a part of the cognitive system rather than just an input or “resource” to lean on.

We would need a better example or a prior theoretical reason to believe that cognitive systems do in fact extend into the environment, one that outweighs the theoretical reasons for believing the orthodox story about internal representationalism. I think there are such theoretical reasons, but I also have a concrete example of extended cognition that I want to try out. It’s not based on a thought experiment, but rather, anthropological research into ancient decision-making processes. I refer, of course, to sortilege or cleromancy.


I was turned onto this example by Julian Jaynes. He called sortilege an “exopsychic decision-making process”. This is, in my mind, the first stated argument for extended cognition in the literature (1976). Does anyone have an earlier reference? He describes sortilege as follows:

Sortilege or the casting of lots differs from omens in that it is active and designed to provoke the god’s answers to specific questions in novel situations. It consisted of throwing marked sticks, stones, bones, or beans upon the ground, or picking one out of a group held in a bowl, or tossing such markers in the lap of a tunic until one fell out. Sometimes it was to answer yes or no, at other times to choose one out of a group of men, plots, or alternatives. But this simplicity – even triviality to us – should not blind us from seeing the profound psychological problem involved, as well as appreciating its remarkable historical importance. We are so used to the huge variety of games of chance, of throwing dice, roulette wheels, etc., all of them vestiges of this ancient practice of divination by lots, that we find it difficult to really appreciate the significance of this practice historically. It is a help here to realize that there was no concept of chance whatever until very recent times. Therefore, the discovery (how odd to think of it as a discovery!) of deciding an issue by throwing sticks or beans on the ground was an extremely momentous one for the future of mankind. For, because there was no chance, the result had to be caused by the gods whose intentions were being divined. (1976, p. 240)

I’m fairly confident that this example of sortilege doesn’t violate the so-called “coupling-constitution” fallacy. I think it is reasonable to first define cognition as a regulatory or coordinating process that serves to select effective neural pathways out of internal variability. In other words, cognition is about making decisions and controlling the sensorimotor system to get things done in the world. I think this is a fairly theory-neutral definition of cognition that can accommodate both representational and dynamic systems approaches to behavioral control.

With that said, I think casting lots is a clear case of “off-loading” cognitive decision-making processes onto the environment. The cast lots are not just “causally coupled” to the ultimate sensorimotor decision; they constitute the decision-making process itself. The lots play a functional role similar to that of internal neural-neural control: a regulatory resource used in novel situations to deal with complex environmental variables, one that serves the functional, constitutive role of coordinating behavior and simplifying the task parameters. As Clark would say, you could imagine that a random “casting-lots” mechanism had evolved inside a brain and was utilized in the same way to regulate and coordinate behavior.
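The point that the lots fill the same functional slot an internal selection mechanism would fill can be sketched in a few lines of code. This is only an illustrative toy, not anything from the post: the names `cast_lots` and `decide` are my own, and the sketch assumes the functionalist framing that what matters is the role a selector plays, not where it sits.

```python
import random

def cast_lots(options, rng=random):
    """External 'selector': pick one option by casting a marked lot.

    Stands in for throwing marked sticks or beans; the agent treats
    the outcome as authoritative, not as one input among many."""
    return rng.choice(list(options))

def decide(options, selector):
    """The decision-making process just is whatever `selector` is.

    The same functional role can be filled by an internal mechanism
    or by an external one such as cast lots; `decide` is indifferent
    to which side of the skin the selector sits on."""
    return selector(options)

# The environment (cast lots) occupies the same slot an internal
# deliberative mechanism would occupy:
choice = decide(["attack", "retreat", "wait"], cast_lots)
print(choice in ["attack", "retreat", "wait"])  # → True
```

On this way of carving things up, the lots are not an input handed to a separate decision process; for the duration of the ritual, they are the selection step itself.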

The only way to avoid the conclusion that cognitive decision-making processes are “offloaded” into the environment during sortilege would be to disagree with the definition of cognition as behavior regulation. If you defined it differently, I suppose you could come up with a model of the mind wherein the cast lots serve as mere “input” into the functional system rather than genuinely playing a cognitive role.

But I think such an approach is phenomenologically flawed. If you were to get inside the minds of these ancient people, I think the lots would be experienced as a genuine behavioral authority that is external to the agent. That is, the lots would be “authorized” by the nervous system to serve a direct role in the coordination of behavior, similar to the authorization of verbal control in hypnosis. The experiential aspect would include an “absorption” into the external world such that the chance results are directly taken as significant for social control. I think it would be difficult for representational models to replicate this thrownness or absorption.



Filed under Philosophy, Psychology

13 responses to “An example of "extended cognition" for Ken Aizawa”

  1. Do you need to go this complicated? What about me tossing a coin? Or are you suggesting that the key here is that these people didn’t think of the process as entailing randomness?

    Interesting example, anyway 🙂

  2. Gary Williams

    Andrew,

    I’m smiling at the simplicity of your example in comparison with mine. I picked sortilege because it has historical documentation and is still widely practiced by religious types. And I also wanted to use that Jaynes quote.

    But I don’t think the key is the issue about randomness. I think the key is the off-loading of the decision making task onto the environmental resource. I like the sortilege example because it makes this quite clear. But I think there are simpler examples, such as letting the ground guide and regulate your steps in walking. But these types of motor control examples never seem to impress the skeptics because they don’t seem sufficiently “cognitive”.

    • I’m a simple man 🙂 I was just curious as to why you wanted to go all the way to this example, is all.

      This is a good example, though, it does a lot of what you want it to I think and it’s the kind of example that’s well worth drawing to the attention of the literature.

      Ken, further down you mention you’re ok with non-neural cognitive processes. What would count as such a thing while still maintaining a bound on cognition?

      • Ken Aizawa

        Well, I think cognitive processes are a species of manipulation of non-derived content bearing representations. There is, on the face of it, nothing in this theory that constrains these processes to occur in collections of neurons.

        But, to delimit the spatiotemporal location is a matter of delimiting the spatiotemporal location of the manipulating mechanisms and the manipulated representations. (There is a literature in the philosophy of science on mechanism. Machamer, Darden and Craver’s “Thinking about mechanisms” is a good starting place.) This is not an insurmountable feat in digital computers or calculators. So, I don’t see an in principle difficulty.

  3. It’s a good example, and it also highlights a mistake I think a lot of people make when dealing with the coupling/constitution fallacy. The external resource exploited by a cognitive system does not need to mirror or duplicate in function any process that is already embedded within the system; it doesn’t even need to be the kind of thing that brains can do on their own. You can’t just look at some external process and recognize it as a cognitive extension; you have to see the role it plays in the behavior of the overall system, no matter what kind of process it is.

    What worries me is the basic assumption, that as far as I can tell is undefended, that there is any sense to make of a difference between merely “causally leaning on” some resource, and that resource genuinely being a “part of” the overall system. Clark’s examples in NBC and StM all seem to trade on the intuition that there is no principled way of drawing such a line, and more importantly that any such attempt to draw the line will cross freely over the apparently intuitive ‘boundaries’ of mind and world. It seems to me that the onus is on the defenders of internalism to give a reason for suspecting that the resources necessary for explaining cognition are to be found only within the brain, especially when externalist models handle a variety of cases so well.

    One way to put the worry is that cognitive bloat IS a real problem, and it is one the mind solves by focusing only on what is relevant, and ignoring most of the rest. And there are many studies (see, for instance, Berti and Frassinetti, 2000) that show that our criteria of relevance change depending not only on our goals, but also on the tools and resources that are available in the environment, and how familiar we are with those tools. In other words, it is precisely by understanding the variety of connections between the mind and the world that we can solve the problem of bloat; if we limit ourselves to the resources of the naked brain, it is utterly mysterious how we handle bloat.

    • Gary Williams

      “What worries me is the basic assumption, that as far as I can tell is undefended, that there is any sense to make of a difference between merely ‘causally leaning on’ some resource, and that resource genuinely being a ‘part of’ the overall system. Clark’s examples in NBC and StM all seem to trade on the intuition that there is no principled way of drawing such a line, and more importantly that any such attempt to draw the line will cross freely over the apparently intuitive ‘boundaries’ of mind and world. It seems to me that the onus is on the defenders of internalism to give a reason for suspecting that the resources necessary for explaining cognition are to be found only within the brain, especially when externalist models handle a variety of cases so well.”

      I’m very sympathetic to this response. I do think it is difficult to draw clean lines between different components in the cognitive system. I also agree with you that the best way of understanding the problem of cognitive bloat is to look into how the mind solves the frame problem.

      Although I am skeptical about the validity of the coupling-constitution fallacy as a “test” of extended cognition, I wanted to come up with an example that passed it anyway because once it is established that the mind sometimes extends itself into the environment, then we have recourse to begin examining other real-world cases. I’m of the opinion that cases of “exopsychic” decision making are going to be the norm once we start learning how animals utilize affordances for the regulation of behavior in natural environments.

    • Ken Aizawa

      Hmm. I don’t know anyone who conflates the C-C arguments and the cognitive equivalence arguments. To my knowledge, everyone sees these as separate issues. Clark, (2009), p. 88, seems to think this happens, but it would be good to see some texts where this happens. Or maybe it only happens in discussion.

      I don’t see that it is the burden of the EC critic to explain the coupling-constitution distinction, since it is one that at least many advocates of EC invoke. Take for example Noë: “According to active externalism, the environment can drive and so partially constitute cognitive processes” (Noë, 2004, p. 221). It looks like he is using a causal notion of “driving” as evidence for a constitutive claim. So, it looks like he at least draws a distinction between causation and constitution and uses it as part of an argument for extended cognition. If that’s not what is going on in this passage, what is? But, note that Clark, (2009), also accepts the difference between HEC and HEMC as being under debate, which includes the causation-constitution distinction.

      But, I’ve never seen anyone commit to print the idea that there is no difference between being a cognitive process and triggering (and being triggered by) a cognitive process. I’ve not read everything on this, so maybe it is out there.

      Finally, I am happy to accept the hypothesis of extended explananda for human performance, or some such, but that seems to me different than the hypothesis of extended cognition. I take it that ambient oxygen is part of the explanation for human performance and ambient oxygen is in the environment.

  4. Ken Aizawa

    I think this is an excellent post with a lot going on in it. I think it also brings out a theme in the EC literature that Adams & Aizawa have yet to address. But, stuff is in the works …

    So, begin with the simple.

    A putative instance of EC does not commit the C-C fallacy. It is only a (bad) argument that commits a fallacy. So, what matters is the argument one provides for thinking that the instance is an instance of EC. I think Gary and A&A agree this far.

    And, I think that Gary provides an argument that does not rely (in the obvious way) on a coupling to constitution kind of inference. Instead, he approaches the case, it seems to me, by way of a “mark of the cognitive” approach. He tries to state what a cognitive process is, then indicate that the thing that meets this specification is the person plus tool.

    Things get a little bit more complicated, as I see things, because there is a shift from talk of cognition as “about making decisions and controlling the sensorimotor system to get things done in the world” and talk of a decision making process. It seems to me that these are not equivalent. So, a reply will have to address both of these claims (and probably more).

    Note as well that there are two possible types of replies to this account of cognition and this account of decision making processes. You can argue that they are false or that they are of marginal interest.

    So, right away there is going to be an explosive growth in the cases that will have to be addressed.

    To be continued …

  5. Ken Aizawa

    So, let me take up your definition of cognition: “I think it is reasonable to first define cognition as a regulatory or coordinating process that serves to select effective neural pathways out of internal variability.” I’m not sure what your target is here, since I don’t think the cognitive is defined by the neural. I’m thinking one can have non-neural cognitive processes. On that, at least, I seem to agree with Varela and Maturana.

    But, rather than try to develop a counterexample, let me try to speak to what I take to be the core of your approach, namely, that it offers a kind of broad functional characterization of cognition. This seems to be the idea that Justin Fisher pushes in his review of the Adams and Aizawa book. He doesn’t offer a definition as you do, but sketches a strategy of offering a functional characterization. We should treat “cognition” as a term like “flight”, where we understand flight to be something like any scheme for locomotion through the atmosphere.

    But, here is the line I took in reply to Fisher:
    “Yet, like some general category of heat, some general functional characterization of flight, such as any scheme for locomotion through the atmosphere, is of limited scientific interest. How much use would this definition be to someone designing a helicopter, a space rocket, or a fighter jet? How would this help the ornithologist studying bird flight or the entomologist studying insect flight? As the study of flight advances, deeper principles and mechanisms in specific contexts become more important, where the superficial concept of locomotion through the atmosphere is diminished. Memory is another case in point. Surely there is a generic, ordinary language conception of memory, but in the scientific study of memory, one from time to time finds something like the view that there is no such thing as memory. Instead, there is only long-term memory, short-term memory, procedural memory, declarative memory, or other specific types of memory, as there might be. The rough idea is that simple generic notions suitable for everyday use are of diminished importance with the advance of science. We have proposed that insofar as the hypothesis of extended cognition ties its fortunes to ordinary, common sense notions, its enduring relevance to science is diminished.”

    So, I’m thinking that the kind of definition you are offering is not the stuff of revolutions. The kind of intracranial cognitive psychology that has been going on for a few decades now need not worry about these sorts of conceptions of cognition. Old fashioned representationalist cognitive psychology will just go on as a species of one of these weak tea functional characterizations. And, it’s the integrity of this enterprise that has been my concern from the beginning, back in the 2001 “Bounds of Cognition”. And, given that Rupert so often turns to examples from extant cognitive psychology, it is probably what he is concerned with as well.

    So, that’s a “marginalization” kind of reply to your definition, rather than a refutation of your definition.

    • Gary Williams

      Ken, I am torn when it comes to the question of artificial intelligence. On the one hand, I don’t think there is anything “special” about biology that could not, in principle, be replicated in an artificial machine. This would make me a kind of functionalist, I guess. But I am also a kind of radical micro-functionalist. I think that if we ever hope to mimic human cognition, we are going to need to scale really far down, possibly down to the level of cells.

      However, I think I could widen my definition so that it doesn’t rely on neurons. We might say that cognition is a matter of regulating behavior, period, by any means. But then we need to define what we mean by behavior. Following the Varela school of thought, I think a minimum amount of autonomy is necessary for there to be “agentive behavior” in need of cognitive control, as opposed to “mechanical behavior”. On my account, the behavior of a wind-up watch would not count as behavior. So cognition would be defined in terms of regulating and coordinating an autonomous process such that the system maintains its structural integrity.

      In this way, we could say that the internal activity of a unicellular organism is a kind of cognition because it serves to maintain the system integrity within certain bounds. And the neural activity of higher organisms is merely an expansion on the principle of behavior regulation. The nervous system simply allows for more flexibility in the phase space of possible action. This is why I originally defined cognition in terms of a selection process that narrows the phase space of possible action and selects effective neural patterns. I think the principle still holds. Cognition is a matter of selecting possibilities out of a dynamic variance. It is inherently discriminatory rather than constructive.

      So yeah, I think you can have nonneural cognitive processing. But I imagine that any such autonomous, self-regulating system would work the same way a neural being does. It would need to be flexible in its phase space such that the internal variability of behavior options would be able to handle changing environmental demands. It would also need to be robust enough that too much change in the environment doesn’t destroy the habitual patterns that are effective. So a cognitive system needs to be both robust and flexible in order to handle the frame problem.

  6. Ken Aizawa

    Regarding “decision making process”, I have an upcoming post run in terms of “the process of problem solving”. It is based on Rob Wilson’s recent return to the example of Kanzi. So, I won’t pre-empt that.

  7. Pingback: Some thoughts on the coupling-constitution fallacy and the mark of the cognitive | Minds and Brains
