On the idea of massive modularity, or, coming around to computationalism

I feel weird saying this, but I am actually coming around to the idea of “modularity”, particularly the “massive” kind argued for by people like Peter Carruthers. Last week I started reading Carruthers’ highly ambitious 2006 book The Architecture of the Mind. As someone who has resisted representationalism, computationalism, and modularity for many years, I find myself agreeing with Carruthers more often than not, which is a novel experience for me, since such language usually strikes me as problematic and leaves me constantly thinking “No!”. Granted, I still have to perform a mental substitution for some of his terminological preferences in order to read his claims without finding them vacuous, but the fact that I can always arrive at a plausible interpretation of his claims speaks to the power of his overall vision and the depth of encyclopedic knowledge on display.

First, what does Carruthers mean by “modularity”? In general, modularity refers to the way a functional system can be broken down into dissociable components and subcomponents. For example, you can exchange the tires on a car without affecting the functionality of the engine, or you can replace a speaker in a Hi-Fi system without damaging the rest of the system. The car is thus modular in the sense that it is made out of exchangeable parts that can break down independently of the functionality of other parts of the system. Crucially, modules must be understood in terms of their functionality, not in respect to their anatomical or physiological structure (although knowing that structure is of course helpful for understanding the function, and vice versa). In the case of brain modules, we can’t simply point to one clump of neural tissue and say that’s a module; we have to examine the function of that tissue to determine where the modular components come apart, since they are defined along functional, not anatomical, lines. It is also crucially important to note that for Carruthers, “modular” doesn’t necessarily mean “innate” or “genetically determined”, since the functionality of any module can be changed by development, and development itself can lead to the learning of new functional capabilities (especially with the imitative abilities of humans). Moreover, an important part of a modular functional system is that it can be understood in terms of input/output, with particular kinds of computations done on the input in order to generate output. And as Carruthers defines it, “The input to a system [is] the set of items of information that can turn the system on.”

Normally, I am quite opposed to the idea of using a computational “input/output” framework to explain the mind because it ends up falling prey to the Myth of the Given, whereby the “input” is raw and meaningless, leading to passive forms of linear processing chains that miss the action-perception cycling that makes perception fundamentally meaningful all the way down at the input level. But Carruthers’ definition of input avoids these problematic passive-Cartesian assumptions and is in fact compatible with my own preferred mental metaphysics of “reactivity”. My basic idea is that the organic system is reactive, with the nervous system realizing a particular kind of reactivity. The organism reacts to the environment, reacting to its own reactions, with reactivity all the way down.

Accordingly, Carruthers’ definition of input is compatible with a metaphysics of reactivity in the following way. We can understand computations in terms of cascades of neural reactivity in response to a perturbation of the system from either an external or internal source, with external and internal understood, not epistemologically, but in terms of the boundary of the organism’s membranes. The input to a module is simply that set of information that causes the module to “turn on”, i.e., to start reacting in particular and functionally specific ways. The reaction to the input is the “computation” carried out by the module, and the end result of the reaction is the output, which can act as input to other modules, i.e., it can cause patterns of reactivity in other parts of the brain. Hence, the output of some modules can actually come back around and influence the reactivity of modules that are causally closer to the source of the perturbation, allowing for “top-down” effects. I think that this definition of input/output and computation is perfectly compatible with the “enactivist” tradition, which has traditionally been critical of the input/output paradigm on account of its missing the circular nature of action/perception cycles.
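To make this picture concrete, here is a toy sketch of my own (the `Module` class, its `triggers` set, and the `react` method are purely hypothetical illustrations, not anything drawn from Carruthers): a module “turns on” only when presented with the kind of information it is tuned to, its reaction is the “computation”, and its output can serve as input to other modules, including ones earlier in the causal chain.

```python
# Toy illustration (my own, not Carruthers' formalism) of modules as reactive
# units: input is whatever information turns the module on; output is new
# information that can turn other modules on, including "top-down" feedback.

class Module:
    def __init__(self, name, triggers, compute):
        self.name = name
        self.triggers = triggers  # kinds of information that turn this module on
        self.compute = compute    # the module's characteristic "reaction"

    def react(self, info):
        """Turn on only if the information is of a kind this module reacts to."""
        kind, value = info
        if kind in self.triggers:
            return self.compute(value)  # output, usable as input elsewhere
        return None  # the module stays "off" for information it ignores

# A hypothetical "edge detector" reacts to raw luminance (and to top-down
# expectations); a hypothetical "object recognizer" reacts to edges.
edge = Module("edge", {"luminance", "expectation"}, lambda v: ("edges", v * 2))
obj = Module("object", {"edges"}, lambda v: ("expectation", v + 1))

out1 = edge.react(("luminance", 3))  # perturbation from outside the membrane
out2 = obj.react(out1)               # one module's output is another's input
feedback = edge.react(out2)          # output loops back: a "top-down" effect
```

The point of the sketch is only that nothing here requires a raw, meaningless input or a linear processing chain: each module reacts selectively, and the cascade of reactions can loop back on itself.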

On my reading, Carruthers avoids these problems by defining the input as that which turns the system on, which can be cashed out in a biologically plausible way. Moreover, since Carruthers defines input as the kind of information contained in the stimulus which turns the module on, this is also compatible with a Gibsonian affordance ontology wherein it is the information about affordance-properties contained in the raw stimulus which actually affects the perceiver in such a way as to constitute perception (as opposed to mere sensation, which is noninformative). Hence, we could say that information about affordances in the ambient optic array turns on the modules that are evolutionarily designed to react to that information in adaptive ways. This avoids the Myth of the Given since that affordance information isn’t necessarily raw. And since the response to affordances is cashed out in an ontology of reactivity, we avoid the internalism and foundationalism of traditional computational approaches inspired by Locke.

So when applied to the brain, Carruthers’ thesis is that the brain (and hence the mind) is massively modular. How is this different from the classic modularity thesis put forward by people like Jerry Fodor? Carruthers radically differs from Fodor in the sense that Fodor only thought that shallow perceptual processes such as vision were modular. When it comes to “central” cognitive systems like reasoning or believing, Fodor thought that these processes were not modular, but general. Carruthers’ thesis is radical in the sense that he thinks that even the most abstract, general, multi-modal, and intellectual of human cognitive processes are modular, i.e., capable of being broken down into dissociable functional components. I read this thesis as compatible with a kind of Dennettian theory wherein there is no “general” place where it “all comes together”. There is simply a complex and messy “kludge” of functional components and subcomponents, which run their functions more or less independently of other processes (although as I mentioned above, the output of one particular module can be the input to another, so there is still communication and interaction between different modules rather than the complete encapsulation normally assumed by modular stereotypes). However, it is important to note that Carruthers, as he should, argues that there does seem to be an exception to the normal independence of modules in the functioning of narratological and reflective consciousness in human adults. In this case, it seems necessary to talk about a more “global” neural interactivity (probably realized over the default mode network). But this is compatible with the overall thesis of massive modularity, since there is still an awful lot of domain-specific reactivity in the brain, particularly for prereflective cognition. Even if a global consciousness function is not modular in the sense that mouse vision is modular, it doesn’t follow that there isn’t a massive amount of modularity in all animals, including humans.

I like the massive modularity thesis because it seems in accordance with the Jaynesian principle that what is to be found in higher-order cognitive processes must first be found in the lower-order cognitive processes, and the functionality of those higher-order cognitive processes doesn’t require a general theater where it “all comes together” to slowly evolve as a distinct neurological center. Rather, the higher-order processes come into being through exaptations and readaptations of previous modules, often buffered by mechanisms of neural plasticity. It is the multiple and widely distributed functionally reactive/modular networks of neurons that realize the higher-order processes, rather than some general-purpose CPU that does all the higher-order work in a fashion completely different from the lower-order networks that make up the vast majority of neural tissue in the brain. As Jaynes says, there is nothing in reflective consciousness that was not first found in behavior. And as Carruthers argues, rather than suppose that the human mind is becoming less modular and “more general” as we increase our cognitive powers across evolutionary time, we should instead see the human mind as becoming more modular as it evolves, corresponding with the increase in the functional specificity of modern living in a complex social-political-technological world. The number of modules and submodules we need to automatically cope with everything from driving a car, navigating websites, taking tests, playing sports, and constructing skyscrapers to programming computers, farming, and hunting is truly astronomical in comparison with the functional specificity and developmental “niches” of other species. So, instead of massive modularity indicating biological primitiveness, supermassive modularity indicates supreme functional development on both the biological and sociological scale.



Filed under Philosophy, Psychology

One response to “On the idea of massive modularity, or, coming around to computationalism”

  1. Vic Panzica

    I think what’s proven is that the modular approach shows the brain as an environmentally adaptive organ, so modules have a “virtual” nature, going in and out of existence as necessary. I think the higher in the brain we go, the more virtual they are in nature; the lower in the brain, the more fixed or permanent, as phantom limb syndrome proves our basic body awareness runs through a module in the lower part of the brain, which translates, along with higher cognitive modules, into Metzinger’s transparent self-module. Likewise the gap between physical vs. phenomenal concepts which Chalmers talks about (water is H2O).
