Tuesday 21 May 2013

(Immature) cognitive science and explanatory levels

When I was working on cognitive extension last year, I was particularly taken by the suggestion that cognitive science is not yet a "mature science" (Ross & Ladyman 2010). The point is that criticising a theory for failing to meet some intuitive "mark of the cognitive" presupposes that we already have a good idea of what such a mark might look like. In fact, cognitive science is still mired in metaphorical and imprecise language, so it remains conceptually unclear what we are even meant to be studying.

[Image: These guys lack the mark of the cognitive.]

Bechtel (2005) makes a similar point, although he focuses on the level at which cognitive-scientific explanation is aimed. Typically we begin with a characterisation of a phenomenon at either the neural or the personal level, whilst seeking an explanation at some intermediary level (say, the computational). The problem is that we have yet to settle on a clearly defined intermediary level that everyone agrees upon. Bechtel contrasts this with biological science, which appears to have gone through a similar struggle during the 19th century.

This helps explain why there is currently so much debate over what kinds of answers we should even be seeking in cognitive science. Fodor rejects connectionism as merely specifying a certain kind of implementation, and in response is accused of abstracting away from what really matters. There's no easy way to resolve this dispute, although the mechanistic approach that Bechtel (and others) have advocated does seem promising. Ultimately we'll have to wait for cognitive science as a whole to settle (or splinter), but this approach does at least have the virtue of conforming to (apparent) scientific practice.

More on this next time, when I'll attempt to summarise the mechanistic approach to scientific explanation...

  • Bechtel, W. 2005. "Mental Mechanisms: What are the operations?" Proceedings of the 27th annual meeting of the Cognitive Science Society. 208-13.
  • Ross, D. & Ladyman, J. 2010. "The Alleged Coupling-Constitution Fallacy and the Mature Sciences." In Menary (ed.), The Extended Mind. 155-65.

Sunday 19 May 2013

Two New Approaches to Cognitive Extension

(I wrote this way back in the summer, then for some reason decided not to publish it. My views have moved on somewhat since then, but hopefully some of this is still worth reading - Joe.)

The most recent issue of Philosophical Psychology (25:4) features a pair of articles on cognitive extension, each exploring a different approach to the theory. Both attempt to introduce a principled way of limiting cognitive extension, a problem that has been at the heart of the debate since it began in 1998. I wrote my undergraduate dissertation on the extended mind (and the narrative self), and whilst I'm more sceptical now than I was when I began, I still don't think there's any principled way of limiting cognition entirely to the physical brain. The most famous opponents of extended cognition, Fred Adams & Ken Aizawa, and Robert Rupert, try to avoid this conclusion by introducing various necessary limitations on "the bounds of cognition".

Shannon Spaulding agrees that the bounds of cognition should be limited, but argues that the strategy of "offering necessary conditions on cognition [that] extended processes do not satisfy" is misguided (2012: 469). She mentions Adams & Aizawa and Rupert as proponents of this strategy, and focuses on the former's attempt to identify a necessary "mark of the cognitive" (Adams & Aizawa 2008: 10) that would legitimately restrict cognition to the brain. She finds this attempt problematic primarily because it is inherently question-begging: any necessary condition that opponents of extended cognition come up with will be based on precisely the kinds of current cognitive-scientific practice that proponents of extended cognition are opposed to (Spaulding 2012: 473).

Instead she proposes that critics of cognitive extension should challenge the theory on its own terms. This means demonstrating that there is, in practice, insufficient parity between intra-cranial and trans-cranial processes for them ever to form an extended cognitive system, even at the coarse-grained level that proponents of extended cognition tend to focus on (ibid: 480-1). At the fine-grained level, she examines functional organisation and integration, which has the advantage of being familiar territory to many who support cognitive extension; here she points to several obvious differences in the way intra- and trans-cranial processes function. She identifies the coarse-grained level with "folk psychological functional roles" (ibid: 483), where she again finds obvious differences that might count against cognitive extension.

Spaulding's rebuttal of necessary-condition-based arguments against cognitive extension is simple and compelling. All of these arguments base their conditions in current cognitive science, and one of the core points that extended cognition seeks to make is that cognitive science must become a wider, more inclusive discipline than it is now. Ross & Ladyman (2010) make a similar point: cognitive science is not a mature discipline, and what kinds of conditions will eventually come to define it is precisely what is at stake in the debate over cognitive extension. For the most part I also agree with Spaulding's approach to assessing extended cognition. By accepting many of the initial premises of extended cognition, including the "prima facie" possibility that some trans-cranial process satisfies her conditions (2012: 481), she allows for a more balanced debate, as well as for the possibility that some cognitive processes might be extended whilst others aren't.

Where I disagree is in the details of the examples that she gives, and perhaps in particular with the example that she chooses to focus on throughout: that of Otto and Inga, originally introduced by Clark & Chalmers (1998). I'm starting to think that this example, what we might call the ur-example, has outlived its usefulness. Much of the debate over extended cognition has, almost incidentally, focused exclusively on it, and as a defender of extended cognition I think we might be better off coming up with some new examples. Where Otto and Inga have failed, something else might well succeed. In particular I think that social extension, involving as it does the interaction between two (or more) uncontroversially cognitive systems, might be a far more productive source of examples than simple 'material' extension. Spaulding focuses on this particular (admittedly paradigmatic) example of cognitive extension, but hopes that her argument will achieve similar results with others (2012: 488, en3). Whether or not this is the case, I certainly agree with her when she states that "we must proceed on a case-by-case basis" (ibid: 487).

Rather than arguing directly for or against cognitive extension, Tom Roberts focuses on setting a "principled outer limit" to cognitive extension, based on tracking a mental state's causal history (2012: 491). He presents this as a contrast with previous arguments for cognitive extension, which have by and large been "ahistorical", focusing on a state's effects rather than its causes (ibid: 492). He rejects Clark & Chalmers' original suggestion that external cognitive states must be subject to prior conscious endorsement (a kind of historical constraint) for much the same reason that they themselves raise: it risks disqualifying uncontroversial cognitive states such as subliminally acquired memories (ibid: 495).

Instead Roberts pursues a theory of cognitive ownership, arguing that a subject must take responsibility for an "external representational resource" if it is to become part of their extended cognitive system (ibid: 496). Responsibility in this sense requires that (for example) a belief is acquired in a certain meaningful way, and that an "overall consistency and coherency" is maintained between one's beliefs (ibid: 496-9). This, Roberts hopes, will exclude more radical cases of extension without dismissing extension outright. He concludes by admitting that such a criterion might generate an area of vagueness, but suggests that this is not necessarily a bad thing, and that we will nonetheless find clear cases of extension (or not).

I'm sympathetic to Roberts' argument, and in particular to the attempt to give a principled boundary to cognitive extension without dismissing it entirely. However, I've never been entirely convinced by the historical accounts of mental representation that he draws upon, and it's also not clear whether this kind of argument would apply to cognitive extension in general, or only to the extension of beliefs. Admittedly, much of the extended mind literature has focused on the extension of beliefs, but in principle it might be possible for other cognitive functions, such as perception or problem solving, to be extended as well.

I'm also wary of relying on any concept of belief ownership, which implies a central and distinct individual to do the owning. This is perhaps a more esoteric concern, but at the very least I think it's worth asking what exactly does the owning in an extended cognitive system that might well involve the extension of whatever 'self-hood' is.

No pictures in this one, sorry.

  • Adams, F. and Aizawa, K. 2008. The Bounds of Cognition. Oxford: Blackwell.
  • Clark, A. and Chalmers, D. 1998. "The Extended Mind." Analysis 58: 7-19. Reprinted in Menary (ed.), 2010: 27-42.
  • Menary, R. (ed.) 2010. The Extended Mind. Cambridge, MA: MIT Press.
  • Roberts, T. 2012. "Taking responsibility for cognitive extension." Philosophical Psychology 25(4): 491-501.
  • Ross, D. and Ladyman, J. 2010. "The Alleged Coupling-Constitution Fallacy and the Mature Sciences." In Menary (ed.) 2010: 155-166.
  • Spaulding, S. 2012. "Overextended cognition." Philosophical Psychology 25(4): 469-90.

Sunday 5 May 2013

Is natural language a high-level programming language?

If the mind is a computer, then there must be something in the brain that corresponds to the low-level strings of data (the machine code) that computing mechanisms manipulate. This machine code provides the basic structure for everything that a computer is able to do. In electronic computers it is implemented in the flow of electricity across circuits. In the brain, it might be implemented similarly, in the flow of electricity across neurons.

[Image: Can you read this?]
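For concreteness, here's a toy example of my own (the original image stood in for this): the raw bytes an x86-64 processor would execute for a function that simply returns the number 1. Without a reference manual it's close to unreadable, which is exactly the point.

    # A hand-assembled scrap of x86-64 machine code (my own toy example,
    # not from the post): these six bytes encode a function that returns 1.
    machine_code = bytes([
        0xB8, 0x01, 0x00, 0x00, 0x00,  # mov eax, 1 -- load the constant 1 into eax
        0xC3,                          # ret        -- return; the result sits in eax
    ])

    print(machine_code.hex(" "))  # -> b8 01 00 00 00 c3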

What exactly it means (or does) will depend on the computing mechanism in question, but even granted this information it is incredibly difficult (and time-consuming) for people to program in machine code. Because of this, programmers typically make use of a hierarchy of programming languages. Each language is an abstraction of those beneath it, eventually bottoming out in the machine code itself. A programmer will write code in whatever language s/he finds most accessible, and once s/he is done it will be translated into machine code by a compiler.
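To make that hierarchy concrete, here is a minimal sketch of my own (the post itself contains no code), using Python's standard dis module to peek one layer down: the readable source below is translated into lower-level bytecode instructions that the interpreter, not the programmer, consumes.

    import dis

    def double(x):
        return x * 2

    # Show the lower-level instructions the readable source compiles to.
    dis.dis(double)
    # On CPython this prints something like:
    #   LOAD_FAST     x
    #   LOAD_CONST    2
    #   BINARY_OP     * (BINARY_MULTIPLY on older versions)
    #   RETURN_VALUE

The exact instructions vary between Python versions, but the pattern is the same: each layer is an abstraction of the one beneath it.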
[Image: This is basically the plot of Neal Stephenson's Snow Crash...]

Similarly, it seems likely that the basic code used by a computational brain would be incredibly difficult for us to decipher. Despite what the name of Fodor's fabled language of thought might suggest, there doesn't seem to be any reason why (at the computational level of description) the brain should operate on natural language. Nonetheless, there does seem to be an intimate relationship between the brain and natural language. We obviously produce language whenever we speak, and (perhaps less obviously) language can exert a powerful influence on how we think and behave. In a quite literal sense, it could be seen as re-programming the mind. So if the mind is a computer, then it might make sense to think of natural language as (among other things) a high-level programming language.

Note added 10.05.13: Apparently Fodor beat me to it. Piccinini writes that "Fodor likened human public languages to high level programming languages, and the human LOT to a computer's machine language" (2004: 387). I haven't found the original reference yet, but I think it's in The Language of Thought somewhere.

  • Piccinini, G. 2004. "Functionalism, Computationalism, and Mental Contents." Canadian Journal of Philosophy 34(4): 375-410.