
Tuesday, 21 May 2013

(Immature) cognitive science and explanatory levels

When I was working on cognitive extension last year, I was particularly taken by the suggestion that cognitive science is not yet a "mature science" (Ross & Ladyman 2010). By this it was meant that criticising a theory for failing to meet some intuitive "mark of the cognitive" presupposes that we have a good idea of what such a mark might look like. In fact cognitive science is still mired in metaphorical and imprecise language, making it conceptually unclear what we are even meant to be studying.

These guys lack the mark of the cognitive.

Bechtel (2005) makes a similar point, although he focuses on the level at which cognitive scientific explanation is aimed. Typically we begin with a characterisation of a phenomenon at either the neural or personal level, whilst seeking an explanation at some intermediary level (say, computational). The problem is that we have yet to settle on a clearly defined level that everyone agrees upon. Bechtel contrasts this with biological science, which appears to have gone through a similar struggle during the 19th century.

This helps explain why there is currently so much debate over what kind of answers we should even seek to be giving in cognitive science. Fodor rejects connectionism as simply specifying a certain kind of implementation, and in response he is accused of abstracting away from what really matters. There's no easy way to solve this problem, although the mechanistic approach that Bechtel (and others) have advocated does seem promising. Ultimately we'll have to wait for cognitive science as a whole to settle (or splinter), but this approach does at least have the virtue of conforming to (apparent) scientific practice.

More on this next time, where I will be attempting to summarise the mechanistic approach to scientific explanation...

  • Bechtel, W. 2005. "Mental Mechanisms: What are the operations?" Proceedings of the 27th annual meeting of the Cognitive Science Society. 208-13.
  • Ross, D. & Ladyman, J. 2010. "The Alleged Coupling-Constitution Fallacy and the Mature Sciences." In Menary (ed.), The Extended Mind. 155-65.

Sunday, 19 May 2013

Two New Approaches to Cognitive Extension

(I wrote this way back in the summer, then for some reason decided not to publish it. My views have moved on somewhat since then, but hopefully some of this is still worth reading - Joe.)

The most recent issue of Philosophical Psychology (25:4) features a pair of articles on cognitive extension, each exploring a different approach to the theory. Both articles attempt to introduce a principled way of limiting cognitive extension, a problem that has been at the heart of the debate since it began in 1998. I wrote my undergraduate dissertation on the extended mind (and the narrative self), and whilst I'm more sceptical now than I was when I began, I still don't think there's any principled way of limiting cognition entirely to the physical brain. The most famous opponents of extended cognition, Fred Adams & Ken Aizawa, and Robert Rupert, try to avoid this conclusion by introducing various necessary limitations on "the bounds of cognition".

Shannon Spaulding agrees that the bounds of cognition should be limited, but argues that the strategy of "offering necessary conditions on cognition [that] extended processes do not satisfy" is misguided (2012: 469). She mentions Adams & Aizawa and Rupert as proponents of this strategy, and focuses on the former's attempt to identify a necessary "mark of the cognitive" (Adams & Aizawa 2008: 10) that would legitimately restrict cognition to the brain. She finds this attempt to be problematic primarily because it is inherently question begging: any necessary condition that opponents of extended cognition can come up with will be based on precisely the kinds of current cognitive-scientific practice that proponents of extended cognition are opposed to (Spaulding 2012: 473).

Instead she proposes that critics of cognitive extension should challenge the theory on its own terms. This means demonstrating that there is, in practice, insufficient parity between intra-cranial and trans-cranial processes for them to ever form an extended cognitive system, even at the coarse-grained level that proponents of extended cognition tend to focus on (ibid: 480-1). At the fine-grained level, she focuses on functional organisation and integration, which has the advantage of being familiar territory to many who support cognitive extension. Here she points to several obvious differences in the way intra- and trans-cranial processes function. She identifies the coarse-grained level with "folk psychological functional roles" (ibid: 483), where she again points to several obvious differences that might count against cognitive extension.

Spaulding's rebuttal of necessary-condition-based arguments against cognitive extension is simple and compelling. All of these arguments base their conditions in current cognitive science, and one of the core points that extended cognition seeks to make is that cognitive science must become a wider, more inclusive discipline than it is now. Ross & Ladyman (2010) make a similar point: cognitive science is not a mature discipline, and what kinds of conditions will eventually come to define it is precisely what is at stake in the debate over cognitive extension. For the most part I also agree with Spaulding's approach to assessing extended cognition. Because she accepts many of the initial premises of extended cognition, including the "prima facie" possibility that some trans-cranial process satisfies her conditions (2012: 481), her approach allows for a more balanced debate, as well as the possibility that some cognitive processes might be extended whilst others aren't.

Where I disagree is in the details of the examples that she gives, and perhaps in particular with the example that she chooses to focus on throughout: that of Otto and Inga, originally introduced by Clark & Chalmers (1998). I'm starting to think that this example, what we might perhaps call the ur-example, has outlived its usefulness. Much of the debate over extended cognition has, almost incidentally, focused exclusively on it, and as a defender of extended cognition I think we might be better off coming up with some new examples. Where Otto and Inga has failed, something else might well succeed. In particular I think that social extension, involving as it does the interaction between two (or more) uncontroversially cognitive systems, might be a far more productive source of examples than simple 'material' extension. Spaulding focuses on this particular (admittedly paradigmatic) example of cognitive extension, but hopes that her argument achieves similar results with others (2012: 488, en3). Whether or not this is the case, I certainly agree with her when she states that "we must proceed on a case-by-case basis" (ibid: 487).

Rather than arguing directly for or against cognitive extension, Tom Roberts focuses on setting a "principled outer limit" to cognitive extension, based on the tracking of a mental state's causal history (2012: 491). He sets this out as contrasting with previous arguments for cognitive extension, which have by and large been "ahistorical", focusing on a state's effects rather than its causes (ibid: 492). He rejects Clark & Chalmers' original suggestion that external cognitive states must be subject to prior conscious endorsement (a kind of historical constraint) for much the same reason that they themselves raise: it risks disqualifying uncontroversial cognitive states such as subliminally acquired memories (ibid: 495).

Instead Roberts pursues a theory of cognitive ownership, arguing that a subject must take responsibility for an "external representational resource" if it is to become part of their extended cognitive system (ibid: 496). Responsibility in this sense requires that (for example) a belief is acquired in a certain meaningful way, and that an "overall consistency and coherency" is maintained between one's beliefs (ibid: 496-9). This, Roberts hopes, will exclude more radical cases of extension without dismissing extension outright. He concludes by admitting that such a criterion might generate an area of vagueness, but suggests that this is not necessarily such a bad thing, and that we will nonetheless find clear cases of extension (or not).

I'm sympathetic to Roberts' argument, and in particular the attempt to give a principled boundary to cognitive extension without dismissing it entirely. However I've never been entirely convinced by the historical accounts of mental representation that he draws upon, and it's also not clear whether this kind of argument would apply to cognitive extension in general, or only specifically to the extension of beliefs. Admittedly, much of the extended mind literature has focused on the extension of beliefs, but in principle it might be possible for other cognitive functions, such as perception or problem solving, to be extended as well.

I'm also wary of relying on any concept of belief ownership, implying a central and distinct individual to do the owning. This is perhaps a more esoteric concern, but at the very least I think it's worth considering what exactly it is that does the owning when you're considering extended cognitive systems that might well involve the extension of whatever 'self-hood' is.

No pictures in this one, sorry.

  • Adams, F. and Aizawa, K. 2008. The Bounds of Cognition. Oxford: Blackwell.
  • Clark, A. and Chalmers, D. 1998. "The Extended Mind." Analysis 58: 7-19. Reprinted in Menary (ed.), 2010: 27-42.
  • Menary, R. (ed.) 2010. The Extended Mind. Cambridge, MA: MIT Press.
  • Roberts, T. 2012. "Taking responsibility for cognitive extension." Philosophical Psychology 25(4): 491-501.
  • Ross, D. and Ladyman, J. 2010. "The Alleged Coupling-Constitution Fallacy and the Mature Sciences." In Menary (ed.), 2010: 155-166.
  • Spaulding, S. 2012. "Overextended cognition." Philosophical Psychology 25(4): 469-90.

Sunday, 3 February 2013

The evolutionary implausibility of outlandish alien cognition

Contemporary arguments for (and against) the extended mind hypothesis (e.g. Sprevak 2009) regularly invoke hypothetical aliens with outlandish forms of internal cognition. Sprevak asks us to imagine an alien that stores memories "as a series of ink-marks" (ibid: 9). This is meant to be functionally equivalent to the case where someone 'stores' their memories in an external diary. The point is that, in order to preserve multiple realisability and the Martian intuition, we are forced to accept that both the alien and the diary-user constitute cognitive systems, with the only difference being that the latter extends beyond the biological brain.

Baby Martian?

In another example, this time intended as a reductio ad absurdum of functionalism and the extended mind, Sprevak proposes an alien with an innate, internal cognitive sub-system that calculates the exact date of the Mayan calendar (ibid: 21). Again, his point is that there seems to be no functional difference between this sub-system and the one that he claims to have installed on his office computer1. Ergo, his extended mind includes this implicit knowledge of the Mayan calendar.

Ignoring for the moment any questions about the extended mind per se, we should question the plausibility of these kinds of aliens. In each case, but especially the second, it seems that our aliens would possess remarkably over-specialised brains. The ink-mark memory system seems cumbersome, and the Mayan calendar calculator is an extremely niche-interest device, one that would probably never see any use. In both cases it is difficult to imagine how or why such a cognitive architecture would have evolved.

This doesn't constitute a counter-argument, as regardless of any evolutionary implausibility Sprevak's aliens serve their rhetorical purpose. However it's interesting to note that much of Clark's own use of the extended mind is intended to highlight the way in which human brains off-load these kinds of specialised skills on to the environment (see his 2003), meaning that we are precisely the kind of generalists that these aliens aren't. Perhaps it's important not to get too caught up with outlandish aliens when we consider the extended mind, and return to the much more homely (and relevant!) examples which it was originally intended for.


1. I have a meeting with him in his office tomorrow, so I'll try and check if this is true...

References
  • Clark, A. 2003. Natural Born Cyborgs. Oxford: OUP.
  • Sprevak, M. 2009. "Extended cognition and functionalism." The Journal of Philosophy 106: 503-527. Available at (and page references are to) http://dl.dropbox.com/u/578710/homepage/Sprevak---Extended%20Cognition.pdf

Friday, 21 December 2012

Gilbert Ryle's Concept of Mind

I'd call this a book review, but I haven't finished the book yet. I am enjoying it though, so I thought I'd write a few words about some of the more relevant themes.

Just chilling, no doubt reading some Wittgenstein

As I mentioned last time, it was Gilbert Ryle who coined the term "ghost in the machine" to refer to the disembodied mind that cognitive science seems intuitively drawn towards. The Concept of Mind is to a large extent aimed at dispelling this intuition, but along the way it also touches upon a number of other fascinating topics. Below is a list of ideas that Ryle either introduces, expands upon, or pre-empts:
  • "Knowing How and Knowing That": This is the title of a whole chapter, wherein he draws a conceptual distinction between the two kinds of knowing. In brief, the first is the skilful execution of an action, the second the reliable recollection of a fact. The "intellectualist legend", according to Ryle, makes the former subordinate to the latter, in that all activities are reduced to the knowledge of certain rules (32). That this reduction is false is fundamental to his broader point - there is no isolated realm of the mental, and all cognitive activity must be expressed through action (or at least the potential for action).
  • Embodied cognition and the extended mind: In the same chapter, he devotes a few pages to the common notion that thinking is done "in the head" (36-40). This notion, he argues, is no more than a linguistic artefact, stemming from the way we experience sights and sounds. Unlike tactile sensations, sights and sounds occur at some distance from our body, and so when we imagine or remember them, it makes sense to highlight this distinction by saying that they occur 'in the head'. By extension thought, which Ryle conceives of as internalised speech,1 is also said to occur 'in the head'. However this idiomatic phrase is just metaphorical, and there is no reason that thinking should (or could) occur exclusively in the head.
  • "The Will": Another chapter, this time de-constructing our understanding of volition and action. Suffice to say, Ryle thinks we've got ourselves into a terrible mess, in particular in supposing that to do something voluntarily requires some additional para-causal spark. Rather, to describe an action as voluntary is simply to say something about the manner in which, and circumstances under which, it is performed. Free will, under this reading, is something to do with the kind of causal mechanism involved, rather than anything 'spooky' or non-physical.2 Personally I've never found this kind of account particularly convincing, but it is nonetheless influential to this day.
  • Higher-order thought as a theory of consciousness: Although he never explicitly puts it this way, there is a passage where Ryle describes how some "traditional accounts" claim that what is essential for consciousness is the "contemplation or inspection" of the thought process that one is conscious of (131). This is very similar to contemporary 'higher-order' theories of consciousness (see Carruthers 2011). Ryle doesn't exactly approve, dismissing such theories as "misdescribing" what is involved in "taking heed" of one's actions or thoughts.
So there you have it: Gilbert Ryle, largely forgotten but by no means irrelevant. As you may have noticed, a lot of his ideas influenced Daniel Dennett, which isn't surprising, seeing as Dennett studied under Ryle at Oxford.
1. This, perhaps, is one source of Dennett's fable about the origins of consciousness (1991).
2. Again, this is reminiscent of Dennett (2003).
 
References
  • Carruthers, P. 2011. "Higher-order theories of consciousness." Stanford Encyclopedia of Philosophy. Retrieved from http://plato.stanford.edu/archives/fall2011/entries/consciousness-higher [21.12.2012]
  • Dennett, D. 1991. Consciousness Explained. Little, Brown & Company.   
  • Dennett, D. 2003. Freedom Evolves. Little, Brown & Company.
  • Ryle, G. 1949. The Concept of Mind. Hutchinson. 

Tuesday, 6 November 2012

Functionalism reconsidered

I've long considered myself to be a functionalist about mental states such as belief and pain. Functionalism is the theory that mental states should be identified not by their physical instantiation but by their functional role, i.e. the role that they play within a given system. The classic example is pain, which is said to be defined by behaviours such as flinch responses, yelling out, and crying (and perhaps a particular kind of first-person experience). One of the main motivations for functionalism is the "Martian intuition" - the intuition that were a silicon-based Martian to exhibit pain-behaviour, we would want to say that it is in pain, despite it lacking a carbon-based nervous system like our own. A less exotic intuition is that an octopus or capuchin monkey can probably feel pain, despite the exact physical instantiation of this pain differing from our own.


Martian octopus, perhaps in pain? 
(with permission from Ninalyn @ http://studiodecoco.tumblr.com/)

However I'm now beginning to suspect that there might be more than a few problems with functionalism. For starters, functional states are often defined as being those that are "relevantly similar" to an imagined paradigm case - thus, a Martian who screamed and recoiled when we punched it might be said to be in pain, but one that laughed and clapped its hands (tentacles?) probably wouldn't. This is fine up to a point, especially in seemingly clear-cut cases like the above, but what should we say when we're faced with the inevitable borderline case?

Whether or not fish can feel pain seems to be a case like this. Research into fish pain behaviour is contentious - whilst fish exhibit apparent pain behaviour, they have only recently been shown to exhibit the more complex pain avoidance behaviour that might be thought essential to pain. The problem is not just a lack of evidence either; there's a more fundamental lack of clarity about how exactly we should define the functional role of pain, or indeed any other mental state.

Having said that, the problem isn't limited to the functionalist account of mental states. Biological species appear to form vague natural kinds, a problem which has motivated the idea of homeostatic property cluster kinds, categories of kinds that share some, but not all, of their properties. So maybe we could say that functional kinds, such as pain, are a category of HPC kinds. That still wouldn't necessarily give us a straight answer in genuine borderline cases, but at least we'd have good reason to think functional roles might sometimes pick out genuine kinds (albeit perhaps not natural kinds).

The problems don't stop there though. By arguing that it entails a radical form of cognitive extension, Mark Sprevak has pushed functionalism to its logical extreme. If he is correct then being a functionalist would commit you to apparently absurd conclusions,1 such as that the entire contents of the Dictionary of Philosophy sitting on my desk form part of my cognitive system, or that my capacity for mental arithmetic is bounded only by my access to electronic computing power. I think there might be a way for functionalism to avoid the full force of this argument, but it comes with its own problems and costs.

Essentially what the functionalist needs to do is to stop talking about cognition and mental states as though they were one kind of thing. They're not, and rather than lumping memory, personality, beliefs and so on into one unitary framework, we need to look at giving finer-grained functional descriptions in each case. This might even mean getting rid of some mental states, such as belief, or at least admitting that they're more complex than we first thought. This approach will still entail some degree of cognitive extension, but hopefully in a more subtle and intuitive way. So whilst it might not be true that the contents of the Dictionary are part of my 'cognitive system', they may nonetheless form part of a belief-like system, albeit one that functions differently to my regular belief system.

Would this still be functionalism? In a sense yes, because it would maintain a degree of multiple realisability, only at a more fine-grained level. So a Martian with a silicon brain might have beliefs, but equally they might have something more akin to the belief-like system that is constituted by me-and-the-Dictionary. The problem with functionalism is that it tends to reify our folk intuitions about mental states, and we need to remember that these might not be entirely accurate. I suppose I'm beginning to lean towards a form of eliminativism, although I still think that there's room for an instrumentalist account of functional roles. 


1. I say "apparently" because I'm not entirely convinced that one shouldn't just bite the bullet and accept these conclusions. That's probably a post for another day though.

Saturday, 23 June 2012

Embodied Ethics

(by Joe, with credit to Marc Morgan at Trinity College Dublin for inspiring some of these thoughts. Marc is a contributor at socialjusticefirst.)

I used to think that the majority of actions were morally neutral, and that only those things that caused harm or suffering could be classified as 'bad'. In and of itself I wouldn't have said that lying was wrong, or sleeping good. Only when coupled with contingencies such as the lie being malicious, or the sleep necessary to rejuvenate the mind and body, could these things be considered in any way moral. My practical ethics are still largely consequentialist, but I've been reconsidering how I classify things within that framework.


A body.

When it comes to practical, applied ethics (certainly the most important kind of ethics), we need to look at everything in context. Whether an action is good or bad, whether it causes harm, will depend on so many contingent factors that it is extremely difficult to make accurate ethical judgments in advance. The best we can hope for is to establish guiding heuristics that will help us to make moral decisions in the future. With that in mind, let's return to the classic example of lying.

As I mentioned above, I used to say that lying was only immoral if it caused harm. That's still basically what I think, only now I'd be tempted to expand harm to include more subtle effects like the degradation of the liar's moral character, and the long-term instability of a relationship built on deception. So whilst in the abstract lying might be morally neutral, in practice it could almost always be wrong. Of course there are going to be exceptions, such as if you're sheltering a refugee from a murderous band of thugs, but my moral compass is beginning to swing distinctly towards the "lying is usually wrong" side of things.

We could call this kind of approach "embodied ethics", in that it emphasises the "in the world" nature of moral judgments. Another sense in which ethics should be considered embodied is that it is very much a product of our evolved and biological nature. To a large extent, things are essentially right or wrong to the degree that they facilitate a way of life that is guided by our evolution. Just to be clear, I'm not saying that everything we've evolved to do is inherently right, but only that evolution has guided the way in which we make ethical considerations, as well as defining the things that matter to us. So ethical discourse must be underwritten by an understanding of our biological, embodied nature.

Finally, ethics is embodied because the mind and the self are embodied. As I've written about elsewhere, the potential for the extension of the mind and perhaps even the self has serious ethical implications. More generally, an understanding of morality demands an understanding of the mechanics behind the human mind, and an understanding of how it interacts with the world. Otherwise our ethics will be too abstract to be meaningful - this is why the ethical debates that philosophers sometimes have can seem so odd and out of touch. I'm still working on the full details, but from now on I'm going to try and make sure that my ethical theorising is firmly embodied, in all three of the ways that I've outlined here.

Tuesday, 19 June 2012

The Invisible Self

(by Joe)

"What am I? Tied in every way to places, sufferings, ancestors, friends, loves, events, languages, memories, to all kinds of things that obviously are not me. Everything that attaches me to the world, all the links that constitute me, all the forces that compose me don't form an identity, a thing displayable on cue, but a singular, shared, living existence, from which emerges - at certain times and places - that being which says "I." Our feeling of inconsistency is simply the consequence of this foolish belief of the permanence of the self and of the little care we give to what makes us what we are."

My copy of the book looks like this.

There is a muddy area where my philosophical research and my political beliefs meet, and the above quote, from The Coming Insurrection (Invisible Committee, 2009: 31-2), sums it up nicely. The Coming Insurrection was written in 2007 by an anonymous collective (calling themselves 'The Invisible Committee') based in France, and it is clearly strongly influenced by the philosophy of that country, most notably the situationist movement of the 1960s, but also continental philosophy more broadly. It is pompous, vague and quite rightly criticised by many in the left-libertarian circles that I inhabit - Django over at Libcom described it as "a huge amount of hyperbole and literary flourish around some wafer-thin central propositions". Nonetheless, the approach towards the self expressed in the above extract appeals to me.

Put very crudely, I think that the self is an illusion or an abstraction, a "narrative center of gravity" that helps guide our lives and our interactions with others (Dennett, 1992). The mechanisms behind this formation of the self have evolved for a reason, and for pragmatic reasons we shouldn't strive to eliminate it entirely, but to focus on it too much is unhealthy and unhelpful. Such a focus has, since the enlightenment, led to a heightened sense of individualism throughout the western world, one which I think is at the heart of our capitalist, consumerist and ultimately selfish culture. We can overcome this individualism by studying what the self truly is, and perhaps eventually realising that it doesn't truly exist. 

There is an obvious link with Buddhist philosophy here, one which I am currently trying to learn more about. There is also a somewhat less obvious link with embodied cognition, and in particular the extended mind hypothesis (Clark & Chalmers, 1998). If the self is an illusion constructed by our mind, and that mind is embedded in, or even extended into, its environment, then the self can be thought of as a product of that environment. This could have quite serious consequences, not only for metaphysics and the philosophy of mind, but also for ethics and political philosophy.

Which brings us back to The Coming Insurrection. In the passage I quoted, they describe the sense of "inconsistency" that we feel when we realise that whilst the self is composed of our interactions with things in the world, those things "obviously are not me". The self is invisible, and however hard we try to look for it we can never find it. David Hume expressed a similar feeling when he wrote that "I never can catch myself at any time without a perception, and never can observe any thing but the perception" (A Treatise of Human Nature: Book 1, Part 4, Section 6). We are what we do, and what we do is interact with the world. The focus on the individual over the last few hundred years has clouded that fact, and created an entity, the solid, 'real' self, that does not in fact exist. In coming to understand that who we are is so heavily dependent upon who others are, I hope we might eventually be able to learn to behave more compassionately and co-operatively with other people, as well as with our non-human environment. Satish Kumar embodies this hope in the phrase "You are, therefore I am" (Kumar, 2002), a play on Descartes' "I think, therefore I am", itself a perfect slogan for enlightenment individuality.

There is also an element of the absurd that is recognised, I think, by both Hume and the Invisible Committee. We are confronted, on the one hand, with an unshakable conviction in the existence of the self, and on the other with convincing evidence that no such thing exists. Similar absurdity can be found in our struggles with free will, moral realism and even scepticism about the external world. In each case a pragmatic route must be found, one that allows us to go on, but at the same time acknowledges the truths that we have learned about the world. In the case of the self, I think that this means accepting that we are a lot closer to the world around us than our privileged, first person view-point makes it seem, and that in order to survive in such a world we must understand and respect our place in it.

There's a lot more I'd like to say about a lot of things here, but I'll save it for future posts. Otherwise we might get complaints about the lack of monkeys!

Here you go.


References
  • Clark, A. & Chalmers, D. 1998. "The Extended Mind." Analysis 58: 7-19.
  • Invisible Committee, The. 2009. The Coming Insurrection. Los Angeles, CA: Semiotext(e).
  • Kumar, S. 2002. You Are Therefore I Am: A Declaration of Dependence. Totnes, UK: Green Books.