
Sunday, 3 February 2013

The evolutionary implausibility of outlandish alien cognition

Contemporary arguments for (and against) the extended mind hypothesis (e.g. Sprevak 2009) regularly invoke hypothetical aliens with outlandish forms of internal cognition. Sprevak asks us to imagine an alien that stores memories "as a series of ink-marks" (ibid: 9). This is meant to be functionally equivalent to the case where someone 'stores' their memories in an external diary. The point is that, in order to preserve multiple realisability and the Martian intuition, we are forced to accept that both the alien and the diary-user constitute cognitive systems, the only difference being that the latter extends beyond the biological brain.

Baby Martian?

In another example, this time intended as a reductio ad absurdum of functionalism and the extended mind, Sprevak proposes an alien with an innate, internal cognitive sub-system that calculates dates in the Mayan calendar (ibid: 21). Again, his point is that there seems to be no functional difference between this sub-system and the program that he claims to have installed on his office computer1. Ergo, his extended mind includes this implicit knowledge of the Mayan calendar.
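Sprevak doesn't describe the program itself, but for concreteness, here's a minimal sketch (mine, not his) of the kind of calculation such a sub-system would perform: converting a Gregorian date to the Mayan Long Count, assuming the standard GMT correlation between the two calendars (Julian Day Number 584283 = Long Count 0.0.0.0.0).

    # Sketch of a Mayan Long Count calculator - the sort of 'sub-system'
    # Sprevak imagines. Assumes the standard GMT correlation (JDN 584283).

    def julian_day_number(year, month, day):
        """Julian Day Number for a Gregorian calendar date."""
        a = (14 - month) // 12
        y = year + 4800 - a
        m = month + 12 * a - 3
        return (day + (153 * m + 2) // 5 + 365 * y
                + y // 4 - y // 100 + y // 400 - 32045)

    def long_count(year, month, day):
        """Long Count (baktun.katun.tun.uinal.kin) for a Gregorian date."""
        days = julian_day_number(year, month, day) - 584283
        baktun, days = divmod(days, 144000)  # 1 baktun = 144000 days
        katun, days = divmod(days, 7200)     # 1 katun = 7200 days
        tun, days = divmod(days, 360)        # 1 tun = 360 days
        uinal, kin = divmod(days, 20)        # 1 uinal = 20 kin (days)
        return f"{baktun}.{katun}.{tun}.{uinal}.{kin}"

    print(long_count(2012, 12, 21))  # 13.0.0.0.0 - the end of the 13th baktun

A couple of dozen lines, in other words - which only sharpens the question of why natural selection would ever bother hard-wiring such a thing.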

Ignoring for the moment any questions about the extended mind per se, we should question the plausibility of these kinds of aliens. In each case, but especially the second, it seems that our aliens would possess remarkably over-specialised brains. The ink-mark memory system seems cumbersome, and the Mayan calendar calculator is an extremely niche-interest device, one that would probably never see any use. In both cases it is difficult to imagine how or why such a cognitive architecture would have evolved.

This doesn't constitute a counter-argument: regardless of any evolutionary implausibility, Sprevak's aliens serve their rhetorical purpose. However, it's interesting to note that much of Clark's own use of the extended mind is intended to highlight the way in which human brains off-load these kinds of specialised skills onto the environment (see his 2003), meaning that we are precisely the kind of generalists that these aliens aren't. Perhaps it's important not to get too caught up with outlandish aliens when we consider the extended mind, and to return to the much more homely (and relevant!) examples for which it was originally intended.


1. I have a meeting with him in his office tomorrow, so I'll try and check if it's true...

References
  • Clark, A. 2003. Natural Born Cyborgs. Oxford: OUP.
  • Sprevak, M. 2009. "Extended cognition and functionalism." The Journal of Philosophy 106: 503-527. Page references are to the preprint available at http://dl.dropbox.com/u/578710/homepage/Sprevak---Extended%20Cognition.pdf

Tuesday, 6 November 2012

Functionalism reconsidered

I've long considered myself to be a functionalist about mental states such as belief and pain. Functionalism is the theory that mental states should be identified not by their physical instantiation but by their functional role, i.e. the role that they play within a given system. The classic example is pain, which is said to be defined by behaviours such as flinch responses, yelling out, and crying (and perhaps a particular kind of first-person experience). One of the main motivations for functionalism is the "Martian intuition" - the intuition that were a silicon-based Martian to exhibit pain-behaviour, we would want to say that it is in pain, despite it lacking a carbon-based nervous system like our own. A less exotic intuition is that an octopus or capuchin monkey can probably feel pain, despite the exact physical instantiation of this pain differing from our own.


Martian octopus, perhaps in pain? 
(with permission from Ninalyn @ http://studiodecoco.tumblr.com/)
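One way to picture multiple realisability is by analogy with software interfaces: what matters to the functionalist is that a system plays the right role, not what it is built from. A toy sketch in Python (the classes and the 'pain test' are entirely made up, purely for illustration):

    # Toy illustration of multiple realisability: 'pain' is picked out by
    # its functional role, not by the hardware that realises it.

    class HumanNervousSystem:
        def respond_to_damage(self):
            return ["flinch", "yell", "cry"]

    class MartianSiliconBrain:
        def respond_to_damage(self):
            return ["flinch", "yell", "cry"]  # same role, different stuff

    def plays_pain_role(system):
        """Functional test: attends only to inputs and outputs."""
        return {"flinch", "yell"} <= set(system.respond_to_damage())

    print(plays_pain_role(HumanNervousSystem()))   # True
    print(plays_pain_role(MartianSiliconBrain()))  # True

The awkward part, of course, is deciding exactly what belongs in that functional test - which is where the trouble starts.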

However, I'm now beginning to suspect that there might be more than a few problems with functionalism. For starters, functional states are often defined as being those that are "relevantly similar" to an imagined paradigm case - thus, a Martian who screamed and recoiled when we punched it might be said to be in pain, but one that laughed and clapped its hands (tentacles?) probably wouldn't. This is fine up to a point, especially in seemingly clear-cut cases like the above, but what should we say when we're faced with the inevitable borderline case?

Whether or not fish can feel pain seems to be a case like this. Research into fish pain behaviour is contentious - whilst fish exhibit apparent pain behaviour, they have only recently been shown to exhibit the more complex pain-avoidance behaviour that might be thought essential to pain. The problem is not just a lack of evidence, either; there's a more fundamental lack of clarity about how exactly we should define the functional role of pain, or indeed of any other mental state.

Having said that, the problem isn't limited to the functionalist account of mental states. Biological species appear to form vague natural kinds, a problem which has motivated the idea of homeostatic property cluster (HPC) kinds - categories of kinds that share some, but not all, of their properties. So maybe we could say that functional kinds, such as pain, are a category of HPC kinds. That still wouldn't necessarily give us a straight answer in genuine borderline cases, but at least we'd have good reason to think functional roles might sometimes pick out genuine kinds (albeit perhaps not natural kinds).

The problems don't stop there though. By arguing that it entails a radical form of cognitive extension, Mark Sprevak has pushed functionalism to its logical extreme. If he is correct then being a functionalist would commit you to apparently absurd conclusions,1 such as that the entire contents of the Dictionary of Philosophy sitting on my desk form part of my cognitive system, or that my capacity for mental arithmetic is bounded only by my access to electronic computing power. I think there might be a way for functionalism to avoid the full force of this argument, but it comes with its own problems and costs.

Essentially what the functionalist needs to do is to stop talking about cognition and mental states as though they were one kind of thing. They're not, and rather than lumping memory, personality, beliefs and so on into one unitary framework, we need to look at giving finer-grained functional descriptions in each case. This might even mean getting rid of some mental states, such as belief, or at least admitting that they're more complex than we first thought. This approach will still entail some degree of cognitive extension, but hopefully in a more subtle and intuitive way. So whilst it might not be true that the contents of the Dictionary are part of my 'cognitive system', they may nonetheless form part of a belief-like system, albeit one that functions differently to my regular belief system.

Would this still be functionalism? In a sense, yes, because it would maintain a degree of multiple realisability, only at a more fine-grained level. So a Martian with a silicon brain might have beliefs, but equally it might have something more akin to the belief-like system that is constituted by me-and-the-Dictionary. The problem with functionalism is that it tends to reify our folk intuitions about mental states, and we need to remember that these might not be entirely accurate. I suppose I'm beginning to lean towards a form of eliminativism, although I still think that there's room for an instrumentalist account of functional roles.


1. I say "apparently" because I'm not entirely convinced that one shouldn't just bite the bullet and accept these conclusions. That's probably a post for another day though.

Sunday, 3 June 2012

"Artifical" Intelligence


(by Joe)

Watching the new Ridley Scott film Prometheus last night, I realised that there's something about the term “artificial intelligence” that doesn't quite sit right with me. SPOILER: there's an android (or humaniform robot, to use Isaac Asimov's term) in the film, one that for all intents and purposes behaves and appears like a human. A somewhat odd human perhaps, one that feigns a degree of subservience to those around it, but a human nonetheless. I would certainly be happy to say that it was conscious, and in terms of intelligence it far exceeds almost every other character in the film. However, I'm not so sure that I'm comfortable calling it “artificial”.

This is Idris Elba. He's not an android.

Early on in the film, one of my companions whispered something like “oh, so he's an AI then”, in what I can't help feeling was a slightly dismissive tone of voice. Whilst this is perhaps technically correct, or at least an accurate use of the term, I don't think that I'd have chosen to use it. Maybe I just read too much science fiction, or spend too long thinking about multiple realisability, but to label a conscious system “artificial” in this way seems distinctly discriminatory to me.

Of course, if, like John Searle, you think that a conscious, thinking robot is necessarily impossible, then this won't bother you very much. I'm not going to argue for the possibility of Strong AI here, but suffice it to say that I am essentially a functionalist about consciousness, and thus firmly committed to the possibility of conscious awareness being instantiated in a non-biological system. Such a system, if we had built it, would be “artificial” in the sense that it would be a constructed artefact, but to label it as such risks distorting our understanding of what it actually is. Referring to an intelligent android as an AI distances it from ourselves, putting it in the same conceptual category as a mindless computer or microwave. We would be tempted to treat such a creature as no more than a tool, and there is certainly an air of dominance towards our creations that the term “AI” can only help reinforce.

In fact, the film itself addresses this issue. One otherwise very empathetic member of the ship's crew behaves in a distinctly abusive way towards the android, making constant remarks about how inhuman it is, and treating it as little more than a slave. This behaviour was reminiscent (deliberately, I think) of colonial attitudes towards indigenous populations, being patronising, cruel and dehumanising. I don't think that it would be unreasonable to say that this character was being “racist” towards the android, although we perhaps need a new word for this particular form of discrimination. “Instantialist” is somewhat clumsy, but it gets the point across. I believe that we will, in the relatively near future, develop computer “minds” that are functionally similar enough to our own to be thought of as conscious, and when this happens we will be faced with an ethical dilemma. Should we be allowed to treat these creations as mere creations, or should they be afforded just as much dignity and respect as any other intelligent life-form? We risk inventing a whole new category of discrimination, one that I believe the term AI, with all its connotations of subservience and inferiority, will only exacerbate.

(The film, by the way, is well worth seeing!) 

Wednesday, 30 May 2012

Consciousness is in the business of producing illusions.

(by Joe)

Gary Williams, whose blog Minds and Brains I enjoy very much (although I don't always agree with it), has just written a post on the possibility of partial epiphenomenalism. The idea seems to be that the "feeling of consciousness" could be an epiphenomenal 'illusion' without consciousness itself being epiphenomenal. For one thing, this would solve the problem raised by the Libet experiments (which I mentioned briefly here) by allowing the apparently epiphenomenal experience of volition to be preceded by a causally active conscious decision, just one that has yet to be experienced. There's some similarity here with Dennett's interpretation of Libet in Consciousness Explained (1991: 154-67), where he argues for something like the distribution of consciousness into different 'strands'.

I need to give it a bit more thought, but I'm quite tempted by the idea of divorcing the epiphenomenal experience of consciousness from the functional process of consciousness itself. I particularly liked Williams' suggestion that we might want to say that "consciousness is in the business of producing illusions". That is to say, part of what consciousness does is produce extremely convincing illusions of, for example, free will, moral agency, or selfhood.

Anyway, just some quick thoughts on a post I found interesting. Proper post coming up soon, so watch this space!


Dennett, D. 1991. Consciousness Explained. Boston: Little, Brown and Company.

Tuesday, 22 May 2012

Broadly Speaking: In Praise of (a particular) Functionalism

(by Jonny)

In “Philosophy in the Flesh” (1999), George Lakoff and Mark Johnson give a lucid introduction to the notion of the embodied mind and what they see as its major implications. The book is very readable, let down a little by its claims to paradigm-shattering originality and its tendency toward over-generalisation. One particular point on which I found the authors to be a little confused was their objection to 'functionalism'. Their basic point seems to be that functionalism is misled in believing that the mind can be studied in terms of its cognitive functions whilst ignoring the role that the body and brain play in those functions (75). For them functionalism is “essentially disembodied”, a view on which the mind “can be studied fully independently of any knowledge of the body and brain, simply by looking at functional relations among concepts represented symbolically” (78).



I think Lakoff and Johnson jump the gun here, too quick to dismiss a strong principle in their eagerness to throw off the shackles of traditional “Anglo-American” assumptions (75). On my view, responsible functionalism never ignores anything which might reasonably be thought of as contributing to the ultimate function of a mental state, and this must include the body and brain. Perhaps functionalism has a tendency to slip into the impractically abstract, ignoring the very stuff that must be studied in order to understand function - but this is not necessarily so. The authors quote Ned Block: “The key notions of functionalism...are representation and computation. Psychological states are seen as systematically representing the world via a language of thought, and psychological processes are seen as computations involving these representations” (257). Yet to be functionalists we don't have to accept a Fodorian language of thought as that which defines a mental state's function, and even if we do, this should not lead us to ignore the real-world inputs and outputs that depend on the brain and body.

I think perhaps the authors of Philosophy in the Flesh are conflating a narrow, abstract, empirically removed functionalism with a broad, scientifically informed version. Functionalism in the broader sense is simply the idea that what matters is what stuff does, and, as Dennett says, functionalism construed this way “is so ubiquitous in science that it is tantamount to a reigning presumption of all science” (2006: 17). As he goes on to say, “The Law of Gravity says that it doesn't matter what stuff a thing is made of - only its mass matters... It is science's job to find the maximally general, maximally non-committal - hence minimal - characterization of whatever power or capacity is under consideration” (17-18). When it comes to the mind, functionalism claims that it's not what the brain is made of as such, but what that stuff does, that matters. This does not ignore the stuff, it does not ignore the brain or body, but it does ask why the stuff matters. To quote Dennett one last time, “Neurochemistry matters because - and only because - we have discovered that the many different neuromodulators and other chemical messengers that diffuse through the brain have functional roles that make important differences” (19). In accepting the significance of the body in cognition - from the dependence of perception and conceptualisation on our particular sensori-motor apparatus, to the importance of the body's interaction with its environment for reasoning - we do not need to reject broad, empirically responsible functionalism.


Dennett, D. 2006. Sweet Dreams: Philosophical Obstacles to a Science of Consciousness. Cambridge, MA: MIT Press.

Lakoff, G. and Johnson, M. 1999. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. New York: Basic Books.