Saturday, 24 November 2012

A spectre is haunting cognitive science...

...the spectre of Cartesian materialism. If there's been one consistent theme running through my studies over the last two and a half months, it's this. But what is Cartesian materialism, and why is it haunting cognitive science?

A few obligatory words about the man himself before we go any further. René Descartes was a 17th century philosopher and mathematician, probably most famous for the now-infamous words "cogito, ergo sum" - "I think, therefore I am". He also invented the Cartesian coordinate system, which most of you will have been taught, even if you don't know it (it's the classic x-axis, y-axis thing). In modern analytic philosophy he enjoys a dubious status as both the inspiration and the target of many key arguments. It is a great irony that a tradition which owes so much to Descartes also routinely indoctrinates undergraduates against him.

He did most of his philosophising from the comfort of his bed.

Not that this is necessarily a bad thing. Many of Descartes' arguments are terrible, but the intuitions they appeal to remain strong, and his influence (the "spectre" of my title) can be felt throughout cognitive science and analytic philosophy of mind. Foremost amongst these intuitions is the idea that 'mind' and 'body' must refer to two distinctly separate kinds of things. Descartes thought that this meant they must be composed of two separate substances, one physical and extended, the other insubstantial and non-extended. His cogito argument trades on this distinction - my mind, being a thinking thing, seems to exist independently of (and prior to) any physical world.

Empirical philosophy of mind (and thus cognitive science) tends to reject this dualism. Most philosophers of cognitive science (including myself) are physicalists, committed to there being only one kind of substance in the world. Thus the mind must be made out of the same kind of stuff as the body. Despite this commitment, there remains a tendency to conceive of the mind as something special, somehow autonomous from its physical instantiation. This attitude is sometimes called 'property dualism', 'non-reductive physicalism', or, by its opponents, 'Cartesian materialism'.

Classical cognitive science, which dates back to around the middle of the last century, was (and still is) enamoured with the idea that the mind is essentially a computer program. As such it made sense to think of the mind as something distinct from the brain, a kind of "software" running on biological "hardware". This intuition is still strong today, particularly amongst those wanting to give an account of mental representation ("pictures" in the mind), or of the apparently inferential structure of cognition. Traditional functionalist accounts of cognition also tend towards a form of Cartesian materialism, as the multiple realisability requirement means that strict type identity between the mind and the brain is not possible. Whilst in many cases the mind (classically speaking) simply is the brain, it's conceivable that it might take some other form, and so the two are not strictly identical. 

However, recent (and some not-so-recent) work in embodied cognition argues that the physical body might be more important than classical cognitive science assumes. Examples include John Searle's suggestion that some quality of the neurobiological brain might be essential for consciousness (1980: 78), various enactive approaches to perception (championed by Alva Noë), and the dynamical systems approach that argues that cognition is a continuous process involving the brain, body, and environment. Whilst these approaches differ in many respects, they all agree that the mind cannot be conceived of as distinct or autonomous from the body.

Whilst Daniel Dennett takes an essentially computational and functionalist approach to cognition, he has also warned against the risks of Cartesian materialism - in fact, he invented the term. In Consciousness Explained (1991), he argues that many of our confusions about both consciousness and the self stem from Descartes, and that it is essential that we stop thinking about the mind as a single entity located at some discrete location within the brain. His mentor Gilbert Ryle made a similar point in The Concept of Mind, writing about the "dogma of the ghost in the machine" (1949: 17), the disembodied Cartesian mind that somehow controls the body.

A final Cartesian oddity that I have come across recently is found in the phenomenological work of Jean-Paul Sartre. Despite explicitly rejecting the Cartesian concept of the self, he emphasises a distinction between the "being-in-itself" and the "being-for-itself". The former is something like a physical body, and is all the "being" that a chair or a rock can possess, whilst the latter is what makes us special, the uniquely first-person point of view that we seem to enjoy. In drawing this dichotomy he has been accused of resurrecting a kind of Cartesian dualism, in contrast with another famous phenomenologist, Merleau-Ponty, who saw the self as inherently bound up in its relations to the world.

So there you have it, a whistle-stop tour of Cartesian materialism. I'm aware that I've skimmed over a lot of ideas very quickly here, but hopefully it's enough to illustrate the way in which Descartes is still very much exerting an influence, for better or for worse, on contemporary philosophy of mind.

  • Boden, M. 1990. The Philosophy of Artificial Intelligence. Oxford: OUP.
  • Dennett, D. 1991. Consciousness Explained. Little, Brown and Company. 
  • Searle, J. 1980. “Minds, Brains, and Programs.” Reprinted in Boden 1990: 67-88.

Sunday, 26 August 2012

Humans > Computers

I'm going to use this post to discuss a couple of related topics. First up, AI/robotics and a recent development reported here. Then, human-computer interfaces and the embodied cognition paradigm.

Disconcerting, to say the least.
Nico (pictured above), developed by a team at Yale University, is apparently going to be able to recognise itself in a mirror, and is already able to "identify almost exactly where its arm is in space based on [a] mirror image" (New Scientist, 22.08.12). This may not sound like much, but the so-called mirror test is a key psychological experiment used to demonstrate self-awareness. Only a few non-human animals are able to recognise themselves in mirrors (including, off the top of my head, chimps, elephants, and dolphins), whilst in human children it forms a key stage in normal cognitive development (usually at around 18 months). So making a robot that is able to pass this test would be a major development in AI research. 

It's impressive stuff, but what's particularly interesting is how they've programmed it to do this. According to this article, the robot compares feedback from its own arm movements with visual information from its 'eyes', and determines whether or not the arm that it is seeing belongs to it by checking how closely these match. This use of the robot's body to carry out cognitive tasks fits well with the enactive model of vision, whereby we learn about the world through moving and acting in it. It's certainly an improvement on previous models of AI research, which have tended to focus on 'off-line' solutions, forming representations and computing a response based on these. By harnessing elements of our environment (which includes our own body), both we and robots like Nico are able to minimise the cognitive and computational load compared with purely representational solutions. (See Clark 2003 for an accessible discussion of such 'off-loading' strategies.)
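To make that mechanism a little more concrete, here is a minimal Python sketch of the matching step as I understand it from the article. Everything here (the function name, the use of correlation as the similarity measure, the threshold value) is my own illustrative assumption, not a detail of Nico's actual implementation:

    import numpy as np

    def is_own_arm(motor_trace, visual_trace, threshold=0.9):
        """Guess whether the arm seen in the mirror belongs to the robot.

        Compares the robot's proprioceptive record of its own arm
        movements (motor_trace) with the motion it observes visually
        (visual_trace). A high correlation between the two time series
        suggests that the observed arm is the robot's own.
        """
        motor = np.asarray(motor_trace, dtype=float)
        visual = np.asarray(visual_trace, dtype=float)
        # Normalised correlation between the two movement traces.
        correlation = np.corrcoef(motor, visual)[0, 1]
        return correlation >= threshold

    # Toy usage: mirror motion that closely tracks the motor commands
    # is classified as the robot's own arm.
    commands = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5]
    mirror = [0.05, 0.48, 0.97, 0.52, 0.01, -0.46]
    print(is_own_arm(commands, mirror))  # True

The philosophical point survives the simplification: on this approach 'self' is not a stored representation but a sensorimotor contingency. The arm that moves when, and only when, I move is mine.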

This kind of research is very exciting, and self-representation is certainly an important step in developing truly intelligent AI, but it strikes me that by focusing on one specific problem like this, researchers risk missing the overall picture. It's all well and good designing a robot that can recognise itself, another robot that can traverse rough terrain, and yet another robot that can recognise visual patterns, but we'll only start getting truly impressive results when all these abilities are put together. I'm convinced that some elements of human cognition are emergent, only appearing once we reach a critical mass of less advanced capabilities, and how this occurs might not become apparent until we've achieved it. Designing and programming solutions, in advance, for absolutely everything that we might want a robot to do just isn't feasible. Intriguingly, Nico seems to have been originally designed to interact with children, which I'll admit is more promising. There's nothing wrong with tackling AI problems in isolation; we just have to remember that eventually we should be looking toward forming these solutions into a coherent whole.

More on this below...

Which leads me, somewhat tenuously, to my next topic. Anderson (2003: 121-5) discusses some interesting proposals from Paul Dourish concerning the future of human-computer interfaces (i.e. the ways in which we interact with and make use of computers). For the last half-century this has largely been constrained by the limitations of the computers themselves, meaning that how we interface with them has not always been well suited to our human limitations. The difficulties which many people have with even the simplest computer tasks attest to this. Research in both 'embodied' AI and embodied cognition is beginning to suggest some alternative ways in which human-computer interfaces might be designed.

As an example of one such alternative Anderson gives the "marble answering machine", which I believe Clark (2003) also discusses. This machine, illustrated above, functions just as a regular answering machine does, but instead of an electronic display or automated message, it releases a different marble for each message recorded. Each marble is unique, and returning it to the machine elicits the playback of the particular message that it represents. Thus, in a very tangible and intuitive way, the user is able to keep track of their messages by handling, even removing, the physical marbles. Similar interfaces could be imagined for many other simple computers (for that is all an answering machine is), or could even be scaled up to the complexity of a desktop PC or laptop.
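For the programmatically minded, here is a toy Python model of the marble-to-message mapping that gives the interface its tangibility. The class and method names are my own invention (the real machine is a physical artefact, not code), but the sketch captures the key idea: each marble is a durable physical handle on a digital object:

    import itertools

    class MarbleAnsweringMachine:
        """Toy model of the marble answering machine: each recorded
        message is bound to a unique marble, and returning that marble
        to the machine plays its message back."""

        def __init__(self):
            self._messages = {}                  # marble id -> message
            self._marble_ids = itertools.count(1)

        def record(self, message):
            """Record a message and release a new marble representing it."""
            marble = next(self._marble_ids)
            self._messages[marble] = message
            return marble

        def play(self, marble):
            """Return a marble to the machine to hear its message."""
            return self._messages[marble]

    machine = MarbleAnsweringMachine()
    m1 = machine.record("Hi, it's Alice - call me back!")
    m2 = machine.record("Reminder: seminar at three.")
    print(machine.play(m2))  # "Reminder: seminar at three."

Notice that the user never needs a menu or a display; the state of the machine is worn on its sleeve, as the pile of marbles waiting to be played.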

Here Anderson makes an interesting contrast between this "tangible computing" and another direction that human-computer interfaces might take: virtual reality (2003: 124). He views the latter as being distinctly unfriendly to humans, drawing them into the world of the computer as opposed to drawing the computer out into the world of the human. I think there's room for both approaches, but this seeming dichotomy between the two worlds, one physical and one virtual, is certainly a striking image. What's also striking is the continued interaction between embodied cognition, robotics and AI, and computing, and just how fruitful it can be for all concerned. Once again I am struck by the hugely positive potential for interdisciplinary co-operation, particularly when it comes to philosophy and cognitive science.
 
  • Anderson, M. 2003. "Embodied Cognition: A Field Guide." Artificial Intelligence 149: 91-130.
  • Clark, A. 2003. Natural Born Cyborgs. Oxford: OUP.