Saturday 24 November 2012

A spectre is haunting cognitive science...

...the spectre of Cartesian materialism. If there's been one consistent theme running through my studies over the last two and a half months, it's this. But what is Cartesian materialism, and why is it haunting cognitive science?

A few obligatory words about the man himself before we go any further. René Descartes was a 17th century philosopher and mathematician, probably most famous for the now-infamous words "cogito, ergo sum" - "I think, therefore I am". He also invented the Cartesian coordinate system, which most of you will have been taught, even if you don't know it (it's the classic x-axis, y-axis thing). In modern analytic philosophy he enjoys a dubious status as both the inspiration and the target of many key arguments. It is a great irony that a tradition which owes so much to Descartes also routinely indoctrinates undergraduates against him.

He did most of his philosophising from the comfort of his bed.

Not that this is necessarily a bad thing. Many of Descartes' arguments are terrible, but the intuitions they appeal to remain strong, and his influence (the "spectre" of my title) can be felt throughout cognitive science and analytic philosophy of mind. Foremost amongst these is the intuition that 'mind' and 'body' must refer to two distinctly separate kinds of things. Descartes thought that this meant they must be composed of two separate substances, one physical and extended, the other insubstantial and non-extended. His cogito argument refers to this distinction - my mind, being a thinking thing, seems to exist independently of (and prior to) any physical world.

Empirical philosophy of mind (and thus cognitive science) tends to reject this dualism. Most philosophers of cognitive science (including myself) are physicalists, committed to there being only one kind of substance in the world. Thus the mind must be made out of the same kind of stuff as the body. Despite this commitment, there remains a tendency to conceive of the mind as something special, somehow autonomous from its physical instantiation. This attitude is sometimes called 'property dualism', 'non-reductive physicalism', or, by its opponents, 'Cartesian materialism'.

Classical cognitive science, which dates back to around the middle of the last century, was (and still is) enamoured with the idea that the mind is essentially a computer program. As such it made sense to think of the mind as something distinct from the brain, a kind of "software" running on biological "hardware". This intuition is still strong today, particularly amongst those wanting to give an account of mental representation ("pictures" in the mind), or of the apparently inferential structure of cognition. Traditional functionalist accounts of cognition also tend towards a form of Cartesian materialism, as the multiple realisability requirement means that strict type identity between the mind and the brain is not possible. Whilst in many cases the mind (classically speaking) simply is the brain, it's conceivable that it might take some other form, and so the two are not strictly identical. 
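The "software on hardware" picture can be caricatured in code. The following is purely my own toy illustration (nothing from the literature): the same stimulus-response "program" produces the same behaviour on two very differently implemented "hardware" platforms, so on this picture the program is not identical with either platform.

```python
# Toy picture of "mind as software": one program, two substrates.
# All names and rules here are invented for illustration.

def mind_program(stimulus):
    """The 'software': a simple stimulus-response mapping."""
    return "withdraw" if stimulus == "heat" else "continue"

class BiologicalBrain:
    # Carbon 'hardware': precomputes responses in a wiring table.
    def run(self, program, stimulus):
        wiring = {s: program(s) for s in ("heat", "touch")}
        return wiring[stimulus]

class SiliconChip:
    # Silicon 'hardware': computes each response on demand.
    def run(self, program, stimulus):
        return program(stimulus)

# Different internals, identical behaviour: the program is multiply realised.
assert BiologicalBrain().run(mind_program, "heat") == SiliconChip().run(mind_program, "heat")
```

The internals of the two classes differ, but nothing about `mind_program` depends on which one runs it; that independence is the crude analogue of multiple realisability.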

However, recent (and some not-so-recent) work in embodied cognition argues that the physical body might be more important than classical cognitive science assumes. Examples include John Searle's suggestion that some quality of the neurobiological brain might be essential for consciousness (1980: 78), various enactive approaches to perception (championed by Alva Noë), and the dynamical systems approach that argues that cognition is a continuous process involving the brain, body, and environment. Whilst these approaches differ in many respects, they all agree that the mind cannot be conceived of as distinct or autonomous from the body.

Whilst Daniel Dennett takes an essentially computational and functionalist approach to cognition, he has also warned against the risks of Cartesian materialism - in fact, he invented the term. In Consciousness Explained (1991), he argues that many of our confusions about both consciousness and the self stem from Descartes, and that it is essential that we stop thinking about the mind as a single entity located at some discrete location within the brain. His mentor Gilbert Ryle made a similar point in The Concept of Mind, writing about the "dogma of the ghost in the machine" (1949: 17), the disembodied Cartesian mind that somehow controls the body.

A final Cartesian oddity that I have come across recently is found in the phenomenological work of Jean-Paul Sartre. Despite explicitly rejecting the Cartesian concept of the self, he emphasises a distinction between the "being-in-itself" and the "being-for-itself". The former is something like a physical body, and is all the "being" that a chair or a rock can possess, whilst the latter is what makes us special, the uniquely first-person point of view that we seem to enjoy. In making this dichotomy he has been accused of resurrecting a kind of Cartesian dualism, in contrast with another famous phenomenologist, Merleau-Ponty, who saw the self as inherently bound up in its relations to the world.

So there you have it, a whistle-stop tour of Cartesian materialism. I'm aware that I've skimmed over a lot of ideas very quickly here, but hopefully it's enough to illustrate the way in which Descartes is still very much exerting an influence, for better or for worse, on contemporary philosophy of mind.

  • Boden, M. 1990. The Philosophy of Artificial Intelligence. Oxford: OUP.
  • Dennett, D. 1991. Consciousness Explained. Little, Brown and Company. 
  • Searle, J. 1980. “Minds, Brains, and Programs.” Reprinted in Boden 1990: 67-88.

Wednesday 14 November 2012

The "theory" in theory-theory isn't really a theory

First a little bit of background for anyone who isn't familiar with the theory of mind debate. The question is how we are able to understand the mental states of other people, and whether, broadly speaking, we perceive them directly or rely on some kind of inferential process. There's also a subsidiary debate about whether that inferential process might involve "simulating" the mental states of others within our own mind, but I'm not going to discuss that here. What I want to focus on is the dialectic between Shaun Gallagher (probably the strongest proponent of the direct perception camp) and theory-theory, which is the theory that we rely on a theory in order to understand other minds.

As you may have noticed, philosophers are not particularly imaginative when it comes to naming their theories. Theory-theory is really quite simple, though. It basically says that we possess a theoretical model that we use to interpret other people's actions, allowing us to attribute folk psychological states such as pain and belief to other people, despite having no access to their mental life. So when I see you crying, I am able to understand that you are sad by checking my perceptual evidence ("crying") against my theory of mind ("crying = sad").

This infant, lacking a fully developed theory of mind, is unaware that she is sad.

In contrast, Gallagher appeals to phenomenological evidence to argue that we are actually able to perceive the states of other minds directly, without any appeal to a theory. I was initially sceptical of this position, not because of the evidence that it appeals to (it certainly feels like I perceive your sadness directly), but rather because it seems to lack any account of the actual process that goes on when we perceive mental states. As soon as we try to give an account of this process, we seem to reintroduce a (limited) kind of theory, one that may not be explicit but nonetheless underlies so called "direct" perception.

My undergraduate supervisor Suilin Lavelle makes a similar point in her paper "Theory-theory and the Direct Perception of Mental States" (2012), and I won't deny that her view has undoubtedly influenced my own. However I do feel that Gallagher is right to deny that we explicitly theorise about other minds, at least under usual circumstances. It's still possible to reconcile his position with theory-theory, but not without putting pressure on our common-sense understanding of what constitutes a "theory".

A quick aside: I am focusing here on what is sometimes called "innate" or "modular" theory-theory. This is the theory that we are born with a theoretical understanding of other minds, one that develops in predictable ways as we pass through infancy. It can be contrasted with "scientific" theory-theory, which says that as infants we form a theory about other minds, based on our inter-personal experiences. I find the former theory a lot more plausible, for reasons that I won't go into here.

Back to direct perception. Lavelle argues that theoretical entities can be thought of as direct objects of our perception, provided that we are equipped with the correct theory (Lavelle 2012: 227-9). If mental states are like this, then theory-theory can claim that when I infer from your crying that you are sad, I am in a sense "directly perceiving" your sadness. This might not be enough to satisfy Gallagher, but it is certainly beginning to look a lot more like the kind of intersubjective experience of mental states that he advocates. In fact, I'm inclined to say that the dispute, from this angle, is little more than an aesthetic one. Gallagher doesn't want to call whatever underlies this process a "theory", whilst Lavelle (and others) do.

So why should we think that the tacit processes underlying our perception of mental states are theoretical? Theory-theorists tend to fall back on experimental evidence at this point, arguing that the kinds of systematic errors we find infants performing when they attribute mental states suggest that a certain kind of theoretical structure is at work. They also claim that in order to support inferential reasoning, our understanding of other minds must come in the form of a theory, with propositions and syntax. On the other hand, if this theory is relatively innate and non-explicit, it's unclear to what extent it could really be a "theory". Perhaps it is best described as a "theory" only in a loose sense, just as we might want to say that I have a "theory of depth" that allows me to perceive a far-away cow as being normal sized, despite it appearing to be small. This doesn't mean that I literally understand depth theoretically though.

I think some theory-theorists would actually agree with this interpretation, which is why I said that the dispute is mostly aesthetic. Some people are happy to call a tacit "theory" a theory, others aren't, but this doesn't mean that they actually disagree about anything significant - which sadly is often the case in philosophy. There may be something more significant and fundamental resting on the distinction between a theory and a non-theoretical perceptual process, but I'll happily admit that I'm not quite seeing it yet.

Some credit is due to everyone in the Cognition, Culture, and Context seminar class at Edinburgh University, with whom I discussed this yesterday. Any mistakes that I've made are my own.



Thursday 8 November 2012

Abandoning the Essential, Embracing the Vague

Ideas of the self continue to bewilder. Philosophy, so often disengaged from the real world, becomes starkly relevant when we consider, for example, the daily tribulations of dementia sufferers and carers, and their very real concerns about identity.

Julian Baggini's The Ego Trick contains a very nice summary of one apparent problem facing notions of the self, 

“Therein lies a paradox of any view of the self which puts psychological continuity at its core. On such views, radical discontinuity destroys it. But if there is no hard core of self, and it is always in flux, then as long as the change is gradual, two very different stages in a person's life can legitimately be seen as stages in the life of one self.” (pg. 56)

To paraphrase wikipedia, a paradox is a statement that leads to a contradiction or a situation which defies logic or reason. Baggini rightly highlights the importance of the “paradox” which arises from a common notion of the self. However, I think the solution is to realise that it is no true paradox as such. There is in fact nothing contradictory about holding that the self survives across long-term change, whilst also holding that it cannot survive certain rapid or sufficiently drastic kinds of change. This is possible when we abandon essentialist criteria: the idea that there is a fundamental core or “pearl” of the self to be discovered, rather than a composite entity arising from memory and cognition. Instead, the self becomes vague; a complex phenomenon resulting from our biologically circumstantial cognitive apparatus and our relationships with others. Whereas the essentialist view leads us to believe there must be definite conditions under which the self remains the same or not, the alternative frees us to see the self as far more confusing.

In everyday life it's usually clear that the person we saw yesterday is still the same person today. And yet it is not hard to conjure situations where it seems clear they are not. In between there are fuzzy grey areas, fuzzy and grey not because we lack an understanding of the essential nature of the self, but because there is no truth of the matter: there is no essential nature of the self.

Traditional views of an essential self include the idea of an enduring soul, prevalent at least in the West.

Much of what determines our feelings of a continuous, pervasive self in ourselves and others must surely be the result of useful adaptation. Even species with a simple social life need to track and distinguish other particular members as continuous entities, despite physical and behavioural changes over time. Within astonishingly elaborate human society this basic need remains true, but it is compounded with the myriad complexities that come with higher awareness and profound relationships. This complexity creates uncertainty and confusion, strengthened by our own intuition that there must be an essential part of ourselves which, once understood, will enlighten us to the mysteries of the self. Our evolved psychology may well predispose us to viewing the person as having an essential core. But by clinging to this idea within philosophy we become confused. This intuition may well be a useful adaptation, but it does not follow that it reveals anything metaphysically true.

The Sorites Paradox. Abandoning essentialist criteria and embracing vagueness may cast light on apparent contradictions.


Of course there is nothing wrong with calling the apparent discontinuity of conclusions about the self a paradox, or with being continually puzzled by it. That is an inevitable part of the nebulous self and the human condition. Anyone who has experienced the development of dementia in a loved one (an issue Baggini discusses at length) will understand the conflicted feelings one can feel about the identity of the sufferer.

Is the dementia sufferer the same person they once were? Within the delicate scenarios in which this question may be raised, there may be well-founded and pragmatic reasons for assuming one answer or the other, the reasonableness of which will be relative to the stage and context of the sufferer's illness. Sometimes it may be clear that our loved one is the same person they always were. Changed perhaps. But still them. Unfortunately it is not always so clear. Sometimes it will be hard to say. Every carer has a different perspective, and seldom are any of them wrong. That is just the peculiar, sometimes painful nature of the self.

Baggini, J. 2011. The Ego Trick. London: Granta Publications.

Tuesday 6 November 2012

Functionalism reconsidered

I've long considered myself to be a functionalist about mental states such as belief and pain. Functionalism is the theory that mental states should be identified not by their physical instantiation but by their functional role, i.e. the role that they play within a given system. The classic example is pain, which is said to be defined by behaviours such as flinch responses, yelling out, and crying (and perhaps a particular kind of first-person experience). One of the main motivations for functionalism is the "Martian intuition" - the intuition that were a silicon-based Martian to exhibit pain-behaviour, we would want to say that it is in pain, despite it lacking a carbon-based nervous system like our own. A less exotic intuition is that an octopus or capuchin monkey can probably feel pain, despite the exact physical instantiation of this pain differing from our own.
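The Martian intuition can itself be caricatured in code. In this toy sketch of my own (all names and stimulus-response rules are invented), a system counts as "in pain" purely in virtue of its input-output profile, whatever its internals look like:

```python
# Functional individuation, caricatured: a system is "in pain" iff it
# exhibits the right behavioural profile, regardless of implementation.
# The profile and the classes below are invented for illustration.

def exhibits_pain_role(system):
    return (system.respond("punch") == "recoil"
            and system.respond("rest") == "calm")

class Human:
    def respond(self, stimulus):
        # Carbon-based internals: a lookup table stands in for a nervous system.
        return {"punch": "recoil", "rest": "calm"}[stimulus]

class Martian:
    def respond(self, stimulus):
        # Silicon-based internals, but the same functional profile.
        return "recoil" if stimulus == "punch" else "calm"

class LaughingMartian:
    def respond(self, stimulus):
        # Wrong profile: not "relevantly similar" to the paradigm case.
        return "laugh"

print(exhibits_pain_role(Human()))           # True
print(exhibits_pain_role(Martian()))         # True
print(exhibits_pain_role(LaughingMartian())) # False
```

The trouble discussed below is visible even here: `exhibits_pain_role` is just a stipulated checklist, and nothing in functionalism itself tells us which behaviours belong on it, or what to say about a system that satisfies only some of them.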


Martian octopus, perhaps in pain? 
(with permission from Ninalyn @ http://studiodecoco.tumblr.com/)

However, I'm now beginning to suspect that there might be more than a few problems with functionalism. For starters, functional states are often defined as being those that are "relevantly similar" to an imagined paradigm case - thus, a Martian who screamed and recoiled when we punched it might be said to be in pain, but one that laughed and clapped its hands (tentacles?) probably wouldn't. This is fine up to a point, especially in seemingly clear-cut cases like the above, but what should we say when we're faced with the inevitable borderline case?

Whether or not fish can feel pain seems to be a case like this. Research into fish pain behaviour is contentious - whilst fish exhibit apparent pain behaviour, they have only recently been shown to exhibit more complex pain avoidance behaviour that might be thought essential to pain. The problem is not just a lack of evidence, either; there's a more fundamental lack of clarity about how exactly we should define the functional role of pain, or indeed any other mental state.

Having said that, the problem isn't limited to the functionalist account of mental states. Biological species appear to form vague natural kinds, a problem which has motivated the idea of homeostatic property cluster (HPC) kinds, categories of kinds that share some, but not all, of their properties. So maybe we could say that functional kinds, such as pain, are a category of HPC kinds. That still wouldn't necessarily give us a straight answer in genuine borderline cases, but at least we'd have good reason to think functional roles might sometimes pick out genuine kinds (albeit perhaps not natural kinds).

The problems don't stop there though. By arguing that it entails a radical form of cognitive extension, Mark Sprevak has pushed functionalism to its logical extreme. If he is correct then being a functionalist would commit you to apparently absurd conclusions,1 such as that the entire contents of the Dictionary of Philosophy sitting on my desk form part of my cognitive system, or that my capacity for mental arithmetic is bounded only by my access to electronic computing power. I think there might be a way for functionalism to avoid the full force of this argument, but it comes with its own problems and costs.

Essentially what the functionalist needs to do is to stop talking about cognition and mental states as though they were one kind of thing. They're not, and rather than lumping memory, personality, beliefs, and so on into one unitary framework, we need to look at giving finer-grained functional descriptions in each case. This might even mean getting rid of some mental states, such as belief, or at least admitting that they're more complex than we first thought. This approach will still entail some degree of cognitive extension, but hopefully in a more subtle and intuitive way. So whilst it might not be true that the contents of the Dictionary are part of my 'cognitive system', they may nonetheless form part of a belief-like system, albeit one that functions differently to my regular belief system.

Would this still be functionalism? In a sense yes, because it would maintain a degree of multiple realisability, only at a more fine-grained level. So a Martian with a silicon brain might have beliefs, but equally they might have something more akin to the belief-like system that is constituted by me-and-the-Dictionary. The problem with functionalism is that it tends to reify our folk intuitions about mental states, and we need to remember that these might not be entirely accurate. I suppose I'm beginning to lean towards a form of eliminativism, although I still think that there's room for an instrumentalist account of functional roles. 


1. I say "apparently" because I'm not entirely convinced that one shouldn't just bite the bullet and accept these conclusions. That's probably a post for another day though.