Wednesday 26 December 2012

Animal Experiments and Laboratory Conditions: Some Initial Thoughts

I stumbled upon this paragraph on the website of the Medical Research Modernization Committee:

“the highly unnatural laboratory environment invariably stresses the animals, and stress affects the entire organism by altering pulse, blood pressure, hormone levels, immunological activities and a myriad of other functions. Indeed, many laboratory "discoveries" reflect mere laboratory artefact”

The article goes on to list and reference several examples where they believe artificial laboratory conditions helped to mislead researchers. For example, they take it that “unnaturally induced strokes in animals has repeatedly misled researchers”. I am in no position to evaluate such medical cases, and the authors of the article do not explicate their examples in enough detail for a lay person to draw reasonable conclusions. I hope at some point in the future to be able to comment more on this topic. However, their general point is one I have often considered, albeit within the arena of animal behaviour and cognition.

An intense mouse.
 Can an artificial environment affect the physiology of an animal in such a way that it bears on medical and other research?  
The rough idea is that the artificial laboratory conditions may affect the results of experiments in important ways.

Scientists researching social cognition in chimpanzees, say, need to be aware that a laboratory environment may affect the animal’s normal psychology. For example, long-term interaction with humans may make an animal more susceptible to certain human-oriented behaviour, another factor which might affect generalisation from results. Experiments involving tasks in which chimps must assist humans need to take into account whether the subjects have a prior history with experimenters. And indeed this is discussed and taken into account in many good experiments.

There is nothing necessarily wrong with experiments into social cognition in chimps within a laboratory setting. It would be foolish of us to disregard all laboratory-based research. In most cases it is the only possible environment.

In one of my favourite studies, designed explicitly to compare the altruistic tendencies of human infants and young chimpanzees, both were tested on similar tasks (Warneken et al., 2007). A human experimenter confronted a problem and needed assistance (e.g. reaching for an out-of-reach marker, bumping into an object that needed removing), with no reward given for help. Whilst the human infants intervened in more tasks than the chimps, the latter did reliably assist in the tasks involving reaching (incidentally also the tasks in which the children most reliably helped). These results, I believe, provide good support for a natural capacity, in both humans and chimps, for non-selfish helping behaviour and tendencies beyond near kin. It is hard to fault this study for taking place in a laboratory setting.

In addition to possessing theory of mind, this guy can actually possess your mind.
Nevertheless, the setting and history of all subjects must be taken into account as potentially relevant variables. In short, we need to be aware of the possibility that a laboratory setting might affect the psychology, and thus the behaviour, of animal subjects.


Warneken F, Hare B, Melis AP, Hanus D, Tomasello M (2007) Spontaneous Altruism by Chimpanzees and Young Children. PLoS Biol 5(7): e184. doi:10.1371/journal.pbio.0050184

Friday 21 December 2012

Gilbert Ryle's Concept of Mind

I'd call this a book review, but I haven't finished the book yet. I am enjoying it though, so I thought I'd write a few words about some of the more relevant themes.

Just chilling, no doubt reading some Wittgenstein

As I mentioned last time, it was Gilbert Ryle who coined the term "ghost in the machine" to refer to the disembodied mind that cognitive science seems intuitively drawn towards. The Concept of Mind is to a large extent aimed at dispelling this intuition, but along the way it also touches upon a number of other fascinating topics. Below is a list of ideas that Ryle either introduces, expands upon, or pre-empts:
  • "Knowing How and Knowing That": This is the title of a whole chapter, wherein he draws a conceptual distinction between the two kinds of knowing. In brief, the first is the skilful execution of an action, the second the reliable recollection of a fact. The "intellectualist legend", according to Ryle, makes the former subordinate to the latter, in that all activities are reduced to the knowledge of certain rules (32). That this reduction is false is fundamental to his broader point - there is no isolated realm of the mental, and all cognitive activity must be expressed through action (or at least the potential for action).
  • Embodied cognition and the extended mind: In the same chapter, he devotes a few pages to the common notion that thinking is done "in the head" (36-40). This notion, he argues, is no more than a linguistic artefact, stemming from the way we experience sights and sounds. Unlike tactile sensations, sights and sounds occur at some distance from our body, and so when we imagine or remember them, it makes sense to highlight this distinction by saying that they occur 'in the head'. By extension thought, which Ryle conceives of as internalised speech,1 is also said to occur 'in the head'. However this idiomatic phrase is just metaphorical, and there is no reason that thinking should (or could) occur exclusively in the head.
  • "The Will": Another chapter, this time de-constructing our understanding of volition and action. Suffice to say, Ryle thinks we've got ourselves into a terrible mess, in particular in supposing that to do something voluntarily requires some additional para-causal spark. Rather, to describe an action as voluntary is simply to say something about the manner in which, and circumstances under, it is performed. Free will, under this reading, is something to do with the kind of causal mechanism involved, rather than anything 'spooky' or non-physical.2 Personally I've never found this kind of account particularly convincing, but it is nonetheless influential to this day.
  • Higher-order thought as a theory of consciousness: Although he never explicitly puts it this way, there is a passage where Ryle describes how some "traditional accounts" claim that what is essential for consciousness is the "contemplation or inspection" of the thought process that one is conscious of (131). This is very similar to contemporary 'higher-order' theories of consciousness (see Carruthers 2011). Ryle doesn't exactly approve, dismissing such theories as "misdescribing" what is involved in "taking heed" of one's actions or thoughts.
So there you have it: Gilbert Ryle, largely forgotten but by no means irrelevant. As you may have noticed, a lot of his ideas influenced Daniel Dennett, which isn't surprising, seeing as Dennett studied under Ryle at Oxford.
1. This, perhaps, is one source of Dennett's fable about the origins of consciousness (1991).
2. Again, this is reminiscent of Dennett (2003).
 
References
  • Carruthers, P. "Higher-order theories of consciousness." Stanford Encyclopedia of Philosophy. Retrieved from http://plato.stanford.edu/archives/fall2011/entries/consciousness-higher [21.12.2012]
  • Dennett, D. 1991. Consciousness Explained. Little, Brown & Company.   
  • Dennett, D. 2003. Freedom Evolves. Little, Brown & Company.
  • Ryle, G. 1949. The Concept of Mind. Hutchinson. 

Saturday 24 November 2012

A spectre is haunting cognitive science...

...the spectre of Cartesian materialism. If there's been one consistent theme running through my studies over the last two and a half months, it's this. But what is Cartesian materialism, and why is it haunting cognitive science?

A few obligatory words about the man himself before we go any further. René Descartes was a 17th century philosopher and mathematician, probably most famous for the now-infamous words "cogito, ergo sum" - "I think, therefore I am". He also invented the Cartesian coordinate system, which most of you will have been taught, even if you don't know it (it's the classic x-axis, y-axis thing). In modern analytic philosophy he enjoys a dubious status as both the inspiration and the target of many key arguments. It is a great irony that a tradition which owes so much to Descartes also routinely indoctrinates undergraduates against him.

He did most of his philosophising from the comfort of his bed.

Not that this is necessarily a bad thing. Many of Descartes' arguments are terrible, but the intuitions they appeal to remain strong, and his influence (the "spectre" of my title) can be felt throughout cognitive science and analytic philosophy of mind. Foremost amongst these is the intuition that 'mind' and 'body' must refer to two distinctly separate kinds of things. Descartes thought that this meant they must be composed of two separate substances, one physical and extended, the other insubstantial and non-extended. His cogito argument refers to this distinction - my mind, being a thinking thing, seems to exist independently of (and prior to) any physical world.

Empirical philosophy of mind (and thus cognitive science) tends to reject this dualism. Most philosophers of cognitive science (including myself) are physicalists, committed to there being only one kind of substance in the world. Thus the mind must be made out of the same kind of stuff as the body. Despite this commitment, there remains a tendency to conceive of the mind as something special, somehow autonomous from its physical instantiation. This attitude is sometimes called 'property dualism', 'non-reductive physicalism', or, by its opponents, 'Cartesian materialism'.

Classical cognitive science, which dates back to around the middle of the last century, was (and still is) enamoured with the idea that the mind is essentially a computer program. As such it made sense to think of the mind as something distinct from the brain, a kind of "software" running on biological "hardware". This intuition is still strong today, particularly amongst those wanting to give an account of mental representation ("pictures" in the mind), or of the apparently inferential structure of cognition. Traditional functionalist accounts of cognition also tend towards a form of Cartesian materialism, as the multiple realisability requirement means that strict type identity between the mind and the brain is not possible. Whilst in many cases the mind (classically speaking) simply is the brain, it's conceivable that it might take some other form, and so the two are not strictly identical. 

However, recent (and some not-so-recent) work in embodied cognition argues that the physical body might be more important than classical cognitive science assumes. Examples include John Searle's suggestion that some quality of the neurobiological brain might be essential for consciousness (1980: 78), various enactive approaches to perception (championed by Alva Noë), and the dynamical systems approach that argues that cognition is a continuous process involving the brain, body, and environment. Whilst these approaches differ in many respects, they all agree that the mind cannot be conceived of as distinct or autonomous from the body.

Whilst Daniel Dennett takes an essentially computational and functionalist approach to cognition, he has also warned against the risks of Cartesian materialism - in fact, he invented the term. In Consciousness Explained (1991), he argues that many of our confusions about both consciousness and the self stem from Descartes, and that it is essential that we stop thinking about the mind as a single entity located at some discrete location within the brain. His mentor Gilbert Ryle made a similar point in The Concept of Mind, writing about the "dogma of the ghost in the machine" (1949: 17), the disembodied Cartesian mind that somehow controls the body.

A final Cartesian oddity that I have come across recently is found in the phenomenological work of Jean-Paul Sartre. Despite explicitly rejecting the Cartesian concept of the self, he emphasises a distinction between the "being-in-itself" and the "being-for-itself". The former is something like a physical body, and is all the "being" that a chair or a rock can possess, whilst the latter is what makes us special, the uniquely first-person point of view that we seem to enjoy. In making this dichotomy he has been accused of resurrecting a kind of Cartesian dualism, in contrast with another famous phenomenologist, Merleau-Ponty, who saw the self as inherently bound up in its relations to the world.

So there you have it, a whistle-stop tour of Cartesian materialism. I'm aware that I've skimmed over a lot of ideas very quickly here, but hopefully it's enough to illustrate the way in which Descartes is still very much exerting an influence, for better or for worse, on contemporary philosophy of mind.

  • Boden, M. 1990. The Philosophy of Artificial Intelligence. Oxford: OUP.
  • Dennett, D. 1991. Consciousness Explained. Little, Brown and Company. 
  • Searle, J. 1980. “Minds, Brains, and Programs.” Reprinted in Boden 1990: 67-88.

Wednesday 14 November 2012

The "theory" in theory-theory isn't really a theory

First a little bit of background for anyone who isn't familiar with the theory of mind debate. The question is how we are able to understand the mental states of other people, and whether, broadly speaking, we perceive them directly or rely on some kind of inferential process. There's also a subsidiary debate about whether that inferential process might involve "simulating" the mental states of others within our own mind, but I'm not going to discuss that here. What I want to focus on is the dialectic between Shaun Gallagher (probably the strongest proponent of the direct perception camp) and theory-theory, which is the theory that we rely on a theory in order to understand other minds.

As you may have noticed, philosophers are not particularly imaginative when it comes to naming their theories. Theory-theory is really quite simple, though. It basically says that we possess a theoretical model that we use to interpret other people's actions, allowing us to attribute folk psychological states such as pain and belief to other people, despite having no access to their mental life. So when I see you crying, I am able to understand that you are sad by checking my perceptual evidence ("crying") against my theory of mind ("crying = sad").
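
To make the idea concrete, here is a deliberately crude sketch of what "checking evidence against a theory" might amount to. This is my own toy illustration, not any theory-theorist's actual model; real proposals posit something far richer (propositional, inferential), but the shape is similar:

```python
# A toy "theory of mind": rules linking observed behaviour to folk
# psychological states. The behaviours and rules are invented examples.
THEORY_OF_MIND = {
    "crying": "sad",
    "smiling": "happy",
    "wincing": "in pain",
}

def attribute_mental_state(observed_behaviour: str) -> str:
    """Infer a mental state by checking perceptual evidence against the theory."""
    return THEORY_OF_MIND.get(observed_behaviour, "unknown")

print(attribute_mental_state("crying"))  # 'sad' - with no access to the other's mental life
```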

This infant, lacking a fully developed theory of mind, is unaware that she is sad.

In contrast, Gallagher appeals to phenomenological evidence to argue that we are actually able to perceive the states of other minds directly, without any appeal to a theory. I was initially sceptical of this position, not because of the evidence that it appeals to (it certainly feels like I perceive your sadness directly), but rather because it seems to lack any account of the actual process that goes on when we perceive mental states. As soon as we try to give an account of this process, we seem to reintroduce a (limited) kind of theory, one that may not be explicit but nonetheless underlies so called "direct" perception.

My undergraduate supervisor Suilin Lavelle makes a similar point in her paper "Theory-theory and the Direct Perception of Mental States" (2012), and I won't deny that her view has undoubtedly influenced my own. However I do feel that Gallagher is right to deny that we explicitly theorise about other minds, at least under usual circumstances. It's still possible to reconcile his position with theory-theory, but not without putting pressure on our common-sense understanding of what constitutes a "theory".

A quick aside: I am focusing here on what is sometimes called "innate" or "modular" theory-theory. This is the theory that we are born with a theoretical understanding of other minds, one that develops in predictable ways as we pass through infancy. It can be contrasted with "scientific" theory-theory, which says that as infants we form a theory about other minds, based on our inter-personal experiences. I find the former theory a lot more plausible, for reasons that I won't go into here.

Back to direct perception. Lavelle argues that theoretical entities can be thought of as direct objects of our perception, provided that we are equipped with the correct theory (Lavelle 2012: 227-9). If mental states are like this, then theory-theory can claim that when I infer from your crying that you are sad, I am in a sense "directly perceiving" your sadness. This might not be enough to satisfy Gallagher, but it is certainly beginning to look a lot more like the kind of intersubjective experience of mental states that he advocates. In fact, I'm inclined to say that the dispute, from this angle, is little more than an aesthetic one. Gallagher doesn't want to call whatever underlies this process a "theory", whilst Lavelle (and others) do.

So why should we think that the tacit processes underlying our perception of mental states are theoretical? Theory-theorists tend to fall back on experimental evidence at this point, arguing that the kinds of systematic errors we find infants performing when they attribute mental states suggest that a certain kind of theoretical structure is at work. They also claim that in order to support inferential reasoning, our understanding of other minds must come in the form of a theory, with propositions and syntax. On the other hand, if this theory is relatively innate and non-explicit, it's unclear to what extent it could really be a "theory". Perhaps it is a "theory" only in the loose sense in which I have a "theory of depth" that allows me to perceive a far-away cow as being normal sized, despite it appearing to be small. This doesn't mean that I literally understand depth theoretically though.
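
Incidentally, the "theory of depth" can be given a precise form even though none of us literally runs it: size constancy is just trigonometry over retinal angle and estimated distance. A minimal sketch, with numbers that are mine and purely illustrative:

```python
import math

def physical_size(retinal_angle_deg: float, distance_m: float) -> float:
    """Recover an object's physical size from the visual angle it subtends
    and its estimated distance (size constancy stated as trigonometry)."""
    return 2 * distance_m * math.tan(math.radians(retinal_angle_deg) / 2)

# The same small retinal image yields a normal-sized cow at 60 m,
# and a toy-sized object at half a metre:
print(round(physical_size(1.4, 60.0), 2))  # ~1.47 m
print(round(physical_size(1.4, 0.5), 3))   # ~0.012 m
```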

I think some theory-theorists would actually agree with this interpretation, which is why I said that the dispute is mostly aesthetic. Some people are happy to call a tacit "theory" a theory, others aren't, but this doesn't mean that they actually disagree about anything significant - which sadly is often the case in philosophy. There may be something more significant and fundamental resting on the distinction between a theory and a non-theoretical perceptual process, but I'll happily admit that I'm not quite seeing it yet.

Some credit is due to everyone in the Cognition, Culture, and Context seminar class at Edinburgh University, with whom I discussed this yesterday. Any mistakes that I've made are my own.



Thursday 8 November 2012

Abandoning the Essential, Embracing the Vague

Ideas of the self continue to bewilder. Philosophy, so often disengaged from the real world, becomes starkly relevant when we consider, for example, the daily tribulations of dementia sufferers and carers, and their very real concerns about identity.

Julian Baggini's The Ego Trick contains a very nice summary of one apparent problem facing notions of the self:

“Therein lies a paradox of any view of the self which puts psychological continuity at its core. On such views, radical discontinuity destroys it. But if there is no hardcore of self, and it is always in flux, then as long as the change is gradual, two very different stages in a person's life can legitimately be seen as stages in the life of one self.” (p. 56)

To paraphrase Wikipedia, a paradox is a statement that leads to a contradiction or a situation which defies logic or reason. Baggini rightly highlights the importance of the “paradox” which arises from a common notion of the self. However, I think the solution is to realise that it is no true paradox. There is in fact nothing contradictory about holding that the self survives gradual long-term change, whilst also holding that it cannot survive certain rapid or sufficiently drastic kinds of change. This is possible when we abandon essentialist criteria - the idea that there is a fundamental core or “pearl” of the self to be discovered, rather than a composite entity arising from memory and cognition. Instead, the self becomes vague: a complex phenomenon resulting from our biologically circumstantial cognitive apparatus and our relationships with others. Whereas the essentialist view leads us to believe there must be definite conditions under which the self remains the same or not, the alternative frees us to see the self as far more confusing.

In everyday life it's usually clear that the person we saw yesterday is still the same person today. And yet it is not hard to conjure situations where it seems clear they are not. In between lie fuzzy grey areas - fuzzy and grey not because we lack understanding of the essential nature of the self, but because there is no truth of the matter: there is no essential nature of the self.

Traditional views of an essential self include the idea of the soul, prevalent at least in the West, and enduring.

Much of what determines our feelings of a continuous, pervasive self in ourselves and others must surely be the result of useful adaptation. Even species with a simple social life need to track and distinguish particular other members as continuous entities, despite physical and behavioural changes over time. Within astonishingly elaborate human societies this basic need remains, but it is compounded with the myriad complexities that come with higher awareness and profound relationships. This complexity creates uncertainty and confusion, strengthened by our own intuition that there must be an essential part of ourselves which, once understood, will enlighten us to the mysteries of the self. Our evolved psychology may well predispose us to viewing the person as having an essential core, but by clinging to this idea within philosophy we become confused. This intuition may well be a useful adaptation, but it does not follow that it reveals anything metaphysically true.

The Sorites Paradox. Abandoning essentialist criteria and embracing vagueness may cast light on apparent contradictions.


Of course there is nothing wrong with calling the apparent discontinuity of conclusions about the self a paradox, or with being continually puzzled by it. That is an inevitable part of the nebulous self and the human condition. Anyone who has experienced the development of dementia in a loved one (an issue Baggini discusses at length) will understand the conflicted feelings one can have about the identity of the sufferer.

Is the dementia sufferer the same person they once were? Within the delicate scenarios in which this question may be raised, there may be well-founded and pragmatic reasons for assuming one answer or the other, the reasonableness of which will be relative to the stage and context of the sufferer's illness. Sometimes it may be clear that our loved one is the same person they always were. Changed perhaps. But still them. Unfortunately it is not always so clear. Sometimes it will be hard to say. Every carer has a different perspective, and seldom are any of them wrong. That is just the peculiar, sometimes painful nature of the self.

Baggini, J. 2011. The Ego Trick. London: Granta Publications.

Tuesday 6 November 2012

Functionalism reconsidered

I've long considered myself to be a functionalist about mental states such as belief and pain. Functionalism is the theory that mental states should be identified not by their physical instantiation but by their functional role, i.e. the role that they play within a given system. The classic example is pain, which is said to be defined by behaviours such as flinch responses, yelling out, and crying (and perhaps a particular kind of first-person experience). One of the main motivations for functionalism is the "Martian intuition" - the intuition that were a silicon-based Martian to exhibit pain-behaviour, we would want to say that it is in pain, despite it lacking a carbon-based nervous system like our own. A less exotic intuition is that an octopus or capuchin monkey can probably feel pain, despite the exact physical instantiation of this pain differing from our own.
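
One way to picture the functionalist claim is as an interface: what matters is filling the role, not the material that fills it. Here is a deliberately crude sketch of multiple realisability - my own toy example, not a serious analysis of pain:

```python
from abc import ABC, abstractmethod

class PainRole(ABC):
    """A functional role, specified by typical inputs and outputs,
    not by whatever physically implements it."""
    @abstractmethod
    def respond_to_damage(self) -> set[str]: ...

class Human(PainRole):
    # Carbon-based nervous system
    def respond_to_damage(self) -> set[str]:
        return {"flinch", "yell", "cry"}

class SiliconMartian(PainRole):
    # Entirely different hardware, same functional profile
    def respond_to_damage(self) -> set[str]:
        return {"flinch", "yell", "cry"}

def in_pain_when_damaged(agent: PainRole) -> bool:
    """Functionalist test: anything that plays the role counts as in pain."""
    return {"flinch", "yell"} <= agent.respond_to_damage()

print(in_pain_when_damaged(Human()), in_pain_when_damaged(SiliconMartian()))  # True True
```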


Martian octopus, perhaps in pain? 
(with permission from Ninalyn @ http://studiodecoco.tumblr.com/)

However I'm now beginning to suspect that there might be more than a few problems with functionalism. For starters, functional states are often defined as those that are "relevantly similar" to an imagined paradigm case - thus, a Martian who screamed and recoiled when we punched it might be said to be in pain, but one that laughed and clapped its hands (tentacles?) probably wouldn't. This is fine up to a point, especially in seemingly clear-cut cases like the above, but what should we say when we're faced with the inevitable borderline case?

Whether or not fish can feel pain seems to be a case like this. Research into fish pain behaviour is contentious - whilst fish exhibit apparent pain behaviour, they have only recently been shown to exhibit the more complex pain-avoidance behaviour that might be thought essential to pain. The problem is not just a lack of evidence, either: there's a more fundamental lack of clarity about how exactly we should define the functional role of pain, or indeed of any other mental state.

Having said that, the problem isn't limited to the functionalist account of mental states. Biological species appear to form vague natural kinds, a problem which has motivated the idea of homeostatic property cluster (HPC) kinds: kinds whose members share some, but not all, of a cluster of properties. So maybe we could say that functional kinds, such as pain, are a category of HPC kinds. That still wouldn't necessarily give us a straight answer in genuine borderline cases, but at least we'd have good reason to think functional roles might sometimes pick out genuine kinds (albeit perhaps not natural kinds).

The problems don't stop there though. By arguing that it entails a radical form of cognitive extension, Mark Sprevak has pushed functionalism to its logical extreme. If he is correct then being a functionalist would commit you to apparently absurd conclusions,1 such as that the entire contents of the Dictionary of Philosophy sitting on my desk form part of my cognitive system, or that my capacity for mental arithmetic is bounded only by my access to electronic computing power. I think there might be a way for functionalism to avoid the full force of this argument, but it comes with its own problems and costs.

Essentially what the functionalist needs to do is to stop talking about cognition and mental states as though they were one kind of thing. They're not, and rather than lumping memory, personality, beliefs  and so on into one unitary framework, we need to look at giving finer-grained functional descriptions in each case. This might even mean getting rid of some mental states, such as belief, or at least admitting that they're more complex than we first thought. This approach will still entail some degree of cognitive extension, but hopefully in a more subtle and intuitive way. So whilst it might not be true that the contents of the Dictionary are part of my 'cognitive system', they may nonetheless form part of a belief-like system, albeit one that functions differently to my regular belief system. 

Would this still be functionalism? In a sense yes, because it would maintain a degree of multiple realisability, only at a more fine-grained level. So a Martian with a silicon brain might have beliefs, but equally they might have something more akin to the belief-like system that is constituted by me-and-the-Dictionary. The problem with functionalism is that it tends to reify our folk intuitions about mental states, and we need to remember that these might not be entirely accurate. I suppose I'm beginning to lean towards a form of eliminativism, although I still think that there's room for an instrumentalist account of functional roles. 


1. I say "apparently" because I'm not entirely convinced that one shouldn't just bite the bullet and accept these conclusions. That's probably a post for another day though.

Saturday 13 October 2012

Merleau-Ponty, Wittgenstein, and philosophical mysticism

I study embodied cognition, an emerging field which has taken considerable inspiration from the phenomenological work of the likes of Merleau-Ponty, Sartre, and Heidegger. As such, I've been attempting to get to grips with phenomenology, which given my analytic, Anglo-American philosophical education, is a somewhat odd experience. Phenomenology, broadly speaking, was a reaction against both empiricism and idealism, placing primary emphasis on "lived experience" and the act of perception. Merleau-Ponty in particular also focused on the interaction between the perceiver and the world, and it is this sense of "embodiment" that embodied cognition has most taken to heart.

Merleau-Ponty: grumpy

However there is another side to phenomenology, one which has the potential to be profoundly inimical to the whole project of cognitive science, embodied or not. There is evidence to suggest that Merleau-Ponty, at least, understood phenomenology to be far more than a modification of our psychological methodology. His most famous work, Phenomenology of Perception, is  littered with cryptic remarks that undermine any attempt to read it as a work of empirical psychology. He explicitly states that it is a work of transcendental philosophy, aimed at achieving "pre-objective perception". It is not at all clear what this might be, or even whether it can be expressed in words. Throughout the book (which I'll admit I haven't yet read), there is apparently a sense in which many things go unsaid, perhaps even things which will "only be understood by those who have themselves already thought the thoughts".

Wittgenstein: even grumpier

That sounds familiar. The above quote comes from the introduction to Wittgenstein's Tractatus (which I have read, although I won't claim to have understood it). Both Wittgenstein and Merleau-Ponty seem to be struggling to express the unexpressable, and both, perhaps, ought to be read as "anti-philosophers", whose mission is not to solve any great problems but to help us understand why there never were any problems in the first place. This is certainly the opinion of a psychology lecturer I know who, under the influence of both Wittgenstein and Merleau-Ponty, seemed shocked that we philosophers might still be trying to solve the "problem" of consciousness. Whilst I think this is somewhat arrogant (and ignorant), it is true that both Merleau-Ponty and Wittgenstein regarded analytic philosophy as curiously misguided, tied up in knots of its own creation.

In light of which it may seem odd that half a century later analytic philosophy continues to venerate Wittgenstein, and that analytic philosophy of mind, or at least a certain strand of it, has recently adopted Merleau-Ponty as something of an idol. If both or either of them were right, surely we're completely missing the point? In fact I don't think this should worry us too much. Neither Merleau-Ponty nor Wittgenstein was perfect, and much of what they wrote may have been as confusing to them as it is to us. What is important is to pay attention to the issues that they do highlight, and to take to heart anything that does make sense to us. Daniel Dennett takes this approach with regard to Wittgenstein (in Consciousness Explained and elsewhere), and Shaun Gallagher and Dan Zahavi seem to be doing something similar in The Phenomenological Mind, where they attempt to apply phenomenological insights to contemporary cognitive science. Regardless of whether or not either Merleau-Ponty or Wittgenstein would have approved, I find this approach extremely useful, and phenomenologically speaking, perhaps this is all that should matter. It is, after all, my lived experience, not Merleau-Ponty's!

(Some credit should go to the phenomenology reading group at the University of Edinburgh, with whom I discussed much of the content of this post. Any errors or misunderstandings, however, are entirely my own.)

  • Dennett, D. 1991. Consciousness Explained. Little, Brown & Company.
  • Gallagher, S. & Zahavi, P. 2008. The Phenomenological Mind. London: Routledge.
  • Merleau-Ponty, M. 1945/1962. Phenomenology of Perception. London: Routledge.
  • Wittgenstein, L. 1921/1991. Tractatus Logico-Philosophicus. New York: Dover.

Thursday 4 October 2012

"To Squeak and To Squeak Well Are Two Different Things"


I'm not very much fun at parties. Recently I overheard a discussion regarding a friend's pet guinea pig “speaking” to another. I didn't have the sense to ignore it, and instead turned a perfectly amiable encounter into a debate about animal communication from which everyone left feeling no more satisfied with life.

On a number of occasions, mostly amongst non-philosophers, I've noticed that a common response to a denial of language amongst non-human animals is a dismissive “well how do you know!?”. In an accusatory tone, they question how you could possibly think yourself so arrogant as to make claims about the inner workings of a small, furry, yet impenetrably mysterious rodent. As a matter of fact I think there's a broad consensus amongst philosophers and scientists that the vast majority, and probably all, of non-human animal species are incapable of anything equivalent to language (there is of course disagreement about the communicative abilities of primates and some other species). It's worth emphasising to people what is really being said, or not said, when claiming animals "aren't really speaking", but are instead partaking in other, admittedly complex, often poorly understood forms of communication.

It's important to realise that when I say guinea pigs don't have language, I'm not implying...

1. Cartesian Certainty. I don't know with 100 per cent, bet-your-sweet-bippy certainty that guinea pigs don't have language. Neither am I certain lampshades don't have language. Neither am I certain guinea pigs aren't made of cheese. But this sort of Cartesian doubt is as relevant to the question of language capacity, or inner mental activity of any kind, as it is to whether you're all in my head, or the world is a computer-simulated reality run by exploitative machines. In other words, it isn't relevant at all. Not to ordinary daily discourse. Hyperbolic doubt has its place, but it's not really a convincing argument against a particular theory. I don't know that guinea pigs aren't really communicating in language, not for certain. But I believe they aren't, based on certain inferences from certain empirical data.

Neither am I saying...

2. Guinea pigs are rubbish. I'm under the impression that a common underlying feeling amongst layfolk is that by claiming animals aren't really “talking” when communicating, I'm somehow being disrespectful. That by denying them language I'm not only arrogant, but attacking their worth. It's as if being unable to communicate with language morally places animals closer to a packet of Wotsits than to a human. One clearly doesn't necessarily imply the other. Of course even if I did think that lack of language ability carried important moral ramifications (and truth be told I do think there is something to that thought), that wouldn't constitute an argument against my initial premise. It's just an implication you don't like.

Mr Tiddles. Less talk, more fluff.
The ascription of communicative abilities to other species must be an empirical question, in so far as once we've sorted out (theoretically) what we're looking for, it's an empirical question as to whether we find it in other species. If it's not, ultimately, an empirical question, I fail to see how we avoid navel-gazing pondering about what Mr Tiddles is really communicating to Fluffy Features.

I think the same extends to broader issues of mental life. Consciousness is obviously a more complex topic than language, lacking anything close to agreement on how we should use the word. But I do think that once we are clearer about what we're talking about, if we ever get there, the ascription of consciousness will become more and more an empirical issue.

An important caveat: it is of course entirely possible that our empirical questions cannot practically be answered, because of limits to our methods, or because we never manage to coherently establish the theoretical framework. Whilst whether or not Mr Tiddles is communicating in language is an empirical matter, it could be the case that we have insufficient means to pursue the investigation - though in fact with language we have some well established criteria. This is far more plausibly an issue with consciousness and its related topics. It strikes me that it is because of insufficiencies in our theoretical understanding, and disagreements over the empirical groundwork, that we have so much disagreement, e.g. over the ascription of theory of mind to non-humans.

HPCK and Modal Representations

It's been a busy month, moving into a new flat and starting an MSc. I'm studying full time again, which in terms of blogging is a mixed blessing - lots of material, but very little time.

It's nice when you can combine two previously isolated ideas, and that's what I'm going to try and do today. One comes from the philosophy of science, Boyd's "homeostatic property cluster" theory of natural kinds, and the other is an idea from the philosophy of mind, that our mental images might not be entirely separate from our sensory perception.

I'll start with modal representations, because they're probably simpler. A mental representation is basically a mental state that stands for some part of the external world (Clark 1997: 463), whatever we take that to mean. Mental representation is a thorny topic, but all I'm interested in here is one aspect of the issue: whether such representations are composed of sensory information (modal) or are purely abstract (amodal). For example, does our representation of a sunny day call to mind the pleasant feeling of the sun on our skin, or do we somehow comprehend it in isolation from any sensation? For much of the last century (analytic) philosophers tended to pick the latter option, no doubt influenced by classical logic, but some (relatively) recent experiments have questioned that assumption. It seems that there is a systematic connection between representations and the sensory qualities of what they represent, as demonstrated by experiments such as those conducted by Zwaan, Stanfield & Yaxley (2002) and by Glenberg & Kaschak (2002). The implications of these experiments are still being debated, but one interpretation is that our representations (and by extension, our concepts) are composed of bundles of modal (sensory) data, rather than discrete, amodal definitions.

This is where Boyd comes in. His theory is a form of realism about natural kinds, but I think that it shares some interesting similarities with the idea of modal representations. Motivated by the messiness of biological kinds, Boyd characterises a natural kind as sharing a cluster of properties, none of which is necessary or sufficient. These kinds are rooted in the causal structure of the world, and are thus real, but they allow for the flexibility that is necessary when it comes to biological kinds. Given that our access to kinds is mediated by our senses, I think it might make sense to identify the modal bundles that I described above with Boyd's property clusters. Our concept or representation of a cow might consist of the vague appearance of a cow, the smell of cow dung, and the monotonous sound they make - and this will in some sense correspond with (at least some of) the properties in the natural kind cluster "cow". Boyd's point is that there doesn't have to be an exact match for every instantiation of a natural kind, so everyone's perception of cows can (and will) be subtly different.
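
To make the parallel concrete, here is a toy model of cluster membership - my own illustration, not Boyd's formalism, and the properties and threshold are invented. The point is just that no single property is necessary or sufficient; membership is a matter of sharing enough of the cluster:

```python
# Toy homeostatic property cluster for the kind "cow".
COW_CLUSTER = {"cow-shaped", "smells of dung", "moos", "chews cud", "has horns"}

def is_member(observed_properties: set[str], cluster: set[str],
              threshold: float = 0.6) -> bool:
    """An instance belongs to the kind if it shares enough cluster properties."""
    return len(observed_properties & cluster) / len(cluster) >= threshold

# A hornless cow that happens not to be mooing still qualifies:
print(is_member({"cow-shaped", "smells of dung", "chews cud"}, COW_CLUSTER))  # True
```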

I still haven't quite got to grips with Boyd's theory, and I'm not sure how much he would support this idea, but I think it could allow for an evolutionary justification of how we classify natural kinds. This would be similar to Quine's empiricist position (see his 1969), and might not be as realist as Boyd would like.

  • Clark, A. 1997. "The Dynamical Challenge." Cognitive Science 21(4): 461-81.
  • Glenberg, A. & Kaschak, M. 2002. "Grounding language in action." Psychonomic Bulletin and Review 9: 558-65.
  • Quine, W.V.O. 1969."Natural Kinds." In Ontological Relativity and Other Essays. New York and London: Columbia University Press.
  • Zwaan, R., Stanfield, R. & Yaxley, R. 2002. "Language comprehenders mentally represent the shapes of objects." Psychological Science 13: 168-71.

Sunday 2 September 2012

Pragmatic Structural Realism

I'm slowly making my way through Everything Must Go (Ladyman et al 2007), and it's honestly been a breath of fresh air - clarifying, and to some extent answering, several concerns about the philosophy of science that I've been harbouring for some time. It's led me to a version of what Ladyman et al call "ontic structural realism" which I'm tentatively referring to as "pragmatic structural realism". I haven't finished the book yet, so bear with me if this doesn't make any sense, or if I've got something wrong.

Put very simplistically, there is a tension between two broad approaches to science: (constructive) empiricism and (scientific) realism.1 The former states that we should take our scientific theories to be no more than adequate descriptions of the phenomena under investigation, whilst the latter commits science to the literal truth of its theories, and by extension to the existence of the unobservable entities that they describe. For example, the realist is committed to the actual existence of sub-atomic particles, whilst the empiricist will merely use them as part of the description of a contingently accurate theory, without making any judgement as to whether or not they exist.2

Both approaches appear to be flawed (although in ways more complicated than the following paragraph might suggest). Realism is committed to the existence of entities that are likely to turn out not to exist, or not to exist in quite the way that we thought they did. As science progresses, some of these entities become redundant, implying a level of discontinuity between theories that in practice is not manifested. Empiricism, meanwhile, struggles to explain why these entities have any explanatory power, if in fact they're not real (it also risks descent into total relativism). Ladyman et al present structural realism as almost a dialectical synthesis of these two approaches, but for now I'll simply try and break down my own understanding of it.

Realism: Scientific theories do attempt to describe some underlying reality, although of course they are often wrong. Contra empiricism, they are more than just a conveniently accurate account of observable phenomena.

Structural: However, it is not unobservable entities per se that these theories are committed to, but rather the structural relationships between them (if indeed they exist at all). This structure is what underlies reality, and is what science seeks to describe. Whilst theory change requires abandoning some entities, the structure of the previous theory can be retained. Thus, Ladyman et al describe how both successive theories and theories at different explanatory levels can be related in terms of mathematical structure rather than direct one-to-one mapping of entities and propositions (2007: 118).
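
As a rough formal gloss (my own simplification, not Ladyman et al's actual machinery): if an old theory describes a structure $S_1 = \langle D_1, R_1 \rangle$ and its successor describes $S_2 = \langle D_2, R_2 \rangle$, continuity between them requires only something like a structure-preserving map, not a one-to-one survival of entities:

$$\exists\, h \colon D_1 \to D_2 \quad \text{such that} \quad R_1(x, y) \;\Rightarrow\; R_2(h(x), h(y))$$

The entities in $D_1$ may be abandoned wholesale, so long as the relational structure they stood in is recoverable within the new theory.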

Pragmatic: Science is an ongoing process, and so we must recognise the commitments of our current theories as pragmatic place-holders rather than absolute certainties. These theories are our best guess at the structure of reality, and we adapt them as new evidence becomes available. Furthermore, our commitment to structural realism is itself pragmatic, motivated by our belief that it best describes actual scientific (and epistemic) practice.

How is this relevant to the philosophy of mind? Well, for one thing I'm keen to make sure that my understanding of the mind is based on an accurate understanding of science. It's important to ensure that we know what we're on about when we try to describe the physical instantiation of the mind. On one level this calls for an understanding of psychology and neuroscience, but on another it means coming to grips, at least in a basic sense, with physics. All science essentially boils down to physics of some description, and even if we're quickly going to abstract away from that fundamental level, I think we need to understand it first. Otherwise our entire project is going to rest on faulty foundations.

James Ladyman strikes me as someone who's got a very clear grasp of both contemporary science and the muddled attempts of philosophers to try and make sense of it. His 2010 paper (with Don Ross) on appeals to scientific practice in the extended mind debate really struck home for me, and Everything Must Go is more of the same, although heavy going at times. I'm looking forward to hearing him speak at this conference in a week's time, and I'll maybe report back with some further thoughts after that.

1. This terminology is somewhat misleading, as strictly speaking scientific realists are also empiricists. The difference lies in what they believe to exist, not how they advocate studying it.
2. Complicating things further is the apparent failure of modern philosophy to appreciate the nature of contemporary physics. We tend to talk of physics as though it studies discrete, although very small, objects. According to Ladyman et al this is incorrect, and more importantly philosophically unhelpful.
  • Ladyman, J., Ross, D., Spurrett, D. & Collier, J. 2007. Everything Must Go. Oxford: OUP.
  • Ross, D & Ladyman, J. 2010 "The Alleged Coupling-Constitution Fallacy and the Mature Sciences." In The Extended Mind, ed. Menary. Cambridge, MA: MIT Press.

Friday 31 August 2012

Taking an Embodied Approach to Thought Experiments

Embodied cognition, at least in its more radical guises, argues that to truly understand cognition we must look not only at the brain, but also the body and the external world. Whether or not this principle can also be applied to consciousness is a contentious topic (see, for example, Clark 2009), but at the very least it would seem to offer a new approach to several of the classic "consciousness" thought experiments. I've already discussed Frank Jackson's "Mary" experiment in light of embodiment, but today I'd like to consider a few others, and see what general lessons we can draw.

My thoughts on this were prompted by reading Noë (2007), who spends some time discussing the hypothetical isolation of a brain, and what, if anything, it would experience. This is in the context of the search for a neural correlate for consciousness (NCC), a region (or regions) of the brain that is sufficient for conscious experience. Neuroscience is often implicitly committed to the existence of a NCC, and several philosophers are explicitly committed to it, advocating what Noë terms the Neural Substrate Thesis: "for every experience there is a neural process [...] whose activation suffices for the experience" (ibid: 1). If the Neural Substrate Thesis (NST) is correct, then neuroscience will eventually discover a NCC.

Noë focuses on two philosophers who advocate the NST, Ned Block and John Searle. Conveniently, both Block and Searle have also made important contributions to the corpus of philosophical thought experiments. Noë's main point is that focusing exclusively on the brain as the seat of consciousness can in fact be very counterintuitive, to the point of rendering some thought experiments almost incoherent. He demonstrates this with a discussion of the following "duplication scenario" (Noë 2007: 11-15), at least inspired by (if not attributed to) Block:

We are asked to imagine that my brain has an exact duplicate, a twin-brain that, if NST is correct, will undergo the exact same conscious experience that I do. Furthermore, if NST is correct, then provided that this brain continues to mimic my own, it doesn't matter what environment we place it in. It might enjoy an identical situation to my brain, or it might be stimulated just so by an expert neurosurgeon, or it might even be dangling in space, maintained and supported by a miraculous coincidence (see Schwitzgebel's discussion of the disembodied "Boltzmann Brain"). In the first couple of cases, Noë agrees that my twin-brain might well be conscious, but only by virtue of its environment (2007: 13). The final case, what he calls a "disembodied, dangling, accidental brain" (ibid: 15), seems to him to be verging on the unintelligible, and I can see his point. At the very least, it is surely an empirical question whether or not such a brain would be conscious, and one that we have no obvious way of answering.

I'm not really sure what's going on here.
 
These cases reminded me of the classic brain-in-a-vat thought experiments. I've previously held that a brain-in-a-vat would be conscious, and I still do - but with one important caveat. It's only conscious by virtue of the vat itself, and all of the complex stimuli and life-support that it is presumably receiving. If it were simply floating in suspended animation, without any input whatsoever, then I'm not so sure that it would be conscious (or at least not in any familiar sense). That is to say, the brain is not itself conscious, but the extended cognitive system that comprises brain, vat and computer probably is.1

Similar reasoning can be applied to Searle's Chinese Room thought experiment. Clearly the man inside the room doesn't understand Chinese, but that's not the point. The extended cognitive system that is composed of the man, his books, and the room, does seem to understand Chinese. It may even be worthy of being called conscious, although I suspect that the glacial speed at which it functions probably hinders this.

Back to Block. He has argued against functionalism with his China Brain thought experiment. My instinct is that, contra Block, the cognitive system formed by neurone-radios might well be conscious, although the speed at which it operated would give it a unique perspective. Furthermore, it might only be conscious if it were correctly situated, perhaps connected to a human-sized robot or body, as in the original experiment. The pseudo-neuronal system is not enough - it would require the correct kind of embodiment and environmental input to function adequately.

In fact, embodiment concerns might undermine a more radical version of the China Brain proposed by Eric Schwitzgebel. Schwitzgebel argues that complex nation-states such as the USA and China are in fact conscious, due to their functional similarity to conscious cognitive systems. I'm sympathetic to his arguments, but my concern is that such a "nation-brain" might not, in practice, be properly embodied. Aside from structural and temporal issues, it would lack a body with which to interact with the environment, and at best it might enjoy a radically different form of consciousness to our own. Even if it were conscious, we could have difficulty identifying that it was.2 So embodiment is a double-edged sword - it doesn't always support the most radical philosophical conclusions, and it can sometimes end up reinforcing more traditional positions.

1. Literally a few minutes after writing this I realised that Clark (2009: 980-1) makes a very similar point!
2. This is somewhat reminiscent of Wittgenstein's claim that "If a lion could talk, we wouldn't be able to understand it." (1953/2009: #327)

  • Clark, A. 2009. "Spreading the Joy? Why the Machinery of Consciousness is (Probably) Still in the Head." Mind 118: 963-993.
  • Noë, A. 2007. "Magic Realism and the Limits of Intelligibility: What Makes Us Conscious?" Retrieved from http://ist-socrates.berkeley.edu/~noe/magic.pdf
  • Wittgenstein, L. 1953/2009. Philosophical Investigations. Wiley-Blackwell. 

Sunday 26 August 2012

Humans > Computers

I'm going to use this post to discuss a couple of related topics. First up, AI/robotics and a recent development reported here. Then, human-computer interfaces and the embodied cognition paradigm.

Disconcerting, to say the least.
Nico (pictured above), developed by a team at Yale University, is apparently going to be able to recognise itself in a mirror, and is already able to "identify almost exactly where its arm is in space based on [a] mirror image" (New Scientist, 22.08.12). This may not sound like much, but the so-called mirror test is a key psychological experiment used to demonstrate self-awareness. Only a few non-human animals are able to recognise themselves in mirrors (including, off the top of my head, chimps, elephants, and dolphins), whilst in human children it forms a key stage in normal cognitive development (usually at around 18 months). So making a robot that is able to pass this test would be a major development in AI research. 

It's impressive stuff, but what's particularly interesting is how they've programmed it to do this. According to this article, the robot compares feedback from its own arm movements with visual information from its 'eyes', and determines whether or not the arm that it is seeing belongs to it by checking how closely these match. This use of the robot's body to carry out cognitive tasks fits well with the enactive model of vision, whereby we learn about the world through moving and acting in it. It's certainly an improvement on previous models of AI research, which have tended to focus on 'off-line' solutions, forming representations and computing a response based on these. By harnessing elements of our environment (which includes our own body), both we and robots like Nico are able to minimise the cognitive and computational load compared with purely representational solutions. (See Clark 2003 for an accessible discussion of such 'off-loading' strategies.)
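
The matching strategy is easy to sketch in outline. The following is my guess at the shape of the computation, not the Yale team's actual code, and the tolerance value is invented: compare the arm positions predicted from motor feedback with those seen in the mirror, and claim the arm as "self" only if the two track each other closely:

```python
import math

def self_recognition(proprioceptive_path, seen_path, tolerance=0.05):
    """Attribute the seen arm to the self if its observed positions track
    the positions predicted from the robot's own motor feedback."""
    error = sum(math.dist(p, s) for p, s in zip(proprioceptive_path, seen_path))
    return error / len(seen_path) < tolerance

own_arm = [(0.10, 0.20), (0.15, 0.25), (0.20, 0.30)]    # predicted from motors
mirror_arm = [(0.11, 0.20), (0.15, 0.26), (0.20, 0.31)]  # extracted from vision
print(self_recognition(own_arm, mirror_arm))  # True: "that arm is mine"
```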

This kind of research is very exciting, and self-representation is certainly an important step in developing truly intelligent AI, but it strikes me that by focusing on one specific problem like this, researchers risk missing the overall picture. It's all well and good designing a robot that can recognise itself, and another robot that can traverse rough terrain, and yet another robot that can recognise visual patterns, but we'll only start getting truly impressive results when all these abilities are put together. I'm convinced that some elements of human cognition are emergent, only appearing once we reach a critical mass of less advanced capabilities, and how this occurs might not become apparent until we've achieved it. Designing and programming solutions, in advance, for absolutely everything that we might want a robot to do just isn't feasible. Intriguingly, Nico seems to have been originally designed to interact with children, which I'll admit is more promising. There's nothing wrong with tackling AI problems in isolation; we just have to remember that eventually we should be looking toward forming these solutions into a coherent whole.

More on this below...

Which leads me, somewhat tenuously, to my next topic. Anderson (2003: 121-5) discusses some interesting proposals from Paul Dourish concerning the future of human-computer interfaces (i.e. the way in which we interact with and make use of computers). For as long as computers have existed, this has largely been constrained by the limitations of the computers, meaning that how we interface with them has not always been ideally suited to our human limitations. The difficulties which many people find with even the simplest computer task attest to these limitations. Research in both 'embodied' AI and embodied cognition is beginning to suggest some alternative ways in which human-computer interfaces might be designed.

As an example of one such alternative Anderson gives the "marble answering machine", which I believe Clark (2003) also discusses. This machine, illustrated above, functions just as a regular answering machine does, but instead of an electronic display or automated message, it releases a different marble for each message recorded. Each marble is unique, and returning it to the machine elicits the playback of the particular message that it represents. Thus, in a very tangible and intuitive way, the user is able to keep track of their messages by handling, even removing, the physical marbles. Similar interfaces could be imagined for many other simple computers (for that is all an answering machine is), or could even be scaled up to the complexity of a desktop PC or laptop.
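
Stripped of its tangible interface, the machine's logic is trivial - which is rather the point: the design work is all in making digital state physically graspable. A hypothetical sketch of the underlying bookkeeping, on my reading of the description above:

```python
# Each recorded message gets a unique physical token (a marble);
# returning the token to the machine plays its message back.
class MarbleAnsweringMachine:
    def __init__(self):
        self.messages = {}   # marble id -> recorded message
        self.next_marble = 0

    def record(self, audio: str) -> int:
        """Record a message and release a unique marble for it."""
        marble = self.next_marble
        self.messages[marble] = audio
        self.next_marble += 1
        return marble

    def replay(self, marble: int) -> str:
        """Return a marble to the machine to play back its message."""
        return self.messages[marble]

machine = MarbleAnsweringMachine()
m = machine.record("Hi, it's your mother...")
print(machine.replay(m))
```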

Here Anderson makes an interesting contrast between this "tangible computing" and another direction that human-computer interfaces might take: virtual reality (2003: 124). He views the latter as being distinctly unfriendly to humans, drawing them into the world of the computer as opposed to drawing the computer out into the world of the human. I think there's room for both approaches, but this seeming dichotomy between the two worlds, one physical and one virtual, is certainly a striking image. What's also striking is the continued interaction between embodied cognition, robotics and AI, and computing, and just how fruitful it can be for all concerned. Once again I am struck by the hugely positive potential for interdisciplinary co-operation, particularly when it comes to philosophy and cognitive science.
 
  • Anderson, M. 2003. "Embodied Cognition: A Field Guide." Artificial Intelligence 149: 91-130.
  • Clark, A. 2003. Natural Born Cyborgs. Oxford: OUP.