Friday 31 August 2012

Taking an Embodied Approach to Thought Experiments

Embodied cognition, at least in its more radical guises, argues that to truly understand cognition we must look not only at the brain, but also at the body and the external world. Whether or not this principle can also be applied to consciousness is a contentious topic (see, for example, Clark 2009), but at the very least it would seem to offer a new approach to several of the classic "consciousness" thought experiments. I've already discussed Frank Jackson's "Mary" experiment in light of embodiment, but today I'd like to consider a few others, and see what general lessons we can draw.

My thoughts on this were prompted by reading Noë (2007), who spends some time discussing the hypothetical isolation of a brain, and what, if anything, it would experience. This is in the context of the search for a neural correlate of consciousness (NCC), a region (or regions) of the brain that is sufficient for conscious experience. Neuroscience is often implicitly committed to the existence of an NCC, and several philosophers are explicitly committed to it, advocating what Noë terms the Neural Substrate Thesis: "for every experience there is a neural process [...] whose activation suffices for the experience" (ibid: 1). If the Neural Substrate Thesis (NST) is correct, then neuroscience will eventually discover an NCC.

Noë focuses on two philosophers who advocate the NST, Ned Block and John Searle. Conveniently, both Block and Searle have also made important contributions to the corpus of philosophical thought experiments. Noë's main point is that focusing exclusively on the brain as the seat of consciousness can in fact be very counterintuitive, to the point of rendering some thought experiments almost incoherent. He demonstrates this with a discussion of the following "duplication scenario" (Noë 2007: 11-15), at least inspired by (if not attributed to) Block:

We are asked to imagine that my brain has an exact duplicate, a twin-brain that, if NST is correct, will undergo the exact same conscious experience that I do. Furthermore, if NST is correct, then provided that this brain continues to mimic my own, it doesn't matter what environment we place it in. It might enjoy an identical situation to my brain, or it might be stimulated just so by an expert neurosurgeon, or it might even be dangling in space, maintained and supported by a miraculous coincidence (see Schwitzgebel's discussion of the disembodied "Boltzmann Brain"). In the first couple of cases, Noë agrees that my twin-brain might well be conscious, but only by virtue of its environment (2007: 13). The final case, what he calls a "disembodied, dangling, accidental brain" (ibid: 15), seems to him to be verging on the unintelligible, and I can see his point. At the very least, it is surely an empirical question whether or not such a brain would be conscious, and one that we have no obvious way of answering.

 
These cases reminded me of the classic brain-in-a-vat thought experiments. I've previously held that a brain-in-a-vat would be conscious, and I still do - but with one important caveat. It's only conscious by virtue of the vat itself, and all of the complex stimuli and life-support that it is presumably receiving. If it were simply floating in suspended animation, without any input whatsoever, then I'm not so sure that it would be conscious (or at least not in any familiar sense). That is to say, the brain is not itself conscious, but the extended cognitive system that comprises brain, vat and computer probably is.1

Similar reasoning can be applied to Searle's Chinese Room thought experiment. Clearly the man inside the room doesn't understand Chinese, but that's not the point. The extended cognitive system that is composed of the man, his books, and the room, does seem to understand Chinese. It may even be worthy of being called conscious, although I suspect that the glacial speed at which it functions probably hinders this.

Back to Block. He has argued against functionalism with his China Brain thought experiment. My instinct is that, contra Block, the cognitive system formed by neurone-radios might well be conscious, although the speed at which it operated would give it a unique perspective. Furthermore, it might only be conscious if it were correctly situated, perhaps connected to a human-sized robot or body, as in the original experiment. The pseudo-neuronal system is not enough - it would require the correct kind of embodiment and environmental input to function adequately.

In fact, embodiment concerns might undermine a more radical version of the China Brain proposed by Eric Schwitzgebel. Schwitzgebel argues that complex nation-states such as the USA and China are in fact conscious, due to their functional similarity to conscious cognitive systems. I'm sympathetic to his arguments, but my concern is that such a "nation-brain" might not, in practice, be properly embodied. Aside from structural and temporal issues, it would lack a body with which to interact with the environment, and at best it might enjoy a radically different form of consciousness to our own. Even if it were conscious, we could have difficulty identifying that it was.2 So embodiment is a double-edged sword - it doesn't always support the most radical philosophical conclusions, and it can sometimes end up reinforcing more traditional positions.

1. Literally a few minutes after writing this I realised that Clark (2009: 980-1) makes a very similar point!
2. This is somewhat reminiscent of Wittgenstein's claim that "If a lion could talk, we wouldn't be able to understand it." (1953/2009: #327)

  • Clark, A. 2009. "Spreading the Joy? Why the Machinery of Consciousness is (Probably) Still in the Head." Mind 118: 963-993.
  • Noë, A. 2007. "Magic Realism and the Limits of Intelligibility: What Makes Us Conscious?" Retrieved from http://ist-socrates.berkeley.edu/~noe/magic.pdf
  • Wittgenstein, L. 1953/2009. Philosophical Investigations. Wiley-Blackwell. 

Sunday 26 August 2012

Humans > Computers

I'm going to use this post to discuss a couple of related topics. First up, AI/robotics and a recent development reported here. Then, human-computer interfaces and the embodied cognition paradigm.

Disconcerting, to say the least.
Nico (pictured above), developed by a team at Yale University, is apparently going to be able to recognise itself in a mirror, and is already able to "identify almost exactly where its arm is in space based on [a] mirror image" (New Scientist, 22.08.12). This may not sound like much, but the so-called mirror test is a key psychological experiment used to demonstrate self-awareness. Only a few non-human animals are able to recognise themselves in mirrors (including, off the top of my head, chimps, elephants, and dolphins), whilst in human children it forms a key stage in normal cognitive development (usually at around 18 months). So making a robot that is able to pass this test would be a major development in AI research. 

It's impressive stuff, but what's particularly interesting is how they've programmed it to do this. According to this article, the robot compares feedback from its own arm movements with visual information from its 'eyes', and determines whether or not the arm that it is seeing belongs to it by checking how closely these match. This use of the robot's body to carry out cognitive tasks fits well with the enactive model of vision, whereby we learn about the world through moving and acting in it. It's certainly an improvement on previous models of AI research, which have tended to focus on 'off-line' solutions, forming representations and computing a response based on these. By harnessing elements of our environment (which includes our own body), both we and robots like Nico are able to minimise the cognitive and computational load compared with purely representational solutions. (See Clark 2003 for an accessible discussion of such 'off-loading' strategies.)
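The matching strategy described above can be sketched in a few lines of code. This is purely my own toy illustration, not Nico's actual implementation: it assumes the robot can record its own motor commands and the motion it observes in the mirror as two numerical traces, and classifies the observed limb as "mine" if the traces are strongly correlated.

```python
def is_own_arm(motor_feedback, observed_motion, threshold=0.9):
    """Toy self-recognition check: the observed limb counts as 'mine'
    if its motion closely tracks the robot's own motor commands.
    (Hypothetical sketch; not Nico's real algorithm.)"""
    n = len(motor_feedback)
    mean_m = sum(motor_feedback) / n
    mean_o = sum(observed_motion) / n
    # Pearson correlation between the two movement traces
    cov = sum((m - mean_m) * (o - mean_o)
              for m, o in zip(motor_feedback, observed_motion))
    sd_m = sum((m - mean_m) ** 2 for m in motor_feedback) ** 0.5
    sd_o = sum((o - mean_o) ** 2 for o in observed_motion) ** 0.5
    if sd_m == 0 or sd_o == 0:
        return False  # no movement to compare against
    return cov / (sd_m * sd_o) > threshold

# A mirror image of the robot's own arm tracks its commands almost exactly,
# whereas someone else's arm moves independently.
commands = [0.1, 0.5, 0.9, 0.4, 0.2]
mirror = [c + 0.01 for c in commands]      # near-perfect match
someone_else = [0.8, 0.1, 0.3, 0.9, 0.5]   # unrelated motion

print(is_own_arm(commands, mirror))        # True
print(is_own_arm(commands, someone_else))  # False
```

The point the sketch makes is the embodied one: self-recognition here is not a stored self-representation being consulted, but an ongoing comparison between acting and seeing.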

This kind of research is very exciting, and self-representation is certainly an important step in developing truly intelligent AI, but it strikes me that by focusing on one specific problem like this, researchers risk missing the overall picture. It's all well and good designing a robot that can recognise itself, and another robot that can traverse rough terrain, and yet another robot that can recognise visual patterns, but we'll only start getting truly impressive results when all these abilities are put together. I'm convinced that some elements of human cognition are emergent, only appearing once we reach a critical mass of less advanced capabilities, and how this occurs might not become apparent until we've achieved it. Designing and programming solutions, in advance, for absolutely everything that we might want a robot to do just isn't feasible. Intriguingly, Nico seems to have been originally designed to interact with children, which I'll admit is more promising. There's nothing wrong with tackling AI problems in isolation; we just have to remember that eventually we should be looking toward forming these solutions into a coherent whole.

More on this below...

Which leads me, somewhat tenuously, to my next topic. Anderson (2003: 121-5) discusses some interesting proposals from Paul Dourish concerning the future of human-computer interfaces (i.e. the way in which we interact with and make use of computers). For most of their history this has largely been constrained by the limitations of the computers themselves, meaning that how we interface with them has not always been ideally suited to our human limitations. The difficulties which many people find with even the simplest computer task attest to these limitations. Research in both 'embodied' AI and embodied cognition is beginning to suggest some alternative ways in which human-computer interfaces might be designed.

As an example of one such alternative Anderson gives the "marble answering machine", which I believe Clark (2003) also discusses. This machine, illustrated above, functions just as a regular answering machine does, but instead of an electronic display or automated message, it releases a different marble for each message recorded. Each marble is unique, and returning it to the machine elicits the playback of the particular message that it represents. Thus, in a very tangible and intuitive way, the user is able to keep track of their messages by handling, even removing, the physical marbles. Similar interfaces could be imagined for many other simple computers (for that is all an answering machine is), or could even be scaled up to the complexity of a desktop PC or laptop.
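The interface described above amounts to a very simple mapping between physical tokens and stored data. As a rough sketch (my own toy model, with invented names, not anything from Anderson or Clark), the machine's logic might look like this:

```python
import itertools

class MarbleAnsweringMachine:
    """Toy model of the tangible interface described above: each recorded
    message is bound to a unique physical marble, and returning that
    marble to the machine replays its message. (Hypothetical sketch.)"""

    def __init__(self):
        self._marble_ids = itertools.count(1)  # each marble gets a unique id
        self._messages = {}                    # marble id -> recorded message

    def record(self, message):
        """A new message rolls out of the machine as a fresh marble."""
        marble = next(self._marble_ids)
        self._messages[marble] = message
        return marble

    def play(self, marble):
        """Returning a marble to the machine elicits its message."""
        return self._messages[marble]

    def discard(self, marble):
        """Throwing a marble away deletes the message it stands for."""
        del self._messages[marble]

machine = MarbleAnsweringMachine()
m1 = machine.record("Call your mother back")
m2 = machine.record("Dentist at 3pm")
print(machine.play(m1))  # Call your mother back
machine.discard(m2)      # the message is gone with the marble
```

What makes the real design "tangible" is that the marble id isn't an abstract number on a screen but a graspable object: the state of the system is literally spread out on your desk.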

Here Anderson makes an interesting contrast between this "tangible computing" and another direction that human-computer interfaces might take: virtual reality (2003: 124). He views the latter as being distinctly unfriendly to humans, drawing them into the world of the computer as opposed to drawing the computer out into the world of the human. I think there's room for both approaches, but this seeming dichotomy between the two worlds, one physical and one virtual, is certainly a striking image. What's also striking is the continued interaction between embodied cognition, robotics and AI, and computing, and just how fruitful it can be for all concerned. Once again I am struck by the hugely positive potential for interdisciplinary co-operation, particularly when it comes to philosophy and cognitive science.
 
  • Anderson, M. 2003. "Embodied Cognition: A Field Guide." Artificial Intelligence 149: 91-130.
  • Clark, A. 2003. Natural Born Cyborgs. Oxford: OUP.


Friday 24 August 2012

First Impressions of The Cambridge Declaration on Consciousness

(by Jonny)

In a fascinating recent move, various researchers at a meeting at Cambridge University, including cognitive neuroscientists, neuropharmacologists, neurophysiologists, neuroanatomists and computational neuroscientists, signed a declaration voicing their support for the notion that homologous circuits and activity within non-human animal brains demonstrate consciousness. Any such exciting claim requires careful reading. Their declaration is summarised thus:

“The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Nonhuman animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.”
The problem with the declaration is that it is far too quick to throw around important words, like “affective states”, “emotions” and, more importantly, “consciousness”, without carefully defining them. This isn’t pedantry; it is a necessity given the ambiguity of key concepts within the debate.

The idea of looking at neural correlates for demonstrating consciousness is interesting in itself, and I do think it has some value. The logic seems founded on the idea of looking at the neural activity when humans are performing or undergoing x (presumably taking x to involve some unquestionably “affective” or “emotional” state), then discovering some parallel activity in animals. But without much hint of their reasoning (and yes, I understand this is just a declaration, but this seems to me to be the keystone), the declaration takes this parallel activity to be obviously a sign of consciousness. In short, we need a good clear definition of consciousness before we start talking about it in important contexts. I’m not saying the signees are wrong in their conclusions, but that they are overly ambitious for a two-page declaration, or out of touch with the necessities of the debate.

As an aside, here are my two favourite comments from the article about this on io9:

“ if science says this is correct, it is. end of story.”

And,

“Animals don't have souls or a conscience, they were put here on the Earth to serve man. This is the truth from the Lord our God himself as written in his Holy Bible.”

It’s nice to see a variety of constructive opinion keeping the debate alive!

Responsible Yet Irrelevant

"The idea - that the bottom level, though 100 percent responsible for what is happening, is nonetheless irrelevant to what happens - sounds almost paradoxical, and yet it is an everyday truism."

Inspired by this line from Douglas Hofstadter's I Am a Strange Loop (2007: 42), I want to talk a little bit about levels of explanation - what they are, and why they're so important in cognitive science. As Hofstadter indicates, the idea that there can be differing levels of explanation should be a familiar one. Consider something as simple as crossing a road. How do we explain it?


We can say that you wanted to get to the other side, that you looked left and right, then left again, then began to walk across, that your brain sent a signal telling your arms, legs and hips to move in unison with one another, that certain chemicals were released and certain electrical currents travelled down your nerves, or even simply that a whole lot of atoms interacted with one another. Already we have five different levels of explanation, although some of them might seem to blur a little at the boundaries.

In the 'hard sciences' (biology, chemistry, physics), levels of explanation are also common. Each discipline will describe the same phenomenon in a very different way. Biology will talk at the level of the organism, chemistry at the level of molecular interactions, and physics at the level of atoms or smaller1. Ultimately the explanations used by physicists underlie all other explanations - this is the sense in which they are "responsible" for what is happening. Yet at the same time, they are often "irrelevant" as well. When we are describing how and why someone crossed a road, the precise atomic processes taking place make no real difference to our description (although of course it's important that certain kinds of atomic processes are taking place). Furthermore, an accurate description at this level of explanation would be hopelessly complex, maybe even impossible to compute. So the abstraction of talking at a higher level of explanation allows us to make sense of the world.

Back to cognitive science. When we study the mind, levels of explanation become particularly important, and getting them wrong can lead to serious misunderstandings. Obviously most of what cognitive science studies is underwritten by neuroscience (or at least, biology more generally), but we can nonetheless get things done without referring to neuroscientific levels of explanation. Many psychologists study human (and animal) behaviour without worrying too much about what is precisely going on at the neuronal level. Many philosophers discuss theories of mental representation and consciousness without ever looking at the empirical work of the psychologists, let alone the neuroscientists. They are able to do this because whilst the lower levels of explanation are ultimately responsible for one's own level, they are often not particularly relevant.

Often, but not always, and so I don't think Hofstadter is totally correct when he says that the lower levels are "irrelevant". Whilst a lot of the time it's going to be unhelpful to try and explain consciousness in terms of quarks and Higgs bosons, it's also important not to lose sight of the essentially physical nature of mental phenomena. Too much abstraction can lead to confusion. I think a lot of philosophical thought experiments fail to appreciate this, and end up leading us to meaningless conclusions that have absolutely no real world application. (My thoughts on Frank Jackson's 'Mary' experiment reflect this worry.) It's why scientists sometimes laugh at philosophers, accusing us of just making things up. At its best, philosophy can provide a level of explanation that is symbiotic with those below it, bringing clarity and organisation to the complexities of scientific research. At its worst, philosophy deals in empty abstractions and ungrounded concepts, neither of which are relevant or responsible. So whilst the philosophy of cognitive science can often afford to ignore aspects of the more fundamental levels of explanation, it's also essential that it doesn't lose sight of the grounding in physical reality that those levels can provide.

1. Although it's worth acknowledging the increasing cross-over between these levels, with the likes of biochemistry providing interdisciplinary "bridges".

  • Hofstadter, D. 2007. I Am a Strange Loop. New York: Basic Books.

Saturday 18 August 2012

A sign of the times?

(by Jonny)

I work on a shop floor. In between the inevitable chores that any quality retail outfit demands is the less inevitable philosophical discourse that frequently arises between staff. Yesterday the topic of the day was personal identity. What I particularly enjoyed about the discussion was the surprising, for me anyway, acceptability of what I thought was a non-conventional view of the self. Not that everyone was in precise agreement, but there was a consensus rejection of the “traditional view” that there exists a distinct and fundamental self which essentially constitutes “us”, which is independent of the complex interaction of varied biological processes and is ultimately, if not always completely, at the helm of the gross body. The rejection of this view in favour of the idea that the self is far more fragile, far more contingent, and without a single central executive, was a pleasant discovery.

It reminded me of a class a couple of years ago when my lecturer polled the class asking whether they would willingly enter Nozick's famous experience machine (in short, a machine capable of producing in the subject an artificial life consisting of whatever desirable or pleasurable experiences she should want). When a little over half said they would be willing to enter the machine he responded, unfazed, that this was a consistent trend among contemporary undergraduates that contradicted the results of polls taken in the 70s, when the thought experiment was first introduced in Anarchy, State, and Utopia (1974). Though I'm a fan of thought experiments (usually because they sound like cool ideas for scifi stories), it does make you question the value of generalised results. In this case, the initial results may not be the refutation of a certain utilitarianism that some might like to think they are.

My experience machine life would basically look like a prog rock cover
In any case these discussions continue to prove that philosophical dialogue is as popular as it ever was and will continue to be, and further that we should be cautious about inferring intuitions or making sweeping conclusions about society's predominant convictions.

  • Nozick, R. 1974. Anarchy, State, and Utopia. New York: Basic Books.

Wednesday 15 August 2012

Drescher on False Reification

In Good and Real, an ambitious attempt to "demystify paradoxes from physics to ethics", Gary Drescher discusses the "false reification" of concepts in the philosophy of mind (2005: 50ff). The fallacy of reification is familiar in other areas of philosophy, but to my knowledge Drescher is the first to apply it specifically to consciousness (although he acknowledges Dennett [1991] as a source of inspiration). Today I want to discuss a few of his insights, and I'll maybe go into more detail with my thoughts on them in a future post.

First off, what is false reification? It occurs when we mistakenly interpret our empirical observations as identifying a new and distinct entity. In the case of consciousness, that basically means identifying "being conscious" as a property over and above the cognitive processes that we are conscious of. A simple, non-cognitive example of false reification is the historical notion of vitalism. It used to be believed that there was a separate life-force that endowed living things with life, animating them in a way that non-living things could not emulate. We know now that no such vital life-force exists, and that being alive is in fact no more than a function of the biological processes that compose living things. Whereas vitalism supposed that biological processes involved an extra 'spark of life', modern biology simply identifies life with certain biological processes. We can say that vitalism falsely reified life, believing it to be a distinct entity or property over and above the physical processes that instantiate it.

Similarly, many philosophical puzzles can be neatly side-stepped if we avoid falsely reifying consciousness. A common mistake, according to Drescher, is to view consciousness as being an intrinsic property of mental events that we discover when we examine those events. "Rather," he writes, "the examination of a mental event [. . .] is what constitutes that event's consciousness" (Drescher 2005: 49). Under this interpretation, it is no surprise that whenever we examine a mental event, we find that event to be conscious. Like the light that turns on whenever we open a refrigerator, consciousness 'turns on' whenever we focus on or examine a particular mental event (ibid.). The false reification that we commit here is to think of consciousness as something extra that we must discover within a conscious system, beyond the physical processes that constitute that system.

Quale, not quail.
The false reification of qualia can also result in philosophical confusion. A quale is a philosophical term referring to the conscious sensation of an experience, for example the feeling of what it is like to see red or hear a loud noise. A famous thought-experiment asks what would happen if you were able to 'invert your spectrum' - that is, make everything look the opposite colour to what it does now. So red would look green, blue would look orange, and yellow would look purple (or something like that, the precise details are unimportant). Would you notice any difference? If colour-qualia have an existence independent of the physical process of colour perception, then perhaps you might - but to argue that they do is to commit a false reification. Our conscious experience of a colour just is the act of perceiving that colour, and so the inverted spectrum experiment is simply incoherent. It just isn't possible that we could perceive everything in the same way that we do now, but with the colours inverted. There are no independent qualia that we can switch around in order to make the experiment work.

A final, related false reification can occur when we consider our motivations for certain actions. Put (extremely) simplistically, we are motivated by a desire to experience pleasurable things and avoid painful things. So it seems natural to say things like "you want to eat chocolate because it just tastes good; you want to avoid stubbing your toe because that just feels bad" (Drescher 2005: 77). Intuitively this makes sense, but Drescher thinks that it gets things the wrong way round. There is no property tasty that is intrinsic to chocolate, and no property painful that is intrinsic to toe-stubbing. Rather it is the fact that we have a natural desire for sugar that makes chocolate taste good, and the fact that we have a natural aversion to harming ourselves that makes toe-stubbing painful. So pain and tastiness are constituted by these evolved processes, and to view them as intrinsic properties that we aim for (or aim to avoid) is to falsely reify them.

Thus concludes my whistle-stop tour of Drescher's views of false reification in the study of consciousness. His book is very interesting, although I'm doubtful of his central claims concerning free will and determinism in chapters 5-7. More on those next week, perhaps, or for now you can just re-read my previous post on the topic.

  • Dennett, D. 1991. Consciousness Explained. Little, Brown and Company.
  • Drescher, G. 2005. Good and Real. Cambridge, MA: MIT Press.

Monday 6 August 2012

Mary the Embodied Colour Scientist

(Being the techno-wiz that I am, I've worked out how to add additional authors to the blog. So who's written what should now be both more obvious and less obtrusive...this is Joe, by the way.)

"Mary the Colour Scientist" is a classic thought experiment, originally formulated by Frank Jackson (1982; 1986). For those unfamiliar with it, it goes something like this:

Mary has been raised from birth in an entirely monochrome environment, although she has been provided with a wealth of scientific data on colour and colour perception. She is in fact the world expert in the field, despite having never seen colour. For the purposes of the experiment, we are asked to assume that she knows all there is to know about the physical process of colour perception. What happens when she is released into the world? The thought is that she must learn something about what colours look like, despite already knowing all the physical information about colour and colour perception. Thus, there is more to experiencing colour than just the physical process, and therefore physicalism is false.

Monochrome

There's been a huge quantity of debate about this experiment over the years, and I don't intend to discuss much of it now. Most of the replies can be found in an anthology, There's Something About Mary (2004). What I want to discuss is a potential response, from the perspective of embodied cognition, which I don't think has been discussed in any detail before.

Jackson asks us to imagine that Mary knows all that there is to know, scientifically speaking, about colour. Ignoring the fact that the sheer weight of this claim is often underestimated1, we might want to challenge his interpretation of what this in fact means. He assumes that a fully-detailed factual knowledge of colour perception will necessarily, if physicalism is correct, grant Mary knowledge about what colour looks like. On a traditional understanding of cognition, this seems to be a fair assumption. Knowledge, including phenomenal knowledge, is just in the head, and if we knew how colour perception worked, we would be able to imagine what it would be like to have such perceptions. This is the kind of cognitive materialism that was Jackson's original target, and given those assumptions his experiment can seem fairly convincing. My opinion used to be that we had just failed to grasp what truly complete knowledge of colour perception would be like, and that, contra Jackson, Mary in fact wouldn't learn anything new when she perceived colour for the first time.

However, another possibility has now occurred to me. What if experiencing colour were to be more accurately conceived as an embodied phenomenon, involving not just the brain, but also the visual system and other physiological responses?2 If this were the case, then scientific knowledge of colour perception just wouldn't allow for phenomenal knowledge of what colour looks like. Such knowledge would necessarily be impossible to acquire without actually seeing colour, and not because of any non-physical quality, but simply because that's how colour perception works. Once we shift our focus from the brain to the body as a whole, this seems pretty obvious.

A related kind of response has been made previously, that what Mary acquires when she first sees colour is knowledge, but knowledge of a different kind to that which the scientific understanding of colour perception granted her. Harman (1990), Flanagan (1992), and Alter (1998) all make arguments of this kind. What the perspective of embodied cognition adds to these kinds of responses is a principled stance from which to argue that there is more to knowledge and experience than what goes on in the brain. A more extreme response would be to deny that visual knowledge is ever possible without a world to perceive,3 in which case we would simply deny one of Jackson's original premises, that Mary has total scientific knowledge of colour perception. Such knowledge might not be acquirable without actual experience of colour. This might seem like a cheap way out, but scientifically implausible thought experiments can't easily be given scientifically plausible responses. Once you start discussing things that can't actually happen in the real world, the need to give a real answer sometimes becomes moot.

1. A problem which pervades many thought experiments of this kind. Whenever you're asked to consider a situation where someone has total physical knowledge of a situation, be wary. Such knowledge is way beyond the reach of current science, and might simply be impossible for any human to comprehend. I often feel that the best response to such experiments is a cautious "I don't know what that would be like, so let's withhold our judgement".
2. There's a secondary issue here, which to my knowledge has not been satisfactorily resolved. After a lifetime of monochrome experience, Mary might in fact be totally unable to perceive colour. Experiments on sensory deprivation during visual development suggest that, at best, her perception of colour would take a while to develop, and might never quite reach 'normal'. Oversimplifications of this kind haunt many otherwise powerful thought experiments.
3. Gibson's ecological theory of perception, discussed at length by Rockwell (2005), might entail something like this.

  • Alter, T. 1998. “A Limited Defense of the Knowledge Argument.” Philosophical Studies 90/1: 35–56.
  • Flanagan, O. 1992. Consciousness Reconsidered. Cambridge, MA: MIT Press.
  • Harman, G. 1990. “The Intrinsic Quality of Experience.” Philosophical Perspectives 4, Action Theory and Philosophy of Mind: 31–52.
  • Jackson, F. 1982. "Epiphenomenal Qualia". Philosophical Quarterly 32: 127–136.
  • Jackson, F. 1986. "What Mary Didn't Know". Journal of Philosophy 83: 291–295.
  • Ludlow, P., Nagasawa, Y. & Stoljar, D. (eds.) 2004. There's Something about Mary: essays on phenomenal consciousness and Frank Jackson's knowledge argument. Cambridge, MA: MIT Press.