Saturday 23 June 2012

Embodied Ethics

(by Joe, with credit to Marc Morgan at Trinity College Dublin for inspiring some of these thoughts. Marc is a contributor at socialjusticefirst.)

I used to think that the majority of actions were morally neutral, and that only those things that caused harm or suffering could be classified as 'bad'. In and of themselves I wouldn't have said that lying was wrong, or sleeping good. Only when coupled with contingencies such as the lie being malicious, or the sleep necessary to rejuvenate the mind and body, could these things be considered in any way moral. My practical ethics are still largely consequentialist, but I've been reconsidering how I classify things within that framework.


A body.

When it comes to practical, applied ethics (certainly the most important kind of ethics), we need to look at everything in context. Whether an action is good or bad, whether it causes harm, will depend on so many contingent factors that it is extremely difficult to make accurate ethical judgments in advance. The best we can hope for is to establish guiding heuristics that will help us to make moral decisions in the future. With that in mind, let's return to the classic example of lying.

As I mentioned above, I used to say that lying was only immoral if it caused harm. That's still basically what I think, only now I'd be tempted to expand harm to include more subtle effects like the degradation of the liar's moral character, and the long-term instability of a relationship built on deception. So whilst in the abstract lying might be morally neutral, in practice it could almost always be wrong. Of course there are going to be exceptions, such as if you're sheltering a refugee from a murderous band of thugs, but my moral compass is beginning to swing distinctly towards the "lying is usually wrong" side of things.

We could call this kind of approach "embodied ethics", in that it emphasises the "in the world" nature of moral judgments. Another sense in which ethics should be considered embodied is that it is very much a product of our evolved and biological nature. To a large extent, things are essentially right or wrong to the degree that they facilitate a way of life that is guided by our evolution. Just to be clear, I'm not saying that everything we've evolved to do is inherently right, but only that evolution has guided the way in which we make ethical considerations, as well as defining the things that matter to us. So ethical discourse must be underwritten by an understanding of our biological, embodied nature.

Finally, ethics is embodied because the mind and the self are embodied. As I've written about elsewhere, the potential for the extension of the mind and perhaps even the self has serious ethical implications. More generally, an understanding of morality demands an understanding of the mechanics behind the human mind, and an understanding of how it interacts with the world. Otherwise our ethics will be too abstract to be meaningful - this is why the ethical debates that philosophers sometimes have can seem so odd and out of touch. I'm still working on the full details, but from now on I'm going to try and make sure that my ethical theorising is firmly embodied, in all three of the ways that I've outlined here.

Tuesday 19 June 2012

The Invisible Self

(by Joe)

"What am I? Tied in every way to places, sufferings, ancestors, friends, loves, events, languages, memories, to all kinds of things that obviously are not me. Everything that attaches me to the world, all the links that constitute me, all the forces that compose me don't form an identity, a thing displayable on cue, but a singular, shared, living existence, from which emerges - at certain times and places - that being which says "I." Our feeling of inconsistency is simply the consequence of this foolish belief of the permanence of the self and of the little care we give to what makes us what we are."

My copy of the book looks like this.

There is a muddy area where my philosophical research and my political beliefs meet, and the above quote, from The Coming Insurrection (Invisible Committee, 2009: 31-2), sums it up nicely. The Coming Insurrection was written in 2007 by an anonymous collective (calling themselves 'The Invisible Committee') based in France, and it is clearly strongly influenced by the philosophy of that country, most notably the situationist movement of the 1960s, but also continental philosophy more broadly. It is pompous, vague and quite rightly criticised by many in the left-libertarian circles that I inhabit - Django over at Libcom described it as "a huge amount of hyperbole and literary flourish around some wafer-thin central propositions". Nonetheless, the approach towards the self expressed in the above extract appeals to me.

Put very crudely, I think that the self is an illusion or an abstraction, a "narrative center of gravity" that helps guide our lives and our interactions with others (Dennett, 1992). The mechanisms behind this formation of the self have evolved for a reason, and for pragmatic reasons we shouldn't strive to eliminate it entirely, but to focus on it too much is unhealthy and unhelpful. Such a focus has, since the Enlightenment, led to a heightened sense of individualism throughout the Western world, one which I think is at the heart of our capitalist, consumerist and ultimately selfish culture. We can overcome this individualism by studying what the self truly is, and perhaps eventually realising that it doesn't truly exist.

There is an obvious link with Buddhist philosophy here, one which I am currently trying to learn more about. There is also a somewhat less obvious link with embodied cognition, and in particular the extended mind hypothesis (Clark & Chalmers, 1998). If the self is an illusion constructed by our mind, and that mind is embedded in, or even extended into, its environment, then the self can be thought of as a product of that environment. This could have quite serious consequences, not only for metaphysics and the philosophy of mind, but also for ethics and political philosophy.

Which brings us back to The Coming Insurrection. In the passage I quoted, they describe the sense of "inconsistency" that we feel when we realise that whilst the self is composed of our interactions with things in the world, those things "obviously are not me". The self is invisible, and however hard we try to look for it we can never find it. David Hume expressed a similar feeling when he wrote that "I never can catch myself at any time without a perception, and never can observe any thing but the perception" (A Treatise of Human Nature: Book 1, Part 4, Section 6). We are what we do, and what we do is interact with the world. The focus on the individual over the last few hundred years has clouded that fact, and created an entity, the solid, 'real' self, that does not in fact exist. In coming to understand that who we are is so heavily dependent upon who others are, I hope we might eventually be able to learn to behave more compassionately and co-operatively with other people, as well as with our non-human environment. Satish Kumar embodies this hope in the phrase "You are, therefore I am" (Kumar, 2002), a play on Descartes' "I think, therefore I am", itself a perfect slogan for Enlightenment individuality.

There is also an element of the absurd that is recognised, I think, by both Hume and the Invisible Committee. We are confronted on the one hand with an unshakable conviction in the existence of the self, and on the other with convincing evidence that no such thing exists. Similar absurdity can be found in our struggles with free will, moral realism and even scepticism about the external world. In each case a pragmatic route must be found, one that allows us to go on, but at the same time acknowledges the truths that we have learned about the world. In the case of the self, I think that this means accepting that we are a lot closer to the world around us than our privileged, first person view-point makes it seem, and that in order to survive in such a world we must understand and respect our place in it.

There's a lot more I'd like to say about a lot of things here, but I'll save it for future posts. Otherwise we might get complaints about the lack of monkeys!

Here you go.


Clark, A. & Chalmers, D. 1998. "The Extended Mind." Analysis 58: 7-19.



Invisible Committee, The. 2009. The Coming Insurrection. Los Angeles, CA: Semiotext(e).

Kumar, S. 2002. You Are Therefore I Am: A Declaration of Dependence. Totnes, UK: Green Books.



Tuesday 12 June 2012

Minding the Abyss: World-Building Without Radical Relativism

(by Jonny)

In their influential book “The Embodied Mind” (1991), Francisco Varela, Evan Thompson and Eleanor Rosch made a pioneering journey through many of the themes that the contemporary field of embodied cognition continues to spend a great deal of its resources exploring. One theme that hasn’t caught on in the same way that, say, its emphasis on ecological perception has, is the notion that there is no pre-existing world with a given set of properties, a rejection of the “realism” that pervades most contemporary “analytic philosophy”. They express a position which takes the idea of embodied action, that the way in which we make sense of the world is necessarily dependent on contingent sensorimotor capacities, to imply that a perceiver-independent world is an incoherent notion. Instead, it is perceivers who build worlds out of their contingent physical circumstances. The authors contend that the thesis of the embodied mind leads us to believe that there are no properties out there in the world independent of perceivers, that there is no independent or objective world.

Clearly published in the 90s...

Varela et al seem to believe that embodied cognition must tell us something profound about the metaphysical nature of reality. Yet it seems to me that this belief is an unnecessary chasm-leap of logic, one which, if taken too seriously, threatens the much more mundane claims of this research project. The jump seems to be this: why does the fact that perception is dependent upon action (in turn dependent on a contingent physical apparatus) imply that the world has no independent, given properties? Why does the fact that our own knowledge of the world depends on our particular and happenstance bodily form imply that the world does not exist prior to our particular and happenstance knowledge of it? Such radically relativist theories about the metaphysics of the world do not seem to me to follow. Andy Clark nicely reflects the worry about relativism’s unnecessary influence.


This high tech diagram I just stole from the internet has little to do with what we're talking about and probably won't help. (http://www.unc.edu/~megw/TheoriesofPerception.html)

“Varela et al. use their reflections as evidence against realist and objectivist views of the world. I deliberately avoid this extension, which runs the risk of obscuring the scientific value of an embodied, embedded approach by linking it to the problematic idea that objects are not independent of the mind. My claim, in contrast, is simply that the aspects of real-world structure which biological brains represent will often be tightly geared to specific needs and sensorimotor capacities” (1997: 173).

Andy Clark. Nuff said.

“Continental philosophy”, which I contend at this point in the history of ideas has more time for more radically relativist theories, has influenced and continues to influence embodied cognition in a tolerant and healthy way that should continue. Yet undoubtedly the majority of research within embodied cognition still takes place within the tradition of “analytic philosophy”, which itself assumes a common sense realism, and whilst it is always healthy to question our paradigms, I believe we would be too quick to throw away the ever prudent belief in a world independent of perception.

Nevertheless, I believe relativist theories do teach the traditional analytic approach important lessons, lessons I believe Varela et al touch upon but take too far. The way human agents carve up the world is heavily shaped by our theoretical frameworks: our relative cultural, historical and physical context. Richard Rorty seems right in some sense when he says the concept of “giraffe” is ultimately contingent (1999: xxvi). It seems to me perfectly plausible that some alien species would not perceive the world as containing giraffes. Giraffes (organisms and their categorization) happen to be a useful object for us to conceptualise for obvious evolutionary reasons. However, this does not mean that there is nothing independent of our cognitive systems, independent of our descriptions, independent of our history, which allows us to pick out giraffes. We can learn from Rorty, for example, that “our linguistic practices are so bound up with our other social practices that our descriptions of nature… will always be a function of our social needs” without needing to agree that there is no underlying “dough” from which to cut out cookies (1999: 48). We can likewise learn from Varela et al that our contingent physical makeup must determine the way in which we perceive the world without needing to claim that there is no world beyond our perception.

Clark, A. (1997). Being There: Putting Brain, Body and World Together Again. MIT Press: Cambridge, Massachusetts.

Rorty, R. (1999). Philosophy and Social Hope. Penguin: New York.

Varela, F., Thompson, E. and Rosch, E. (1991). The Embodied Mind. MIT Press: Cambridge, Massachusetts.

Sunday 10 June 2012

Accepting Without Believing, or Two Systems of Belief?

(Joe)

In the last few chapters of The Myth of Morality (2001), Richard Joyce lays out a potential system of "moral fictionalism", whereby we could accept moral premises without truly believing in them. This follows a lengthy argument for why we should be "error theorists" about morality, which means that we should consider moral realism to be false. If this is the case, then the most obvious conclusion would be that we should discard morality entirely, whatever that might mean. Instead Joyce wants us to take a fictionalist stance towards morality. By doing this he hopes that we will be able to continue making use of moral discourse, with all the advantages that it brings in terms of social cohesion, but without compromising our epistemological integrity.

This is Richard Joyce. Unfortunately I couldn't think of a better picture to accompany the post.

In Joyce's words, "to make a fiction of p is to 'accept' p whilst disbelieving p" (2001: 189). Without going into too much detail, Joyce thinks that merely accepting a proposition means something like assenting to it, and employing the discourse that it facilitates, without believing it to be true. In the case of moral propositions, this will retain some of the useful imperative that they impart to our actions, in what Joyce seems to characterise as an almost unconscious manner. So when I, as a moral fictionalist, say "It is wrong to harm another", I am not expressing a belief in some moral truth, but rather in a sense reminding myself that harming others is usually bad for me in the long run, despite any apparent short-term benefits.

In fact, it may be the case that at the time I make that statement, I do truly believe it - what makes me a fictionalist is that when I'm questioned under serious philosophical pressure ("Do you really believe that?"), I will express my disbelief. This leads me to think that we might be able to more accurately model a possible moral fictionalism by talking about it in terms of two separate belief systems. Rather than saying that we accept something without believing it, we could say that under x-conditions we do believe something, but under y-conditions we don't. This seems to me to reflect my own attitude to morality fairly well - most of the time I'm a kind of libertarian-utilitarian, but when I sit down and think hard about morality I find it impossible to truly justify that position.

Humans aren't particularly good at logic, and our irrationality is fairly well documented, so this kind of holding of contradictory beliefs might not be uncommon. Furthermore, I currently believe that consciousness is a fragmentary and dis-unified process, which (if true) could make it even easier to hold radically different beliefs under different circumstances. It might be possible to design experiments that test this kind of two-belief structure, perhaps by looking at how the brain behaves when different kinds of belief are being expressed.

For the most part I agreed with Joyce's book, and on the whole I think that some kind of moral fictionalism will be necessary if we are to retain any kind of morality in the future, but I'm still not sure how exactly that might be realised, and what it might look like.


Joyce, R. 2001. The Myth of Morality. Cambridge: Cambridge University Press.






Sunday 3 June 2012

"Artifical" Intelligence


(by Joe)

Watching the new Ridley Scott film Prometheus last night, I realised that there's something about the term “artificial intelligence” that doesn't quite sit right with me. SPOILER: there's an android (or humaniform robot, to use Isaac Asimov's term) in the film, one that for all intents and purposes behaves and appears like a human. A somewhat odd human perhaps, one that feigns a degree of subservience to those around it, but a human nonetheless. I would certainly be happy to say that it was conscious, and in terms of intelligence it far exceeds almost every other character in the film. However, I'm not so sure that I'm comfortable calling it “artificial”.

This is Idris Elba. He's not an android.

Early on in the film, one of my companions whispered something like “oh, so he's an AI then”, in what I can't help feeling was a slightly dismissive tone of voice. Whilst this is perhaps technically correct, or at least an accurate use of the term, I don't think that I'd have chosen to use it. Maybe I just read too much science fiction, or spend too long thinking about multiple realisability, but to label a conscious system “artificial” in this way seems distinctly discriminatory to me.

Of course, if like John Searle you think that a conscious, thinking robot is necessarily impossible, then this won't bother you very much. I'm not going to argue for the possibility of Strong AI here, but suffice to say I am essentially a functionalist about consciousness, and thus am firmly committed to the possibility of conscious awareness being instantiated in a non-biological system. Such a system, if we had built it, would be “artificial” in the sense that it would be a constructed artefact, but to label it as such risks distorting our understanding of what it actually is. Referring to an intelligent android as an AI distances it from ourselves, putting it in the same conceptual category as a mindless computer or microwave. We would be tempted to treat such a creature as no more than a tool, and there is certainly an air of dominance towards our creations that the term “AI” can only help reinforce.

In fact, the film managed to address this issue. One otherwise very empathetic member of the ship's crew behaves in a distinctly abusive way towards the android, making constant remarks about how inhuman it is, and treating it as little more than a slave. This behaviour was reminiscent (purposefully, I think) of colonial attitudes towards indigenous populations, being patronising, cruel and dehumanising. I don't think that it would be unreasonable to say that this character was being “racist” towards the android, although we perhaps need a new word for this particular form of discrimination. “Instantialist” is somewhat clumsy, but it gets the point across. I believe we will, in the relatively near future, develop computer “minds” that are functionally similar enough to our own to be thought of as conscious, and when this happens we will be faced with an ethical dilemma. Should we be allowed to treat these creations as mere creations, or should they be afforded just as much dignity and respect as any other intelligent life-form? We risk inventing a whole new category of discrimination, one that I believe the term AI, with all its connotations of subservience and inferiority, will only exacerbate.

(The film, by the way, is well worth seeing!)