Showing posts with label moral psychology. Show all posts

Sunday, 1 July 2012

Reasons and Persons: Moral Immorality

(by Joe)

First off, a quick bit of background. I've decided to try and read Derek Parfit's Reasons and Persons (1984) over the summer. It's a thick book, and densely written, so it's going to take me a while. To keep me going I thought I'd blog about each chapter here, at least when I think there's something interesting to say. The book isn't really about the mind, or at least not its biological aspects, but I still think it's relevant to this blog. Parfit investigates ethics, rationality, and personal identity, all of which I consider to be closely related to cognition and the philosophy of mind. In fact, I think a lot of what Parfit's saying could benefit from a closer interaction with the scientific study of the mind-brain-body(-environment?), which is part of what I'm going to try and discuss here.

Derek Parfit

Anyway, on with the show. I've just finished reading the first chapter, "Theories That Are Indirectly Self-Defeating". One thing that particularly caught my attention was Parfit's notion of "moral immorality, or blameless wrongdoing" (1984: 32). I'm not entirely convinced that the notion is coherent, but he argues that one possible outcome of consequentialism could be that we are morally obliged to make ourselves disposed to act in an immoral manner. He gives the example of Clare, who, faced with the choice of saving her child's life or the lives of several strangers, will choose to save her child. Under most consequentialist frameworks she will have acted wrongly - instead of one person dying, several have died - but she only acts this way because she loves her child, and in coming to love her child she may well have acted rightly. Thus we get a situation where she has done wrong, but not in any way that we would blame her for.

The reason that I'm not sure whether this is coherent is that whilst consequentialism might say that, broadly speaking, it is better to save several lives than save one life, it might also say that in this particular situation it is better to act in a way that preserves the possibility of love than to act in a way that does not. So perhaps Clare hasn't acted wrongly? However, coming back to something that I mentioned in my last post, I suspect that it might be more accurate to say that Clare has committed the action that is least wrong. Practical ethics isn't as simple as a binary choice between right and wrong, and often we will have to make extremely difficult moral decisions. In a sense it is this difficulty that characterises truly moral decisions, rather than simply doing what is obviously right. So whilst I wouldn't necessarily choose to use the precise terminology that he does, I think Parfit is on to something quite meaningful when he talks about moral immorality.

He goes on to make a distinction between what we ought morally to believe and what we ought intellectually to believe (Parfit 1984: 43). So whilst Clare ought morally to believe that her love for her child comes before preserving life (as this will in fact result in the best possible world), she ought intellectually to believe that what is best is to save the greatest number of lives. This is a very similar distinction to that made by Joyce (2001), between moral truth and moral fiction. The difference is that whilst Parfit retains a consequentialist moral realism on both sides, Joyce's dichotomy is between the apparent truth of moral irrealism, which means we should be error theorists about morality, and our pragmatically assenting to some kind of moral fictionalism in order to gain some social advantage for ourselves. Joyce characterised the latter as assent rather than belief, but I suggested here that we might be better off viewing it as a separate system of belief, one which we only come to question under certain special circumstances. This would make Joyce's position even more similar to that of Parfit: we ought to convince ourselves to hold certain moral beliefs, even though we consider them intellectually flawed. The only difference is that whilst Parfit thinks we should do this in order to bring about the best possible world (whatever that is), Joyce thinks we should do it to benefit ourselves. In fact, earlier in the chapter Parfit makes precisely this claim, in discussing whether rational egoism might be indirectly self-defeating; he concludes that it is, because it tells us to act irrationally, but that this is not necessarily an argument against it. So Joyce's moral fictionalism is well supported by Parfit's account of rational egoism, even if Parfit doesn't think that, morally speaking, that is the position that we ought to hold.


Joyce, R. 2001. The Myth of Morality. Cambridge: Cambridge University Press.
 
Parfit, D. 1984. Reasons and Persons. Oxford: OUP. (All references from the revised 1987 edition.)

Sunday, 10 June 2012

Accepting Without Believing, or Two Systems of Belief?

(Joe)

In the last few chapters of The Myth of Morality (2001), Richard Joyce lays out a potential system of "moral fictionalism", whereby we could accept moral premises without truly believing in them. This follows a lengthy argument for why we should be "error theorists" about morality, which means that we should consider moral realism to be false. If this is the case, then the most obvious conclusion would be that we should discard morality entirely, whatever that might mean. Instead Joyce wants us to take a fictionalist stance towards morality. By doing this he hopes that we will be able to continue making use of moral discourse, with all the advantages that it brings in terms of social cohesion, but without compromising our epistemological integrity.

This is Richard Joyce. Unfortunately I couldn't think of a better picture to accompany the post.

In Joyce's words, "to make a fiction of p is to 'accept' p whilst disbelieving p" (2001: 189). Without going into too much detail, Joyce thinks that merely accepting a proposition means something like assenting to it, and employing the discourse that it facilitates, without believing it to be true. In the case of moral propositions, this will retain some of the useful imperative force that they impart to our actions, in what Joyce seems to characterise as an almost unconscious manner. So when I, as a moral fictionalist, say "It is wrong to harm another", I am not expressing a belief in some moral truth, but rather in a sense reminding myself that harming others is usually bad for me in the long run, despite any apparent short-term benefits.

In fact, it may be the case that at the time I make that statement, I do truly believe it - what makes me a fictionalist is that when I'm questioned under serious philosophical pressure ("Do you really believe that?"), I will express my disbelief. This leads me to think that we might be able to more accurately model a possible moral fictionalism by talking about it in terms of two separate belief systems. Rather than saying that we accept something without believing it, we could say that under x-conditions we do believe something, but under y-conditions we don't. This seems to me to reflect my own attitude to morality fairly well - most of the time I'm a kind of libertarian-utilitarian, but when I sit down and think hard about morality I find it impossible to truly justify that position.
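To make the two-system picture a little more concrete, here's a toy sketch - entirely my own illustration, not anything Joyce or Parfit proposes, and with the contexts and propositions chosen just for the example - of belief as a context-indexed lookup, where the same proposition gets a different verdict under x-conditions (everyday life) and y-conditions (philosophical scrutiny):

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    """A crude model of an agent with context-dependent beliefs.

    `beliefs` maps a context (the 'conditions' under which the agent
    is operating) to the propositions they assent to in that context.
    """
    beliefs: dict = field(default_factory=dict)

    def believes(self, proposition: str, context: str) -> bool:
        # Absent any entry, the agent withholds assent in that context.
        return self.beliefs.get(context, {}).get(proposition, False)


# A moral fictionalist on this model: one proposition, two verdicts.
fictionalist = Agent(beliefs={
    "everyday": {"harming others is wrong": True},
    "philosophical scrutiny": {"harming others is wrong": False},
})

print(fictionalist.believes("harming others is wrong", "everyday"))
print(fictionalist.believes("harming others is wrong", "philosophical scrutiny"))
```

The point of the sketch is just that nothing contradictory need be stored anywhere: the "contradiction" only appears if you flatten the two contexts into one.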

Humans aren't particularly good at logic, and our irrationality is fairly well documented, so this kind of holding of contradictory beliefs might not be uncommon. Furthermore, I currently believe that consciousness is a fragmentary and dis-unified process, which (if true) could make it even easier to hold radically different beliefs under different circumstances. It might be possible to design experiments that test this kind of two-belief structure, perhaps by looking at how the brain behaves when different kinds of belief are being expressed.

For the most part I agreed with Joyce's book, and on the whole I think that some kind of moral fictionalism will be necessary if we are to retain any kind of morality in the future, but I'm still not sure how exactly that might be realised, and what it might look like.


Joyce, R. 2001. The Myth of Morality. Cambridge: Cambridge University Press.






Thursday, 31 May 2012

Moral Realism and the Evolutionary Challenge

(by Joe)

Right, we're still sticking to the chimps, but this time I'm going to go out on a bit of a limb. I've been reading Primates and Philosophers as research for a paper that I'm planning to submit to Durham's Philosophical Writings, and I'd like to try and flesh out a few ideas here. Partly this is just a convenient way for me to get something solid written down, although I'd appreciate your thoughts and opinions as well.

In Primates and Philosophers (and elsewhere), Frans de Waal argues for the falsity of what he calls "veneer theory", the idea that morality is a "thin veneer" on top of an essentially immoral nature. Instead, he argues, we should see morality as an essential element of human nature, something that can be explained in terms of evolution and, as such, is to some degree continuous with our ancestors and relatives (such as chimpanzees).

As Jonny discusses here, the degree to which morality is found in non-human animals is itself a contentious issue. What I'm interested in is something slightly different, namely what de Waal's argument might mean for what I'm going to call traditional moral realism. Whilst de Waal's characterisation of Veneer Theory is somewhat contentious, I think it does identify something that has traditionally been seen as an important aspect of morality: the concept of moral choice or agency. It's fairly intuitive to think that you can only be held (morally) responsible for doing something if you could have chosen to do otherwise. It hardly seems fair to blame somebody for an action that they did not consciously choose to commit. 

Both de Waal and his commentators in Primates and Philosophers seem to agree that to some extent what sets human morality apart from animal morality (supposing such a thing exists) is rationality. Whilst chimpanzees and other social animals might seem to behave altruistically, they do so because this happens to be their proximate desire (if not necessarily their long-term, evolutionary 'goal'). De Waal's proposed alternative to Veneer Theory is a naturalistic, evolutionary explanation of moral behaviour. I emphasise explanation, because that's precisely what I think it is. De Waal is able to explain how altruistic behaviour and morality more generally might have evolved, but I don't think that this is the same thing as giving an evolutionary account of moral realism. If I only behave morally because I am genetically predisposed to (under certain circumstances), then can I truly be called a moral agent?

I'm not sure. The responders to de Waal (in Primates and Philosophers) for the most part seem to think so, but I find it hard to agree. Peter Singer, for example, is comfortable with the idea that "automatic, emotional responses [...] constitute a large part of our morality" (P&P: 149). Certainly, such evolved responses might make the world a 'better' place, in the utilitarian sense of maximising well-being, but I don't think they constitute real moral agency, which is required for what I'm calling traditional moral realism. So I can't help but feel that evolutionary accounts of apparently moral behaviour tend to undermine traditional moral realism. It's not that I think such accounts are false - quite the contrary, in fact - but rather that if we are going to take them seriously, we will also need to consider their implications for moral realism.

One possibility that I've been considering is what we might call 'pragmatic moral irrealism'. Something of this kind is suggested by Tamler Sommers (2007), who gives a convincing evolutionary account of how the illusion of moral agency might arise, and why it might be beneficial for us to maintain it. I'm about to read The Myth of Morality, by Richard Joyce, which I think might express some similar thoughts. My rough plan for this paper, if I ever get round to writing it, is to demonstrate how 'traditional moral realism' is undermined by evolutionary accounts (which I take to be largely true), before sketching out a possible moral irrealism. I'd be interested to hear about anything similar or relevant to this, as well as any comments anyone has.


de Waal, F. 2006. "Morality Evolved: Primate Social Instincts, Human Morality, and the Rise and Fall of 'Veneer Theory'." In Primates and Philosophers, eds. Stephen Macedo and Josiah Ober. Princeton: Princeton University Press.

Joyce, R. 2001. The Myth of Morality. Cambridge: Cambridge University Press.

Singer, P. 2006. "Morality, Reason and the Rights of Animals." In Primates and Philosophers, eds. Stephen Macedo and Josiah Ober. Princeton: Princeton University Press.

Sommers, T. 2007. "The Illusion of Freedom Evolves." In Distributed Cognition and the Will, eds. Ross et al. Cambridge, MA: MIT Press.

Sunday, 20 May 2012

Rhesus-ons for Considering Primate Morality: Continuous Evolution and Self-awareness


(by Jonny)

What should we make of cases where rhesus monkeys will starve themselves for several days rather than receive food at the expense of electrocuting a fellow monkey? What do we make of a chimpanzee infant that consistently helps a human in reaching tasks without reward (Warneken and Tomasello, 2006)?
Frans de Waal is famous for asking, “What is the difference about the way we act that makes us, and not any other species, moral beings?” (1996:11). What indeed. In “Primates and Philosophers” (2006), de Waal furthers his case for the continuous evolution of, and homologous relationship between, primate and human morality. After reading this great little title I am keen to begin penning some of my own thoughts on the subject of the origin and possibilities of morality in non-human animals.

“Can we consider other non-human animals ‘moral beings’?”; “Is human altruistic behaviour just a novel form of pre-existing capacities we share with our near relatives, or an entirely unique capacity?” My own answers to such questions are inevitably motivated by several key assumptions: that what we regard as human moral behaviour is a natural development; that humans possess at least many of their social behaviours as the result of a continuous evolution from earlier social primates; that language in humans grants unique capacities for conceptualising, reasoning and self-awareness. Where do these assumptions lead me in this debate?

The work of de Waal centres on the argument that we discover the foundations and aspects of our own morality in non-human primates and other animals. When we look at primates and discover their strategies for conflict resolution, cooperation, inequity aversion, and food-sharing, we discover much of what is important about human moral behaviour. This seemingly moral behaviour can become quite advanced.

Importantly for de Waal, humans are not selfish creatures hiding behind a veneer of fabricated rules for mutual benefit. There was never a time when humans were not cooperative, other-concerning social creatures. We are inherently social “moral beings” whose complex moral lives are nevertheless based on more primitive social capacities. There are no non-human animals capable of weaving the same conceptual richness that forms the fabric of human social interactions. Nevertheless, this richness is in an evolutionary continuum with capacities possessed by ancestors that we share with our near relatives. And for de Waal this actually extends beyond what we share with primates to other parts of the animal kingdom. His thoughts are best captured in his own words:

"I've argued that many of what philosophers call moral sentiments can be seen in other species. In chimpanzees and other animals, you see examples of sympathy, empathy, reciprocity, a willingness to follow social rules…” (quoted in Angier, 2001).

The obvious objection is that, despite the fact that we share certain social capacities with our near relatives, this should not distract from the fact that Homo sapiens retains other unique capacities, and it is these capacities that are required for morality. Sensible suggestions for unique capacities include language, the capacity for self-awareness, and the ability for some sort of reasoned deliberation, though the extent to which these features support one another will make specific claims about each difficult.

When pointing out these (potentially) unique human capacities we should be careful not to construct a straw ape. I don’t believe there are many defending the view that the above abilities are not important to human morality. Even if we can’t say that language alone grants us the right to be called moral, it nevertheless turns the issue into a whole new ball game. I don’t think anyone is claiming that chimpanzees have the same rich moral concepts that language grants us.

What is at stake then, is whether there is some capacity not found in any other animal, which is necessary for what we call morality, despite whatever social behaviours we share with them.

Responding to de Waal, Korsgaard argues that what our relatives seem to lack is a cognitive self-consciousness regarding the causes of one's own actions. Korsgaard attacks the assumption that “the morality of an action is a matter of the content of the intention with which it is done” (2006: 107), instead suggesting that what makes us moral beings is our “exercise of self-government” (112). This self-government amounts to the ability to be consciously aware of the reasons on which you intend to act, not merely as the objects of that act, but as reasons. It is not enough to be aware of the object of one’s intended act as, say, a desirable thing; one must further be aware that one desires that object. Humans are aware that they have grounds for acting, not merely of the grounds for which they act. This reason-granting ability (112-113) allows humans not only to form beliefs and intentions based on evidence but to be aware of the evidence and its connection to other states. They can deliberate, reconsider and alter. This autonomy makes us moral. Importantly, Korsgaard stresses that this is an entirely natural development, on a continuous scale of evolved intentionality. This continuity, however, does not change the fact that what is unique to humans is what makes us moral.

I do not disagree with Korsgaard over humanity’s (probably) unique capacity for self-awareness, and the unique conceptions this grants us. I do not deny that humans are uniquely motivated by deliberating on what we “ought” to do, and this in turn plays an enormous role in our moral lives. What I have some doubts about is the requirement that we find this ability in an animal before we can talk about their moral capacities full stop.

Responding in “Primates and Philosophers”, Peter Singer also argues that, similarities aside, non-human animals lack a crucial component for morality. What makes morality morality, what makes one a being capable of moral thinking, is the ability to universalise: the ability to take your considerations and impartially generalise them. Other animals consider and abide by rules concerning their kin and in-group, but “It is only when we make these general, impartial judgements that we can really begin to speak of moral approval and disapproval” (Singer, 2006: 144). Singer rightly points out that only we humans have the reasoning capacity to think abstractly in this way.

I worry that both Korsgaard and Singer set the requirements for morality too high, or make the mistake of not allowing morality to be flexible and evolutionarily continuous enough. I think I am naturally sceptical of the value of strict necessary and sufficient conditions for definitions concerning complex cognitive phenomena. I worry further that Korsgaard and Singer are too influenced by formal Western philosophical traditions. Perhaps the whole debate is fixed within a particular cultural paradigm. Do all peoples have the same monolithic conception of “morality”?

Whilst we can see that Korsgaard’s requirements for morality stem from the respectable Kantian tradition, as Ober and Macedo point out, it is not so clear that we believe “self-government” is required in what we take to be everyday moral acts (2006: xviii). Imagine that I perceive someone inflicting an unnecessary harm on another, perceive it as bad, and consequently interfere; do I need to be aware of my perception of the harm as a cause? Perhaps we could say “not always”, but what matters is that we are capable of such reflection. But I’m not sure this requirement of higher reflection, important though it is, suddenly boosts us into the hitherto unexplored realm of the moral - “morality” refers to something far too ambiguous and complex for that. As for Singer, though again I see the importance universalizability plays in our formal moral conceptions, I worry that much of what we regard as moral in our lives doesn’t actually fit the bill. I will leave this point to another discussion, however, as it requires much greater attention. For now it is worth asking ourselves how much of our “everyday morality” boils down to reciprocity, empathy, conflict aversion and relatively straightforward social rules.

I'm hesitant to use the cliché, but the debate might boil down to “semantics”. Singer seems to define morality as requiring impartial universalizability. At the same time de Waal says, “Moral systems are inherently biased towards the in-group” implying that morality, by definition, need not be impartial. If we can’t reasonably agree on what morality consists of to begin with, it is going to be hard to say when and where we find it. As Ober and Macedo say, it will become a case of comparing apples and oranges (xix).

Darwin believed “the difference in mind between man and the higher animals, great as it is, certainly is one of degree and not of kind”, but added that it was humanity’s unique “intelligence” that produced conscience, “the supreme judge and monitor” (1871, online). Though our social faculties are essentially no different in kind, our unique cognitive capacities produce a novel development.

Perhaps then it is pragmatic to distinguish between the “proto-moral” capacities we find in other species, and the “human-moral” capacities distinguished by something like what Korsgaard or Singer point out; as long as this does not detract from the fact that one has its origins, at least in part, in the capacity for the other. Elsewhere de Waal says, “Non-human primates may not be exactly moral beings, but they do show...key components or 'prerequisites' of morality recognizable in social animals...reciprocity, empathy, sympathy, and community concern” (2000:3).

And that's probably enough for one post! There's clearly a lot more to be said on the issues raised here, and I hope to return to develop my thoughts on them within the near future.



Angier, N. 2001. "Confessions of a Lonely Atheist." The New York Times Magazine. http://partners.nytimes.com/library/magazine/home/20010114mag-atheism.html. Retrieved 11th March 2011.

Flack, J. C., and de Waal, F. 2000. "'Any animal whatever': Darwinian building blocks of morality in monkeys and apes." Journal of Consciousness Studies, 7 (1-2), 1-29.

Korsgaard, C. M. 2006. "Morality and the Distinctiveness of Human Action." In Primates and Philosophers, eds. Stephen Macedo and Josiah Ober. Princeton: Princeton University Press.

Singer, P. 2006. "Morality, Reason and the Rights of Animals." In Primates and Philosophers, eds. Stephen Macedo and Josiah Ober. Princeton: Princeton University Press.

de Waal, F. 1996. Good Natured: The Origins of Right and Wrong in Humans and Other Animals. Cambridge, MA: Harvard University Press.

de Waal, F. 2006. "Morality Evolved: Primate Social Instincts, Human Morality, and the Rise and Fall of 'Veneer Theory'." In Primates and Philosophers, eds. Stephen Macedo and Josiah Ober. Princeton: Princeton University Press.

Warneken, F., and Tomasello, M. 2006. "Altruistic Helping in Human Infants and Young Chimpanzees." Science, 311 (5765), 1301-1303. DOI: 10.1126/science.1121448.