Tuesday 30 April 2013

Hedge(hog)ing Your Bets: Animal Consciousness, Ethics and the Wager Argument


I want to begin fleshing out an argument I've been mulling over. It’s far from a comprehensive thesis. Rather, I want to use this blog to sketch out some preliminary ideas. The argument takes off from the notion that whether or not animals are conscious informs the importance of human-animal interaction and dictates the course of animal ethics.

A hedgehog struggling to remain conscious... 
I want to explore the idea that treating animals as if they are conscious carries moral weight from the perspective of a cost-benefit analysis. The “wager argument” starts with the premise that we have a choice to treat animals either as if they are conscious or as if they are not. I will assume for now that consciousness includes the capacity to feel physical and emotional sensations, such as pain and pleasure, from a familiar first-person perspective (I’m strategically evading the problem of defining consciousness for now, but I’m fully aware of its spectre; see below).

Animals wagering. Not what I'm talking about.
The argument looks something like this: you are better off treating animals as if they are conscious beings, because if they are indeed conscious beings you have done good, but if they are not conscious beings then you have lost nothing. Alternatively, if you treat animals as if they are not conscious, and they are, you have caused harm. It is better to hedge your bet and assume animals are conscious.

To paraphrase Pascal, the argument says “if you gain you gain much, if you lose you lose little”. With Pascal’s wager the gain is something like eternal life, and the loss is avoidable annihilation. Some might also include the avoidance of, or progression to, hell (though Pascal himself never mentions hell). For us, the gain is a better world, or the avoidance of a worse one.

Pascal.  I'll wager he Blaised his way through academia... (sorry).

Here's the argument in boring step-by-step premises:

P1 An animal is a being that is conscious or is not conscious.
P2 We may treat an animal as if they are conscious or as if they are not conscious.
P3 Treating a conscious being as if it is conscious or as if it is not conscious bears morally significant differences.
P4 Treating an animal as if it is not conscious when it is conscious will (practically) bear morally significant harm.
P5 Treating an animal as if it is not conscious when it is not conscious will bear no morally significant difference.
P6 Treating an animal as if it is conscious when it is not conscious will bear no, or negligible, morally significant difference.
P7 Treating an animal as if it is conscious when it is conscious will (practically) bear morally significant good, or at the very least will bear no morally significant difference.
P8 We ought to behave in a way that promotes morally significant good, or at least avoids morally significant harm.
C We ought to treat animals as if they are conscious.

Note that by “practically” I mean that it does not necessarily follow as a logical result, but follows as a real-world likelihood.
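The decision structure of the wager can be made explicit with a toy payoff matrix. The following Python sketch is purely illustrative: the numeric "payoffs" are stand-ins for morally significant good and harm, not measurements of anything, and the worst-case comparison is just one way (a maximin rule) of formalising "hedge your bet".

```python
# Illustrative payoff matrix for the wager argument.
# Rows: our policy; columns: whether animals are in fact conscious.
# The numbers are arbitrary stand-ins for moral good (+) and harm (-).
payoffs = {
    ("treat_as_conscious", "conscious"): 1,        # P7: morally significant good
    ("treat_as_conscious", "not_conscious"): 0,    # P6: no (or negligible) difference
    ("treat_as_not_conscious", "conscious"): -1,   # P4: morally significant harm
    ("treat_as_not_conscious", "not_conscious"): 0,  # P5: no difference
}

def worst_case(policy):
    """Worst outcome a policy can produce across the two possibilities."""
    return min(payoffs[(policy, state)] for state in ("conscious", "not_conscious"))

policies = ("treat_as_conscious", "treat_as_not_conscious")
best = max(policies, key=worst_case)
print(best)  # treat_as_conscious: its worst case (0) beats the alternative's (-1)
```

On these assumed payoffs, treating animals as conscious dominates: it never does worse than the alternative, and sometimes does better. Premise 6 is doing the heavy lifting here, since the whole comparison changes if that cell is negative.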

The argument assumes that whether we think an animal is conscious or not makes a big difference to the way we ought to treat them. It also assumes that treating them as not conscious will lead to harm. How we flesh out "harm" is going to depend on our moral framework, and I think this argument most obviously fits into a consequentialist paradigm.

Regardless, I think the idea is pretty intuitive. If you believe your dog has the capacity for physical and emotional sensation, you are likely to treat her differently than if you think her experience of the world is much the same as a banana's. Within medical testing, we may treat those animals to which we can reasonably attribute consciousness with greater caution regarding harmful experiments. We may altogether exclude conscious beings from butchery, or at least from any practice that might be painful. More radically, we may believe that any being we regard as conscious should be afforded the same sort of moral attention as humans. What matters is a “significant difference”, and this needs to be examined.

The premises obviously need to be elaborated upon, and I already have my own serious criticisms. Two in particular stand out: the problem of treating consciousness as simple and binary; and the assumption in premise 6 that treating animals as if they are conscious, when in fact they are not, will not result in morally significant harm (think, for example, of potential medical breakthroughs via “painful” animal experimentation, or the health benefits of a diet that includes animal protein). I do believe the wager argument has the strength to fight back against such criticisms, but I don’t think it will come away unscathed. In the near future I’ll look at the argument in a little more detail and start examining these criticisms.


Sunday 21 April 2013

Positive Indeterminacy Revisited

(I meant to write this post a few months ago, when I was actually studying Merleau-Ponty. Since then, positive indeterminacy has popped up a few more times, in various guises. Hence "revisited".)

Merleau-Ponty introduces the term "positive indeterminacy" in The Phenomenology of Perception, where he uses it to describe visual illusions such as the Müller-Lyer...

Which line is longer?

 ...and the duck-rabbit. His point is that perception is often ambiguous, and he concludes that we must accept this ambiguity as a "positive phenomenon". Indeterminacy, according to Merleau-Ponty, can sometimes be a feature of reality, rather than a puzzle to be explained.

Is it a duck? Is it a rabbit? Nobody knows!

Positive indeterminacy, then, is the identification of features of the world that are in some sense inherently indeterminate. Quine argues that any act of translation between languages is fundamentally indeterminate, as there will always be a number of competing translations, each of which is equally compatible with the evidence. Of course in practice we are able to translate, at least well enough to get by, but we can never be sure that a word actually means what we think it does. Thus Quine concludes that meaning itself is indeterminate, and that there is no fact of the matter about what a word means.



Quine: a dapper chap

Hilary Putnam comes to similar conclusions about the notion of truth. According to his doctrine of "internal realism", whether or not some statement is true can only be determined relative to a "conceptual scheme", or a frame of reference. Truth is also indeterminate, in that there is no objective fact of the matter about whether or not something is true. Putnam takes care to try and avoid what he sees as an incoherent form of relativism, and stresses that from within a conceptual scheme there is a determinate fact of the matter about truth. Nonetheless, this truth remains in an important sense subjective - it's just that Putnam thinks that this is the best we can hope for.

More recently Dennett has reiterated this kind of "Quinean indeterminacy", with specific reference to beliefs. According to his (in)famous intentional stance theory, what we believe is broadly determined by what an observer would attribute to us as rational agents. In some (perhaps most) situations, there will be no fact of the matter as to which beliefs it makes most sense to attribute. The same goes for other mental states, such as desires or emotions.

Dennett draws attention to Parfit's classic account of the self as another example of positive indeterminacy. There will be cases, such as dementia or other mental illness, where it is unclear what we should say about the continuity of the self. Rather than treating this as a puzzle that we should try and solve, Parfit argues that our concept of self is simply indeterminate, and that there is sometimes no "right" answer.

All of the above cases are much more complex than I have been able to go into here, but they give a taste of the importance of positive indeterminacy. I am most interested in how it can be applied to puzzles in the philosophy of mind, but it seems that it might well be a more fundamental part of how we should think about the world.

Friday 5 April 2013

"What has philosophy done for us..."- Does Philosophy ever make a difference?

“Philosophers never make any difference,” began a recent conversation that never quite happened.

“Yeah, I guess you're right”, I replied, thinking of all the time misspent by so many of my philosophy teachers, sequestered away in their secluded studies.

“Philosophy doesn't actually influence anything”, the dialogue continued.

“Yeah totally. Then again”, I hesitated, “I guess there was that Plato chap, the man who arguably shaped the entirety of western thought for millennia to come and whose ideas moulded the world's biggest religion.”

“Okay. But ignoring a few anomalies, what have philosophers ever done for us?”, they retorted.

Well, if we're going to accept Plato we'll have to allow for Aristotle. He changed history a bit by contributing to all existing academic fields at the time via his philosophical paradigm. He arguably planted seeds for the scientific method and influenced both Christianity and Islam, which sometimes play a part in people's lives even to this day.

Raphael's School of Athens. Some of these dudes may just have changed the course of everything.
In more modern times the odd household name such as Descartes, Rousseau, Locke, Kant or Hume might be credited, for better or worse, for moulding much of the west's current values and institutions. Those who paid attention at school might remember Marx, under whose philosophical system countless revolutions were plotted.

In the 20th century there is Turing, father of computer science, who was arguably philosophically oriented. There's Wittgenstein, whom fellow blogger Bryan Nelson describes as an “unmatched catalyst for creative thought in the 20th century”. In popular culture Russell still frequently materialises. Rawls, Dewey, Simone de Beauvoir, endless philosophically motivated political figures.

Marx. Somewhat influential.

Feminism. Now there's a movement that has affected the lives of millions of people, and if we're going to be generous, we'll have to accept some role played by intellectuals such as Mary Wollstonecraft, Jane Addams, Avital Ronell, Mary Daly...

I suppose if we're going to allow for theologians we might have to accept some minor influence from dudes like Augustine, Thomas Aquinas, Karl Barth, Gregory of Nazianzus, Maximus the Confessor and Gregory Palamas.

But that's about it. And before you say anything, we're not really talking about the East so Confucius, Buddha, Lao Tzu, Zhuang Zi, Dogen, Avicenna, none of them really count (and they probably didn't achieve much anyway).

Tiresome sarcasm aside, the notion that philosophy has not influenced the course of history quickly comes to seem like a philistine dismissal, and recognising its historical potential seems important. Critical readers might argue that many of the most influential so-called philosophers were influential for reasons other than their philosophy, perhaps in spite of their philosophy. However, I think a cursory glance over the biographies of some of the names above quickly weakens that claim. More often than not the philosophy done by these thinkers is integral to the rest of their work. Of course none of this means that philosophy's influence is a good thing; it just affirms its existence. It also does not mean philosophy is always influential. Philosophy is still often, perhaps most often, indulgent and self-contained.

Russell famously makes the appealing claim that philosophy is often the seed for new scientific disciplines (e.g. psychology in the late 19th century, computer science in the 20th, and once upon a time, physics). He says, “...as soon as definite knowledge concerning any subject becomes possible, this subject ceases to be philosophy, and becomes a separate science” (Russell, 1968, 90). This is not a snub; it is a recognition of philosophy's often integral role in constituting practical subjects. As Russell also alludes to, much of philosophy's influence does not come in the formation of grand historical events, though clearly that does happen, but through quieter, subtler, though no less profound, influences on the psychology of individual lives.

Wednesday 3 April 2013

Depression and the Dark Room Problem

Trigger warning: depression, schizophrenia, mental illness

Predictive processing is an exciting new paradigm in computational neuroscience. Its essential claim is that the brain processes information by forming predictions about the world. Depending on who you ask, it's either going to solve everything, or turn out to be relatively uninteresting. I'll maybe discuss it in more detail in a future post, but today I want to focus on just one aspect of the theory.

A central principle driving predictive processing is error minimisation. Each prediction that the brain makes is compared with incoming sensory data, and this generates an "error signal" that reflects any mismatch between the prediction and the data. The brain is then driven to either make a more accurate prediction or modify its environment so as to conform with the inaccurate prediction, in order to minimise this error.

This leads to the so-called "dark room problem". If all we are driven to do is minimise prediction error, then why don't we just lie absolutely still in a dark room, thus enabling the formulation of a stable, accurate prediction? There are several ways of responding to this problem, but all share a general assumption that it is a problem, and that we aren't ever driven towards dark rooms.
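The dark room scenario can be mocked up in a few lines of Python. This is a toy sketch of my own, not a model from the predictive-processing literature: the learning rate, starting prediction and input values are all invented for illustration. With a constant "dark" input, a simple error-minimising update drives prediction error towards zero, whereas a fluctuating world keeps generating fresh error.

```python
def minimise_error(sensory_input, prediction=0.5, rate=0.3, steps=50):
    """Repeatedly nudge the prediction towards a constant sensory input."""
    for _ in range(steps):
        error = sensory_input - prediction
        prediction += rate * error  # update the prediction to shrink the error signal
    return abs(sensory_input - prediction)

def minimise_error_varying(inputs, prediction=0.5, rate=0.3):
    """Same update rule, but the sensory input keeps changing."""
    for x in inputs:
        error = x - prediction
        prediction += rate * error
    return abs(inputs[-1] - prediction)

# A perfectly dark, unchanging room: the residual error all but vanishes.
residual_dark = minimise_error(sensory_input=0.0)

# A changing world (input alternating between two values): the prediction
# can never settle, so a substantial error signal always remains.
residual_noisy = minimise_error_varying([0.0, 1.0] * 25)

print(residual_dark, residual_noisy)
```

On this toy picture the dark room really is the global minimum of prediction error, which is exactly why theorists treat the scenario as a problem needing an answer.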

Now, most of the time this is going to be correct, but on first hearing about the dark room problem my reaction was that actually I sometimes do just want to lie in a dark room. I suffer from periodic bouts of depression, and during these depressive episodes a dark room is pretty much all I can cope with. So perhaps whatever mechanism drives us away from dark rooms in everyday life is switched off during depression?



The Dark Cave Problem

This reminds me of an evolutionary theory of depression that I've heard of, which says that back when we were hunter-gatherers it made sense to occasionally withdraw from the world, as a survival mechanism in case of bad weather or other dangerous circumstances. In cases of depression this mechanism is simply over-sensitive or, in the worst cases, always switched on. I'm not sure how convinced I am by this theory, but let's assume that there is at least a shred of truth in it.

It also fits well with predictive processing and the dark room problem. Predictive processing has already been applied to the positive symptoms of schizophrenia and other delusions (in the form of "false" error signals), and similarly I think we could say that in some cases the dark room problem simply isn't a problem. Depression might be the result of a mechanism that shuts off whatever it is that drives us out into the world, with the result that we are content to minimise error by lying in a dark room.

On the other hand, depression and other mental illnesses are extremely complex, and I remain suspicious of any theory that tries to tell one simple story about them. Better perhaps to treat the dark room as just one of many contributing factors, or even just a useful metaphor.