
Sunday, 28 July 2013

Embodied AI and the Multiple Drafts Model

In "Intelligence without Representation" (1991), Rodney Brooks lays out his vision for an alternative AI project that focuses on creating embodied "Creatures" that can move and interact in real-world environments, rather than the simplified and idealised scenarios that dominated AI research in the 60s and 70s. Essential to this project is the idea of moving away from centralised information processing models and towards parallel, task-focused subsystems. For instance, he describes a simple Creature that can avoid hitting objects whilst moving towards "distant visible places" (1991: 143). Rather than attempting to construct a detailed internal representation of its environment, this Creature simply consists of two subsystems, one which moves it towards distant objects and another that moves it away from nearby objects. By decomposing this apparently complex task into two simple ones, Brooks is able to find an elegant solution to a difficult problem.

Brooks and a robot having a hug

His description of this process is particularly interesting:
Just as there is no central representation there is not even a central system. Each activity producing layer connects perception to action directly. It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviors. Out of the local chaos of their interactions there emerges, in the eye of an observer, a coherent pattern of behavior. There is no central purposeful locus of control. (1991: 145)
This is strikingly similar to Dennett's account of consciousness and cognition under the Multiple Drafts Model (see his 1991). Perhaps that's not so surprising when you consider that both Dennett and Brooks were inspired by Marvin Minsky, but it does lend some theoretical credence to Brooks' work... as well as, perhaps, some practical clout to Dennett's.

  • Brooks, R. 1991. “Intelligence without representation.” Artificial Intelligence, 47: 139-59.
  • Dennett, D. 1991. Consciousness Explained. Little, Brown and Company.

Sunday, 21 April 2013

Positive Indeterminacy Revisited

(I meant to write this post a few months ago, when I was actually studying Merleau-Ponty. Since then, positive indeterminacy has popped up a few more times, in various guises. Hence "revisited".)

Merleau-Ponty introduces the term "positive indeterminacy" in Phenomenology of Perception, where he uses it to describe visual illusions such as the Müller-Lyer...

Which line is longer?

...and the duck-rabbit. His point is that perception is often ambiguous, and he concludes that we must accept this ambiguity as a "positive phenomenon". Indeterminacy, according to Merleau-Ponty, can sometimes be a feature of reality, rather than a puzzle to be explained.

Is it a duck? Is it a rabbit? Nobody knows!

Positive indeterminacy, then, is the identification of features of the world that are in some sense inherently indeterminate. Quine argues that any act of translation between languages is fundamentally indeterminate, as there will always be a number of competing translations, each of which is equally compatible with the evidence. Of course in practice we are able to translate, at least well enough to get by, but we can never be sure that a word actually means what we think it does. Thus Quine concludes that meaning itself is indeterminate, and that there is no fact of the matter about what a word means.



Quine: a dapper chap

Hilary Putnam comes to similar conclusions about the notion of truth. According to his doctrine of "internal realism", whether or not some statement is true can only be determined relative to a "conceptual scheme", or a frame of reference. Truth is also indeterminate, in that there is no objective fact of the matter about whether or not something is true. Putnam takes care to try and avoid what he sees as an incoherent form of relativism, and stresses that from within a conceptual scheme there is a determinate fact of the matter about truth. Nonetheless, this truth remains in an important sense subjective - it's just that Putnam thinks that this is the best we can hope for.

More recently Dennett has reiterated this kind of "Quinean indeterminacy", with specific reference to beliefs. According to his (in)famous intentional stance theory, what we believe is broadly determined by what an observer would attribute to us as rational agents. In some (perhaps most) situations, there will be no fact of the matter as to which beliefs it makes most sense to attribute. The same goes for other mental states, such as desires or emotions.

Dennett draws attention to Parfit's classic account of the self as another example of positive indeterminacy. There will be cases, such as dementia or other mental illness, where it is unclear what we should say about the continuity of the self. Rather than treating this as a puzzle that we should try and solve, Parfit argues that our concept of self is simply indeterminate, and that there is sometimes no "right" answer.

All of the above cases are much more complex than I have been able to go into here, but they give a taste of the importance of positive indeterminacy. I am most interested in how it can be applied to puzzles in the philosophy of mind, but it seems that it might well be a more fundamental part of how we should think about the world.

Friday, 21 December 2012

Gilbert Ryle's Concept of Mind

I'd call this a book review, but I haven't finished the book yet. I am enjoying it though, so I thought I'd write a few words about some of the more relevant themes.

Just chilling, no doubt reading some Wittgenstein

As I mentioned last time, it was Gilbert Ryle who coined the term "ghost in the machine" to refer to the disembodied mind that cognitive science seems intuitively drawn towards. The Concept of Mind is to a large extent aimed at dispelling this intuition, but along the way it also touches upon a number of other fascinating topics. Below is a list of ideas that Ryle either introduces, expands upon, or pre-empts:
  • "Knowing How and Knowing That": This is the title of a whole chapter, wherein he draws a conceptual distinction between the two kinds of knowing. In brief, the first is the skilful execution of an action, the second the reliable recollection of a fact. The "intellectualist legend", according to Ryle, makes the former subordinate to the latter, in that all activities are reduced to the knowledge of certain rules (32). That this reduction is false is fundamental to his broader point - there is no isolated realm of the mental, and all cognitive activity must be expressed through action (or at least the potential for action).
  • Embodied cognition and the extended mind: In the same chapter, he devotes a few pages to the common notion that thinking is done "in the head" (36-40). This notion, he argues, is no more than a linguistic artefact, stemming from the way we experience sights and sounds. Unlike tactile sensations, sights and sounds occur at some distance from our body, and so when we imagine or remember them, it makes sense to highlight this distinction by saying that they occur 'in the head'. By extension, thought, which Ryle conceives of as internalised speech,1 is also said to occur 'in the head'. However, this idiomatic phrase is just metaphorical, and there is no reason that thinking should (or could) occur exclusively in the head.
  • "The Will": Another chapter, this time de-constructing our understanding of volition and action. Suffice to say, Ryle thinks we've got ourselves into a terrible mess, in particular in supposing that to do something voluntarily requires some additional para-causal spark. Rather, to describe an action as voluntary is simply to say something about the manner in which, and circumstances under, it is performed. Free will, under this reading, is something to do with the kind of causal mechanism involved, rather than anything 'spooky' or non-physical.2 Personally I've never found this kind of account particularly convincing, but it is nonetheless influential to this day.
  • Higher-order thought as a theory of consciousness: Although he never explicitly puts it this way, there is a passage where Ryle describes how some "traditional accounts" claim that what is essential for consciousness is the "contemplation or inspection" of the thought process that one is conscious of (131). This is very similar to contemporary 'higher-order' theories of consciousness (see Carruthers 2011). Ryle doesn't exactly approve, dismissing such theories as "misdescribing" what is involved in "taking heed" of one's actions or thoughts.
So there you have it: Gilbert Ryle, largely forgotten but by no means irrelevant. As you may have noticed, a lot of his ideas influenced Daniel Dennett, which isn't surprising, seeing as Dennett studied under Ryle at Oxford.
1. This, perhaps, is one source of Dennett's fable about the origins of consciousness (1991).
2. Again, this is reminiscent of Dennett (2003).
 
References
  • Carruthers, P. "Higher-order theories of consciousness." Stanford Encyclopedia of Philosophy. Retrieved from http://plato.stanford.edu/archives/fall2011/entries/consciousness-higher [21.12.2012]
  • Dennett, D. 1991. Consciousness Explained. Little, Brown and Company.
  • Dennett, D. 2003. Freedom Evolves. Viking Books.
  • Ryle, G. 1949. The Concept of Mind. Hutchinson. 

Saturday, 24 November 2012

A spectre is haunting cognitive science...

...the spectre of Cartesian materialism. If there's been one consistent theme running through my studies over the last two and a half months, it's this. But what is Cartesian materialism, and why is it haunting cognitive science?

A few obligatory words about the man himself before we go any further. René Descartes was a 17th century philosopher and mathematician, probably most famous for the now-infamous words "cogito, ergo sum" - "I think, therefore I am". He also invented the Cartesian coordinate system, which most of you will have been taught, even if you don't know it (it's the classic x-axis, y-axis thing). In modern analytic philosophy he enjoys a dubious status as both the inspiration and the target of many key arguments. It is a great irony that a tradition which owes so much to Descartes also routinely indoctrinates undergraduates against him.

He did most of his philosophising from the comfort of his bed.

Not that this is necessarily a bad thing. Many of Descartes' arguments are terrible, but the intuitions they appeal to remain strong, and his influence (the "spectre" of my title) can be felt throughout cognitive science and analytic philosophy of mind. Foremost amongst these is the intuition that 'mind' and 'body' must refer to two distinctly separate kinds of things. Descartes thought that this meant they must be composed of two separate substances, one physical and extended, the other insubstantial and non-extended. His cogito argument refers to this distinction - my mind, being a thinking thing, seems to exist independently of (and prior to) any physical world.

Empirical philosophy of mind (and thus cognitive science) tends to reject this dualism. Most philosophers of cognitive science (including myself) are physicalists, committed to there being only one kind of substance in the world. Thus the mind must be made out of the same kind of stuff as the body. Despite this commitment, there remains a tendency to conceive of the mind as something special, somehow autonomous from its physical instantiation. This attitude is sometimes called 'property dualism', 'non-reductive physicalism', or, by its opponents, 'Cartesian materialism'.

Classical cognitive science, which dates back to around the middle of the last century, was (and still is) enamoured with the idea that the mind is essentially a computer program. As such it made sense to think of the mind as something distinct from the brain, a kind of "software" running on biological "hardware". This intuition is still strong today, particularly amongst those wanting to give an account of mental representation ("pictures" in the mind), or of the apparently inferential structure of cognition. Traditional functionalist accounts of cognition also tend towards a form of Cartesian materialism, as the multiple realisability requirement means that strict type identity between the mind and the brain is not possible. Whilst in many cases the mind (classically speaking) simply is the brain, it's conceivable that it might take some other form, and so the two are not strictly identical. 

However, recent (and some not-so-recent) work in embodied cognition argues that the physical body might be more important than classical cognitive science assumes. Examples include John Searle's suggestion that some quality of the neurobiological brain might be essential for consciousness (1980: 78), various enactive approaches to perception (championed by Alva Noë), and the dynamical systems approach that argues that cognition is a continuous process involving the brain, body, and environment. Whilst these approaches differ in many respects, they all agree that the mind cannot be conceived of as distinct or autonomous from the body.

Whilst Daniel Dennett takes an essentially computational and functionalist approach to cognition, he has also warned against the risks of Cartesian materialism - in fact, he invented the term. In Consciousness Explained (1991), he argues that many of our confusions about both consciousness and the self stem from Descartes, and that it is essential that we stop thinking about the mind as a single entity located at some discrete location within the brain. His mentor Gilbert Ryle made a similar point in The Concept of Mind, writing about the "dogma of the ghost in the machine" (1949: 17), the disembodied Cartesian mind that somehow controls the body.

A final Cartesian oddity that I have come across recently is found in the phenomenological work of Jean-Paul Sartre. Despite explicitly rejecting the Cartesian concept of the self, he emphasises a distinction between the "being-in-itself" and the "being-for-itself". The former is something like a physical body, and is all the "being" that a chair or a rock can possess, whilst the latter is what makes us special, the uniquely first-person point of view that we seem to enjoy. In making this dichotomy he has been accused of resurrecting a kind of Cartesian dualism, in contrast with another famous phenomenologist, Merleau-Ponty, who saw the self as inherently bound up in its relations to the world.

So there you have it, a whistle-stop tour of Cartesian materialism. I'm aware that I've skimmed over a lot of ideas very quickly here, but hopefully it's enough to illustrate the way in which Descartes is still very much exerting an influence, for better or for worse, on contemporary philosophy of mind.

  • Boden, M. 1990. The Philosophy of Artificial Intelligence. Oxford: OUP.
  • Dennett, D. 1991. Consciousness Explained. Little, Brown and Company. 
  • Searle, J. 1980. “Minds, Brains, and Programs.” Reprinted in Boden 1990: 67-88.

Saturday, 13 October 2012

Merleau-Ponty, Wittgenstein, and philosophical mysticism

I study embodied cognition, an emerging field which has taken considerable inspiration from the phenomenological work of the likes of Merleau-Ponty, Sartre, and Heidegger. As such, I've been attempting to get to grips with phenomenology, which, given my analytic, Anglo-American philosophical education, is a somewhat odd experience. Phenomenology, broadly speaking, was a reaction against both empiricism and idealism, placing primary emphasis on "lived experience" and the act of perception. Merleau-Ponty in particular also focused on the interaction between the perceiver and the world, and it is this sense of "embodiment" that embodied cognition has most taken to heart.

Merleau-Ponty: grumpy

However, there is another side to phenomenology, one which has the potential to be profoundly inimical to the whole project of cognitive science, embodied or not. There is evidence to suggest that Merleau-Ponty, at least, understood phenomenology to be far more than a modification of our psychological methodology. His most famous work, Phenomenology of Perception, is littered with cryptic remarks that undermine any attempt to read it as a work of empirical psychology. He explicitly states that it is a work of transcendental philosophy, aimed at achieving "pre-objective perception". It is not at all clear what this might be, or even whether it can be expressed in words. Throughout the book (which I'll admit I haven't yet read), there is apparently a sense in which many things go unsaid, perhaps even things which will "only be understood by those who have themselves already thought the thoughts".

Wittgenstein: even grumpier

That sounds familiar. The above quote comes from the preface to Wittgenstein's Tractatus (which I have read, although I won't claim to have understood it). Both Wittgenstein and Merleau-Ponty seem to be struggling to express the inexpressible, and both, perhaps, ought to be read as "anti-philosophers", whose mission is not to solve any great problems but to help us understand why there never were any problems in the first place. This is certainly the opinion of a psychology lecturer I know who, under the influence of both Wittgenstein and Merleau-Ponty, seemed shocked that we philosophers might still be trying to solve the "problem" of consciousness. Whilst I think this is somewhat arrogant (and ignorant), it is true that both Merleau-Ponty and Wittgenstein regarded analytic philosophy as curiously misguided, tied up in knots of its own creation.

In light of this, it may seem odd that half a century later analytic philosophy continues to venerate Wittgenstein, and that analytic philosophy of mind, or at least a certain strand of it, has recently adopted Merleau-Ponty as something of an idol. If both or either of them were right, surely we're completely missing the point? In fact I don't think this should worry us too much. Neither Merleau-Ponty nor Wittgenstein was perfect, and much of what they wrote may have been as confusing to them as it is to us. What is important is to pay attention to the issues that they do highlight, and to take to heart anything that does make sense to us. Daniel Dennett takes this approach with regard to Wittgenstein (in Consciousness Explained and elsewhere), and Shaun Gallagher and Dan Zahavi seem to be doing something similar in The Phenomenological Mind, where they attempt to apply phenomenological insights to contemporary cognitive science. Regardless of whether or not either Merleau-Ponty or Wittgenstein would have approved, I find this approach extremely useful, and phenomenologically speaking, perhaps this is all that should matter. It is, after all, my lived experience, not Merleau-Ponty's!

(Some credit should go to the phenomenology reading group at the University of Edinburgh, with whom I discussed much of the content of this post. Any errors or misunderstandings, however, are entirely my own.)

  • Dennett, D. 1991. Consciousness Explained. Little, Brown and Company.
  • Gallagher, S. & Zahavi, D. 2008. The Phenomenological Mind. London: Routledge.
  • Merleau-Ponty, M. 1945/1962. Phenomenology of Perception. London: Routledge.
  • Wittgenstein, L. 1921/1991. Tractatus Logico-Philosophicus. New York: Dover.

Wednesday, 15 August 2012

Drescher on False Reification

In Good and Real, an ambitious attempt to "demystify paradoxes from physics to ethics", Gary Drescher discusses the "false reification" of concepts in the philosophy of mind (2005: 50ff). The fallacy of reification is familiar in other areas of philosophy, but to my knowledge Drescher is the first to apply it specifically to consciousness (although he acknowledges Dennett [1991] as a source of inspiration). Today I want to discuss a few of his insights, and I'll maybe go into more detail with my thoughts on them in a future post.

First off, what is false reification? It occurs when we mistakenly interpret our empirical observations as identifying a new and distinct entity. In the case of consciousness, that basically means identifying "being conscious" as a property over and above the cognitive processes that we are conscious of. A simple, non-cognitive example of false reification is the historic notion of vitalism. It used to be believed that there was a separate life-force that endowed living things with life, animating them in a way that non-living things could not emulate. We know now that no such vital life-force exists, and that being alive is in fact no more than a function of the biological processes that compose living things. Whereas vitalism supposed that biological processes involved an extra 'spark of life', modern biology simply identifies life with certain biological processes. We can say that vitalism falsely reified life, believing it to be a distinct entity or property over and above the physical processes that instantiate it. 

Similarly, many philosophical puzzles can be neatly side-stepped if we avoid falsely reifying consciousness. A common mistake, according to Drescher, is to view consciousness as being an intrinsic property of mental events that we discover when we examine those events. "Rather," he writes, "the examination of a mental event [. . .] is what constitutes that event's consciousness" (Drescher 2005: 49). Under this interpretation, it is no surprise that whenever we examine a mental event, we find that event to be conscious. Like the light that turns on whenever we open a refrigerator, consciousness 'turns on' whenever we focus on or examine a particular mental event (ibid.). The false reification that we commit here is to think of consciousness as something extra that we must discover within a conscious system, beyond the physical processes that constitute that system.

Quale, not quail.
The false reification of qualia can also result in philosophical confusion. A quale is a philosophical term referring to the conscious sensation of an experience, for example the feeling of what it is like to see red or hear a loud noise. A famous thought-experiment asks what would happen if you were able to 'invert your spectrum' - that is, make everything look the opposite colour to what it does now. So red would look green, blue would look orange, and yellow would look purple (or something like that, the precise details are unimportant). Would you notice any difference? If colour-qualia have an existence independent of the physical process of colour perception, then perhaps you might - but to argue that they do is to commit a false reification. Our conscious experience of a colour just is the act of perceiving that colour, and so the inverted spectrum experiment is simply incoherent. It just isn't possible that we could perceive everything in the same way that we do now, but with the colours inverted. There are no independent qualia that we can switch around in order to make the experiment work.

A final, related false reification can occur when we consider our motivations for certain actions. Put (extremely) simplistically, we are motivated by a desire to experience pleasurable things and avoid painful things. So it seems natural to say things like "you want to eat chocolate because it just tastes good; you want to avoid stubbing your toe because that just feels bad" (Drescher 2005: 77). Intuitively this makes sense, but Drescher thinks that it gets things the wrong way round. There is no property tasty that is intrinsic to chocolate, and no property painful that is intrinsic to toe-stubbing. Rather it is the fact that we have a natural desire for sugar that makes chocolate taste good, and the fact that we have a natural aversion to harming ourselves that makes toe-stubbing painful. So pain and tastiness are constituted by these evolved processes, and to view them as intrinsic properties that we aim for (or aim to avoid) is to falsely reify them.

Thus concludes my whistle-stop tour of Drescher's views of false reification in the study of consciousness. His book is very interesting, although I'm doubtful of his central claims concerning free will and determinism in chapters 5-7. More on those next week, perhaps, or for now you can just re-read my previous post on the topic.

  • Dennett, D. 1991. Consciousness Explained. Little, Brown and Company.
  • Drescher, G. 2005. Good and Real. Cambridge, MA: MIT Press.

Tuesday, 17 July 2012

Free Will: No Means No

(by Joe)

When it comes to free will, no means no. Not no, except for really important decisions. Not no, except for choosing to not do something. Not no, except for the internal attitudes that shape our actions. No free will means no free will.

Yet all too often writers for whom I otherwise have a lot of respect fall into this trap. They present a solid argument against free will, or express a concern about someone else's suspect use of free will, then turn right around and commit one of the above fallacies. They are fallacious because they make an exception solely in order to support a particular point. These exceptions are never supported, or even acknowledged - they just sit there, spoiling an otherwise good argument.

Most recently I caught Susan Blackmore doing this, when at the end of The Meme Machine she turns round and advocates a kind of meditative practice in order to cope with the vertiginous feeling that comes when you realise that you probably don't have any free will (1999: 242). In general I've got a lot of sympathy for such practice, and I broadly agree with her analysis of the illusory nature of the self that precedes it (ibid: 219-34). But as an answer to, or at least a coping strategy for, the free will problem, it is distinctly inadequate. She can't expect me to choose to pursue such a meditative lifestyle, can she? Of course she might simply be hoping to nudge my psycho-memetic systems into behaving in the way that she advocates, which is all well and good, but the simple point remains that it is entirely inconsistent to on the one hand deny freedom of the will, and on the other tell your reader what they should do about it.

Daniel Dennett seems to me to make the same mistake when, in Freedom Evolves and elsewhere, he argues that whilst 'we' don't have any direct volitional control, we are somehow able to choose not to act on the volitions that emerge from our multiple drafts of consciousness. It's been a while since I read Freedom Evolves, and I haven't got a copy handy (so forgive the lack of references), but I recall that something like this formed the centre of his compatibilist account of determinism and free will. In any case, I certainly didn't find his account convincing, for much the same reason that I have yet to find any (physicalist) account of free will convincing - none of them take determinism seriously enough. There's no such thing as partial determinism, unless you introduce randomness, and anybody who denies free will but then tells you how best to cope with this denial is simply being inconsistent.

In fact, without free will the very concept of any course of action being 'best' begins to lose a lot of its worth. How can I have any obligation to act one way rather than another, either morally or rationally, if I'm not able to meaningfully make that decision? Both conventional, rules-based moral philosophy and alternative approaches that emphasise "moral imagination" (Nussbaum 1985: 516) or "ethical attention" (Bowden 1998) suffer from this contradiction. In the first instance agency is removed when we are told that there is only one right answer to a dilemma - we no longer have any meaningful moral choice to make. On the latter view, to be moral is to live in a certain way, to be the kind of person who makes moral decisions - whatever those decisions may be. Here, again, we seem to lack ethical agency - either I am this kind of person or I am not, and when it comes to moral dilemmas I no longer have any choice, I simply act in the way that I must. Yet when I made this point in an essay, the marker insisted that "imagination is in part agential" - in which case, surely, the alternative approach simply collapses into the conventional, only with the critical choice being made prior to a dilemma, when an agent exercises their imagination. In my opinion he had fallen into a version of the trap that I outlined above, denying that morality was about freely willed decisions, but then simply reintroducing those decisions in another guise.

Of course, it is not anyone's fault when they make these mistakes, for they could not have chosen to do otherwise - could they?


  • Blackmore, S. 1999. The Meme Machine. Oxford: OUP.
  • Bowden, P. 1998. "Ethical Attention: Accumulating Understandings." European Journal of Philosophy 6/1: 59-77.
  • Dennett, D. 2003. Freedom Evolves. Viking Books.
  • Nussbaum, M. 1985. "'Finely Aware and Richly Responsible': Moral Attention and the Moral Task of Literature." Journal of Philosophy 82: 516-29.


Wednesday, 11 July 2012

Memes vs Genes

(by Joe)

The term "meme" was introduced by Richard Dawkins in The Selfish Gene (1976: 191-201), to refer to a proposed unit of cultural information analogous to the standard unit of genetic information: the gene. He suggested that evolutionary analysis of memes could cast light on cultural oddities that evolutionary genetics sometimes struggle to explain, such as religion. His original introduction of memes was a somewhat off-hand way of illustrating that evolution by natural selection need not only apply to DNA and biology, but almost by accident he invented an entirely new field. Memetics now refers to the study of evolutionary models of cultural information transfer, although whether or not this is something worthy of study is somewhat controversial. A Journal of Memetics was published online from 1997 to 2004 (and is still available), but probably the most famous account of memetics is Susan Blackmore's The Meme Machine (1999).

Everything looked a bit like this in the 90s.

The basic idea behind memetics is extremely simple. Just as we can understand biological evolution in terms of competition between genes, we can understand cultural evolution as competition between memes. A gene will survive not because it is necessarily useful to its host, but because it is useful to itself, driving the kind of adaptation that allows it to be passed on. So the genes of a (non-fertile) worker ant that sacrifices itself for the hive will be passed on through that ant's close genetic relatives. A meme will survive not because it is necessarily useful to its host, but because it is useful to itself, perhaps by being memorable or easily passed on. An annoying song that you can't get out of your head might not give you, the host, any pleasure, but it will survive. Analysing behaviour in terms of the benefit for the cultural meme allows us to provide explanations that might not be available at either the genetic or organism level.
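To put the replicator logic in the crudest possible terms, here is a toy Python sketch (my own, not a model from Dawkins or Blackmore; the "catchiness" and "host_benefit" figures are invented): each meme spreads in proportion to how easily it is transmitted, and the benefit to the host never enters the update.

memes = {
    "useful proverb":  {"catchiness": 0.2, "host_benefit": 0.9},
    "annoying jingle": {"catchiness": 0.8, "host_benefit": 0.0},
}

counts = {name: 100.0 for name in memes}  # initial number of hosts carrying each meme

for generation in range(10):
    # Each carrier passes the meme on in proportion to how memorable or easily
    # transmitted it is; whether it does the host any good plays no role at all.
    counts = {name: counts[name] * (1 + m["catchiness"]) for name, m in memes.items()}

print({name: round(n) for name, n in counts.items()})
# After ten 'generations' the jingle is roughly sixty times as common as the proverb.

Nothing this simple says anything about real cultural transmission, of course; the point is only that "fitness" here is a property of the meme, not of its host.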

Memetics has been criticised for failing to identify a discrete unit of transmission (a meme might be anything from a few notes to a whole philosophical theory), but as Blackmore points out the same can, in a sense, be said about genetics (1999: 53-6). The study of memes more generally is accused of being too vague, even pseudoscientific, and I agree that there is a genuine risk of failing to make any meaningful claims. However I think what matters is whether memetics is able to provide a useful account of phenomena where other fields have failed - and this will only become clear with time. Daniel Dennett's account of consciousness and the self includes a memetic element (1991: 199-226) and Blackmore hopes that memetics might cast light on everything from altruism (1999: 147-74) to the development of agriculture (ibid: 26-7). Whether or not memes actually exist (whatever "existing" means) is not really important - memetics as a discipline can still provide a useful heuristic, reminding us that cultural practices might propagate themselves simply because that's what they do, not because they are in any way useful to us.


Blackmore, S. 1999. The Meme Machine. Oxford: OUP.

Dawkins, R. 1976 (2006). The Selfish Gene: 30th Anniversary Edition. Oxford: OUP.

Dennett, D. 1991. Consciousness Explained. Little, Brown and Company.

Sunday, 1 July 2012

Beyond Belief: Could Consciousness be Beyond our Ken?


(by Jonny)

Consciousness is odd, I'll give you that. I tend to favour what you might call a deflationary account of Chalmers' "hard problem of consciousness", that is, the so-called problem of explaining the relationship between physical events like brain processes and the conscious experience of the world, the phenomenal "quality" of experience. On this view, the apparent incompatibility between a description of physical processes and a description of first person "qualia" is only that, an apparent incompatibility. Given sufficient conceptual models, and sufficient knowledge of the processes at work, we will begin to see that consciousness is as explicable a natural phenomenon as any other. Like other supporters of a deflationary account, I think we will explain away the hard problem by solving the easy ones: problems like how we discriminate, integrate information, report mental states, and so on.

Yet consciousness is nonetheless odd. Whether we like it or not, the phenomenon is so special that it has continued to persuade philosophers that it is unique among perhaps all other phenomena, beyond physical or otherwise objective explanation. For this alone we have to give consciousness the respect of being marvellously teasing.

One philosopher for whom consciousness is especially mysterious is Colin McGinn. Via a position ominously labelled "new mysterianism", McGinn famously argues that consciousness may well be simply beyond our understanding. Human beings just do not have the capacity to solve the hard problem; the answers are beyond us.

Colin McGinn. Looking Mysterious.

I've always had a certain sympathy for this position. It has always struck me that, in principle, McGinn could be right. Though we might be motivated by different reasons, I agree that it is possible that an explanation of consciousness is beyond human understanding. From my perspective, it seems right that there is a limit to human brain power, and that there could be, in principle, phenomena whose understanding would require an amount of information processing beyond at least our current limits.

Where I disagree with this position is where it draws the line. It is tempting to say "we might not be able to explain consciousness", confuse it with "we certainly cannot explain consciousness", and from there draw the conclusion "there's no point trying to explain consciousness". I believe this pessimistic line jumps the gun. Whilst it could be that consciousness is beyond us, there is no real reason to conclude that it is in actual fact. I agree with Dennett's tone when he says of this sort of view, "...just like Leibniz, they have offered nothing, really, in the way of arguments for their pessimistic conclusions, beyond a compelling image." (2006: 5).

Leibniz believed that when looking into our organic selves we would find only parts, like the machinery of a mill, and the mysteries of the mind would remain unexplained.


New Mysterianism does well to raise the possibility of human limitation, and of our possible arrogance in thinking we may, as a matter of principle, solve every theoretical problem. But it fails to acknowledge that it could equally turn out that we can. And even if there are some unsolvable mysteries, consciousness isn't looking like such a specimen. It looks, again, like philosophers are too quick to grant consciousness a special status, too quick to exaggerate its near-supernatural nature.

In actual fact I believe we have come some way to explaining consciousness, and long may it continue. I for one am optimistic about the easy problems of the future.


Dennett, D. 2006. Sweet Dreams: Philosophical Obstacles to a Science of Consciousness. Cambridge, MA: MIT Press.

Tuesday, 19 June 2012

The Invisible Self

(by Joe)

"What am I? Tied in every way to places, sufferings, ancestors, friends, loves, events, languages, memories, to all kinds of things that obviously are not me. Everything that attaches me to the world, all the links that constitute me, all the forces that compose me don't form an identity, a thing displayable on cue, but a singular, shared, living existence, from which emerges - at certain times and places - that being which says "I." Our feeling of inconsistency is simply the consequence of this foolish belief of the permanence of the self and of the little care we give to what makes us what we are."

My copy of the book looks like this.

There is a muddy area where my philosophical research and my political beliefs meet, and the above quote, from The Coming Insurrection (Invisible Committee, 2009: 31-2), sums it up nicely. The Coming Insurrection was written in 2007 by an anonymous collective (calling themselves 'The Invisible Committee') based in France, and it is clearly strongly influenced by the philosophy of that country, most notably the situationist movement of the 1960s, but also continental philosophy more broadly. It is pompous, vague and quite rightly criticised by many in the left-libertarian circles that I inhabit - Django over at Libcom described it as "a huge amount of hyperbole and literary flourish around some wafer-thin central propositions". Nonetheless, the approach towards the self expressed in the above extract appeals to me.

Put very crudely, I think that the self is an illusion or an abstraction, a "narrative center of gravity" that helps guide our lives and our interactions with others (Dennett, 1992). The mechanisms behind this formation of the self have evolved for a reason, and for pragmatic reasons we shouldn't strive to eliminate it entirely, but to focus on it too much is unhealthy and unhelpful. Such a focus has, since the enlightenment, led to a heightened sense of individualism throughout the western world, one which I think is at the heart of our capitalist, consumerist and ultimately selfish culture. We can overcome this individualism by studying what the self truly is, and perhaps eventually realising that it doesn't truly exist. 

There is an obvious link with Buddhist philosophy here, one which I am currently trying to learn more about. There is also a somewhat less obvious link with embodied cognition, and in particular the extended mind hypothesis (Clark & Chalmers, 1998). If the self is an illusion constructed by our mind, and that mind is embedded in, or even extended into, its environment, then the self can be thought of as a product of that environment. This could have quite serious consequences, not only for metaphysics and the philosophy of mind, but also for ethics and political philosophy.

Which brings us back to The Coming Insurrection. In the passage I quoted, they describe the sense of "inconsistency" that we feel when we realise that whilst the self is composed of our interactions with things in the world, those things "obviously are not me". The self is invisible, and however hard we try to look for it we can never find it. David Hume expressed a similar feeling when he wrote that "I never can catch myself at any time without a perception, and never can observe any thing but the perception" (A Treatise of Human Nature: Book 1, Part 4, Section 6). We are what we do, and what we do is interact with the world. The focus on the individual over the last few hundred years has clouded that fact, and created an entity, the solid, 'real' self, that does not in fact exist. In coming to understand that who we are is so heavily dependent upon who others are, I hope we might eventually be able to learn to behave more compassionately and co-operatively with other people, as well as with our non-human environment. Satish Kumar embodies this hope in the phrase "You are, therefore I am" (Kumar, 2002), a play on Descartes' "I think, therefore I am", itself a perfect slogan for enlightenment individuality.

There is also an element of the absurd that is recognised, I think, by both Hume and the Invisible Committee. We are confronted on the one hand with an unshakable conviction in the existence of the self, and on the other with convincing evidence that no such thing exists. Similar absurdity can be found in our struggles with free will, moral realism and even scepticism about the external world. In each case a pragmatic route must be found, one that allows us to go on, but at the same time acknowledges the truths that we have learned about the world. In the case of the self, I think that this means accepting that we are a lot closer to the world around us than our privileged, first person view-point makes it seem, and that in order to survive in such a world we must understand and respect our place in it.

There's a lot more I'd like to say about a lot of things here, but I'll save it for future posts. Otherwise we might get complaints about the lack of monkeys!

Here you go.


Clark, A. & Chalmers, D. 1998. "The Extended Mind." Analysis 58: 7-19.



Invisible Committee, The. 2009. The Coming Insurrection. Los Angeles, LA: Semiotext(e). 

Kumar, S. 2002. You Are Therefore I Am: A Declaration of Dependence. Totnes, UK: Green Books.



Wednesday, 30 May 2012

No Monkeying Around: Taking Animal Welfare Seriously

(by Jonny)

I've been interested in animal welfare issues about as long as I've been interested in philosophy of mind. Though it is hardly unanimous within either field, I've long respected a tradition sometimes found in both: a tradition of seriousness and consistency. On the one hand you have the likes of Peter Singer, who sincerely argues for equal consideration of animal welfare based on a logic of non-arbitrariness, accusing those who oppose him of "speciesism". On the other hand you have the likes of Daniel Dennett, who argues for a sophisticated, empirically informed theory of consciousness and the mind more generally. The two approaches have not always gotten along (see Dennett, 1995). Yet I'm wondering if there exists a worthwhile position which borrows from both: a position which acknowledges that we cannot simply assume, without further analysis, certain facts about an entity's mental life, particularly conscious experiences (whatever they are exactly), but at the same time demands that where we find good reason for certain assumptions about minds in other creatures, we take them as ethically seriously as possible. If we decide that, say, a cow's stress in an abattoir is equivalent to a sheep's, which is equivalent to a human infant's, then, all other things being equal, we ought to treat all parties in the same situation with equal consideration. When deciding how to respond to fellow animals we should not assume a given organism experiences the world just as humans do, nor should we assume all animals are mindless robots - what is required is an empirically informed approach that takes whatever results we do find ethically seriously. This is an admittedly crude position that requires greater development than can be done justice in one post, but I'll lay out some of my thoughts on the matter.

In a Sunday Times article from a few years back, John Webster writes that,

“People have assumed that intelligence is linked to the ability to suffer and that because animals have smaller brains they suffer less than humans. That is a pathetic piece of logic, sentient animals have the capacity to experience pleasure and are motivated to seek it, you only have to watch how cows and lambs both seek and enjoy pleasure when they lie with their heads raised to the sun on a perfect English summer's day.” (quoted in the Sunday Times, 27 February 2005)


I appreciate what I think is Webster's sentiment. Cows and lambs display behaviour we typically take to signal pleasure and pain, and their dramatically reduced cognitive abilities do not seem to make such inferences void. Yet I think Webster is wrong in claiming that linking intelligence to suffering is pathetic logic. Intelligence is a weasel word, but such associated faculties as memory, conceptualisation and emotional engagement seem to me responsible for a great deal of both pleasure and suffering (I will avoid the awkward discussion about the differences between pain and suffering for now); though importantly this does not imply that all suffering and pleasure depend on a human-level development of each.

I take it that Webster would not find it controversial that bacteria do not require the same level of consideration that dogs do when we poke them with sticks. I take it he would admit that fruit flies are not capable of the same emotional turmoil that chimpanzees may regularly undergo within their highly social worlds.



I believe that these intuitive differences are the result of cognitive differences between subjects, and that such differences must be respected across species where we have good reason to assume them. Alarm bells will be ringing for some animal ethicists who will already be predicting that I propose some hierarchy of worth. However, I am not proposing that there exist degrees of intrinsic value in species relative to their cognitive complexity - rather that certain cognitive complexity just does produce certain kinds and degrees of suffering (and pleasure) that would not otherwise be available; and if we are to respond sensibly to the relative demands of an organism's psychology, we must take these relative capacities into account.

Much of the suffering humans seem capable of experiencing is the result of prediction of the future, memory of the past, association between events, a sense of self, an enduring sense of self, varied and unpredictable emotional needs and empathy. In each of these cases it seems that what allows for these experiences are contingent cognitive abilities and cognitive organization (and of course there is no obvious reason not to imagine the possibility of a species capable of experiencing pleasures and pains in ways humans do not, to degrees humans do not.)

Enduring 149 minutes of Transformers: Revenge of the Fallen fortunately does not result in the same experience for most animals as it does for all normal human beings. Neither does sitting through three hours of Bach. Even experiences that do not require much developed intelligence - being stroked, raising young, hunting - can intuitively produce very different grades and kinds of experience between species, even between individual organisms. The same goes for painful experiences. Bereavement, stress, even "raw pain" itself seem relative to the existence of particular contingent mental capacities. In short, I suggest that we should not assume that all animals must experience the same mental states when treated the same way. We must approach every situation with an open mind, ready to be informed by research. It is not a given that chickens appreciate the taste of food as much as chimpanzees do, it is not a given that cows feel as much stress in a slaughterhouse as a human would, nor in either case is it a priori obvious that they do not - each example requires careful examination.

It is important to see that this approach is not disrespectful towards other species. Quite the opposite. It is a serious, mature, empirically informed approach. It does not assume that all animals are little humans. It does not assume that an animal's needs and wants are the same as ours. Its picture of animal mentality, and of how we should respond to it, does not depend on the imagination of human dreamers on either side of the ethics debate. A cow in a slaughterhouse is not necessarily enduring the experience as a mute, hairy, four-legged human would. Nor is it necessarily a zombie robot without a care in the world. What its experience is like depends on the contingent state of its cognitive organization, and how we come to know anything about that depends on sober investigation.


Dennett, D. 1995. "Animal consciousness: what matters and why." Social Research ("In the Company of Animals"), 62/3: 691-710.

Consciousness is in the business of producing illusions.

(by Joe)

Gary Williams, whose blog Minds and Brains I enjoy very much (although I don't always agree with it), has just written a post on the possibility of partial epiphenomenalism. The idea seems to be that the "feeling of consciousness" could be an epiphenomenal 'illusion' without consciousness itself being epiphenomenal. For one thing, this would solve the problem raised by the Libet experiments (which I mentioned briefly here) by allowing the apparently epiphenomenal experience of volition to be preceded by a causally active conscious decision, just one that has yet to be experienced. There's some similarity here with Dennett's interpretation of Libet in Consciousness Explained (1991: 154-67), where he argues for something like the distribution of consciousness into different 'strands'.

I need to give it a bit more thought, but I'm quite tempted by the idea of divorcing the epiphenomenal experience of consciousness from the functional process of consciousness itself. I particularly liked Williams' suggestion that we might want to say that "consciousness is in the business of producing illusions". That is to say, part of what consciousness does is make extremely convincing illusions of, for example, free will, moral agency, or selfhood.

Anyway, just some quick thoughts on a post I found interesting. Proper post coming up soon, so watch this space!


Dennett, D. 1991. Consciousness Explained. Little, Brown and Company.

Tuesday, 22 May 2012

Broadly Speaking: In Praise of (a particular) Functionalism

(by Jonny)

In Philosophy in the Flesh (1999), George Lakoff and Mark Johnson give a clear and lucid introduction to the notion of the embodied mind, and what they see as its major implications. The book is very readable, let down a little by its claim to paradigm-shattering originality and its tendency toward over-generalisation. One particular point on which I found the authors to be a little confused was their objection to 'functionalism'. The authors' basic point seems to be that functionalism is misled in believing that the mind can be studied in terms of its cognitive functions whilst ignoring the role the body and brain have to play in those functions (75). For them functionalism is "essentially disembodied", a view where the mind "can be studied fully independently of any knowledge of the body and brain, simply by looking at functional relations among concepts represented symbolically" (78).



I think Lakoff and Johnson jump the gun here, too quick to dismiss a strong principle in their eagerness to overthrow the shackles of traditional "Anglo-American" assumptions (75). In my view, responsible functionalism never ignores anything which might reasonably be thought of as contributing to the ultimate function of a mental state, and this must include the body and brain. Perhaps functionalism has a tendency to slip into the impractically abstract, ignoring the very stuff that must be studied in order to understand function - but this is not necessarily so. The authors quote Ned Block saying, "The key notions of functionalism...are representation and computation. Psychological states are seen as systematically representing the world via a language of thought, and psychological processes are seen as computations involving these representations" (257). Yet to be functionalists we don't have to accept a Fodorian language of thought as the underlying force which must define a mental state's function, and even if we do, this will not and should not lead us to ignore the real-world inputs and outputs that depend on the brain and body.

I think perhaps the authors of Philosophy in the Flesh are conflating a narrow, abstract, empirically removed functionalism with a broad, scientifically informed version. Functionalism in the broader sense is simply the idea that what matters is what stuff does, and as Dennett says, functionalism construed this way "is so ubiquitous in science that it is tantamount to a reigning presumption of all science" (2006: 17). As he goes on to say, "The Law of Gravity says that it doesn't matter what stuff a thing is made of - only its mass matters... It is science's job to find the maximally general, maximally non-committal - hence minimal - characterization of whatever power or capacity is under consideration" (17-18). When it comes to the mind, functionalism makes the claim that it's not what the brain is made out of as such, but what that stuff does, that matters. This does not ignore the stuff, it does not ignore the brain or body, but it does ask why the stuff matters. To quote Dennett one last time, "Neurochemistry matters because - and only because - we have discovered that the many different neuromodulators and other chemical messengers that diffuse through the brain have functional roles that make important differences" (19). In accepting the significance of the body in cognition, from the reliance of perception and conceptualisation on our particular sensori-motor apparatus to the importance of the body's interaction with its environment for reasoning, we do not need to reject a broad, empirically responsible functionalism.


Dennett, D. 2006. Sweet Dreams: Philosophical Obstacles to a Science of Consciousness. Cambridge, MA: MIT Press.

Lakoff, G. & Johnson, M. 1999. Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought. New York: Basic Books.

Saturday, 12 May 2012

Lucid Dreaming and the Illusion of Control

(by Joe)

Lucid dreaming refers to the experience of being aware and in control of your dreams. The term was coined by Frederik van Eeden (1913), who discusses his own numerous experiences of such dreams. Snyder & Gackenbach (1988) report that only 20% of the population naturally experience regular lucid dreams, although it is also possible to induce them artificially. The precise neural mechanism behind them is not fully understood, but there appear to be distinct neurobiological differences between regular dreams and lucid dreams. In any case, lucid dreaming presents us with a number of intriguing philosophical puzzles, as well as potential insights into the nature of consciousness.

I am particularly interested in whether the experience a lucid dreamer has of being in control of their dream is genuine, or whether it is merely an experience. It seems quite possible that when a lucid dreamer reports being able to choose how their dream progresses, all they are actually reporting is the sensation of being in control. Studies into schizophrenia and related disorders such as alien hand syndrome suggest that 'being in control' and 'experiencing being in control' are distinct phenomena. So we should not necessarily take a lucid dreamer's word for it when they say that they are in control of their dreams – although it would be difficult to deny that they at least experience or recall being in control.

Stephen LaBerge has conducted extensive research into lucid dreaming, including systematising the use of eye-movements to establish contact between a lucid dreamer and an experimenter (see, for example, LaBerge 2000). The fact that a lucid dreamer can communicate in what appears to be a purposeful manner would seem to validate their claim of being in control of the dream. Kahan & LaBerge (1994) use such evidence to suggest that the traditional distinction between non-conscious dreaming and conscious wakefulness might be flawed. Whilst they take the control of lucid dreamers as a given, one might instead want to question the way in which conscious control is being classified in the first place.

In a famous series of experiments Benjamin Libet discovered that the conscious decision to press a button was reported to occur several hundred milliseconds after the neural activity that was associated with the action began (Libet et al, 1979). The experiments were widely reported to disprove free will, but Daniel Dennett has offered a more subtle explanation. We only have access to the subject's reported experience of initiating the button push, and it might be possible that their decision to push the button actually precedes their conscious experience of control (1991: 154-162). Of course this calls into question the very definition of consciousness, but that is Dennett's intention. Given that there's no homuncular 'centre' to the brain, it might be that decision making occurs separately to conscious awareness of decision making, or that we rapidly lose track of having consciously made a decision.

Similarly, experience of control as reported by lucid dreamers does not unambiguously equal actual control. Whilst Dennett is keen to retain the possibility of free will, others might not be so happy with the apparent detachment of conscious awareness from the actual initiation of actions. When a lucid dreamer tells us that they are able to control their dreams, it would be more accurate to say that they have experienced being in control of their dreams. Whether they actually have, and what that even means, is a much more difficult question to answer.


Dennett, D. 1991. Consciousness Explained. Little, Brown and Company.

Kahan, T. L., & LaBerge, S. 1994. “Lucid dreaming as metacognition: implications for cognitive science.” Consciousness and Cognition, 3/4: 246-264.

LaBerge, S. 2000. “Lucid dreaming: Evidence and methodology”. Behavioral and Brain Sciences, 23/6: 962-3.

Libet, B., Wright, E., Feinstein, B., and Pearl, D. 1979. “Subjective Referral of the Timing for a Conscious Sensory Experience.” Brain, 102: 193-224.

Snyder, T. & Gackenbach, J. 1988. In J. Gackenbach & S. LaBerge (Eds.), Conscious Mind, Dreaming Brain: 221-259. New York: Plenum Press.

Van Eeden, F. 1913. "A study of dreams." Proceedings of the Society for Psychical Research, 26: 431-461.