Thursday 1 August 2013

Alien Intelligence

This is the first draft of the text for a poster that I'll be presenting in September. It's intentionally quite simplistic, and the colours are something that I'm trying out in order to draw attention to key words and phrases. Square brackets denote the eventual location of images. Comments and feedback welcome and encouraged!

I argue that sophisticated embodied robots will employ conceptual schemes that are radically different to our own, resulting in what might be described as "alien intelligence". Here I introduce the ideas of embodied robotics and conceptual relativity, and consider the implications of their combination for the future of artificial intelligence. This argument is intended as a practical demonstration of a broader point: our interaction with the world is fundamentally mediated by the conceptual frameworks with which we carve it up.

Embodied Robotics
In contrast with the abstract, computationally demanding solutions favoured by classical AI, Rodney Brooks has long advocated what he describes as a "behavior-based" approach to artificial intelligence. This revolves around the incremental development of relatively autonomous subsystems, each capable of performing only a single, simple task, but combining to produce complex behaviour. Rather than building internal representations of the world, his subsystems take full advantage of their environment in solving tasks. This is captured in Brooks' famous maxim, "The world is its own best model" (1991a: 167).

[PICTURE OF ALLEN]

A simple example of this approach is the robot "Allen", designed by Brooks in the late 1980s. Allen contained three subsystems: one to avoid objects, one to initiate a "wandering" behaviour, and one to direct the robot towards distant places. None of these subsystems connects to a central processor; instead each takes control when circumstances make it necessary. So if an obstacle looms while the robot is moving towards a distant point, the first subsystem takes over from the third and manoeuvres the robot around the obstacle. Together these subsystems produce flexible and seemingly goal-oriented behaviour, at little computational cost (1986; 1990: 118-20).
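
To give a rough feel for this kind of control structure, here is a minimal sketch in Python. Everything in it (the sensor names, the thresholds, the central loop that polls the layers) is my own stand-in rather than Brooks' actual implementation; in a real subsumption architecture the layers are wired together with suppression and inhibition links rather than polled centrally, but the priority ordering is the same: goal-seeking is pre-empted by obstacle avoidance whenever something looms.

```python
import random

# Minimal, illustrative sketch of an Allen-style layered controller.
# Sensor names, thresholds and the polling loop are hypothetical stand-ins.

def avoid_objects(sonar):
    """Highest priority: if anything is too close, steer away from it."""
    bearing, distance = min(sonar.items(), key=lambda item: item[1])
    if distance < 0.5:                      # metres; arbitrary threshold
        return ("turn_away_from", bearing)
    return None                             # nothing nearby; defer to other layers

def wander(tick):
    """Middle layer: occasionally pick a random new heading."""
    if tick % 10 == 0:
        return ("set_heading", random.uniform(0, 360))
    return None

def head_for_distant_place(goal_bearing):
    """Lowest priority: steer towards a distant visible place."""
    return ("set_heading", goal_bearing)

def next_action(sonar, tick, goal_bearing):
    """The first layer with something to say takes control of the motors."""
    for action in (avoid_objects(sonar), wander(tick), head_for_distant_place(goal_bearing)):
        if action is not None:
            return action

# Example: an obstacle dead ahead pre-empts the trip towards the distant goal.
print(next_action({"ahead": 0.3, "left": 2.0, "right": 1.8}, tick=3, goal_bearing=90.0))
```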

Whilst initially his creations were quite basic, Brooks' approach has shown promise, and it seems plausible to suggest that embodied robotics could eventually result in sophisticated, even intelligent, agents. If it does, these agents are unlikely to replicate human behaviour precisely. Brooks has stated openly that he has "no particular interest in demonstrating how human beings work" (1991b: 86), and his methodology relies on taking whatever solutions seem to work in the real world, regardless of how authentic they might be in relation to human cognition. It is for this reason that I think embodied robotics is a particularly interesting test-case for the idea of conceptual relativity.

Conceptual Relativity
A concept, as I understand it, is simply any category that we use to divide up the world. In this respect I differ from those who restrict conceptual ability to language-using humans, although I do acknowledge that linguistic ability allows for a distinctly broad range of conceptual categories. To make this distinction clear we could refer to the non-linguistic concepts possessed by infants, non-human animals, and embodied robots as proto-concepts - I don't really think it matters so long as everybody is on the same page.

Conceptual relativity is simply the idea that the concepts available to us (our "conceptual scheme") might literally change the way that we perceive the world. Taken to its logical extreme, this could result in a form of idealism or relativism, but more moderately we can simply acknowledge that our own perceptual world is not epistemically privileged, and that other agents might experience things very differently.

[HUMAN PARK/BEE PARK]

Consider a typical scene: a park with a tree and a bench. For us it makes most sense to say that there are two objects in this scene, although if pushed we might admit that these objects can be further decomposed. For a bee, on the other hand, the flowers on the tree are likely to be the most important features of the environment, and the bench might not even register at all. Our conceptual schemes shape our perceptual experience.

Alien Intelligence
Brooks (1991b: 166) describes how an embodied robot divides the world according to categories that reflect the task, or tasks, for which it was designed. My claim is that this process of categorisation constitutes the creation of a conceptual scheme that might differ radically from our own. Allen, introduced in the first section, inhabits a world that consists solely of "desirable", far-away objects and "harmful", nearby objects.

[ALLEN'S WORLD/RODNEY'S WORLD]

A more sophisticated embodied robot, which we might call Rodney, could inhabit a correspondingly more sophisticated world. Assume that Rodney has been designed to roam the streets and apprehend suspected criminals, whom he identifies with an advanced facial recognition device. Rodney is otherwise similar to Allen - he has an object avoidance subsystem and a "wandering" subsystem. Rodney divides human-shaped features of his environment into two conceptual categories: "criminal" and "other". He completely ignores the latter category, and we could imagine that they don't even enter into his perceptual experience. His world consists of attention-grabbing criminal features, objects to be avoided, and very little else.
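
To make this concrete, here is a deliberately crude sketch of how Rodney's conceptual scheme might be realised. The watch list and feature format are entirely hypothetical; the point is just that anything falling outside the two task-defined categories is discarded, and so never figures in Rodney's world at all.

```python
# Hypothetical sketch of Rodney's task-defined conceptual scheme.
WATCH_LIST = {"suspect_0042", "suspect_1337"}    # invented identifiers

def carve_up_the_world(detections):
    """Sort raw detections into Rodney's two categories; discard everything else."""
    world = {"criminal": [], "obstacle": []}
    for feature in detections:
        if feature["kind"] == "face" and feature.get("id") in WATCH_LIST:
            world["criminal"].append(feature)
        elif feature["kind"] == "obstacle":
            world["obstacle"].append(feature)
        # Benches, trees and law-abiding faces fall through here: they have no
        # place in Rodney's conceptual scheme, so they never enter his world.
    return world

print(carve_up_the_world([
    {"kind": "face", "id": "suspect_0042"},
    {"kind": "face", "id": "someone_else"},
    {"kind": "obstacle", "position": "ahead"},
    {"kind": "tree"},
]))
```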

Of course, there's no guarantee that the embodied robotics program will ever achieve the kind of results that we would be willing to describe as intelligent, or even if it does, that such intelligences will be radically non-human. A uniquely human conceptual apparatus might turn out to be an essential component of higher-order cognition. Despite this possibility, I feel that so long as a behaviour-based strategy is pursued, it is likely that embodied robots will develop conceptual schemes that are alien to our own, with a resulting impact on their perception and understanding of the world.

References
  • Brooks, R. 1986. "A Robust Layered Control System for a Mobile Robot." IEEE Journal of Robotics and Automation RA-2: 14-23. Reprinted in Brooks 1999: 3-26.
  • Brooks, R. 1990. "Elephants Don't Play Chess." Robotics and Autonomous Systems 6: 3-15. Reprinted in Brooks 1999: 111-32.
  • Brooks, R. 1991a. "Intelligence Without Reason." Proceedings of the 1991 International Joint Conference on Artificial Intelligence: 569-95. Reprinted in Brooks 1999: 133-86.
  • Brooks, R. 1991b. "Intelligence Without Representation." Artificial Intelligence 47: 139-59. Reprinted in Brooks 1999: 79-102.
  • Brooks, R. 1999. Cambrian Intelligence. Cambridge, MA: MIT Press.

Sunday 28 July 2013

Embodied AI and the Multiple Drafts Model

In "Intelligence without Representation" (1991), Rodney Brooks lays out his vision for an alternative AI project that focuses on creating embodied "Creatures" that can move and interact in real-world environments, rather than the simplified and idealised scenarios that dominated AI research in the 60s and 70s. Essential to this project is the idea of moving away from centralised information processing models and towards parallel, task-focused subsystems. For instance, he describes a simple Creature that can avoid hitting objects whilst moving towards "distant visible places" (1991: 143). Rather than attempting to construct a detailed internal representation of its environment, this Creature simply consists of two subsystems, one which moves it towards distant objects and another that moves it away from nearby objects. By decomposing this apparently complex task into two simple ones, Brooks is able to find an elegant solution to a difficult problem.

Brooks and a robot having a hug

His description of this process is particularly interesting:
Just as there is no central representation there is not even a central system. Each activity producing layer connects perception to action directly. It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviors. Out of the local chaos of their interactions there emerges, in the eye of an observer, a coherent pattern of behavior. There is no central purposeful locus of control. (1991: 145)
It is strikingly similar to Dennett's account of consciousness and cognition under the Multiple Drafts Model (see his 1991). Maybe not so surprising when you consider that both Dennett and Brooks were inspired by Marvin Minsky, but it does lend some theoretical credence to Brooks' work... as well as perhaps some practical clout to Dennett's.

  • Brooks, R. 1991. “Intelligence without representation.” Artificial Intelligence, 47: 139-59.
  • Dennett, D. 1991. Consciousness Explained. Little, Brown and Company.

Tuesday 21 May 2013

(Immature) cognitive science and explanatory levels

When I was working on cognitive extension last year, I was particularly taken by the suggestion that cognitive science is not yet a "mature science" (Ross & Ladyman 2010). By this it was meant that criticising a theory for failing to meet some intuitive "mark of the cognitive" presupposes that we have a good idea of what such a mark might look like. In fact cognitive science is still mired in metaphorical and imprecise language, making it conceptually unclear what we are even meant to be studying.

These guys lack the mark of the cognitive.

Bechtel (2005) makes a similar point, although he focuses on the level at which cognitive scientific explanation is aimed. Typically we begin with a characterisation of a phenomenon at either the neural or personal level, whilst seeking an explanation at some intermediary level (say, computational). The problem is that we have yet to settle on a clearly defined level that everyone agrees upon. Bechtel contrasts this with biological science, which appears to have gone through a similar struggle during the 19th century.

This helps explain why there is currently so much debate over what kind of answers we should even seek to be giving in cognitive science. Fodor rejects connectionism as simply specifying a certain kind of implementation, and in response he is accused of abstracting away from what really matters. There's no easy way to solve this problem, although the mechanistic approach that Bechtel (and others) have advocated does seem promising. Ultimately we'll have to wait for cognitive science as a whole to settle (or splinter), but this approach does at least have the virtue of conforming to (apparent) scientific practice.

More on this next time, where I will be attempting to summarise the mechanistic approach to scientific explanation...

  • Bechtel, W. 2005. "Mental Mechanisms: What are the operations?" Proceedings of the 27th annual meeting of the Cognitive Science Society. 208-13.
  • Ross, D. & Ladyman, J. 2010. "The Alleged Coupling-Constitution Fallacy and the Mature Sciences." In Menary (ed.), The Extended Mind. 155-65.

Sunday 19 May 2013

Two New Approaches to Cognitive Extension

(I wrote this way back in the summer, then for some reason decided not to publish it. My views have moved on somewhat since then, but hopefully some of this is still worth reading - Joe.)

The most recent issue of Philosophical Psychology (25:4) features a pair of articles on cognitive extension, each exploring a different approach to the theory. Both articles attempt to introduce a principled way of limiting cognitive extension, a problem that has been at the heart of the debate since it began in 1998. I wrote my undergraduate dissertation on the extended mind (and the narrative self), and whilst I'm more sceptical now than I was when I began, I still don't think there's any principled way of limiting cognition entirely to the physical brain. The most famous opponents of extended cognition, Fred Adams & Ken Aizawa, and Robert Rupert, try to avoid this conclusion by introducing various necessary limitations on "the bounds of cognition".

Shannon Spaulding agrees that the bounds of cognition should be limited, but argues that the strategy of "offering necessary conditions on cognition [that] extended processes do not satisfy" is misguided (2012: 469). She mentions Adams & Aizawa and Rupert as proponents of this strategy, and focuses on the former's attempt to identify a necessary "mark of the cognitive" (Adams & Aizawa 2008: 10) that would legitimately restrict cognition to the brain. She finds this attempt to be problematic primarily because it is inherently question begging: any necessary condition that opponents of extended cognition can come up with will be based on precisely the kinds of current cognitive-scientific practice that proponents of extended cognition are opposed to (Spaulding 2012: 473).

Instead she proposes that critics of cognitive extension should challenge the theory on its own terms. This means demonstrating that there is, in practice, insufficient parity between intra-cranial and trans-cranial processes for them to ever form an extended cognitive system, even at the coarse-grained level that proponents of extended cognition tend to focus on (ibid: 480-1). At the fine-grained level, she focuses on functional organisation and integration, which has the advantage of being familiar territory to many who support cognitive extension. Here she points to several obvious differences in the way intra- and trans-cranial processes function. She identifies the coarse-grained level with "folk psychological functional roles" (ibid: 483), where she again points to several obvious differences that might count against cognitive extension.

Spaulding's rebuttal of necessary condition based arguments against cognitive extension is simple and compelling. All of these arguments base their conditions in current cognitive science, and one of the core points that extended cognition seeks to make is that cognitive science must become a wider, more inclusive discipline than it is now. Ross & Ladyman (2010) make a similar point: cognitive science is not a mature discipline, and what kinds of conditions will eventually come to define it is precisely what is at stake in the debate over cognitive extension. For the most part I also agree with Spaulding's approach to assessing extended cognition. By accepting many of the initial premises of extended cognition, including the "prima facie" possibility that some trans-cranial process satisfies her conditions (2012: 481), her approach allows for a more balanced debate, as well as for the possibility that some cognitive processes might be extended whilst others aren't.

Where I disagree is in the details of the examples that she gives, and perhaps in particular with the example that she chooses to focus on throughout: that of Otto and Inga, originally introduced by Clark & Chalmers (1998). I'm starting to think that this example, what we might perhaps call the ur-example, has outlived its usefulness. Much of the debate over extended cognition has, almost incidentally, focused exclusively on it, and as a defender of extended cognition I think we might be better off coming up with some new examples. Where Otto and Inga have failed, something else might well succeed. In particular I think that social extension, involving as it does the interaction between two (or more) uncontroversially cognitive systems, might be a far more productive source of examples than simple 'material' extension. Spaulding focuses on this particular (admittedly paradigmatic) example of cognitive extension, but hopes that her argument will achieve similar results with others (2012: 488, en3). Whether or not this is the case, I certainly agree with her when she states that "we must proceed on a case-by-case basis" (ibid: 487).

Rather than arguing directly for or against cognitive extension, Tom Roberts focuses on setting a "principled outer limit" to cognitive extension, based on the tracking of a mental state's causal history (2012: 491). He sets this out as contrasting with previous arguments for cognitive extension, which have by and large been "ahistorical", focusing on a state's effects rather than its causes (ibid: 492). He rejects Clark & Chalmers' original suggestion that external cognitive states must be subject to prior conscious endorsement (a kind of historical constraint) for much the same reason that they themselves raise: it risks disqualifying uncontroversial cognitive states such as subliminally acquired memories (ibid: 495).

Instead Roberts pursues a theory of cognitive ownership, arguing that a subject must take responsibility for an "external representational resource" if it is to become part of their extended cognitive system (ibid: 496). Responsibility in this sense requires that (for example) a belief is acquired in a certain meaningful way, and that an "overall consistency and coherency" is maintained between one's beliefs (ibid: 496-9). This, Roberts hopes, will exclude more radical cases of extension without dismissing extension outright. He concludes by admitting that such a criterion might generate an area of vagueness, but suggests that this is not necessarily such a bad thing, and that we will nonetheless find clear cases of extension (or not).

I'm sympathetic to Roberts' argument, and in particular the attempt to give a principled boundary to cognitive extension without dismissing it entirely. However I've never been entirely convinced by the historical accounts of mental representation that he draws upon, and it's also not clear whether this kind of argument would apply to cognitive extension in general, or only specifically to the extension of beliefs. Admittedly, much of the extended mind literature has focused on the extension of beliefs, but in principle it might be possible for other cognitive functions, such as perception or problem solving, to be extended as well.

I'm also wary of relying on any concept of belief ownership, implying a central and distinct individual to do the owning. This is perhaps a more esoteric concern, but at the very least I think it's worth considering what exactly it is that does the owning when you're considering extended cognitive systems that might well involve the extension of whatever 'self-hood' is.

No pictures in this one, sorry.

  • Adams, F. and Aizawa, K. 2008. The Bounds of Cognition. Oxford: Blackwell.
  • Clark, A. and Chalmers, D. 1998. “The Extended Mind”. Analysis 58: 7-19. Reprinted in Menary (ed.), 2010: 27-42.
  • Menary, R. (ed.) 2010. The Extended Mind. Cambridge, MA: MIT Press.
  • Roberts, T. 2012. "Taking responsibility for cognitive extension." Philosophical Psychology 25(4): 491-501.
  • Ross, D. and Ladyman, J. 2010. “The Alleged Coupling-Constitution Fallacy and the Mature Sciences.” In Menary (ed.)  2010: 155-166.
  • Spaulding, S. 2012. "Overextended cognition." Philosophical Psychology 25(4): 469-90.

Sunday 5 May 2013

Is natural language a high-level programming language?

If the mind is a computer, then there must be something in the brain that corresponds to the low-level strings of data (the machine code) that computing mechanisms manipulate. This machine code provides the basic structure for everything that a computer is able to do. In electronic computers it is implemented in the flow of electricity across circuits. In the brain, it might be implemented similarly, in the flow of electricity across neurons.

Can you read this?

What exactly this machine code means (or does) will depend on the computing mechanism in question, but even granted this information it is incredibly difficult (and time-consuming) for people to program in machine code. Because of this, programmers typically make use of a hierarchy of programming languages. Each language is an abstraction of those beneath it, eventually bottoming out in the machine code itself. A programmer will write code in whatever language s/he finds most accessible, and once s/he is done it will be translated into machine code by a compiler.
This is basically the plot of Neal Stephenson's Snow Crash...
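
For a concrete (if loose) picture of this layering, Python offers an easy one: its source code is compiled down to a lower-level bytecode that the interpreter's virtual machine executes. Bytecode isn't literal machine code, but the relationship between the two layers illustrates the same idea of abstraction and translation.

```python
# Peeking one level down the hierarchy: the high-level line "return x + 1" is
# compiled into a handful of low-level stack-machine instructions.
import dis

def add_one(x):
    return x + 1

dis.dis(add_one)
# Typical output (exact opcodes vary between Python versions):
#   LOAD_FAST     x
#   LOAD_CONST    1
#   BINARY_ADD            (BINARY_OP on Python 3.11+)
#   RETURN_VALUE
```
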
Similarly, it seems likely that the basic code used by a computational brain could be incredibly difficult for us to decipher. Contrary to Fodor's fabled language of thought, there doesn't seem to be any reason why (at the computational level of description) the brain should operate on natural language. Nonetheless, there does seem to be an intimate relationship between the brain and natural language. We obviously produce language whenever we speak, and (perhaps less obviously) language can exert a powerful influence on how we think and behave. In a quite literal sense, it could be seen as re-programming the mind. So if the mind is a computer, then it might make sense to think of natural language as (among other things) a high-level programming language.

Note added 10.05.13: Apparently Fodor beat me to it. Piccinini writes that "Fodor likened human public languages to high level programming languages, and the human LOT to a computer's machine language" (2004: 387). I haven't found the original reference yet, but I think it's in The Language of Thought somewhere.

  • Piccinini, G. 2004. "Functionalism, Computationalism, and Mental Contents." Canadian Journal of Philosophy, 34/4: 375-410.

Tuesday 30 April 2013

Hedge(hog)ing Your Bets: Animal Consciousness, Ethics and the Wager Argument


I want to begin fleshing out an argument I've been mulling over. It’s far from a comprehensive thesis. Rather, I want to use this blog to sketch out some preliminary ideas. The argument takes off from the notion that whether or not animals are conscious informs the importance of human-animal interaction and dictates the course of animal ethics.

A hedgehog struggling to remain conscious... 
I want to explore the idea that treating animals as if they are conscious carries moral weight from the perspective of a cost-benefit analysis. The “wager argument” starts with the premise that we have a choice to treat animals either as if they are conscious or as if they are not. I will assume for now that consciousness includes the capacity to feel physical and emotional sensations, such as pain and pleasure, from a familiar first-person perspective (I’m strategically evading the problem of defining consciousness for now but I’m fully aware of its spectre- see below).

Animal's wagering. Not what I'm talking about.
The argument looks something like this: you are better off treating animals as if they are conscious beings, because if they are indeed conscious beings you have done good, but if they are not conscious beings then you have lost nothing. Alternatively, if you treat animals as if they are not conscious, and they are, you have caused harm. It is better to hedge your bet and assume animals are conscious.

To paraphrase Pascal, the argument says “if you gain you gain much, if you lose you lose little”. With Pascal’s wager your gain is something like eternal life, and the loss is avoidable annihilation. Some might include in the wager the avoidance of hell, or the progression to it (though Pascal himself never mentions hell). For us, the gain is a better world, or the avoidance of a worse one.

Pascal.  I'll wager he Blaised his way through academia... (sorry).

Here's the argument in boring step-by-step premises:

P1 An animal is a being that is conscious or is not conscious.
P2 We may treat an animal as if they are conscious or as if they are not conscious.
P3 Treating a conscious being as if it is conscious or as if it is not conscious bears morally significant differences.
P4 Treating an animal as if it is not conscious when it is conscious will (practically) bear morally significant harm.
P5 Treating an animal as if it is not conscious when it is not conscious will bear no morally significant difference.
P6 Treating an animal as if it is conscious when it is not conscious will bear no or negligible morally significant difference.
P7 Treating an animal as if it is conscious when it is conscious will (practically) bear morally significant good, or at the very least will bear no moral significance.
P8 We ought to behave in a way that promotes morally significant good, or at least avoids morally significant harm.
C We ought to treat animals as if they are conscious.

Note that by “practically” I mean that it does not necessarily follow as a logical result, but follows as a real-world likelihood.

The argument assumes that whether we think an animal is conscious or not makes a big difference to the way we ought to treat them. It also assumes that treating them as not conscious will lead to harm. How we flesh out "harm" is going to depend on our moral framework, and I think this argument most obviously fits into a consequentialist paradigm.
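
Read in that consequentialist spirit, the wager can be laid out as a toy decision matrix. The numerical "payoffs" below are placeholders rather than a serious moral calculus, and the comparison uses a simple worst-case (maximin) rule; it's only meant to make the structure of P3-P7 explicit.

```python
# A toy decision matrix for the wager (placeholder values, maximin comparison).
# Keys are (how we treat animals, whether they are in fact conscious).
payoff = {
    ("as_conscious",     True):  +1,   # P7: morally significant good (or at worst neutral)
    ("as_conscious",     False):  0,   # P6: no or negligible difference
    ("as_not_conscious", True):  -1,   # P4: morally significant harm
    ("as_not_conscious", False):  0,   # P5: no morally significant difference
}

def worst_case(policy):
    """The worst moral outcome a policy can lead to, whichever way the facts turn out."""
    return min(payoff[(policy, conscious)] for conscious in (True, False))

# Treating animals as conscious can never turn out worse than the alternative:
assert worst_case("as_conscious") >= worst_case("as_not_conscious")
print(worst_case("as_conscious"), worst_case("as_not_conscious"))   # 0  -1
```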

Regardless, I think the idea is pretty intuitive. If you believe your dog has the capacity for physical and emotional sensation, you are likely to treat her differently than if you think her experience of the world is much the same as a banana's. Within medical testing, we may treat those animals to which we reasonably attribute consciousness with greater caution regarding harmful experiments. We may altogether exclude conscious beings from butchery, or at least any practice that might be painful. More radically, we may believe that any being we regard as conscious should be afforded the same sort of moral attention as humans. What matters is a “significant difference”, and this needs to be examined.

The premises obviously need to be elaborated upon, and I already have my own serious criticisms. Two in particular stand out: the problem of treating consciousness as simple and binary; and the assumption in premise 6 that treating animals as if they are conscious, when in fact they are not, will not result in morally significant harm (e.g. think of potential medical breakthroughs via “painful” animal experimentation or the health benefits of a diet that includes animal protein). I do believe the wager argument has strength to fight back against such criticisms but I don’t think it will come away unscathed. In the near future I’ll look at the argument in a little more detail and start examining these criticisms.   


Sunday 21 April 2013

Positive Indeterminacy Revisited

(I meant to write this post a few months ago, when I was actually studying Merleau-Ponty. Since then, positive indeterminacy has popped up a few more times, in various guises. Hence "revisited".)

Merleau-Ponty introduces the term "positive indeterminacy" in The Phenomenology of Perception, where he uses it to describe visual illusions such as the Müller-Lyer...

Which line is longer?

...and the duck-rabbit. His point is that perception is often ambiguous, and he concludes that we must accept this ambiguity as a "positive phenomenon". Indeterminacy, according to Merleau-Ponty, can sometimes be a feature of reality, rather than a puzzle to be explained.

Is it a duck? Is it a rabbit? Nobody knows!

Positive indeterminacy, then, is the identification of features of the world that are in some sense inherently indeterminate. Quine argues that any act of translation between languages is fundamentally indeterminate, as there will always be a number of competing translations, each of which is equally compatible with the evidence. Of course in practice we are able to translate, at least well enough to get by, but we can never be sure that a word actually means what we think it does. Thus Quine concludes that meaning itself is indeterminate, and that there is no fact of the matter about what a word means.



Quine: a dapper chap

Hilary Putnam comes to similar conclusions about the notion of truth. According to his doctrine of "internal realism", whether or not some statement is true can only be determined relative to a "conceptual scheme", or a frame of reference. Truth is also indeterminate, in that there is no objective fact of the matter about whether or not something is true. Putnam takes care to try and avoid what he sees as an incoherent form of relativism, and stresses that from within a conceptual scheme there is a determinate fact of the matter about truth. Nonetheless, this truth remains in an important sense subjective - it's just that Putnam thinks that this is the best we can hope for.

More recently Dennett has reiterated this kind of "Quinean indeterminacy", with specific reference to beliefs. According to his (in)famous intentional stance theory, what we believe is broadly determined by what an observer would attribute to us as rational agents. In some (perhaps most) situations, there will be no fact of the matter as to which beliefs it makes most sense to attribute. The same goes for other mental states, such as desires or emotions.

Dennett draws attention to Parfit's classic account of the self as another example of positive indeterminacy. There will be cases, such as dementia or other mental illness, where it is unclear what we should say about the continuity of the self. Rather than treating this as a puzzle that we should try and solve, Parfit argues that our concept of self is simply indeterminate, and that there is sometimes no "right" answer.

All of the above cases are much more complex than I have been able to go into here, but they give a taste of the importance of positive indeterminacy. I am most interested in how it can be applied to puzzles in the philosophy of mind, but it seems that it might well be a more fundamental part of how we should think about the world.

Friday 5 April 2013

"What has philosophy done for us..."- Does Philosophy ever make a difference?

“Philosophers never make any difference,” began a recent conversation that never quite happened, which went as follows.

“Yeah I guess you're right”, I replied, thinking of all the time misspent by so many of my philosophy teachers shut away in their secluded studies.

“Philosophy doesn't actually influence anything”, the dialogue continued.

“Yeah totally. Then again”, I hesitated, “I guess there was that Plato chap, the man who arguably shaped the entirety of western intellect for millennia to come and whose ideas shaped the world's biggest religion.”

“Okay. But ignoring a few anomalies, what have philosophers ever done for us?”, they retorted.

Well, if we're going to accept Plato we'll have to allow for Aristotle. He changed history a bit by contributing to all existing academic fields at the time via his philosophical paradigm. He arguably planted seeds for the scientific method and influenced both Christianity and Islam, which sometimes play a part in people's lives even to this day.

Raphael's School of Athens. Some of these dudes may just have changed the course of everything.
In more modern times the odd household name such as Descartes, Rousseau, Locke, Kant or Hume might be credited, for better or worse, for moulding much of the west's current values and institutions. Those who paid attention at school might remember Marx, under whose philosophical system countless revolutions were plotted.

In the 20th century there is Turing, father of computer science, who was arguably philosophically oriented. There's Wittgenstein, whom fellow blogger Bryan Nelson describes as an “unmatched catalysis for creative thought in the 20th century”. In popular culture Russell still frequently materialises. Rawls, Dewey, Simone de Beauvoir, endless philosophically motivated political figures.

Marx. Somewhat influential.

Feminism. Now there's a movement that has affected the lives of millions of people, and if we're going to be generous, we'll have to accept some role played by intellectuals such as Mary Wollstonecraft, Jane Addams, Avital Ronell, Mary Daly...

I suppose if we're going to allow for theologians we might have to accept some minor influence from those dudes like Augustine, Thomas Aquinas, Karl Barth, Gregory of Nazianzus, Maximus the Confessor, Gregory Palamas.

But that's about it. And before you say anything, we're not really talking about the East so Confucius, Buddha, Lao Tzu, Zhuang Zi, Dogen, Avicenna, none of them really count (and they probably didn't achieve much anyway).

Tiresome sarcasm aside, the notion that philosophy has not influenced the course of history quickly comes to seem like a philistine dismissal, and realising its historical potential seems important. Critical readers might argue that many of the most so-called influential philosophers were influential for reasons other than their philosophy, perhaps in spite of their philosophy. However, I think a cursory glance over the biographies of some of the names above quickly weakens that claim. More often than not the philosophy done by these thinkers is integral to the rest of their work. Of course none of this means that philosophy's influence is a good thing, it just affirms its existence. It also does not mean philosophy is always influential. Philosophy is still often, perhaps most often, indulgent and self-contained.

Russell famously makes the appealing claim that philosophy is often the seed for new scientific disciplines (e.g. psychology in the late 19th century, computer science in the 20th, and once upon a time, physics). He says, “...as soon as definite knowledge concerning any subject becomes possible, this subject ceases to be philosophy, and becomes a separate science” (Russell, 1968, 90). This is not a snub, it is a realisation of philosophy's often integral role in constituting practical subjects. As Russell also alludes to, much of philosophy's influence does not come in the formation of grand historical events, though clearly that does happen, but through quieter, subtler, though no less profound, influences on the psychology of individual lives.

Wednesday 3 April 2013

Depression and the Dark Room Problem

Trigger warning: depression, schizophrenia, mental illness

Predictive processing is an exciting new paradigm in computational neuroscience. Its essential claim is that the brain processes information by forming predictions about the world. Depending on who you ask, it's either going to solve everything, or turn out to be relatively uninteresting. I'll maybe discuss it in more detail in a future post, but today I want to focus on just one aspect of the theory.

A central principle driving predictive processing is error minimisation. Each prediction that the brain makes is compared with incoming sensory data, and this generates an "error signal" that reflects any mismatch between the prediction and the data. The brain is then driven to either make a more accurate prediction or modify its environment so as to conform with the inaccurate prediction, in order to minimise this error.
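
As a toy illustration of the error-minimisation idea (and nothing more: real predictive-processing models are hierarchical and probabilistic), here is a single scalar prediction being nudged towards noisy sensory samples. The alternative, "active" route would instead modify the state of the world so that it comes to match the prediction; both routes shrink the same error term.

```python
import random

def minimise_error(prediction=0.0, world_state=20.0, steps=50, learning_rate=0.2):
    """Shrink prediction error by revising the prediction towards noisy samples
    of the world (perceptual inference). The alternative route, acting so that
    world_state comes to match the prediction, would shrink the same error."""
    for _ in range(steps):
        observation = world_state + random.gauss(0, 0.5)   # noisy sensory input
        error = observation - prediction                    # the "error signal"
        prediction += learning_rate * error                 # update the prediction
    return prediction

print(minimise_error())   # converges to roughly 20.0
```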

This leads to the so-called "dark room problem". If all we are driven to do is minimise prediction error, then why don't we just lie absolutely still in a dark room, thus enabling the formulation of a stable, accurate prediction? There are several ways of responding to this problem, but all share a general assumption that it is a problem, and that we aren't ever driven towards dark rooms.

Now, most of the time this is going to be correct, but on first hearing about the dark room problem my reaction was that actually I sometimes do just want to lie in a dark room. I suffer from periodic bouts of depression, and during these depressive episodes a dark room is pretty much all I can cope with. So perhaps whatever mechanism drives us away from dark rooms in everyday life is switched off during depression?



The Dark Cave Problem

This reminds me of an evolutionary theory of depression that I've heard of, which says that back when we were hunter-gatherers it made sense to occasionally withdraw from the world, as a survival mechanism in case of bad weather or other dangerous circumstances. In cases of depression this mechanism is simply over-sensitive or, in the worst cases, always switched on. I'm not sure how much I'm convinced by this theory, but let's assume that there is at least a shred of truth in it.

It also fits well with predictive processing and the dark room problem. Predictive processing has already been applied to the positive symptoms of schizophrenia and other delusions (in the form of "false" error signals), and similarly I think we could say that in some cases the dark room problem simply isn't a problem. Depression might be the result of a mechanism that shuts off whatever it is that drives us out into the world, with the result that we are content to minimise error by lying in a dark room.

On the other hand, depression and other mental illnesses are extremely complex, and I remain suspicious of any theory that tries to tell one simple story about them. Better perhaps to treat the dark room as just one of many contributing factors, or even just a useful metaphor.

Sunday 17 March 2013

Eyes, Bunnies, Neanderthal Extinction


Every week scientists seem to change their mind regarding the cause of the extinction of neanderthals. This week it was their big dreamy eyes, the other day it was rabbits, a while back it was their rubbish childhoods.

In truth, I'm sure most researchers aren't so much radically changing their minds as adding nuances to complex theories. The problem is almost certainly down to hyperbolic science journalism.

Silent killer.
 From what I understand, there are still a few key competing theories regarding neanderthal extinction: interbreeding, disease, genocide or some general competitive advantage possessed by humans. It's this last factor that causes trouble. As soon as we begin to speculate about the details of a plausible but vague competitive advantage, we open the doors to any hypothesis that sounds feasible. Superior communication? Diverse tools? More effective hunting strategies? Surely any and all of these are possible, but how would we come to any sort of sensible and testable comparison?

Giant eyes, killer bunnies; these guys had a hard time.
This is a problem that crops up frequently in evolutionary psychology, where we look for evolutionary reasons for often highly specific and complex psychological traits and behaviours. In doing so we run the danger of hysterical hypothesising: rapidly drawing conclusions that are frequently unverifiable. That's not to say there isn't an evolutionary explanation for those traits, but rather that our conclusions need to be moderate and uncertain. Likewise, I don't doubt the possibility of human competitive advantages over neanderthals; as a lay person, who am I to dispute this? Nor do I doubt the possibility and utility of comparing, to some extent, the plausibility of different adaptations as explanations for human survival. However, I find the range of such assured headlines suspicious. I'm sure most scientists in the field take a nuanced approach that avoids such certainty.

Friday 15 March 2013

Things that are (probably) magic

There are some philosophical questions that seem utterly unanswerable from within a naturalistic framework. These are questions that science just doesn't appear to have the capacity to investigate. These are things that, at least from where we're currently standing, appear to be magic.
  1. Consciousness: Why is there anything at all that it feels like to be a person, or a dog, or a bat? Where does subjectivity fit into the naturalistic framework?
  2. Free Will: Naturalism commonly assumes a causally deterministic universe (or at best, a quantum indeterministic universe, which is hardly an improvement). How then can we freely choose to act?
  3. Morality: How can anything possess inherent value? What does it mean for something to be right or wrong if all that exists is the physical world?
There's an obvious sense in which all three of these "magic things" are linked. Moral action, at least under most systems, requires a degree of free will, and free will would seem to require a degree of conscious awareness. So maybe we should say that there's just one magic thing, perhaps a transcendent soul of some description.

This is all a bit tongue-in-cheek, although there's a serious point to it as well. Rather than just discarding these as "magic things" that naturalistic philosophy cannot investigate, it might be better to simply regard them as ill-formed questions. In fact, scientific progress is being made on the subject of consciousness, but only by breaking it up into a number of smaller, related questions about attention, perception, and so on. Similarly, questions about the cognitive implementation of agency are tractable, even if the fundamental nature of free will is not. And whilst we might not be able to determine why something is right or wrong, we can ask more practical questions about how ethical principles should be applied in the world.

So maybe we should just accept that, at least for the time being, some things appear to be magic, and get on with answering the questions that we can answer.

Sunday 3 March 2013

Life Without Philosophy

Last Friday I attended a talk by Derek Ball (from the University of St. Andrews), titled "Philosophy Without Truth". His basic claim was that even if philosophical theories were never true, we might nonetheless have reason to accept them. His argument followed the structure of arguments for anti-realism in the philosophy of science, appealing to, amongst other things, the failure of previous philosophical theories and the fact that some theories might actually contradict themselves if they were true.

I think that the most interesting point came out in the discussion at the end, where someone suggested that we might want to go for a "pluralism-plus" with regard to the aims of different philosophical disciplines. This would mean that not only might different disciplines have different aims (a possibility that Ball mentioned towards the end of his talk), but that even within a given discipline there might be a number of different competing aims, truth being only one of them.

What might some of those aims look like?

Truth - Obviously we might think it's important that a philosophical theory is true (whatever that might mean).

Scientific Progress - Related to the above, some disciplines/schools see philosophy as being continuous with science, in which case (presuming scientific realism!) they might well aim at truth.

Instrumental Value - On the other hand, we might only care about a theory being in some way "useful", whether that be to scientific progress or in some ethical sense. Pragmatism (as a global description) perhaps falls into this category.

Clarity - Even if it doesn't achieve anything else, a philosophical education certainly enables one to think and reason clearly, and could be valuable for that reason alone.

Being "Interesting" - Towards the end of the discussion I flippantly commented that if we were only motivated by being interesting, we'd be better off becoming fiction writers, but I do actually agree that there can be an aesthetic value to philosophy.

Being Fun - A bit like being interesting, but somewhat broader and perhaps more liable to result in incoherent post-modern ramblings.1

Existential Necessity - Not an aim so much as a motivation, but philosophy asks some pretty mind-bending questions, and perhaps at some level simply pursuing those questions is a necessary component of a fulfilling life.

Winning - The aim of philosophy is to disprove the argument of others while working within the rules of logic.2

I think that all of these are important, and all in some sense contribute to my reasons for pursuing a philosophical career. Some are definitely more important than others though, and if I didn't think that there was at least some instrumental value to what I was doing, I probably wouldn't carry on doing it. On the other hand, I find it hard to imagine a life without philosophy, so perhaps I haven't got much choice in the matter.

This list is by no means comprehensive, so please let me know if you can think of any other aims of philosophy!

1. Inspired by a comment from Krzysztof Dołęga, although he is not responsible for the suggestion that incoherent post-modern ramblings are "fun".
2. Krzysztof also suggested this, albeit as an example of "fun".

Monday 18 February 2013

Philosophy of [Mind/Psychology/Cognitive Science]

Depending on what mood I'm in, and/or who I'm talking to, I might describe myself as studying either philosophy of mind, or psychology, or cognitive science. So what's the difference? Whilst the terms are often used interchangeably, it seems to me that they each have a slightly different emphasis:

Philosophy of Mind: This covers the more traditional metaphysical questions regarding what the mind is, whether it's distinct from the physical, and so on. Up until the mid-20th century this was pretty much your only option. Of the three, this is the most likely to be conducted from the 'armchair', with no reference to empirical evidence.

Philosophy of Psychology: This focuses more on methodological questions about actual scientific practice within psychology, and might be regarded as a sub-discipline of philosophy of science. I would also include 'social cognition', at least as I've been taught it, within this category. Philosophy of psychology is, by necessity, closely engaged with ongoing psychological and neuroscientific research.

[Philosophy of] Cognitive Science: Here the 'philosophy of' prefix is arguably unnecessary, as cognitive science is essentially a fusion of psychology, neuroscience, linguistics, philosophy, and computer science. Since its genesis in the 1940s/50s this has become an increasingly dominant paradigm. It tends to focus on relatively fine-grained questions about the structure and instantiation of cognition, and attempts to replicate this in artificial intelligence. Historically cognitive science has tended to be committed to some form of the computational theory of mind, but with the advent of embodied cognition and anti-representationalism this has begun to change.

Now there's obviously a huge amount of overlap between these fields, and it's pretty much impossible to study them in isolation from one another (although some philosophers of mind certainly attempt to do this). Personally I favour the latter two at the moment, yet I believe that it is important to remain aware of the more fundamental issues investigated by classic philosophy of mind.

This is an extremely subjective and provisional analysis, so please let me know if you disagree with my categorisations!

Sunday 17 February 2013

¡Ai, caramba! Let's not jump to conclusions about chimp working memory

For years, Ai the chimpanzee has been stunning researchers with feats of memory that surpass those of her nearest cousins. Ai, part of the Ai project at Kyoto University, is famously able to remember the location of a series of numbers on a screen within a fraction of a second, and to recall them in their correct sequence (1-19), where it would take you or me in the region of a few seconds. It's really worth checking out.

The Ai project has produced many great papers relating to chimpanzee cognition and behaviour over the years and occasionally the popular press picks up on them. Recently The Independent newspaper declared that, based on research with Ai and her son Ayumu, “Chimpanzees have faster working memory than humans”.

Whilst I am nowhere near qualified to make any sensible judgement on this research, I have to share my hesitation in jumping to such conclusions. In short, I am sceptical that working memory is so simple and binary that from such recall experiments we can say, unequivocally, that chimps have it better than us. Is working memory not involved in all reasoning and comprehension? Is working memory not involved in all verbal and non-verbal communication? Processes involved in these tasks seem, at least in part, more complex in humans- could this not be a relevant factor?

The article goes on to claim that:

Professor Matsuzawa suggested that chimps have developed this part of their memory because they live in the “here and now” whereas humans are thinking more about the past and planning for the future.

What does living in the “here and now" mean exactly? If a human individual became better at living in the “here and now” would their working memory improve? 

Ai getting down to monkey business...sigh
It seems to me that all the experiment The Independent cites shows us is that chimps are better at particular recall tasks, and that the working memory processes involved in such tasks are more efficient.

It may be prudent from here to theorise that the reason for this is some trade-off in humans between “present” memorisation and recall capacity and other reflective, future-oriented capacities (which surely involve working memory at some level). This is far more conservative than claiming that chimps unequivocally have better working memory because they “live more in the present”.

I'm guessing part of the problem is sloppy science journalism. It would be interesting to hear in more depth what conclusions the team at Kyoto draw about working memory.

Sunday 3 February 2013

The evolutionary implausibility of outlandish alien cognition

Contemporary arguments for (and against) the extended mind hypothesis (e.g. Sprevak 2009) regularly invoke hypothetical aliens with outlandish forms of internal cognition. Sprevak asks us to imagine an alien that stores memories "as a series of ink-marks" (ibid: 9). This is meant to be functionally equivalent to the case where someone 'stores' their memories in an external diary. The point is that, in order to preserve multiple realisability and the Martian intuition, we are forced to accept that both the alien and the diary-user constitute cognitive systems, with the only difference being that the latter extends beyond the biological brain.

Baby Martian?

In another example, this time intended as a reductio ad absurdum of functionalism and the extended mind, Sprevak proposes an alien with an innate, internal cognitive sub-system that calculates the exact date of the Mayan calendar (ibid: 21). Again, his point is that there seems to be no functional difference between this sub-system and the one that he claims to have installed on his office computer1. Ergo, his extended mind includes this implicit knowledge of the Mayan calendar.

Ignoring for the moment any questions about the extended mind per se, we should question the plausibility of these kinds of aliens. In each case, but especially the second, it seems that our aliens would possess remarkably over-specialised brains. The ink-mark memory system seems cumbersome, and the Mayan calendar calculator is an extremely niche-interest device, one that would probably never see any use. In both cases it is difficult to imagine how or why such a cognitive architecture would have evolved.

This doesn't constitute a counter-argument, as regardless of any evolutionary implausibility Sprevak's aliens serve their rhetorical purpose. However it's interesting to note that much of Clark's own use of the extended mind is intended to highlight the way in which human brains off-load these kinds of specialised skills on to the environment (see his 2003), meaning that we are precisely the kind of generalists that these aliens aren't. Perhaps it's important not to get too caught up with outlandish aliens when we consider the extended mind, and return to the much more homely (and relevant!) examples which it was originally intended for.


1. I have a meeting with him in his office tomorrow, so I'll try and check if this is true...

References
  • Clark, A. 2003. Natural Born Cyborgs. Oxford: OUP.
  • Sprevak, M. 2009. "Extended cognition and functionalism." The Journal of Philosophy 106: 503-527. Available at (page references are to this version): http://dl.dropbox.com/u/578710/homepage/Sprevak---Extended%20Cognition.pdf

Sunday 27 January 2013

Meaning is User Relative

The dominant paradigm in cognitive science and philosophy of mind is the computational theory of mind (CTM). In its simplest form this theory states that the mind is essentially a device that takes inputs, performs a series of operations on them, and gives us an output. This process is known as a computation, and it is also what the digital computer sitting in front of you does. This is obviously no coincidence, as CTM and computer science have developed alongside one another since the 1950s.

Cue hackneyed image of 'mind as computer'

One major criticism of CTM is that it seems unable to account for meaning or semantic content. Any given computational process can be fully described in terms of the symbols that it operates on, the syntax, along with the rules that govern those operations. Whilst we do bestow meaning on to the symbols that our digital computers operate on, that meaning appears to be entirely relative to us, the user. It does not appear to be inherent to the symbols themselves, and in fact there is an infinite range of interpretations that can be given to any set of symbols (Pylyshyn 1986: 40). The worry is that if the mind is a computer, there would be no (inherent) semantic content to our thoughts.
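
The interpretation point is easy to demonstrate. The same four bytes, chosen arbitrarily here, can be read as an integer, as a floating-point number, or as text, and nothing in the bytes themselves settles which reading is the right one.

```python
# The same symbols under three different interpretations. The syntax is fixed by
# the bytes; the meaning is supplied entirely by whoever reads them.
import struct

data = b"\x42\x28\x00\x00"

as_int   = struct.unpack(">I", data)[0]   # read as a big-endian unsigned integer
as_float = struct.unpack(">f", data)[0]   # read as a big-endian 32-bit float
as_text  = data.decode("latin-1")         # read as Latin-1 text

print(as_int)            # 1109917696
print(as_float)          # 42.0
print(repr(as_text))     # 'B(\x00\x00'
```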

This might turn out to be correct, which would mean that our mental states only mean anything relative to an observer. My mental representation of the blue sky outside of my window might be interpreted entirely differently by an alien scientist scanning my brain. To it, that mental state might simply represent a complex calculation, or a nostalgic yearning for the Sirius system. This, in fact, is a major plot point in The Hitchhiker's Guide to the Galaxy, where (SPOILER ALERT) the Earth turns out to be a giant supercomputer designed to calculate the Ultimate Question of Life, The Universe and Everything - the answer to which I will not reveal to you at this time.

This was the first attempt.

So what about my own interpretation of that mental state as representing a blue sky? That would have to be relative to me, as the 'user' of my own mental computer. What exactly this means, or if it even makes sense to say that I could be interpreting my own mental states, gets very complicated, very quickly. Aside from anything else, it raises questions about the nature of consciousness and the self, both of which are extremely contentious topics. Still, I see nothing wrong with saying that semantic content might be entirely user-relative, both in the case of the digital computer and that of the brain-bound one.


References
  • Adams, D. 1979. The Hitchhiker's Guide to the Galaxy. Pan Books.
  • Pylyshyn, Z. 1986. Computation and Cognition. MIT Press.