Thursday 1 August 2013

Alien Intelligence

This is the first draft of the text for a poster that I'll be presenting in September. It's intentionally quite simplistic, and the colours are something that I'm trying out in order to draw attention to key words and phrases. Square brackets denote the eventual location of images. Comments and feedback welcome and encouraged!

I argue that sophisticated embodied robots will employ conceptual schemes that are radically different to our own, resulting in what might be described as "alien intelligence". Here I introduce the ideas of embodied robotics and conceptual relativity, and consider the implications of their combination for the future of artificial intelligence. This argument is intended as a practical demonstration of a broader point: our interaction with the world is fundamentally mediated by the conceptual frameworks with which we carve it up.

Embodied Robotics
In contrast with the abstract, computationally demanding solutions favoured by classical AI, Rodney Brooks has long advocated what he describes as a "behavior-based" approach to artificial intelligence. This revolves around the incremental development of relatively autonomous subsystems, each capable of performing only a single, simple task, but combining to produce complex behaviour. Rather than building internal representations of the world, his subsystems take full advantage of their environment in solving tasks. This is captured in Brooks' famous maxim, "The world is its own best model" (1991a: 167).

[PICTURE OF ALLEN]

A simple example of this approach is the robot "Allen", designed by Brooks in the late 1980s. Allen contained three subsystems: one to avoid objects, one to initiate a "wandering" behaviour, and one to direct the robot towards distant places. None of the subsystems connects to a central processor; each instead takes control when circumstances demand it. So if an obstacle looms while the robot is moving towards a distant point, the first subsystem will take over from the third and manoeuvre the robot around the obstacle. Together these subsystems produce flexible, seemingly goal-oriented behaviour at little computational cost. (1986; 1990: 118-20.)
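
To make the arbitration between these subsystems a little more concrete, here is a minimal sketch (in Python) of how an Allen-like control loop might be organised. It is purely illustrative: the thresholds, tick interval and command names are my own inventions, and in Brooks' actual architecture the layers run concurrently and interact through suppression and inhibition links rather than through a central dispatcher like the one below.

    import random

    def control_step(obstacle_near, distant_goal_visible, tick):
        """One decision per tick, with the avoidance reflex taking priority."""
        # Obstacle avoidance takes over whenever something looms.
        if obstacle_near:
            return "swerve_around_obstacle"
        # Otherwise head for a distant visible place, if one is in view.
        if distant_goal_visible:
            return "move_towards_goal"
        # Failing that, wander: pick a fresh random heading now and then.
        if tick % 20 == 0:
            return "set_heading_{}".format(random.randrange(360))
        return "keep_going"

    # Example run: wandering, then goal-seeking, then avoidance taking over.
    for tick, (near, goal) in enumerate([(False, False), (False, True), (True, True)]):
        print(tick, control_step(near, goal, tick))

Even this toy version captures the key point: there is no world model anywhere, just sensor readings mapped more or less directly onto actions.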

Whilst his creations were initially quite basic, Brooks' approach has shown promise, and it seems plausible that embodied robotics could eventually produce sophisticated, even intelligent, agents. If it does, these agents are unlikely to replicate human behaviour precisely. Brooks has stated openly that he has "no particular interest in demonstrating how human beings work" (1991b: 86), and his methodology relies on adopting whatever solutions work in the real world, regardless of how faithful they are to human cognition. It is for this reason that I think embodied robotics is a particularly interesting test case for the idea of conceptual relativity.

Conceptual Relativity
A concept, as I understand it, is simply any category that we use to divide up the world. In this respect I differ from those who restrict conceptual ability to language-using humans, although I do acknowledge that linguistic ability allows for a distinctly broad range of conceptual categories. To make this distinction clear we could refer to the non-linguistic concepts possessed by infants, non-human animals, and embodied robots as proto-concepts - I don't think the label matters much, so long as everybody is on the same page.

Conceptual relativity is simply the idea that the concepts available to us (our "conceptual scheme") might literally change the way that we perceive the world. Taken to its logical extreme, this could result in a form of idealism or relativism, but more moderately we can simply acknowledge that our own perceptual world is not epistemically privileged, and that other agents might experience things very differently.

[HUMAN PARK/BEE PARK]

Consider a typical scene: a park with a tree and a bench. For us it makes most sense to say that there are two objects in this scene, although if pushed we might admit that these objects can be further decomposed. For a bee, on the other hand, the flowers on the tree are likely to be the most important features of the environment, and the bench might not even register at all. Our conceptual schemes shape our perceptual experience.

Alien Intelligence
Brooks (1991b: 166) describes how an embodied robot divides the world according to categories that reflect the task, or tasks, for which it was designed. My claim is that this process of categorisation constitutes the creation of a conceptual scheme that might differ radically from our own. Allen, introduced in the first section, inhabits a world that consists solely of "desirable", far-away objects and "harmful", nearby objects.

[ALLEN'S WORLD/RODNEY'S WORLD]

A more sophisticated embodied robot, which we might call Rodney, could inhabit a correspondingly more sophisticated world. Assume that Rodney has been designed to roam the streets and apprehend suspected criminals, whom he identifies with an advanced facial recognition device. Rodney is otherwise similar to Allen - he has an object-avoidance subsystem and a "wandering" subsystem. Rodney divides human-shaped features of his environment into two conceptual categories: "criminal" and "other". He completely ignores the latter category, and we could imagine that its members don't even enter into his perceptual experience. His world consists of attention-grabbing criminal features, objects to be avoided, and very little else.
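
To give a feel for what such a sparse conceptual scheme might look like in practice, here is a hypothetical sketch of how a Rodney-like robot could carve a scene into its own categories. The percept format, category names and face_matcher helper are all invented for the example; no claim is made about how a real system would implement any of this.

    # Illustrative only: a task-defined conceptual scheme for Rodney.
    def categorise(percept, face_matcher):
        """Map a raw percept onto one of Rodney's categories, or onto nothing at all."""
        if percept["kind"] == "human_shape":
            # Faces either match the watch-list or vanish from Rodney's world.
            return "criminal" if face_matcher(percept["face"]) else None
        if percept["distance"] < 0.5:
            return "obstacle"   # something to be avoided
        return None             # everything else simply doesn't register

    def perceive(scene, face_matcher):
        """Rodney's 'experience' of a scene: only task-relevant categories survive."""
        return [c for c in (categorise(p, face_matcher) for p in scene) if c is not None]

    watchlist = {"face_042"}
    park = [
        {"kind": "object", "distance": 4.0},                           # the tree
        {"kind": "object", "distance": 0.4},                           # the bench, up close
        {"kind": "human_shape", "distance": 3.0, "face": "face_007"},  # a passer-by
        {"kind": "human_shape", "distance": 6.0, "face": "face_042"},  # on the watch-list
    ]
    print(perceive(park, lambda face: face in watchlist))  # -> ['obstacle', 'criminal']

The park that we see as a tree, a bench and two people shows up, for Rodney, as one obstacle and one criminal.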

Of course, there's no guarantee that the embodied robotics program will ever achieve the kind of results that we would be willing to describe as intelligent, or even if it does, that such intelligences will be radically non-human. A uniquely human conceptual apparatus might turn out to be an essential component of higher-order cognition. Despite this possibility, I feel that so long as a behaviour-based strategy is pursued, it is likely that embodied robots will develop conceptual schemes that are alien to our own, with a resulting impact on their perception and understanding of the world.

References
  • Brooks, R. 1986. "A Robust Layered Control System for a Mobile Robot." IEEE Journal of Robotics and Automation RA-2: 14-23. Reprinted in Brooks 1999: 3-26.
  • Brooks, R. 1990. "Elephants Don't Play Chess." Robotics and Autonomous Systems 6: 3-15. Reprinted in Brooks 1999: 111-32.
  • Brooks, R. 1991a. "Intelligence Without Reason." Proceedings of the 1991 International Joint Conference on Artificial Intelligence: 569-95. Reprinted in Brooks 1999: 133-86.
  • Brooks, R. 1991b. "Intelligence Without Representation." Artificial Intelligence Journal 47: 139-60. Reprinted in Brooks 1999: 79-102.
  • Brooks, R. 1999. Cambrian Intelligence. Cambridge, MA: MIT Press.

Sunday 28 July 2013

Embodied AI and the Multiple Drafts Model

In "Intelligence without Representation" (1991), Rodney Brooks lays out his vision for an alternative AI project that focuses on creating embodied "Creatures" that can move and interact in real-world environments, rather than the simplified and idealised scenarios that dominated AI research in the 60s and 70s. Essential to this project is the idea of moving away from centralised information processing models and towards parallel, task-focused subsystems. For instance, he describes a simple Creature that can avoid hitting objects whilst moving towards "distant visible places" (1991: 143). Rather than attempting to construct a detailed internal representation of its environment, this Creature simply consists of two subsystems, one which moves it towards distant objects and another that moves it away from nearby objects. By decomposing this apparently complex task into two simple ones, Brooks is able to find an elegant solution to a difficult problem.

Brooks and a robot having a hug

His description of this process is particularly interesting:
Just as there is no central representation there is not even a central system. Each activity producing layer connects perception to action directly. It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviors. Out of the local chaos of their interactions there emerges, in the eye of an observer, a coherent pattern of behavior. There is no central purposeful locus of control. (1991: 145)
It is strikingly similar to Dennett's account of consciousness and cognition under the Multiple Drafts Model (see his 1991). Perhaps that's not so surprising when you consider that both Dennett and Brooks were inspired by Marvin Minsky, but it does lend some theoretical credence to Brooks' work... as well as perhaps some practical clout to Dennett's.

  • Brooks, R. 1991. “Intelligence without representation.” Artificial Intelligence, 47: 139-59.
  • Dennett, D. 1991. Consciousness Explained. Little, Brown and Company.

Tuesday 21 May 2013

(Immature) cognitive science and explanatory levels

When I was working on cognitive extension last year, I was particularly taken by the suggestion that cognitive science is not yet a "mature science" (Ross & Ladyman 2010). The point was that criticising a theory for failing to meet some intuitive "mark of the cognitive" presupposes that we already have a good idea of what such a mark might look like. In fact cognitive science is still mired in metaphorical and imprecise language, making it conceptually unclear what we are even meant to be studying.

These guys lack the mark of the cognitive.

Bechtel (2005) makes a similar point, although he focuses on the level at which cognitive-scientific explanation is aimed. Typically we begin with a characterisation of a phenomenon at either the neural or the personal level, whilst seeking an explanation at some intermediate level (say, the computational). The problem is that we have yet to settle on a clearly defined level that everyone agrees upon. Bechtel contrasts this with biological science, which appears to have gone through a similar struggle during the 19th century.

This helps explain why there is currently so much debate over what kinds of answers we should even be seeking in cognitive science. Fodor rejects connectionism as merely specifying a certain kind of implementation, and in response he is accused of abstracting away from what really matters. There's no easy way to solve this problem, although the mechanistic approach that Bechtel (and others) have advocated does seem promising. Ultimately we'll have to wait for cognitive science as a whole to settle (or splinter), but this approach does at least have the virtue of conforming to (apparent) scientific practice.

More on this next time, where I will be attempting to summarise the mechanistic approach to scientific explanation...

  • Bechtel, W. 2005. "Mental Mechanisms: What are the operations?" Proceedings of the 27th annual meeting of the Cognitive Science Society. 208-13.
  • Ross, D. & Ladyman, J. 2010. "The Alleged Coupling-Constitution Fallacy and the Mature Sciences." In Menary (ed.), The Extended Mind. 155-65.

Sunday 19 May 2013

Two New Approaches to Cognitive Extension

(I wrote this way back in the summer, then for some reason decided not to publish it. My views have moved on somewhat since then, but hopefully some of this is still worth reading - Joe.)

The most recent issue of Philosophical Psychology (25:4) features a pair of articles on cognitive extension, each exploring a different approach to the theory. Both articles attempt to introduce a principled way of limiting cognitive extension, a problem that has been at the heart of the debate since it began in 1998. I wrote my undergraduate dissertation on the extended mind (and the narrative self), and whilst I'm more sceptical now than I was when I began, I still don't think there's any principled way of limiting cognition entirely to the physical brain. The most famous opponents of extended cognition, Fred Adams & Ken Aizawa, and Robert Rupert, try to avoid this conclusion by introducing various necessary limitations on "the bounds of cognition".

Shannon Spaulding agrees that the bounds of cognition should be limited, but argues that the strategy of "offering necessary conditions on cognition [that] extended processes do not satisfy" is misguided (2012: 469). She mentions Adams & Aizawa and Rupert as proponents of this strategy, and focuses on the former's attempt to identify a necessary "mark of the cognitive" (Adams & Aizawa 2008: 10) that would legitimately restrict cognition to the brain. She finds this attempt problematic primarily because it is inherently question-begging: any necessary condition that opponents of extended cognition can come up with will be based on precisely the kinds of current cognitive-scientific practice that proponents of extended cognition are opposed to (Spaulding 2012: 473).

Instead she proposes that critics of cognitive extension should challenge the theory on its own terms. This means demonstrating that there is, in practice, insufficient parity between intra-cranial and trans-cranial processes for them to ever form an extended cognitive system, even at the coarse-grained level that proponents of extended cognition tend to focus on (ibid: 480-1). At the fine-grained level, she focuses on functional organisation and integration, which has the advantage of being familiar territory to many who support cognitive extension. Here she points to several obvious differences in the way intra- and trans-cranial processes function. She identifies the coarse-grained level with "folk psychological functional roles" (ibid: 483), where she again points to several obvious differences that might count against cognitive extension.

Spaulding's rebuttal of necessary-condition-based arguments against cognitive extension is simple and compelling. All of these arguments base their conditions in current cognitive science, and one of the core points that extended cognition seeks to make is that cognitive science must become a wider, more inclusive discipline than it is now. Ross & Ladyman (2010) make a similar point: cognitive science is not a mature discipline, and what kinds of conditions will eventually come to define it is precisely what is at stake in the debate over cognitive extension. For the most part I also agree with Spaulding's approach to assessing extended cognition. By accepting many of the initial premises of extended cognition, including the "prima facie" possibility that some trans-cranial process satisfies her conditions (2012: 481), she allows for a more balanced debate, as well as for the possibility that some cognitive processes might be extended whilst others aren't.

Where I disagree is in the details of the examples that she gives, and perhaps in particular with the example that she chooses to focus on throughout: that of Otto and Inga, originally introduced by Clark & Chalmers (1998). I'm starting to think that this example, what we might call the ur-example, has outlived its usefulness. Much of the debate over extended cognition has, almost incidentally, focused exclusively on it, and as a defender of extended cognition I think we might be better off coming up with some new examples. Where Otto and Inga have failed, something else might well succeed. In particular I think that social extension, involving as it does the interaction between two (or more) uncontroversially cognitive systems, might be a far more productive source of examples than simple 'material' extension. Spaulding focuses on this particular (admittedly paradigmatic) example of cognitive extension, but hopes that her argument will achieve similar results with others (2012: 488, en3). Whether or not this is the case, I certainly agree with her when she states that "we must proceed on a case-by-case basis" (ibid: 487).

Rather than arguing directly for or against cognitive extension, Tom Roberts focuses on setting a "principled outer limit" to cognitive extension, based on tracking a mental state's causal history (2012: 491). He sets this out as contrasting with previous arguments for cognitive extension, which have by and large been "ahistorical", focusing on a state's effects rather than its causes (ibid: 492). He rejects Clark & Chalmers' original suggestion that external cognitive states must be subject to prior conscious endorsement (a kind of historical constraint) for much the same reason that they themselves raise: it risks disqualifying uncontroversial cognitive states such as subliminally acquired memories (ibid: 495).

Instead Roberts pursues a theory of cognitive ownership, arguing that a subject must take responsibility for an "external representational resource" if it is to become part of their extended cognitive system (ibid: 496). Responsibility in this sense requires that (for example) a belief is acquired in a certain meaningful way, and that an "overall consistency and coherency" is maintained between one's beliefs (ibid: 496-9). This, Roberts hopes, will exclude more radical cases of extension without dismissing extension outright. He concludes by admitting that such a criterion might generate an area of vagueness, but suggests that this is not necessarily such a bad thing, and that we will nonetheless find clear cases of extension (or not).

I'm sympathetic to Roberts' argument, and in particular the attempt to give a principled boundary to cognitive extension without dismissing it entirely. However I've never been entirely convinced by the historical accounts of mental representation that he draws upon, and it's also not clear whether this kind of argument would apply to cognitive extension in general, or only specifically to the extension of beliefs. Admittedly, much of the extended mind literature has focused on the extension of beliefs, but in principle it might be possible for other cognitive functions, such as perception or problem solving, to be extended as well.

I'm also wary of relying on any concept of belief ownership, implying a central and distinct individual to do the owning. This is perhaps a more esoteric concern, but at the very least I think it's worth considering what exactly it is that does the owning when you're considering extended cognitive systems that might well involve the extension of whatever 'self-hood' is.

No pictures in this one, sorry.

  • Adams, F. and Aizawa, K. 2008. The Bounds of Cognition. Oxford: Blackwell.
  • Clark, A. and Chalmers, D. 1998. "The Extended Mind." Analysis 58: 7-19. Reprinted in Menary (ed.) 2010: 27-42.
  • Menary, R. (ed.) 2010. The Extended Mind. Cambridge, MA: MIT Press.
  • Roberts, T. 2012. "Taking responsibility for cognitive extension." Philosophical Psychology 25(4): 491-501.
  • Ross, D. and Ladyman, J. 2010. "The Alleged Coupling-Constitution Fallacy and the Mature Sciences." In Menary (ed.) 2010: 155-166.
  • Spaulding, S. 2012. "Overextended cognition." Philosophical Psychology 25(4): 469-90.

Sunday 5 May 2013

Is natural language a high-level programming language?

If the mind is a computer, then there must be something in the brain that corresponds to the low-level strings of data (the machine code) that computing mechanisms manipulate. This machine code provides the basic structure for everything that a computer is able to do. In electronic computers it is implemented in the flow of electricity across circuits. In the brain, it might be implemented similarly, in the flow of electricity across neurons.

Can you read this?

What exactly it means (or does) will depend on the computing mechanism in question, but even granted this information it is incredibly difficult (and time-consuming) for people to program in machine code. Because of this, programmers typically make use of a hierarchy of programming languages. Each language is an abstraction of those beneath it, eventually bottoming out in the machine code itself. A programmer will write code in whatever language s/he finds most accessible, and once s/he is done it will be translated into machine code by a compiler.
This is basically the plot of Neal Stephenson's Snow Crash...
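
To make the idea of descending a level concrete, Python's standard dis module will show you the bytecode that an innocuous high-level line compiles down to. (Bytecode is an instruction set for a virtual machine rather than true machine code, and the exact opcodes vary between Python versions, but the layering is the point.)

    import dis

    def greet(name):
        return "Hello, " + name

    dis.dis(greet)
    # Prints something along the lines of:
    #   LOAD_CONST   'Hello, '
    #   LOAD_FAST    name
    #   BINARY_ADD              (BINARY_OP on newer versions)
    #   RETURN_VALUE

None of this tells us anything about brains directly, of course, but it does show how far an expression we find readable sits above the instructions that actually get executed.
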
Similarly, it seems likely that the basic code used by a computational brain could be incredibly difficult for us to decipher. Contrary to Fodor's fabled language of thought, there doesn't seem to be any reason why (at the computational level of description) the brain should operate on natural language. Nonetheless, there does seem to be an intimate relationship between the brain and natural language. We obviously produce language whenever we speak, and (perhaps less obviously) language can exert a powerful influence on how we think and behave. In a quite literal sense, it could be seen as re-programming the mind. So if the mind is a computer, then it might make sense to think of natural language as (among other things) a high-level programming language.

Note added 10.05.13: Apparently Fodor beat me to it. Piccinini writes that "Fodor likened human public languages to high level programming languages, and the human LOT to a computer's machine language" (2004: 387). I haven't found the original reference yet, but I think it's in The Language of Thought somewhere.

  • Piccinini, G. 2004. "Functionalism, Computationalism, and Mental Contents." Canadian Journal of Philosophy, 34/4: 375-410.

Tuesday 30 April 2013

Hedge(hog)ing Your Bets: Animal Consciousness, Ethics and the Wager Argument


I want to begin fleshing out an argument I've been mulling over. It’s far from a comprehensive thesis. Rather, I want to use this blog to sketch out some preliminary ideas. The argument takes off from the notion that whether or not animals are conscious informs the importance of human-animal interaction and dictates the course of animal ethics.

A hedgehog struggling to remain conscious... 
I want to explore the idea that treating animals as if they are conscious carries moral weight from the perspective of a cost-benefit analysis. The “wager argument” starts with the premise that we have a choice to treat animals either as if they are conscious or as if they are not. I will assume for now that consciousness includes the capacity to feel physical and emotional sensations, such as pain and pleasure, from a familiar first-person perspective (I’m strategically evading the problem of defining consciousness for now, but I’m fully aware of its spectre - see below).

Animal's wagering. Not what I'm talking about.
The argument looks something like this: you are better off treating animals as if they are conscious beings, because if they are indeed conscious beings you have done good, but if they are not conscious beings then you have lost nothing. Alternatively, if you treat animals as if they are not conscious, and they are, you have caused harm. It is better to hedge your bet and assume animals are conscious.

To paraphrase Pascal, the argument says “if you gain you gain much, if you lose you lose little”. With Pascal’s wager your gain is something like eternal life, and the loss is avoidable annihilation. Some might also include the avoidance of, or progression to, hell (though Pascal himself never mentions hell). For us, the gain is a better world, or the avoidance of a worse one.

Pascal.  I'll wager he Blaised his way through academia... (sorry).

Here's the argument in boring step-by-step premises:

P1 An animal is a being that is conscious or is not conscious.
P2 We may treat an animal as if they are conscious or as if they are not conscious.
P3 Treating a conscious being as if it is conscious or as if it is not conscious bears morally significant differences.
P4 Treating an animal as if it is not conscious and it is conscious will (practically) bear morally significant harm.
P5 Treating an animal as if it is not conscious and it is not conscious will bear no morally significant difference.
P6 Treating an animal as if it is conscious and it is not conscious will bear no, or negligible, morally significant difference.
P7 Treating an animal as if it is conscious and it is conscious will (practically) bear morally significant good - or at the very least will bear no moral significance.
P8 We ought to behave in a way that promotes morally significant good, or at least avoids morally significant harm.
C We ought to treat animals as if they are conscious.

Note that by “practically” I mean that it does not necessarily follow as a logical result, but follows as a real-world likelihood.
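
For what it's worth, the structure of P3-P7 can be laid out as a small decision matrix. The sketch below is only meant to make that structure visible: the numerical "payoffs" are placeholders with no independent justification, and the maximin comparison is just one way of reading the wager (weighting the outcomes by credences would be another).

    # Toy payoff matrix for the wager; the numbers are placeholders.
    payoffs = {
        # (how we treat them, whether they are in fact conscious): moral outcome
        ("as_conscious",     True):   1,   # P7: morally significant good (or at worst neutral)
        ("as_conscious",     False):  0,   # P6: no, or negligible, difference
        ("as_not_conscious", True):  -1,   # P4: morally significant harm
        ("as_not_conscious", False):  0,   # P5: no difference
    }

    def worst_case(policy):
        """Maximin reading: judge each policy by its worst possible outcome."""
        return min(payoffs[(policy, conscious)] for conscious in (True, False))

    print(max(("as_conscious", "as_not_conscious"), key=worst_case))  # -> as_conscious

On these (entirely stipulated) numbers, treating animals as conscious is the policy whose worst case is least bad, which is exactly the conclusion C.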

The argument assumes that whether we think an animal is conscious or not makes a big difference to the way we ought to treat them. It also assumes that treating them as not conscious will lead to harm. How we flesh out "harm" is going to depend on our moral framework, and I think this argument most obviously fits into a consequentialist paradigm.

Regardless, I think the idea is pretty intuitive. If you believe your dog has the capacity for physical and emotional sensation, you are likely to treat her differently than if you think her experience of the world is much the same as a banana's. Within medical testing, we may treat those animals to which consciousness can reasonably be attributed with greater caution when it comes to harmful experiments. We may altogether exclude conscious beings from butchery, or at least from any practice that might be painful. More radically, we may believe that any being we regard as conscious should be afforded the same sort of moral attention as humans. What matters is a “significant difference” - and this needs examining.

The premises obviously need to be elaborated upon, and I already have my own serious criticisms. Two in particular stand out: the problem of treating consciousness as simple and binary; and the assumption in premise 6 that treating animals as if they are conscious, when in fact they are not, will not result in morally significant harm (e.g. think of potential medical breakthroughs via “painful” animal experimentation or the health benefits of a diet that includes animal protein). I do believe the wager argument has strength to fight back against such criticisms but I don’t think it will come away unscathed. In the near future I’ll look at the argument in a little more detail and start examining these criticisms.   


Sunday 21 April 2013

Positive Indeterminacy Revisited

(I meant to write this post a few months ago, when I was actually studying Merleau-Ponty. Since then, positive indeterminacy has popped up a few more times, in various guises. Hence "revisited".)

Merleau-Ponty introduces the term "positive indeterminacy" in The Phenomenology of Perception, where he uses it to describe visual illusions such as the Müller-Lyer...

Which line is longer?

 ...and the duck-rabbit. His point is that perception is often ambiguous, and he concludes that we must accept this ambiguity as a "positive phenomenon". Indeterminacy, according to Merleau-Ponty, can sometimes be a feature of reality, rather than a puzzle to be explained.

Is it a duck? Is it a rabbit? Nobody knows!

Positive indeterminacy, then, is the identification of features of the world that are in some sense inherently indeterminate. Quine argues that any act of translation between languages is fundamentally indeterminate, as there will always be a number of competing translations, each of which is equally compatible with the evidence. Of course in practice we are able to translate, at least well enough to get by, but we can never be sure that a word actually means what we think it does. Thus Quine concludes that meaning itself is indeterminate, and that there is no fact of the matter about what a word means.



Quine: a dapper chap

Hilary Putnam comes to similar conclusions about the notion of truth. According to his doctrine of "internal realism", whether or not some statement is true can only be determined relative to a "conceptual scheme", or a frame of reference. Truth is also indeterminate, in that there is no objective fact of the matter about whether or not something is true. Putnam takes care to try and avoid what he sees as an incoherent form of relativism, and stresses that from within a conceptual scheme there is a determinate fact of the matter about truth. Nonetheless, this truth remains in an important sense subjective - it's just that Putnam thinks that this is the best we can hope for.

More recently Dennett has reiterated this kind of "Quinean indeterminacy", with specific reference to beliefs. According to his (in)famous intentional stance theory, what we believe is broadly determined by what an observer would attribute to us as rational agents. In some (perhaps most) situations, there will be no fact of the matter as to which beliefs it makes most sense to attribute. The same goes for other mental states, such as desires or emotions.

Dennett draws attention to Parfit's classic account of the self as another example of positive indeterminacy. There will be cases, such as dementia or other mental illness, where it is unclear what we should say about the continuity of the self. Rather than treating this as a puzzle that we should try and solve, Parfit argues that our concept of self is simply indeterminate, and that there is sometimes no "right" answer.

All of the above cases are much more complex than I have been able to go into here, but they give a taste of the importance of positive indeterminacy. I am most interested in how it can be applied to puzzles in the philosophy of mind, but it seems that it might well be a more fundamental part of how we should think about the world.