Disconcerting, to say the least.
Nico (pictured above), developed by a team at Yale University, is apparently going to be able to recognise itself in a mirror, and is already able to "identify almost exactly where its arm is in space based on [a] mirror image" (New Scientist, 22.08.12). This may not sound like much, but the so-called mirror test is a key psychological experiment used to demonstrate self-awareness. Only a few non-human animals are able to recognise themselves in mirrors (including, off the top of my head, chimps, elephants, and dolphins), whilst in human children it forms a key stage in normal cognitive development (usually at around 18 months). So making a robot that is able to pass this test would be a major development in AI research.
It's impressive stuff, but what's particularly interesting is how they've programmed it to do this. According to this article, the robot compares feedback from its own arm movements with visual information from its 'eyes', and determines whether or not the arm that it is seeing belongs to it by checking how closely these match. This use of the robot's body to carry out cognitive tasks fits well with the enactive model of vision, whereby we learn about the world through moving and acting in it. It's certainly an improvement on previous approaches to AI research, which have tended to focus on 'off-line' solutions, forming representations and then computing a response based on these. By harnessing elements of our environment (which includes our own body), both we and robots like Nico are able to minimise the cognitive and computational load compared with purely representational solutions. (See Clark 2003 for an accessible discussion of such 'off-loading' strategies.)
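For the curious, here's a minimal sketch of how that kind of matching might work. This is my own reconstruction, not the Yale team's actual code: the noise model, the threshold, and the function names are all assumptions for illustration. The idea is that the robot issues motor commands, watches the mirror, and claims ownership of the arm only if the observed motion tracks what it commanded.

```python
import random

def observed_motion(commanded, is_own_arm, noise=0.05):
    """Simulate what the robot sees in the mirror (hypothetical model).

    If the arm is its own, the observed movement tracks the motor
    command (plus sensor noise); if not, the motion is unrelated.
    """
    if is_own_arm:
        return commanded + random.gauss(0, noise)
    return random.uniform(-1.0, 1.0)  # someone else's arm moves independently

def is_my_arm(motor_commands, observations, threshold=0.1):
    """Claim ownership if commanded and observed motion agree on average."""
    errors = [abs(c - o) for c, o in zip(motor_commands, observations)]
    return sum(errors) / len(errors) < threshold

# Issue a series of small arm movements and watch the mirror.
commands = [random.uniform(-1.0, 1.0) for _ in range(20)]

own = [observed_motion(c, is_own_arm=True) for c in commands]
other = [observed_motion(c, is_own_arm=False) for c in commands]

print(is_my_arm(commands, own))    # True: motion matches the commands
print(is_my_arm(commands, other))  # False: motion is uncorrelated
```

On this picture, self-recognition falls out of a simple statistical comparison between acting and seeing, with no explicit self-model required, which is just what the enactive approach would lead us to expect.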
This kind of research is very exciting, and self-representation is certainly an important step in developing truly intelligent AI, but it strikes me that by focusing on one specific problem like this, researchers risk missing the overall picture. It's all well and good designing a robot that can recognise itself, another robot that can traverse rough terrain, and yet another that can recognise visual patterns, but we'll only start getting truly impressive results when all these abilities are put together. I'm convinced that some elements of human cognition are emergent, only appearing once we reach a critical mass of less advanced capabilities, and how this occurs might not become apparent until we've achieved it. Designing and programming solutions, in advance, for absolutely everything that we might want a robot to do just isn't feasible. Intriguingly, Nico seems to have been originally designed to interact with children, which I'll admit is more promising. There's nothing wrong with tackling AI problems in isolation; we just have to remember that eventually we should be looking toward forming these solutions into a coherent whole.
More on this below...
Which leads me, somewhat tenuously, to my next topic. Anderson (2003: 121-5) discusses some interesting proposals from Paul Dourish concerning the future of human-computer interfaces (i.e. the way in which we interact with and make use of computers). Historically these have largely been constrained by the limitations of the computers themselves, meaning that how we interface with them has not always been ideally suited to our human limitations. The difficulties which many people have with even the simplest computer tasks attest to these limitations. Research in both 'embodied' AI and embodied cognition is beginning to suggest some alternative ways in which human-computer interfaces might be designed.
As an example of one such alternative Anderson gives the "marble answering machine", which I believe Clark (2003) also discusses. This machine, illustrated above, functions just as a regular answering machine does, but instead of an electronic display or automated message, it releases a different marble for each message recorded. Each marble is unique, and returning it to the machine elicits the playback of the particular message that it represents. Thus, in a very tangible and intuitive way, the user is able to keep track of their messages by handling, even removing, the physical marbles. Similar interfaces could be imagined for many other simple computers (for that is all an answering machine is), or could even be scaled up to the complexity of a desktop PC or laptop.
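To make the mapping concrete, here's a toy model of the machine's logic. This is purely my own illustration (the real device is physical, and the class and method names here are invented): each recorded message is bound to a unique marble, and returning a marble triggers playback of its message.

```python
import itertools

class MarbleAnsweringMachine:
    """Toy model of the marble answering machine's behaviour."""

    def __init__(self):
        self._marble_ids = itertools.count(1)
        self._messages = {}  # marble id -> recorded message

    def record(self, message):
        """Record a message and release a fresh marble bound to it."""
        marble = next(self._marble_ids)
        self._messages[marble] = message
        return marble

    def replay(self, marble):
        """Drop a marble back in: play back its message."""
        return self._messages[marble]

machine = MarbleAnsweringMachine()
m1 = machine.record("Call your mother back.")
m2 = machine.record("Meeting moved to 3pm.")
print(machine.replay(m2))  # "Meeting moved to 3pm."
```

The interface work is done by the marbles themselves: the user's physical arrangement of them is the filing system, so no display or menu is needed.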
Here Anderson makes an interesting contrast between this "tangible computing" and another direction that human-computer interfaces might take: virtual reality (2003: 124). He views the latter as being distinctly unfriendly to humans, drawing them into the world of the computer as opposed to drawing the computer out into the world of the human. I think there's room for both approaches, but this seeming dichotomy between the two worlds, one physical and one virtual, is certainly a striking image. What's also striking is the continued interaction between embodied cognition, robotics and AI, and computing, and just how fruitful it can be for all concerned. Once again I am struck by the hugely positive potential for interdisciplinary co-operation, particularly when it comes to philosophy and cognitive science.
- Anderson, M. 2003. "Embodied Cognition: A Field Guide." Artificial Intelligence 149: 91-130.
- Clark, A. 2003. Natural Born Cyborgs. Oxford: OUP.
(Posted on behalf of a friend, who was having trouble with the comments.)
Hm. Interesting. One of the main distinctions Ellul makes between different kinds of technology has to do with the role of the technology and what it is facilitating. There is a point at which the things we make stop being tools that enhance human activity and start being entities in their own right, whose systems of processes we are drawn into. (Ellul sees this as problematic, as it is the point at which we allow ourselves to become part of artificial systems and no longer have the power to consider our role or activity in the society we inhabit.) The point I wanted to make, however, was that maybe this is why AI advancements aren't happening in the way that you are outlining: computer technology is still focused on enhancing human activity rather than on creating a sentient species in its own right.
It ties into the latter point too, I guess. Human-computer interfaces were previously designed in the way that Ellul warns against, but are perhaps now beginning to go the other way, back into the realm of the "human".