(by Joe)
Watching
the new Ridley Scott film Prometheus
last night, I realised that there's something about the term
“artificial intelligence” that doesn't quite sit right with me.
SPOILER: there's an android (or humaniform robot, to use Isaac
Asimov's term) in the film, one that for all intents and purposes
behaves and appears like a human. A somewhat odd human perhaps, one
that feigns a degree of subservience to those around it, but a human
nonetheless. I would certainly be happy to say that it is conscious,
and in terms of intelligence it far exceeds almost every other
character in the film. However, I'm not so sure that I'm comfortable
calling it “artificial”.
This is Idris Elba. He's not an android.
Early
on in the film, one of my companions whispered something like “oh,
so he's an AI then”, in what I can't help feeling was a slightly
dismissive tone of voice. Whilst this is perhaps technically correct,
or at least an accurate use of the term, I don't think that I'd have
chosen to use it. Maybe I just read too much science fiction, or
spend too long thinking about multiple realisability,
but to label a conscious system “artificial” in this way seems
distinctly discriminatory to me.
Of
course, if like John Searle
you think that a conscious, thinking robot is necessarily impossible, then this
won't bother you very much. I'm not going to argue for the
possibility of Strong AI here, but suffice it to say that I am essentially a
functionalist about consciousness, and thus am firmly committed to
the possibility of conscious awareness being instantiated in a
non-biological system. Such a system, if we had built it, would be
“artificial” in the sense that it would be a constructed
artefact, but to label it as such risks distorting our understanding
of what it actually is. Referring to an intelligent android as an AI
distances it from ourselves, putting it in the same conceptual
category as a mindless computer or microwave. We would be tempted to
treat such a creature as no more than a tool, and there is certainly
an air of dominance towards our creations that the term “AI” can
only help reinforce.
In
fact, the film managed to address this issue. One otherwise very
empathetic member of the ship's crew behaves in a distinctly abusive
way towards the android, making constant remarks about how inhuman it is, and treating it as little more than a
slave. This behaviour was reminiscent (deliberately, I think) of
colonial attitudes towards indigenous populations: patronising,
cruel and dehumanising. I don't think that it would be unreasonable
to say that this character was being “racist” towards the
android, although we perhaps need a new word for this particular form
of discrimination. “Instantialist” is somewhat clumsy, but it
gets the point across. I believe we will, in the relatively near
future, develop computer “minds” that are functionally similar
enough to be thought of as conscious, and when this happens we will
be faced with an ethical dilemma. Should we be allowed to treat these
creations as creations, or
should they be afforded just as much dignity and respect as any other
intelligent life-form? We risk inventing a whole new category of
discrimination, one that I believe the term AI, with all its
connotations of subservience and inferiority, will only
exacerbate.
(The film, by the way, is well worth seeing!)
Interesting, because I saw Shaw's "I'm a human and you're a robot" statement at the end as completely ignoring the issues. Maybe it was the bad writing... there could have been a lot more on it. I felt that for a film that spoke so much it said very little... it didn't go into any depth on any of it, which was a shame.
Ironically, when posting that comment it wanted me to type in letters to prove I was not a robot.... hah
(Joe)
I can't remember, is Shaw the male archaeologist? Anyway, I got the impression that the film was trying to raise that issue, but maybe I'm just naturally more susceptible to it...