Following on from my blog post on perspectives of The Terminator, I was reminded of a conversation I had the other week with one of my PhD supervisors. She told me a story of a friend who watched the first Terminator film with his wife, who was from South America and had no idea what the film was about, or that Arnold's character was a robot.
The scene where the T-800 damages its face, exposing the metal skeleton underneath, deeply affected her because, as far as she was concerned, this character was a human being rather than a machine. I think this exposes an issue about the anthropomorphisation of AI systems and robots.
I'm not picking on this person: a robot played by an actual human being is obviously hard to recognise as a machine if you don't know in advance. But we see a lot of people who treat advanced robots as though they were people, or animals.
A few examples of people treating robots as though they are animals stand out in my mind (this is apparently called zoomorphisation, or bestialisation). The first is the perception of cute companion robots as animals.
This is a Paro robot, designed to be a companion for the elderly, particularly those with dementia. As I've mentioned before, this can be unethical because we are allowing people to form emotional bonds with what are actually inanimate objects. Noel Sharkey relays a story (here) about how he has provoked emotional reactions in others by throwing these robots around and letting them smash against tables. Apparently, people react as though he is actually harming an animal, when it is just some cogs, motors, and processors covered in fur. If I stuck fur onto the laptop I'm writing this on, would you treat it as an animal? Of course not; it is just as ridiculous to treat the Paro robot as an animal as it is to treat my laptop, or any other piece of technology, that way.
Another tale of zoomorphisation that sticks in my memory is from P. W. Singer's book 'Wired for War'. In it, he tells of a bomb-disposal unit that used robots to help in their job. Members of the unit gave their robot a name, 'Scooby-Doo', and treated it as a pet. When the robot was destroyed by one of the bombs it was being used to defuse, the soldiers grieved for it. When provided with a replacement, at least one soldier 'just wanted Scooby-Doo back'. This is obviously a system where there is no illusion that it could be an animal, yet we as humans still see animal traits in these machines. I think this is dangerous because if we form emotional bonds with machines, we may be willing to do things for them. In a conflict, who's to say a soldier wouldn't risk harm to themselves to 'save' a robot? You wouldn't do this for your fridge, or for a production-line robot you work with, so why should we allow soldiers to do it in conflict, where the stakes are so much higher for them?
In terms of the anthropomorphisation of robots, an example I heard recently discussed Hitchbot and whether you could murder a robot (listen here, or read here). Hitchbot wasn't really a robot: it was a big bucket with a GPS tracker, a top made to look like a face, and pool-noodle floats attached to look like arms and legs. It was created for an experiment to see whether people would carry it to a final destination. But people treated it as though it were alive. They gave it a ride to wherever they were going, with some taking it into their homes for days at a time and treating it like a member of the family. Eventually, though, someone vandalised it.
A number of people referred to the robot as having been 'decapitated', and asked whether it had been 'murdered'. Of course, you cannot murder an inanimate object. But what this points to is the idea that these systems are alive. To be alive, you arguably need consciousness, and whether AI systems can be conscious is something I've also been thinking about recently. I'll leave that discussion for another time, though.