I recently watched ‘Love, Death + Robots’, and it sparked a thought in me. It’s a new Netflix show, or as they call it, an anthology: 18 short films, each touching on at least one of the titular themes. They’re all animated, at least in part, and I quite enjoyed it. On occasion it has vastly unnecessary levels of sex and violence, but that seems to be common in millennial art – perhaps I’m falling behind the times! Anyway, you can watch the trailer here:
Some of the episodes have conceptual links to autonomous weapons, most notably ‘Lucky 13’. This short tells the story of a military pilot who is given an AI-enhanced ship to fly in a conflict. The ship, however, has a personality, and its actions have twice resulted in the deaths of everyone aboard. Yet the new pilot, Colby, manages to bond with the ship, and this results in them working well together and having some successful missions. I won’t spoil the ending, but the idea of AI systems having a personality is something I’ve been thinking about for a while.
Usually, in the West, we think of AI systems as evil: part of the military-industrial complex and an existential threat to humankind. You can see this clearly in most media coverage of military AI, particularly in the UK (e.g. here, here, and here). But I also recently listened to this podcast:
In it, Dr. Ben Goertzel, who runs a company in China called SingularityNET, which aims to create a global AI network anyone can use, mentioned something I had forgotten: the idea that robots are inherently evil and insidious is only really dominant in the West. In the East, AI systems and robots are thought of as companions.
Even robots that are militarised, or used for fighting, are seen as working on behalf of a human being. There’s even a sub-genre of Japanese TV called ‘Mecha Anime’ built around this idea.
Obviously, these aren’t strictly contained ideas. WALL-E and Transformers suggest that some of them do cross over. But the dominant view in the West is still that AI-enabled robots are going to become overlords in opposition to humans.
Conversations about this are obviously important, because the consequences could be dire if we mess up AI development. But are we missing part of the wider conversation in the West? The AI community does seem to have these discussions: elder-care robots, ‘wingman’ military robots, and AI assistants are all talked about by Western AI scholars in terms of companionship. The public discussion, however, doesn’t seem to think of these systems as companions. In many of the conversations I have with people about AI, there is an undercurrent that ‘technology has gone too far’. It would seem that the public imaginary is probably the limiting factor on technology development – at least, that’s how it looks to me.