This week, it was announced that Google DeepMind's AlphaGo program had beaten a human player at the board game Go. AlphaGo was built using machine-learning, and it has got me thinking about whether machine-learning would be a better method of programming Autonomous Weapon Systems (AWS). I think it could be a useful way of programming AWS so that they comply with the Law of Armed Conflict (LoAC). However, it would require computers to be able to monitor the thoughts of military personnel in order to later imitate them.
Go is an ancient Chinese board game in which players compete to dominate the board with black and white stones. It is far more complicated than Chess: each turn offers many more possible moves, so a match can play out in vastly more ways. This makes it much harder for a computer to play. The computer has to look ahead at possible outcomes in order to choose its best next move, and the greater the number of legal moves, the greater the number of calculations it must make.
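To put rough numbers on that, here is a back-of-the-envelope sketch. The branching factors used (about 35 legal moves per turn in Chess, about 250 in Go) are commonly quoted approximations, not exact figures:

```python
# Rough game-tree sizes: a player choosing among b legal moves, looking
# d moves ahead, faces about b ** d positions to consider.
def positions(branching_factor, depth):
    return branching_factor ** depth

# Commonly cited average branching factors: roughly 35 for Chess, 250 for Go.
chess_lookahead = positions(35, 4)
go_lookahead = positions(250, 4)
print(chess_lookahead)                  # 1500625
print(go_lookahead // chess_lookahead)  # 2603: Go's lookahead is ~2,600x bigger
```

Even looking only four moves ahead, Go presents thousands of times more positions than Chess, which is why brute-force calculation alone was never going to crack it.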
This event is rather like Garry Kasparov being beaten at Chess by Deep Blue, the IBM computer that defeated the then world champion in 1997. AlphaGo has beaten the reigning European Go champion, Fan Hui. Given that Go is so much more complex than Chess, this shows that AlphaGo is handling a computational problem of a different order.
The reason I find this interesting for AWS compliance with LoAC is the machine-learning element. Machine-learning is a method of programming an Artificial Intelligence system by having it observe what humans do and then replicate it; in this case, playing Go.
George Hotz has managed to build a self-driving car that was programmed by machine-learning. This has been far more successful than comparable rule-based efforts: his car managed to navigate California highways much earlier in its development than the systems created by Google or Tesla, both of which are programmed as rule-based systems.
So far, my research has assumed that AWS would be programmed to comply with LoAC as a rule-based system. However, these recent developments in machine-learning lead me to wonder whether it could be a better method of programming AWS.
For this blog, I’ll just use a drone as a stand-in for future AWS (I know they’re not the same thing, but as drones are probably going to be the most common form of AWS, I’ll run with it). A computer system lurking and learning from a drone operator could be very useful for the physical activities a drone has to perform: flying, navigating, de-icing, and such like. These are physical activities where a computer can recognise that ‘in X circumstances, I should do Y’. X could be flying at a low altitude, and Y could be ascending.
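The ‘in X circumstances, do Y’ idea is essentially a condition-action rule, which is the kind of pattern a learning system would need to extract from watching an operator. A toy sketch, with a made-up altitude threshold purely for illustration:

```python
# A minimal sketch of an 'in X circumstances, do Y' rule for a physical
# flight task. The threshold is a hypothetical value, not a real flight rule.
MIN_SAFE_ALTITUDE_M = 150

def altitude_action(altitude_m):
    # X: flying below the safe altitude  ->  Y: ascend
    if altitude_m < MIN_SAFE_ALTITUDE_M:
        return "ascend"
    return "hold"

print(altitude_action(100))  # ascend
print(altitude_action(400))  # hold
```

The point is that a rule like this has observable inputs (altitude) and observable outputs (the operator pulling up), so a machine watching an operator has something concrete to learn from.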
Unfortunately, the LoAC calculation is not a physical activity but a mental decision. By LoAC calculation, I mean the decision of a drone pilot (or legal adviser/JAG) about whether or not it would be lawful to strike the target under LoAC.
This is where machine-learning might come unstuck. A computer system learning from human actions can neither monitor nor copy thoughts. While a computer might be able to copy navigation by working out that the pilot is following a compass bearing, or flying towards a GPS co-ordinate, and that it should do the same, it cannot work out why a certain target is legitimate. The computer would not be able to recognise why one person on the screen is a lawful target and another is not.
This is because there might not appear to be any difference to a computer system. In an International Armed Conflict (between two States), it is the enemy uniform that signifies a legitimate target. In a Non-International Armed Conflict (between a State and a Non-State Actor), a legitimate target is a civilian who Directly Participates in Hostilities, or who is a member of an Organised Armed Group; that is, a civilian who has decided to join in the fighting, or to join a terrorist or insurgent group, and who is therefore targetable. There is a big debate in international law about the exact circumstances in which someone can be classed as Directly Participating in Hostilities, which I won’t go into here (see here for the Red Cross opinion, and here for Michael Schmitt’s critical analysis).
Determining a legitimate target is difficult, and may require significant thought by a drone operator. A computer monitoring a drone operator and trying to copy their actions would be unable to recognise or imitate their thoughts, and so could not follow or re-create the LoAC calculation. So, I don’t think machine-learning would work for AWS.
Although, perhaps the use of Brain-Computer Interfaces (BCIs), or Human-Computer Interfaces (HCIs), could give computers a way of monitoring thoughts. I’m afraid I don’t know enough about these to comment with any authority; my knowledge of neuroscience is a bit sparse. I’m quite interested in BCIs and HCIs, though, so I hope to find out more soon.
Until next time!