Today I read about an interesting thing that Google has been doing with machine learning (ML). They’ve been experimenting with something called ‘Federated Learning’. Here’s how they explain it:
‘…your device downloads the current model, improves it by learning from data on your phone, and then summarizes the changes as a small focused update. Only this update to the model is sent to the cloud, using encrypted communication, where it is immediately averaged with other user updates to improve the shared model. All the training data remains on your device, and no individual updates are stored in the cloud.’
It’s not exactly a new idea (here and here), but it’s innovative work. If you’re using an Android phone with the ‘Gboard’ keyboard, your phone is already contributing to this federated learning. As my work is on autonomous weapon systems (AWS), I often think about how new technologies and innovations could be applied to them. Could federated learning be used with AWS? I think it could be useful.
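To make the quoted mechanism concrete, here is a minimal sketch of the federated averaging idea: each device trains locally and sends back only a weight update, and the server averages those updates into the shared model. This is an illustrative toy (a one-weight linear model, made-up function names), not Google’s actual implementation.

```python
# Minimal sketch of federated averaging. A 'model' here is just a list
# of weights; all names are illustrative, not a real API.

def local_update(global_weights, local_data, lr=0.1):
    """Train on-device and return only the weight delta, never the data."""
    weights = list(global_weights)
    for x, y in local_data:
        pred = weights[0] * x
        error = pred - y
        weights[0] -= lr * error * x  # one step of gradient descent
    return [w - g for w, g in zip(weights, global_weights)]

def federated_round(global_weights, devices):
    """Collect per-device updates and average them into the shared model."""
    updates = [local_update(global_weights, data) for data in devices]
    avg = [sum(deltas) / len(deltas) for deltas in zip(*updates)]
    return [g + d for g, d in zip(global_weights, avg)]
```

The key property, as the quote stresses, is that `federated_round` only ever sees the small updates, while the raw training data stays on each device.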
There are lots of issues with ML that make it problematic for use in AWS. I’ve written about ML before on this blog, but that was in terms of how algorithms could learn from military personnel in ‘shadow mode’ rather than being trained on pre-existing data sets. The main issues with using online ML for AWS (i.e. ML that continues to learn during use, rather than only during manufacture) revolve around what it will learn about potential targets. Will ML algorithms recognise lawful criteria for targets (for example, will they recognise an enemy uniform, or something else common in images of adversaries)? What will they be allowed to learn about (just what a target ‘is’, or new methods of attack)? Will ML algorithms be used on operations (and if so, how will they be tested? James Farrant and Christopher Ford touch on this in their great article here)?
So that we can actually discuss federated learning, let’s assume that these issues can be overcome and a lawful AWS using ML can be deployed. Reading about federated learning reminded me of an idea I had a few years ago which I presented at the conference ‘Machine learning: formulating an innovation policy for the 4th industrial revolution’ at Liverpool University in July 2016.
When we think about an ‘arms race’ we usually think of states churning out ever more superior weapons, in both technological and numeric terms, so that they can win an expected conflict. But there are also intra-conflict arms races. We saw this a lot in the recent wars in Afghanistan, Iraq, and Syria. For example, British troops turned up in Iraq with ‘Snatch Land Rovers’, which were fine to drive around in until militants started using roadside bombs, against which their light armour was insufficient. So the Army had to get better armour, then the militants made hard-tipped bombs, then the Army got better vehicles, then the militants innovated again, and on and on and on.
If you had an AWS with ML that could observe, recognise, and process innovative militant practices, then the system could learn how to deal with these new behaviours. Of course, a counter-innovation developed by a military force in one location does not automatically appear in a different location. Likewise, in the case of AWS, a counter-innovation learned by one system would not spread through the entire fleet unless there were some way of applying new innovations across the board. Enter federated learning!
This method would, of course, provide a military advantage where adversaries are innovating regularly, because your side is using ML to counter-innovate at the same time. These counter-innovations would ideally need to spread through an AWS fleet quickly. But these innovations are modifications to the original programming. Would such a modification make the system a ‘new’ weapon, and therefore require a new weapons review under Article 36 of Additional Protocol I?
‘In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.’
Boothby suggests that a modification does require a new weapons review (pg.346), as do a number of states and most participants at the ‘UK Second Weapons Review Forum’ (pg.403-407). However, holding a new review after every operation would be incredibly onerous on a military force, and on its lawyers, who also need to advise commanders on the legalities of combat. Farrant and Ford offer a practical solution for weapons with ML (pg.406). They suggest a new review should occur when ML modifies a weapon to an extent not contemplated by the legal review prior to deployment, and where these changes could affect compliance with legal standards.
Following a successful review, the modifications made by one AWS faced with a novel challenge in an intra-conflict arms race could be applied to every AWS in the fleet, enabling the entire force to stay ahead of the adversary. If a modification does not pass a review, it would need to be erased from the system memory. Of course, an unlawful modification would already have been used at the point in combat when the system made it – so ML that can learn in situ is inherently problematic for ensuring that AWS are used in legally compliant ways. Ensuring that AWS cannot modify the parts of their programming that affect legal targeting rules might prevent this.
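The review-gated distribution just described could be sketched as follows. This is purely hypothetical: the ‘reviewed envelope’ threshold, the magnitude measure, and every name here are illustrative assumptions standing in for whatever criteria a real Article 36 review process would set, in the spirit of Farrant and Ford’s suggestion that a fresh review is needed only when a modification exceeds what the original review contemplated.

```python
# Hypothetical sketch of review-gated fleet updates. An update is applied
# fleet-wide only if it stays within the envelope contemplated by the
# pre-deployment legal review, or if it has passed a fresh review.
# All names and thresholds are illustrative assumptions.

def update_magnitude(update):
    """Crude proxy for how far an update moves the reviewed baseline."""
    return max(abs(delta) for delta in update)

def distribute_update(fleet_weights, update, reviewed_envelope=0.5,
                      review_passed=False):
    """Propagate an update to every unit, or quarantine it for review."""
    within_envelope = update_magnitude(update) <= reviewed_envelope
    if within_envelope or review_passed:
        return [[w + d for w, d in zip(unit, update)]
                for unit in fleet_weights]
    # Outside the reviewed envelope and no new review has been passed:
    # the modification is erased rather than propagated.
    return fleet_weights
```

The design choice this illustrates is that the gate sits between local learning and fleet-wide averaging, so an in-situ modification can be quarantined before federated learning spreads it.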
Of course, a way to avoid this issue would be to prohibit ML in situ and only allow ML during manufacture. This would, of course, remove the military advantages that in-situ learning offers, especially in the case of federated learning. So it would be a difficult choice for higher-ups to make.
Until next time!