So, apologies for the lack of posts recently; I’ve been quite busy, and had a short holiday. But yesterday I went to an event at the University of Liverpool, hosted by their Centre for Autonomous Systems, where Prof. Alan Winfield was speaking on the subject of ‘The Ethical Roboticist’.
The talk was split into two sections: the first on the ethical concerns raised by robots, and the second on how robots could be made ethical, as moral machines. Of course, I’m really interested in the law; my PhD will deal with ethics in some ways, but is focused on law. Regardless, the talk was fascinating, and Prof Winfield made many brilliant points all the way through.

I won’t just repeat the talk, as it was videoed and I will post the link when it is put online, but I wanted to share some of my thoughts on a few of the things mentioned. The first really notable thing was that Asimov’s 3 Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
have been updated to become a new set of 5 laws (for more, see commentary here):
- Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
- Humans, not robots, are responsible agents. Robots should be designed & operated as far as is practicable to comply with existing laws and fundamental rights & freedoms, including privacy.
- Robots are products. They should be designed using processes which assure their safety and security.
- Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
- The person with legal responsibility for a robot should be attributed.
I think this is a really great idea, and hopefully it will counteract some of the potential loopholes in Asimov’s original version. Whilst Prof Winfield is against ‘Killer Robots’, the consensus of the deciding group allowed the national security exemption in the first law. Even so, I think these laws give some really good ideas that could improve autonomous weapon systems (AWS).
The 2nd law, noting that humans and not robots are responsible, combined with the 5th, that the legally responsible agent should be known in advance, made me think back to a chat I had at the CCW meeting in April. There, I was in a discussion where the consensus was that any AWS commander would be responsible for all of its actions. I think adding in the 5th law, and having that individual take on responsibility for any AWS actions before deployment, is a good idea, and would hopefully prevent any potentially risky deployments.
However, in my CCW conversation, we were talking about human-supervised AWS, not the fully-autonomous kind. To me, this makes attribution much harder, as a commander could deploy an AWS believing in good faith that it will act perfectly predictably, only for it to perform an illegal action due to a coding error. It seems a little harsh to me to hold the commander liable; I would favour an extended accountability chain covering weapons testers, programmers and manufacturers. This would fit well with the 4th law, that robots are products, and therefore any failures could be treated in the same way as product failures, with the manufacturer responsible. However, in the military context, this would require an entirely new paradigm to be created, so it would be difficult to put any of these arguments into practice.
Continuing on from legal regulation, Prof Winfield noted that he had recently been involved in the creation of British Standard 8611:2016 ‘Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems’. As a standard, it is not a regulation, and is therefore voluntary. However, he was very optimistic that roboticists would follow the guidelines and embed ethical considerations into their work.
The most interesting part of the talk was the concept of ‘The Consequence Engine’, which Prof. Winfield gave a quick talk on here (more here). The idea is essentially that a robot could be programmed to save humans in danger by running simulations of potential future occurrences in its own control system, and then acting to save the human based on the predicted consequences.
In simulations, robots were 100% effective at saving a single simulated human from falling into a hole, but less than 50% effective when there were two simulated humans. This is because the robot re-calculated what to do every half-second, and so could never make a firm decision over which ‘human’ to save, and ended up essentially dithering. Prof Winfield suggested that if a robot could ‘remember’ its decision for a short time, it would likely be able to save at least one ‘human’ more often.
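To make that concrete, here is a minimal sketch of how such a control loop might look. It is not Prof. Winfield’s implementation (the actions, the simulate_outcome() stand-in and the five-cycle ‘decision memory’ are all invented assumptions), but it shows how re-planning every half-second can cause dithering, and how committing to a recent decision for a few cycles could avoid it.

```python
import random

# Invented stand-in for the robot's internal simulation: a real consequence
# engine would run a world model here and predict who can be saved.
ACTIONS = ["move_towards_human_A", "move_towards_human_B", "stay_put"]

def simulate_outcome(action):
    """Return a (noisy) prediction of how many humans this action saves."""
    predicted_saves = {"move_towards_human_A": 1.0,
                       "move_towards_human_B": 1.0,
                       "stay_put": 0.0}[action]
    # Noise makes the 'best' action flip between the two humans, which is
    # what causes the dithering described above.
    return predicted_saves + random.uniform(-0.1, 0.1)

def choose_action(committed_action, cycles_left):
    """Pick the best simulated action, but hold a recent decision for a few
    control cycles (a short 'decision memory') instead of re-deciding."""
    if committed_action is not None and cycles_left > 0:
        return committed_action, cycles_left - 1
    best = max(ACTIONS, key=simulate_outcome)
    return best, 5  # assumption: commit for the next 5 half-second cycles

committed, cycles_left = None, 0
for tick in range(10):  # re-plan every half-second, as in the talk
    committed, cycles_left = choose_action(committed, cycles_left)
    print(f"t={tick * 0.5:.1f}s -> {committed}")
```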

The discussion then focused on driverless cars, and how they will be moral agents: at some point they will have to decide whether to hit an unfortunate pedestrian in the road, or to swerve and put other pedestrians, or the driver, at risk – therefore making a moral choice. The decisions would have to be made based on probabilistic calculations about which potential path would create the most, or least, harm.
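As a rough sketch of the kind of probabilistic calculation I have in mind here (all of the candidate paths, probabilities and harm scores below are invented purely for illustration), the choice boils down to picking the path with the lowest expected harm:

```python
# Each candidate path lists (probability of a collision, harm if it happens).
# The numbers are illustrative assumptions, not real model outputs.
candidate_paths = {
    "continue_straight": [(0.9, 10.0)],               # almost certain to hit the pedestrian ahead
    "swerve_left":       [(0.3, 10.0), (0.2, 10.0)],  # might hit either of two other pedestrians
    "swerve_right":      [(0.4, 6.0)],                # risks injuring the driver instead
}

def expected_harm(outcomes):
    """Probability-weighted sum of harm over everything the path might hit."""
    return sum(probability * harm for probability, harm in outcomes)

for name, outcomes in candidate_paths.items():
    print(f"{name}: expected harm = {expected_harm(outcomes):.2f}")

chosen = min(candidate_paths, key=lambda name: expected_harm(candidate_paths[name]))
print("chosen path:", chosen)
```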
Whilst this discussion was going on, I couldn’t help but think about AWS, and the potential for them to make decisions in similar ways. At the forefront of my mind were proportionality decisions, and how an AWS might have to balance military advantage against civilian harm. Consequentialist decision-making could greatly benefit the ability of AWS to abide by the Law of Armed Conflict. For example, an AWS being able to calculate the consequences of firing a missile at an enemy vehicle next to a school would be of huge benefit to protecting civilians.
Potentially, this could feed into the possibility of AWS carrying out Collateral Damage Estimations (CDE). Rather than simply being aware of civilians in the vicinity of a target, an AWS capable of consequentialist reasoning could calculate the risk of harm to those civilians if they were to move nearer to or further from the target, and what the impact of firing nearby would mean for them.

Conceivably, this could greatly aid the ability of AWS to carry out proportionality assessments by comparing CDE with military advantage, as proposed by Schmitt (p.19-21), and Schmitt and Thurner (p.254-255), where it is suggested that enemy targets could be imbued with an advantage value that could then be weighed against an acceptable level of collateral damage for an attack, therefore resulting in a proportionality calculation. Should an AWS be able to calculate the consequences of its actions, and be programmed to protect civilians above other concerns, this could only lead to greater protections for civilians from AWS.
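To illustrate the shape of that calculation (and only the shape: the advantage values, CDE figures and simple linear threshold below are my own invented assumptions, not anything proposed by Schmitt or Thurner), the check reduces to comparing the estimated collateral damage against the level deemed acceptable for a target of that value:

```python
def acceptable_collateral_damage(advantage_value):
    """Hypothetical mapping from a target's military-advantage value to the
    maximum collateral damage estimate considered proportionate."""
    return advantage_value * 0.1  # assumption: a simple linear relationship

def attack_is_proportionate(advantage_value, collateral_damage_estimate):
    """True only if the estimated collateral damage stays within the
    acceptable level for a target of this advantage value."""
    return collateral_damage_estimate <= acceptable_collateral_damage(advantage_value)

# Example: the same target, with two different consequence-engine estimates
# of civilian harm (e.g. depending on how close civilians are predicted to be).
print(attack_is_proportionate(advantage_value=50, collateral_damage_estimate=2))   # True
print(attack_is_proportionate(advantage_value=50, collateral_damage_estimate=12))  # False
```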
Until next time!