So, this week I came across two linked stories that I thought I would share. The first is that of an AI system at the University of Cincinnati besting retired US Air Force Colonel Gene Lee. This has been quite well publicised, and lots of people have been talking about it. Conveniently, during my research I came across a similar story: a Firebee Remotely Piloted Vehicle (RPV) outperforming another US fighter pilot, all the way back in 1971, as detailed in Paul Dickson’s ‘The Electronic Battlefield‘ (2012 edn., pp. 184-185).

I think it’s interesting to note that, whilst the circumstances and ‘opponents’ are very different, humans have been losing to technology in similar scenarios for quite a while. Lee’s adversary was a simulated AI ‘pilot’ called ALPHA, which managed to shoot him down without being fired upon. In ’71, the pilot of a Navy F-4 Phantom was outflown by the Firebee, which managed to evade two air-to-air missile attacks and score ‘hits’ on the Phantom several times. The Firebee was an RPV, controlled by a human on the ground.

However, comparisons are difficult. Being uninhabited, the Firebee could pull maneuvers impossible for any human pilot who wants to stay conscious, whilst ALPHA and Col. Lee’s ‘aircraft’ were subject to the same ‘physics’ in the simulation engine.

Perhaps, as is noted in the articles about the ALPHA system, AI ‘pilots’ in RPVs would make great wingmen. They would combine the Firebee’s ability to pull G-forces that would be lethal to an onboard pilot with all the greater-than-human abilities of the ALPHA system. This would be machine-human teaming, or ‘intelligent partnership‘: weapon systems with autonomy acting in conjunction with human beings, and always under human control.
Legally, such teaming creates far fewer difficulties for autonomous weapon systems (AWS), as the individual in control, and therefore accountable for the system’s actions, is always clear (barring technical faults or programming errors), and the human would remain in, or on, the loop of AWS decision-making. They would thus be able to confirm, or intervene in, targets selected by the AWS. But plenty of difficulties remain as to how an AWS would select targets in accordance with the Law of Armed Conflict. So I don’t think we’ll be seeing any AI wingmen deployed too soon!

Until next time!