Autonomous robots may have already killed people – here’s how such weapons could be more destabilizing than nuclear weapons



Autonomous weapon systems – commonly known as killer robots – may have killed human beings for the first time last year, according to a recent United Nations Security Council report on the Libyan civil war. History may well identify this as the starting point of the next great arms race, one that has the potential to be humanity’s last.

Autonomous weapon systems are robots with lethal weapons that can operate independently, selecting and attacking targets without a human weighing in on those decisions. Militaries around the world are investing heavily in autonomous weapons research and development. The United States alone budgeted $18 billion for autonomous weapons between 2016 and 2020.

Meanwhile, human rights and humanitarian organizations are racing to establish regulations and prohibitions on the development of such weapons. Without such checks, foreign policy experts warn that disruptive autonomous weapons technologies will dangerously destabilize current nuclear strategies, both because they could radically change perceptions of strategic dominance, increasing the risk of preemptive attacks, and because they could be combined with chemical, biological, radiological and nuclear weapons themselves.

As a human rights specialist with a focus on the weaponization of artificial intelligence, I find that autonomous weapons make the unsteady balances and fragmented safeguards of the nuclear world – for example, the U.S. president’s power to order a strike – more unsteady and more fragmented.

Fatal errors and black boxes

I see four main dangers with autonomous weapons. The first is the problem of misidentification. When selecting a target, will autonomous weapons be able to distinguish between hostile soldiers and 12-year-olds playing with toy guns? Between civilians fleeing a conflict site and insurgents making a tactical retreat?

Killer robots, like the drones in the 2017 short “Slaughterbots,” have long been a major subgenre of science fiction. (Warning: graphic representations of violence.)

The problem here is not that machines will make such errors and humans won’t. It’s that the difference between human error and algorithmic error is like the difference between mailing a letter and tweeting. The scale, scope and speed of killer robot systems – ruled by one targeting algorithm, deployed across an entire continent – could make misidentifications by individual humans, like a recent U.S. drone strike in Afghanistan, seem like mere rounding errors by comparison.

Autonomous weapons expert Paul Scharre uses the metaphor of the runaway gun to explain the difference. A runaway gun is a defective machine gun that continues to fire after a trigger is released. The gun keeps firing until its ammunition is depleted because, so to speak, the gun does not know it is making an error. Runaway guns are extremely dangerous, but fortunately they have human operators who can break the ammunition link or try to point the weapon in a safe direction. Autonomous weapons, by definition, have no such safeguard.

It’s important to note that weaponized AI does not even have to be defective to produce the runaway-gun effect. As multiple studies of algorithmic errors across industries have shown, the very best algorithms – operating as designed – can generate internally correct outcomes that nonetheless spread terrible errors rapidly across populations.

For example, a neural network designed for use in Pittsburgh hospitals identified asthma as a risk-reducer in pneumonia cases; image recognition software used by Google identified African Americans as gorillas; and a machine-learning tool used by Amazon to rank job candidates systematically assigned negative scores to women.

The problem is not just that when AI systems err, they err in bulk. It is that when they err, their makers often do not know why they did and, therefore, how to correct them. The black box problem of AI makes it nearly impossible to imagine morally responsible development of autonomous weapon systems.

The proliferation problems

The next two dangers are the problems of low-end and high-end proliferation. Let’s start with the low end. The militaries developing autonomous weapons are proceeding on the assumption that they will be able to contain and control their use. But if the history of weapons technology has taught the world anything, it is this: weapons spread.

Market pressures could result in the creation and widespread sale of what can be thought of as the autonomous weapon equivalent of the Kalashnikov assault rifle: killer robots that are cheap, effective and almost impossible to contain as they circulate around the globe. “Kalashnikov” autonomous weapons could get into the hands of people outside of government control, including international and domestic terrorists.

The Kargu-2, made by a Turkish defense contractor, is a cross between a quadcopter drone and a bomb. It has artificial intelligence to find and track targets, and may have been used on its own in the Libyan Civil War to attack people.
Ministry of Defense of Ukraine, CC BY

High-end proliferation is just as bad, however. Nations could compete to develop increasingly devastating versions of autonomous weapons, including ones capable of mounting chemical, biological, radiological and nuclear arms. The moral dangers of escalating weapon lethality would be amplified by escalating weapon use.

High-end autonomous weapons are likely to lead to more frequent wars because they will diminish two of the primary forces that have historically prevented and shortened wars: concern for civilians abroad and concern for one’s own soldiers. The weapons are likely to be equipped with expensive ethical governors designed to minimize collateral damage, using what UN Special Rapporteur Agnes Callamard has called the “myth of a surgical strike” to quell moral protests. Autonomous weapons will also reduce both the need for and the risk to one’s own soldiers, dramatically altering the cost-benefit analysis that nations undergo when launching and maintaining wars.

Asymmetric wars – that is, wars waged on the soil of nations that lack competing technology – are likely to become more common. Think of the global instability caused by Soviet and American military interventions during the Cold War, from the first proxy war to the blowback experienced around the world today. Multiply that by every country currently aiming for high-end autonomous weapons.

Undermining the laws of war

Finally, autonomous weapons will undermine humanity’s final stopgap against war crimes and atrocities: the international laws of war. These laws, codified in treaties reaching as far back as the 1864 Geneva Convention, are the thin international blue line separating war with honor from massacre. They are premised on the idea that people can be held accountable for their actions even during wartime, that the right to kill other soldiers during combat does not give the right to kill civilians. A prominent example of someone held to account is Slobodan Milosevic, former president of the Federal Republic of Yugoslavia, who was indicted on charges of crimes against humanity and war crimes by the United Nations’ International Criminal Tribunal for the Former Yugoslavia.

But how can autonomous weapons be held accountable? Who is to blame for a robot that commits war crimes? Who would be put on trial? The weapon? The soldier? The soldier’s commanders? The corporation that made the weapon? Nongovernmental organizations and experts in international law worry that autonomous weapons will lead to a serious accountability gap.

To hold a soldier criminally responsible for deploying an autonomous weapon that commits war crimes, prosecutors would need to prove both actus reus and mens rea, Latin terms describing a guilty act and a guilty mind. This would be difficult as a matter of law, and possibly unjust as a matter of morality, given that autonomous weapons are inherently unpredictable. I believe the distance separating the soldier from the independent decisions made by autonomous weapons in rapidly evolving environments is simply too great.

The legal and moral challenge is not made easier by shifting the blame up the chain of command or back to the site of production. In a world without regulations that mandate meaningful human control of autonomous weapons, there will be war crimes with no war criminals to hold accountable. The structure of the laws of war, along with their deterrent value, will be significantly weakened.

A new global arms race

Imagine a world in which militaries, insurgent groups and international and domestic terrorists can deploy theoretically unlimited lethal force at theoretically zero risk at times and places of their choosing, with no resulting legal accountability. It is a world where the sort of unavoidable algorithmic errors that plague even tech giants like Amazon and Google can now lead to the elimination of whole cities.

In my view, the world should not repeat the catastrophic mistakes of the nuclear arms race. It should not sleepwalk into dystopia.


