The ethical use of AI in the security and defense industry


12/21/2021

By Wayne Phelps


Artificial intelligence already surrounds us. There is a good chance that when you type an email or text message, a grayed-out prediction of your next words will appear ahead of what you have already typed.

The “suggested for you” portion of some online shopping apps predicts items you might want to buy based on your purchase history. Music streaming apps create playlists based on your listening history.

A device in your home can recognize a voice and answer questions, and your smartphone can unlock after recognizing your facial features.

Artificial intelligence is also advancing rapidly in the security and defense industry. When AI converges with autonomy and robotics in a weapon system, we have to ask ourselves, “From an ethical standpoint, where should we draw the line?”

Air Force pilot Col. John Boyd developed a decision-making model called the OODA loop (observe, orient, decide, act) to explain how to increase the tempo of decisions in order to outpace, outthink and ultimately defeat an opponent. The goal is to make decisions faster than the opponent, forcing them to react to your actions and allowing your force to seize the initiative. But what if the opponent is a machine?

Humans can no longer match the processing power of a computer and have not been able to for some time. In other words, a machine’s OODA loop is faster. In some cases, it now makes sense to delegate certain decisions to a machine or to let the machine recommend a decision to a human.

From an ethical point of view, where should we let a weapon system decide on an action and where should we involve humans?

In February 2020, the Defense Department officially adopted five ethical principles of artificial intelligence as a framework for the design, development, deployment and use of AI in the military. In summary, the department said that AI will be responsible, equitable, traceable, reliable and governable.

It is an exceptional first step toward guiding future developments in AI; however, as with many foundational documents, it lacks detail. A DoD News article titled “DoD Adopts 5 Ethical Principles of Artificial Intelligence” states that personnel will exercise “appropriate levels of judgment and care while remaining responsible for the development, deployment and use of AI capabilities.”

The appropriate level of judgment may depend on the maturity of the technology and the action being taken.

Machines are very good at narrow AI, which refers to accomplishing a single task like the ones mentioned earlier. Artificial general intelligence, or AGI, refers to intelligence that resembles a human’s ability to use multiple sensory inputs to perform complex tasks.

Machines have a long way to go before they reach that level of complexity. While that milestone may never be reached, many AI experts believe it lies somewhere in the future.

AIMultiple, a tech industry analysis company, published an article that compiled four surveys of 995 leading AI experts since 2009.

In each survey, a majority of respondents believed researchers would achieve AGI, on average, by 2060. Artificial general intelligence, however, is not inevitable, and technologists have historically tended to be overly optimistic when making AI predictions. Yet that very uncertainty reinforces why we must consider the ethical implications of AI in weapon systems now.

One such ethical concern is “lethal autonomous weapon systems,” which can autonomously sense the environment, identify a target and make the decision to engage without human intervention. These weapons have existed in various forms for decades, but nearly all of them have been defensive in nature, from the most simplistic form, the land mine, to complex systems such as the Navy’s Phalanx close-in weapon system.

But in March 2020 in Libya, the first case of an offensive lethal autonomous weapon emerged when a Turkish-built Kargu-2 unmanned aerial system was suspected of autonomously engaging human targets with a weapon.

This should concern everyone, because current offensive weapon technology is not mature enough to autonomously identify people and determine hostile intent.

The most difficult part of the kill chain to automate is the positive identification of a target, known as automatic target recognition. Object recognition is performed by machine learning or deep learning, typically via a convolutional neural network. Some instances of object recognition using computer vision, such as identifying military equipment like a tank or artillery piece, can be done today. But even then, environmental challenges such as target obscuration, smoke, rain and fog can reduce accuracy.
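To make the difficulty concrete, here is a minimal sketch of the detect-and-threshold pattern behind this kind of object recognition, using an off-the-shelf pretrained detector from torchvision. The model, the input file name and the confidence floor are illustrative assumptions, not a description of any fielded automatic target recognition system.

```python
# A minimal sketch of CNN-based object recognition with a confidence floor.
# Model, input file and threshold are illustrative assumptions only; they do
# not represent any fielded automatic target recognition system.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

image = read_image("scene.jpg")           # hypothetical sensor frame
batch = [weights.transforms()(image)]     # preprocessing matched to the weights

with torch.no_grad():
    detections = model(batch)[0]          # dict of boxes, labels, scores

CONFIDENCE_FLOOR = 0.8                    # below this, defer to a human
for label, score in zip(detections["labels"], detections["scores"]):
    name = weights.meta["categories"][int(label)]
    if score >= CONFIDENCE_FLOOR:
        print(f"high-confidence detection: {name} ({score:.2f})")
    else:
        print(f"low-confidence return, flag for human review: {name} ({score:.2f})")
```

Environmental degradation such as smoke or partial obscuration typically shows up in a pipeline like this as lower scores, which is exactly why what happens below the threshold should remain a human decision.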

Methods also exist to spoof or trick computer vision into misidentifying an object, known as an adversarial image attack.

One recent example demonstrated that a computer vision system could be tricked simply by placing a sticky note on an object with text describing a different object. As thorny as that problem is, the challenge of identifying a person and that person’s intent is even more difficult for a machine.
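For readers curious why such tricks work at all, below is a minimal sketch of the gradient-based flavor of adversarial image attack, the fast gradient sign method, as opposed to the physical sticky-note patch described above. The model and the perturbation budget are arbitrary assumptions chosen for illustration; the point is only that a small, carefully aimed change to the input can flip a classifier’s answer.

```python
# A minimal fast gradient sign method (FGSM) sketch: a small perturbation,
# aligned with the loss gradient, can change a classifier's prediction even
# though the image looks unchanged to a person. Model and epsilon are
# illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

x = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in image tensor
logits = model(x)
clean_pred = logits.argmax(dim=1)                     # the model's own answer

# Push the input in the direction that increases the loss for that answer.
loss = F.cross_entropy(logits, clean_pred)
loss.backward()

epsilon = 0.03                                        # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0)

with torch.no_grad():
    adv_pred = model(x_adv).argmax(dim=1)

print("prediction before:", clean_pred.item())
print("prediction after: ", adv_pred.item())          # often a different class
```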

“Signature drone strikes” are engagements based on a target’s pattern of life combined with other intelligence sources. Signature strikes are carried out by teams of highly trained professionals who provide context and assess the target’s intent, but even then they are controversial and certainly not foolproof, as evidenced by the drone strike in Kabul in late August 2021. Imagine if signature strikes occurred without a human in the loop. That is what an offensive autonomous weapon system would do.

Even if we solved every object- and person-recognition problem, there is yet another important piece where humans outperform machines: context. Under the rules of engagement, we seek to answer three questions before engaging a target. Can we legally engage the target? Ethically, should we engage the target? And morally, how should we engage it? With narrow AI in weapon systems, we might be able to answer the legal question some of the time, but we can never satisfy the ethical and moral questions with a machine.

An example of automatic target recognition failure is the scenario of a child holding a weapon, or even a toy weapon. Computer vision may allow a machine to accurately identify the child as a human with a weapon, and legally, if the rules of engagement allow, that may be sufficient justification for a strike against that target. However, a human observing the same individual would be able to discern that it is a child and would not ethically allow the engagement. How would a machine distinguish between a child holding a realistic-looking toy gun and a small adult holding a real one?

Situations like this, and a myriad of others, require the judgment of a human rather than a machine’s obedience to programmed commands.

So where is AI best implemented in weapon systems? In areas where it helps a human make a faster, more informed decision, where it reduces the workload of task-saturated operators, and in situations where a human is too slow to respond to a threat. Two prime examples are systems that pair phased-array radars with artificial intelligence at the edge to analyze the flight characteristics of detected objects and eliminate false-positive targets, such as birds. Autonomous drones that hunt down other drones and capture them with a net can also use AI to determine the best position and moment to fire the net based on the target’s speed and location.

These systems are defensive in nature, reduce operator workload and speed up the OODA loop so a force can seize the initiative.
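As a rough illustration of the kind of edge triage such defensive systems perform, here is a minimal sketch of a track classifier that labels radar detections as birds or small drones from simple flight-characteristic features. The feature set, training values and model choice are invented for illustration and do not represent any vendor’s product.

```python
# A minimal sketch of flight-characteristic triage for radar tracks.
# Feature values, labels and the model choice are illustrative assumptions,
# not a description of any fielded counter-UAS product.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-track features:
# [mean speed (m/s), speed variance, mean climb rate (m/s), heading change rate (deg/s)]
X_train = np.array([
    [ 9.0, 6.5, 0.4, 35.0],   # erratic, slow climb   -> bird
    [11.0, 7.2, 0.2, 40.0],   # erratic               -> bird
    [14.0, 1.1, 2.5,  8.0],   # steady, strong climb  -> small UAS
    [16.0, 0.8, 3.0,  6.0],   # steady                -> small UAS
])
y_train = np.array(["bird", "bird", "uas", "uas"])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

def triage_track(features):
    """Label a track and report the model's confidence so the operator can
    see why a return was filtered instead of it silently disappearing."""
    proba = clf.predict_proba([features])[0]
    best = int(np.argmax(proba))
    return clf.classes_[best], float(proba[best])

label, confidence = triage_track([15.0, 0.9, 2.8, 7.0])
print(f"track classified as {label} with confidence {confidence:.2f}")
```

A step like this clears clutter from the operator’s screen and speeds the OODA loop without ever touching the engagement decision itself.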

Where should the line be drawn when it comes to AI in weapon systems? Is it the point at which the decision to take another human life is delegated to a machine without any human involvement? That is the current position within the Defense Department. However, adversaries may not share the same ethical concerns when implementing artificial intelligence in their weapons.

It is easy to say now that a human will always remain in the kill chain, but how long can the United States maintain that position once it is at a disadvantage against an adversary’s faster OODA loop? Will the U.S. military ever be forced to choose between allowing a machine to kill a human and losing its competitive edge?

Opponents of keeping a human in the kill chain will also argue that humans make mistakes in war, while a machine is never tired, stressed, scared or distracted. If a machine makes fewer mistakes than a human, resulting in less collateral damage, is there not a moral imperative to minimize suffering by using AI?

While competitive advantages and the reduction of collateral damage are valid arguments for allowing a machine to decide to kill a human in combat, war is a human endeavor and there must be a human cost to undertake it. It must always be difficult to kill another human being, lest we risk losing what makes us human to begin with. This is where we should take a strong ethical stance when considering AI in future weapons.

Wayne Phelps is the author of On Killing Remotely: The Psychology of Killing with Drones and director of federal business development at Fortem Technologies, a counter-UAS company.

Topics: Robotics and Autonomous Systems

