Autonomous weapons are here, but the world is not ready for them

This may be remembered as the year the world learned that lethal autonomous weapons had gone from a futuristic concern to a battlefield reality. It was also the year when policymakers failed to agree on what to do about it.

On Friday, 120 countries participating in the United Nations’ Convention on Certain Conventional Weapons could not agree on whether to limit the development or use of lethal autonomous weapons. Instead, they pledged to continue and “step up” the discussions.

“It’s very disappointing and a real missed opportunity,” says Neil Davison, senior science and policy adviser at the International Committee of the Red Cross, a humanitarian organization based in Geneva.

The failure to reach an agreement came about nine months after the UN reported that a lethal autonomous weapon had been used for the first time in an armed conflict, in the Libyan civil war.

In recent years, more and more weapon systems have incorporated elements of autonomy. Some missiles can, for example, fly within a given area without specific instructions, but they still typically rely on a person to launch an attack. And most governments say that, for now at least, they plan to keep a human “in the loop” when using such technology.

But advances in artificial intelligence algorithms, sensors, and electronics have made it easier to build more sophisticated autonomous systems, paving the way for machines that can decide for themselves when to use lethal force.

A growing list of countries, including Brazil, South Africa, New Zealand and Switzerland, argue that lethal autonomous weapons should be restricted by treaty, as chemical and biological weapons and land mines have been. Germany and France support restrictions on certain types of autonomous weapons, potentially including those that target humans. China supports an extremely narrow set of restrictions.

Other countries, including the United States, Russia, India, the United Kingdom and Australia, oppose a ban on lethal autonomous weapons, arguing that they need to develop the technology to avoid being placed at a strategic disadvantage.

Killer robots have long captured the public imagination, inspiring both beloved sci-fi characters and dystopian visions of the future. A recent renaissance in AI, and the creation of new types of computer programs capable of outperforming humans in certain areas, have prompted some of the biggest names in tech to warn of the existential threat posed by smarter machines.

The problem became more pressing this year, after a UN report found that a Turkish-made drone known as the Kargu-2 had been used in the Libyan civil war in 2020. Forces aligned with the UN-backed Government of National Accord reportedly launched drones against troops supporting Libyan National Army general Khalifa Haftar, targeting and attacking people independently of any human operator.

“Logistics convoys and retreating Haftar-affiliated forces were … hunted down and remotely engaged by the unmanned combat aerial vehicles,” the report said. The systems “were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”

The news reflects the speed at which autonomy technology is improving. “Technology is developing much faster than the military-political discussion,” says Max Tegmark, a professor at MIT and cofounder of the Future of Life Institute, an organization dedicated to fighting the existential risks facing humanity. “And we are heading, by default, to the worst possible outcome.”
