Autonomous weapons: AI specialists call for a halt to military tech that goes way beyond drones

Image: AF.mil
Ingrid Fadelli

Global experts rail against a university’s collaboration with a defense company on the creation of autonomous weapons

A group of Artificial Intelligence (AI) experts from nearly 30 countries is attempting to block a South Korean university’s collaboration with a defense company aimed at developing autonomous weapons.

The news comes almost a year after technology magnate Elon Musk and other renowned leaders asked the United Nations (UN) to ban the development of all lethal autonomous weapons, as they could lead to “a third revolution in warfare”, which would be on a par with the invention of gunpowder and nuclear weapons.

In an open letter calling for a boycott over Korea Advanced Institute of Science and Technology (KAIST) and defense specialist Hanwha Systems’ plan to develop “killer robots”, over 50 researchers have expressed serious concern that the project could accelerate the arrival of autonomous weapons.

Toby Walsh, a professor at the University of New South Wales who organized the action, told The Guardian newspaper:

“This is a very respected university partnering with a very ethically dubious partner that continues to violate international norms. There are plenty of great things you can do with AI that save lives, including in a military context, but to openly declare the goal is to develop autonomous weapons and have a partner like this sparks huge concern.”

Hanwha is among the most prominent weapons manufacturers in South Korea, producing cluster munitions that are banned in 120 countries; South Korea, the US, Russia, and China are not among the signatories to that ban.

The company’s collaboration with KAIST, which is said to be openly aimed at creating automated killing machines, has raised serious concerns among the global academic community, ultimately leading to the proposed boycott.

An urgent call for action

A few months ago, KAIST opened a research center for projects combining national defense and AI. The post announcing the opening of the center has since been deleted, yet its main focus was said to be the development of object recognition technology, navigation algorithms for unmanned undersea vehicles, and other AI-based systems for defense purposes.

After reading an article in the Korea Times which explicitly referred to KAIST’s plan to develop AI weapons, Walsh wrote to the university enquiring about the reports, but received no answer.

This encouraged him to talk to other AI specialists and organize the boycott, in an attempt to block the institute’s development of killer robots.

The open letter signed by the group reads:

“At a time when the United Nations is discussing how to contain the threat posed to international security by autonomous weapons, it is regrettable that a prestigious institution like KAIST looks to accelerate the arms race to develop such weapons.

“We therefore publicly declare that we will boycott all collaborations with any part of KAIST until such time as the President of KAIST provides assurances, which we have sought but not received, that the Center will not develop autonomous weapons lacking meaningful human control.”

The scientists specify that they will no longer visit KAIST, host academics from the Korean university, or take part in any collaboration with it until the university provides assurances that it is not developing autonomous weapons.

The letter goes on to highlight the detrimental consequences that killer robots could have on global warfare.

“They will permit war to be fought faster and at a scale greater than ever before. They have the potential to be weapons of terror. Despots and terrorists could use them against innocent populations, removing any ethical restraints. This Pandora’s box will be hard to close if it is opened.

“As with other technologies banned in the past like blinding lasers, we can simply decide not to develop them. We urge KAIST to follow this path, and work instead on uses of AI to improve and not harm human lives.”

Meanwhile, KAIST representatives deny the reports, emphasizing the university’s cautious approach to the development of new defense technology.

“As an academic institution, we value human rights and ethical standards to a very high degree,” said KAIST’s president, Sung-Chul Shin, in a statement. “I reaffirm that KAIST will not conduct any research activities counter to human dignity, including autonomous weapons lacking meaningful human control.”

The growing debate around autonomous weapons

Semi-autonomous weapons and military machines are already being built in several countries, including the US, the UK, China, and France.

Although fully autonomous weapons have not yet been developed and deployed, military officials believe that the use of AI in war will eventually become widespread.

Over 20 countries have already called for a total ban on killer robots, concerned about the potential repercussions for global safety and security.

The letter signed by Walsh and his colleagues was published a few days before a United Nations meeting in Geneva, during which country representatives are set to discuss the challenges posed by autonomous weapons and possible ways of tackling them.

Many researchers and organizations, including Elon Musk and DeepMind co-founder Mustafa Suleyman, have repeatedly advocated for strict regulation or a ban on such weapons, as an arms race to develop killer robots could have dangerous and unpredictable consequences.
