Are killer robots the future of war?

They could transform the battlefield, but is the world ready for machines that decide who to kill? No, say many smaller nations.

Illustration showing a targeting system laid over avatars of people [Natallia Shulga/Al Jazeera]

Humanity stands on the brink of a new era of warfare.

Driven by rapid developments in artificial intelligence, weapons platforms that can identify, target and decide to kill human beings on their own — without an officer directing an attack or a soldier pulling the trigger — are fast transforming the future of conflict.

Officially, they are called lethal autonomous weapons systems (LAWS), but critics call them killer robots. Many countries, including the United States, China, the United Kingdom, India, Iran, Israel, South Korea, Russia and Turkey, have invested heavily in developing such weapons in recent years.

A United Nations report suggests that Turkish-made Kargu-2 drones in fully automatic mode marked the dawn of this new age when they attacked combatants in Libya in 2020 amid that country’s ongoing conflict.

Autonomous drones have also played a crucial role in the war in Ukraine, where both Moscow and Kyiv have deployed these uncrewed weapons to target enemy soldiers and infrastructure.

The emergence and deployment of such machines are driving intense debates among experts, activists and diplomats worldwide as they grapple with the possible benefits and potential risks of using robots, and consider whether and how to stop them.

Yet in an increasingly divided geopolitical landscape, can the international community arrive at any consensus on these machines? Do the ethical, legal and technological threats posed by such weapons make it essential to stop them before they take over the battlefield? Is a blanket ban feasible, or is a set of regulations a more realistic option? Al Jazeera posed these questions to leading experts in the field.

The short answer: An outright blanket ban on autonomous weapon systems does not look likely anytime soon. However, a growing chorus of voices — especially from the Global South — is calling for their regulation, and experts believe a global taboo of the kind that is in place against the use of chemical weapons is possible. Major military powers may be intrigued by the potential battlefield advantages such systems could give them, but there seems to be little appetite for them outside governments and generals.

A drone is seen in the sky seconds before it struck buildings in Kyiv, Ukraine, on October 17, 2022 [Efrem Lukatsky/AP Photo/File]

‘Cannot unwind World War III’

In late March, Yasmin Afina, research associate at the London-based Chatham House, described to the House of Lords, the second chamber of the UK parliament, how the US National Security Agency (NSA) had once mistakenly identified an Al Jazeera journalist as an al-Qaeda courier. That labelling — which also resulted in the journalist being put on a US watch list — only came to light through documents leaked in 2013 by Edward Snowden, a former contractor with the NSA.

A surveillance system of the kind behind that incident is not in itself “a weapon system, but it is lethality-enabling,” Afina said in her deposition. “If you were to engage the target, the journalist, that would absolutely be against international humanitarian law considerations.”

The potential for LAWS to trigger a chain reaction of escalatory events worries Toby Walsh, an AI expert at the University of New South Wales in Sydney, Australia.

“We know what happens when we put complex computer systems against each other in an uncertain and competitive environment. It’s called the stock market,” wrote Walsh in written evidence submitted to the House of Lords.

“The only way to stop dangerous feedback loops and undesirable outcomes is to use ‘circuit breakers’. On the stock market, we can simply unwind transactions when such a situation occurs. But we cannot unwind the start of WW3”, he added.

That does not mean researchers should stop developing the technology behind autonomous weapons systems, Walsh told Al Jazeera. That technology, he said, could bring benefits in other fields.

For example, the same algorithms used in car safety systems that avoid collisions with pedestrians will “be the algorithms that go into your autonomous drone that identify combatants, track them — and it’s just a sign change to kill them as opposed to avoid them”, he said. It would be “morally wrong to deny the world” a chance to reduce road deaths, he argued.
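Walsh’s “sign change” remark can be made concrete with a minimal, hypothetical sketch (illustrative only; this does not represent any real automotive or weapons software): a planner that scores candidate paths by how close they come to detected people turns from avoidance into pursuit simply by flipping the sign of a single weight.

```python
# Hypothetical sketch of the "sign change" point Walsh describes.
# Not real automotive or weapons code: names and logic are invented
# for illustration. A planner scores candidate paths by their closest
# approach to detected people; the same code serves opposite goals
# depending on the sign of one weight.

from math import dist

def score_path(path, detections, weight):
    """Score a path by its closest approach to any detection.

    weight > 0 rewards keeping distance (collision avoidance);
    weight < 0 rewards closing distance (pursuit).
    """
    closest = min(dist(p, d) for p in path for d in detections)
    return weight * closest

detections = [(5.0, 5.0)]                      # one detected person
paths = [[(0, 0), (1, 1)], [(4, 4), (5, 5)]]   # two candidate paths

# Identical planner, opposite behaviour: only the weight's sign differs.
avoid = max(paths, key=lambda p: score_path(p, detections, weight=+1.0))
pursue = max(paths, key=lambda p: score_path(p, detections, weight=-1.0))
print(avoid)   # picks the path farthest from the person
print(pursue)  # picks the path closest to the person
```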

Instead, the answer might lie in emulating the “relatively successful regulation of chemical weapons,” Walsh said.

When chemical weapons are used, they make front-page headlines and trigger a global outcry. The Chemical Weapons Convention prohibits their development, production, stockpiling and use. That, combined with the international taboo around chemical weapons, has also successfully stopped major arms companies from producing them.

“We can’t put Pandora back into a box, but those measures seem to have largely limited the misuse of chemical weapons in the battlefields around the world today,” Walsh said.

US Army MIM-104 Patriot surface-to-air missile system launchers are pictured at Rzeszow-Jasionka Airport, Poland, on March 24, 2022, amid Russia’s invasion of Ukraine. They can identify, select and engage targets without human intervention [REUTERS/Stringer]

Gains and risks

To be sure, AI-driven autonomous weapons systems have their benefits from a military perspective.

They could carry out some battlefield tasks without soldiers, reducing the risk of casualties. Supporters argue that the sophisticated technology embedded in these systems could eliminate or reduce human error and bias in decision-making. Greater accuracy in targeting could, at least in theory, reduce accidental human casualties.

Autonomous weapons systems can also be deployed in defensive roles, with lightning-fast algorithms able to detect and eliminate potential threats with greater efficiency and accuracy than humans.

Yet to many experts and rights groups, the risks of LAWS outweigh any potential advantages, ranging from the possibility of technical malfunctions with no oversight to violations of international law and the ethical concerns over emotionless machines making life-and-death decisions.

Central to all of those concerns is the question of accountability.

In 2019, the 126 countries party to the United Nations Convention on Certain Conventional Weapons (CCW) agreed upon 11 guiding principles recommended by a group of experts appointed by the UN to address concerns about autonomous weapons.

Among those principles was a decision that international humanitarian law would fully apply to the potential development of such weapons. But experts say it is unclear how that principle will be applied in the fog of war. If a robot commits a war crime, for instance, would it be the commanding officer in charge of the theatre of conflict who would be considered responsible? Or would the buck stop at higher-ups who decided to deploy the machine in the first place? Would the manufacturer of the weapon be liable?

All of this “represents a major gap in policy conversation” on the subject, Stockholm International Peace Research Institute (SIPRI) researchers Vincent Boulanin and Marta Bo wrote in an article in March.

There is not even an “official or internationally agreed definition” for autonomous weapons systems, Boulanin told Al Jazeera, though most countries agree that “the critical element is that the system will be able to identify, select and engage the target without human intervention”.

According to Boulanin, the director of the Governance of Artificial Intelligence Programme at SIPRI, weapons systems already operational today fit this description. One such example is the US-made MIM-104 Patriot surface-to-air missile system currently used by many countries, including Saudi Arabia and Israel.

“We are talking about a capability, a function that can be used across very different types of weapons systems, that can come in all shapes and forms and can be used in very different types of missions,” said Boulanin.

“So if you were to ban something,” he explained, “you would have to narrow down exactly the type of weapon or scenario that you find particularly problematic.”

Rather than a blanket ban, a two-tier set of regulations would be a more realistic outcome, he said, with some weapons systems prohibited and others allowed if they meet a strict set of requirements.

“The million dollar question now is, basically, what are the elements that would fit into these two buckets?” Boulanin said.

It is a question that different states have yet to agree on.

Delegates at a meeting on lethal autonomous weapons in the United Nations in Geneva, Switzerland, November 15, 2019.
Delegates at a meeting on lethal autonomous weapons at the United Nations in Geneva, Switzerland, on November 15, 2019 [Campaign to Stop Killer Robots/Handout via REUTERS]

Political or legal regulation?

There is an even more fundamental division among nations over how to approach the question of autonomous weapons: Should the world seek a legally binding set of rules or merely a political declaration of intent?

A political declaration can take many forms but would likely include a public statement in which major powers state their common position on the subject and promise to adhere to the principal points laid out in the document. This could look like the joint statement on preventing nuclear war and avoiding arms races signed in January 2022 by China, France, Russia, the UK and the US, in which they affirmed, among other things, that a nuclear war “can never be won and must never be fought”.

Boulanin said it is a question that nations “have radically different views” on. Russia has been “very open” about its objections to legally binding instruments; the UK and US are also critical, viewing it as “premature” and seeking a political declaration as a first step, he said.

Some others, like China and India, have been more ambiguous.

China has supported a ban on the use of fully autonomous weapons but not on their development — a position in keeping with the view that some of the world’s most dangerous military tools, including nuclear weapons, can serve as defensive deterrents.

China’s domestic arms industry has duly pressed ahead with the development of such technology, including the Blowfish A2 drones, which can fly in swarms and independently engage a target. The classified 912 Project also aims to develop underwater robots over the next few years.

India, meanwhile, has expressed concerns about a new race for such machines widening the technology gulf between nations, and about the proliferation of killer robots — including to non-state actors — but has simultaneously doubled down on developing its own autonomous weapons systems.

Exactly what resources militaries are committing to developing LAWS is difficult to gauge, but a 2021 Amnesty International report states that several major military powers were “investing heavily in the development of autonomous systems”. The UK, it said, was developing an uncrewed autonomous drone that could identify a target within a programmed area, “while Russia has built a robot tank which can be fitted with a machine gun or grenade launcher”.

Autonomous functions can also be added to existing or developing technologies, such as the US-made Switchblade 600 loitering munition.

The real pushback against such weapons systems is coming from large parts of the Global South — especially Latin America, Africa and the Middle East — that are seeking legally binding regulations.

Leading the campaign in recent times is a country that has shown that peace can be ensured without an army.

Activists from the Campaign to Stop Killer Robots, a coalition of nongovernmental organisations opposing lethal autonomous weapons or so-called ‘killer robots’, stage a protest at the Brandenburg Gate in Berlin, Germany, March 21, 2019 [Annegret Hilse/REUTERS/File]

‘Cultural view of peace’

In February, Costa Rica’s government, along with the local nongovernmental organisation FUNPADEM, organised a regional conference attended by representatives from almost every country in Latin America and the Caribbean.

The conference’s Belén Communiqué, which more than 30 states adopted, highlighted the dangers of autonomous weapons systems and called for the international community to respond to them by “developing and strengthening the international legal framework”.

“This is our national position based on our cultural view of peace,” Bradon Mata Aguilar, a project technician at FUNPADEM, told Al Jazeera.

Costa Rica abolished its army in 1948 and remains one of the most stable countries in the region. Aguilar explained that this fact creates “a huge difference between how other states and Costa Rica look at implementing these legally binding instruments”.

Costa Rica, he said, is seeking a complete prohibition of fully autonomous weapons and regulations implemented to control the use and development of semi-autonomous weapons.

Groups like the Stop Killer Robots campaign, a coalition of nongovernmental organisations seeking to preemptively ban LAWS, and the International Committee of the Red Cross (ICRC) also had a strong presence at the conference in Costa Rica.

Then, on March 25, at the Ibero-American Summit in the Dominican Republic, 22 heads of state of Spanish- and Portuguese-speaking countries issued a joint statement calling for the “negotiation of a legally binding international instrument, with prohibitions and regulations regarding autonomy in weapons systems”.

That sentiment was echoed two days later when member states of the Central American Integration System, including Belize, Costa Rica, El Salvador, Guatemala, Honduras, Nicaragua and Panama, adopted a similar statement calling for urgent negotiations.

Multiple nations in Africa and the Middle East — Algeria, Namibia, Ghana, Uganda, Zimbabwe, Morocco, Egypt, Jordan, Iraq and Palestine among them — have called for a complete ban on fully autonomous weapons systems over the past decade. Others like South Africa have called for regulations but have stopped short of seeking a full ban.

All of this momentum shows that the appetite for legislation is there, said Walsh. “We’ve seen three dozen or more countries at the floor of the United Nations call for regulation. We’ve seen the European Parliament vote for it. We’ve seen the African Union vote for it”.

But a key ingredient for the success of any talks is missing, according to some experts: trust.

US President Joe Biden meets virtually with Chinese President Xi Jinping from the Roosevelt Room of the White House in Washington, DC, Monday, November 15, 2021, as Secretary of State Antony Blinken, right, listens. High tensions between the US and China, two of the countries leading the development of autonomous weapons, could hurt efforts at building a global consensus on regulating their use, experts believe [Susan Walsh/AP Photo]

Trust and respect

Amid rising geopolitical tensions, many nations are worried about whether they can believe in what rivals state officially, analysts say.

That absence of trust plays out at two levels. Since international conventions like the CCW depend on consensus, “it only takes one country to be disruptive to stop talks progressing,” said Walsh.

But even if a new international law or set of regulations were to come into place, would they be effectively implemented? That is an open question because many nations “are not playing by the rules-based order anymore,” said Walsh.

Boulanin agrees with those concerns.

“States can agree — [that’s] one thing, but compliance is another thing,” Boulanin said.

“I think some states are worried that if they were to agree on an ambitious regulatory framework, they would potentially shoot themselves in the foot,” he added. If their adversaries did not play by the rules and developed LAWS, this would put them at a strategic disadvantage, he explained.

That risk, however, does not mean “we shouldn’t keep trying to agree on new norms for responsible behaviour”, Boulanin said.

Already, one traditional worry — that any international law would be unable to keep pace with the rapid rate at which technology is advancing — has been addressed, he said, with the UN’s approach now focusing on regulations that are technology-agnostic.

Still, there are even more basic issues at stake in this debate, including the morality of machines taking people’s lives without any human being involved in the decision-making process.

The Martens Clause, which has formed a part of the laws of armed conflict since its first appearance in the preamble to the 1899 Hague Convention (II), is often used in discussions over the ethics of autonomous weapons systems. It declares that in the absence of specific treaty law on a topic, people are still protected by “custom,” “the principles of humanity,” and “the dictates of public conscience”.

In 2019, UN Secretary-General António Guterres said machines with the power and discretion to take lives without human involvement were “politically unacceptable” and “morally repugnant”.

And many military personnel Walsh has spoken with for his research seem squeamish about fully autonomous weapons too.

He said that he has found “almost universally that the lower down the ranks you go, the closer to the battlefield you get, there is more pushback against the idea that you could be fighting against robots”.

Beyond laws, regulations and geopolitics, there is a more fundamental problem with the idea of machines lacking human empathy making such critical decisions, said Walsh.

“It’s disrespectful to human dignity.”

Source: Al Jazeera