How do you get killer robots to act ethically?
Trenches, artillery barrages, heavy tanks: at first glance, the war in Ukraine looks like a return to a yellowed history book. But there’s more to it than meets the eye, and the conflict could be a preview of future warfare. Both sides use artificial intelligence-based lethal autonomous weapons systems and so-called suicide drones that independently detect, track and fire at targets. These autonomous weapon systems – killer robots, if you will – are increasingly becoming a staple of modern warfare.
In recent months, UN officials in Geneva have tried unsuccessfully to hammer out a treaty that sets clear legal limits on autonomous weapons. As the US Department of Defense updates its guidelines on autonomous weapons to incorporate advances in artificial intelligence, the international community must ask – and answer – some important existential questions: How can we regulate killer robots before it’s too late? And how do we ensure vital moral safeguards in the face of a global AI arms race?
The UK, Australia and the US – all major investors in weapons with autonomous capabilities – are quick to claim that increased robotization will help clean up the battlefield by keeping soldiers out of harm’s way. But this glosses over serious ethical and legal concerns about the harmful consequences of these deadly weapons.
Chief among them is the “responsibility gap”. Machines, no matter how sophisticated, can never conform to the legal and moral requirements of the laws of war. When a soldier makes a fatal mistake in war – for example, confusing civilians with combatants – the incident can often be traced to individuals who can be held responsible. This is not the case with robots: if a machine commits a war crime of its own accord, who do you hold responsible? The commander who sent the robot into battle? The programmer? Or the government that invested in the technology in the first place?
These debates still largely take place at a theoretical level, as in most cases a human controller still makes the final decision on whether or not to authorize an attack. But that could soon change as weapons gain more and more autonomous capabilities. One shudders at the thought of killer robots in the hands of state and non-state actors who could use facial recognition and other AI technologies to target individuals or groups. We are still a long way from the world depicted in Isaac Asimov’s science fiction novels, where robots make life-and-death decisions, but developments in quantum computing and neuromorphic chip technology have made the prospect of fully autonomous robots more realistic. Using embedded chips that mimic human neural networks, robots will, in theory, be able to develop their own moral codes as they interact with their environment. This raises another thorny question: what moral codes would robots inherit?
The international community has so far failed to regulate lethal autonomous weapons systems (LAWS), despite a decade of on-and-off talks at the UN. Countries at the forefront of LAWS development have held their ground, hiding behind existing international humanitarian law. But the tide is turning. A growing number of countries have called for restrictions on the development and use of LAWS, and an international coalition of non-governmental organizations is gaining momentum, successfully recruiting vocal supporters in the tech industry.
A widely accepted regulatory framework governing LAWS is still a distant prospect, but optimists may find glimmers of hope in the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), which has been signed by almost 200 States. A similar non-proliferation regime governing LAWS could help control their development, acquisition, and use, although it would be difficult to implement, not least because of the dual-use nature of the technology.
Legal tools alone will not be enough to protect the world from the dangers of autonomous weapons. A broader debate on the ethics of robotics is needed, and it must take place at all levels of society. Why do so few engineering schools or big tech companies offer ethics courses? Is it unthinkable to introduce an equivalent of the Hippocratic oath in the field of robotics? There is an urgent need to reconcile new innovations with the norms and ethics of warfare, including international humanitarian law and the Geneva Conventions.
Above all, we need to dig deep and reassess what makes us human. Neuroscientific research suggests that most of us, most of the time, are neither innately moral nor immoral, but rather amoral. This means that our moral compass largely depends on our “perceived emotional self-interest” and is heavily influenced by our personal circumstances as well as our innate predilections, including a predisposition to choose actions that maximize our chances of survival. This should ring alarm bells for world leaders as the global AI arms race intensifies and cutting-edge technologies make their way from research labs to the military and the free market. We must remain alert to the likely exponential increase in violence that these killing machines will bring. Such heightened and personalized brutality will most likely complicate post-conflict reconciliation and reconstruction.
As Bertrand Russell said in the early 1920s: “Without more benevolence in the world, technological power will serve to increase the capacity of men to harm each other.” A century later, Russell’s words are a painful reminder that humanity is becoming ever more inventive at inflicting harm. Only strict regulation and robust diplomacy will protect us from our worst instincts.
Professor Nayef Al-Rodhan is a neuroscientist and philosopher. He is an Honorary Fellow of St Antony’s College, University of Oxford and Head of the Geopolitics & Global Futures Program at the Geneva Center for Security Policy (GCSP).