
The one job that will disappear by 2062 — the job of fighting wars

In 2062: The World That AI Made, Toby Walsh writes about the dangerous scenario of lethal autonomous weapons getting to decide who dies and who lives.


There is one job likely to disappear through automation by 2062 which I and many others especially fear. This is the job of fighting wars. Indeed, the replacement has already started to happen. An arms race has begun in the development of robots that can replace humans on the battlefield. The media like to call them ‘killer robots’, but the technical term is ‘lethal autonomous weapons’, or LAWs.

The problem with calling them killer robots is that this conjures up a picture of the Terminator, and hence of technologies that are a long way off. But it is not Terminators that worry me or thousands of my colleagues working in AI. It is much simpler technologies that are, at best (or at worst), less than a decade away. It is not smart AI but stupid AI that I fear. We’ll be giving machines that are not sufficiently capable the right to make life-or-death decisions. 

Take a Predator drone. This is a semi-autonomous weapon, which can fly itself much of the time. However, there is still a soldier, typically in a container in Nevada, in overall control. And importantly, it is still a soldier who makes the decision to fire one of its missiles. But it is a small technical step to replace that soldier with a computer. Indeed, it is already technically possible to do so. And once we build such simple autonomous weapons, there will be an arms race to develop more and more sophisticated versions. 

The world will be a much worse place if, in twenty years’ time, lethal autonomous weapons are commonplace and there are no laws about LAWs. This will be a terrible development in warfare. But it is not inevitable. We get to choose whether we go down this road – and we’ll be choosing which road we go down in the next few years. 

THE LURE OF KILLER ROBOTS 

For the military, the attractions of autonomous weapons are obvious. The weakest link in a Predator drone is the radio link back to base. Indeed, drones have been sabotaged by jamming their radio link. So if you can have the drone fly, track and target all by itself, you have a much more robust weapon. 

A fully autonomous drone also lets you dispense with a lot of expensive drone pilots. The US Air Force could be renamed the US Drone Force. It already has more drone pilots than pilots of any other type of plane; by 2062 it will likely have more drone pilots than all other pilots put together. And while they don’t risk their lives on combat missions, drone pilots still suffer post-traumatic stress disorder at rates similar to those of the air force’s other pilots.

Autonomous weapons offer many other operational advantages. They don’t need to be fed or paid. They will fight 24/7. They will have superhuman accuracy and reflexes. They will never need to be evacuated from the battlefield. They will obey every order to the letter. They will not commit atrocities or violate international humanitarian law. They would be perfect soldiers, sailors and pilots. 

Strategically, autonomous weapons are a military dream. They permit a military force to scale their operations, unhindered by workforce constraints. There are, however, many reasons why this military dream will have become a nightmare by 2062.


THE MORALITY OF KILLING MACHINES 

First and foremost, there is a strong moral argument against killer robots. We give up an essential part of our humanity if we hand over the decision about whether someone should live to a machine. Certainly today, machines have no emotions, compassion or empathy. Will machines ever be fit to decide who lives and who dies? 

Because war is a terrible thing, it should not, in my view, be an easy thing. It should not be something that we fight easily and ‘cleanly’. If history has taught us one thing, it is that the promise of clean wars is, and will likely remain, an illusion. War must always remain an option of last resort. Politicians need to be able to justify why our sons and daughters are returning home in body bags.


WEAPONS OF MASS DESTRUCTION 

Beyond the moral arguments, there are many technical and legal reasons we should be concerned about killer robots. In my view, one of the strongest arguments in favour of a ban on these weapons is that they will revolutionise warfare.

The first revolution in warfare came with the invention of gunpowder by the Chinese. The second was the advent of nuclear weapons, created by the United States. Each of these represented a step-change in the speed and efficiency with which we could kill. Lethal autonomous weapons will be the third revolution. 

Autonomous weapons will be weapons of mass destruction. Previously, if you wanted to do harm, you needed an army of soldiers. You had to persuade this army to follow your orders, as well as train them, feed them and pay them. Now, just one programmer will be able to control hundreds or even thousands of weapons. As with every other weapon of mass destruction – chemical weapons, biological weapons and nuclear weapons – we will need to ban autonomous weapons.

In some respects, lethal autonomous weapons are even more troubling than nuclear weapons. To build a nuclear bomb requires great technical sophistication. You need the resources of a nation-state, access to fissile material, and skilled physicists and engineers. Because all of these resources are required, nuclear weapons have not proliferated greatly. Autonomous weapons will require none of this. Simply take a small drone and program it with a neural network that will identify, track and target any Caucasian face. Such face-recognition software can be found in many smartphones today. Now attach a few grams of high explosive to the drone. By bringing together some existing technologies, you have a simple, inexpensive but very lethal autonomous weapon.

If you drive a truck with 10,000 of these drones into New York City, you could mount an attack to rival those of 9/11. You don’t even need your weapons to be very reliable. Suppose only one in ten of your drones finds a target – you could still kill a thousand people in just minutes. If half of them succeed, you are up to 5000 dead in no time at all.

WEAPONS OF ERROR 

In addition to being weapons of terror, autonomous weapons will be weapons of error. From a technical perspective, the last place you would want to put a robot is on the battlefield.

There’s a good reason robots turned up first in places like car factories. In a factory, you can control the environment. You get to decide where everything and everybody goes. You can even put the robot in a cage to protect bystanders. The battlefield is a very different environment, full of uncertainty and confusion. Not the place that you want to put a robot with deadly potential. 

In November 2016 an investigation by The Intercept of US military operations against the Taliban and al Qaeda in the Hindu Kush revealed that nearly nine out of every ten people who died in drone strikes were not the intended targets. Remember, this is while we still have a human in the loop, with situational awareness that is currently superior to that of any machine. And that human is making the final life-or-death decision. As a technologist, if you asked me to replace the human drone pilot with a machine, I would be pleased if we merely matched human performance, with nine out of every ten deaths being errors. I fear we’d make errors almost every time.


ARMS RACE 

In an open letter, I and thousands of my colleagues working in AI warned that there would be an arms race to develop more and more capable autonomous weapons. Sadly, that arms race has begun. The Pentagon has allocated $18 billion in its current budget to the development of new types of weapons, many of them autonomous. Other countries, including the United Kingdom, Russia, China and Israel, have also initiated sophisticated programs to develop autonomous weapons.

Pick any sphere of battle – in the air, on the land, on the sea or under the sea – and there are autonomous weapons under development by militaries around the world. You can even argue that there is at least one autonomous weapon that is already operational. This is Samsung’s SGR-A1 Sentry Guard Robot, which guards the demilitarised zone (DMZ) between North Korea and South Korea. 

Now, there is no good reason to step into the DMZ. It is the most heavily mined part of the world. But if the mines don’t kill you, Samsung’s robot will. Its autonomous machine gun can identify, target and shoot anyone who steps into the no-man’s-land, with deadly accuracy from kilometres away.

MOUNTING PRESSURE 

Twenty-three countries have so far called on the United Nations to ban lethal autonomous weapons. These are Algeria, Argentina, Austria, Bolivia, Brazil, Chile, Costa Rica, Cuba, Ecuador, Egypt, Ghana, Guatemala, the Holy See, Iraq, Mexico, Nicaragua, Pakistan, Panama, Peru, the State of Palestine, Uganda, Venezuela and Zimbabwe. In addition, the African Union has called for a pre-emptive ban. Most recently, China has called for a ban on the use (but not the development and deployment) of fully autonomous weapons. 

There is still some distance to go before support for a ban is a majority opinion within the United Nations, let alone a consensus. The countries so far in support are generally those most likely to be on the receiving end of such terrible weapons. There is, however, a growing consensus on the need for ‘meaningful human control’ over any individual attack. This would require the technology to be predictable, the user to have relevant information, and the potential for timely human judgement and intervention.


AVOIDING THIS FUTURE 

With most weapons in the past, we had to witness their use before we took action. We had to observe the terrible effects of chemical weapons in World War I before we brought in the 1925 Geneva Protocol. We had to witness the horrors of Hiroshima and Nagasaki, and live through the several near misses of the Cold War, before we banned nuclear weapons. In only one case – that of blinding lasers – has a ban been introduced pre-emptively.

My fear is that we will have to witness the terrifying impact of lethal autonomous weapons before we find the courage to outlaw them. Whatever happens, by 2062 it must be seen as morally unacceptable for machines to decide who lives and who dies. In this way, we may be able to save ourselves from taking this terrible path. 

This excerpt from 2062: The World That AI Made by Toby Walsh has been published with permission from Speaking Tiger Books.
