
Terminators

What are the risks of “killer robots” to civilians? Lucy Suchman on the automation of modern warfare.

Tripoli, March 27, 2020. Libyan armed forces deploy STM Kargu-2 weapons systems to hunt down people suspected of working with the militia of General Khalifa Haftar, the leading rebel commander in the country’s long-running civil war. The Kargu-2 is a Turkish-made quadcopter, a small drone with four rotors, packed with explosives.

And it’s a robot—or, as militaries call it, an autonomous weapons system. The operation was the first known case of so-called “killer robots” being sent out to track, target, and kill human beings. A UN report found they’d been “programmed to attack targets without requiring data connectivity between the operator and the munition.”

Since then, military and paramilitary forces have used various autonomous and semi-autonomous weapons systems in conflicts in Ukraine, Burkina Faso, Ethiopia, and Gaza. And now there’s a campaign to restrict their use. The UN General Assembly has tried to ban them. The EU has endorsed a ban but hasn’t enacted anything. China has called for a treaty prohibiting their use—though not their production, which Beijing is ramping up, in order to compete with the United States. Meanwhile, European weapons manufacturers say they’re “not far off” from producing drones that can select and hit targets entirely on their own. 

The main rationale for outlawing these systems is that they represent an exceptional threat to civilians in combat zones. UN Secretary-General António Guterres says they’re “morally repugnant and politically unacceptable.” At the same time, several governments are looking to produce or buy more of them, saying they’re necessary for war in the 21st century.

So what civilian risks do these weapons really entail?

Lucy Suchman is a professor emerita at Lancaster University. Suchman says this is just the beginning for killer robots. The systems now in use are basically early prototypes, but they’ve been potent all the same. And while most combat robots are still remotely controlled by humans, in the near future more and more of them will be able to fight and kill on their own. 

It’s a new arms race, Suchman says. Militaries are rushing to buy the best new weapons systems. Which might prove very effective: They could potentially outgun even the best-trained and -equipped human soldier. But a big problem is that they shoot whatever they’ve been trained to shoot. And that’s one thing when it comes to obvious military targets, like tanks, but entirely another when it comes to people in crowded urban scenes …


Gustav Jönsson: What exactly are these “killer robots”? 


Lucy Suchman: That phrase conjures an infamous figure from science fiction, but it really just means “lethal autonomous weapon systems.” Militaries have had automated weapons for a long time, but in these new systems, two critical functions have been automated that weren’t before: the identification of the target and the initiation of force against it.

One reason why that’s important goes to the foundations of international humanitarian law, which defines the difference between murder and legitimate killing in wartime. A cornerstone of this law is the “principle of distinction,” which is the ability to discriminate between those who’re in combat and those who aren’t. But that’s increasingly vexing in armed conflicts today, which take place outside designated battle spaces—in crowded cities, for example—where combatants aren’t always in uniform. So, on the one hand, it’s gotten harder to identify what constitutes legitimate targets, while on the other, there’s an increasing automation of target identification. That’s a big problem.

Jönsson: What does an autonomous weapons system look like? 

Suchman: There’s a whole spectrum of systems. Let’s start with the “killer robots.” A lot of the robotic vehicles now employed in war are remotely controlled by a human operator—so they’re not autonomous. That might be a ground robot that’s sent out to, say, investigate a building; it’ll be operated by a person using a camera mounted on it. It could also be a drone that’s remotely operated by a pilot.


Once militaries figured out how to use robots for surveillance, they began putting weapons on them—for example, you might equip a ground robot with explosive devices or a drone with firearms. But now that you’ve shifted the lethal force onto the robot, there’s pressure to incorporate things like computer vision, so that they’re able to surveil the scene on their own rather than just projecting their images back to their human operators. Using various algorithmic technologies, those robots then identify targets on their own.  

The argument in favor of these systems is ultimately about speed: Combat now often happens so fast that the human operator can’t keep up—and the operator’s communications with a weapon might get cut off. This argument can become self-fulfilling, because the faster combat goes, the stronger the case becomes that we need to automate weapons systems just to keep up. So we’re in a new arms race—focused on combat acceleration.

Then there are networked weapon systems. The current operation of the Israel Defense Forces in Gaza is a very concerning but illustrative example. They take a person who is known to be a Hamas militant, then they analyze the features of that person—their sex and age, for example. The Palestinian population is very intensively surveilled: They go through checkpoints with facial recognition, they’re on a population registry, and they have their phone and internet communications monitored. So the IDF uses that information to create individual ratings—everyone is scored on how many features they have in common with someone who’s known to be a legitimate target.


But this is very problematic, as you can imagine, because lots of civilians are affiliated with combatants: They might have a relative who’s a combatant or they might have been in a social media group with someone who’s a combatant. And then these civilians can be scored as combatants.
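
To make the mechanism concrete, here is a minimal sketch in Python of what feature-overlap scoring of this kind implies. The profile, traits, and numbers below are entirely invented for illustration; the actual systems are classified and far more elaborate.

```python
# Hypothetical illustration of feature-overlap scoring. A person is reduced
# to a set of observable traits, and their score is the fraction of traits
# they share with a profile built from known combatants. Every name and
# trait here is invented.

KNOWN_COMBATANT_PROFILE = {
    "male", "age_18_40", "in_group_chat_X", "relative_is_member",
}

def score(person_traits: set[str]) -> float:
    """Fraction of profile features this person shares."""
    shared = person_traits & KNOWN_COMBATANT_PROFILE
    return len(shared) / len(KNOWN_COMBATANT_PROFILE)

# A civilian who is male, of fighting age, and happens to be in one shared
# group chat already scores 0.75: the false-positive problem Suchman
# describes.
print(score({"male", "age_18_40", "in_group_chat_X"}))  # 0.75
```

Nothing in such a score distinguishes affiliation from membership; that distinction has to come from a human with time to make it.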

It’s particularly concerning in the case of Gaza. According to reports from sources like +972 Magazine, there’s been tremendous pressure on the Israeli military to accelerate and increase the production of targets. The imperative is to hit lots and lots of them—so these systems are generating target recommendation after target recommendation, but the operators responsible have very little time to judge whether they’re legitimate.

One Israel Defense Forces soldier, speaking anonymously to +972, said he had about 20 seconds to determine whether someone was a Hamas fighter, and that he was told only to check whether the potential target was male, because Hamas doesn’t use female fighters. Judging the validity of proposed targets in this way would of course be tremendously superficial. Yet it seems to be the future of warfare.

Jönsson: I think “artificial intelligence” makes many people think of “super-intelligence,” but you’re saying it’s often more like “superficial intelligence”?

Suchman: You know the old saying, Garbage in, garbage out? The quality of the output of these technologies is only as good as the quality of the input. There’s so much mystification around “artificial intelligence,” which I think can be better thought of as algorithmic intensification.


Jönsson: “Algorithmic intensification” in the sense that it generates targets faster than humans can?

Suchman: That’s right. An algorithm is basically the codification of an instruction: Given this input, this is the appropriate output. That’s a highly simplified explanation, but that’s basically what’s going on in these systems. They break down the input—whether signals from radar or text strings from social media—and then turn it into numbers which are repeatedly run through very complex mathematical operations. 
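
As a rough illustration of that pipeline, here is an input turned into numbers and pushed through a fixed rule. The encoding and threshold are arbitrary stand-ins, a sketch of the structure rather than any real system:

```python
# Toy version of "break down the input, turn it into numbers, run them
# through mathematical operations." The program manipulates character
# codes; it has no notion of what the text means.

def encode(text: str) -> list[float]:
    # Map each character to a number between 0 and 1.
    return [ord(ch) / 255 for ch in text]

def classify(values: list[float], threshold: float = 0.4) -> str:
    # Stand-in for "very complex mathematical operations": an average.
    average = sum(values) / len(values)
    return "flag as significant" if average > threshold else "ignore"

print(classify(encode("intercepted message")))
```

Real systems replace the average with millions of learned parameters, but the structure is the same: numbers in, numbers out, with significance assigned by humans at either end.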

It’s all very complicated, but artificial intelligence isn’t ever really “intelligence” in the sense of the system understanding the meaning of what it’s doing. Humans are either on the input side telling the AI, Consider this input significant, or humans are on the output side telling it, This output is intelligible or unintelligible. Eventually, the system will generate output that looks intelligible to humans—but the system itself has no understanding of the strings of data it’s manipulating.

The whole premise of super-intelligence remains a fantasy. These systems remain extremely limited in their capacities for perception and analysis. They can translate machine-readable input, but humans always determine the significance of that input.   

Jönsson: How widespread is the use of autonomous weapons systems? 

Suchman: They haven’t yet been adopted widely, but that will likely happen soon. I think that’s why we’re seeing a push now for a treaty to prohibit their use: We’re in a crucial time. 

At this point, the case we know the most about is the IDF, because Israel has positioned itself at the leading edge as a weapons producer. They’re demonstrating state-of-the-art technology. And Israel has very close ties with the U.S. companies providing the cloud services and infrastructure for these very data-intensive systems. 


In Ukraine, by contrast, the focus has been on drones, because drones have been relatively cheap and available for Ukrainian soldiers, who can configure and arm the drones themselves. But they’re primarily operated by humans using remote control. Increasingly, however, they’re fitting drones with first-person view (FPV) cameras, which means they’re not just using them as weapons but as an extension of their surveillance capabilities.

It’s quite easy to imagine even Ukraine’s fairly basic drones being further automated to the point that they can match what their vision systems are picking up against prescribed target profiles. And there are companies working on things just like that. One of the most prominent examples is the U.S.-based company Palantir. They’re now promoting the idea of a command-and-control dashboard where you have continuously updated surveillance of your area of operation—and you’ll have a sort of ChatGPT-style conversational interface that you can query to come up with courses of action.

At this point, I think it’s still largely in the developmental stage. Notably, if you look at their demos, it’s clear that they’re informed by the war in Ukraine, even though they don’t mention Ukraine by name—they have Russian tanks as targets. 

The way to train a system with computer vision to identify a tank is by having repeated rounds of humans telling the system, That’s a tank, when they see a tank on the monitor. A tank is just about the clearest legitimate target you can come up with.
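
In machine-learning terms, that’s supervised labeling. Here is a minimal sketch, with hypothetical frame IDs and an automated stand-in for the human annotator:

```python
# Hypothetical sketch of the labeling loop behind "That's a tank": humans
# tag frames, and the accumulated tags become the training data a computer
# vision model is fit to. No real dataset or model is referenced here.

def human_label(frame_id: str) -> int:
    # Stand-in for an analyst clicking "tank" (1) or "not tank" (0).
    return 1 if "tank" in frame_id else 0

frames = ["frame_tank_001", "frame_truck_002", "frame_tank_003"]
labeled = [(frame, human_label(frame)) for frame in frames]

# A real pipeline would now train a vision model (e.g., a convolutional
# network) on these pairs; the model can only ever learn the regularities
# the human labels encode.
print(labeled)
```

That works tolerably for tanks, because a tank looks like little else on a battlefield; as Suchman notes next, nothing comparable exists for telling a combatant from a civilian in a crowd.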

But this is radically different from the kind of combat in Gaza. The IDF always claims there was a Hamas militant in the area they struck—but regardless of their intentions or the veracity of their intelligence claims, they’re killing lots of civilians.


Jönsson: What broader effects are these weapon systems having on warfare? 

Suchman: The most important is that they’re speeding up the pace of combat. It goes back to that circular logic: We have to automate weapons because things are happening too fast for humans to follow, and once we start to automate weapons, combat becomes even more fast-paced. The argument in favor of them is that you always have to be ahead of your enemy.

Jönsson: What kind of risks do these weapon systems present outside of warfare? It strikes me that if you’re relying on them for target identification, for instance, you have to start building the profile of who you want to identify as a target well before the fighting starts—implying a need for peacetime surveillance?

Suchman: That’s right. These systems are already being deployed for border control. They’re at the U.S.-Mexico border and increasingly at European borders. And as you say, these systems require massive amounts of surveillance data to be trained, which encourages more and more surveillance and profiling. We’re going to be seeing more of them in connection with border control, as well as in domestic policing. 

And I think the proliferation of drones is going to be an emerging crisis. People can easily buy drones that are smaller than some military drones but bigger than the small commercial ones used by hobbyists. I can imagine a very plausible proliferation of drones to the point where it’s very difficult to detect who’s operating them and whether they’re armed.

That would raise people’s level of alert when they see one, and it would escalate their level of insecurity, not just in military conflicts but in everyday life—at crowded concerts, for example. 


Jönsson: What are the challenges of regulating these systems?

Suchman: Bans on fully autonomous weapon systems would resemble bans on land mines or nuclear weapons. The case for their prohibition is that they can’t adequately discriminate between combatants and non-combatants. But the question is how you enforce that. 

Various groups have pushed for an international treaty prohibiting weapons that automate target identification and force initiation. They want to ensure there’s always what’s come to be called “meaningful human control” over these functions. They say “meaningful” to contrast with what we’re seeing now, where there’s ostensibly a human who oversees the system, but the pace of target generation is so fast that there’s no time for that person to have truly meaningful control. 

The idea is that the human operator needs to be responsible and accountable for all of the considerations that go into the designation of a target as legitimate. 

Jönsson: It strikes me that if you suspect a certain state of having a chemical-weapons stockpile, you can send in non-proliferation inspectors to check out sites—but who could you send in to inspect a bad algorithm? Where would they even go?

Suchman: It’s a good question. People with expertise in these technologies would have to be given the opportunity to inspect these systems from the inside. We have the same problem with arms control. But as you say, it’s more complicated still in the case of algorithmic or computational technologies, because they’re much more diffuse and difficult to inspect. It’s a big challenge. But there are people working on it.