Tech Refactored

S2E14 - Killer Robots and State Practice: Governing Autonomous Weapons Systems

November 05, 2021 Nebraska Governance and Technology Center Season 2 Episode 14

On this episode Rebecca Crootof, a law professor at the University of Richmond, joins Gus to discuss Autonomous Weapons Systems and the analogies that they conjure in our minds - and why those analogies don’t do much to help us govern the use of these weapons.

Note, near the end of this episode there is a brief mention of violence towards animals during wartime.

Disclaimer: This transcript is auto-generated and has not been thoroughly reviewed for completeness or accuracy. 

[00:00:00] Gus Herwitz: This is Tech Refactored. I'm your host, Gus Herwitz, the Menard Director of the Nebraska Governance and Technology Center at the University of Nebraska. Today we're joined by Rebecca Crootof, a law professor at the University of Richmond. Her work explores questions stemming from the iterative relationship between law and technology, often in light of social changes sparked by increasingly autonomous systems, artificial intelligence, cyberspace, robotics, and the internet of things. 

So we're [00:01:00] going to be talking about increasingly artificially intelligent, autonomous, cyberspace-connected, internet-of-things robots that can kill you. Today we're specifically focusing on autonomous weapon systems and the analogies that they conjure up in our minds, and why those analogies might not do much to help us figure out how to govern and regulate the use of these weapons.

Rebecca, thank you for joining us. 

[00:01:25] Rebecca Crootof: Thank you, Gus. I'm so glad to be here. 

[00:01:27] Gus Herwitz: So, going completely off script, I have to start by saying: when I think autonomous weapons, I think BattleBots, I think Boston Dynamics dogs, and I think RoboCop. Am I thinking about the right stuff?

[00:01:44] Rebecca Crootof: Definitely not, but also yes. Right? That is the whole thing about pop culture: it gives us a set of ideas and narratives that we use to make sense of things [00:02:00] that we don't otherwise quite know what to do with. In some ways they're really, really useful for identifying potential alternate timelines, potential problems, and in some ways they're nowhere close to what's actually happening in reality.

[00:02:18] Gus Herwitz: So bringing us back down to earth, I guess, can you, uh, start by just telling us what are we talking about when we're talking about autonomous weapons systems? 

[00:02:27] Rebecca Crootof: Yeah, so an autonomous weapon system is a weapon system that is capable of independently selecting and engaging targets. That "and" is really, really important there, the selection and engagement. And it does this based on a combination of pre-programmed constraints and information that it's gathering from the environment that it's in and processing.

I think the easiest way to clarify what autonomous weapons systems are is to start off with what they're not. So first of all, when I start talking [00:03:00] about them, a lot of people, other than you, jump immediately to thinking about drones, right? And many drones right now have certain autonomous functions. Some have the capability to autonomously take off or to autonomously recommend a target. But most of the drones that are in use today are what I would call semi-autonomous, insofar as they work with a human operator who selects the targets, which the drone then autonomously engages, or vice versa: they might recommend a target to a human operator, and the human operator makes the choice about engaging it. So that's one conception. Another is that they're not necessarily automatic systems, like landmines.

In some ways the landmine is maybe the dumbest autonomous weapon system, insofar as there's a geographic and temporal distance between the decision to use lethal force and how it actually [00:04:00] manifests in the world. But landmines are generally triggered in relatively predictable ways, right? They are subject to a certain amount of pressure and they react by exploding. Whereas with autonomous weapons systems, it's more that they're gathering and processing information, and that introduces a little bit of an X factor. They can be largely predictable, but they still have the capacity for some unanticipated action. The very last thing they're not is weapons of the future.

These are not some futuristic sci-fi thing, insofar as over 30 states, 30 countries around the world right now, already employ weapons systems with varying levels of autonomy in the selection and engagement of targets. So the US Aegis combat system has been in use since the 1980s to protect warships, and it's a defensive system that identifies incoming ballistic missiles or threatening airplanes, and it has a [00:05:00] mode called casualty mode, where it assumes the human operators are incapacitated and it independently selects and engages incoming targets to protect the warship. The South Korean SGR-A1 is a stationary armed robot that's used to monitor the Demilitarized Zone, and it actually uses, I'm gonna get the video game wrong, but it is like a video game about dancing, and it identifies human forms.

Is it like Dance Dance Revolution? I think it uses that same technology to identify human forms, trains a gun on them, and orders them to halt and surrender. And South Korea says it's only used in conjunction with human operators, but the manufacturer says it has the capacity for operating independently, for that sort of autonomous action.
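To make the "independently selecting and engaging" distinction concrete, here is a minimal, purely illustrative sketch in Python. It is not modeled on the Aegis, the SGR-A1, or any real system; the target types, threshold, and operator-approval step are all hypothetical stand-ins for the pre-programmed constraints and gathered environmental information described above.

from dataclasses import dataclass

@dataclass
class Target:
    kind: str            # e.g. "incoming_missile", "vehicle" (hypothetical labels)
    threat_score: float  # 0.0-1.0, from onboard sensing

# Pre-programmed constraints: what may ever be engaged, and at what threshold.
PERMITTED_KINDS = {"incoming_missile"}
THREAT_THRESHOLD = 0.9

def select_targets(sensor_readings):
    """Selection: filter the gathered environmental data through the constraints."""
    return [t for t in sensor_readings
            if t.kind in PERMITTED_KINDS and t.threat_score >= THREAT_THRESHOLD]

def operator_approves(target):
    """Stand-in for a human operator's decision (always defers in this sketch)."""
    return True

def engage(target):
    print(f"engaging {target.kind} (threat {target.threat_score:.2f})")

readings = [Target("incoming_missile", 0.95), Target("vehicle", 0.40)]

# Semi-autonomous: the system selects or recommends, a human decides on engagement.
for t in select_targets(readings):
    if operator_approves(t):
        engage(t)

# Autonomous ("casualty mode" style): same selection logic, no human in the loop.
for t in select_targets(readings):
    engage(t)

The only difference between the two loops is whether a human sits between selection and engagement, which is the line between "semi-autonomous" and "autonomous" drawn in this episode.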

[00:05:50] Gus Herwitz: There's so much just in there. First off, I have to start by saying this is a scary conversation, and as [00:06:00] academics we get really excited and enthusiastic about lots of these topics, and we have to recognize the human dimension and the casualty dimension of these systems.

What's the catalog of existing autonomous weapons systems? What does that look like? How prevalent are these today?

[00:06:26] Rebecca Crootof: Whew. That's a hard question to answer because, for one, states don't always let us know everything they're using. But there is also a lot of disagreement around the definition.

So I should acknowledge up front that the definition I gave you is somewhat my own concoction to try and make sense of these, and depending on your regulatory goal, people will define these in or out of existence. And so trying to say how many exist [00:07:00] today is really difficult, because there is disagreement about what even comes under the heading, right?

Whether or not landmines even count is one of the disagreements. There are a number of folks who would say, well, an autonomous weapon system is a weapon system capable of human-level reasoning, and therefore it doesn't exist yet. Right? And so I presented the definition as if it's neutral, but of course definitions are never neutral, right?

And so I want to acknowledge my own biases. I think definitions often reflect what we're concerned about. What I find interesting, scary, worth thinking about with autonomous weapon systems is this capacity for independent, unpredictable action in an arena of lethal force, right?

And so, for me, I use a definition that encompasses that capacity, identifies [00:08:00] it as a capacity, and then is very, very broad, so that within that definition we have the ability to examine a lot of different potential manifestations of it. So, I totally side-stepped your question, but...

[00:08:13] Gus Herwitz: The best questions often are sidestepped. Speaking of sidestepping, or cabining the topic by way of definition:

Your work is primarily focused on warfighting technologies and regulation under the law of armed conflict, and that's what we're going to focus on. But in terms of framing the discussion, is that the exclusive domain where we're concerned about these technologies, or are we starting to see anything like this in domestic use, either by police forces or law enforcement agencies or by non-government actors?

[00:08:50] Rebecca Crootof: So there's a long tradition of military technologies being repurposed and sold off to law enforcement. And you might recall, not that long ago, there was a picture going around Twitter of, I think it was the Boston Dynamics dog with a gun attached to it, right? And everyone was like, oh, could this be used by police?

That particular example was a sort of clickbait kind of thing; that particular system was in no way ready to handle being on the street in any kind of way. But I think that's definitely a dynamic that it's worth being aware of, that so often military technologies do sort of trickle down to law enforcement.

I don't see that happening that much with autonomous weapons systems as of yet, but it's certainly happening across the board with different uses of AI and predictive technologies. 

[00:09:49] Gus Herwitz: So let's turn to your main area of research and your main area of focus. What is the main body of law that [00:10:00] applies to autonomous weapon systems?

[00:10:02] Rebecca Crootof: So most of the time when people are talking about autonomous weapon systems, they're thinking about them in the context of armed conflict. And so the law of armed conflict applies. And the weird thing about the law of armed conflict is that it displaces a lot of other existing protections that exist during peacetime.

It just displaces, in some ways, human rights law and other restrictions, and notably, restrictions on killing. Right? And so one of the fundamental things about the law of armed conflict is that it might be lawful to kill somebody under the law of armed conflict. And so what makes autonomous weapon systems different is that they're analyzed in the context of a legal structure in which killing can be permissible, right? That's different than the questions that exist around, let's say, autonomous vehicles, where people are very concerned about their risk of harming [00:11:00] people, but that's not what they're necessarily intended to do. Whereas autonomous weapon systems many times might be designed in order to use lethal force.

[00:11:12] Gus Herwitz: And how do we think about regulating these today? Is the law of armed conflict accepting of them? Is it rejecting of them? Somewhere in between?

[00:11:28] Rebecca Crootof: So right now, the law of armed conflict in general is fairly tech-neutral, right?

It mostly aims to regulate conduct, and it applies regardless of whether you're using a sword or a laser beam. And so the question then is, is this law going to be sufficient to address the unique problems that are raised by autonomous weapon systems? And there are roughly three camps of thought on this.

One: the existing-law-is-enough advocates, who say, hey, we've got a ton of tech-neutral rules, and these are going to apply just fine to autonomous weapons systems, right? For tech law folks, this is basically Easterbrook's "Law of the Horse" argument: we don't need a special new law for autonomous weapons systems.

There's another camp that says, actually, we do; we need to ban them entirely. The Campaign to Stop Killer Robots is one of the foremost proponents of this position, which says the law of armed conflict doesn't prohibit these, and it should, and we need to create a prohibition. And then somewhere in the middle are people like me who say, well, yeah, we've got existing law, and that covers some of the things we're concerned about here, but these systems raise new problems, or they make certain old ones newly salient, and the law is ambiguous in problematic ways.

And so let's think through: what exactly do we want to [00:13:00] use these systems for? When are they going to be beneficial, and when do we want to restrain their use?

[00:13:06] Gus Herwitz: So, obviously, we're both lawyers, and the answer to this question is that there is no right way, but what's the right way to think about this?

Should we be trying to fit the new technology into the existing framework and say, well, these are the current words, and this is how they would apply to this new technology? Or should we be thinking differently about how this new technology will change the nature of armed conflict and the goals of the law of armed conflict, in order to better achieve whatever those goals may be?

Should we be taking a textualist or a purposivist approach to LOAC and these technologies? Should we be saying these are just weapons platforms, it doesn't matter that they are some autonomous [00:14:00] thing, or does the nature of the autonomous weapons platform change how we need to think about it in the context of war?

[00:14:08] Rebecca Crootof: So I think this is just such a hard question for me to answer, because as soon as I start answering, I find myself disagreeing with myself, and I think that highlights the complexity of these systems' relationship with the law. And part of the problem here, right, you mentioned it at the beginning, part of the problem here is that we have analogies that work for these systems right now, but it's easy to see how the analogies that we have and that we use aren't going to work for these systems in the possibly very near future. And so often, right, when old law is confronted with new technologies, we use analogies to stretch that old law to cover the new situation.

And [00:15:00] it's not really clear that that's going to address all of the problems that we can see on the horizon with autonomous weapons systems. So, to the extent you think of autonomous weapon systems, you can think of them as, I think you mentioned, what was it, RoboCop, right?

You can also think, if you're a little bit more optimistic, of Lieutenant Commander Data, my personal favorite fictional autonomous weapon system. And we grab analogies to try and make sense and come up with narratives, and that helps us figure out that automobiles are like horseless carriages, or that autonomous vehicles might be driverless cars.

And the use of these analogies can highlight potential dangers, potential risks. They can advance regulatory narratives by making us focus on one thing or another. And analogical reasoning is crucial for stretching existing [00:16:00] law. But the thing with all of the analogies that we have for autonomous weapons systems is that the choice has legal implications; the choice will pre-decide certain legal questions, and none of the options are actually good. So all of the options are lacking in one way or another.

[00:16:18] Gus Herwitz: So once again, you're blowing my mind, because Lieutenant Commander Data, you're exactly right, is an autonomous weapon system, insofar as he's an autonomous system that can make decisions to harm people.

And for that matter, I get to cross this off on my bingo card: the trolley problem. Any time we're talking about driverless cars or any autonomous system, if it can make a decision that will intentionally put a human being at harm in order to maximize or optimize for some other situation...

A car is a weapon. [00:17:00] So is an autonomous car an autonomous weapon system? I think most people would say no, that doesn't count; we're talking about robotic dogs and drones with guns attached or that drop bombs. But really, this definition problem, you're right that that's a doozy.

[00:17:22] Rebecca Crootof: One of the fundamental tech law questions: do we have a good analogy?

[00:17:27] Gus Herwitz: So why do people always look for these analogies? Why do we rely on them so much?

[00:17:36] Rebecca Crootof: So they help us make sense of things. They help us fit this new thing into existing categories, and that can answer some questions. So two of the most common analogies for autonomous weapons systems are soldiers, combatants, and weapons, like dumb weapons, like a gun, right? And [00:18:00] both of these analogies work in certain situations, and both of them don't work in others, because unlike other weapon systems, this one has the capacity for independent action that isn't a malfunction. Right, so guns can go off when they're not expected to, and other things, but that's different from everything working as it's supposed to and the system still doing something unpredictable.

And unlike human beings, you can train them on the front end, but you can't really deter them through a threat of punishment. Right? And you can't use the other mechanisms that we have for controlling or directing human action with autonomous weapon systems. So a lot of the legal rules that we have around human beings don't translate well to autonomous weapons systems. You can't hold an autonomous weapon system individually criminally liable for a war [00:19:00] crime, right? It can't form the mens rea, the necessary intent, but it can do something that ends up looking like a war crime.

And so you're left with what a lot of people are concerned about, what is often referred to as an accountability gap. You can't hold a sword liable for a war crime; you can't hold an autonomous weapon system liable. And yet they have the capacity to take action that, you know, a sword doesn't.

[00:19:29] Gus Herwitz: So are these analogies a useful way of thinking about this? In effect, I'm asking you to think like a first-year law professor and discuss the role of analogical reasoning. I'm thinking about legal rules, but specifically in the context of autonomous weapons: is this a useful way of thinking about these issues?

[00:19:53] Rebecca Crootof: I mean, I think it's definitely useful for thinking about these issues, as long as you don't get trapped in the analogy, [00:20:00] right? As long as you recognize that the analogy that's right in one context might not be the right analogy all the time, and as soon as you start changing some of the facts, right, the analogy might not hold anymore. So, like I said, I think most autonomous weapons systems right now can be fairly analogized to other weapon systems, and so rules governing the use of weapon systems can usefully be applied to them. We have rules regarding legal reviews, requirements for new weapons, that I think autonomous weapons systems should be subject to.

And so it's useful to analogize them to weapons when thinking about what kind of review is required. But if you start thinking about other legal questions, I'll just name one that is problematic: the question of distinction in armed conflict. There's an obligation on [00:21:00] commanders engaging in an attack to distinguish between lawful and unlawful targets.

Lawful targets being enemy combatants and military objectives; unlawful targets being civilians, civilian objects, wounded or surrendering soldiers. And if an autonomous weapon system is analogized to a weapon, it can be used in a way that comports with the distinction requirement by a human operator. If it's analogized to a human, well, a lot of the ones that we have right now are not capable of distinguishing between active combatants and wounded combatants, right, and therefore they would be inherently in violation of this requirement. And so when you say, can autonomous weapons systems comply with the distinction requirement, your choice of analogy answers that question.

And so you've just gotta be upfront about acknowledging that.

[00:21:57] Gus Herwitz: And that ties back to that idea of [00:22:00] the accountability gap. Human operators aren't perfect either. Human operators will make mistakes on distinction or proportionality, all along these lines, but when they do, there can be accountability, there can be consequences, there can be investigations.

And I guess if an autonomous weapons platform makes a mistake, there might be an investigation, but the level and nature of accountability is fundamentally different.

[00:22:26] Rebecca Crootof: Yeah, and this is actually one of the reasons I'm fascinated by autonomous weapons systems: to me they highlight one of the major accountability gaps in the law of armed conflict, which has nothing to do with the type of weapon system being used. It's the fact that there is no accountability regime for accidents. And I mean, you're a torts professor, I'm a torts professor. Can you think of a single domestic legal regime where "I'm sorry, it was an accident" is an absolute defense [00:23:00] to causing harm?

Right? If you get in a car accident, you can't say, oh, I'm sorry, that was an accident, therefore I'm not liable. And autonomous weapons systems, with their capacity for independent action that's unanticipated, action that doesn't fall into this category of accountability that we think about with international criminal liability, highlight how little accountability there is under the law of armed conflict for accidents. And so I find that absolutely fascinating.

[00:23:36] Gus Herwitz: Okay, well, we are speaking with Rebecca Crootof about one of the scariest topics I think we've discussed on the podcast to date, autonomous weapons systems.

Uh, we will be back with some more scary discussion in just a moment.

[00:23:55] Lysandra Marquez: Hi listeners. I'm Lysandra Marquez.

[00:23:58] Elsbeth Magilton: And I'm Elsbeth Magilton, and [00:24:00] we're the producers of Tech Refactored.

[00:24:02] Lysandra Marquez: We hope you're enjoying this episode of our show. One of our favorite things about being producers of Tech Refactored is coming up with episode ideas and meeting all our amazing guests. We especially love it when we get audience suggestions.

[00:24:14] Elsbeth Magilton: Do you have an idea for Tech Refactored? Is there some thorny tech issue you'd love to hear us break down? Visit our website or tweet us at UNL underscore NGTC to submit your ideas to the show.

[00:24:27] Lysandra Marquez: And don't forget, the best way to help us continue making content like this episode is word of mouth. So ask your friends if they have an idea too.

Now, back to this episode of Tech Refactored.

[00:24:46] Gus Herwitz: And we are back with Rebecca Crootof, discussing autonomous weapons systems. I want to come back and talk a bit more about analogies, but before we do that: we've [00:25:00] been invoking science fiction, whether or not that's a useful analogy, and I'm going to continue to do that, because I think our discussion has evolved a little bit.

I'd like to ask about the purpose of autonomous weapons systems, just as a greenfield question. Are they good for or bad for armed conflict? And I'm thinking, of course, of science fiction. There are stories you can find, from the original series of Star Trek to Doctor Who to Stargate, where you've got two armies with autonomous weapons platforms, and they're in this endless war, just going back and forth, competing in developing their autonomous weapons platforms, and it's robot killing robot with all of the humans hiding underground in perpetuity.

So with that as a framing, [00:26:00] what's the promise of autonomous weapons? Is that the dystopian future, or is that a good thing? You can imagine an argument in favor of autonomous weapons: we're not putting human life on the line when we have the autonomous platforms out fighting each other.

But does that create a literal arms race, where the result is going to be more and bigger and more complex and more dangerous weapon systems?

[00:26:28] Rebecca Crootof: So with autonomous weapons systems, I think it's easily suggested by sci-fi, right, to jump to a world of robot battles. And you know, in some version, that's not the worst world, if we can just sort of name our champions and have them duke it out. It's actually a callback to early laws of war, where armies would actually do that, sort of David and Goliath style. But I don't see that future [00:27:00] as being very likely, certainly not in any kind of near-term way. What we're seeing with autonomous weapons systems, or weapon systems with increasing levels of autonomy, is that they're being used in conjunction with humans, that they're a force multiplier, and that they're often one-sided, right?

That they're often being used more on one side of a conflict than on both sides against each other. And so in that context, there's a whole lot of different empirical questions that we just don't have answers to, and oftentimes people's answers to them just betray whether they're a tech optimist or a tech pessimist, right?

So, are autonomous weapons systems going to be better or worse at minimizing civilian harm? We don't actually have an answer to that, and it's going to depend on the context and the use case. Right. And if they are better at [00:28:00] minimizing civilian harm, or being used with more precision, are they going to be used more often?

And if they're used more often, does that increase the likelihood of accidental harm? And so might we have a situation where each individual engagement is more precise, but still has some risk of harm, and there are so many more engagements that we create a more pervasive conflict, or an ongoing conflict, or greater net harm?

Right? There's this question of, will they make it easier to go to war? If one side is not putting as many human lives on the line, does that destroy democratic peace theory? Because, right, democracies will be more willing to engage in war. And at what point is more willing too willing, right?

And that's a policy question that people disagree about in any context. Right. And so each individual use might be better, but we might, [00:29:00] counterintuitively, end up in more war and more net death.

[00:29:04] Gus Herwitz: So you hit on exactly what was animating my question there. The grand theory, in many ways, of the law of armed conflict and international humanitarian law is: how do we de-escalate and avoid future conflicts?

So the bar for identifying when we actually are in an armed conflict that justifies the use of force is pretty high, and for a lot of stuff that people want to call an act of war, the law says no, it isn't, in order to de-escalate and minimize the likelihood of conflict. And one of my concerns, and the question I was just asking, was really about: are these platforms likely to get us into more of these armed conflicts? In which case that will affect, or should affect, how the law thinks about them.

[00:29:56] Rebecca Crootof: Yeah. And you know, you have these questions, [00:30:00] particularly, I think, less with autonomous weapon systems and more with AI decision assistants and early warning systems. You have the ongoing question of how much do we trust them, how much do we trust their recommendations, especially when we know they're being trained on past data, right? They're not actually analyzing a given situation. And so there's always, I think, this concern about conflict escalation when they're used in the wrong way, or when they're misunderstood, or when it's too easy for people to defer to their conclusions and predictions.

[00:30:42] Gus Herwitz: Mm-hmm. I guess that's a question I hadn't thought about, and it echoes our discussion from a few moments ago about errors and mistakes. Do you want to make a prediction about how a mistake by an [00:31:00] autonomous weapons system, one that, if it had been something a human actor had done, might justify a response, would be interpreted under the law of armed conflict?

[00:31:16] Rebecca Crootof: Anything an autonomous weapon system could do, a human could do. How it's going to be interpreted on the other side is really up to the other side, right? Most of the law of war is interested in an effects-based analysis, in evaluating what you can do in response. But what we're getting into right now is less the middle of an armed conflict and more what can start an armed conflict, right?

And I think that is what I'm referring to when I talk about being concerned about early warning systems and AI decision assistants. My concern is that they might have the unintentional effect of risking conflict escalation. [00:32:00] It's easy to imagine scenarios, and I have my students in my tech threats class go through a scenario, where an early warning system combines monitoring social media accounts of a foreign country with monitoring troop movements and blood drives and all sorts of other indicators of preparation for conflict, and it raises the threat level. At what point do you decide to respond to that?

And at what point do you feel justified in responding to that, when that's all information that you wouldn't necessarily have had before? How do you evaluate the accuracy of that prediction of risk?
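As a rough illustration of the kind of indicator aggregation such an early warning system might perform, here is a hypothetical Python sketch. The indicator names, weights, and alert threshold are invented for the example and not drawn from any real system; the point is only that the score is mechanical while the decision to act on it is not.

# Hypothetical sketch of an early-warning threat score built from the kinds of
# indicators described above (social media activity, troop movements, blood
# drives). The indicators, weights, and threshold are invented for illustration.
INDICATOR_WEIGHTS = {
    "social_media_mobilization": 0.2,
    "troop_movements": 0.5,
    "blood_drives": 0.1,
    "logistics_buildup": 0.2,
}
ALERT_THRESHOLD = 0.6

def threat_level(observations):
    """Combine normalized indicator readings (0.0-1.0) into a single score."""
    return sum(INDICATOR_WEIGHTS[name] * observations.get(name, 0.0)
               for name in INDICATOR_WEIGHTS)

obs = {"social_media_mobilization": 0.9, "troop_movements": 0.7, "blood_drives": 1.0}
score = threat_level(obs)
print(f"threat score: {score:.2f}", "ALERT" if score >= ALERT_THRESHOLD else "monitor")
# The hard question in the conversation isn't this arithmetic: it's at what score
# a human decision-maker feels justified in responding, and how to evaluate the
# accuracy of a prediction like this one.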

[00:32:45] Gus Herwitz: Well, I do want to go back to the discussion of analogies and ask: are there useful analogies that are worth highlighting in this space? [00:33:00]

[00:33:00] Rebecca Crootof: I mean, I think so, because I wrote a paper saying so.

[00:33:02] Gus Herwitz: So what are they, and how and why are they useful?

[00:33:07] Rebecca Crootof: Yeah. So I think, you know, we're talking about analogies to soldiers or to weapons. I said in my paper on the limits of analogy, maybe we're just not thinking quite imaginatively enough.

Like, what other entities are out there that are capable of autonomous and unpredictable harmful actions, but that you can't really hold criminally liable? And immediately, as both a parent and a cat owner, the answers were obvious: children and pets.

And both of these are entities in armed conflict, right? There are child soldiers out there that are capable of independent, harmful action and that are not held criminally liable in the same way as adult soldiers. And there is a long history of animal combatants: elephants and horses and dogs [00:34:00] and pigeons and dolphins, all sorts of different animals have been used in armed conflict.

But the interesting thing there is that while I think both of those are much better analogies for autonomous weapons systems, neither of them gives us useful legal guidance. So, child soldiers are banned. They're banned to protect children from the horrors of war, but even the most enthusiastic member of the Campaign to Stop Killer Robots is not going to argue that we need to ban robot soldiers to protect robots, right? And so the underlying reasons for the law don't translate with the analogy there.

And then for animal combatants, there's no law. There's one reference in one convention that says it's forbidden to hide booby traps in animal carcasses, and that's the extent of what we have around a law of animal combatants. So we actually have a great analogy here, [00:35:00] and if we have time, I've got a good story about the Soviet war dogs, but we don't have any law. And so that highlights, I think, that you can find a much better analogy than the ones you have, but that doesn't mean you've got a legal solution.

And then when you step back and you say, okay, so we've got this panoply of analogies, some of which are useful and some of which give you conflicting answers, maybe you're gonna pick your favorite analogy based on the answer you want to get to. But if the goal here is to have accurate law, it's also worth recognizing that all of these analogies are misleading, because all of these analogies are embodied individuals, and it's entirely possible that you are gonna have a cyber autonomous weapon system, right? A completely disembodied weapon system that is capable of independently selecting and engaging targets based on programmed [00:36:00] constraints and gathered data.

Or you might have a whole networked system of different entities, a combination of sensors and actors that are all networked together, and it's actually the total networked system that would be the autonomous weapon system, rather than any one individual piece in that system. And so all of these analogies fail to encompass those other manifestations.

So I think it's always important with analogies to think, one, about their faults, like when they're not gonna hold up in new legal situations, and two, about how they constrain your thinking and constrain your imagination of the problem.

[00:36:40] Gus Herwitz: So, speaking of potentially unhelpful analogies, but since both you and I are torts professors speaking about animals: of course, in tort law and strict liability, there are rules for ultrahazardous or unusual animals, dangerous wild animals versus more common animals.

So if [00:37:00] you have a tiger as a pet, you're more likely to be held liable if it injures someone in your neighborhood than if you have a dog. So perhaps there is, outside of the area of war, some analogy to draw there. But speaking of animals, we do have time, and I will make time, for you to cry havoc and let slip stories of the dogs of war.

[00:37:27] Rebecca Crootof: Okay. So, I feel like I've gotta start with a warning for everybody, including myself, who deeply loves dogs, because, I'll just say it, the story does not end well for the dogs. But the Soviets were training dogs to sort of become kamikaze warriors. They were attaching these bombs to their backs and training them to run underneath tanks.

And when they did so, there would be a hook on the bomb that would catch on the bottom of the tank [00:38:00] and set the whole thing off, and it would have the whole thing explode, right, and destroy the tank, and of course the animal. And they trained these dogs extensively. And the first time they used them in the field, they brought them out and said, go forth. Right? And, I'm gonna not quite get this right, Soviet tanks used either diesel or gasoline and the other side's tanks used whatever the opposite was, and the dogs that had been trained on tanks that used diesel or gasoline, whichever it was, turned and ran under the tanks that smelled right to them and blew up the Soviet tanks.

To me it is exactly an analogy for the fact that these systems don't necessarily have the common sense that we expect of them. We're gonna train [00:39:00] them on data, and they are going to perform according to that training. We can't expect them to know the purpose of the training, and that is always a huge risk of accidents here.

[00:39:13] Gus Herwitz: Mm-hmm. As a dog lover myself, yes, that is a horrible and sad story. Bad on the Soviets in many, many ways, but also a nice demonstration of the use of analogy there. The last thing I have in my notes, and I honestly have no idea what it means, is just "war torts" in all uppercase letters to ask you about.

So I'm just gonna ask you the war torts question.

[00:39:54] Rebecca Crootof: So you were walking towards it all on your own there when you were talking about ultrahazardous [00:40:00] activities. I look at the accountability mechanisms that we've got for when things go wrong in the law of war. We've got individual criminal liability, right?

We can argue for a while about how effective that is, and how useful the International Criminal Court is, but theoretically we've got individual criminal liability for war crimes. So when something is done intentionally, someone can be held individually criminally liable.

We've got state responsibility for internationally wrongful acts; for violations of the law of armed conflict, the state can be held collectively responsible. And what we don't have at all, as I alluded to, is any accountability mechanism for when the law isn't violated and people are hurt, people who weren't supposed to be hurt. And this includes accidents.

Right? [00:41:00] This includes collateral damage, anticipated civilian harm. This is the only legal regime that says it is possible to know that an act you're going to engage in will result in unlawful targets, civilians, being killed, but as long as you're targeting a lawful target, and as long as the benefit of that attack outweighs

the civilian harm, it's a lawful attack, right? Then that's not an internationally wrongful act, and it's not a war crime, and there is no counter-pressure, right? There is no weight behind that civilian harm. And so what I'm arguing, and I think autonomous weapons systems are a great entryway to this because you can analogize them to ultrahazardous activities, is that states should be strictly liable [00:42:00] for their harmful acts in armed conflict. And you can start small, start with autonomous weapons systems, and say any time an autonomous weapon system causes harm, the state should be responsible for compensating the victims of that harm, and apply all the same arguments that we do in domestic tort law for ultrahazardous activities.

And then people say, oh, isn't this a slippery slope? Why should this be limited to autonomous weapons systems? And, well, the analogy's easy, but I don't think it should be limited. I think that just as in domestic law we have tort law and criminal law, and they have different purposes and serve different aims, there is utility to having war torts and war crimes. Sometimes they'll cover the same conduct, sometimes that Venn diagram will overlap a little bit, and sometimes it won't. But I think autonomous weapons systems force us to see [00:43:00] this accountability gap at the heart of the law of armed conflict and hopefully address it.

[00:43:07] Gus Herwitz: Yeah, I love it. As you were starting to speak, I was thinking, well, isn't the answer strict liability? So for listeners, there's an area of tort law called strict liability, or products liability. We know that if you manufacture and sell power tools, those power tools are going to injure and probably kill people.

But power tools are something that we value, and as a society we want to be able to go to Home Depot and buy a table saw. So what do we do about it? Well, this area of law basically says that, broadly speaking, if you're the manufacturer of a product that injures someone, you are insuring your users against those harms.

So if I get injured by my table saw, for a wide range of reasons that don't involve any negligence or [00:44:00] misdeed on the part of the manufacturer, they just need to compensate me for it. No question, we're not gonna go to court; well, we might go to court, but it's called strict liability. And that seems very much in line with what you're saying and where you went.

So, a very interesting argument. I guess my question would be: does that create perverse incentives that might, again, defy the proportionality principle? Are we effectively putting a price on acts of war, in a way that states that can afford to just write a check might say, hey, we're willing to take these risks with these systems and develop them in this problematic way?

[00:44:44] Rebecca Crootof: Yeah, I mean, I think that's a huge concern. Right now, I'm even more concerned about the fact that there is no price, right? And so, yes, the concept of pricing atrocities, and somehow saying, [00:45:00] oh, can a state just sidestep any kind of moral blameworthiness by compensating victims? To the extent compensation becomes a substitute for that blameworthiness, that would be a huge problem.

But right now we don't have either. And so I think the first step is we need to put a price on some of the things that are happening, and then we'll retain the existing law of war that makes certain things violations and other things war crimes, and retain that moral blameworthiness.

And I mean, as any company in the United States subject to strict liability knows, it's not enough just to pay to get out of things. You suffer reputational harms, you suffer other kinds of harms. So it's not just a pass to be able to compensate. I [00:46:00] think it's more of the least you can do.

[00:46:03] Gus Herwitz: Well, you've certainly given us a whole lot to think about, and certainly some of it is more scary and macabre than other parts of the conversation, but that's the nature of the discussion. And at the same time, I am never going to look at Lieutenant Commander Data the same way again.

So, Brent Spiner,

[00:46:24] Rebecca Crootof: I'm gonna ruin R2D2 for you as well, actually.

[00:46:27] Gus Herwitz: Brent Spiner or R2D2, if you ever want to be on the show, please feel free to send us an email and we'll gladly have either of you as guests. And for that matter, listeners, if you have topics that you'd like us to explore on the show, or guests that you would like to recommend for the show, please feel free to shoot us an email, or follow us and communicate with us on Twitter at UNL underscore NGTC. I've been your host, Gus Herwitz. Thank you for joining us on this episode of Tech [00:47:00] Refactored.

And of course, if you enjoy the show, don't forget to leave us a rating and review wherever you listen to your podcasts. Our show is produced by Elsbeth Magilton and Lysandra Marquez, and Colin McCarthy created and recorded our theme music. This podcast is part of the Menard Governance and Technology Programming Series.

Until next time, seriously, no dogs were injured in the making of this episode.[00:48:00]