Tech Refactored

S2E42 - Self Driving Cars: Implementation, Hazards, and the Law

May 27, 2022 Nebraska Governance and Technology Center Season 2 Episode 42

The episode you're about to hear is being hosted by two of our student fellows. Our Student Fellows are a diverse and interdisciplinary group, representing colleges and specializations across the University of Nebraska. Our fellows, Josh Lee and Garrett Wirka, invited Professor William H. Widen onto the show to explore the topic of self-driving cars: implementation, hazards, and the law. Widen is a Professor of Law at the University of Miami School of Law, where he currently teaches commercial law, contracts, and other business subjects. His research has focused on autonomous vehicles and how this technology is affecting society.

Disclaimer: This transcript is auto-generated and has not been thoroughly reviewed for completeness or accuracy.

[00:00:00] Elsbeth Magilton: This is Tech Refactored. I'm one of your regular guest hosts, Elsbeth Magilton, the Executive Director of the Nebraska Governance and Technology Center at the University of Nebraska. The episode you're about to hear is being hosted by two of our student fellows. Our student fellows are a diverse and interdisciplinary group representing colleges and specializations across the University of Nebraska.

The goal of the Student Fellows Initiative is to familiarize students with the nuances of working with professionals from other academic backgrounds, incorporating diverse perspectives and vocabularies in order to better inform their own work. This semester, we challenged them to produce an episode of [00:01:00] Tech Refactored on a subject of their choosing.

We hope you enjoy this special episode of Tech Refactored, hosted and produced by our student fellows, Josh and Garrett.

[00:01:15] Josh Lee: Hi, my name is Josh Lee. I'm a third year law student at the University of Nebraska College of Law. So I graduate in May 2022 with my JD and a concentration in real estate law. I'll turn it over, uh, to my partner, Garrett, for his introduction.

[00:01:33] Garrett Wirka: Hi, my name is Garrett Wirka. I'm a PhD student in the University of Nebraska-Lincoln School of Computing, and I'm primarily studying applications of AI and machine learning in medicine, and interpretable machine learning methods.

[00:01:42] Josh Lee: Today, uh, we're joined by Professor William H. Widen. Professor Widen is, uh, a professor of law at the University of Miami School of Law. He graduated from Harvard Law School in 1983, where he was an editor of the Harvard Law Review. [00:02:00] Prior to joining the University of Miami, Professor Widen was a partner at Cravath, Swaine & Moore in New York.

His practice areas include structured finance and secured lending. Professor Widen currently teaches commercial law, contracts, and other business subjects. His research has focused on autonomous vehicles and how this technology is affecting society.

[00:02:21] Garrett Wirka: So welcome, Professor Widen. Thank you for joining us. 

[00:02:24] William H. Widen: Well, thank you very much for the invitation.

[00:02:28] Josh Lee: So professor, first, um, if you could please elaborate a little bit on your history, uh, with autonomous vehicles and how you got into that particular area. 

[00:02:36] William H. Widen: It was really recent and fairly accidental. I started investigating the law and regulation of autonomous vehicles last June, when the chair of the, uh, Department of Computing, actually at the University of Nebraska, asked me to consider possible content for an ethics module for engineering students, and I [00:03:00] had advised that you should use a practical, modern problem rather than a syllabus that studied old classic texts, like things from Plato and Aristotle and Kant. And so I wanted to use an actual ethical question that revolved around technology.

And I considered autonomous vehicles as well as other areas, like technology, uh, impacting privacy, and that's what got me started. I ended up selecting autonomous vehicles as the, uh, narrow, or narrower, area that I would focus on.

[00:03:35] Josh Lee: You mentioned autonomous vehicles. Um, generally speaking, when should a vehicle be considered autonomous?

[00:03:42] William H. Widen: Well, there, there's a lot of technical definitions. Okay. I think the labels matter less than the capability, but the technical definition is really a distinction between what's called an ADAS, or an advanced driver assistance [00:04:00] system, and an ADS, which is an automated driving system. The former is just assistance, right?

And the latter can perform what people tend to call the, the complete dynamic driving task, including object and event detection and response. The former, which isn't really an autonomous vehicle even though it has autonomous features, might extend to longitudinal and lateral movement and adjustments.

Things like lane keeping, automatic braking, adaptive cruise control, and a true automated driving system can perform these functions and more. And I think that, you know, the practical definition for me is that when you're performing those functions and you're also able to negotiate intersections and turns at intersections, that's when I think you [00:05:00] really ought to be most interested in regulating.

When you have an advanced driver assistance system, there's always a driver who's ready and able to take over. With a true automated driving system, the theory is that the driver can take a nap in the back seat when the vehicle is operating within its operational design domain. And anytime you're just using driver assistance, that's not the case.

[00:05:29] Garrett Wirka: So when the label autonomous is used for vehicles now for the purpose of advertising or, or legislation, do you think that's kind of a misnomer and other terminology should be used? How do you see autonomous being used to describe current vehicles on the road?

[00:05:41] William H. Widen: Well, the, the problem is, you have to understand that there's, uh, a vocabulary or a taxonomy put out by the Society of Automotive Engineers, the SAE.

It's called J3016, and what it does is [00:06:00] describe six levels of autonomy capability, starting at zero, which is nothing, and going up to five, which would be an autonomous vehicle that can do all of the dynamic driving task on any kind of road or highway.

Okay. Zero obviously is no autonomy. One is a vehicle that gets lateral or longitudinal assistance, but level two is one that gets both. And it's at level three where you start to get a vehicle that within certain parameters can drive itself, i.e., when you could take a nap. Okay. And what has tended to happen in the law,

if a state decides that they want regulation, what they tend to do is either in their law or their regulation refer to that taxonomy. They'll actually refer to SAE J3016. [00:07:00] And what they will say is that they want to impose regulations and requirements on vehicles that are considered level three, level four, or level five.

And so they're really incorporating these engineering definitions and using them to indicate at what level autonomous vehicle regulation will start.

[00:07:23] Josh Lee: So tying that in, because it's legislatures, people that aren't necessarily technically proficient, passing these laws and setting these standards, how do these legislatures, how can they obtain this information?

Are they just relying solely on what the manufacturers are putting out, or is there outside research or other research that they're relying on when they're defining these?

[00:07:46] William H. Widen: No, basically they rely on industry. The phenomenon that is happening, frankly, is that certain states want to be very business friendly, [00:08:00] and so they allow the autonomous vehicle industry to operate in their state with virtually no supervision or regulation at all. Okay. Texas and Florida, I think, are two examples that are fairly lax that way. Then there's other jurisdictions that want to have meaningful regulation. Examples of that would be Pennsylvania and California.

Pennsylvania is just considering a law. California already has a law. The problem, right, is really twofold. The lobbying efforts by the companies are all directed to avoid meaningful third party regulation. Okay? And so what they want to do, what they try to do, is say, okay, we want to self-classify. Tesla is sort of a good example.

If you look at Tesla in California, Tesla claims that its vehicles are [00:09:00] classified as level two, which is just below the level that would trigger regulation and supervision by the California DOT. All right. In fact, Professor Koopman, Philip Koopman at Carnegie Mellon, and myself think that in fact those vehicles are really level four, and I'm talking here about a Tesla that has what they advertise as full self-driving, which is different than just their driver assistance.

You pay an extra fee to get the full self-driving, but what they're doing is what we call the level two loophole. Okay, and what that is, because they self-classify what their own vehicles are, as long as they continue to say that their vehicle is level two, then California, for example, has basically taken the company's word for it.

But if you look at what the, the vehicles [00:10:00] actually are doing, it seems like they are more capable. They're just not very good at it. Okay. And, and even though Tesla's manual says that you should have a driver in the position to operate the vehicle and take over at all times, their advertising is sort of in the other direction.

And a lot of people that own Teslas do things like sit in the back seat, read a book, play a video game, and then they post this on, on the internet, and so people can see them doing all of these sorts of things that violate what Tesla nominally says is the way the vehicle should be, should be treated. And so, because Tesla is self-classifying, as long as California takes their word for it, they're level two and they avoid an extra layer of regulation.

Another example, so it's not just Tesla: there's a company that's doing trucking called Embark, which apparently drove their [00:11:00] trucks around on the streets of Oakland without anybody in them, but also, uh, without, you know, running fully autonomously. And so they, they say that what that was was level two, or at least they would try to argue it was level two.

[00:11:17] Josh Lee: So that, so that's the legislature, and so they appear to be getting it more from the industry, but public trust seems to vary with, with regards to this. Apparently here in Nebraska, a survey was done to gauge public opinion on this technology, on autonomous vehicles, and 51.9% of people said that they either disagree or strongly disagree that they are comfortable sharing the road with a driverless vehicle.

And so referencing those responses, is the legislature, are these lawmaking bodies listening to these opinions from the public, or is it just primarily the [00:12:00] industry that they're getting information from?

[00:12:03] William H. Widen: Well, I think that it's primarily the industry. And the way the industry behaves, to me, it doesn't create trust.

It's the opposite of trust. The industry has a lobbying group called PAVE, P-A-V-E, uh, that tries to promote education about the benefits of autonomous vehicles, but it really just advertises them as a good thing and tries to tell you that they're safe, to get the public to understand that they shouldn't really need to be regulated.

And the companies say all the time that we want your trust, everything we do is for your trust. But how they act is really very different. One thing they're doing in Pennsylvania and elsewhere is they get legislators from rural counties [00:13:00] and places where they're not likely to do much testing to pass state legislation that will preempt local legislation, so that they don't have to worry that any local legislation would adversely impact their testing in an urban environment.

Okay. And that seems to me, for anyone in the urban area, to say, well, why would I trust that? I mean, it even takes away the ability of a locality to, to have regulation that addresses particularly local conditions. So that's one thing that makes it look like, in this regulation process, there's no trust.

Another thing is that when, when a, a law or regulation has a minimum insurance level, those insurance levels are generally too low. I think the government, the federal government, has published a statistic that would suggest that a human life is worth about 11 million dollars. In some [00:14:00] cases, the insurance level is 5 million.

In other places it's 1 million, and in some places it's less. And so the industry seems very intent on limiting their liability for loss at the same time that they tell you that there really shouldn't be any losses.

And they also, and I point this out in one of my early articles on the topic, they also will not tell you what their standard for deploying autonomous vehicles at scale happens to be, right? You have to understand there's sort of two areas. There's how does the testing activity look, and then there's how does it look when they're gonna deploy the vehicles generally. Okay. Another thing that they do that is not trustworthy is they want to now test the vehicles without having a safety driver in the vehicle who's able to intervene in the event of a problem.[00:15:00]

And that's done not for safety. It's done for marketing, because it's sort of like, look, mom, no hands, right? And the problem with that is that if you have a vehicle that's operating without a driver and the system fails, you're going to have a crash. If you have a vehicle that's operating the exact same system, but there's a trained safety driver who can intervene,

you still learn that the system failed. It's just you don't get the consequence of the crash. And so from a scientific and testing standpoint, there's virtually no additional information that is gained from doing driverless testing. But the Wall Street and the market types think that that shows some confidence or some advancement, and so they do that too.

There's nothing trustworthy at all about the way that they're behaving. 

[00:15:58] Garrett Wirka: Sorry to cut you off there. Um, so from, from [00:16:00] your perspective, uh, if the industry did want to build public trust, how do you think they would, should go about doing that? 

[00:16:06] William H. Widen: One of the first things that I would do if they wanted to build trust is I would commit to an engineering standard that's called, uh, J3018, I believe, which talks about having safety drivers in all the vehicles during testing, and it describes aspects of, you know, training and so forth, about what the qualifications of a safety driver ought to be.

Because if you do that and, and you, you know, have a monitor there and they're trained, then I think the public could be reasonably confident that a crash would be avoided. Okay. Some people even test with two safety drivers, but at least I would test with one. The problem with the Uber crash in Arizona was that they had a safety driver, but the safety driver was watching a movie or [00:17:00] playing a video game on their phone, and then you had a fatal crash.

So the first thing would be to put safety drivers in for all testing. The second thing I would do would be to put in high insurance levels, to say that, look, if there is an accident, particularly if there's a loss of life, uh, we're gonna have a 10 million policy or a 15 million policy. In essence, put your money where your mouth is.

Right? Those would be two things that would go a long way. I think the third thing, it's a bit more controversial and it would require more thinking, but the legislature needs to put in new legislation that more clearly describes the legal levels of responsibility that an AV developer, whether they're a programming company or a manufacturer, or even what's called an upfitter, would have in the event that one of their vehicles gets in an accident.[00:18:00]

The industry, I don't think, wants to clarify the levels of liability, uh, because they don't want it to be clear. They want it to be a problem that has a transaction cost, which would require litigation, which would give them some settlement value. That's in fact what they're doing.

[00:18:20] Josh Lee: So building off of that concept of building trust with the public, it seems like for every thousand, or for thousands, of miles that are driven safely, they seem to be overshadowed by, like, the Uber crash or these grisly accidents that occur. How can the industry and legislatures as a whole address the fact that even if they do build this trust, one accident could cause it to completely fall apart?

[00:18:53] William H. Widen: Right. I mean, I think that what they need to do is be more [00:19:00] transparent about what the technology can and cannot do, and to develop the correct expectations among the public. What they currently do is try to sell the acceptability of, of AV technology by using a number of myths, right, and things that are just wrong, to try to sell the, the product and the safety of the product.

One thing they do is to try to create an environment in which our highways are considered a crisis. They point to some statistics that they misuse, that 94% of fatal crashes involve, uh, or are caused by human error. But that's not what the statistic says. You know, a human, you know, might be able to intervene or, or help avoid a crash in 94% of the cases.

But it's not that they cause it or that they're the [00:20:00] primary cause. And then what they try to do is they say, look, if we can eliminate things like drunk driving and texting and falling asleep, by definition we're gonna be safer than a human driver. But in fact, uh, while those problems won't come up, other problems that are related to their technology may come up, and they never talk about that.

Right? And like, for example, the, the visual systems that are used to identify objects and events, uh, make mistakes, and some of the mistakes are very surprising mistakes to a human vision system. Right? In one case there was a picture of a Buddhist temple and it was identified as a flamingo or something, right?

Stuff that's just, just, we can't understand, but that's the way that the neural network that has been trained happens to recognize a particular scene. And so what we don't know is [00:21:00] how many of these problems will come up, and what their statistical significance is, that are unique to the particular automated driving system.

Okay. And then they misuse statistics, like I think Tesla said, look, we, we have only 0.4 fatalities per hundred million miles driven, and the US average is 1.1. Isn't that great? Well, there's so many problems with that, I don't even know where to start. But, but the first is that they're not comparing apples to apples, right?

You have to look very fine-grained. What was the time of day? What was the road? What was the road condition? Many, many differences; even the geographic region makes a difference. So they're not really comparing apples to apples. And their comparison is for fatalities in new vehicles. But in fact, just the age of the vehicle, an eight year old car, [00:22:00] a seven year old car, a six year old car, is less safe than a brand new car.

And so some of the reduction, even if you had a statistic, is attributable to the age of the vehicle and has nothing to do with the technology. Um, you know, I could go on and on about things that they say that are just wrong.

[00:22:20] Garrett Wirka: To add one of my favorite anecdotes in support of your point there, I, I know you mentioned Dr. Phil Koopman before. Uh, I remember a presentation he gave about some of the analysis that he'd done of object detection systems, um, used by self-driving cars, that found that the cars, or that the, the object detection systems, really struggle to identify people wearing yellow as people.

[00:22:41] William H. Widen: Yeah, that's right. That's right. There, they had to reprogram. There was the lime safety vest, which people couldn't figure out why they weren't recognizing, the safety vest, but they had to reprogram for that. There's other issues, like at the time of [00:23:00] year, like, you know, if a, if a woman's legs are exposed and it's kind of a tan or a brown color, sometimes they were confusing them with bushes or trees. With Tesla, you've heard these problems where they've had trouble, apparently even with their lesser systems, recognizing emergency vehicles because of the flashing lights.

And so there's all of these unanticipated things that come up in hindsight, right, that we just don't have the statistical information to know about. But what they could do, and they don't wanna do, is to come up with what the engineers would call a safety case, where they do a very rigorous analysis and try to identify all these possible problems and what their response to that problem would be.

Okay. And then be honest about the fact that although they're doing their best, that in the early [00:24:00] stages of deployment, we may not know that these things are in fact, safer. Right. Um, and in fact, go ahead. 

[00:24:09] Garrett Wirka: Do you currently believe then that, Sorry, I, I don't like how I started that question there. I'm gonna try to restate that: do you believe that autonomous vehicles are safer than traditional human drivers? 

[00:24:20] William H. Widen: There's no way to know that right now, and I suspect that they're not. 

[00:24:26] Josh Lee: So applying that, then, what if autonomous vehicles were applied in a much more narrow concept? I know here in Lincoln they had pushed for a pilot program to test autonomous buses, or, uh, long-haul trucks, for example.

As another example, what if autonomous vehicles were applied to those much more narrow concepts rather than simply anybody and everybody can have one?

[00:24:55] William H. Widen: I think that that is, yeah, that's the [00:25:00] promising way that the industry, I think, would move forward best. Okay. And partly what that's doing is limiting the autonomy features by geography, which can be very, very well mapped and monitored.

I mean, in essence, what you're doing here is you're really trying to just replace the railroad tracks. Okay, in, in the olden days, you had the railroad. You knew basically, unless it derailed, where it was gonna go. The problem that we have with the highway system is you can sort of go anywhere and there's no constraints.

Okay. We have autonomous vehicles at the airports, when you have the, the trams that take people from, you know, the terminal to the parking and things like that. And so I think that's, that would be useful. I think, for example, having routes that went from major hotels and downtown areas to [00:26:00] the airport would be another one.

Okay. An actual company in Pittsburgh is called, uh, Locomation, and what they're trying to do is to have one truck autonomously follow another truck that is actually manned and crewed. Okay. And so they've, they view that as sort of human guidance for autonomy. And their slogan in fact is human-guided autonomy on the path to full autonomy.

Right? And so their business model, actually, I think is the kind of thing that could work in the relatively near term. The idea is that they do what's called platooning. There's two trucks. An empty truck follows a crewed truck. The crewed truck has two, uh, drivers. One drives, the other [00:27:00] sleeps, and then when a certain time has elapsed, they switch.

Right? Because there's limits on how many hours a driver can drive, uh, before they, they have to take rest and stop. Okay. So that means that you're able to drive basically 24 hours a day if you wanted to, and then when you get to your destination, because it's not a double tractor trailer, but it's actually two trucks, then

one of the drivers will get out and drive the truck to one destination, and the other driver can take the truck to, to, you know, a second destination. And that seems to me like it's achievable with sensors that, that have the, the following truck follow the lead truck. And so there's things like that that I think make a lot of sense.

What's really difficult is to get a full level five vehicle that can go on any highway or any road under any conditions and [00:28:00] function in a way that we think is even remotely acceptably safe.

[00:28:06] Garrett Wirka: Do you think that autonomy will be achievable in, in our time? Like, when will you feel comfortable taking a nap in the driver's seat of your car?

[00:28:14] William H. Widen: For me, I think I'm old enough that I might never become happy with that. Right. But I, I don't think we're gonna see level five systems, you know? Certainly, I'd be stunned if we had them in the next 10 or 15 years. Okay. I just don't see it. The technology is too, too rough. I mean, you know, you, you can't run these things in the rain.

They're not very good at night. There's just a lot of technical tasks, and even when you get one that works, it's gonna work in what would be called a limited ODD, or a limited operational design domain, which means it'll work in some places but [00:29:00] not others. When it does that, the vehicle has to know where it is and has to be able to sense the environment, and then it basically shuts down its autonomous function.

It brings itself to a minimal risk condition, and then the driver takes over. Okay. I can imagine a world not too distant with very limited ODDs, like, for example, if you had a path around a university campus, or maybe you had a route that took people to the airport. I could imagine that being more near term.

But the full autonomy I think is way off, uh, if ever, and part of that is a function of computing power, right? Because you need to have a, a vast amount of computing power to deal with, with all these systems and contingencies, and it's difficult, uh, to have that sort of work [00:30:00] onboard the vehicle. You're not really able to connect through a satellite to a mainframe or to do something like that.

It has to work with the energy consumption and the computing power all on the vehicle. 

[00:30:13] Josh Lee: So, uh, we'll be right back to discuss the implementation of autonomous vehicles in society.

[00:30:24] William H. Widen: Hi listeners. Thank you for tuning in. Interested in keeping up with the Nebraska Governance and Technology Center? Follow us at UNL underscore NGTC on Twitter, where we share the latest news and opportunities for faculty, students, and research. You can also subscribe to our monthly newsletter at ngtc.unl.edu/mailing list.

And now back to this episode of Tech Refactored.[00:31:00] 

[00:31:10] Garrett Wirka: Welcome back. We were discussing autonomous vehicles and their introduction into society. I wanted to ask you what you believe are the next technical steps in the evolution of autonomous vehicles. 

[00:31:21] William H. Widen: Well, I think, I think that to take those next technical steps, you have to do your testing with a backup safety driver.

Okay, and because we don't really know, in terms of categories of things that need to happen, I would say that the functions that the vehicles already perform, they need to perform much better and more accurately. And they need to do it in more varied operational design domains. In [00:32:00] other words, a lot of the testing currently takes place in places like Arizona and Texas, which have relatively mild weather and so forth and so on, right?

I think that the technical progress that needs to be made is to get the vehicles the capability to operate in increasingly hostile or difficult environments, things like rain, snow. I don't even know how they're gonna do that, because a lot of the lane keeping is done by looking at the lines on the road.

Lighting conditions are a problem. They're, they're not as accurate or helpful at night. And so as I see things, if I have a very clean, clear, straight interstate road, I could probably have a vehicle that, in that very limited ODD, we'd be pretty comfortable with, with existing technology. Okay. And I certainly think that the technology for [00:33:00] platooning, where there's a human leader in one truck and then a follower,

I think those things are achievable. But the problem is, we have to get testing in more varied environments to be confident that we can expand the ODD so that the vehicles can actually be usable. Right. A vehicle today that can't operate in the rain is not particularly helpful. Okay.

[00:33:26] Garrett Wirka: From a legal perspective, how will autonomous vehicles affect the rules of the road? Are, are the vehicles going to adapt to our current laws, or do you think laws are gonna have to adapt to autonomous vehicles?

[00:33:37] William H. Widen: I think that's an excellent and complicated question. Here's the issue, right? We have traffic laws on the books that, in practice, the state troopers and the police do not enforce to the letter of the law, okay?

They allow people to exceed speed limits. They sometimes [00:34:00] tolerate rolling stops, okay? There's a whole range of judgment calls that, that an officer makes about whether they're going to issue a ticket. That also occurs, say, with parking, if you're double parking and you're trying to unload something in a, in a city condition.

And so there's a, a practical problem for the programmers, right? Which is they, they'll be criticized if they don't program the autonomous vehicle to comply with the laws on the books. But the laws on the books are not the way the laws operate in practice. Okay. And so the legal system needs to decide with the, with the technical people how we're gonna specify the driving performance of the autonomous vehicle.

The reason for this, right, is that, that [00:35:00] if, if you put an autonomous vehicle on the road which is following the letter of the law, and it's driving with people who are driving under conditions with a lot of leeway, you create potentially dangerous and uncertain situations. For example, if I'm doing a, if I'm a human and I'm passing in a lane, I exceed the speed limit.

Okay. All right, well I do and I don't get a ticket for it and I pass and then I get back in my lane. If I had a, an autonomous vehicle that was limited precisely to the speed limit, they're gonna have real trouble ever executing a pass of another vehicle that's going slow. And so then what do you do?

Well, then you say you can't exceed the speed limit, so they're always in the right hand lane. And then they may block things there, depending on the speed of the traffic generally. And so the first thing is, and Tesla recently got in [00:36:00] trouble, people said, oh, they've programmed their vehicle to do rolling stops.

Well, they probably shouldn't have done that, or better, if they thought they needed to do it, they should have gone to the California DOT or whoever and said, look, we think it's safer if you can do some of these things on the margin. We would like to program it that way. We're telling you we're doing that.

If you don't think that makes sense, let's have a conversation. Instead, they program it to do a rolling stop. They don't tell anybody at the regulatory level. Then it comes out that they do rolling stops, and that violates the law. And violating the law is a condition that causes the company to lose trust.

And then you have a needless mess on your hands. Okay. And so we need that conversation about exactly how, you know, right turn on red, I mean, how are they doing that in special situations or for special jurisdictions? Right. In some cases [00:37:00] you can turn right on red. In other cases you can't. It's probably the case that the vehicles are not being programmed to deal with all of the local

traffic laws that might be applicable in an area, but that's the kind of discussion that they should have.

[00:37:20] Josh Lee: I, I, that's very, it's very, um, like during the process, during the manufacturing, all of those conversations seem to happen, but specifically I had a question regarding after the fact. So like when an autonomous vehicle gets into an accident, regarding the legal side of it, how do you see the concepts of liability, negligence, any other legal concepts applying, specifically if an autonomous vehicle were to ever get into an accident?

[00:37:49] William H. Widen: Okay, what I would do here is separate two different conditions. If we're in a testing environment and we have a [00:38:00] backup safety driver, the, the current mode of analysis seems to be that we blame the human safety driver for the accident and not the system, because the human safety driver was supposed to intervene to stop the accident.

In other words, the human safety driver was either negligent or reckless. Okay? In the Uber case, right, they didn't pursue liability, certainly not criminal liability, against Uber. When the NTSB did the investigation, they did find the safety driver responsible, and I believe that the safety driver in that case was being, is being, pursued for homicide, you know, manslaughter, reckless or negligent homicide, and so the [00:39:00] responsibility rests with the person.

That is the same idea that I think would currently apply if one of Tesla's vehicles, you know, when a Tesla vehicle is in a fatal accident, because Tesla's rules of the road say a person is always supposed to be at the wheel, able to take over. Okay. And there's an article written that we cite in some of our pieces where, where this is referred to as using a human as kind of a moral crumple zone, where everything falls back on the human and not on the technology.

The moment that you have a vehicle which is advertised as allowing you to read a book or take a nap in the backseat, then you as the owner may be an operator but not a driver. You're not negligent or reckless for not paying attention. In other words, the product is supposed to allow you to not pay [00:40:00] attention.

It's at that moment that you have a question about where civil and criminal liability should lie. Right? And I don't think we've really worked that out. In the case of a company that is testing a, a vehicle with no driver, okay, I think that that company during testing ought to be absolutely liable for any accidents that they cause.

But then the question is, why did the accident happen? Was it a defect of some sort, like a, just a physical failure? I mean, the vehicles are still gonna have accidents. Their brakes could fail. That has nothing to do with the autonomous system. Right? And so it will get very complicated to sort out from a causal standpoint, but that's really what, what the law needs to be clarified on, right?

And so, I personally think that if you had a regime of absolute or strict [00:41:00] liability for manufacturing and programming companies as well as upfitters, I would say that they ought to be jointly and severally strictly liable, and that a person, a plaintiff, ought to be able to recover compensation from any one of them, and if they get a judgment, I would leave it up to the companies to sort out any issues of relative fault amongst themselves.

I'd suggested that they do something like that in Pennsylvania when I filed something, posted something, and sent it to the legislature in January. Um, but the companies hate that. They, they don't want to clarify the rules about, um, liability, but it's a whole new regime and, and it needs to be rethought.

And I think the rethinking ought to happen in a statute and not via common law development through cases.

[00:41:54] Garrett Wirka: So as you've said, accidents are inevitable with self-driving vehicles, given their complexity and unforeseen consequences. You've noted in your research that the costs of autonomous vehicle technology fall disproportionately on the poor, because they're more likely to be pedestrians in vehicle-pedestrian accidents, or cyclists. Could you expand on that?

[00:42:17] William H. Widen: Right. I mean, the, I don't think people are paying sufficient attention to the collateral risks that are being put on poorer people, but it's also being put on urban people. And my current biggest concern is, frankly, it's in Pittsburgh, largely, where a lot of these companies are.

Pennsylvania, you know, Philly might have a similar problem. New York will have a problem. I don't, I worry that the companies are free to test disproportionately [00:43:00] in at-risk communities that have historically been treated poorly by the law, including in transportation. I would like it to be the case that a locality

could require a company to submit a testing plan that included geography, to make sure that we're not disproportionately testing in communities of people of limited means, or that have otherwise been historically discriminated against. And I think companies ought to do that. You know, what happens if you get a fatality in a poor community, and then the headline is gonna be, X, Y, Z

Autonomous Vehicle Company Targets Poor. Okay? And of course, there's legal liability reasons to do that. If you kill someone in a poor neighborhood, it's quite possible that they can't pursue their rights, or their estate won't pursue the rights, or that if they do pursue their rights, [00:44:00] because the earning power is less over a lifetime, the judgments will be smaller.

And I just, I just don't think that, that particularly our most vulnerable people should, should be at increased risk to develop a technology for the benefit of all of us. And in any event, if I were a company, I would want to file a plan with a city that says, here's where I'm testing, when I'm testing, why, and why I don't think it's discriminatory.

But they want the ability to, to operate without doing anything like that.

[00:44:35] Josh Lee: So we were, uh, focused on, like, potential remedies. So beyond that plan, uh, that you're proposing, why it's not discriminatory, what about access to this technology? I mean, it's, it's a new field. Obviously, Tesla is, is a big player in it.

Currently, with my student loans, I can't buy a Tesla. Um, but [00:45:00] access to this technology as a whole, do you see that being a big issue with regards to this disparity that you're mentioning?

[00:45:06] William H. Widen: Well, you have to, you have to be a little foresighted to see how this really develops. And I think that the more mature thinking on how AVs will be used differs from a private automobile owner.

Okay. A lot of people think, okay, everyone has a gas powered vehicle that is human driven in their garage. We're gonna evolve to a situation where everyone has an electric powered vehicle in their garage, which is capable of being fully autonomous. Right. I don't think that's the way autonomous vehicles are gonna go.

I think that what's more likely to happen is that within an urban area, autonomous vehicles will operate kind of like a [00:46:00] taxi fleet in New York, where these vehicles will be kind of roaming around, and you call one to come get you, and then it takes you where you're going. And so I think that there, there's a potential in that kind of model to actually benefit people of limited means, assuming that the fees were not

crazy for paying for it. Right. But, but that they would, they would then be able to call these vehicles to then take them where they want to go, and that the ownership of actual vehicles would be reduced. Okay. And if that's the case, then the individual doesn't have to buy a Tesla. The Tesla would be part of a transportation company that owns and operates these vehicles as part of a fleet.

That's, that's how I think ultimately the technology will develop. There will be people that will own their vehicles, I'm sure, for quite some time. But the end [00:47:00] game, I think, is to eliminate or severely limit the private ownership of motor vehicles.

[00:47:08] Garrett Wirka: All right. Well, I think that's all the questions we've got for you for now.

So thank you so much for your time, Professor Widen.

[00:47:11] William H. Widen: Uh, you're most welcome. It's a very interesting area, and it's an area that I think requires a lot more thought. And if I could close with one thing, I just would say, I think the companies are operating against their own best interest by having a fairly hostile view towards regulation.

I think if they were to, to engage with the regulators and be a little more honest and straightforward, and actually trust the public and the legislature to not do crazy things, you would have a much better, you'd have a safer environment and you'd have an environment in which the AV industry would better thrive,

even in the face of inevitable accidents, including fatal accidents, which are bound [00:48:00] to occur, right? We have this idea of zero fatalities, which is one of the goals, I think, even of the federal government. They talk about Goal Zero. I don't know that you ever get there. You might approximate it, but you might get there in the sense that you don't have accidents from, from human error, but you are still gonna have equipment failures and other things.

So I really wish they would change their posture.

[00:48:26] Josh Lee: Awesome. Thank you again for your time and for joining us today.

[00:48:31] William H. Widen: You're most welcome. 

[00:48:34] Elsbeth Magilton: Thank you for joining our Student Fellows on this episode of Tech Refactored. If you want to learn more about what we're doing here at NGTC or submit an idea for a future episode, you can go to our website at ngtc.unl.edu, or you can follow us on Twitter at UNL underscore NGTC.

If you enjoyed this show, don't forget to leave us a rating and review wherever you listen to podcasts. [00:49:00] Our show was produced by myself, Elsbeth Magilton, and Lysandra Marquez, and Colin McCarthy created and recorded our theme music. This podcast is part of the Menard Governance and Technology Programming Series.

Until next time, hang in there and keep learning.[00:50:00]