Tech Refactored

S2E24 - Do you trust that algorithm?

January 20, 2022 Season 2 Episode 24

This week we’re joined by Derek Bambauer, a professor of law at the University of Arizona, where he teaches Internet law and intellectual property. We’re discussing his latest paper with Michael Risch on the rise of algorithm-driven decision making enabled by Big Data. Derek explains what algorithms are, how they work, and we get into the depths of why people trust (or don’t trust) algorithms.

Episode Notes:
You can find their paper ‘Worse Than Human?’ here.

Disclaimer: This transcript is auto-generated and has not been thoroughly reviewed for completeness or accuracy.

[00:00:00] Gus Herwitz: This is Tech Refactored. I'm your host, Gus Herwitz, the Menard Director of the Nebraska Governance and Technology Center at the University of Nebraska. Today we're joined by Derek Bambauer, a professor of law at the University of Arizona, where his teaching and research focuses on various law and technology topics including internet law, intellectual property, and cybersecurity.

Derek, welcome to Tech Refactored. 

[00:00:47] Derek Bambauer: Gus, thank you so much for having me. It's a delight to be here. 

[00:00:51] Gus Herwitz: So I, I'd like to start our discussion today, uh, talking about your latest paper, which is co-authored with Michael Risch of Villanova University. [00:01:00] And your paper is about whether or perhaps the extent to which humans trust decisions made by computers.

Uh, let's just dive right in. Can you tell me a bit about this paper?

[00:01:09] Derek Bambauer: I think that the paper begins against a backdrop of, uh, what I would say is great suspicion of algorithms. And honestly, if you think about popular culture, uh, artificial intelligence and algorithms generally speaking don't come off particularly well.

We have, you know, uh, we have HAL from 2001, we have Skynet, we have The Matrix. Uh, by and large, artificial intelligence and algorithms are things that we, we fear and mistrust. And that seems to also be a fairly common thread in scholarship, both legal scholarship and in areas like computer science and, uh, public policy more generally.

Michael and I were curious to see the degree to which people who don't read law review articles, uh, were skeptical about [00:02:00] algorithms, and our intuition was that they would be less so, and they turned out to be much less so than we thought. So some of the results that we got, when we started examining preferences by presenting people with essentially the choice of having either a human decision maker or an algorithmic decision maker under a set of relatively mundane decision making circumstances, showed that people were much more comfortable with algorithms than we thought.

And that by and large, they reacted to this choice in ways that we might think of as classically rational. They were less afraid of algorithms than they were interested in whether the algorithms offered them benefits such as being cheaper, being faster, being more accurate, and we also saw that there does seem to be a little bit of a preference for having a human in the loop as the stakes at issue [00:03:00] increase. So we seem to have this kind of built-in preference for, um, having people make decisions as, um, as the gravity of them increases.

[00:03:09] Gus Herwitz: So there, there's a lot in there to unpack. First we should highlight, and we'll, uh, turn to, uh, focus on this in a moment: this is empirical research.

So you were actually using surveys, asking human beings, under, uh, certain circumstances, what they would prefer to happen. So we'll, we'll talk about, uh, your approach and the nature of empirical research in a moment. But first, let's start really basic. Um, when you say "algorithm," what are you talking about?

[00:03:42] Derek Bambauer: Oddly enough, I think that all I'm talking about is basically some sort of math that helps you make a decision. And we've been doing this for a long time in very simple ways. Um, Netflix ratings, Rotten Tomatoes. Uh, take your pick. You know, even [00:04:00] things in areas such as criminal law have for a very long time been influenced by relatively simple algorithms.

The federal sentencing guidelines, for example, are basically a matrix of the seriousness of the offense and, you know, prior offenses and the like, and you just sort of go across, find the box, and that's the, the penalty that you get. So at one level, algorithms are intensely familiar. At another level, the, uh, sort of growth in both the power of computing and the capabilities of things like machine learning recently has made algorithms both more prevalent and also somewhat more inscrutable, and so they have garnered, I think, increasing attention and increasing concern because of that. But at some level, anybody who's ever read a restaurant review in the New York Times has decided at some level whether they want to eat at a one star restaurant or a four star restaurant, and that's an algorithm.
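To make the point concrete, here is a minimal sketch of the kind of grid lookup Derek describes: two inputs go in, you find the box, and a recommendation comes out. The categories and month ranges below are invented for illustration and are not the actual federal sentencing guidelines.

```python
# A toy "algorithm" in the spirit of the grid lookup described above.
# The categories and month ranges are invented for illustration; they are
# not the real federal sentencing guidelines.

PENALTY_TABLE = {
    # (offense_seriousness, prior_record) -> recommended range in months
    ("low", "none"):  (0, 6),
    ("low", "some"):  (4, 10),
    ("high", "none"): (24, 30),
    ("high", "some"): (37, 46),
}

def recommend_range(seriousness: str, priors: str) -> tuple:
    """Go across the grid, find the box, and return the range it contains."""
    return PENALTY_TABLE[(seriousness, priors)]

print(recommend_range("high", "none"))  # (24, 30)
```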

[00:04:59] Gus Herwitz: So [00:05:00] for the purposes of the paper, uh, let's just turn straight to the methodology, I guess. Can you explain to us how you approached asking this?

[00:05:11] Derek Bambauer: Absolutely. So we used Amazon's Mechanical Turk, uh, platform to ask people, volunteers, um, whom we paid a very small amount of money, to respond to questions based on being presented with one of four scenarios, what we call vignettes. Um, and they were, in sort of increasing order: whether or not you would receive a gift card to a coffee shop as the employee of the month, whether you would, uh, receive a loan from a bank for a new car that you need, whether you would be eligible for inclusion in a clinical trial that could address a malady that you suffer from, and, at the high end, whether or not you would have to pay a civil traffic fine of several hundred dollars.

So we presented people randomly with [00:06:00] one of these four scenarios, and then we, we told them some things in what survey folks would call classic A/B testing. So we would tell them, for example, you have been assigned randomly to a human or an algorithm. And then we would give them some information.

We'd say, it turns out that the algorithm has a very low error rate or a high error rate. The algorithm is cheaper than the human, or it's the same cost as the human. The algorithm will decide faster or slower or at the same speed as the human. And probably most interesting for us at the beginning was that we told people either that, um, the algorithm would have access to just public information about them, such as what you might get from a Google search,

or that they would have access to private data, like credit reports or employment history or so forth. And at the very end, we asked them, okay, given that you have initially been assigned to either a person or [00:07:00] an algorithm, would you like to stay with that or would you like to switch? And so that allowed us to test a number of things, both the effects of the variables that we talked about, and then something that turned out to be quite important, which is just the effect of defaults.

Anchoring turned out to be a fairly significant factor in the, the study results. 
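As a rough illustration of that design, here is a minimal sketch of random assignment to a vignette and to the A/B conditions Derek describes. The vignette names, condition levels, and field names are paraphrased from the conversation rather than taken from the authors' actual survey instrument.

```python
import random

# Paraphrased from the episode; not the authors' actual survey instrument.
VIGNETTES = ["gift card", "car loan", "clinical trial", "traffic fine"]
CONDITIONS = {
    "decider": ["human", "algorithm"],   # the default assignment (the anchor)
    "error_rate": ["low", "high"],
    "cost": ["cheaper", "same"],
    "speed": ["faster", "same", "slower"],
    "data_access": ["public only", "private (credit, employment)"],
}

def draw_scenario(rng: random.Random) -> dict:
    """Randomly assign one vignette and one level of each A/B condition."""
    scenario = {"vignette": rng.choice(VIGNETTES)}
    scenario.update({name: rng.choice(levels) for name, levels in CONDITIONS.items()})
    return scenario

rng = random.Random(0)
print(draw_scenario(rng))
# The final outcome question is then: "You were assigned to the {decider}.
# Would you like to stay with that decision maker or switch?"
```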

[00:07:19] Gus Herwitz: By anchoring, you mean people were just willing to go with whatever they were assigned to because, for whatever reason, as the cognitive bias literature suggests, uh, you have some preference for where you already are.

[00:07:33] Derek Bambauer: That's exactly right. And, um, we were surprised at how strong that was, and it turned out to be particularly strong for algorithms. And, um, at some level it's strange, too: if we assume that people are either very aware of or very fearful of algorithms, the mere coin flip, uh, the virtual coin flip, that determined whether or not you were going to get a human or an algorithm initially [00:08:00] really shouldn't make a difference, but it turned out to have a very strong effect.

[00:08:04] Gus Herwitz: So what did you find? Let's just jump straight to the results. Um, what did you learn from this, uh, survey?

[00:08:12] Derek Bambauer: Some things that were really surprising. One was that, uh, we collected a number of demographic variables from participants, and to our great surprise, they turned out to be completely statistically irrelevant, with one exception, which is that, uh, males were slightly more likely, 16% more likely, than females to opt for an algorithm.

But everything else, age, political views, exposure to computers, none of that made a difference. That was very surprising.

[00:08:41] Gus Herwitz: And, and when you say opt for an algorithm, uh, did you measure both the incidence of, uh, sticking with versus opting for the alternative as different outcomes?

[00:08:52] Derek Bambauer: Exactly right. Yeah.

That was one of the things that we checked for. Um, and we were particularly interested, obviously, in the conditions under which people [00:09:00] would switch. Um, so the, the sort of major finding was that people behaved in ways, strangely, that are kind of classically economically rational. So the most important thing for people turns out to be price.

So if the algorithm offered them more benefits, they were delighted to have it. Um, and this in many ways, I think, mirrors a slightly cynical, but I think, uh, you know, empirically unshakeable fact about the privacy literature, probably most notably put forward by former Judge Richard Posner, which is that all people really care about is price.

They are willing to bargain on almost anything else or give up almost anything else. So price made the most difference. Um, they were also motivated by accuracy. They very strongly opted for the more accurate decision maker. So if we told them that one decision maker had a low error rate, they preferred that one.

If we told them that another decision maker had a high error rate, they tended to [00:10:00] avoid that one. Um, things such as speed also made a difference. Probably the, the piece that shocked us the most was that access to private information made no difference whatsoever. And we, um, think that that might be, uh, possibly because there is just, uh, a set of mixed or heterogeneous preferences out there.

On the one hand, it might be that people are much more comfortable telling uncomfortable truths to an algorithm that's not going to judge them. It doesn't have any moral context. It just kind of, like, employs cold machine logic to make a decision. On the other hand, there may well be a second group of people who want exactly that.

They want a human being who brings that social context, who may bring empathy and understanding to the decision. And so it's entirely possible that that explains the, the basically sort of ineffectiveness of that particular variable. But when all was said and done, [00:11:00] generalizing over all of our results, it turns out that 52% of people ultimately picked the algorithm and 48% the human, and that was a statistically significant difference.
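As a back-of-the-envelope check (not the authors' actual analysis), a simple one-sample test of a proportion shows why a 52% versus 48% split can be distinguishable from a coin flip once the sample is in the neighborhood of 4,000 respondents, the figure Derek mentions later in the episode.

```python
from math import sqrt
from statistics import NormalDist

n = 4000        # assumed sample size, approximate, per the episode
p_hat = 0.52    # observed share choosing the algorithm
p0 = 0.50       # null hypothesis: people are indifferent

se = sqrt(p0 * (1 - p0) / n)               # standard error under the null
z = (p_hat - p0) / se                      # test statistic
p_value = 2 * (1 - NormalDist().cdf(z))    # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.3f}")   # about z = 2.53, p = 0.011
```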

[00:11:11] Gus Herwitz: That's a signif-, statistically significant perhaps, but it's also a pretty small, uh, difference. That's roughly, um, a coin flip, it sounds like, in terms of which you prefer, almost 50-50. Um, how does this affect our understanding of whether people, or how should this affect how we think about whether people, trust algorithmic versus human decision makers?

[00:11:34] Derek Bambauer: I think it probably has, has two effects. One is, I think it calls into question the conventional wisdom that people are, are frightened of algorithms or dislike algorithms. Um, at minimum, they are quite accepting of algorithms the moment that the algorithm offers them some tangible benefit. The second thing is that I think it has some suggestions [00:12:00] for, you know, if we decide that reform or interventions are necessary for dealing with algorithms at large, whether in the employment context or, you know, Facebook or whatever it might be, that proposed interventions such as transparency perhaps should take a different form than is currently proposed.

Um, if, for example, tomorrow Facebook were to reveal to us exactly the math that it uses to determine what pops up at the top of our feeds, um, I wouldn't understand it. You would, but I suspect that 99.9% of the population wouldn't. On the other hand, if we are concerned about Facebook's algorithm, but Facebook can explain to us that there are tangible benefits from having an algorithm make decisions, right, it excludes extraneous information, it's able to do things faster, um, it's able to do things for us more cheaply, that may be precisely the sort of transparency that will be [00:13:00] most effective in allowing consumers to make, uh, an informed choice in these circumstances.

[00:13:06] Gus Herwitz: So knowing what consumers are, and perhaps even more important, aren't interested in, can help us design policy interventions that will benefit and also possibly not harm consumers.

[00:13:19] Derek Bambauer: I think that's right. And to be honest, I think that that is, um, it seems very mundane, but I think it's an important suggestion that we make, which is that a lot of the discussion about algorithms leaps past how people actually feel about them, what people actually prefer, and moves straight away to, uh, the necessity of reform or regulation.

And, uh, there are certainly instances where reform and regulation are necessary, where we can show that training data is bad or biased, or that the algorithm, um, hasn't accounted for certain variables, or just outputs, um, biased results. On the other hand, we do think that regulation should start with where [00:14:00] consumers are.

It should be cognizant of their preferences, of the things that they prefer, both for instrumental reasons, it will simply make reforms, uh, politically easier to, um, sell to the American people, and also because we start from the principle that people are their own best judges of, um, their best interests, that they, um, know what will suit them best, and that that's, that's a sort of basic, uh, move for respecting their autonomy.

[00:14:30] Gus Herwitz: So we should turn to some questions about your methodology, and I've got one that I expect you can predict. I'm going to ask you if you can predict it when we get to it, but I want to start with the questions and the vignettes that you presented to the survey participants. In particular, one of the common discussion topics when we are talking about privacy in the policy and the legal sphere is whether consumers actually are [00:15:00] in the best position to take, uh, care of themselves and to make their own decisions, the point that you just made. I'm curious, were any of your vignettes structured to test and probe that question in particular? Perhaps giving consumers, um, an obvious quick financial payout with some buried potential use of data they were disclosing, or something like that, that could be harmful to them.

[00:15:27] Derek Bambauer: Yes. So one of the things that we looked at is, if we vary, for example, the financial stakes, so we could make, um, for example, the human and the algorithm either cost the same or return the same benefit, essentially flip sides of the same coin. And so we could vary that, you know, either they could be equal, or one could be greater, or one could be lesser.

And we tested the interaction effects, for example, with things like access to private information, and it turned out, for example, that price just swamped everything else. So private information, and, [00:16:00] and we were fairly explicit about this, you know, things like having access to credit reports or employment history, etc., I think the sort of things that most people think of as relatively private data versus the sort of thing that turns up, um, on a web search, just did not seem to alter the outcome, and that was deeply surprising. I don't think that the vignettes allow us to say anything conclusive about why it is that people were less concerned about access to private data.

But we, we began this project really with, with sort of privacy as, as the first thing that we were thinking about, and it turned out to be the least relevant of all of the conclusions that we had. 

[00:16:47] Gus Herwitz: So what, what are the limitations of the methodology and, uh, of this research?

[00:16:53] Derek Bambauer: So I think that one of the things that's just common to all survey methodology is that, look, at the end of the day, it's fake [00:17:00] money.

There's actually nothing on the line. And, um, when we examined possibilities of actually imposing something like real, uh, benefits or penalties, given the small amount that we were paying people, uh, we thought that, uh, altering that even to a small degree was likely to swamp any effects that we found. So in the end, nobody's actually affected by these things.

And so it may be very different to make these decisions in theory when, for example, you are, you know, interested in being included in a clinical trial, versus actually facing a diagnosis and having to make this decision. So I think that that is, is probably a common limitation, but a very real one. The second is that, by design, we tested things that I would call relatively mundane decisions.

These are the things that one encounters in kind of the workaday world, but they are not perhaps the most, um, grave or consequential of decisions. We, for example, avoided the, uh, [00:18:00] contexts where I think a lot of advocates are most worried: criminal sentencing, bail, and probation.

Although in our, our beta study, our pilot, we actually did include a criminal version of the traffic offense where the, the risk was, uh, a few years in prison. And to our very great surprise, it did not differ statistically from the outcome when it was only the civil offense. So we think that actually one of the reasons that we dropped it was just, um, to retain power in our study.

Another was that we were not completely convinced that we had covered the entire waterfront in terms of variables in things like criminal sentencing. And, and lastly, our intuition was that areas like criminal sentencing may just implicate important American values, things like due process, fairness, the opportunity to be heard, that are, are very [00:19:00] difficult to engage in, in sort of utilitarian or empirical trade-offs.

These are just deontological values that may overwhelm or override any utilitarian calculus. And so, um, we just sort of let that be. So I think that what we can say is our results, I think, are strongly suggestive, but we don't want to, um, draw conclusions beyond the kind of boundaries of the type of decision that we presented people with.

[00:19:30] Gus Herwitz: So I'm just going to ask you in a, uh, kind of, uh, coy sort of way: what do you think the question I'm going to ask you about limitations is? And let, let's see if it's the one I have in mind.

[00:19:42] Derek Bambauer: Oh, there are so many. I think one question is just, look, you are basically surveying geeks. You know, you have people who are taking Mechanical Turk surveys, and are these folks really representative?

Um, it's a great question, and one of the things that we did is, [00:20:00] we, we worked really hard to, one, get a large enough sample size, and two, to check the demographics, because on the whole, it turns out that people who take Mechanical Turk surveys tend to be slightly better educated. They tend to be slightly more familiar with computers.

They tend to be, uh, whiter, they tend to be more male, et cetera. And so by building out a large enough sample size, we were reasonably confident the demographic variables did not make a difference, but we are, I think, hopefully appropriately cautious about whether or not marginalized communities or certain groups may have distinct sets of preferences that our sample size wasn't large enough to capture.

And I think that that's something that is hopefully fertile ground for future work. And to be honest, it's our great hope that other people who are working in this field, [00:21:00] especially people who perhaps come at it with a different normative bent from ours, will also use empirical approaches to challenge us, to prove us wrong, to build out our results.

[00:21:12] Gus Herwitz: So first, uh, yes, that was exactly my question: the selection bias with, uh, Mechanical Turk. And we, we will, uh, come back after we take a brief break, in, uh, just another, uh, minute or two, to talk about the role of empirical work in these fields, because I, I think it's really important and in many ways, uh, the, uh, most laudable part of, uh, your work here, uh, with Michael, which isn't to say anything negative about your work, just positive and really, uh, emphatic about the importance of this sort of work to look at these questions. I, I do want to ask two, uh, last questions though before we turn to a break. Um, the first is, you have mentioned, uh, the sample size that you worked to, uh, achieve. Can you tell us a little bit about the sample size and statistical significance [00:22:00] of, uh, this process?

[00:22:02] Derek Bambauer: So one of the things that we did, I should note, is that we, we limited the people who were allowed to participate to people who were in the United States, um, based on a, a variety of sort of technical data, including just their IP address and whether they were using a VPN. We asked them if they were in the United States, because we think that these attitudes are really likely to vary.

You know, if, if you were to ask Europeans, they would have very different attitudes. We had just under 4,000 participants and one of the things that we did that turns out to be important is, um, Mechanical Turkers sometimes do this for a living. The quicker that they can answer a survey, the more money they can make.

And it turns out that you can catch them if you ask a clever attention question at the end. And so one of the things that we did at the very end is we told them upfront that they had been assigned by default to one of, you know, either the human or the algorithm. And we just asked them, Hey, which one have you been assigned to?

The human or the algorithm, [00:23:00] the lady or the tiger? If they failed that, they were out. And it turns out that we actually got complaints about this, but the, the size of it actually allowed us to deal with all of the demographic variables that we were able to capture, given that there are, um, important and worthwhile limitations imposed, for example, by IRBs at universities on our ability to ask certain questions.

And so, um, having about 4,000 people enabled us, with the four vignettes and the A/B questions, to have enough statistical power, I think, to be confident in, in the, the conclusions we were able to draw.
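The attention check Derek describes translates into a very simple data-cleaning step: drop any respondent whose answer does not match the condition they were actually assigned. A minimal sketch, with hypothetical field names:

```python
# Hypothetical respondent records; the field names are invented for illustration.
responses = [
    {"id": 1, "assigned": "algorithm", "attention_answer": "algorithm", "switched": False},
    {"id": 2, "assigned": "human",     "attention_answer": "algorithm", "switched": True},
    {"id": 3, "assigned": "human",     "attention_answer": "human",     "switched": False},
]

# Keep only respondents who correctly recalled their default assignment.
valid = [r for r in responses if r["attention_answer"] == r["assigned"]]

print(f"kept {len(valid)} of {len(responses)} responses")  # kept 2 of 3 responses
```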

[00:23:41] Gus Herwitz: And the last question I wanted to ask before we, uh, take a brief break really goes back to the first question that I almost asked when we started talking about this, um, which is

uh, the sophistication of the participants, and in particular, whether you had to explain, or you were able [00:24:00] to measure, um, the extent to which people are familiar with and understand what algorithms are, what machine learning is, what artificial intelligence, uh, is. Uh, whether your understanding of artificial intelligence is the Terminator and Skynet versus, um, yeah, I've actually taken a couple of courses and I, I taught a computer how to play the game of Snake on its own.

Uh, I'm an ML, uh, pro. Um, did you, were you able to measure, or did you measure, um, any differences between those sorts of characteristics?

[00:24:37] Derek Bambauer: We tried to, and one of the things that we tried to get at was, uh, people's familiarity and comfort with the internet and with computing more generally. Things like, did they use it on a daily basis?

What did they use it for? And that was, I think, another surprise, um, our intuition was that the more familiar one was with, um, with [00:25:00] computers, um, perhaps at the very high end, as you say, with, with machine learning, um, that people would be more comfortable with algorithms.

Um, and it turned out not to be the case; that, uh, that variable wasn't statistically significant. And I think that perhaps the, the reason is just, uh, the more you know about something, the more pitfalls you may be aware of. And, um, so perhaps it's just, um, uh, you know, that, that people who know about machine learning know the ways in which it can go awry.

The other thing, of course, that, um, maybe we can talk about in the second half is just, um, honestly, the, the sort of joke working title for this paper for a long time was People Suck Too, and humans are bad decision makers, they're biased, they obfuscate things. And so I think that perhaps for people who are, um, less knowledgeable about algorithms, we all have some sense of people, and we all know the ways in which they come up short [00:26:00] as decision makers, and so there may be some pessimism about humans and perhaps a little bit of warranted or unwarranted optimism about what code can do.

[00:26:10] Gus Herwitz: Well, I should have said at the outset, the name of the paper is Worse Than Human? And that's phrased as a question. You can find it by Googling it, of course. Uh, it is available, at least in a pre-print version, uh, online. And, uh, it will be coming out, uh, pretty soon, I believe, in, uh, the Arizona State Law Journal.

We have been speaking with Derek Bambauer. We will be back in a moment after a brief break.

[00:26:39] Morgan Armstrong: I'm Morgan Armstrong, a student fellow at the Nebraska Governance and Technology Center and part of the Space, Cyber, and Telecommunications Law Program at the University of Nebraska. Did you know the University of Nebraska College of Law also has a Space, Cyber, and Telecommunications Law program? It started in 2008.

The program features tracks for law students and advanced degrees for [00:27:00] established attorneys interested in satellites, international law, radio spectrum, or just about anything in the great expanse of space. Check them out on Twitter at Space Cyber Law. Now back to this episode of Tech Refactored.

[00:27:18] Gus Herwitz: This is the first episode of Tech Refactored that we're recording in 2022, and a new year means new things. Over the next several episodes, we're going to be experimenting with some of our Tech Refactored format: new topics, new music, maybe new hosts. Sorry guys, you're stuck with me, but we do hope to have more voices getting their turn in the host's chair.

As we make these changes, we'd love to continue hearing from you. You can find out where to send us your topic ideas or your thoughts about the show by Googling Tech Refactored. And we're back with Derek Bambauer. We've been talking about, uh, his forthcoming article looking at human versus machine decision makers and whether we as humans trust one more than the [00:28:00] other.

But in addition to this, Derek is a law professor who has spent roughly 15, or, to be, uh, ungenerous, say 20, years thinking and studying about law and technology, internet law, cyber law, intellectual property, cybersecurity, all sorts of fun stuff. So I want to expand our discussion a little bit to think more about this area of law generally, but I'm going to come into it by, uh, really anchoring the discussion in the article we've been discussing.

And I want to start with a, a curious question. You're a, a law professor. This article is being published in a law journal. Uh, you wrote it with another law professor, for that matter. I'm a law professor. Um, is this a legal question that you're really researching in this article?

[00:28:47] Derek Bambauer: That's a great question. I think it is in part a legal question.

It is in part a policy question, and in part it's a technological question, because at, at some level, [00:29:00] for most people, algorithms, including increasingly machine learning ones, uh, might as well be magic. They are a magic box. Things go in one side and come out the other. And even people who are truly expert in machine learning don't necessarily understand things.

So there is, I think, a technological question about how these things operate. And then there is something that I think law professors deal with as a matter of course, but that computer scientists and MIS people, uh, tend to avoid, which is there's a set of policy questions about what should we do, if anything, about algorithms, um, because they are, uh, they're cheaper, they're faster, um, they do things, uh, that would be impossible for humans.

There's simply no way that Facebook or Twitter can exist in a way that we would want to use without algorithms. You just cannot hire [00:30:00] enough interns to, uh, curate that content. And right now that is the fight that we are having. For example, and this is where I think the law comes in, is that, um, algorithms, at least in Washington, DC, and policymaking circles right now, are a bit like the weather.

Everybody complains about it, but nobody does anything about it. And so that, I think, ultimately is where we were sort of gently heading: what contributions can we make to this debate about what ought to be done? Particularly given that, as we mentioned before, I think that people tend to sort of zoom past how consumers, ordinary folks, actually feel about algorithms, and also just that, in the, the current policy debate, every possible viewpoint and every possible change is being mooted. It is a completely open field, and the hope [00:31:00] is that, um, whether, you know, law professors bring one set of, of methodologies and computer scientists bring others, we can bring some rigorous grounding to at least part of this.

[00:31:12] Gus Herwitz: So that really turns me to the thing that I said before, that I like most about this paper, which is that it's empirical research.

Um, it's, uh, trying to actually answer some foundational questions that go into what you've just described as policy, politics, debate, fight. Um, those, I would say, in many ways aren't the realm of academic, scholarly research; debates, fights, policy, politics, those kind of are opinions. Um, not, not to be perhaps, uh, too dismissive or crass about them, but I, I think that, uh, it's not entirely unfair or uncharitable to say that much of the privacy law and policy debate over the [00:32:00] last 20 years has been unanchored in actual data. Um, so I, I guess I'll, I'll start by asking you to respond to that potentially controversial, uh, statement that I just made. Um, and then I'd like to talk to you a bit about the, the role of empirics, um, in these fields.

[00:32:20] Derek Bambauer: So, uh, I'm going to say something uncontroversial here, but highly controversial outside, which is, I agree with you completely. I think that, at least in the United States and perhaps beyond, in the debates about privacy, about privacy policy, and about privacy regulation, uh, basically where you stand depends on where you sit. It is entirely a matter of one's normative priors. And if anything, the debate has, I think, flown in the face of, uh, frankly, consumer preferences for, uh, trading private or personal data in exchange for services that [00:33:00] they like. We love having, um, free social media platforms and photo sharing and email in exchange for giving up some information about ourselves and being shown targeted ads.

And, um, one of the things about the debate, at least on this side of the Atlantic, not in Europe, is that there's a strong, uh, prevailing conventional wisdom that, uh, people are either mistaken or misguided. And I think that in some ways the hope of empirical research, and this is what, what Michael and I were trying to do, is to try to at least start with a common baseline of facts, and then we can argue about whether or not we've actually, you know, accurately captured those facts, or whether there are errors in our methodology. We can argue about what we should do with that information, but at least hopefully we start with common ground, as opposed to simply coming at it from different points of [00:34:00] view, where often, I think, in the privacy community, it just involves people talking past each other to a certain degree.

And, um, the empirical revolution is, is late coming to legal scholarship. Everywhere else has had it for a very long time, with the possible exception of economics, where theory still holds sway. Um, but in a small way, I think that this is, uh, our attempt to make a contribution where we can ground a debate that is absolutely raging at the moment in at least some verifiable facts about how people feel.

[00:34:38] Gus Herwitz: So it is fascinating that you, uh, made that, uh, side comment about economics. Um, putting my economist hat on, I think, uh, many, uh, theory-focused economists would disagree and say economics has just been taken over by, uh, empirics, um, and we need more theory. What we're lacking today in economics is [00:35:00] theory, um, because we've got all this empirical work that is acontextual and isn't generating broader understanding of what to do with, uh, the theory, with, uh, the empirics. Which, uh, really brings me to my, my next observation about your work: your work in many ways isn't answering the questions, or any questions necessarily.

Rather, it's anchoring the discussion. It's giving us something that we can, uh, probe and analyze and potentially argue about. If you, uh, if your normative priors run contrary to, uh, the work that you and Michael have done here, well, we now have a methodology.

We have a survey that we can probe, and we can ask, okay, this one study showed us this. What are the drivers that cause people to express these preferences? Um, and how do we do a follow-up study? How do we do a more robust study? What are the follow-up questions that we need to have in [00:36:00] order to prove that, uh, Derek's worldview is wrong or my worldview is right? Before, if we're just arguing, well, here's why I think what I think, and you're arguing, here's what you think, we, as you said, we talk past each other.

[00:36:16] Derek Bambauer: I think that's right. I mean, there's always the incommensurability problem. It is simultaneously a strength and a weakness of our work. The strength of it is that we think that we, um, we have defensible numbers.

The weakness of it is that it does not have obvious normative or reform conclusions. It's not clear in what direction it points, and you made an excellent point, which is there is, um, there's a temporal problem with our work, which is we are measuring things as of 2020, and we don't know in which direction people's attitudes will evolve, because we could certainly think that, you know, [00:37:00] for folks who are going off to see Matrix Resurrections, perhaps they'll be terrified and they'll become increasingly skeptical of algorithms. Um, or they'll love them just because of Keanu Reeves. Um, on the other hand, you know, there is some evidence to suggest that people become accustomed to algorithms over time.

Um, like everything else, you know, it's, it's at first new and frightening and then it becomes standard. And I think the best example there is, uh, something that has hit a lot of people during COVID, which is, it used to be virtually unthinkable, it was strange, it was frankly creepy, to think about having an algorithm select somebody that you would go on a date with.

That's just weird. And now it is utterly accepted, and perhaps COVID has made it necessary, but people have just rapidly become accustomed to the idea that when they're trying to winnow down the set of people with whom they could have dinner or coffee, then an [00:38:00] algorithm might present a reasonable first cut at it.

And so I, I think that that's, as I said, the limitation of our work, is that it doesn't point in an obvious normative direction. The hope is that we have, we have laid down a marker, and that not only will we do follow-on work, but that hopefully this is high enough quality to convince other people to do the same thing, whether in different areas or over time or in different populations.

[00:38:29] Gus Herwitz: The idea of, uh, algorithmic dating that you just raised, that's a whole other, uh, episode, probably an entire series of discussions in and of itself, but I, I just have to highlight it. It really, uh, drives home a lot of the questions and concerns some folks might have about algorithms and, uh, things like algorithmic transparency.

How is the algorithm trying to match person A with person B? What is the algorithm optimizing? Is it trying to optimize the [00:39:00] likelihood that they'll get along and have a good relationship? Is it trying to maximize their, uh, social, uh, outcomes together, their prosperity for society? Is it a creepy eugenicist algorithm that's matching people based upon shared skin color?

Well, that's kind of not what we want them to be doing. It's a bad idea. Um, and sometimes we might not know what's going on, because the algorithm could very well look at people in your social circle and say, oh, they for the most part have lighter or darker complexions; that suggests to us that people in your social, uh, circle, uh, tend to prefer people with similar complexions, and it starts matching people based upon skin tones.

Well, we're probably not, as a society, too happy with that. We might not know what the algorithm is doing. So, uh, there's a whole range of, uh, discussions and topics, uh, in there. [00:40:00]

[00:40:00] Derek Bambauer: I think that's right, and I think that one useful thing about this is that it may, to the degree that we're honest enough with ourselves, it may force us to confront some very hard questions.

And I think that, ironically, dating algorithms are a really good example, as you say. Um, what preferences are acceptable for, for me to have in terms of who I would like to go on a date with? Must they match my political views, my religious views? Must they also be an ardent fan of the Boston Red Sox? Um, must they come from a small town in Massachusetts?

Must, as you say, their complexion be very similar to mine? And so I think that algorithms can do all of these things very easily. The largest set of policy choices that we face are, um, what things are acceptable to ask them to do, what things are acceptable to allow them to do. And, um, this is a, a sort of subtlety, I think, in the way the algorithms work, which [00:41:00] is that the more sophisticated ones, we talked about machine learning a little bit earlier, depend heavily on training data, and selecting the quality of that training data is extraordinarily important.

And by far my favorite anecdote about this is that Microsoft attempted to write a bot that would respond to Twitter queries. The difficulty is they trained it on Twitter, and so within 24 hours their bot, called Tay, became extraordinarily racist, antisemitic, sexist, and so forth, because it was reading Twitter. And that's the difficulty, right? This is the sort of thing that algorithms may inadvertently reveal about how humans operate, um, and we're going to have to, to face that. And we, you know, we can't blame the computers. This is just us.
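The training-data point can be illustrated with a deliberately silly toy model. The three-sentence "corpus" below is invented and has nothing to do with Microsoft's actual system; the only point is that a model trained on skewed text reproduces that skew.

```python
from collections import Counter

# Toy illustration of "garbage in, garbage out": a deliberately skewed,
# invented training corpus, unrelated to any real system.
training = [
    ("that group is terrible", "hostile"),
    ("everyone like them is terrible", "hostile"),
    ("have a nice day", "friendly"),
]

# Count how often each word appears under each label.
word_label_counts = Counter()
for text, label in training:
    for word in text.split():
        word_label_counts[(word, label)] += 1

def predict(text: str) -> str:
    """Score each label by how many of its training words appear in the text."""
    scores = Counter()
    for word in text.split():
        for label in ("hostile", "friendly"):
            scores[label] += word_label_counts[(word, label)]
    return scores.most_common(1)[0][0]

# A harmless sentence containing a word the skewed corpus associates with hostility.
print(predict("they are terrible at chess"))  # -> "hostile"
```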

[00:41:50] Gus Herwitz: So since we're, uh, sharing anecdotes and stories, um, I, I have to, uh, share one of, uh, my favorites that I think directly relates to your paper. Uh, I'm [00:42:00] sure that you're familiar with this, but it's the study on whether humans trust robots. This was a study a couple of years ago where the researchers brought a group of participants to a conference room in some building somewhere.

Uh, that's all that the participants knew. And then there was a fire alarm that went off and there was a robot in the hallway that was there to direct them to safety. And it starts saying, I am the safety robot. I am here to direct you to safety. Please follow me to evacuate the building. And they did this several times.

With the robot in various states of disrepair. Um, and the robot would, it would be speaking incorrectly, it would have sparks coming out of it, it would have all sorts of signs that this robot is malfunctioning, up to the point where the robot, uh, would lead the participants into another conference room and just start circling the table.

And the participants followed the robot. They trusted this robot that was saying, I'm [00:43:00] the robot, I am here to help you evacuate the building, as it basically led them to their death. So, uh, there, there is another data point that humans do tend to, uh, trust robots, and also a data point that that might not be for the best.

[00:43:15] Derek Bambauer: I think that's right, and it, it's, it's wonderfully interesting, because I think that, um, there is a certain techno-optimism that we have, which is that we, we tend to regard things that we see online, for example, Google search results, we tend to treat as truth, right? Which is, look, this is something that a computer has produced.

I'm going to trust it. And, um, this combines, I think, in general with just human cognitive biases, that we tend to trust authority figures, and we tend to follow what they tell us to do. Um, and I think a, a sort of sad version is, you know, during the, the 9/11 attacks, the advice from, um, from security guards and others to go up rather than down, that was wrong.

[00:44:00] And a number of people followed that. And it's very difficult, I think, for people to break out of that instinct to follow authority figures, and for whatever reason, we now view robots as authority figures. Perhaps that's Star Wars, but that too, I think, is something that hopefully, actually, as algorithms become more widespread, as we deal with robots everywhere, not just my Roomba, perhaps we will normalize it and we won't trust them any more than we trust, you know, my next door neighbor if he came out and said, hey, you should run that way down the street rather than the other way. Um, because in, in some ways I think that we are, we are at the early stages of algorithm history, and, um, it's almost impossible to predict where it's going to go. And it's harder still because there are some normative choices to be made there.

But, um, the idea of essentially, um, you know, R5-D4 from Star Wars, right, blowing its capacitor and telling people that they should go down the stairs, and everybody [00:45:00] does, is I think both, um, wonderful and a cautionary tale.

[00:45:05] Gus Herwitz: So Derek, I've had, I have a whole list of other topics I want to, uh, get into with you.

Uh, we're not going to be able to, uh, it's time for us to bring, uh, our discussion to a close, which just means, uh, that I'll need to have you back on for another episode. Uh, I do want to close, uh, by asking what's next. Do you have any plans for, uh, future work building on this? 

[00:45:26] Derek Bambauer: I think that what we want to do is to explore some of the questions that remain open and to try to find out things, perhaps, uh, in more detail about privacy, perhaps about whether, uh, subgroups, uh, actually have different views on algorithms, and perhaps to begin to, to push the envelope a little bit into sensitive areas, into questions about what happens when algorithms come into areas like criminal sentencing, what happens when they come into questions about family law and custody.

Um, [00:46:00] much more fraught topics, and hopefully using the same sort of data-driven approach, as I said, can at least give us a common vocabulary and a common baseline, uh, to have these, these important discussions about, uh, about values and about the world we want to create.

[00:46:17] Gus Herwitz: Well, I look forward to seeing the work, uh, when it gets done, and thank you again for doing it.

And also thank you for being a, a guest here on Tech Refactored and talking to us today about it. And thank you to our listeners. I have been your host, Gus Herwitz. Thanks for joining us again on this episode of Tech Refactored. If you want to learn more about what we're doing here at the Nebraska Governance and Technology Center, or submit an idea for future episodes, you can go to our website at ngtc.unl.edu, or you can follow us on Twitter at UNL underscore NGTC.

If you enjoyed the show, please don't forget to leave us a rating and review wherever you listen to your podcasts. Our show is produced by Elsbeth Magilton and Lysandra Marquez, and Collin McCarthy [00:47:00] created and recorded our theme music. This podcast is part of the Menard Governance and Technology Programming Series.

Until next time, Skynet is watching.