Tech Refactored

Automated Human Evaluation and Student Privacy

September 09, 2022 Nebraska Governance and Technology Center Season 3 Episode 3

NGTC faculty member and Assistant Professor at Nebraska College of Law, Elana Zeide, joins Gus Hurwitz to discuss two of her recent articles: “The Silicon Ceiling,” which focuses on the use of automated human evaluation and matching systems in education, hiring, and employment, and “Big Proctor,” which examines the impact of online proctoring systems on student privacy and education.

Professor Zeide's forthcoming articles will be linked on the NGTC website upon publication.

Follow Elana Zeide on Twitter @elanazeide
Follow Gus Hurwitz on Twitter @GusHurwitz
Follow NGTC on Twitter @UNL_NGTC

Welcome to Tech Refactored, a podcast in which we explore the ever-changing relationship between technology, society, and the law. I'm your host, Gus Hurwitz, the Menard Director of the Nebraska Governance and Technology Center.

Our guest today is Elana Zeide. I am a professor here at the University of Nebraska. I am part of the Nebraska Governance and Technology Center, woo woo. She teaches, researches, and writes about privacy and the legal, policy, and ethical implications of data-driven systems and artificial intelligence. Her work focuses on the modern-day permanent record and how new learning, hiring, and workplace technologies impact education and access to opportunity.

We're going to be discussing two of her recent articles. The first is called Silicon Ceiling. It's about the use of automated human evaluation and matching systems in education, hiring, and employment. And the second article we'll discuss, called Big Proctor, looks at the impact of online proctoring systems on student privacy and education.

You teach a course in engineering, or law for engineers. Can you tell us a little bit about that course and the experience of working with engineers to introduce them to legal concepts? Yeah, it's a fantastic course. It's actually for graduate students who are already engineers; they're in school to become engineering managers.

So that actually is nice, because it opened up the realm of the different kinds of laws that they might be interested in and might have to work with on a day-to-day basis. It is a very educational experience for both me and the students. You know, law, as you well know, is not always logical, right? It does not always go according to a formula.

It is difficult to predict what will happen. Lawyers use the words "it depends" on a regular basis, and this is a tremendous shock to the engineers. First of all, they're often shocked about what the legal system is, what it's like, how the courts work, what administrative agencies do, and that is just sort of cool, opening their minds.

They're like, wait, I thought this was all just a waste of time and money and red tape, and maybe it's actually useful sometimes, a little bit. But it's really fun and interesting to have them come to these concepts that I, as a law professor, am so familiar with, from such a fresh perspective that it makes me question things too.

It's like, oh, what is this stare decisis thing, and why is it here? Doesn't it seem weird that we just do things based on what people did before? So it's a really good two-way learning street. I'm teaching them a variety of general legal concepts, how the system works, but also stuff about technology, both emerging technical issues, like privacy and discrimination,

but also just basic law: here's what you might need to know about employment law, safety law, contracts. I guess one question, flipping it around: in your time working with these engineering professionals, who are, as you say, already engineers on a management track, have you taken any lessons from them in how you work with law students or how you understand the law yourself?

Yeah. So, you know, they really do ground it in the practicalities, right? When I do consulting, or when I talk to law students about being in practice, especially if you're working with technology companies, as the lawyer you are the bad guy, right? You are the one telling the really enthusiastic engineers and the business people that you cannot do this, or you should not do this.

And what the engineers remind me of, and what I try to communicate to the students, is that just being the "no" person is not useful, right? You still have to do your job and tell people they can't do things that are illegal, or maybe shouldn't do something that's in the gray area. But to just come in there and say "no" is not helpful.

This is why it's important to do the things we're doing at the center, like having law students learn about how technology works, how business mechanisms work, because then you can go back and say, okay, what are you guys actually trying to do? What are you trying to accomplish? And maybe there's another way we can accomplish the same thing that doesn't run afoul of the law or jeopardize us. And that sort of collaborative work,

the importance of it really becomes clear when I encourage the engineers to talk about their own experiences with the law in their workplace settings. There is just blinding clarity there that, no, you can't just be the "no" rubber-stamp person operating in this very narrow legal echo chamber.

Could you tell us a little bit about your own path to the law and becoming a law professor? You've done quite a bit in your background. Yes. So my first career was as a journalist, and I did a lot of pop culture reporting and I also did gossip. So I worked for, among other things, New York magazine's Intelligencer column,

and I did a bit of writing for the National Enquirer here and there. And then after that, I went to law school, but I really maintained an interest in both the First Amendment and privacy issues because of my journalistic background. And I went to a big firm for a while. And when I was done, I was interested in exploring these ideas further, and because of my First Amendment and media background, started doing media defense work.

That was when Facebook was really blowing up as a thing where people not only would go to communicate with each other or post things, but where journalists would go to try to get information about sources. And there were so many issues that were coming up that were just unclear. Like, could a journalist friend someone on Facebook, and, you know,

if the person consented to you being their friend, could they then publish the picture that was on their private page? Right. And as I was looking at all these issues, the ones that I had faced with my own journalism came up, and my frustration with the law as it was really just bubbled to the surface. And so I figured, oh, I can sit here and try to

implement really ineffectual or non-existent things and give pretty much "it depends" answers to almost everything anyone ever asks me, or I can try to figure it out myself. Mm-hmm, and that led you to academia, where you are trying to figure out a whole lot of things. Can you tell me a bit about what you're doing in Silicon Ceiling?

Sure. In that article, I'm looking at the way that different institutions in education, employment, and hiring use automated systems to essentially score people, both for explicit decisions, like, are you hired or not? Are you admitted to a college or not? But also in the matching component.

So you have entities like Indeed, ZipRecruiter, LinkedIn, that match people, and you get not only recommendations facing the recruiters and employers, but user-facing recommendations. And I talk about the ways that, you know, there's been a lot of discussion, obviously, about algorithmic discrimination and algorithmic bias and ways to mitigate that,

and those are all tremendously important on a substantive level. Another level of solutions that people talk about is the need for algorithmic transparency. And I say that while those are both incredibly important, we also need to look at the structure of the systems that implement the algorithms, because of the way the system is structured, this ranking

and the sort of systemic disadvantage of people who aren't already predicted to be successful in a particular endeavor mean they become an underclass, an invisible underclass, in these markets. It's very difficult for the employers or recruiters to see, and it's very difficult for the people themselves, the users, to see and to know that they're perhaps being demoted, or that they're seeing jobs that are less lucrative than those shown to people who might be similarly qualified.

And also because of market concentration, both vertically and horizontally, you get consistency across spectrums, so both across industries, across employers, and over time. So as companies draw on the same digital footprints, or the same very few labor-matching markets, to profile people, you get the same results over and over and over, and you essentially create a ceiling,

AKA the Silicon Ceiling, mm-hmm, that is invisible. It seems like these tools open up and tremendously expand the marketplace of people, expand the pool to nontraditional candidates, but they may actually heighten the requirements and exclude, again, people who are currently disfavored by algorithms, without anyone really being able to unpack it or see what's going on.

My first response is surprise, because you would think that if the algorithms, or whatever systems companies are using to find employees, are leaving out some candidates, that's leaving talent on the table. So there would be an incentive or motivation for other companies, or for all companies, frankly, to improve their algorithms and say, "Hey, we've got this great group of employees that are

overlooked by every other company, we should go hire them because they don't have as many options, so maybe we can offer them lower wages and get really great talent, and that will help bring them into the labor force, and then their wages would go up over time, they'd become more competitive..." Why isn't that happening?

Well, so some people are arguing for that, which is fantastic. And I think there is both an economic market incentive for people to do this, as well as perhaps an altruistic societal one. They're not doing it because it's very hard to find these people, because they are invisible in the algorithms. They don't surface at the top, and it might not actually be that hard to just program in different dynamics so that they can surface those individuals, but they don't necessarily do so explicitly.

And it's hard for people to see what's happening because they're told that these algorithms surface the "most talented." Well, that "most talented" is, let's actually be more specific, most similar to people who were previously identified by the algorithm as talented. You know, there might be a very marginal, very small computed difference between this person and that person in terms of their likelihood of success in an endeavor.

And so employers just see the ranks, right? And when you have a lot of individuals, the person who's number one on the list and the person who's number one thousand on the list may not actually be all that different from each other. But if you're a recruiter who's going through that and deciding whom to invite to apply for a job, you're probably not gonna go down to number one thousand on the list.
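To make that dynamic concrete, here is a minimal, hypothetical Python sketch; it is not any vendor's actual system, and the scores, distribution, and cutoff are invented. It shows how tiny differences in a model's scores become a hard rank cutoff that hides most candidates from a recruiter.

import random

random.seed(0)

# Pretend a matching model assigns each of 1,000 applicants a "fit" score.
# The scores here are random and purely illustrative.
candidates = [{"id": i, "score": random.gauss(0.70, 0.02)} for i in range(1000)]

# Recruiters see a ranked list and typically review only the top of it.
ranked = sorted(candidates, key=lambda c: c["score"], reverse=True)
TOP_K = 25  # assumed screening cutoff

print(f"score at rank 1:    {ranked[0]['score']:.3f}")
print(f"score at rank {TOP_K}:   {ranked[TOP_K - 1]['score']:.3f}")
print(f"score at rank 1000: {ranked[-1]['score']:.3f}")

# The gap between the last candidate a recruiter sees and the first one they
# never see is a few hundredths of a point, yet the 975 people below the
# cutoff simply never surface.

The structural point is that the cutoff, not the underlying score difference, decides who is visible.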

Are these algorithms turning up qualified applicants to start with? I guess that's an interesting initial question. It's hard to know, right? I mean, obviously people use different metrics to look at how effective these tools are, but often those metrics are how long it takes for a company to fill a role.

That's usually the metric that's deployed, not how long that employee stays at a position or how long it is until that employee progresses to the next level. Also, it's very hard for companies to evaluate this because they don't have counterfactuals. They don't have the, well, if I had hired this person whom I never saw, how would they have contributed,

would they have succeeded? Yeah. The reason that I ask, and this might be my own idiosyncrasy, or maybe there's a bigger phenomenon here: whenever I'm using any online search algorithm, it seems, when I'm shopping for something on Amazon, the things that I'm looking for don't come up.

I spend a lot of time bypassing the algorithm, looking for different search terms and different rankings and adjusting filters, and, I guess, using the system completely wrong, because the system isn't returning what I think to be the best fit or the best match. And it's fascinating to me to think that maybe these systems are working well enough that the recruiters are just taking the top-line results and saying, "yes, these are the best, we'll pass them on,"

and not using the algorithmic results as an analytical tool, perhaps to help as they search and sort more deeply through the pile of applicants. I think some do, but I think that, first of all, there's usually an initial cutoff pass, right? Because of the ease of applying for something online, whether that's a college or a job, you push a button, right?

You can spam all the employers within a very short period of time. So most companies, and schools too, have to do something to cut off the vast quantity of people who they would consider just patently unqualified for the position. And sometimes they use factors like, how long have you been out of the workforce?

Right. Mm-hmm, that sort of cuts off a certain type of individual. You know, I do something similar in that low pass. Yeah. So we're looking at two different things. We're looking at how we find the highest ranked, and also how we identify the lowest ranked, so that we can use the highest ranked perhaps for our initial manual sorting and screening, but also automatically cut out the bottom.

And I guess, to stick with the Amazon sort of example, when I'm looking at products, I'm not going to look at products that have a really low rating, but I might look at products that don't have any rating at all, which is interesting. So that one-star rating is telling me there's a problem here,

I don't want this. It gets filtered out, and then I'm willing to do the manual work to identify or evaluate the other things. Well, so two things with that. One is, I don't think most employers are willing to take the person who does not have a rating, mm-hmm, right? Yeah. So they're less open-minded than you about what you purchase. But also, that's the issue with these algorithms, which is, if Zappos and Amazon can't figure out what kind of shoes I like after this point in time,

really, how good are they going to be at predicting the complexities of human performance in an ever-changing environment? Mm-hmm, yep. At the same time, you know, companies are caught in a bind because they need something practical, right? But a lot of these solutions, as you sort of implied, are not even technological; it's more about changing the business paradigm and perhaps looking at what you look for in these tools, mm-hmm, and what you expect them to deliver, and how you measure, or don't, whether they're succeeding.

Let's talk about changing paradigms, then. What do you say we should do in the article? What is your solution to the Silicon Ceiling? Well, like most academics, I think I focus more on the problem than the solution. So a lot of it is simply surfacing what's happening, because it is invisible. One thing I look at is, first of all, being sure that we look at results that are provided to the users, as well as just the things that are provided to recruiters, because the ads that individuals see, or

the ranked lists of things to apply to, those can have the same problems, and because it's the users making decisions on their own, there are no legal protections on that point, you know? If you make what society considers a suboptimal decision, sucks for you, right? Mm-hmm. But I suggest diversifying. Diversifying not even in the typical sense we think about, like diversifying the workforce, but literally diversifying the data that companies use, pulling off the internet or scraping

sites, about individuals; using different providers; using multiple different searches and levels instead of just going through a first tier. If you create enough diversity across the system, that will hopefully enable enough diverse individuals to trickle up, to be able to pass the barrier.
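As an editorial illustration of that "diversify the pipeline" idea, here is a small, hypothetical Python sketch; the rankings, candidate IDs, and cutoff are invented and not drawn from any real platform. The shortlist is pooled from several differently sourced rankings rather than taken from the top of a single one.

def top_k(ranking, k):
    """Return the first k candidate IDs from one ranking."""
    return set(ranking[:k])

def pooled_shortlist(rankings, k_each):
    """Union the top slice of several rankings into one shortlist."""
    shortlist = set()
    for ranking in rankings:
        shortlist |= top_k(ranking, k_each)
    return shortlist

# Three made-up rankings built from different data sources, e.g. resume text,
# a skills assessment, and a referral network. Candidate 412 is buried in the
# first two but surfaces near the top of the third.
ranking_a = [1, 2, 3, 4, 5, 6, 7]
ranking_b = [2, 9, 1, 8, 3, 4, 5]
ranking_c = [412, 7, 2, 1, 9, 8, 3]

print(sorted(pooled_shortlist([ranking_a, ranking_b, ranking_c], k_each=3)))
# -> [1, 2, 3, 7, 9, 412]

The point is structural rather than algorithmic: a candidate invisible to one model can still reach human eyes if more than one source feeds the shortlist.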

I wonder, I might be forming an idea for a business venture in real time right now. It seems like there's probably an opportunity for a job search platform that tries to filter out all the obviously good candidates, and maybe also all the obviously bad candidates, and then focuses on the long-tail candidates.

Yeah. Especially in the current job market, with unemployment rates where they are, that seems like an obvious market entry. And since, as you say, it's so easy for candidates to upload their materials and submit applications to lots of these sites, one more could probably enter the market without too much difficulty.

Yeah, no, I think it's a matter of whether companies or institutions have the appetite for spending a little bit more time and effort on it and for taking a risk. I think that's actually probably the main thing: are they willing to take on what they might see as a slightly more marginal candidate?

Mm-hmm. Talking about marginal practices, let's turn to online proctoring. This is an area that I think lots of people have a much better understanding of, at least generally, thanks to the pandemic in recent years, but can you tell us what's going on with the project? Sure. So any day or week now, this article called Big Proctor will be published.

And it's about online proctoring tools, which most people listening to this podcast probably know what they are, but they were implemented particularly during the pandemic. They are tools that typically tap into the computers of students or test takers (and those can be professional test takers, like bar exam takers) to lock down what they can browse or open and to monitor what they do digitally, but also to use audiovisual feeds from people's cameras to scan their rooms, to use artificial intelligence to make sure that they're the person on their ID, and also to purportedly automatically detect

whether or not they are cheating, and to raise flags if they're doing what the proctoring systems consider suspicious behavior. So what are the characteristics, good and bad, of these systems? Well, in some ways, these tools are seen as a way to promote fairness, because, you know, academic integrity is important.

It's important for employers to believe that students are actually qualified to do things; it's important to have qualified employees in society; but it's also unfair to students, if they've studied hard and worked hard, to be at a disadvantage simply because they have integrity, or because they're not as technically savvy or just fiendishly clever enough to create some sort of workaround that lets them cheat.

Mm-hmm, that's the good, right? And so we, I think, can both agree: cheating is bad. Cheating, bad. Okay. Cheating bad. But we're both professors. Cheating is bad, cheating bad. Right? Yes. I also teach professional responsibility, so cheating, doubly bad, not just on a moral level. And these tools let schools proctor at scale, right?

Having a proctor, a human proctor, observing each individual, that's a lot of individuals out there if you're giving out a lot of tests. The bad is that these proctoring tools are fairly invasive. They install what's basically malware onto people's computers in order to track them and lock down different features.

And they, in many cases, scan what is often a student's bedroom for hours, you know, monitoring them, their activity, the environment around them: whether someone else walks in, what the noise levels are. And that seems fairly invasive. But aside from privacy concerns, they're also incredibly inaccurate: the statistics are just astounding

how many false positives they come up with. Why are they so inaccurate? What are some of the challenges or examples of these inaccuracies? Well, I think they're inaccurate because they're fundamentally based on poor assumptions, right? They've fed lots of data into these systems to create what is a "normal" test taker's profile, and deviations from that are interpreted as suspicious.

Most of the time, certainly pre-pandemic, that normal profile did not involve a dog walking into the room, or a mother who needs to talk to her child banging on the door, or someone who's in a loud urban environment, or students who have nervous tics, or who are disabled and just have different ways they move and interact.

Any deviation is seen as suspicious. Mm-hmm. That is problematic. So you get a lot of really innocuous behavior flagged: one of the systems says you should look at the screen at all times, and if you look down at your desk for too long or say something out loud to yourself, it will flag that as suspicious behavior.

The companies are very, very clear that they do not determine cheating themselves. So it gets flagged and then sent for human review, or the grader, the professor, the teacher is told, "Hey, there's a flag here. You should review this video." And then you have the student's teacher watching them in their bedroom taking the exam, a separate set of issues there, which we actually will touch on in a couple of moments.
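To illustrate the flag-and-review pipeline described above, here is a deliberately simplified, hypothetical Python sketch; the signals, thresholds, and field names are invented, not any vendor's actual model. Behavior is compared against an assumed "normal" profile, deviations are flagged, and flags are only queued for a human to review.

from dataclasses import dataclass

@dataclass
class WindowStats:
    timestamp: float        # seconds into the exam
    gaze_off_screen: float  # seconds spent looking away in this window
    audio_level: float      # background noise, arbitrary units
    faces_detected: int     # faces seen by the webcam

# Thresholds assumed to come from a "normal" test-taker profile.
GAZE_LIMIT = 5.0
AUDIO_LIMIT = 0.8
EXPECTED_FACES = 1

def flag_windows(windows):
    """Return (timestamp, reasons) for every window that deviates from the assumed norm."""
    flags = []
    for w in windows:
        reasons = []
        if w.gaze_off_screen > GAZE_LIMIT:
            reasons.append("looked away too long")
        if w.audio_level > AUDIO_LIMIT:
            reasons.append("background noise")
        if w.faces_detected != EXPECTED_FACES:
            reasons.append("unexpected number of faces")
        if reasons:
            flags.append((w.timestamp, reasons))
    return flags

# The system never decides "cheating"; it only routes flags to a human reviewer.
windows = [WindowStats(120.0, 6.2, 0.2, 1), WindowStats(300.0, 1.0, 0.9, 2)]
for ts, reasons in flag_windows(windows):
    print(f"flag at t={ts:.0f}s: {', '.join(reasons)} -> queued for human review")

Note how a dog wandering in, a parent at the door, or a nervous tic would trip these fixed thresholds just as easily as actual cheating, which is one way the false-positive problem falls out of the design.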

But so we've got the false-positive concern. What about false negatives? Do these systems actually flag actual cheating? Not so well. I mean, yes, they catch some, but there are a lot of ways of cheating that they just don't catch. These tools have been in widespread use just long enough for research studies to start coming out now, and they're not particularly effective.

In fact, one study said that they act as a placebo more than anything else: students know they're being monitored, so they behave a little better, and the lockdown tools, like you can't open these notes, you can't access the internet, have some sort of protective power, but the detection tools are not particularly effective

in detecting cheating; they miss cheating. So when the researchers had people go in there and cheat in various ways, the programs didn't catch it. Yeah. I'm not going to start listing off examples, but just thinking on the spot right now, I'm thinking, yeah, I could do this or this or this or this,

and it probably wouldn't detect what I was doing. And fundamentally there's an architectural problem, I think, in how these systems are attempting to work, which is that they're generally relying on the computer's camera. And that's facing the student, and it isn't showing you what's in front of the student.

And if you really want to see what resources the student is using, is there another monitor behind their laptop that they're looking at, or something like that? You need to have a different image. You need to see what's in front of the student or on the student's desk, not just what's going on on the student's face.

So funny you should mention that, because many systems require students to do a room scan, a 360-degree room scan, at the very beginning of the test to see, you know, if there are illicit materials posted on the wall behind their monitor, mm-hmm. And these scans were just recently held to violate the Fourth Amendment in a really, really surprising decision in the Northern District of Ohio.

So say more about that. When I'm thinking about violating the Fourth Amendment, I'm thinking that the police show up at my door, kick the door down, and start taking all of my stuff. That seems like a Fourth Amendment violation. How does online proctoring, or how does Big Proctor, violate the Fourth Amendment?

Okay. So, a couple of steps here. As I said, this was a really unexpected decision. A student at Cleveland State filed this case for injunctive and declaratory relief, so there was no criminal aspect involved. The Fourth Amendment can apply to civil situations because it's still the government doing the search, even though that's generally not what we see as the paradigm Fourth Amendment case,

but, for example, when government workers come into people's houses as part of their receiving welfare benefits, that can be subject to a Fourth Amendment analysis, even though there's no criminal aspect there. And the government here is the university; this was a public university. Exactly, and that's where the government aspect comes in.

But as you sort of say, the student brought this suit, and the university defended, saying, one, this is not a search, the Fourth Amendment does not apply; and two, even if you consider it a search, it is a reasonable search. And yeah, the judge did not agree with those things.

It's really fascinating because this is a 10- to 20-second room scan, but the court did say that it was unreasonable because it did not comport with the student's reasonable expectation of privacy, and that, as to their house and their bedroom, the student did have a subjective expectation of privacy that the court recognized.

We could easily spend an hour talking about this particular case. I mean, it just seems, if I have a question about my home security and I invite a police officer into my house to give me a lesson on how to operate my alarm system (so kind of like taking a course), and the police officer sees my collection of highly illegal things that you're not supposed to have in your house;

well, I invited them in, and that's not gonna violate the Fourth Amendment. They can arrest me for that. It seems as though, if I'm a student and I avail myself of an educational institution and a class there, knowing that there's going to be an exam (now, this might be different if, due to the pandemic, circumstances changed and the student didn't understand that), the student didn't need to take the exam in his or her bedroom or house;

they could have gone to a library or something else, perhaps. So that was a point that was raised in this case: the student had health issues, which meant they could not take it in the school's spaces, and I think also not in public spaces, and given their living configuration, the only place they could take it was their bedroom. Mm-hmm.

And in terms of availing yourself of a benefit, one thing that the defense did bring up was that, you know, the consequence here is not the end of the world. They didn't technically compel people to use these systems, and if you didn't use the system, you didn't get credit for the exam, but that was still kind of okay, you could make that choice, which, you know, I think a lot of students out there would say is not really a choice.

But one thing, just tying back to our earlier discussion, that the court used as part of its evidence for why the governmental justification for this search was not enough, that it was essentially not a necessary government action, more just a regulatory sort of follow-through, was that the school did technically make these searches optional, so they didn't require all professors to use them,

and they never really had to enforce it against any students not using them, because all the students just said, yes, I'm gonna use them, because they didn't really know what was gonna happen if they said no. Right. But the judge said that actually shows that the school did not see these tools as necessary to stop the cheating.

Therefore it was sort of a weird, backhanded thing, where I'm sure the school's attorney thought that they were doing good by making it optional. But here the court used that logic against them, saying, well, obviously this tool isn't really necessary at all. And that's coming back really to both of your papers, the efficacy of these tools and

whether they actually do what they're purporting to do, or do it well enough to be relied upon, is kind of a theme that ties both of these projects together. And there's also a question of trade-offs. Do we prefer to rely on an AI-based, machine-learning-based scan of the student's room?

Arguably that's less invasive, because there's no human involved in that scan unless the scan indicates, perhaps inaccurately, or, to loosely use some legal jargon, unless there's some plausible cause for thinking that there was misconduct, that there was cheating going on, at which point human review might be justified.

Do we prefer that machine-based, non-human invasion, or do we prefer to have every instructor looking in on every student's room, so long as we're in this online environment? And there's a real equity trade-off that we can think about there. Yeah. And it's really sort of interesting. I always ask my privacy classes about this issue, about whether or not they care if it's a human or a machine on the other side of whatever the surveillance device is, and it used to be really clear that people cared if it was a human and didn't care if it was a machine.

And I think, as knowledge of what algorithms can do and how they might impact people's lives comes a little bit more to the forefront, they're less hard on that bright line. Interesting. I know you and I have talked about this a bit. One of my favorite studies shows this; it was a study that was done a decade or so ago

at this point. Humans tend to trust robots, like, to an absolutely absurd extent. The study that I really like involves a group of subjects who are brought to a room in an office building. They're waiting to have some meeting, and a fire alarm goes off, and a robot goes down the hall saying, "I am the emergency robot, follow me to safety.

I will help you evacuate the building." And the robot goes through various stages of obvious malfunction, like, you should not be trusting this thing that is leading you to your death, but people followed it nonetheless, because we tend to trust robots. And I know Derek Bambauer and Michael Risch have a recent article;

we had Derek on Tech Refactored to talk about this last spring. It's an empirical study, an experiment, that shows humans tend to trust algorithms more than humans. And perhaps there's a knowledge gap there, or an equilibration effect, as individuals better understand the biases that algorithms might have in them, which might bring them back to humans, or not.

So that's a fascinating change over time. What I usually say about my work is that humans are biased, bots are biased. They both can be defective in their own ways; to my mind it's really a matter of what kind of error is okay for you, and more than that, the process. What processes do they lend themselves to in terms of preventing errors, detecting errors, and fixing errors?

So to make sure that we are not committing an error of omission, what's up next for you? So I am turning back to more strictly student- and youth-privacy things. There's a lot going on in the school and child space in terms of protecting children or minors, depending on what law you're looking at.

And a lot of the student privacy rules, when enacted, had a sort of implicit best-interests-of-the-child kind of tone that has definitely been taken away from them over time, based on the literal language of the statutes. But I think people are trying to read that back in, both to educational tools and to technologies that are oriented toward or accessed by children.

So I'm going to be looking at those more strongly. Well, definitely an area that, frankly, not many people are working in, and there's lots of concern, a lot of issues. I know I have done some work that focuses a little bit on the consumer protection side of these issues, and all of those issues nowadays dovetail,

thanks in large part to our friend technology, and all the technologies available to students: the devices students are able to bring into schools, that they carry with them wherever they go, their computers, their laptops, and their phones, and devices sometimes given to them by the schools.

And of course, all of the ed tech issues: educational institutions and teachers, regardless of what their institutions might be doing, are going to try to bring these devices into their pedagogy. And generally we should applaud them for at least trying to be innovative, but there are a lot of ways to do it in a particularly clumsy fashion.

Well, hopefully not to be particularly clumsy in bringing this conversation to a close: thank you as always, Elana. It's always great to catch up and talk about your work, and I look forward to seeing what comes next and how the discussion continues to develop. Thanks for having me.

Tech Refactored is part of the Menard Governance and Technology Programming Series, hosted by the Nebraska Governance and Technology Center. The NGTC is a partnership led by the College of Law in collaboration with the Colleges of Engineering, Business, and Journalism and Mass Communications at the University of Nebraska-Lincoln.

Tech Refactored is hosted and executive produced by Gus Hurwitz. James Fleege is our producer. Additional production assistance is provided by the NGTC staff. You can find supplemental information for this episode at the links provided in the show notes. Stay up to date on what's happening with the Nebraska Governance and Technology Center.

Visit our website NGTC.unl.edu. You can also follow us on Twitter and Instagram at UNL underscore NGTC.