Tech Refactored

Section 230, Internet Law, and Emoji Law

December 09, 2022 Nebraska Governance and Technology Center Season 3 Episode 15

Eric Goldman, Law Professor at Santa Clara University, joins the show to discuss Section 230 as he and Gus examine its origins, along with four current SCOTUS cases that could potentially impact the future of the internet. Together they also touch on Emoji Law and how Internet Law is changing over time.

Goldman runs the award-winning Technology & Marketing Law Blog, and has long been a go-to source on Internet Law. Goldman is also the Associate Dean for Research, Co-director of the High Tech Law Institute, and Supervisor of the Privacy Law Certificate.

Follow Eric Goldman on Twitter @ericgoldman
Follow Gus Hurwitz on Twitter @GusHurwitz
Follow NGTC on Twitter @UNL_NGTC

Links
Technology & Marketing Law Blog
Nebraska Governance and Technology Center

Disclaimer: This transcript is auto-generated and has not been thoroughly reviewed for completeness or accuracy. 

[00:00:00] Gus Hurwitz: Welcome to Tech Refactored, a podcast in which we explore the ever-changing relationship between technology, society, and the law. I'm your host, Gus Hurwitz, the Menard Director of the Nebraska Governance and Technology Center. Today I'm talking with Professor Eric Goldman about a federal law known as Section 230. 

[00:00:31] Eric Goldman: I'm a professor of law and Associate Dean for research at Santa Clara University School of Law, which is located in the heart of the Silicon Valley.  

[00:00:36] Gus Hurwitz: He's also the co-director of the High-Tech Law Institute and a supervisor of the Privacy Law Certificate program.

Professor Goldman has been working on internet law issues for nearly 30 years, and is the author of a leading book on internet law that is used in classes around the country. He also runs the Technology & Marketing Law Blog, which is one of the best sources for current information about recent developments in internet law.[00:01:00]

And he's also the leading scholar in emoji law, which he and I joke a bit about in our discussion. But in all seriousness, it actually offers a fascinating perspective on challenging and timeless legal questions. Our discussion today covers quite a range of topics, from what internet law is as a legal field and how that field is changing over time, to a bit on, you got it, emoji law. But our main focus is Section 230 of the Communications Decency Act.

At one time, Section 230 was a relatively obscure statute. Enacted as part of the 1996 Telecommunications Act, it empowered, but didn't require, online platforms to moderate their users' speech, clarifying existing law by making clear that platforms don't assume liability for their users' speech when they do elect to moderate it.

In recent years, Section 230 has become very controversial, and controversial in a way that tracks a sharp political divide. In our conversation, Professor Goldman walks us through the background on Section 230 and some of this controversy, and then we turn to four cases that the United [00:02:00] States Supreme Court is likely to decide in its current term.

Professor Goldman helps to outline the issues that are going to be before the court in these cases, how the court might decide them, and what those decisions might mean for the future of the internet. Our discussion is a few minutes longer than usual because we have a lot of ground to cover. So let's log in and see what information content Professor Goldman has to provide for us.

We have an active Supreme Court term coming up, with the Supreme Court hearing a couple of cases about internet law, and I'd love to get your perspective on these. Uh, I think that the way that I framed it to you in email was: is the Supreme Court going to break the internet? And we'll get to that question, perhaps, but

for folks who might not have thought about what internet law is before, can you just give us the capsule summary of what is this thing that you and I think about that we call internet law?

[00:02:57] Eric Goldman: Well, it used to be that internet law [00:03:00] was pretty discrete from things that people would run into in law school. It really related to how people were engaging with each other over a digital network.

So we could think about people logging onto the internet, consuming content or engaging with services, and then all the legal liability that can flow from that, whether that's fraud, uh, where someone's losing money, or whether that's the control over a person's privacy, or whether it's things like, uh, liability for what people are saying to each other.

Nowadays, internet law has become, uh, less discrete or separate from the rest of our curriculum, because so many of the cases we're running into involve people engaging online. There's really a blurring of the online and the offline. So internet law in some ways is a bit of an anachronism; it contemplates the time when it was more clearly discrete from the rest of the curriculum.

[00:03:52] Gus Hurwitz: Yeah, that's something that I've puzzled over a bit. And I think a couple of years ago I might have asked you about this, actually. Um, [00:04:00] internet law is now just law, and whether all law should, I mean, coming from the law school perspective, teaching law school classes, whether it's possible to have a class that doesn't consider the internet aspects of tort law or property or corporations or whatever the area of law is.

[00:04:18] Eric Goldman: Yeah. Increasingly, I'm getting students in my internet law class who've studied one or more of the cases that I'm teaching, because it crossed over to the standard doctrinal treatment in some other class. So, for example, when I teach contracts, it's not uncommon that the principal case I use for contracts is a case that students have already encountered in their first-year contracts class.

[00:04:39] Gus Hurwitz: Yeah. So, I guess one of the core areas of what we think of as internet law is still pretty coherent: Section 230 of the Communications Decency Act. Before we turn to that, are there other really prototypically unique internet law topics that you think about at a very granular level?

[00:04:58] Eric Goldman: There's all kinds of [00:05:00] really interesting corners of internet law, and I'll give you an example.

Things like the allocation of domain names, or the allocation of IP addresses; there are some very arcane rules and possible laws that govern something like that. But a classic internet law doctrine is trespass to chattels, um, involving when you can use somebody else's servers connected to the network.

Which is actually an incredibly ancient doctrine. It's just that nowadays you never run into it except in the internet context. So I'm always excited to teach something like trespass to chattels, because it's something that my students might have encountered, but they had no idea why they cared. And then I can show them: actually, we care a lot about that question on the internet.

[00:05:40] Gus Hurwitz: Yeah. So there's a famous case that I expect, uh, we both teach that gets to this question: is a computer server a thing, or is it a place? So if I hack into your server, or if I, uh, use your server to send unwanted email, am I somehow going to a place, [00:06:00] the internet? Or is this just a server sitting in a rack in some office somewhere? And it matters a great deal, because there are different doctrines: trespass to land if it's a place, trespass to chattels, or something called conversion. And how we classify and how we use our metaphors is going to affect what law we might apply or how we develop the law in this area.

[00:06:22] Eric Goldman: It's one of the great old internet law questions: is the internet a physical place, and does it matter? And trespass to chattels is a good example where, as you point out, if it's real property, we're gonna have a different set of rules, possibly, than we do if it's treated as a chattel.

[00:06:37] Gus Hurwitz: And I have to ask: you are the preeminent scholar of many fields, uh, many topics. One of them is certainly emoji law. This is going the opposite direction, perhaps, from really old doctrine to new developments in language.

What's, uh, your capsule summary? Or, what's the emoji version of emoji law?

[00:06:58] Eric Goldman: Well, it doesn't really [00:07:00] translate to podcasts to try and talk about the emoji version. Um, we could name the emojis in sequence, but there's something that I think gets lost in that equation. But emoji law is actually a great encapsulation of internet law.

It's a good example of one of these corners. When you get down to a more granular level: is there something unique, special, or different about the ways that emojis communicate that requires a different set of legal principles than other forms of non-textual communication? Things like vocal inflection or hand gestures or, uh, facial expressions or even body language.

These are all ways in which we communicate that the law accommodates, but maybe incompletely. And emojis are another kind of example of ways that we talk to each other that the law has to now accommodate. The main point that I teach when I talk about emoji law is that there are some things that are unique, special, or different about emojis, the most obvious being that the emoji that I see on my phone if I send a message might be different than the emoji that appears on their phone when they receive that [00:08:00] message.

And we might not know that that substitution is taking place. So there's a bunch of misunderstandings that can occur because of the fact that emojis look different in different settings, and people don't realize that.
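
To make that point concrete, here's a minimal Python sketch of what actually travels between phones when you send an emoji. The message text is invented for illustration, but the codepoint behavior is standard Unicode: only the codepoint is shared, and each vendor's font supplies its own artwork for it.

```python
# A minimal sketch: an emoji travels as a Unicode codepoint, and the
# codepoint is all that's standardized. Sender and recipient receive
# identical data, but each platform draws its own vendor artwork.
import unicodedata

msg = "see you there \U0001F62C"    # U+1F62C GRIMACING FACE (hypothetical message)
for ch in msg:
    if ord(ch) >= 0x1F000:          # rough filter for emoji-range codepoints
        print(f"U+{ord(ch):X}", unicodedata.name(ch))
# -> U+1F62C GRIMACING FACE
# The bytes are the same on both phones; the rendering is not, which is
# where the cross-platform misunderstandings Goldman describes creep in.
# (Grimacing face is a classic example: early vendor artwork for it varied
# widely in the emotion it appeared to convey.)
```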

[00:08:11] Gus Hurwitz: So, we are nearing the end of the semester, and one of the things I, uh, discussed with my own students today, and this is just torts, a foundational first-year class that every student takes, is how the law is changing, and how one of the challenges of their careers is going to be understanding how technology is changing these long-standing doctrines.

And I've really approached the class as: how does the law change over time, and how does the law develop as society, and the technologies that we as a society rely on, develop? When you think about, and when you teach, internet law, are you trying to do something similar with the field? Or, since it is so quickly moving and updating and changing in real time, is it [00:09:00] enough to just keep track of what the current doctrine is, and teach students what the law is today? It will be different tomorrow, but let's just figure out what it is today.

[00:09:11] Eric Goldman: Uh, yeah. I definitely approach my internet law class as a law-and-technology class, where technology and the law are both constantly evolving.

And lawyers in the Silicon Valley have to be able to advise clients when there's uncertainty on the law and possibly on the facts. That's just a fact of life in the Silicon Valley. That's what we do, and the good lawyers know how to do that really, really well. So my job as a teacher is to try and scoot students up that learning curve to figure out, you don't always have all the information you need.

You can't always say with confidence what the law is, and yet you still have to make a recommendation. So a lot of what we do is try to anticipate not only where things are today, which is hard enough, but to think about what are the policy dynamics, what are the moral norms, what are the social [00:10:00] dynamics, and what is the technology doing that might change the answer when you're actually tested.

I don't think I can teach that perfectly, but it's absolutely a core part of the curriculum. 

[00:10:09] Gus Hurwitz: How has teaching internet law changed over the years that you've been teaching? And when I say that, I don't mean the subject matter, but the student experience and the pedagogical experience of working with the students.

[00:10:23] Eric Goldman: Well, actually, I do wanna mention the subject matter, cuz it's related to your last question as well. You know, how do we teach the class when the law is changing so rapidly? The reality is that I haven't done a lot of structural updates to my casebook in a few years. Um, a number of the cases now are 10 years plus.

They haven't changed in a while. And I think that this is the last year I'm gonna say that, because I suspect that by, uh, next year I may have to revamp the entire casebook based on what we're gonna discuss in the rest of our podcast. So we're at the cusp of another major iteration of internet law, after actually having a [00:11:00] fairly calm period of time.

The main change that I see outside of the subject matter is the fact that the students are different. They're all digital natives at this point. They're all consuming the internet on a regular basis. And here's the real payoff: so many of the students come to the class cynical about the internet as a technology.

It's a really odd phenomenon, because we would think that digital natives would love the internet. It's an integral part of their lives. But in fact, many digital natives hate the internet. They have a negative view, and so I'm actually teaching against that. I'm trying to come in and say: hey, remember, there's some good things about this that are worth fighting for, when they're coming in with the assumption, let's burn it all down.

So it's really changed the way I teach, because I can no longer presume that the students feel at all the way I do about the value of the internet. I have to assume now they're actually coming in adversarial, a 180-degree opposite position on the [00:12:00] topic, and then I'm fighting against that. I'm curious, are you having that experience too?

[00:12:04] Gus Hurwitz: Uh, yeah, in some ways a very similar sort of experience. Uh, several years ago, I very much remember having to really engage the students and push the students to think about the dangerous, uh, and problematic parts of the internet. I remember in particular teaching during the Arab Spring, when the internet was bringing down oppressive governments and regimes, and everyone was: wow, this is really great.

This is really cool. And, yeah, but let's look at some of the problematic areas and challenges that the law has with these areas. And just today, uh, not even in my internet law class, um, I was talking to my, uh, first-year students, kind of doing a capsule introduction to privacy torts and defamation law.

And the examples that the students wanted to go to were things like doxing and swatting. They were really interested in these really contemporary negative uses of communications technology. [00:13:00] And it was remarkable to me that that's right where they went. So that definitely tracks. And I'll also say, not in my internet teaching, but in my telecommunications class, I've done the same thing.

Uh, the authors of my casebook, the teaching materials I use, a few years ago they started trying to make the book about the internet instead of telecommunications. And I've stuck with the version of the book that's now six or seven years old, and I've updated it with some additional materials. But the foundational principles, you have to understand them.

And foundational questions: you lose so much when you're just trying to chase the law and not focusing on what are really the underlying issues that are much more timeless.

[00:13:39] Eric Goldman: Well, it's also a good point. Another thing that's changed is a lot of the students don't know about some of the older technologies.

We've been doing a rewatch of, uh, Seinfeld, and the kids are just baffled by things like call waiting, or, you know, the ability to dial back a phone number. Or the last episode we just watched was about speed dialing. [00:14:00] And the kids are like: what is that? Why would I care? And they don't even know how much of an advance that was at the time, and now it's, uh, passé.

So, um, you know, when it comes to the internet, there's all these things that people used to be doing. And I'll just give an example: Usenet. It was a foundational technology; from a legal standpoint, there are a lot of really interesting Usenet law cases. But the students would never have any idea to think about Usenet nowadays.

[00:14:27] Gus Hurwitz: Yeah, so Usenet was a messaging system, basically a messaging board that was distributed across many servers. So it was kind of the original Reddit, you could say, for the Reddit folks who, I guess, still know it.

[00:14:41] Eric Goldman: Um, it's actually more like the original Mastodon, I think.

[00:14:45] Gus Hurwitz: Yeah, Mastodon pre-Mastodon, the way that it was federated around.

Yep. Well, I'm going to transition us to talking about the Supreme Court, and do so with an unfortunate trope that I think is less true than it once was, but might still be [00:15:00] more true than not: let's talk about changing the law, and folks who don't understand how the technology works. Um, so, uh, the Supreme Court has a few cases that are relevant to internet law and our areas generally.

The big ones certainly relate to Section 230; or, when I say big, the closest to the sort of stuff that you and I work on. There are a few other cases I expect you might want to talk about, but I'm gonna just leave that to you, Eric, to let me know what cases you want to talk about. But I'll ask you to just kind of queue up the discussion.

What is the Supreme Court thinking about this term that's relevant to these? 

[00:15:38] Eric Goldman: Yeah. And before I get to that, I do want to just note this, uh, this stereotype that the Supreme Court justices are out of touch with the technology. And I think that there's some truth to that, but nowadays we are struggling with judges who are making what might be considered to be bad-faith statements of the facts or the law.

And [00:16:00] so it's become increasingly difficult to understand: are the justices not getting it, or are they getting it just fine, and they've just decided to misportray the situation in order to advance their narrative? And so the Supreme Court's treatment of the internet very much runs the risk that we're going to get statements of fact that are not actually based on cluelessness.

They're based on malice against, uh, the, uh, internet.

[00:16:23] Gus Hurwitz: Um, well, we know what Section 230 says about, uh, maliciously posted content.

[00:16:29] Eric Goldman: Well, uh, they're liable for it, except that Supreme Court justices are liable for nothing. They have no rules that govern their behavior, apparently. So, there are four Supreme Court cases that I'm interested in, and one of them hasn't been appealed to the Supreme Court yet, but let me enumerate them and then we'll figure out how you wanna structure the conversation.

The case that everyone is paying attention to, the one with the greatest potential impact for the internet is the Gonzalez versus Google case, which involves, well, we'll talk about it, but it involves a square [00:17:00] question about the scope of Section 230, and the Supreme Court's answer on that could very well dictate how big or how small Section 230 ends up being in practice.

Uh, there's a parallel case, which I'm not good at pronouncing; I think it's something like Taamneh, and that is, uh, Taamneh versus Twitter. That's another case that involves a very similar set of facts. That case is a statutory interpretation case. It involves the Anti-Terrorism Act, um, and how it might apply to internet services.

So that case has potentially less scope, only because it's interpreting a different statute, not as broadly applicable as Section 230. But if the Supreme Court reaches a weird result there, that also has a significant impact. There are then two related cases that are on a different thread, that don't directly involve Section 230, and yet the resolution of those cases might very well impact Section 230 very structurally. There's an appeal that's been filed in the NetChoice versus [00:18:00] Florida case,

Very structurally, there's an appeal that's been filed in the net choice versus [00:18:00] Florida. Which involves a law that Florida passed in 2021 called the Social Media Censorship. And I'm sure we'll talk about the details out. There's a lot going on in that law. And then Texas passed a similar law also in 2021, also called Social Media Censorship Law.

And that ruling will be appealed from the Fifth Circuit, and it is possible, and my hope is, that those cases will be combined together, and so we'll have effectively really two cases. There's gonna be very similar parallel treatment of the Gonzalez and Taamneh cases, and then there should be parallel decision-making with the two NetChoice cases, the one from the Eleventh Circuit involving Florida and the one from the Fifth Circuit involving Texas.

[00:18:44] Gus Hurwitz: Yeah. So let's, uh, start with the Gonzalez and Taamneh cases. And you and I just throw around Section 230 like it's nothing, because it's so important to us. It's the opposite of nothing. Um, but if you could just give the capsule explanation [00:19:00] of, uh, what has Section 230 been to date?

[00:19:04] Eric Goldman: Uh, Section 230 says, to summarize very efficiently, that websites aren't liable for third-party content.

Uh, and it is this basic premise that's become the foundation of our modern internet, that allows us to talk to each other as opposed to somebody talking to us. So in cases involving social media, for example, the idea is that the social media creates a structure that allows people to talk to each other, and the social media venue isn't liable for the conversations that then ensue. Um, so however people are talking to each other, whatever terrible things they're saying or doing to each other, the idea, as Section 230 says, is that the services aren't liable for that. It still leaves open the possibility that the users are gonna be liable for whatever they do.

If they committed a crime or a tort, they should be liable for that under standard first principles. 

[00:19:52] Gus Hurwitz: So Section 230 was enacted in 1996 as part of the 96 Telecommunications Act, uh, legislative [00:20:00] package. And usually statutes like this, if they're so important, they might get up to the Supreme Court eventually for some clarification or a debate over what they actually mean.

Has, uh, the statute had Supreme Court level clarification, or what level of authority do we have in understanding what section 230 means? 

[00:20:21] Eric Goldman: So it has never been interpreted by the Supreme Court. Uh, there have been, uh, countless appeals of Section 230 cases to the Supreme Court. Uh, I don't know how many, I can think of a dozen or so off the top of my head.

And the Supreme Court has denied all of those. Now, there's certainly been statements about Section 230 coming from Justice Thomas, who is aware of it and clearly gonna be cynical about it. So we know that there's been discussion about it within the chambers. Um, but in terms of actual opinions from the Supreme Court, um, as statements of a majority, we've never had anything like that.

So this is gonna break new [00:21:00] ground no matter what. It's going to resolve any possible confusion that might exist at the district court level, where we've seen some really inconsistent results. I don't think there's conflict at the appellate court level, but to the extent that appellate courts are reading it in different ways, or more or less expansively, uh, the Supreme Court's likely to harmonize those as well.

[00:21:22] Gus Hurwitz: Yeah. It's, uh, worth noting, you mentioned Justice Thomas: I think it was two years ago, in an opinion, he called Section 230 an increasingly important statute for the internet. And, uh, I think it's, uh, curious, in 2019 or 2020, to be referring to Section 230, which has been foundational for the growth of the internet over the last

25 years or so, as an increasingly important statute for the internet.

[00:21:50] Eric Goldman: This might be an example of many where a Supreme Court statement about the facts or the technology might not be designed to tell the truth. It might be designed [00:22:00] to tell a narrative.

[00:22:01] Gus Hurwitz: So, the issue in the Gonzalez... and let's focus on the Gonzalez case.

Uh, Taamneh raises similar issues, but I think, as you explained it, it's really focused on a separate statute, and they've been consolidated. So the 230 issues will probably be discussed in terms of the Gonzalez case. So can you explain what's going on in that case, and how it raises, uh, Section 230?

[00:22:29] Eric Goldman: Yeah, both, uh, cases involve very similar sets of facts. There was a spate of lawsuits, I've counted, I believe, about 20 or more, that alleged that social media services were facilitating terrorist attacks by allowing the terrorists to have accounts to talk online, and in some cases recruit new members of the organization, or even to inspire them to commit terrorist attacks.

So both the Gonzalez and [00:23:00] Taamneh cases involve terrorist attacks that were committed, where the plaintiffs allege that the social media services were an integral player in that terrorist attack because the terrorists had been using the social media service. So the Gonzalez case focuses on the YouTube service.

The Taamneh case is on Twitter. In each case, the allegation is: terrorists, uh, were there, and, dot, dot, dot, they were therefore an integral cause of the terrorist attack.

[00:23:28] Gus Hurwitz: And Section 230 comes up because the platforms, Google, which owns YouTube, and Twitter, are saying: this content is user-generated. We didn't generate it.

So Section 230 shields us from liability for having this troubling content. Correct?

[00:23:43] Eric Goldman: Right. As I said, Section 230 says websites aren't liable for third-party content. So to the extent that terrorists are publishing content on Twitter or YouTube, no matter how vile it might be, presumptively Section 230, uh, would apply to it. They shouldn't be liable for what the terrorists are [00:24:00] saying.

[00:24:00] Gus Hurwitz: So I'm going to throw the softball straw man at you, uh, Professor Goldman: surely you don't support terrorism.

[00:24:08] Eric Goldman: Uh, no. However, I will point out, as uncomfortable as it might be, some of what the terrorists are saying may be protected by the First Amendment. And this is a really essential point for people to understand, because the counter-narrative against terrorists is that many of those people who object to the fact that terrorists played a role in these terrorist attacks will also object to the idea that the services would intervene with respect to First Amendment-protected content.

And it leads to this inevitable conundrum: if terrorist speech is nevertheless protected by the First Amendment, it's possible that we don't even want the services to be intervening with respect to it. If the services exercise their editorial control over it, they're actually then downplaying First Amendment-protected content.[00:25:00]

So it's not really about whether or not a person supports terrorists. It's really more about: do we believe that terrorists have First Amendment-protected rights to speak? And if they do, as counterintuitive as it sounds, then we have to wonder about a legal system that would require the services to remove that speech.

[00:25:18] Gus Hurwitz: Yeah, so at some level there's a question: how do we identify what constitutes terrorist speech, or other problematic speech, or hate speech, or distasteful speech, or speech that the owners of a platform may or may not like, or that the owners of a platform are using to organize or draw users to the platform?

And I'll just ask that as a question: how do we differentiate, or how do platforms differentiate, between different types of First Amendment-protected speech?

[00:25:51] Eric Goldman: So I don't quite organize the discussion that way. So lemme tell you how I organize it, and you'll see how I think we get at the answer to your question. [00:26:00] There's really three categories of content that services are dealing with.

There's the category of content that's illegal. There's the category of content that is not illegal, or, you could say, is protected by the First Amendment; I'm fine with either way of saying it.

So what we generally want is we want the services to remove a legal content. We generally are agnostic about how the services handle the content that they think is good for their audience, cuz they're gonna wanna share it anyway. And we're really focused on this middle. This content that is protected by the First Amendment, but the services nevertheless conclude is not fit for their audience.

What are some of the things in that category? It could include things like pornography. Some services may say pornography is protected by the First Amendment, but it's not appropriate for my audience in the [00:27:00] following circumstances, or at all. It could include things like hate speech, where people are talking to each other uncivilly.

The Constitution might protect those statements, even as vile as they are, and yet the services might conclude that that's not fit for their audience. They don't want that level of discourse on their service. And so the real battle has been over this, what we might call lawful-but-awful speech: speech that a service objects to, but that nevertheless is protected by the First Amendment. And the general partisan split has become that the Democrats, or liberals, want the services to nevertheless remove that content, and maybe be obligated to do so.

And the Republican or conservative talking point has. That the services should not have the power to remove it if it's protected by the First Amendment. 

[00:27:50] Gus Hurwitz: And from a First Amendment perspective, both of those kind of sound troubling to me.

[00:27:56] Eric Goldman: Yes, that's a hundred percent correct. And let me restate it. [00:28:00] It turns out that censorship is a bipartisan concept.

Both parties love censorship. They just don't agree on what censorship they want. So in this middle zone of lawful-but-objectionable, or lawful-but-awful, content, the fact that both parties want to tell services what they must do, we should object to that on principle. Neither of them should have the right to say that.

And interestingly enough, that's essentially what Section 230 says. Section 230 essentially says: you can decide what level of awful content you want on your site, anywhere from zero to a hundred, and the law will give you protection for that. So Section 230 is actually the bypass, or the workaround, to this otherwise partisan gridlock, where the parties are agreeing that they like censorship, they just want different forms of censorship.

[00:28:48] Gus Hurwitz: So what if a platform knows that there is speech content being put up onto its platform by users, and that [00:29:00] content could be legally actionable, even if it's First Amendment-protected speech? So turning to, uh, some of the speech, uh, allegedly at issue in these cases, incitements to violence or terrorism-related speech: how does that affect what the platforms should be doing, or how they should be thinking about this?

[00:29:20] Eric Goldman: It sounds so easy: like, okay, when an item of content comes through, it comes through with a flashing neon light, "this is terrorist content that's not protected by the First Amendment." And if that were true, we would expect the services to intervene and take care of it. That's not the way it presents itself.

And let's give the most obvious example: an organization like the Taliban, which we classify as a terrorist organization. They're also the functioning government of Afghanistan. So if anything that relates to the Afghanistan government is being published online, it's coming from the Taliban. What are we supposed to do in a circumstance like that?

Things like beheading videos are awful, and I would be [00:30:00] happy never to see a beheading video in my life. And yet the Constitution might very well protect that. It's a form of speech; it's a statement. It's a terrible statement, one I hope people don't make, and yet that's not the measure for the Constitution.

Um, so this idea about knowledge: when a service knows there's a problem, what do they have to do? Section 230 generally says that the services aren't liable, even if they know. But the brilliance of that is actually the problems with determining if content is legally objectionable or not. That effort is the whole ball game.

That's what content moderation is all about, and it's an imperfect science. And Section 230 says, we're not going to get into the nuances of when a service knows or doesn't know anything, and what kind of inferential or circumstantial evidence you're going to introduce to show that they should have or could have known about this.

None of that matters, and that's why Section 230 has become so foundational. Because if [00:31:00] we get into those very epistemological questions, what does someone know and when, we know that the ball game is over.

[00:31:07] Gus Hurwitz: Let's turn to what you think is going to happen in the Gonzalez and, uh, Taamneh cases.

[00:31:13] Eric Goldman: Let's talk about the Gonzalez question presented, because it's a really interesting question presented.

The Supreme Court said this is a question that we want to opine about. The plaintiffs allege that there's a thing called traditional editorial functions, that when a service performs them, they are covered by Section 230. And then there's a thing called algorithmic recommendations, which the plaintiffs allege is not a traditional editorial function and therefore is not covered by Section 230.

Now, that question presented has got all kinds of problems, but the most important one is this concept that there's a thing called traditional editorial functions, and that's what Section 230 protects. Now, this invites a lot of mischief. As we know, the Supreme Court loves to [00:32:00] talk about tradition and what people were doing in 1790.

Well, and I'll tell you, there was no online content moderation in 1790. So if the Supreme Court wants to go all textualist on this, then they could simply say: there was nothing that resembles online content moderation in 1790, there is nothing called a traditional editorial function, and Section 230 covers nothing.

So the question presented invites some very deep mischief. But I find it incoherent to think that there's this thing called algorithmic recommendations, and that's somehow different than whatever we're gonna call a traditional editorial function. Publishers do three things: they gather, organize, and present or disseminate content, and algorithmic recommendations are just one of the many ways a service can present content.

So you asked the question, what's gonna happen in the Gonzalez case? Let me give you the kind of range of outcomes. One outcome is the court [00:33:00] strikes down Section 230, either saying it's unconstitutional, which is not really the question presented, but they don't care, or saying it never applies to any of the circumstances that we're addressing.

Therefore, Section 230 is gutted; Congress, if you don't like our interpretation, fix it. A second outcome: they could say that Google slash YouTube wins, but they could say it in a way that says that YouTube was engaging in Section 230-protected activity in this circumstance, but there's a wide range of other circumstances that are not covered.

So even though the defendant wins, Section 230 still gets gutted, and we still end up with a strategic loss for the internet. A third scenario is that the Supreme Court fractures and we get a hairball of opinions that have no consistent holding with each other, and then they leave it to the lower courts to figure it out, which would be a terrible outcome, because right now we don't have that.

Right now we have consistency, generally, in the appellate interpretations of Section 230; the Supreme Court could [00:34:00] blow that away. And there's one and only one scenario where we actually get the internet as it looks today, and that's where the court says YouTube wins, and Section 230 has been interpreted roughly correctly, in a clean enough way that doesn't invite the mischief of courts trying to find an exclusion to what the Supreme Court said.

If we don't get that, this case will be a strategic loss for the internet. And you can see why I'm panicked.

[00:34:26] Gus Hurwitz: Mm-hmm. Yep. Just given, as you say, the question presented, uh, it's hard to see how the court is going to get to that conclusion. It's clearly being presented to focus on algorithmic decision making, and we should say: algorithmic decision making isn't some magic pixie dust sort of thing.

Effectively, every journalistic enterprise that has exercised editorial discretion, they've done it according to some rule book. They have some editorial values. They've got some standards for what [00:35:00] content they're going to, uh, publish or reject. That's an algorithm. Just because it's some AI machine learning thing, it still is fundamentally a way of exercising, uh, discretion over what content is going onto your platform.
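
To make the "rule book as algorithm" point concrete, here's a minimal, purely illustrative Python sketch; the rules, weights, and story data are all hypothetical, not any outlet's or platform's actual system. The point is that codified editorial judgment and an algorithmic feed are the same kind of object: a repeatable procedure for scoring and ordering content.

```python
# A hypothetical editorial "rule book" written down as code. Whether a
# human applies these rules or a machine does, it's an algorithm either way.

def editorial_score(item: dict) -> float:
    """Score a story the way an informal editorial rule book might."""
    score = 0.0
    if item.get("breaking"):                  # breaking news gets prominence
        score += 10
    if item.get("local"):                     # assume this outlet favors local stories
        score += 5
    score += item.get("reader_interest", 0)   # e.g., letters, clicks, shares
    if item.get("violates_standards"):        # house standards exclude some content
        score = float("-inf")
    return score

stories = [
    {"title": "City council vote", "local": True, "reader_interest": 3},
    {"title": "Celebrity rumor", "reader_interest": 8, "violates_standards": True},
    {"title": "Storm warning", "breaking": True, "local": True, "reader_interest": 2},
]

# "Front page" ordering: the gather/organize/present step, made explicit.
for story in sorted(stories, key=editorial_score, reverse=True):
    print(story["title"])
# -> Storm warning, City council vote, Celebrity rumor (effectively buried)
```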

[00:35:15] Eric Goldman: A hundred percent. And let's take that further. Let's talk about a traditional print publisher, like a newspaper. There may be codified guidelines, but at minimum there are informal understandings about how many words a particular article would get, whether it should be on the front page, in the middle, or on the back page, and how big the font should be in the headline.

All of these were formulas or rules or guidelines to help decide how to present content to readers in a way that would help prioritize some content over others. You cannot publish a newspaper without making those judgments. And you could make it up every single time from scratch, but that's not how publishers actually work.

So [00:36:00] this idea about algorithmic recommendations really is just another fancy way, as you said, of describing what publishers have done all along. They've always made these decisions about how to present information to their audience. And so raising that as a distinction from traditional editorial functions is fundamentally incoherent.

And it would be great if the Supreme Court said it. If they don't, bad things are likely to ensue.

[00:36:21] Gus Hurwitz: So let's talk about the other "bad things are likely to ensue" cases, Florida and Texas. Can you tell us what's going on here?

[00:36:29] Eric Goldman: Florida and Texas both enacted laws that were what I call MAGA laws. These were laws that were designed to appeal to the voters who support the Make America Great Again philosophy.

And we typically call those something like messaging bills. They're bills designed to just tell the voters: we hear ya, we love ya, we're paying attention to your interests. But they're not actually designed to pass; they're just to get the crowd cheering. So [00:37:00] both Florida and Texas passed MAGA messaging bills

that cover a wide range of different ways of controlling conversations online. I can't go through all of the details, but let me just give you a few highlights, or lowlights, depending on your perspective. Both of the bills tried to govern the process of content moderation. So in the Florida bill, it says that content moderation must be done consistently.

And in the Texas bill, it's framed as: it must be done in a viewpoint-neutral manner. And the idea is, they're trying to say that if the Democrats are getting favorable content moderation, the Republicans should get no less favorable content moderation. So, you know, basically: treat both parties equally. But that's not what they said, and they couldn't really say that.

What they really say is: if there's any viewpoint discrepancy about a topic, you have to treat both viewpoints, or all viewpoints, as equally legitimate. So it isn't just Democrats versus Republicans. It's also the Libertarian Party. It's also the Green Party. It's also the Nazi party. They all have to be treated consistently, or they all have to be treated in a viewpoint-neutral manner.

So take a subject like vaccinations. What the law says is, if you're publishing pro-vax content, you have to treat the anti-vax content as equally legitimate. So there might not be a scientific debate on a topic, but as long as people are still disagreeing with each other, you have to treat both as equally legitimate. This is just flat-out censorship.

There's no ambiguity about it. It's saying: you don't have discretion to decide what's good for your audience. We're gonna tell you: if you pick topic A, you must pick topic not-A, and maybe A versus not-A is a completely stupid question. They all have to be treated equally. You don't get a choice about that.

[00:38:52] Gus Hurwitz: And it's kind of like the fairness doctrine on steroids.

[00:38:56] Eric Goldman: It really is back to the fairness doctrine. And it's so [00:39:00] baffling, because the conservatives waged such a bitter war to ultimately kill the fairness doctrine. That was one of the big crowning achievements of the Reagan era, and here we are again: the Republicans are the ones who are saying, let's impose a fairness doctrine onto the internet.

And one of the reasons why the fairness doctrine was ultimately scuttled, and so bitterly opposed by conservatives, is they felt like they didn't get a great deal under that. And in embracing it again, they're not gonna get what they want; they didn't learn any of the lessons from the fairness doctrine. I do wanna mention one other major chunk of the laws, um, because this is the part that I'm weighing in on. Both of the Texas and Florida laws have what I call editorial transparency requirements.

They require the services to publish information about their editorial operations and decisions. This is something we haven't seen in the offline world. We don't ever tell newspapers: you have to tell us what stories [00:40:00] you kiboshed, and you gotta tell us why you kiboshed them; or, you have to tell us how many letters you received and why you chose not to publish 'em.

But these laws would require all of that and more. And both the Eleventh Circuit and the Fifth Circuit, which split on the question about whether or not the outright censorship was permissible (the Eleventh Circuit said it wasn't; the Fifth Circuit said MAGA), both of them agreed that the editorial transparency provisions did not raise a problem. But that's what I'm gonna be weighing in on, and I'm gonna be explaining to the court

why actually editorial transparency is a Trojan horse for censorship. It's going to lead to the same censorship that flat-out explicit censorship leads to, and they need to understand why their precedents don't require that, and actually, that's terrible policy.

[00:40:44] Gus Hurwitz: And it's also somewhat incoherent. We don't need that sort of requirement for the New York Times or the Wall Street Journal, because we know why they print certain types of content: because they're allowed to have an editorial voice and [00:41:00] perspective, and we let them, and we let the marketplace play it out.

And, you know, the same thing goes for platforms. If it turns out that Twitter has an editorial perspective built into its algorithm, YouTube has one, uh, Mastodon, or certain Mastodon instances, have editorial perspectives built in, people figure that out, and they affiliate with and are drawn to different sources of information or different platforms based upon the marketplace and the marketplace of ideas.

And what we're really saying, or what these laws are really saying, is: you can't have that. Everyone needs neutrality, and neutrality doesn't exist.

[00:41:39] Eric Goldman: Yes. And actually, it would reduce or shrink the diversity of the different kinds of social media options that are out there. If they're having to operate under these rules, it's quite likely that they would homogenize and have to follow the same editorial standards

across all of the services, where today there are differences. [00:42:00] So, for example, there are conservative alternatives, responding to the perception that some of the other social media services, like Twitter, historically, pre-Musk, were too liberal. So there's Gab and Parler and Truth Social. They were all designed to say: here's an alternative perspective in the marketplace; decide which ones you like better.

And what we're seeing, of course, in the post-Musk Twitter era is that a bunch of people are saying: this doesn't work for me anymore. And they're literally checking out. And that kind of market mechanism is the kind of thing that we used to just assume was gonna be sufficient to discipline publishers. If they wanna go and cater to their audience, and their audience doesn't like what they come up with, that's a problem.

But Florida and Texas have said: no, we're gonna take control over the situation. We're gonna tell you what you must do. That's just flat-out censorship.

[00:42:49] Gus Hurwitz: So, we are coming up on the end of our time pretty quickly. I want to ask, and I'm not gonna ask for more prognostication: assuming that the [00:43:00] likely outcomes happen in these cases, by which I will prognosticate anti-230, anti-internet decisions as you would interpret them, how is that going to affect the field? What are you going to do with your class next year?

[00:43:17] Eric Goldman: Um, so just to be clear, you know, uh, you didn't want prognostication, but I do wanna say a few words about that. In order for the internet to continue to function the way that we're currently enjoying it today, we have to win all four of the cases, and we have to win 'em in language that doesn't invite the mischief and is clear enough that there's not these fractured sets of opinions.

If you were to say, what are the odds that the Supreme Court's going to get it perfectly on four cases? I don't like those odds. I really am nervous that one or more of those is going to change the way that we think about the internet. And the consequence of that, you know, as I said earlier, I'm panicked about this, because it's going to impact the [00:44:00] way that we talk to each other online.

The future of the internet is going to look a lot like what I'd call a Netflix-type environment. What's going to happen is there's gonna be more paywalls, with professionally produced content, where the publisher decides what they think their audience wants, without the audience being able to talk to each other.

So we're gonna have to pay for that content, stuff that we might currently get for free. It's going to be selected for us, not us deciding what we think is of, uh, interest to us. And it's going to perpetuate a bunch of digital divides and, uh, power dynamics that we currently are concerned about. It's going to raise the digital divide higher by costing more, and it's going to continue to provide voices or audiences for people who currently have existing ones.

So the end of the user-generated content era of the internet is very close, and it could be over by June 2023. And in its place there will still be an internet. We'll still go online, but we won't be talking to each other. We'll be talked [00:45:00] to by people who are broadcasting or publishing content at us.

[00:45:03] Gus Hurwitz: So it sounds like, uh, maybe there was something to the net neutrality concerns, but we just had the wrong focus, um, with the discussions of net neutrality and paid prioritization and all of that stuff.

[00:45:16] Eric Goldman: Well, the focus wasn't on the idea that the government could just flat-out censor online publishers of, uh, user-generated content. But here we are in 2022, going into 2023, where that's on the table. The Overton window has clearly included that.

[00:45:28] Gus Hurwitz: Yep. Any last thoughts or closing remarks you wanna leave us with?

[00:45:35] Eric Goldman: I don't mean to spook your listeners, but I will tell them that some of these things might be fixable in Congress.

So, for example, if the Supreme Court guts Section 230, Congress could fix that, in theory. But they're not gonna fix it. They're only gonna double down and make it worse, so long as they think that we want them to continue to bash the internet companies. So the techlash is actually driving both the Supreme Court's antipathy [00:46:00] and Congress's antipathy, and we need to tell the people who are supposed to be working for us on our tax dollars that that's not what we want, and they're not hearing it.

And as a result, it's created a very fertile environment for censorship.

[00:46:13] Gus Hurwitz: Well, Eric, I usually feel better after talking to you. I'm not sure today if, uh, that's the case, but, uh, thank you nonetheless for taking the time, and I hope that you're wrong, but I think that a lot needs to go right for, uh, you not to be.

And I'll just pontificate for a moment and say: 230 is so important, and the internet is so important and has done so much. The techlash, as you call it, it's a real phenomenon. There is a widespread concern and anger about big tech and the technology companies, on the left and very much on the right. And the puzzle

for me, the thing that keeps me up at night, is: how has this happened? Why has this happened? Could it have not happened? Is there something about the business models, or how the industry [00:47:00] has run itself, or how we've advocated, or how advocates on the left or the right have spoken about the fields over the years, or whatever? Because something's driven it.

And understanding what's driven it, I think, is, uh, really important for, uh, folks like you and me to be thinking about.

[00:47:17] Eric Goldman: I know we're over time, but I do wanna embark on that, cause it actually ties together our entire conversation. What I think has happened is that, as the digital natives have grown up, they have not made a distinction between the social ills that they experience offline and the social ills that they've experienced online.

So cyberbullying: when they were younger, they blamed that on the internet, not on the fact that people are just awful to each other. And so as that generation comes into power, they're coming with a presumption that it's the internet that's to blame for all the experiences that they had growing up that they objected to.

They don't know how rare and special the internet is, and they don't know how bad things were in the offline world when we didn't have the internet. They've [00:48:00] never experienced it. All they know is that they were cyberbullied when they were younger, or they experienced other kinds of harm that the internet, uh, should be held accountable for.

And so, as long as we see this merging of the internet and the offline world, and as long as the digital natives are the ones who are not recognizing what life was like in a different environment, they think that they can fix social harms by fixing the internet. And you and I both know that ain't gonna work, but until they realize that, actually, they're likely to do far more harm to society than they think.

[00:48:33] Gus Hurwitz: Yep. Like democracy. The Internet's great. If you can keep it. 

[00:48:38] Eric Goldman: Yes. Always a pleasure. Thanks, Gus.

[00:48:43] Gus Hurwitz: My thanks to Eric for taking the time to talk to us today. Listeners might note that this is the second recent discussion that we've had about Section 230. I recently spoke with my colleagues here at the University of Nebraska, Kyle Langvardt and James Tierney, about some issues relating to Twitter and also cryptocurrency

that touched [00:49:00] on Section 230 and similar topics. And we're gonna have at least one more discussion on this topic, because this is a really important issue. But don't worry, we have some other topics coming up as well, from drones to copyright law in the 19th century, so stay tuned for those as well.

[00:49:20] James Fleege: Tech Refactored is part of the Menard Governance and Technology Programming Series, hosted by the Nebraska Governance and Technology Center. The NGTC is a partnership led by the College of Law in collaboration with the Colleges of Engineering, Business, and Journalism and Mass Communications at the University of Nebraska-Lincoln. Tech Refactored is hosted and executive produced by Gus Hurwitz.

James Fleege is our producer. Additional production assistance is provided by the NGTC staff. You can find supplemental information for this episode at the links provided in the show notes. To stay up to date on the latest happenings within the Nebraska Governance and Technology Center, visit our website at ngtc.unl.edu. [00:50:00] You can also follow us on Twitter and Instagram @UNL_NGTC.