Berkeley Technology Law Journal Podcast: Regulating Online Hate Speech, with Christopher Wolf

 


[Kavya Dasari] 0:12

You’re listening to the Berkeley Technology Law Journal Podcast. I’m Kavya Dasari.

[Darshini Ramiah] 0:17

And I’m Darshini Ramiah. Today’s podcast is on the regulation of online hate speech. For all the benefits of social media, one of its more unpleasant consequences has been the proliferation of online hate speech. While it might be tempting to imagine hate speech is confined to an obscure corner of the internet, it is increasingly apparent that this is far from the truth. There is no international legal definition of hate speech. But the term is generally understood as “any kind of communication in speech, writing or behavior that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of their religion, ethnicity, nationality, race, color, descent, gender, or other identity factor.”1

[Kavya Dasari] 1:10

It’s not surprising then that minority groups are disproportionately affected by hate speech.2 In the United States, nearly half of all African American social media users and one-third of female users frequently witnessed offensive images or humor on social media sites.3 This offensive content not only causes psychological harm, but can also dissuade victims of hate speech from speaking out.4 What’s more concerning is that hate speech is fueling real-world hate crimes.5 In 2020, the United States saw a 6% increase in such crimes, 64% of which were racially motivated.6 This included a notable increase in crimes against persons of Asian descent, following their online scapegoating for the emergence of the COVID-19 pandemic.7

[Darshini Ramiah] 2:09

These trends have fueled calls for greater regulation and the removal of hate speech on social media.8 As matters stand, permitting widespread hate speech risks undermining the free speech principles of a democratic society.9 On the other hand, advocates of the “free and open marketplace of ideas” argue that regulating any speech, even harmful or degrading speech, chills public discourse.10 Efforts to censor online hate speech could also be counterproductive by either sending the posters of such content underground, or encouraging them to channel their energies into real-world violence.11

[Kavya Dasari] 2:51

In this episode of The BTLJ Podcast, we will be tackling the issue of regulating hate speech on social media platforms in the United States with Mr. Christopher Wolf. Mr. Wolf is a Senior Counsel at Hogan Lovells and is the founding editor of the first Practising Law Institute legal treatise on privacy and information security law. He chaired the International Network Against Cyber Hate, and currently sits on the board of the Future of Privacy Forum, a think tank devoted to advancing responsible data practices. Mr. Wolf is also involved in the work of the Anti-Defamation League against online hate speech, and co-authored the book Viral Hate: Containing Its Spread on the Internet.

[Darshini Ramiah] 3:44

Our Vasundhara Majithia delves into the problem of online hate speech with Mr. Wolf. She begins with a discussion of the constitutional obstacles to regulating hate speech on social media platforms. From there, she explores the European Union’s approach to regulating hate speech as inspiration for a policy discussion on what hate speech regulations could look like in the United States.

[Vasundhara Majithia] 4:15

Thank you so much for joining us, Chris. Could you please give us a brief overview of how the First Amendment’s freedom of speech and expression serves as an obstacle when it comes to the government censorship of hate speech on social media specifically?

[Christopher Wolf] 4:31

Sure. But first of all, I’m not sure I would refer to the First Amendment as an obstacle to government censorship, because no matter what kind of speech we’re talking about, it’s hardly desirable to have the government, our government or any government, decide what is or what is not hate speech. You know, think back just one administration. Imagine what Donald Trump would say constituted hate speech—reporting by the New York Times? As Justice Kennedy put it in the 2017 offensive trademark case, “a law that can be directed against speech found offensive to some portion of the public can be turned against minority and dissenting views to the detriment of all. The First Amendment does not entrust that power to the government’s benevolence. Instead, our reliance must be on the substantial safeguards of free and open discussion in a democratic society.”12 So I would say that the First Amendment protects against government censorship. But to answer your question, the First Amendment protects hate speech from government censorship, unless that speech constitutes a true threat, or unless it’s directed at inciting or is likely to incite imminent lawless action. You know, Brandenburg v. Ohio was the case in 1969 in which the imminent lawless action standard was adopted.13 And that’s where the Supreme Court unanimously reversed the conviction of a Ku Klux Klan group for advocating violence as a means of accomplishing political reform because their statements at a rally did not express an immediate or imminent threat to do violence.14 Now, some refer to this First Amendment freedom as freedom for the speech we hate, which is an adaptation of part of the dissenting opinion by Supreme Court Associate Justice Oliver Wendell Holmes in U.S. v. Schwimmer.15 And Holmes wrote, “if there’s any principle of the Constitution that more imperatively calls for attachment than any other it’s the principle of free thought, not free thought for those who agree with us, but freedom for the thought we hate.”16 And underpinning the First Amendment is the concept that free speech allows for a marketplace of competing ideas from which people can evaluate claims and decide what’s true. And of course, in the Internet era, with algorithms deciding what speech should be given priority, and with so-called filter bubbles, whereby people can only access information consistent with their beliefs, that marketplace of ideas construct does not quite have the applicability that it once had.

[Vasundhara Majithia] 7:08

I was also wondering why it is, then, that social media companies can moderate content on their platforms despite the First Amendment.

[Christopher Wolf] 7:15

So first, social media companies are private entities with First Amendment rights. Remember the 2010 Citizens United case; whether you agree with it or not, and I don’t happen to, there was no question that the First Amendment protects corporations generally.17 The question there was whether there should be an exception for corporate speech supporting or opposing political candidates. Second, even though big tech companies such as Facebook, Google, Twitter, etc. appear to provide substitutes for the town square, they aren’t treated as quasi-governmental entities that might themselves be subject to the First Amendment. And thirdly, § 230 of the Communications Decency Act of 1996 explicitly gives platform operators the right to edit content, in addition to the well-known immunity provided to platforms for content posted by others.18

[Vasundhara Majithia] 8:08

To center our discussion, can you please give us an overview of how social media companies go about monitoring hate speech on their platforms? And if their efforts have been effective in any way?

[Christopher Wolf] 8:20

Sure. So, from the beginning of the internet, the paradigm has been notice and takedown, whereby the social media companies, all the major ones, have community standards or Terms of Use which set forth what speech is or is not permissible on their platform, consistent with what I just said about their right to determine what remains up or not. They rely on individuals or organizations to complain about content, and then their content moderators review the complaints and decide whether or not the material stays up. Now think about the volume of content that is posted to the internet and on social media every day. Someone, I think, estimated that there’s something like 60,000 hours of content, I’m sorry, 60 hours a minute on YouTube. And I don’t think anyone’s undertaken to calculate exactly how much content goes up on Facebook every minute, but it’s a huge amount. And so most of the online services use not only human reviewers, and there have been estimates that Facebook either directly hires or outsources upwards of 40,000 to 50,000 monitors to deal with notice and takedown, but also AI and algorithms, which are used to detect hateful content. But machines can’t detect everything, such as the nuances of hate speech and misinformation. And also note that there’s a host of alternative social networks, like Parler and Gab, that rose to popularity primarily because they promised minimal content moderation. But the norm is that the major platforms, Twitter, Facebook, YouTube, espouse content moderation, hire a lot of people to do it, and use AI and algorithms to accomplish it.

[Vasundhara Majithia] 10:31

Speaking of § 230, which you mentioned earlier, do you think the protection granted by § 230 to social media platforms increases or decreases their incentive to regulate hate speech on the platform?

[Christopher Wolf] 10:45

So on its face, § 230 decreases the incentive to regulate hate speech because it provides immunity from liability.19 And so one would think that if there were a prospect of liability, there would be greater review of content and probably greater censorship or taking down of content. And I think experience has shown that § 230 immunity decreases the incentive to regulate hate speech. But I would say that today, with so much attention being focused on possibly paring back the immunity of § 230, companies are being more careful in enforcing their Terms of Use or their community standards. And they would say that it is not in their interest to be a bastion of hate speech, to be a service known as one that espouses hate. And certainly that’s true, for example, of YouTube, which I know works very hard to monitor the content that goes up on its service. And that’s because users don’t want to be bombarded by offensive content. But let me add to that answer that recently, the Facebook whistleblower Frances Haugen, the one who secretly copied tens of thousands of pages of Facebook internal records, said that the evidence shows that the company is not being truthful to the public when it says it’s making significant progress against hate and violence and misinformation. And in fact, it’s in the company’s interest, it creates churn, to have provocative content on its service.

[Vasundhara Majithia] 12:28

Since you mentioned paring back § 230, I’d also like to briefly touch on state regulation of social media platforms, such as the Texas House Bill 20, which was recently enjoined by a federal judge.20 This Bill claimed to be founded on free speech principles and prohibited social media platforms, especially major social media platforms, from censoring their users on the basis of their viewpoints. Do you think we can expect this to be an ongoing legal and political battle on both sides of the political spectrum? And do you think these efforts can impact the proliferation of hate speech on the internet?

[Christopher Wolf] 13:09

I don’t. And you mentioned that the law has been enjoined. In fact, in December, when Judge Robert Pitman granted the plaintiffs’ request for a preliminary injunction, he’s the judge in the Western District of Texas, he wrote, “HB 20’s prohibition on ‘censorship’ and constraints on how social media platforms disseminate content violate the First Amendment.”21 He also noted multiple other First Amendment concerns, including what he characterized as the law’s “unduly burdensome disclosure requirements on social media platforms” and the fact that the law only applied to social media platforms with at least 50 million active users in the US.22 And on that point, he said, “the record in this case confirms that the legislature intended to target large-scale social media platforms perceived as being biased against conservative views, and the state’s disagreement with the social media platforms’ editorial discretions over their platforms. The evidence does suggest that the state discriminated between social media platforms for reasons that do not stand up to scrutiny.”23 And so the judge found that this was an abridgment of the platforms’ First Amendment rights that we talked about at the top of the podcast.24 There’s a similar law in Florida, and that’s also resulted in a preliminary injunction blocking its enforcement; that decision has been appealed by the State of Florida, and it’s currently before the Eleventh Circuit.25 The specifics of the laws are different: the Florida law is aimed at preventing the deplatforming of politicians (remember, former President Trump was kicked off of Twitter, and others have been as well), while the Texas law addresses content moderation more generally. But both raise a set of questions about the limits of government power over the free speech rights of private platforms. And if I were a betting person, I would say that both laws will go down in flames.

[Vasundhara Majithia] 15:09

So Chris, when we consider hate speech regulation, we know that the European Union is much further ahead than the U.S., perhaps because they don’t have a provision akin to the First Amendment. What do you think are some innovative approaches that European countries have taken to regulate hate speech?

[Christopher Wolf] 15:29

So I have something of a funny story with respect to that, because in the early 2000s, for a period of time, I was chair of the International Network Against Cyber Hate,26 which is a coalition of NGOs, including the Anti-Defamation League in the United States, on whose board I serve, and a number of groups in Europe, including the one that prompted the new German law that I think we’re going to talk about in a minute. I was at a conference in Paris, actually hosted by the French government, and this issue came up: why isn’t the United States doing more? And I explained what we talked about earlier, the confines of the First Amendment, and a former Minister of Justice for the French government got up and yelled at me, “Stop hiding behind the First Amendment!” As if I had the power to change it, even if I wanted to, which I don’t. And that very much reflects a view in Europe that the United States, which is the hub for so much internet activity, is very much restricted in what it can do in terms of legislating. And as you point out, in Europe there has been a great deal more effort to legislate against online hate speech. And part of that is because of the tradition in Europe of protecting human dignity in the aftermath of the Holocaust and in the aftermath of the former Soviet Union’s restrictions on human dignity. There are laws that expressly prohibit denying the Holocaust; there are laws that prohibit the use of Nazi symbols and paraphernalia; laws that we couldn’t have here in the U.S. And in addition, hate speech is covered by media laws and criminal codes and codes of conduct and ethics. But I have to tell you, after examining this for decades now, I started my work on internet hate speech in the mid-1990s, in the early days of the internet, I don’t think the E.U. is anywhere close to declaring victory over hate speech. And I don’t think they would claim that either. They’re proposing substantial reforms right now to try to do better in the fight against hate speech. And part of it’s because of the sheer magnitude that I mentioned earlier on. And in Germany, we see Neo-Nazi groups reappearing, and it’s impossible to measure the extent to which hate groups have been driven underground onto invisible networks. I’ll also mention that there’s a lack of uniformity in Europe over what speech is considered harmful. Some members of the E.U., for example Hungary and Poland, don’t believe that anti-LGBTQ speech should be criminalized or that the members of those minority groups should be protected. And also the emphasis in the E.U. on digital privacy, which is the area of law that I practiced for many years, actually may result in anonymous hate speech going unpunished, because many of the jurisdictions don’t allow the lifting of the veil to find out who is propagating the hate speech, in the name of protecting privacy. But clearly the notice and takedown rule that was imposed by a memorandum of understanding between many European countries and some of the larger platforms27 and the German NetzDG law,28 which I think we can talk about in a moment, with financial penalties, have certainly pressured the larger platforms to devote more attention and more resources to online hate.
Let me finish up my answer to this question by noting that just yesterday, the Washington Post published an editorial with the heading “The US Could Learn From Europe, EU Offers a Model for Online Speech Rules.”29 The Post was referring to an E.U. proposal for a Digital Services Act that would require platforms to make real-time decisions on what constitutes hate speech.30 And they would face very tough penalties for failing to do so.31 I think the Post failed to recognize that algorithms are going to end up deleting speech without warning and without stopping to consider context, such as whether the speech is a product of online bullying or the effort of those fighting online bullying. Context matters dramatically in deciding whether something is or is not hate speech. And so the result, I think, would be that the Digital Services Act, which has a long way to go before it gets passed in Europe under their system of reviewing legislation, if it is passed, will result in the elimination of wide swathes of online content, including perfectly appropriate content. Much greater effort by online platforms to fight hate speech is needed, obviously, we’ve been talking about that, but overly broad filtering laws inspired by the E.U. draft laws are not the right way to go.

[Vasundhara Majithia] 20:28

Speaking of the NetzDG law, the law criminally penalizes social media companies by requiring them to remove or block content in violation of the Act, where that content is also indictable pursuant to the existing statutes listed in the German Penal Code.32 Do you think this form of tailored moderation would be compatible with U.S. First Amendment protections or not?

[Christopher Wolf] 20:54

Well, it depends on what speech the criminal statutes would cover. Obviously, in the U.S., we have laws against child pornography, incitement-to-terrorism-type laws, and others, but for other content that is illegal in Germany because it promotes racial discrimination, probably not.33 You know, § 230 specifies that its immunity doesn’t apply to violations of criminal statutes.34 So the platforms have liability for content that violates criminal laws of which they become aware. But I don’t think the 24-hour notice and takedown rules of the NetzDG law35 would be possible in the U.S. because they are so content-specific; it would be the government proscribing certain content.

[Vasundhara Majithia] 21:39

The NetzDG law also imposes a procedure that requires social media platforms to compile a list of the complaints they receive and a list of removed and blocked posts.36 So do these reports assuage any concerns about the lack of transparency in how social media companies enforce their speech regulation policies? Because social media companies have often been criticized for having very disproportionate and uneven policies when it comes to regulating hate speech.

[Christopher Wolf] 22:11

So I’ve been happy to see that more and more of the platforms are providing transparency reports, because they’re required to do so in other contexts, for example, privacy and cybersecurity. And so yes, I do think it partially assuages concerns about the lack of transparency, and I’d like to see more of those kinds of reports here. But I’ll note that the NetzDG law doesn’t provide judicial oversight of a decision to take down content, whether that violates a person’s right to speak or to access information. And when France enacted a similar law, it was enjoined as unconstitutional almost immediately, shortly after going into effect. And it’s also worth noting that Russia, the Philippines, and Singapore, hardly models of progressive societies, have cited that law as an example they may want to follow, which I think speaks volumes about whether or not that law should serve as a model.37

[Vasundhara Majithia] 23:09

So if not, do you have any other policy recommendations for regulating hate speech in the U.S. that might be effective?

[Christopher Wolf] 23:16

I do. I was actually the co-author of a book called Viral Hate: Containing Its Spread on the Internet,38 which you may have mentioned in the introduction, and we spent a lot of time on recommendations in that book. Even though it was written a number of years ago, some of the recommendations still pertain. For example, starting at the very beginning: there needs to be much greater cyber-literacy education starting at the earliest age, since kids are using technology starting at the earliest age. And the education ought to be about how children can protect themselves and how they can recognize when they’re encountering harmful content. But it also can help teach them how to behave in a civil society. For years, I’ve been talking about the benefits of counter-speech, and the whole “marketplace of ideas” construct presupposes that people will fight hate speech with the truth and with counter-speech. But of course, we haven’t seen that because, for whatever reason, there aren’t the incentives for people to engage in counter-speech; haters seem to be more incentivized than right-minded citizens. The ADL, the Anti-Defamation League, had an interesting exercise last summer, and we may see it repeated, called Stop Hate For Profit.39 This was July of ’21 [sic]. And the idea was that advertisers were requested to stop advertising on Facebook because it wasn’t doing enough to fight online hate. And I think to our great surprise, we had something like 1,000 advertisers, including some very large brand names, participate in that. And Stop Hate For Profit is a coalition that includes the ADL, but also other civil rights groups. It’s continuing; I wouldn’t be surprised to see them engage in various efforts to put pressure on the platforms. I also agree with Professor Danielle Citron at the University of Virginia School of Law, with whom I’ve worked on this issue for a number of years. She and I presented a report to the UK Parliament,40 and she’s been a consultant to the ADL for a number of years. You know, there are a number of proposals to amend § 230. But I think the most likely, or maybe not the most likely, but certainly the one with the most merit, in my view, is hers,41 which is to create an incentive for companies to do more monitoring of hate speech and to take down, for example, revenge porn, and doxxing, and swatting, and other forms of hate speech. And so this is her so-called duty-of-care threshold: if they take reasonable steps to curb unlawful conduct, they get § 230 protection. But if they don’t, they don’t. It’s probably not as simple as that to articulate in the legislation, but that’s the gist of it. And I think it has great potential. In addition, platforms have pretty strong policies against hate speech; the issue is how much they are enforcing them. But as new platforms emerge, they need to be encouraged or maybe even required to have hate speech rules. There’s also, in privacy, a concept known as privacy by design, the idea that privacy ought to be built into a product from the outset. I’ll coin this one: “fighting hate by design.” I think that companies ought to take a look at their existing technologies, as well as the new technologies they’re designing, to see whether they encourage the ability of people to engage in hate speech or not. And also we need to expand tools and services for targets of harassment.
People often just don’t know where to turn when they’re the victim of online harassment, bullying, revenge porn, hate speech, and so forth. And then you’ve mentioned it: increased transparency, which would allow for increased oversight. You know, there’s lots more the government can do beyond . . . or instead of just regulating speech, which isn’t permissible. For example, they can appropriate money to study online hate; see why it happens, where it’s happening, and to talk about remedies for individuals. And this would also include the training of law enforcement. They could commission research on the tools and services that might mitigate online hate. So that’s something that the Anti-Defamation League has been proposing and something I think is very important as well.

[Vasundhara Majithia] 27:49

Thank you so much for your insights, Chris, really, thank you so much.

[Christopher Wolf] 27:52

Happy to be here. Thanks so much for asking me to talk about it.

[Darshini Ramiah] 28:00

As we look to the future of online platform moderation, it is clear that the issue of regulating hate speech in the United States offers no easy solutions. While some may criticize the lack of current incentives for social media companies to moderate their platforms, a more prescriptive legislative approach will not guarantee better outcomes. Despite proactive European efforts, hate speech remains a pervasive global problem. Operational limitations of the European regulatory model also impede its application in the United States.

[Kavya Dasari] 28:38

We are presented with a situation that requires us to balance the importance of free speech in the promotion of democracy against the detrimental impacts of hate speech on members of our society, particularly on minority populations. Considering these seemingly countervailing interests, rather than focusing our efforts on legislation that moderates speech, it might be more effective to curtail hate speech on social media platforms through alternative methods, such as devoting governmental resources to offering cyber-literacy education, studying the generation and impact of hate speech on society, and developing support networks to assist victims of hate speech.

[Darshini Ramiah] 29:30

Another possible approach, as discussed in the interview, is to amend § 230 to incentivize social media companies to moderate hate speech on their platforms. Danielle Citron, Professor in Law at the University of Virginia School of Law, proposes an amendment to § 230 that would, in effect, extend § 230 immunity only to social media companies that have taken reasonable steps to moderate content on their platforms.42 This would put judges in the position of determining whether companies have met the duty-of-care standard.43

[Kavya Dasari] 30:07

Perhaps, the best solution requires a combination of these approaches. Instead of either private companies or the government taking charge, different stakeholders should work together to tackle the generation and moderation of hate speech. Such a collaborative approach will better safeguard the tradition of free speech and democracy whilst seeking to create a less hostile online environment. Time will tell whether the consensus for such a team effort will materialize.

[Darshini Ramiah] 30:39

Thank you for listening! The BTLJ Podcast is brought to you by Podcast Editors Seth Bertolucci and Isabel Jones. Our Executive Producers are BTLJ Senior Online Content Editors Karnik Hajjar and Thomas Horn. BTLJ’s Editors-in-Chief are Loc Ho and Natalie Crawford. We would also like to credit Mariana Garcia Barragan Lopez and Ruchika Wadhawan for their help with the research. The information in this podcast is up-to-date as of February 4, 2022. The interview with Mr. Christopher Wolf took place on February 1, 2022.