Recent Podcasts & Articles on Content Moderation

One of the great things happening now is that more and more attention is being focused on one of my favorite subjects: content moderation by internet platforms. It's an important subject because so much online speaking and listening happens through platforms. There has been a ton of good writing about this over many, many years, but I want to focus on four relatively recent pieces here.

Radiolab, Post No Evil, Aug 17, 2018

Radiolab tells a sweeping story of the development of Facebook's content removal policies, deftly switching perspectives from people protesting its former policy against breastfeeding photos, to the headquarters workers developing policy and dealing with high-profile controversies, to the offshore contractors on the front line evaluating thousands of pieces of disturbing content every day.

Post No Evil is a great introduction to the issues in this space but I think its most insightful moment is relatively buried. At 1:02, this exchange happens:

    Simon Adler: What I think this [controversy around a beheading video] shows is that Facebook has become too many different things at the same time. So Facebook is now sort of a playground, it's also an R-rated movie theater, and now it's the front page of a newspaper.
    Jad Abumrad (?): Yeah, it's all those things at the same time.
    Simon Adler: It's all those things at the same time and what we, the users, are demanding of them is that they create a set of policies that are just. And the reality is justice means a very different thing in each one of these settings.
When I talk about content policies, I've tried to emphasize that there is no one perfect set of policies that should exist for every service; rather, the policies serve the product or service goal the platform is trying to achieve. The type of experience Google web search is trying to create ("you can find whatever you are looking for") is very different from the experience Disney was going for when it launched a social network for pre-teens where users could only talk to each other through a set of pre-chosen phrases ("this place is REALLY safe for kids").

Think of the content policies you might want at a library versus a dinner party. When I go to a library, it is very important to me that they have books about the tiny niche of the world I am interested in at that moment, for example, books on bias in machine learning or Italian amari. It doesn't really bother me if they have books on things I don't care as much about, like American football. For books that I disagree with, such as To Save America, or think are evil, such as Mein Kampf, I may question the curators' choices, but I expect breadth, and including those books is less bad than leaving out the books I do care about.*

Switch to the dinner party context and my preferences are reversed. Dinner parties that don't touch on bias in machine learning are fine by me, but if I were at a dinner party where someone couldn't shut up about American football, I would not call it a success. A dinner party where a guest was espousing the views of Mein Kampf would be one where I would cause a scene and leave. Over-inclusion is a huge problem and outweighs inclusion of my specific niche interests.

I've never been a big Facebook user, but it used to remind me of a dinner party. I thought that's what it was going for with its various content policies. Now, as Simon Adler says, it is trying to be many things (perhaps everything?) to many people (perhaps everyone?) and that is really hard (perhaps impossible?). It has also decided that some of the types of moderation that other platforms have used to deal with those problems (blocking by geography, content markings for age, etc.**) don't work well for its goals. As Radiolab concludes starting at 1:08:
    Robert Krulwich (?): Where does that leave you feeling? Does this leave you feeling that this is just, that at the end this is just undoable?
    Simon Adler: I think [Facebook] will inevitably fail, but they have to try and I think we should all be rooting for them.
Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, last revised Apr 17, 2018

Professor Klonick does an excellent job of describing why platforms may want to moderate content, how they do it, and the legal and regulatory framework that underpins it all. This is a very large expanse of ground, covered extremely well.*** If you are new to this area and want an in-depth briefing, I highly recommend The New Governors. Her prescriptions are to push platforms towards greater transparency in their content moderation decision making and policies, as well as greater accountability to users. As in Post No Evil (for which she was a source), Professor Klonick identifies the popular concern about platform policies and diagnoses it as a mismatch between platform policies and user expectations.

Professor Klonick also draws out the similarities and differences between content moderation and judicial decision-making. She writes:
    Beyond borrowing from the law substantively, the [Facebook content moderation rule documents called] the Abuse Standards borrow from the way the law is applied, providing examples and analogies to help moderators apply the rules. Analogical legal reasoning, the method whereby judges reach decisions by reasoning through analogy between cases, is a foundation of legal theory. Though the use of example and analogy plays a central role throughout the Abuse Standards, the combination of legal rule and example in content moderation seems to contain elements of both rule-based legal reasoning and analogical legal reasoning. For example, after stating the rules for assessing credibility, the Abuse Standards give a series of examples of instances that establish credible or noncredible threats. “I’m going to stab (method) Lisa H. (target) at the frat party (place),” states Abuse Standards 6.2, demonstrating a type of credible threat that should be escalated. “I’m going to blow up the planet on new year’s eve this year” is given as an example of a noncredible threat. Thus, content moderators are not expected to reason directly from prior content decisions as in common law — but the public policies, internal rules, examples, and analogies they are given in their rulebook are informed by past assessments.
(footnotes omitted). Content moderation rules are always evolving. Just as there is no one perfect set of content policies for all services, there is also no one perfect static set of rules for any given service. Instead, just like the law, the rules are continually adapted to deal with new realities.
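To make the rule-versus-example distinction concrete, here is a minimal sketch of the shape of the credibility rule Professor Klonick quotes, written with hypothetical names of my own choosing; it is purely an illustration, not anything drawn from Facebook's actual tooling. The mechanical rule is trivial; the judgment lives in deciding whether each element is actually present, and that is what the rulebook's examples and analogies teach moderators to do.

```python
# Toy sketch of the rule structure described above: escalate a threat only
# when it names a method, a specific target, and a place. Field names and
# structure are my own assumptions, not Facebook's systems or code.
from dataclasses import dataclass


@dataclass
class ThreatAssessment:
    text: str
    names_method: bool           # e.g. "stab"
    names_specific_target: bool  # e.g. "Lisa H." (not "the planet")
    names_place: bool            # e.g. "the frat party"


def should_escalate(a: ThreatAssessment) -> bool:
    """Mechanical part of the rule: all three elements must be present."""
    return a.names_method and a.names_specific_target and a.names_place


# The two examples quoted from Abuse Standards 6.2. Setting each boolean is
# itself a judgment call -- which is where the rulebook's examples and
# analogies do their work.
credible = ThreatAssessment(
    "I'm going to stab Lisa H. at the frat party", True, True, True)
noncredible = ThreatAssessment(
    "I'm going to blow up the planet on new year's eve this year",
    True, False, False)

assert should_escalate(credible)         # escalate
assert not should_escalate(noncredible)  # do not escalate
```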

Ellen Pao, Let's Stop Pretending Facebook and Twitter's CEOs Can't Fix This Mess, Wired, Aug 28, 2018; and Kara Swisher and Ron Wyden, Full Q&A: Senator Ron Wyden on Recode Decode, Recode Decode, Aug 22, 2018

I include these two as good examples of the current mood. Both Ms. Pao and Senator Wyden are friends of tech and deeply knowledgeable about it. Ms. Pao was the CEO of Reddit. Senator Wyden was one of the authors of the original statute (Section 230 of the Communications Decency Act) that encouraged content moderation by protecting platforms that moderate content from many types of liability. Nevertheless, Ms. Pao believes that tech CEOs don't care about and aren't trying to solve the issue of bad speech on their platforms. She calls for legal liability for falsity and harassment on platforms.
    If you’re a CEO and someone dies because of harassment or false information on your platform—even if your platform isn’t alone in the harassment—your company should face some consequences. That could mean civil or criminal court proceedings, depending on the circumstances. Or it could mean advertisers take a stand, or your business takes a hit.
Senator Wyden says that he is working on legislation that:
    ... lay[s] out what the consequences are when somebody who is a bad actor, somebody who really doesn’t meet the decency principles that reflect our values, if that bad actor blows by the bounds of common decency, I think you gotta have a way to make sure that stuff is taken down.
I strongly disagree with legislating "common decency" because I think there is good evidence that it would do more harm than good, particularly by suppressing the speech of unfairly marginalized groups. More broadly, both Senator Wyden and Ms. Pao seem to believe that these problems are relatively easy to solve, if only the CEOs cared or were legally liable. I don't agree that this is an easy problem to solve, in part because I don't see examples of it having been solved despite the value of doing so. As I have written previously:
    ... I don't know of many good examples outside of heavily editorial ones with a relatively small set of content producers, that have been able to be both extremely inclusive and progressive towards what I think are the "right" kind of marginalized ideas while keeping out the ones that I think are marginalized for very good reason. ... Many of the larger Internet platforms are trying, with varying degrees of success and failure, to do this right, as I was when I worked at Google and Twitter. That said, I don't have a great example of a platform or community that is working exactly as I would like. And it seems like that is a big and worthy challenge.
(footnotes omitted). As I said in that post, if you have a good example, please send it my way. In the meantime, my belief is that this is difficult, there is no silver bullet, and we should continue trying.

Nevertheless, it is important to understand that this is where public opinion is headed, and these two pieces are a good indication of it.

Finally, if you want to find out more about content moderation, here's a Twitter list of content moderation folks. If I'm missing someone, please let me know.

* This is really specific to me and your mileage may vary widely. I am a white male with lots of privilege. Take what I say about evil content with a huge grain of salt. I am relatively unthreatened by that content compared to someone who has had their life impacted by that evil. I get that some societies will want to ensure that books like Mein Kampf are not available in libraries. I don't believe that is the right way forward, but I may not be best situated to make that call.

** Facebook does use some of these tactics for advertising and Facebook Pages but, as far as I know, not for Facebook Posts or Groups.

*** Professor Klonick's description of Twitter's early content policies as non-existent is mistaken. Even early in Twitter's history, the company had content policies that resulted in the removal of content, for example for impersonation or child pornography. I think she just didn't have a good source of information for Twitter.