First Amendment and Earlyish Content Moderation

This thread got long, so here is a more easily readable copy of it:

One thing that came up on #InLieuOfFun that I didn't get a chance to answer was @Klonick asking whether earlyish content moderation was based on "First Amendment Norms." I think the answer is a bit more complicated than it may seem.

I'm speaking from my experience at Google (outside counsel 2000-03, in-house 2003-09) and Twitter (2009-13). Others may have used different approaches.

By "First Amendment Norms" I take @Klonick to mean that the platforms were thinking about what a govt might be OK banning under 1st Am jurisprudence in the US.

Of course, the platforms aren't the govt & the 1st Am doesn't dictate what govts must ban, only what they cannot. But still...

To restate, "1st Am Norms" might be something like platforms ~only~ removing what was removable under US 1st Am jurisprudence ~and~ had been generally made illegal in the US (or elsewhere if doing geo-removals), irrespective of 47 USC 230.

First, lots of content removal was simply not cognizable under a 1st Am analysis. Spam was a significant issue for Google's various products & Twitter. I don't know of a jurisdiction where spam is illegal & it is unclear whether a govt ban on it would survive 1st Am scrutiny.

Nevertheless, spam removal (both by hand and automated) was/is extremely important and was done on the basis of improving user experience / usefulness of the products.

Similarly, nudity & porn were sometimes banned for similar reasons. Some types of products (video) might be overrun by porn and become unwelcoming for other uses / users if porn were not discouraged through removal, especially early on. And yet, the 1st Am is quite porn-friendly.

There were also practices that might look like they fit 1st Am norms but were really the platforms deferring to courts. For example, a court order for the removal of defamation would result in removal (irrespective of §230 immunity).

You can square that w/ 1st Am norms, but the analysis was not based on what types of defamation or other causes of action the 1st Am would allow, but rather on deferring to courts of competent jurisdiction in democracyish places.* <- this last bit was complicated + inexact.

Where we refused, it was often about fairness, justice, human rights, or jurisdictional distance from the service, not the 1st Am per se.

All of that said, I do think there were times when we looked to the 1st Am (and freedom-of-expression exceptions more generally) to try to grapple with what the right policy was for each product.

For example, in deciding what types of threats we would remove from Blogger, we used US precedent to guide our rules. My memory is hazy as to why, but I believe it stemmed from two factors: (a) that we felt we were relatively new to analyzing this stuff but that the Courts had more experience drawing those lines, and (b) that the Courts and Congress, being part of a functioning democracy, might reflect the general will of the people. These were overly simplistic ideas, but that's my memory.

In summary: while I think there is something to the idea that 1st Am norms were important, I think the bigger impetus was trying to effectively build the products for our then-users -- to have the product do the job the user wanted -- within legal/ethical constraints.

But we did all of that from a particular set of perspectives (and that's probably what the 1st Am norms were part of) that was nowhere near diverse enough given the eventual reach and importance of our products.

I'd love to hear from others who were doing or observing this work at the time on whether I'm misremembering/misstating: @nicolewong @goldman @delbius @jilliancyork @adelin @rmack @mattcutts @clean_freak @helloyouths @dswillner + many more + those who aren't on Twitter… (please tag more)

And in case you want to see the question I'm referring to, from @Klonick on #InLieuOfFun, look here at minute 22:11 (though the whole conversation was good):