This thread got long, so here is a perhaps more easily read copy of it:
One thing that came up on #InLieuOfFun that I didn't get the chance to answer was @Klonick asking whether earlyish content moderation was based on "First Amendment Norms." I think the answer to that is a bit more complicated than it may seem.
1/
Am speaking from my experience at Google (outside counsel 2000-3, inside 2003-9) and Twitter (2009-13). Others may have used different approaches.
2/
By "First Amendment Norms" I take @Klonick to mean that the platforms were thinking about what a govt might be OK banning under 1st Am jurisprudence in the US.
Of course, the platforms aren't govt & the 1st Am doesn't speak to what govts may ban, only what they cannot. But still...
3.1/
To restate, "1st Am Norms" might be something like platforms ~only~ removing what was removable under US 1st Am jurisprudence ~and~ had been generally made illegal in the US (or elsewhere if doing geo-removals), irrespective of 47 USC 230.
3.2/
First, lots of content removal was simply not cognizable under 1st Am analysis. Spam was a significant issue for Google's various products & Twitter. I don't know of a jurisdiction where spam is illegal & it is unclear whether a govt banning it would survive 1st Am.
4.1/
Nevertheless, spam removal (both by hand and automated) was/is extremely important and was done on the basis of improving user experience / usefulness of the products.
4.2/
Nudity & porn were sometimes banned for similar reasons. Some types of products (video) might be overrun by porn and be unwelcome for other uses / users if porn was not discouraged through removal, especially early. And yet, the 1st Am is quite porn-friendly.
4.3/
There were also some areas that might look like they fit 1st Am norms but were really the platforms deferring to courts. For example, a court order for the removal of defamation would result in removal (irrespective of §230 immunity).
5.1/
You can square that w/ 1st Am norms, but the analysis was not based on what types of defamation or other causes of action the 1st Am would allow, but rather on deferring to courts of competent jurisdiction in democracyish places.* <- this last bit was complicated + inexact.
5.2/
Where we refused, it was often about fairness, justice, human rights, or jurisdictional distance from the service, not the 1st Am per se.
5.3/
All of that said, I do think there were times when we looked to the 1st Am (and freedom of expression exceptions more generally) to try to grapple with what the right policy was for each product.
6.1/
For example, in deciding what types of threats we would remove from Blogger, we used US precedent to guide our rules. My memory is hazy as to why, but I believe it stemmed from two factors: (a) that we felt that we were relatively new to analyzing this stuff but that
6.2/
the Courts had more experience drawing those lines, and (b) that the Courts and Congress, being part of a functioning democracy, might reflect the general will of the people. These were overly simplistic ideas but that's my memory.
6.3/
In summary: while I think there is something to the idea that 1st Am norms were important, I think the bigger impetus was trying to effectively build the products for our then users -- to have the product do the job the user wanted -- within legal/ethical constraints. But...
7.1/
But, we did all of that from a particular set of perspectives (and that's what the 1st Am norms are probably part of) that was nowhere near diverse enough given the eventual reach and importance of our products.
7.2/
I'd love to hear the read of others who were doing or observing this work at the time on whether I'm misremembering/misstating @nicolewong @goldman @delbius @jilliancyork @adelin @rmack @mattcutts @clean_freak @helloyouths @dswillner +many more + those who aren't on Twitter… (please tag more)
8/
And, in case you want to see the question I'm referring to, from @Klonick on #InLieuOfFun look here at minute 22:11 (though the whole conversation was good):
https://youtu.be/oYRMd-X77w0?t=1331
9/9
First Amendment and Earlyish Content Moderation
Posted by A M on 5/07/2020 [Labels: expression, law]
Recent Podcasts & Articles on Content Moderation
One of the great things happening now is that more and more attention is being focused on one of my favorite subjects: content moderation by internet platforms. It's an important subject because a large amount of online speaking and listening happens through platforms. There has been a ton of good writing about this over many, many years, but I want to focus on four relatively recent bits here.
Radiolab, Post No Evil, Aug 17, 2018
Radiolab tells a sweeping story of the development of Facebook's content removal policies, deftly switching perspectives from people protesting its former policy against breastfeeding photos, to the headquarters workers developing policy and dealing with high-profile controversies, to the offshore contractors on the front line evaluating thousands of pieces of disturbing content every day.
Post No Evil is a great introduction to the issues in this space but I think its most insightful moment is relatively buried. At 1:02, this exchange happens:
- Simon Adler: What I think this [controversy around a beheading video] shows is that Facebook has become too many different things at the same time. So Facebook is now sort of a playground, it's also an R-rated movie theater, and now it's the front page of a newspaper.
Jad Abumrad (?): Yeah, it's all those things at the same time.
Simon Adler: It's all those things at the same time and what we, the users, are demanding of them is that they create a set of policies that are just. And the reality is justice means a very different thing in each one of these settings.
Think of the content policies you might want at a library versus a dinner party. When I go to a library, it is very important to me that they have books about the tiny niche of the world that I am interested in at that moment. For example, books on bias in machine learning or Italian Amaros. It doesn't really bother me if they have books on things I don't care as much about, like American football. For books that I disagree with, such as To Save America, or think are evil, such as Mein Kampf, I may question the curators' choices but I expect breadth, and the inclusion of those books is less bad than if the books I cared about were not included.*
Change to the dinner party context and my preferences are reversed. Dinner parties that don't hit on bias in machine learning are fine by me, but if I were at a dinner party where someone couldn't shut up about American football, I would not call it a success. A dinner party where a guest was espousing the views of Mein Kampf would be one where I would cause a scene and leave. Over-inclusion is a huge problem and outweighs inclusion of my specific niche interests.
I've never been a big Facebook user, but it used to remind me of a dinner party. I thought that's what it was going for with its various content policies. Now, as Simon Adler says, it is trying to be many things (perhaps everything?) to many people (perhaps everyone?) and that is really hard (perhaps impossible?). It has also made the decision that some of the types of moderation that other platforms have used to deal with those problems (blocking by geography, content markings for age, etc.**) don't work well for its goals. As Radiolab concludes starting at 1:08:
- Robert Krulwich (?): Where does that leave you feeling? Does this leave you feeling that this is just, that at the end this is just undoable?
Simon Adler: I think [Facebook] will inevitably fail, but they have to try and I think we should all be rooting for them.
Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, Harvard Law Review, 2018
Professor Klonick does an excellent job of describing why platforms may want to moderate content, how they do it, and the legal and regulatory framework that underpins it all. This is a very large expanse of ground, covered extremely well.*** If you are new to this area and want an in-depth briefing, I highly recommend The New Governors. Her prescriptions are to push platforms towards greater transparency in their content moderation decision making and policies, as well as greater accountability to users. As in Post No Evil (for which she was a source), Professor Klonick identifies the popular concern about platform policies and locates it as a mismatch between platform policies and user expectations.
Professor Klonick also draws out the similarities and differences between content moderation and judicial decision-making. She writes:
- Beyond borrowing from the law substantively, the [Facebook content moderation rule documents called] the Abuse Standards borrow from the way the law is applied, providing examples and analogies to help moderators apply the rules. Analogical legal reasoning, the method whereby judges reach decisions by reasoning through analogy between cases, is a foundation of legal theory. Though the use of example and analogy plays a central role throughout the Abuse Standards, the combination of legal rule and example in content moderation seems to contain elements of both rule-based legal reasoning and analogical legal reasoning. For example, after stating the rules for assessing credibility, the Abuse Standards give a series of examples of instances that establish credible or noncredible threats. “I’m going to stab (method) Lisa H. (target) at the frat party (place),” states Abuse Standards 6.2, demonstrating a type of credible threat that should be escalated. “I’m going to blow up the planet on new year’s eve this year” is given as an example of a noncredible threat. Thus, content moderators are not expected to reason directly from prior content decisions as in common law — but the public policies, internal rules, examples, and analogies they are given in their rulebook are informed by past assessments.
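To make that structure concrete, here is a toy sketch, in Python, of the rule-plus-example pattern the passage describes. The fields and the conjunction rule are my hypothetical reading of the two quoted examples, not Facebook's actual Abuse Standards logic.

```python
# Toy sketch of the rule-plus-example structure described above.
# The fields and the credibility rule are hypothetical illustrations,
# not Facebook's actual Abuse Standards logic.
from dataclasses import dataclass

@dataclass
class Threat:
    text: str
    has_method: bool   # a stated means, e.g. "stab"
    has_target: bool   # a specific person, e.g. "Lisa H."
    has_place: bool    # a specific location, e.g. "at the frat party"
    plausible: bool    # "blow up the planet" fails this test

def should_escalate(t: Threat) -> bool:
    """Treat a threat as credible only when it is both specific
    (method, target, place) and plausible."""
    return t.plausible and t.has_method and t.has_target and t.has_place

# The two examples from Abuse Standards 6.2, as quoted by Klonick:
stab = Threat("I'm going to stab Lisa H. at the frat party",
              has_method=True, has_target=True, has_place=True, plausible=True)
planet = Threat("I'm going to blow up the planet on new year's eve this year",
                has_method=True, has_target=False, has_place=False, plausible=False)

assert should_escalate(stab) is True      # credible: escalate
assert should_escalate(planet) is False   # noncredible: do not escalate
```

The interesting part, as Klonick notes, is that the rule alone doesn't do the work; the examples are what teach moderators where the boundaries of "specific" and "plausible" actually sit.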
Ellen Pao, Let's Stop Pretending Facebook and Twitter's CEOs Can't Fix This Mess, Wired, Aug 28, 2018; and Kara Swisher and Ron Wyden, Full Q&A: Senator Ron Wyden on Recode Decode, Recode Decode, Aug 22, 2018
I include these two as good examples of the current mood. Both Ms. Pao and Senator Wyden are friends of tech and highly tech-knowledgeable. Ms. Pao was the CEO of Reddit. Senator Wyden was one of the authors of the original statute that encouraged content moderation by protecting platforms that moderate content from many types of liability. Nevertheless, Ms. Pao believes that the tech CEOs don't care about and aren't trying to solve the issue of bad speech on their platforms. She calls for legal liability for falsity and harassment on platforms.
- If you’re a CEO and someone dies because of harassment or false information on your platform—even if your platform isn’t alone in the harassment—your company should face some consequences. That could mean civil or criminal court proceedings, depending on the circumstances. Or it could mean advertisers take a stand, or your business takes a hit.
And from Senator Wyden:
- ... lay[s] out what the consequences are when somebody who is a bad actor, somebody who really doesn’t meet the decency principles that reflect our values, if that bad actor blows by the bounds of common decency, I think you gotta have a way to make sure that stuff is taken down.
As I wrote in an earlier post (below):
- ... I don't know of many good examples outside of heavily editorial ones with a relatively small set of content producers, that have been able to be both extremely inclusive and progressive towards what I think are the "right" kind of marginalized ideas while keeping out the ones that I think are marginalized for very good reason. ...
Many of the larger Internet platforms are trying, with varying degrees of success and failure, to do this right, as I was when I worked at Google and Twitter. That said, I don't have a great example of a platform or community that is working exactly as I would like. And it seems like that is a big and worthy challenge.
Nevertheless, it is important to understand that this is where public opinion is headed and these two pieces are a good indication.
Finally, if you want to find out more about content moderation, here's a Twitter list of content moderation folks. If I'm missing someone, please let me know.
* This is really specific to me and your mileage may vary widely. I am a white male with lots of privilege. Take what I say about evil content with a huge grain of salt. I am relatively unthreatened by that content compared to someone who has had their life impacted by that evil. I get that some societies will want to ensure that books like Mein Kampf are not available in libraries. I don't believe that is the right way forward, but I may not be best situated to make that call.
** Facebook does use some of these tactics for advertising and Facebook Pages but, as far as I know, not for Facebook Posts or Groups.
*** Professor Klonick's description of Twitter's early content policies as non-existent is mistaken. Even early in Twitter's history the company had content policies which resulted in the removal of content, for example, for impersonation or child pornography. I think she just didn't have a good source of information for Twitter.
Posted by A M on 8/31/2018 [Labels: code, expression, law]
Hallin Spheres, Overton Windows, Constitutional Interpretation, and Online Platforms (oh my!)
Hallin Spheres, Overton Windows, and certain theories of constitutional interpretation[1] are all ways of thinking about what can and cannot be argued successfully, or at all, within different contexts. They are very applicable to our current discussion about online platforms and the types of speech they contain. This post aims to briefly describe all three and how they might apply to online speech. One thing that seems to follow from thinking about the Internet and online platforms through these lenses is that the widening of participation that the Internet brought tends to increase the types of arguments that can be had and to decrease the amount of consensus available. While I am generally optimistic about that change as a way of accelerating social progress and bringing more, previously marginalized people into the "room where it happens," some people and ideas were marginalized for very good reason. Implementing global platforms or communities inclusively, but only towards progress, seems possible, but I don't yet know of a good example of it being done successfully at scale. (If you know of good examples, I'd love to hear about them!)
Hallin Spheres
Daniel Hallin is a professor of communication and scholar of media systems who wrote The Uncensored War: The Media and Vietnam about the way journalists covered the Vietnam War, through a description of three spheres of ideas on which journalists report. At the two extremes are the sphere of consensus, for ideas journalists assume their readers accept and agree with, and the sphere of deviance, for ideas that journalists believe their readers disagree with vehemently. Between the two is the sphere of legitimate controversy, where a journalist assumes her readers believe there is room for debate. These spheres come together like a donut, with the sphere of deviance outside the donut, the sphere of legitimate controversy making up the dough, and the sphere of consensus the hole in the middle (the Canadian in me can't help but think of it as the Timbit of consensus).
Hallin observed that within the spheres of deviance or consensus, journalists would deviate from "objective" journalism in a variety of ways, such as adopting sphere-of-consensus views without challenge, excluding sphere-of-deviance sources and ideas from any mention in their stories, and generally reinforcing the divisions between the spheres. For more on Hallin Spheres and the effect of the Internet on mainstream media's ability to maintain them, see Jay Rosen, Audience Atomization Overcome: Why the Internet Weakens the Authority of the Press (more on that below). A diagram of the spheres from Hallin's book is below, via Google Books.
[Figure: Hallin's diagram of the three spheres. From Hallin, Daniel, The Uncensored War: The Media and Vietnam, University of California Press (1989), p. 117.]
The Overton Window
Joseph Overton was a writer and think tanker who proposed the window as a metaphor for understanding which ideas are viable from a political perspective in a certain community. Ideas inside the window are viable and can be debated and adopted. Ideas outside the window cannot. Overton appears to have seen the window along a continuum of more and less government intervention (which many on the right would call "less free" and "free") and believed that while the window constrained policy discussions, it was political and social forces that could change whether ideas were inside or outside of the window. The Overton Window has been in the news a lot lately as a way to explain that normalizing extremely radical ideas can move the whole window towards those ideas and thereby move some slightly less radical ideas into the center of the window. For example, see Politico's How an Obscure Conservative Theory Became the Trump Era's Go-to Nerd Phrase and Vox Media's description.
[Figure: the Overton Window, from Wikipedia.]
Constitutional Interpretation
I've had a harder time coming up with a good source or pithy name for these same ideas in constitutional interpretation. The first time I learned about them was in an Advanced Constitutional Law class I was lucky to take from Professor Lawrence Lessig. My recollection / understanding is as follows, but all errors are mine, not his. First off, the Constitution is made of words. A lot of constitutional law is about interpreting those words and how they might apply to the situation in a particular case. For example, is death by a particular lethal injection "cruel and unusual punishment" and therefore illegal under the Eighth Amendment? There are a bunch of different ways to go about this, and some significant disagreement among jurists, but the question Lessig was asking was: is there a context behind all of this that makes some thoughts thinkable by the Supreme Court Justices, while others are not?[2] For example, how do all but one of the Justices in Plessy v. Ferguson not understand that separate is not equal, whereas almost sixty years later, all of the Brown v. Board of Education Court does? Compare:
"The object of the [fourteenth] amendment was undoubtedly to enforce the absolute equality of the two races before the law, but, in the nature of things, it could not have been intended to abolish distinctions based upon color, or to enforce social, as distinguished from political, equality, or a commingling of the two races upon terms unsatisfactory to either. Laws permitting, and even requiring, their separation in places where they are liable to be brought into contact do not necessarily imply the inferiority of either race to the other, and have been generally, if not universally, recognized as within the competency of the state legislatures in the exercise of their police power. The most common instance of this is connected with the establishment of separate schools for white and colored children, which has been held to be a valid exercise of the legislative power even by courts of States where the political rights of the colored race have been longest and most earnestly enforced." Plessy v Ferguson, 163 U.S. 537, 544 (1896)with:
"To separate [students] from others of similar age and qualifications solely because of their race generates a feeling of inferiority as to their status in the community that may affect their hearts and minds in a way unlikely ever to be undone. The effect of this separation on their educational opportunities was well stated by a finding in the Kansas case by a court which nevertheless felt compelled to rule against the Negro plaintiffs: Segregation of white and colored children in public schools has a detrimental effect upon the colored children. The impact is greater when it has the sanction of the law, for the policy of separating the races is usually interpreted as denoting the inferiority of the negro group. A sense of inferiority affects the motivation of a child to learn. Segregation with the sanction of law, therefore, has a tendency to [retard] the educational and mental development of negro children and to deprive them of some of the benefits they would receive in a racial[ly] integrated school system. ... We conclude that, in the field of public education, the doctrine of "separate but equal" has no place. Separate educational facilities are inherently unequal." Brown v Board of Education, 347 U.S. 483, 494-5 (1954)For more on these cases, listen to the difference between the oral arguments in Plessy and Brown, and read Justice Harlan's dissent in Plessy. In the language of Hallin Spheres, segregation being detrimental and not "equal," moved from pretty close to the sphere of deviance for the Court, to the Sphere of consensus. And, there have been similarly important, radical, and progressive shifts in Supreme Court understanding of specific words in the constitution in many other areas that I care about.[3]
Enter, the Internet
As Jay Rosen and Shaun Lau have said better than I ever could, the Internet and its propensity to allow for many more speakers to join the conversation[4] have had a significant effect on windows and spheres in the mainstream media, on internet media, and in society at large.
Rosen gives a great description of Hallin Spheres and argues that the "audience's" increased ability to talk among themselves and talk back to bigger media players weakens the big media players' ability to maintain the spheres without challenge:
"Now we can see why blogging and the Net matter so greatly in political journalism. In the age of mass media, the press was able to define the sphere of legitimate debate with relative ease because the people on the receiving end were atomized— meaning they were connected “up” to Big Media but not across to each other. But today one of the biggest factors changing our world is the falling cost for like-minded people to locate each other, share information, trade impressions and realize their number. Among the first things they may do is establish that the “sphere of legitimate debate” as defined by journalists doesn’t match up with their own definition. ... what’s [] happening is that the authority of the press to assume consensus, define deviance and set the terms for legitimate debate is weaker when people can connect horizontally around and about the news." Jay Rosen, Audience Atomization Overcome: Why the Internet Weakens the Authority of the PressLau makes the same point before expanding on it to a bigger point about how internet communities affect discourse in this excellent thread. He's writing about how Jemelle Hill's description of Donald Trump as a white supremacist was seen by ESPN as in the sphere of deviance, whereas many on the Internet disagreed. You should go read it.
Among the many good points Lau and Rosen make is that Hallin Spheres are also about understanding consensus. As more people are included in a community, the sphere of legitimate controversy tends to grow. As it grows, it likely takes some space away from the sphere of consensus. As Lau framed it: "Y'all are having a Hallin's Spheres argument, but I haven't seen anyone else bring it up. So here we go." @NoTotally (Studio Glibly), September 16, 2017, https://t.co/egJhoWFUr9. And as he says:
"Put only Montagues in a room, and you have consensus. Now put a few Capulets in. 'What happened to all the consensus?' The answer is that there never was consensus; there was only agreement among those with access or those represented by those with access. "Internet pushback" when the internet- and very specifically twitter- is more meritocratic than, let's say large corporations like ESPN? That's not "the internet" pushing back. It's those who you didn't allow in the room before we forced our way in via new technology. PEOPLE." From @NoTotally Twitter thread, Sept. 15, 2017.Both of these observations mean that it can be harder to create spheres of consensus, that it is harder to maintain them once created, and that we might expect spheres of legitimate controversy to grow.[5]
This is good?
Years of forward societal progress based in part on expanding and shifting the Overton Window and the sphere of legitimate controversy may make you think: this is great! But hold on, some ideas are in the sphere of deviance for a reason. Racism is one good and sadly timely example. Same with some people. The Internet and its platforms have created many spaces for marginalized people to congregate and become less marginalized, by removing gatekeepers who might use spheres and windows to exclude them.[6] On the internet, nobody needs permission to speak. Again, that's often good progress. But some groups are marginalized for a very good reason. Nazis are a good and sadly timely example. Bringing racism and Nazis back into the window is VERY, VERY BAD, and yet the relative lack of speech gatekeepers in the U.S. Constitution, on the Internet, and on many Internet platforms may make this more likely. This is not a malfunction, but a design feature of the Internet and many of its platforms.
What can online platforms and communities do?
It is worth noting that spheres and windows are a bit of a misnomer for these phenomena because not only can they be different sizes in different communities, they can also be pretty permeable along the edges, and they can change size or permeability. Just as importantly, they can change shape. There is nothing that says a particular society, medium, or context needs to treat all ideas that are equidistant from the center of consensus as equal. The sphere need not be a sphere; it can be oblong and irregular. The window can be slanted and weird looking. In other words, it is not a law of nature that two ideas believed by a similar number of people need to be treated the same. That said, I don't know of many good examples, outside of heavily editorial ones with a relatively small set of content producers, that have been able to be both extremely inclusive and progressive towards what I think are the "right" kind of marginalized ideas while keeping out the ones that I think are marginalized for very good reason (and I use "I" here as a measure because different people differ quite a bit on these judgments).[7] If anything, there are concerns that attempts to suppress speech by groups that should be marginal are often used against those that shouldn't.[8]
Many of the larger Internet platforms are trying, with varying degrees of success and failure, to do this right, as I was when I worked at Google and Twitter. That said, I don't have a great example of a platform or community that is working exactly as I would like. And it seems like that is a big and worthy challenge. Anyhow, there is probably a whole 'nother post from me on this, but that's for another day, this one is already long enough.
P.S. If you have examples of platforms or communities doing this extremely well at scale, please forgive me for not including them and help me fix my error by pointing me towards them @amac.
[1] I am an expert in none of these, but I have found them to be very useful concepts. [return]
[2] One of the questions that Professor Lessig asked that I think is really interesting, but not quite on point for this post, is "What is the thing that we can't really consider today because it is almost unthinkable, but our grandchildren will think is so obviously true that it is unthinkable to debate against?" [return]
[3] For example, see Obergefell v. Hodges, 576 U.S. ___ (2015). [return]
[4] This has clearly not been uniform progress. Even as the speech gatekeepers have receded and allowed more people to speak, harassment, trolling, aggressive spamming, false flagging, and other techniques are being used to suppress speech and drive speakers, particularly those who have historically been marginalized, away from these platforms. [return]
[5] I also believe that these two effects have negatively impacted trust in institutions more generally. [return]
[6] See note 4. [return]
[7] Spam or illegal content might be good examples of this at some of the major services. [return]
[8] See e.g. Daphne Keller, Inception Impact Assessment: Measures to further improve the effectiveness of the fight against illegal content online, Comment to the European Commission, March 29, 2018 (discussing the potential for disparate impact of rules requiring internet platforms' removal of terrorist content). [return]
Posted by macgill on 4/04/2018 [Labels: code, expression, law]
A Service I Want
I would like an algorithm or service that would suggest arguments, opinions, and points of view from smart people trusted within their communities but with whom I am likely to disagree or whose communities I am underexposed to. I do not think I am alone in this desire.
I attempt to get some of this out of who I follow on Twitter (and it was a great use for Google Reader -- may it rest in peace), but that is a pretty imperfect system. I also routinely ask others to suggest sources I might like to fulfill these needs, but I have found that many struggle to make good suggestions.
[Image: Noble Returns to the Pavilion, from "W.G.", cricketing reminiscences and personal recollections (1899). Public domain book from the Internet Archive.]
One of the tricky things about this algorithm or service is that it would need to distinguish between the arguments and communities that I care about, those that I do not, and those I am repulsed by. For example, I am probably underexposed to cricket enthusiasts, but I don't care much about cricket anymore and don't want more information. Another example: I have not read anything about the conspiracy theories claiming the Parkland victims were actors, and I would be actively repulsed if a service suggested that I should read about them.
My suspicion is that one of the reasons services serve up filter-bubble content based on the engagement metrics of friend groups and similar users is that it is much easier than finding good, challenging material to suggest to users. That said, I wonder if the latter might be more fulfilling to the user over the long term and result in a stickier service, if it could be achieved.
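To make the shape of the problem concrete, here is a toy sketch, in Python, of the scoring such a service might do. Every input (the topic sets, the community-trust and agreement numbers) is a hypothetical placeholder for something a real service would have to infer, and inferring them well is exactly the hard part.

```python
# Toy sketch of the suggestion service described above. All inputs
# (topics, trust, agreement estimates) are hypothetical placeholders
# for things a real service would have to learn.
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    topics: set               # subjects this source writes about
    community_trust: float    # 0..1: trusted within their own community
    agreement_with_me: float  # 0..1: estimated overlap with my views

def suggest(sources, interests, repulsed, limit=5):
    """Rank sources that are trusted in their communities and likely
    to challenge me, on topics I care about, excluding topics I am
    repulsed by."""
    candidates = [s for s in sources
                  if s.topics & interests and not (s.topics & repulsed)]
    # Prefer high community trust and high disagreement (low agreement).
    candidates.sort(key=lambda s: s.community_trust * (1 - s.agreement_with_me),
                    reverse=True)
    return [s.name for s in candidates[:limit]]

sources = [
    Source("ml-bias-critic", {"machine learning", "bias"}, 0.9, 0.2),
    Source("cricket-digest", {"cricket"}, 0.8, 0.5),
    Source("crisis-actor-theorist", {"conspiracy", "bias"}, 0.1, 0.0),
]
print(suggest(sources,
              interests={"machine learning", "bias"},
              repulsed={"conspiracy"}))
# -> ['ml-bias-critic']; cricket is filtered out by my interests, and the
#    conspiracy source is excluded by the repulsion list even though it
#    touches a topic I care about.
```

The mechanical ranking is trivial; the whole difficulty lives in the inputs, which is why engagement metrics (cheap to measure) win out over challenge (expensive to measure).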
Do you know of a service doing a good job of this? Do you have ideas for users or publications that would fit this bill for me? If so, please send them my way at @amac.
[Image: C.L. Townsend, Playing Forward, from "W.G.", cricketing reminiscences and personal recollections (1899). Public domain book from the Internet Archive.]
Posted by A M on 2/28/2018 [Labels: code, expression]
Recap & Response to a Thread on Speech
Sometimes a Twitter thread is easier to read as a blog post.
The below was originally posted on Twitter.
1) Good thread by @yonatanzunger with a bunch of useful truths. Recap & comments from me below.
2) Speech can be used as a weapon against other speech: https://twitter.com/yonatanzunger/status/914609013722984448
https://twitter.com/yonatanzunger/status/914609721696559109
See also @superwuster arguing that the 1st Am is obsolete in an era of attention scarcity.
[Image: Fight between Rioters and Militia, from Pen and Pencil Sketches of the Great Riots. Image in the Public Domain.]
3) People bear diff costs of bad speech & harassment, disadvantaged often most affected:
https://twitter.com/yonatanzunger/status/914609927729147904
https://twitter.com/yonatanzunger/status/914610451782156288
4) Understanding & combating speech that reduces engagement can further a speech maximizing policy goal:
https://twitter.com/yonatanzunger/status/914611676497899520
https://twitter.com/yonatanzunger/status/914611742247809024
https://twitter.com/yonatanzunger/status/914612024126038016
5) Having + stating an “editorial voice,” gestures, public perception & examples also can be important:
https://twitter.com/yonatanzunger/status/914612173023744001
https://twitter.com/yonatanzunger/status/914611921038409729
https://twitter.com/yonatanzunger/status/914612262790402048
[Image: The Frame, from Typographia. Image in the Public Domain.]
6) Also, he gives great pointers to smart folks in the online community field:
https://twitter.com/yonatanzunger/status/914609375523852288
https://twitter.com/yonatanzunger/status/914611296150188032
https://twitter.com/yonatanzunger/status/914611486881808384
And of course there are many more, incl: Heather Champ, @juniperdowns, Victoria Grand, Monika Bickert, Shantal Rands, Micah Schaffer, @delbius, @nicolewong, @zeynep, @zephoria, @StephenBalkam, @unburntwitch, @noUpside, @EthanZ, @jessamyn, @sarahjeong + many many more incl great non-US folk. And including the folks & orgs on the various advisory councils:
https://blog.twitter.com/official/en_us/a/2016/announcing-the-twitter-trust-safety-council.html
https://www.facebook.com/help/222332597793306/ (and others)
As @yonatanzunger says, this work is a team sport that advances with help from all around.
7) I have some Qs re his 47 USC §230 (CDA) points. I don't know a case of something like his “editorial voice” breaking immunity or otherwise causing a “huge legal risk.” Indeed that was the point of §230 originally. So, asking experts: @ericgoldman & @daphnehk what do you think?
8) Also, I don’t think “maximizing speech” is quite the right goal or that every service should have the same goal. I want something different when I go to Facebook v Twitter v YouTube.
Also, I want more than one good service whose arch + policies (and, sure, “editorial voice”) support an extremely wide diversity of views being able to flourish, be expressed well & be easy to find & interact with including from outside social circles. But your mileage may vary.
9) Naturally, I also disagree that Twitter folks (including me) “never took [these issues] seriously,” provided “bullshit” explanations, were naive, and chased traffic over good policy. Was there & think I'd know.
But, taking that sort of beating is kinda part of the job. And, maybe I’m too biased from working & learning these issues at platforms incl many at Google, Twitter & in govt w/ @POTUS44.
10) Anyhow, I’m very glad @yonatanzunger chose to post this thread to Twitter & I hope the suggestions part is read widely.
[Image: Printing Press, from Typographia. Image in the Public Domain.]
Posted by A M on 10/02/2017 [Labels: expression, law, twitter]