Our panel today is discussing the topic of social media content moderation. I want to begin by thanking our cosponsors of the session on [indiscernible]. I will also announce that this is being recorded. Additionally, we welcome our live audience on C-SPAN. We will allow for audience questions at the end of our session; in the last 20 minutes, I will invite you up to the microphone if you have any questions for our panelists. Additionally, a business meeting will follow. If you are interested, you can come speak with us at the end of this panel.

I would love to introduce to you our distinguished panelists today. Immediately to my left is Kate, an assistant professor of law at St. John's University School of Law. She is a fellow at Yale Law School. Her research on networked freedom of expression and private governance has appeared in [indiscernible] and in the popular press. Then we have an assistant professor of law at Drexel University. She is also a fellow at the Center for Democracy and Technology. Her work involves [indiscernible] government accountability. To her left is Emma, a board member of the Global Network Initiative, whose work came out in early 2020. That paper was part of a transatlantic working group on content moderation online and freedom of expression, part of the Annenberg institute. I would also like to welcome Eric Goldman. He is a codirector based in Silicon Valley, and his research and teaching focus on [indiscernible]. He has a blog on these topics which I find helpful, and he has a big forthcoming project titled validating transparency reports. I welcome our panelists, and I would like to begin by asking: tell us what content moderation is. What do platforms moderate? How are these decisions made?

Thank you for the introductions and for having us here. I am excited to kick this off and lay brief groundwork about content moderation in private spaces. I will speak broadly about content moderation and let Eric and others set the legal framework for what allows this to take place. Content moderation from a private platform standpoint tends to focus on the main platforms for speech like Twitter, YouTube, and Facebook, but really all platforms use content moderation. Kickstarter has certain rules about what kinds of projects you can decide to crowdfund. Etsy has rules, as do eBay and Airbnb. All of these things are starting to become obvious to us now. Since 2016, there has been a change in people's understanding that platforms are working round the clock, constantly deciding to take down or keep up the content posted by their users. That is part of the community they are trying to build and the product they are trying to sell. Content moderation and the rules around it are as much about the product of any given company or any given platform as they are about a certain type of user's right to speech. Those are the competing ideals that this idea has grown up under.

The best way to explain content moderation is that it happened, and I think everyone can attest to this, for a long time without people realizing it was happening. In 2015, when I started to dig into this research, I found that every conversation explaining my work had to begin with: you don't realize it yet, but things that you put up on Facebook are sometimes taken down by Facebook because they violate a rule. At that point, that was not clear to most people. Since then, there have been a number of high-profile moments that have gotten media attention. There was the Terror of War controversy in 2016, which involved a picture posted by a famous author.
The photo is sometimes called Napalm Girl. It shows a nine-year-old girl fleeing napalm in the Vietnam War. Facebook had removed it because it was treated as sexually exploitative imagery of a child. It had a huge historical impact and protest impact and educational impact, but none of those things were considered, so Facebook came under fire for censoring that kind of content. That is one of the things they got wrong because they took things down, but similarly, they get things wrong when they keep things up. There was a mentally ill man who took a video of himself shooting an elderly homeless man in the street point-blank and posted it on Facebook, and it stayed up for 2.5 hours before Facebook removed the video. That got a lot of negative press. These are moments in which we are becoming aware of the rules and the ways in which Facebook and other companies take down this content at scale. The more we learn about what the private process entails and the legal remedies we can start to put into place around it, that is where the conversation has to begin.

How are these decisions being made?

I would like to go back to the definition. This is central to the entire panel and the conversation. There is a temptation to think of content moderation as something other than editorial judgment about what third-party content should be published. That is a divide in the conversation. Some people will not be shaken off the belief that it is not an editorial decision, that it is something else. To me, it is squarely an editorial decision. They are deciding which third-party content they are willing to publish and which they are not, at which point legal consequences flow. Where do you come out, or where are you starting today? Are you thinking of it as an editorial process, maybe more automated than we are used to, maybe more after-the-fact as opposed to pre-publication, but still the same basic process, or not? I will point out a case involving a Yelp review. The appellate court described the Yelp decision not to remove a review as speech administration: it was not an editorial judgment, they were the janitors, they were pushing brooms. If you decide they are pushing brooms, you can regulate them in a different way than if you say, of course it is editorial decision-making, and we know how to regulate that. The content moderation framework leads us astray because it allows us to think we are not talking about editorial policy decisions, but if we are, we know what to do.

I want to throw in another dimension for people to think about. A lot of times our conversations about content moderation can be bound up in horrific things that have happened on major platforms. We are talking about terrorist propaganda on YouTube, disinformation on Facebook. When we talk about content moderation more broadly, we are talking about a lot of techniques and practices, rules and procedures, that happen on Facebook and YouTube and on the smallest message board, forum, and mailing list. As we discuss this, it is helpful to unpack the judgment associated with different kinds of content moderation, and to take a step back and recognize that what we are talking about is something that any host of third-party content has to grapple with. They have to come up with rules they might apply to content on their website. They have to figure out how to apply them. Do they proactively seek out content and try to take it down according to the rules, or do they wait for people to notify them?
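As a rough illustration of those two approaches, here is a minimal sketch in Python of a proactive pre-screen against a database of known-prohibited images and a reactive queue of user flags. The hash set, priority values, and function names are hypothetical and do not reflect any platform's actual implementation.

```python
import hashlib
import heapq

# Hypothetical set of fingerprints of known-prohibited images. Real systems
# use perceptual hashes (e.g. PhotoDNA) rather than exact SHA-256 digests.
KNOWN_PROHIBITED_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def proactive_prescreen(image_bytes: bytes) -> bool:
    """Return True if the upload matches a known-prohibited fingerprint,
    meaning it should be blocked before it is ever published."""
    fingerprint = hashlib.sha256(image_bytes).hexdigest()
    return fingerprint in KNOWN_PROHIBITED_HASHES

class ReactiveModerationQueue:
    """User reports wait in a priority queue until a human moderator reviews them."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so equal priorities stay first-in, first-out

    def flag(self, post_id: str, reason: str, priority: int) -> None:
        # Lower number means more urgent (e.g. 0 for threats of violence).
        heapq.heappush(self._heap, (priority, self._counter, post_id, reason))
        self._counter += 1

    def next_for_review(self):
        # The most urgent flagged post goes to a human moderator, who decides
        # whether it stays up, comes down, or gets escalated.
        if not self._heap:
            return None
        _, _, post_id, reason = heapq.heappop(self._heap)
        return post_id, reason

# Usage: block a known image at upload time; otherwise publish and wait for flags.
upload = b"...uploaded image bytes..."
if not proactive_prescreen(upload):
    queue = ReactiveModerationQueue()
    queue.flag("post-123", "harassment", priority=2)
    queue.flag("post-456", "violent threat", priority=0)
    print(queue.next_for_review())  # ('post-456', 'violent threat') is reviewed first
```

Real systems use perceptual hashing so that near-duplicates still match, and flag routing is far more elaborate, but the two-path structure mirrors the proactive and reactive approaches the panelists discuss.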
There are different ways that they have made these decisions, and each has benefits and drawbacks. It is important for us to think in public policy terms about the implications of how some of the biggest companies online have made those decisions and the impact that is having on our civil discourse, and we should talk about that. But let's not forget the long tail of many sites out there grappling with the same issues; that can be helpful for unpacking why these questions are as difficult as they are.

To thicken the custodial versus editorial distinction, and to get to the question of how does this happen: let's say you upload a photo to Facebook. There is a microsecond between the upload and it actually being posted where it goes through a check against a couple of things using a tool called PhotoDNA, which is just a matching service. There is a universe of known child pornography that is maintained by a third-party organization, which runs a photograph matching service. It is a digital mapping, and it can recognize instantly, in milliseconds, whether that photo is a known piece of pornography that has been distributed. If that is the case, it never gets posted; it is automatically taken down. This is also how YouTube works with Content ID and copyright. That is the proactive screening before something gets posted, what I would describe as the custodial aspect, the part that is not an editorial judgment but just a screening.

Once something is posted, pretty much all of these platforms rely, because of the sheer number of postings, on reactive rather than proactive moderation: other users flagging and reporting violations. Those reports go into a queue and are sorted by priority through algorithms, but eventually most of them go to human moderators in call centers all over the world who have a queue of things they look at. A picture or image comes up and they have to decide whether it violates the rules and community standards. They decide whether it stays up or comes down or gets escalated. That is the content moderation system. A vast majority of the stuff that is flagged involves things that are not hard calls. They are basically teenagers having reporting wars with each other, or people who are angry at their neighbor so they decide to go through all of their posts and flag them, or people who decide they don't like how they look in pictures, and rather than untagging themselves, they decide to flag the picture for removal. Then there is other graphic kinds of content, some bad content that gets past the initial custodial sweep at the beginning. Those are the hard questions that are the editorial judgments.

[indiscernible] what principles should govern in terms of fairness, accountability, and transparency? I would like to kick this question off with Professor [indiscernible].

First, it is important to note as background, as Eric mentioned, that if you subscribe to the idea that content moderation is the exercise of editorial discretion, then under the First Amendment framework a whole bunch of consequences flow from that, including that externally articulated ideas about principles and safeguards that ought to apply might be called into question. Some of the accountability, fairness, and transparency oriented ideas that are floating around about how to make content moderation more transparent and accountable to users, the public, and regulators might not fly under the First Amendment framework, or a framework that subscribes to this editorial discretion model. A lot of these interventions are not originating within the United States.
Many of them are originating within Europe. To some extent, we have sidestepped that inquiry entirely because other governments and actors are engaged in formulating these principles and safeguards.

Let's start with transparency. What do we want to know about content moderation? In many respects, it has been done through the use of transparency reports and the disclosure of aggregate data about content takedown decisions. How many pieces of user-generated content are taken down for violating specific aspects of community guidelines? How many pieces are taken down because a government agency requested removal? How many pieces are taken down because an algorithm proactively flagged those pieces? And ultimately, how many of those pieces of content are restored after the initial takedown? Those are the kinds of aggregate statistical data that are often conveyed in transparency reports. Increasingly, folks who work on these issues as researchers or advocates have expressed a lot of frustration that this kind of aggregate data is not enough to understand how these decisions are being made. What you really need is a granular, case-by-case analysis of how the rules are applied to a given set of facts and how that decision is made, both in the algorithmic and the human moderator context. As pressure grows to make this granular data available, it will be interesting to see how platforms navigate these pressures moving forward.

Another transparency-related issue is publication of the community guidelines and the rules and standards for enforcing them. People tend to forget a lot of the time that some of the most interesting and important information we have gotten about how platforms enforce their rules has come through leaks to the press. For example, a couple of years ago, The Guardian published documents showing how Facebook was training its moderators to enforce the rules. Some would argue that the rules and guidelines should be public from the start.

Transparency is also an important aspect of accountability. What we know about the rules informs the way we might want to structure or think about accountability mechanisms. One way to think about accountability in this framework is accountability through the marketplace. Political pressure might encourage platforms to adopt different rules or to enforce them differently. For a long time, this has been the dominant way of thinking about platform accountability: if you don't like the rules, pressure the platforms to change them or leave the platform entirely. Increasingly, I think there is also interest in formulating different modes of accountability, thicker kinds of accountability. There are private forms of accountability mechanisms, and Kate is an expert on the Facebook Oversight Board. There are other ways of thinking about private accountability as well. One proposal is for multistakeholder, regional or state-based social media councils that would represent many different types of actors, like civil society, platforms, users, and governments, on content moderation issues. This is one way of thinking about private accountability moving forward. Then there are puzzles for accountability through government intervention, either through the formulation of private rights of action and judicial oversight or through administrative oversight. We see this in the U.K., where the duty of care proposal includes a provision that would encourage administrative oversight of how platforms are operating. Fairness is probably the hardest to achieve. I don't even know what it means in this framework.
Platforms, when they are talking about having fair sets of rules, are often really talking about having consistent rules: rules that are applied consistently within groups. If person A posts white nationalist content on social media and it is taken down, person B should not be able to post that. It might also mean that rules are consistent across groups, so that different kinds of prohibited speech that are equivalently bad are treated the same way. I think there are major questions, mostly political and social questions, not legal ones, about how we think about what fairness means in this context.

Following up on the transparency piece, which is a very common intervention that gets discussed when it comes to content moderation, the idea of "let us see more": let me make a short plug for my next paper. I invite your feedback because I am struggling with this project. How do we know that the numbers provided by the internet companies are true? They publish them, but how do we know? What do we do to validate those numbers? Can we send in the lawyers or the accountants under government compulsion to validate the numbers? If we are going to expect transparency, we have to know if we are getting accurate numbers. And if the transparency that might be obligated under the content moderation piece concerns editorial processes, it very well may be different than other disclosures the government requires, where such issues are not in play and government intervention to validate the numbers might not be as intrusive into the editorial function. I would invite your feedback and thoughts about this. For us to make the transparency option work, we have to know if we can believe the numbers, and I am not sure how we are going to do that.

On the question of how to validate transparency reporting numbers: it was around the time of the Snowden leaks that tech companies began providing transparency reports about their responses to government demands for user data. When people learned they were handing over gobs of data to the U.S. government, or many governments around the world, companies started putting out numbers about the demands they got. It took many years of pressure to get the companies to start putting out numbers on their terms of service or content moderation transparency, and we have only seen a couple of reports from the biggest companies in the past few years. One issue with understanding these numbers is that we are at a very nascent point in the history of transparency reporting. There is not a lot of standardization across how the companies do the reporting, which is on the one hand infuriating to researchers, because how are you supposed to use these numbers to draw conclusions, or compare them across companies, or compare them for the same company across multiple reports? They often change their reports every six months or every year. There are also a lot of good reasons why they change their reports: they have developed a new capacity to report different information, or they found a different way to start accounting for something. Those are some of the issues we will have to work out as the public policy discussion, whether it is about voluntary or mandatory reporting, moves forward. One area that has potential is the government demands reports. We could try to compare Facebook's reports on how many demands it gets from the U.S. government to information from the government about how many demands it put into a company like Facebook. As far as I know, there are some places within the federal government that do some reporting on those types of demands, but there is nothing like a single aggregate report from the U.S.
or any other government saying: this is how frequently our law enforcement agents are making requests to these technology companies for content removal or access to user data. Part of the question on the government demands section is that there is a real asymmetry as far as who is doing transparency reporting. That means that the kind of objective comparison of numbers from one side to the other isn't really possible. As for companies reporting on their own behavior, short of some sort of third-party auditing, it is hard to see where the reliance on those numbers can come from, because there is no way, just by looking at a single report, to check whether the numbers are accurate.
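To ground this discussion of what such reports contain and why they are hard to cross-check, here is a minimal sketch, in Python, of how the aggregate categories mentioned earlier (removals under community guidelines, government requests, algorithmic flags, and restorations) might be tallied and then compared against a government-published total. Every field name and figure here is invented for illustration; no company or agency publishes data in exactly this form.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class TakedownEvent:
    """One removal decision; the fields mirror the aggregate categories above."""
    post_id: str
    reason: str        # e.g. "hate_speech", "copyright", "defamation"
    detected_by: str   # "user_flag", "algorithm", or "government_demand"
    restored: bool     # True if the content was later put back up

def aggregate_report(events):
    """Collapse individual takedown events into the kind of aggregate
    statistics a transparency report typically publishes."""
    return {
        "total_removals": len(events),
        "by_reason": Counter(e.reason for e in events),
        "by_detection": Counter(e.detected_by for e in events),
        "restored": sum(1 for e in events if e.restored),
    }

def compare_to_government_total(company_report, government_total):
    """Illustrative cross-check of the company's count of government-demanded
    removals against a government-published figure. As the panel notes, no
    single aggregate government report like this actually exists."""
    company_count = company_report["by_detection"].get("government_demand", 0)
    return {"company": company_count,
            "government": government_total,
            "gap": government_total - company_count}

# Invented sample data, for illustration only.
events = [
    TakedownEvent("p1", "hate_speech", "user_flag", restored=False),
    TakedownEvent("p2", "copyright", "algorithm", restored=False),
    TakedownEvent("p3", "defamation", "government_demand", restored=True),
]
report = aggregate_report(events)
print(report["total_removals"], report["restored"])        # 3 1
print(compare_to_government_total(report, government_total=5))
```

The point of the sketch is that such a comparison is only meaningful if both sides publish compatible, audited numbers, which is precisely the standardization and asymmetry gap the panelists describe.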