We have a great panel with us today: the former director of the Bureau of Consumer Protection at the Federal Trade Commission; the president of the Computer and Communications Industry Association; and, here to discuss the Department of Justice proposal, Lauren Willard, counselor to the attorney general. We are very lucky to have this great panel here. We will walk through the actual proposal, and then turn to some commentary on the industry and consumer side of things.

First, I want to do a little level-setting. I'm sure most of the people dialed in have some familiarity with Section 230 and understand what it is all about, but I would like to give some history to give context to the statute. When we talk about Section 230 of the Communications Decency Act, it helps to start with the fairly recent movie The Wolf of Wall Street, in particular the character based on Danny Porush. He was an employee at the company that was Leonardo DiCaprio's business in that movie. To refresh your memory, Mr. Porush was the one who went up to one of his employees and said, do more business or I will eat your goldfish. Porush said that was one of the few accurate lines from his time there. They probably did more for the internet than even Al Gore.

In 1994, they sued Prodigy. Posts had been made on a Prodigy listserv saying that the firm was engaged in major criminal fraud. As it turned out, Mr. Porush was later proven to be a criminal. Stratton Oakmont was a firm of brokers, and Prodigy was sued by Stratton Oakmont under various libel claims. The court at the time analyzed it under a newspaper-type theory, treating Prodigy as a publisher of libelous content, and held that yes, Prodigy could be liable. This is very important to today's conversation: the understanding was that by trying to police the listserv at all, Prodigy had taken on the burden of policing everything. If they were going to do any moderation, they would be caught, and they could be held liable.

Some members of the House of Representatives caught this, took issue with it, and proposed what has been rolled into the Communications Decency Act as Section 230. Senator Wyden was recently interviewed about this, and he said that part of the genesis was the view that if someone is exercising inordinate control, then they have personal responsibility; he said nobody is going to invest in something when they themselves are going to be personally on the hook. Seeing lots of postings that were germane to the internet and were moving things forward, Congress passed the Communications Decency Act, whose Section 230 has enabled at least the social media aspect of the internet as we know it today. It is one of the most important laws out there. With that in mind, that is the history from the ice age known as the mid-1990s. With that taken care of, we can hand it over to Ms. Willard, who will talk about some of the recent DOJ developments and some of what is under review by the department.

Thank you very much, and thank you for listening in today, everyone. The DOJ started looking at Section 230 almost a year ago now. It actually grew out of our broader review of online platforms, primarily focused on competition and antitrust, but one thing we discovered as we looked at some of the concerns people were raising about online platforms is that not all of them fell within traditional antitrust, and Section 230 is one of those examples.
We looked at and listened to the widespread and bipartisan concerns people had about this broad immunity, and we decided to open up an internal DOJ working group. Then, in February of this past year, we had a big public workshop at the FBI building. My co-panelist was an esteemed panelist at what was one of the last big events we had before everything changed. We also had a series of expert roundtables and dozens of listening sessions with industry stakeholders, and we thought really long and hard about the concerns we were hearing. Last June, we issued a summary of our public events and roundtables, as well as our key findings and where we think we are planning to go. We decided, based on everything we had learned through our engagement as well as our internal analysis, that it is time to reform Section 230.

This is a law that has been around since 1996. The combination of significant technological changes over the last 25 years with very expansive court interpretations of the immunity has left online platforms immune from liability for a wide array of illicit activity online, and free to censor speech without accountability. We put together a series of concrete but measured reforms, in part to help move the dialogue beyond the question of should we revoke it entirely or not touch it at all. We have a series of reforms; I know my time is limited, so I won't walk through every one, but they fall into two general buckets. On the one hand, there are reforms aimed at incentivizing platforms to better address illicit criminal activity on their platforms. On the other hand, there is a series of reforms to help them be more transparent and accountable when removing lawful speech.

I read a critique of our report early on that criticized us for asking platforms to take down more speech in some places and less in others. That is exactly right: we do want platforms to take down more criminal content, but to be more fair and transparent when addressing lawful speech. The law treats criminal and lawful speech differently, and so too should platforms. Those buckets are important and work together; think of them as bookends. We don't want a scenario where you have over-censorship for fear of liability, but you also don't want the internet to be the Wild West, with illegal content running rampant.

Moving to some of these in more detail, one of the categories we looked at was the Bad Samaritan carveout, a carveout for bad actors from the Section 230 immunity. This was something my co-panelist Matt mentioned in his remarks in February: the idea that bad actors who induce, contribute to, solicit, or participate in unlawful conduct should be deemed publishers and therefore not entitled to the immunity. The problem is that not all of the courts agreed with that reading, so you had decisions that allowed platforms that induced unlawful behavior to nonetheless turn around and claim the benefit of 230. We think it would be very helpful, and hopefully for industry not problematic, to clarify in the text of the legislation that if you facilitate or solicit federal criminal activity, you are not a Good Samaritan in the sense of the statute and are not entitled to this broad immunity from civil liability. This goes to websites like Backpage.com, which prompted earlier reforms. Imagine a new version of that; we don't want those types of bad actors to be able to benefit from Section 230 immunity.
Related to that: even if you are a good platform, once you have knowledge that something on your platform violates a federal criminal law, whether it is child pornography or drug trafficking, as soon as you know that and you understand that it is clearly unlawful, you have an obligation to do something about it; otherwise you are becoming complicit in that activity. So, we are proposing a narrow notice-based liability provision to address that scenario, to allow victims to seek redress, as well as to incentivize platforms to be better at taking down the worst of the worst on their services.

I think there is a reason we drew the line at federal criminal law. That is where platforms are clearly on notice that something is unlawful, when you are talking about something like illegal drug trafficking. It is much harder, when you are looking at civil claims related to defamation, for a platform to know whether content is unlawful. Imagine you have a diner at a restaurant who said, my soup was cold. The platform is not going to know whether the soup was cold or hot at the restaurant. Maybe it was gazpacho. We don't want platforms to be in the position of adjudicating those claims unless they are receiving a court judgment that the content was determined to be illegal. But when it comes to something that is objectively criminal and violates federal law, we are OK with asking platforms to do more, so that the victim of child sexual abuse material does not have to go through a court process before a platform does something about it.

I would also distinguish this from the carveout for federal criminal enforcement by the government. Those are doing two very different things, though sometimes the lines become blurred. Obviously, we as a government can go after the bad actors violating criminal law, but resources are scarce and there is a very high bar. The notice provision allows victims to pursue civil claims that may not rise to the level of federal enforcement, where the content they are asking to be taken down would nonetheless violate the criminal law. That is important because you have people on platforms who are anonymous; you may not be able to get to the underlying perpetrator, so your only recourse is asking the platform itself to do something.

The other topic is the carveout for federal civil enforcement. One thing Matt had mentioned is that industry and the government need to do more. We 100 percent agree. One way the government can do more is by having a greater ability to go after some of this harmful conduct civilly as well as criminally. Without the federal civil carveout, we are fighting with one hand behind our back. That is one important reform we propose. Related to that, we also have a carveout for the federal antitrust laws. Along with the other carveouts, these are intended to address areas of the law that were never intended to be covered by Section 230, which at the time it was enacted was responding to these decisions about defamation. There are certain claims that are the outliers, which we can carve out from Section 230 without crumbling the internet as we know it.

That kind of covers a lot on the illicit criminal activity side. I will also briefly touch on transparency and accountability, and I can answer questions later. The other reforms we are thinking about, on the other side of the coin, are these ideas that when you are taking down lawful speech, you need to be more transparent and accountable about why you are doing it. We are not requiring platforms to be neutral.
We are just requiring them to be transparent and to abide by their own rules. It should not be controversial to say that in order to get this immunity, you have to abide by your own terms of service when taking down speech. You have to have plain and clear rules that put people on notice about why you would take down their content, along with some minimal level of due process. A lot of that we have decided to put into the context of the good-faith provision, which courts have not done enough to fully explain, so we would provide a statutory definition of good faith.

We would also want to replace the term "otherwise objectionable," which has been read over time to give platforms essentially carte blanche to take down content with which they disagree, even though, reading it alongside the other terms in the statute, you would think it reached just the type of content that would be harmful to children. Given that it has caused a lot of mischief over time, we propose replacing it with clearer language like "unlawful," "promotes terrorism," "promotes self-harm," things that will give greater guidance to courts, platforms, and users about when you can take down content with a broad blanket immunity. For platforms, that does not render them automatically liable; it just removes the Section 230 shield that stops claims at the outset. You could still defend taking down content based on your terms of service in particular instances.

I think that covers a number of our reforms. I also want to say thank you to everyone who has spoken with DOJ: experts, industry, stakeholders. We really tried to take this process seriously and carefully, recognizing that there is a benefit to Section 230, but that it has grown so far beyond its original purpose that we need to do something to reform it, update it, and address outlier court decisions, to protect Americans in a number of different ways.

Great, thank you. We appreciate the Department of Justice being here to walk us through all of these points and positions. With that in mind, why don't you give us a brief talk on how industry views Section 230 as it is, and what you see it maybe becoming?

Thank you, I appreciate it. Thanks for bringing us together today. I think everyone knows that Section 230 encourages online intermediaries to moderate third-party content. It does so in two ways. First, it has what we sometimes hear called a don't-shoot-the-messenger rule: the platform should not be held responsible for individual bad actors. Second, it says that platforms should not be sued over the moderation decisions they make in trying to improve online content and suppress misuse of their services. For example, if a digital service terminates the accounts of self-proclaimed Nazis posting hate, or Russian agents posting disinformation, Section 230 closes the courthouse doors to those misusers of the service, the potential claimants who would want to relitigate those decisions. Similarly, if litigious plaintiffs call and say, hey, this is defamatory about me, like the nefarious claimants in the story about the Wolf of Wall Street, and the service says, this is an extraordinarily crooked firm and we are not going to take that content down, then plaintiffs who try to win through litigation what they lost in moderation are also out of luck. To sort of recap, Section 230 lets firms moderate without fear that they will own what they don't remove, and without fear that when they do remove something, they will be subjected to litigation by bad actors.
This has the effect of protecting diverse viewpoints, including marginalized voices who might not be able to get a voice in the newspapers or on cable television; they can be heard online without being heckled off. It all came from the Stratton Oakmont case in 1994, which led to Section 230 in 1996, where Congress foresaw the consequences of litigation being used as a tool for controlling the internet and disincentivizing moderation, and it acted. Mind you, Congress never said that a plaintiff can't pursue the bad actors posting the content. It simply limited actions against the services those bad actors use. At the end of the day, no matter what any of us talk about, you can still act against the internet user. The reality is that sometimes it is easier to sue the big company instead of pursuing the bad actor themselves. And the problem is that on the internet, a lot of the time it is not a big company; it is a digital-service startup that doesn't have the resources some of the larger companies do to build out elaborate content moderation.

Industry-wide, what has been the effect? It has created a vibrant internet economy that is the envy of the world, without question. There are hundreds of thousands of jobs and tens of billions of dollars in annual activity founded on these services. That is not just the companies themselves, but small and medium-sized businesses using these digital services to engage in their everyday business activity. And, of course, you and I, everyday internet users, also derive value. One of our researchers did some work measuring willingness to pay and calculated that the average American gets about $17,000 a year in value from digital services, virtually all of which rely on Section 230. Undermining those protections could amount to the equivalent of a $17,000 tax on everyday Americans.

A lot of the proposals we are talking about, and I don't want to single out the DOJ proposal, NTIA submitted a proposal and there have been legislative proposals on the Hill, a lot of these proposals run the risk of undermining those protections. Let me point out one example in the DOJ proposal and others: provisions about providing notice when one engages in content moderation. That may seem like an easy thing to do: give users notice when you moderate their content. That may sound good, but when you think about it, just one digital service is terminating up to 70 million accounts a month belonging to foreign disinformation agents. Think about requiring that service to provide notice explaining the particularities for each of those accounts. Also, there are proposals that you need to point to the explicit provision in the terms of service every time an account is taken down. That may have the effect that when viral misinformation starts spreading on the internet, you need to push out an update to your terms of service before you can start suppressing that information. I think that is very worrisome.

Let me make one more point and then I will wrap up. A lot of the proposals take issue with Section 230's catchall. The statute says that companies can't be held responsible for efforts to moderate content that is obscene, lascivious, a number of other things, or otherwise objectionable. A lot of services use that "otherwise objectionable" language to encompass the kinds of problematic content we are talking about that isn't necessarily obscenity, but clearly is something their community would not want. So, there has been a proposal to narrow "otherwise objectionable" to strictly unlawful or violent content.
I acknowledge that is a good start, but what about religious and ethnic intolerance? What about racism and hate speech? Does that have to stay up because it does not fit into the definition? What about content encouraging self-harm or suicide? Remember the Tide Pods challenge? Don't we want services to take that stuff down without first saying, wait a minute, we have to push out an update to the terms of service before we can tell people not to drink bleach to cure the coronavirus? We want companies to act quickly, not to give due process to foreign agents spreading misinformation to disrupt our election.

With all that being said, I will acknowledge that you do see bad actors trying to claim Section 230. You can always pursue the individual actor who has posted inappropriate content online, and if the digital service contributes to that, we have seen in the Roommates.com case, for example, that participating in the development of the content is enough to lose the protection. Although Backpage is frequently pointed to as the rationale for this reform, it is often overlooked that