C-SPAN Discussion on Election Disinformation on Online Platforms, November 5, 2022

Good morning. Disinformation, misinformation, freedom of speech, and our First Amendment rights are all being questioned, specifically with respect to today's internet tools and the platforms that run social media. Who decides the answers to these questions? Should it be Congress or the Supreme Court? Our citizens are seeing their likes and dislikes amplified by social media, which is not willing to take responsibility for the discord in today's environment. It is often noted that what the law calls interactive computer services represent an extraordinary advance in the availability of educational and informational resources. But alongside the internet's growth as a worldwide resource for education, entertainment, and culture come concerns about misuse of its power. Here in the United States, we have spent the last decade mired in election cycles in which the internet has been turned into a tool that sows discord among our own citizens. These are the questions we will discuss at today's event, recognizing that keeping the free flow of information going is not only up to the United States but is a global concern.

[Our first panelist is] a senior editor and a contributing writer at The Atlantic; her work has appeared in The New York Times as well as The Washington Post, where she served as an editorial writer, and she is a co-host of the Arbiters of Truth podcast. Spencer Davis is a campaign and policy officer and a founding partner; most recently he was the political director for Senator John Cornyn's reelection campaign and the executive director of Texas Victory. A graduate of Texas A&M University, he began his career as a data director for the Republican National Committee. [Our third panelist,] the director of policy at the Center for Democracy and Technology, oversees the execution of the organization's policy agenda and strategy and represents it to policymakers, civil society, industry, and the media, bringing 25 years of experience in government and in law, policy, and regulation. He served in the Obama administration as associate deputy attorney general and as senior director for cybersecurity policy at the National Security Council, where he was an architect of the presidential policy directive on cyber incident coordination and the executive order on malicious cyber-enabled activities. We have a really good group of experts here today. If you want to participate in the discussion, those of you still on Twitter can use the hashtag, or email us.

Let's start here: we have an election coming up on Tuesday. Right off the top, we have the RNC concerned that its emails are being suppressed. They have sued Google over this, and there is a lot of confusion about how and why this is happening. Would you start with the basics of why they think they are being suppressed and give us a baseline?

It is a question of email fundraising. We have to remember that email fundraising is fundamental to a campaign today, whether for the statehouse or the presidency. We can talk about digital advertising, marketing, door knocks, and mail, but none of that happens without email fundraising. From 2016 to early last year we were seeing exponential growth on the Republican side, and that is due to more email fundraising. But as we expanded, these last three to four quarters we have been down quarter over quarter, and that is not comparing against presidential years. Some of that can come from a spam issue.

You are saying the response rate is down?
There was a suspicion that a lot of the emails had a bad deliverability rate, meaning how many are showing up in a viewable inbox versus how many are going to spam. They have five staffers completely devoted to this question of deliverability. They are seeing something I saw in my own campaigns, and it was confirmed by a study out of North Carolina which found that on Gmail a disproportionate share of email from Republican campaigns goes to spam compared with Democratic campaigns: for Democrats it is below 10%, and for Republicans it is 65%. That is a concern as we try to speak to our base and build fair elections and campaigns. It has come to a head with this RNC lawsuit. They had been speaking with Google for 10 months; they have staffers devoted to it. Deliverability is normally north of 90%, but at the end of every month it drops to 0%, and the end of the month is the most important period, because that is when you are getting the most dollars per email. Google told the RNC that it comes down to content, frequency, or user input, which is what an email service uses to build a spam filter. But the RNC said they sent the same content and one group got 100% delivery while another got 0%. On frequency, comparing how many emails they are sending, we don't think we are sending more than the Democratic side. And we can see through Google's own tools that the rate at which users report these emails as spam is below 1%.

So, not feeling they were given enough of an answer, the RNC turned down Google's fix: Google's solution received an advisory opinion [from the Federal Election Commission], but the RNC is not signing up for it, publicly calling it a Trojan horse that gives Google too much data, while Google says the only people receiving these emails are the ones who signed up for them. Google is building a model in which users have to sign up to receive the emails, and the RNC is not participating in the pilot program. The pilot program was set up for political action committees and campaigns, and now they are suspicious that it would give their information away.

Gmail is one of the largest email services for individuals, and its whole premise is targeted recommendations. It is also a possibility that if there are certain things I don't like, the algorithm takes them away and puts them in spam. I did read the North Carolina study, and it comes off as anecdotal, as finding what you are looking for. The Federal Election Commission needed to have a role in how we communicate. Can you talk about that?

Google was worried about two things. First, they felt they had to confirm who was a candidate: how, using filed paperwork, could they confirm which campaigns would be able to participate in the pilot program? Google was also worried about the issue of in-kind contributions: would giving extra service, by building the pilot program and whitelisting political campaigns, have to show up on those campaign finance reports? The commission said they were cleared to do it, and Google set it up. The DNC said during the rulemaking that we should not build the pilot program because it gives bad users of email an extra leg up, whitelisting them and delivering straight to inboxes, and yet they have since become a participant in the pilot program.
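As an illustration of the deliverability metric being described here, below is a minimal sketch in Python of how a campaign team might compute inbox placement from its own send logs. The record format, field names, and numbers are assumptions for illustration, not any vendor's actual tooling.

# Minimal sketch: computing inbox placement (deliverability) from send logs.
# Record format and field names are illustrative assumptions.
from collections import Counter

def deliverability_report(send_log):
    # send_log: list of dicts like {"outcome": "inbox"} or {"outcome": "spam"}
    counts = Counter(entry["outcome"] for entry in send_log)
    delivered = counts["inbox"] + counts["spam"]
    if delivered == 0:
        return {"inbox_rate": 0.0, "spam_rate": 0.0}
    return {
        "inbox_rate": counts["inbox"] / delivered,  # share landing in a viewable inbox
        "spam_rate": counts["spam"] / delivered,    # share filtered to spam
    }

# Example: 65 of 100 delivered messages filtered to spam, matching the figure
# cited above for Republican campaign email on Gmail.
log = [{"outcome": "spam"}] * 65 + [{"outcome": "inbox"}] * 35
report = deliverability_report(log)
assert abs(report["spam_rate"] - 0.65) < 1e-9

On numbers like these, the month-end pattern described above, north of 90% dropping to 0%, would show up as a sudden collapse of inbox_rate in a daily report.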
The bigger question about the challenge of the internet is: who is in charge, who gets to decide? Do I get to decide what I am seeing, or do the algorithms decide? Can we talk about the Supreme Court most recently taking up several cases? I want to start with you, because you do a podcast on this. There are two cases the Supreme Court has agreed to hear: Gonzalez v. Google and Twitter v. Taamneh.

The fact pattern in each case involves an incident in which someone was hurt or killed in a terrorist attack, and the petitioners, the plaintiffs, argued in the lower courts that the platforms bore liability because they in some way allowed it to happen: they algorithmically boosted content that spread ISIS propaganda, they allowed people to get in touch with one another, and they helped fund terrorist attacks. That is very broad and not specific to either case, but it is the general fact pattern. The Court took both because they relate to similar issues, but there are different questions in each case. The first concerns the Section 230 protection that shields platforms from liability for user-generated content. The question is whether a platform's use of an algorithm to boost or down-rank content is protected by Section 230. The argument is that while the underlying posts came from a terrorist group, the boosting was done by the platform's algorithm, so the recommendation is not user-generated content but something produced by the platform itself, and therefore, the plaintiffs argue, the platform should be able to be held liable. That is a hugely important case that could reshape how Section 230 works and how the internet functions. The second case has a similar fact pattern, but the specific question deals with the statute under which Twitter would be held liable, the Anti-Terrorism Act. That is a question of statutory interpretation: setting aside Section 230 for the moment, or assuming the platform could not shelter behind it, whether the plaintiffs would be able to recover damages on the theory that the platform contributed to the terrorist attack in some way. There are a lot of nuances, but the bottom line is that these cases could be extremely significant for how we understand the future of the internet.
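To make concrete what "the platform's algorithm boosts content" means in the Gonzalez argument, here is a hypothetical sketch of engagement-based feed ranking. The scoring weights and post fields are invented for illustration and do not describe any actual platform's system.

# Hypothetical sketch of engagement-based ranking. The posts are user-generated,
# but the ordering is produced by the platform. Weights are invented.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    shares: int

def rank_feed(posts):
    # Platform-side scoring: shares weighted more heavily than likes.
    return sorted(posts, key=lambda p: p.likes + 3 * p.shares, reverse=True)

feed = rank_feed([
    Post("a", "benign post", likes=50, shares=2),
    Post("b", "recruiting propaganda", likes=10, shares=40),
])
# The propaganda is promoted to the top of the feed purely by the platform's
# scoring function, not by anything its author did to the ranking.
assert feed[0].text == "recruiting propaganda"

The sketch shows the line the case asks the Court to draw: the posts themselves are user speech, but the ordering is an output of the platform's own code.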
Let's talk about what's at stake. These cases concern terrorist content, but they have implications for user content at large, because if providers like Google can be held liable, they face the risk of liability, and if they face liability they may, as a rational matter, have little choice but to start being much more aggressive in taking down content. If you are potentially liable when someone posts content and there is a complaint that it is supporting terrorism, a rational reaction is: I am simply going to take that down. And if you are a service provider and you do not want to be liable for terrorist content, you might take down all content related to terrorism, including content that is positive; it could be journalism about terrorism. There is a risk that, depending on how the Court rules, you may end up with less free expression as a result. The other point I would make is that Section 230 is not just about the Facebooks and Googles but about all providers, and in some ways it is the smaller providers that are most at risk, because they are the ones that do not have the resources to defend against this litigation and may shut down or stop allowing third-party content. If you are a newspaper, why allow comments from users if you are liable for those comments? Again, they do not have thousands of people to put into content moderation, so they take the broad step of: I am just not going to allow content about terrorism. As an example, there was a small provider who was being told that terrorists were using his platform to spread propaganda, and his response was: I am a small provider, all of this is happening in Arabic, and I do not understand Arabic, so I am just going to block all Arabic content. A lot of that content was beneficial and had nothing to do with terrorism, but that was the only choice he had, because he had no other way to separate out the potentially harmful content.

We have two other cases doing the opposite, from the state of Florida and the state of Texas; to simplify, they want to keep everything up. So how does this all play out, and what are we looking at in the next year? Does the internet become a highly regulated place, or does it all fall down and end in tears? What do you think?

You are right, there are two cases that everyone expects to go to the Supreme Court, or at least one of them, which are First Amendment challenges to laws passed by Texas and Florida, both of which in different ways require providers not to moderate content. In Texas, the law says providers may not engage in what it calls viewpoint-based discrimination in how they moderate content. Sometimes that sounds like a good idea, but when you think about something like suicide, we may want providers to take down content that promotes suicide while leaving up information about resources and places to go if you are thinking about suicide. I think most people would find that to be a good thing. We want providers to take down terrorist propaganda, but we also want people to be able to talk about why terrorism is bad. The question is whether providers have a First Amendment right of editorial discretion, akin to what newspapers have, to choose to leave content up or take it down. I think what Texas and Florida are trying to do takes the spirit of the First Amendment and turns it against content moderation.

As a former CEO said just yesterday, content moderation was never about the speech itself but about the environment it creates. Going back to spam: we have identified spam as something we need to remove from any healthy online marketplace of ideas, or any place to enjoy connecting with friends, because spam can take over any community it is in. We are fine with banning spam, even though the First Amendment protects it. What this gets at is the difficulty of doing content moderation while creating a healthy space. I don't think we can just throw up our hands and say, well, everything out there is valuable, content moderation is too hard, so we have to leave all speech up; that creates instances of radicalization, it lends itself to conversations in which terrorists are activated and radicalized, and it is not worth it.

You have done a lot of shows about this.

I think it is important to understand that these cases touch on different legal issues, even as they all speak to what the internet looks like and how we curate what we see online. That matters because, often in public discussions, people will point to Section 230 as the thing that allows hate speech online. Unfortunately, or perhaps fortunately, that is the First Amendment. Section 230 has to do with a slightly different but still important question. This is a really complicated network of laws and jurisprudence built up over time, full of difficult, fine-grained questions: how do we moderate content in languages we do not speak, and how do we hire enough people and tune algorithms to filter out this and not that?
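The "tune algorithms to filter out this and not that" problem can be pictured with a toy threshold filter. In the sketch below all scores are invented; real systems use trained classifiers, but the tradeoff is the same: lowering the removal threshold to catch more propaganda also sweeps in journalism about terrorism, which is the over-removal risk described earlier.

# Toy sketch of threshold-based moderation. Scores are invented placeholders
# for what a trained classifier would produce.
POSTS = [
    ("recruitment propaganda", 0.92),        # should come down
    ("news report on an attack", 0.55),      # journalism about terrorism
    ("essay on why terrorism is bad", 0.40), # counter-speech
    ("cooking video", 0.02),
]

def moderate(posts, threshold):
    # Keep anything scoring below the threshold; remove the rest.
    return [text for text, score in posts if score < threshold]

print(moderate(POSTS, threshold=0.9))  # removes only the propaganda
print(moderate(POSTS, threshold=0.5))  # a liability-averse setting also removes the news report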
I think this space is often a really difficult one because, at the same time as it touches on very high-altitude questions that a lot of people care deeply about, free speech, how they choose what they see and say, it has become quite politicized: you often see Democrats calling for platforms to take content down and Republicans calling for them to keep it up. It becomes quite hard to move from that very abstract level down to the nitty-gritty of how we are going to legislate this and implement it in a way that means I can still use social media platforms to communicate with my friends without being drowned out. That is crucial to understand, but it is also why it is so difficult to untangle these questions.

Another layer of complication is that we are not just talking about the United States but about a global system, and the First Amendment does not apply outside the United States. In Europe, you have the Digital Services Act, which imposes content moderation obligations that could not be imposed here under the First Amendment but that platforms have to obey. Even Elon Musk, when he talks about free speech, has to obey European law. You will have contradictory regimes throughout the world, and from a technical standpoint it is difficult to run a service where you provide certain content in one place and different content in another. So it will not just end at the Supreme Court; the question is how platforms deal with competing, contradictory regimes.

On transparency and the challenge of algorithms: the algorithms are said to be the platforms' secret sauce. Now we have seen, in a very positive fashion, safety officers becoming more prevalent in companies, especially the social media companies. We tried to get someone from TikTok on but were not able to; they have created some kind of safety center. I looked at another platform to see if they had one, and they did not. Is there a balance on transparency that would help with the whole situation, or am I just being an optimist?

I think transparency is a positive thing. We have the Santa Clara Principles, which lay out in detail the kinds of transparency that would be useful and would inform public policy decisions: knowing not only what rules apply but what the consequences are, how much content platforms are taking down, in what categories, in response to government requests versus of their own volition, and, in greater detail, how they apply those rules. All of that would be helpful, because it would allow us to have more informed discussions from a policy and legal standpoint and to better understand the ramifications if we change Section 230 in this way or interpret the First Amendment in that way. There could be such a thing as too much transparency, but we are not even close to that in any general sense.
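As a concrete picture of the kind of reporting the Santa Clara Principles call for, here is a minimal sketch that aggregates takedowns by rule category and by who initiated the action. The record format and category names are illustrative assumptions, not the principles' own schema.

# Minimal sketch of a transparency-report aggregate: takedowns broken out by
# rule category and by who initiated them. Field values are illustrative.
from collections import defaultdict

def transparency_summary(actions):
    # actions: iterable of (category, initiator) tuples, e.g.
    # ("terrorism", "government_request") or ("spam", "platform").
    summary = defaultdict(lambda: {"government_request": 0, "platform": 0})
    for category, initiator in actions:
        summary[category][initiator] += 1
    return dict(summary)

report = transparency_summary([
    ("spam", "platform"),
    ("spam", "platform"),
    ("terrorism", "government_request"),
    ("terrorism", "platform"),
])
print(report)
# {'spam': {'government_request': 0, 'platform': 2},
#  'terrorism': {'government_request': 1, 'platform': 1}}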
The fine line is the challenge, because we do not have an industry perspective from one of the social media platforms here, but their answer would be: if you know too much about the algorithm, you are able to game it. That is the reason they gave for pulling back APIs, that providing too much information in real time just activates bigger and better bots. The problem was identifying the threats and trying to identify just how ma