Good morning. Disinformation, misinformation, freedom of speech, and our First Amendment rights are being questioned, specifically with today's internet tools and the platforms that run social media. Who decides the answers to these questions? Should it be Congress or the Supreme Court? Our citizens are seeing their views and dislikes amplified by social media, which is not willing to take responsibility for the discord in today's environment. It is often commented that what the statute calls interactive computer services represent an extraordinary ability to advance education and information. With the internet's growth, and with resources around the world helping with entertainment services and culture, come concerns about misuse of the power of the internet. When we look to other countries, we find the United States has spent the last decade mired in election cycles where the internet is the concern, turned into a tool that sows discord among our own citizens. These are questions we will discuss at today's event, recognizing that keeping the free flow of information going is not only up to the United States but is a global question. A senior editor and a contributing writer at The Atlantic, she has written for and appeared in The New York Times as well as The Washington Post, where she served as an editorial writer. She is a cohost of the Arbiters of Truth podcast. Spencer Davis is a campaign and policy officer, a founding partner, most recently the political director for Senator John Cornyn's election and the executive director of Texas Victory. A graduate of Texas A&M University, he began his career as a data director for the Republican National Committee. The director of policy at the Center for Democracy and Technology, he oversees the execution of the policy agenda and strategy and represents the organization with policymakers, civil society, industry, and the media, bringing 25 years of experience in government and in law, policy, and regulation. He served in the Obama administration as associate deputy attorney general and as senior director for cybersecurity policy at the National Security Council. He was an architect of the presidential policy directive on United States cyber incident coordination and the executive order on malicious cyber-enabled activities. We have a really good group of experts here today. If you want to participate in the discussion, those of you still on Twitter should use the hashtag or email us. Let's start with this: we have an election coming up on Tuesday. Right off the bat, we have the RNC concerned they are being suppressed. They have sued Google over this, and there is a lot of confusion about how and why this is happening. Would you start with the basics of why they think they are being suppressed and give us a baseline? It is a question of email fundraising. We have to remember the fundamental tools of a campaign today, whether statehouse or presidential. We can talk about digital advertising, marketing, door knocks, and email messages, but none of that happens without email fundraising. From 2016 to early last year, we were seeing exponential growth, and on the Republican side that is due to more email fundraising. As we expanded into this, we have now been down these last three to four quarters, quarter to quarter, and that is not comparing to presidential years. Some of that can come from a spam issue. You are saying the response rate is down?
There was a suspicion that a lot of the emails had a bad deliverability rate, and that is: how many are showing up in a viewable inbox and how many are going to spam. They have five staffers completely devoted to this question of deliverability. They are seeing something I saw in my campaigns, and that was confirmed by a study from North Carolina that said you are seeing on Gmail a disproportionate amount of emails from Republican campaigns going to spam when compared to Democratic campaigns. For Democrats it is below 10% and for Republicans it is 65%. That is a concern as we are trying to speak to our base and build fair elections and campaigns. It has come to a head with this RNC lawsuit. They have been speaking to Google for 10 months. They have staffers devoted to it. Deliverability is normally north of 90%, but you will see at the end of every month it drops to 0%, and the end of the month is the most important because that is when you are getting the most dollars per email. Google told the RNC that it is content, frequency, or user input, and that is what the Gmail service uses to build a spam filter. But the RNC said they sent the same content and one group got 100% and one group got 0%. They are comparing how many emails they are sending, and they don't think they are doing more than the Democratic side. We can see through the Google tools that the spam report rate is below 1%. So, not feeling that they were given enough, Google's solution got an advisory opinion, but the RNC is not signing up for that, publicly saying it is a trojan horse that gives Google too much data, while Google says the only ones receiving emails are those who signed up for them. I think Google's rebuttal is that people had to sign up to receive the emails and that the RNC is not participating in the Pilot Program. The Pilot Program was set up for political action groups and campaigns, and now they are suspicious of giving their information out. Gmail is one of the largest email services for individuals, so the whole premise is to have targeted recommendations. It is also a possibility that there are certain things I don't like and the algorithm takes them away and puts them in spam. I did read the North Carolina study, and it comes off as anecdotal, or finding what you are looking for. The FEC needed to have a role in how we communicate. Can you talk about that? Google was worried about two things. They felt that they had to confirm who was a candidate. So, using filing paperwork, they could confirm which campaigns would be able to participate in the Pilot Program. Google was also worried about the issue of contributions: would giving extra service by building the Pilot Program and whitelisting political campaigns have to show up on those campaign reports? They were told they were cleared to do it, and they set it up. The DNC during the rulemaking said we should not build the Pilot Program because it gives bad users of email an extra leg up; it is whitelisting and delivering straight to inboxes. And they have since become a participant in the Pilot Program. The bigger question of the challenge of the internet is: who is in charge, who gets to decide? Do I get to decide what I'm seeing, or is it the algorithms? There are a lot of people deciding things who aren't us. Can we talk about the Supreme Court most recently taking up several cases? I want to start with you because you do a podcast. There are two cases the Supreme Court has agreed to hear; the first is Gonzalez v. Google, and the second is a companion case involving Twitter.
The fact pattern in each case involves incidents where someone was hurt or killed in a terrorist attack, and the petitioners, who were the plaintiffs in the lower courts, argued that the platforms had liability for this because they in some way allowed it to happen: they algorithmically boosted content that spread propaganda from ISIS, they allowed people to get in touch with one another, and they helped fund terrorist attacks. This is very broad and not specific to either case, but it is the general fact pattern. The court took them because they relate to similar issues, but there are different questions in each case. The first concerns the Section 230 protection that shields platforms from liability for user-generated content. The question is whether a platform's use of an algorithm to boost or downrank content is protected by Section 230 or not, the argument being that while the propaganda came from a terrorist group, the content was boosted by the algorithm, and that is not user-generated content but something produced by the platform itself, and therefore the plaintiffs argue the platform should be able to be held liable. That is a hugely important case and could reshape how Section 230 works and how the internet functions. The second case has a similar fact pattern but a specific question about the statute under which Twitter would be held liable, the Anti-Terrorism Act. That is a particular question of statutory interpretation: setting aside Section 230 for the moment, or accepting that the platform could be held liable notwithstanding Section 230, whether the plaintiffs would be able to recover certain damages on the theory that the platform contributed to the terrorist attack in some way. There are a lot of nuances, and the bottom line is that these cases could be extremely significant in how we understand the future of the internet. Let's talk about what's at stake. These cases concern terrorist content but have implications for user content at large, because if in fact providers like Google can be held liable, they face the risk of liability, and if they face liability, they may as a rational matter have little choice but to start being much more aggressive in taking down content. If you are potentially liable when someone posts some content and there is a complaint that it is supporting terrorism, a rational reaction would be to say, I am simply going to take that down. If you are a service provider and you do not want to be liable for terrorist content, you might take down all content related to terrorism, and that would include positive content also; it could be journalism about terrorism. There is a risk that, depending on how the court decides, you may end up with less free expression as a result. The other point I would make is that it is important to realize that Section 230 is not just about the Facebooks and Googles but about all providers, and in some ways it is the smaller providers most at risk, because they are the ones that don't have the resources to defend against this litigation and may shut down or not allow third-party content. If you are a newspaper, why allow comments from users if you are liable for those comments? And again, they don't have thousands of people to put into content moderation, so they take the broad step of: I'm just not going to allow this content at all. As an example, there was a small provider who was being told that terrorists are using your platform to spread propaganda, and his response was, I'm a small provider.
All of this is happening in Arabic and I don't understand Arabic, so I am just going to block all Arabic content. A lot of that content was beneficial and had nothing to do with terrorism, but that was the only choice they had, because they had no other way to separate out the potentially harmful content. We have two other cases doing the opposite, which is the state of Florida and the state of Texas, and to oversimplify, they want to keep everything up. So how does this all play out, and what are we looking at in the next year? Does the internet become a highly regulated place, or does it all fall down and end in tears? What do you think? You are right. There are two cases that everyone expects to go to the Supreme Court, or at least one of them, which are First Amendment challenges to laws passed by Texas and Florida, both of which in different ways require providers not to moderate content. In Texas, the law says providers should not engage in what it calls viewpoint-based discrimination in terms of how they moderate content. Sometimes that sounds like a good idea, but when you think about something like suicide, we may want providers to take down content that promotes suicide while leaving up information about resources and places to go if you are thinking about suicide. I think most people would find that to be a good thing. We want providers to take down terrorist propaganda, but we also want people to be able to talk about why terrorism is bad. The question is: do providers have a First Amendment right, an editorial discretion akin to what newspapers have, to choose to leave content up or take it down? Should providers have that? I think what Texas and Florida are trying to do takes the spirit of the First Amendment and turns it against content moderation. A former CEO put it well just yesterday on what content moderation is: it was never about the speech itself but about the environment it is creating. Going back to spam, we have identified spam as something we need to remove from any kind of healthy online marketplace of ideas, or just a place to enjoy connecting with friends, because spam can take over any community it is in. We are fine with banning spam, but the First Amendment protects it. What this gets at is the difficulty of how to have content moderation while creating a healthy space. I don't think we can just throw up our hands and say, well, everything out there is valuable. Nor can we say content moderation is too hard and we have to allow all speech, because that will create instances of radicalization; it lends itself to conversations where terrorists are activated and radicalized, and it is not worth it. You have done a lot of shows about this. I think it is important to understand that these cases are touching on different legal issues, even as they all speak to what the internet looks like and how we think about curating what we see online. That is important to understand because it is a reminder that often in public discussions people will point to Section 230 as the thing that allows hate speech online. Unfortunately, or perhaps fortunately, that is the First Amendment. Section 230 has to do with a slightly different but still important question. It is a really complicated network of laws and jurisprudence built up over time, full of really difficult, fine-grained questions: how do we moderate content in languages that maybe we don't speak, and how do we hire enough people and tune algorithms to filter out this and not that?
I think this space is often a really difficult one because, at the same time as it touches on very high-altitude questions that a lot of people care very deeply about, free speech and how they choose what they see and say, it has become quite politicized, where you often see Democrats calling for platforms to take content down and Republicans calling for them to keep it up. It becomes quite hard to move from that very abstract level down to the nitty-gritty of how we are going to write this and implement it in a way that means I can still use social media platforms to communicate with my friends and not be drowned out. That is crucial to understand, but it is also why it is so difficult to untangle these questions. Another layer of complication is that we are not just talking about the United States but a global system, and the First Amendment doesn't apply outside of the United States. In Europe, you have the Digital Services Act, which has content moderation obligations that could not be imposed here under the First Amendment but that platforms have to obey. Even Elon Musk, when he talks about free speech, has to obey those laws. You will have contradictory regimes throughout the world. From a technical standpoint it is difficult to run a service where you provide certain content in one place and different content in another. It will not just be the end stage at the Supreme Court; it will be, how do platforms deal with those competing, contradictory regimes? On transparency and the challenge of algorithms: the algorithms are said to be the platforms' secret sauce. Now we have seen, in a very positive fashion, safety officers becoming more prevalent in companies, especially the social media companies. We tried to get someone from TikTok on but were not able to make it work; they created some kind of a health center. I went to look at whether others had one, and they didn't. Is there a balance on transparency that would help with the whole situation, or am I just being an optimist? I think transparency is a positive thing. We have the Santa Clara Principles, which lay out in detail the kinds of transparency that would be useful and would inform public policy decisions about this: knowing not only what rules they apply but what the consequences are, how much content they are taking down, in what categories, and whether in response to government requests or of their own volition, and, in greater detail, how they apply those rules (a minimal sketch of that kind of reporting appears below). I think all of that would be helpful, because it would allow us to have more informed discussions from a policy and legal standpoint and to better understand the ramifications if we change 230 in this way or interpret the First Amendment in that way. There could be such a thing as too much transparency, but we are not even close to that in a general manner. The fine line is the challenge, because we don't have an industry perspective on the panel, but for one of the social media platforms the answer would be: if you know too much about it, you are able to game it. That is the reason they gave for pulling back APIs, that we provide too much information in real time and that just activates bigger and better bots. The problem was identifying the threats and trying to identify just how many were out there. I think from the industry perspective it will always be: everything we provide is good transparency, and everything you require is too much, bad transparency. But I think there is a much better middle ground that we haven't reached yet.
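As a concrete illustration of the numbers-first reporting the Santa Clara Principles contemplate, here is a minimal sketch in Python. It is only a sketch under stated assumptions: the Takedown record, its field names, and the sample categories are hypothetical, not any platform's actual schema or API.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical takedown record; field names and categories are
# illustrative only, not any platform's real schema.
@dataclass(frozen=True)
class Takedown:
    category: str   # e.g. "spam", "terrorism", "harassment"
    source: str     # "own_volition" or "government_request"

def transparency_tally(takedowns):
    """Count removals by (category, source): the kind of aggregate
    numbers the Santa Clara Principles ask platforms to publish."""
    return Counter((t.category, t.source) for t in takedowns)

if __name__ == "__main__":
    sample = [
        Takedown("spam", "own_volition"),
        Takedown("spam", "own_volition"),
        Takedown("terrorism", "government_request"),
        Takedown("harassment", "own_volition"),
    ]
    for (category, source), count in sorted(transparency_tally(sample).items()):
        print(f"{category:12s} {source:20s} {count}")
```

The design point is modest: once takedowns are logged consistently, category-by-source aggregates like these are cheap to produce and publish.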
I will join that cause. There has really been a movement among major social media platforms toward greater transparency in recent years. It is fair to say Twitter has been at the forefront in many different ways in providing reports about how policies are implemented, and often they would say they were testing content moderation ideas. You may remember, if you used Twitter, there was a while where if you went to retweet a tweet with a link you hadn't clicked on, it would prompt you to click on it. They were very open about it: we are testing different things to see what works to make our platform healthier and better. One thing I am keeping an eye on under Elon Musk's version of Twitter is what will become of that transparency team. We are seeing a trend toward transparency, and I don't think that if it ends at Twitter it ends elsewhere. But it is worth paying attention to. Going back to the two First Amendment cases we are expecting to hit the Supreme Court: there are transparency provisions in the Texas and Florida laws. They contain provisions that tell platforms how they can and can't moderate, which is how we have been talking about them, but there is also language that says platforms have to make certain information public. We can quibble over whether the particular way those are designed is ideal and whether they require too much or too little. The platforms themselves are arguing that those requirements violate the First Amendment along with the requirements on content moderation. I do think there will be an interesting question; most of the attention has been on the content moderation provisions, but the Supreme Court may consider how the First Amendment interacts with transparency provisions. The platforms are arguing that if you asked The New York Times or The Washington Post to give you a window into their editorial deliberation, that would be protected. I think there is an alternative argument where you could say these are not quite the same as a newspaper. There is something different, and perhaps there is a middle ground we can find, where a level of transparency can be a requirement. It is notable to me that the tech companies involved in this litigation are pushing back against that as hard as they can. Transparency requirements are the best-case scenario for them, and I think they will wake up and realize that after these cases are decided and after we get through whatever 230 reform may come in Congress this session. Thinking of 230 in the abstract, it is the greatest small-government conservative experiment of all time. Twenty-five years ago we said we were not going to regulate; not totally throwing our hands up and having no rules, but for the most part taking liability away except in the worst cases and being able to ascribe liability in those cases. These transparency requirements are a good middle ground, where it is not that we are going to force you to develop a product that doesn't exist yet in order to fight whatever we decide is bad content moderation; it is, you need to tell us more about what you are doing and what is happening. I think that is the best-case scenario for the industry. Whatever happens in the United States, there will be extensive transparency requirements in other places. It is hard to believe that big providers will provide transparency in Europe and not the equivalent in the United States. I don't think that will be a tenable position to take, and I think transparency is coming one way or another.
So, Twitter and Elon Musk. It is interesting because part of me, on Twitter, says I am a free marketer and it is a petri dish. We were talking about the blue check that used to mean something, and now it means you paid Twitter, so will that dissolve? Versus the Digital Services Act, where you have an element of regulation; the attempt to make something mean more usually slows down the process while people try to figure out how to game whatever that is. We are getting an interesting clash, in multiple venues, of regulation versus all these things that are fundamental, at least in America, around free speech. How do we, with a global world view, and I appreciate you bringing Europe in, make sure what we do doesn't tip the scales too hard so that the whole thing breaks? I don't know that anyone knows the answer to that. We are embarking on a great experiment, both in Europe and, depending on the Supreme Court, here in the United States as well. There is dissatisfaction with how the big social media providers in particular are handling these issues, and also a feeling of: why is it that Elon Musk or Mark Zuckerberg should be the one exercising this much influence over our public discourse, which is what they do? No one elected them, and it doesn't seem like they are the right key decision-makers. But there are clear dangers in the government regulating speech, which is why we have protections here in the United States. Maybe the public safety answer is: put your device down and go outside for a minute. We were also talking about the idea that Mark Zuckerberg had no idea this would have to be managed at such a consequential level. Does the idea of a common carrier for these platforms help the conversation? I don't think the large social media platforms are common carriers, for the same reason I don't think they are monopolies. In most cases they just don't charge anything. You need to quantify what a monopoly is, and I'm not convinced by any argument that doesn't include competition. I think you have to say why x, y, z company is a monopoly. It is more complicated than that, but common carrier law sometimes is just us declaring something a common carrier. I think that is coming to a head with the RNC issues, where there were extensive citations of laws attempting to peg Gmail as a common carrier. That is what we are seeing in Europe. On the larger point of the Digital Services Act, the headline was "EU Commission makes new rules," and I am not particularly happy about it. I think they are treading on dangerous ground where they are getting too involved in the market. What they are going to end up doing is that the only people able to follow these new rules are going to be the large companies. They try to cut the middle ground by drawing a line of who needs to answer to these regulations, and everybody has a different way of doing it. There are probably 50 different bills on 230: is it headcount, monthly active users, gross annual revenue? I don't think we are at a place where we can go through and say, here are the people and this is the bucket that needs to follow the new rules, and here is the bucket that doesn't have to yet. What is content moderation about? I agree it is about the creation of communities, and different types of communities: one might be for children, one might be a free-for-all, and one might focus on a subject. If you have common carriage, it requires you to carry everything, and I think the different communities go out the door.
There is, I think, a misunderstanding that if you allow everyone to speak, then that is free speech at large. It is more complicated. Part of what happens online is significant harassment, trolling, and other negative kinds of behavior that make it so others can't speak online. If you allow everyone to speak, that doesn't necessarily mean everyone will have an equal voice, because there will be people who, as a result, are forced offline or don't feel free to speak. I think that is really important. Several news organizations have put out information showing that women of color running for political office receive by far the most harassment online, and that is a great example of how some speech can drown out other speech. These are cases in which somebody decides to throw their hat in the ring and participate in the conversation, they want to run for public office, and they are hit with a wave of abusive, harassing, violent speech, and in some cases they decide, I don't want to be a part of this conversation anymore. You see that, for example, on Twitter, where people say that if more people come on here and are allowed to engage in abusive, harassing behavior, I am gone; I don't want to be here anymore. That is an example of how allowing a free-for-all can actually drive some people out of the room, and I think it drives home the point that it is not the case that more speech is always better; it is a situation where we have to think very carefully about how we curate these environments and communities. And moderation usually starts with the best of intentions. Then it becomes the majority of what you do on your platform, and everything is a decision, and I think we are asking a lot of people to decide things. We have given people both a microscope and a megaphone, and they are using both at a level they never expected, and we are not sure which to dial up or down. There is this perpetual information flow, especially now that you're managing it at such volume. You can see two people in the same 24 hours looking at completely different information, and when they amplify it, they are not doing any cross-check on that information. We have so many things that are important, which brings us to: we are in Washington, D.C. What do we think Congress is going to do about this? They don't like to be shown up by the Supreme Court that often, because that is where we are headed. The difficulty is that, on the one hand, you can find Republicans and Democrats who say Section 230 is terrible and we need to get rid of it, but they have very different reasons and outcomes they are looking for. By and large, Republicans want to get rid of Section 230 because they want less content moderation. Democrats want to get rid of Section 230 because they want platforms to be more aggressive. There is consensus that Section 230 needs to change but differences in why, and that makes change very difficult. Other thoughts on congressional action? I will frame my answer differently and go so far as to say what they should do, and I feel like sometimes I am maybe the last person in D.C. who thinks Section 230 is good as it is. It is just one of the most powerful conservative experiments of all time. If I was trying to answer the question of what they should do if they have to do something, I think we need to take it in order of what is able to be accomplished today and stay focused on what is technically feasible today.
One of the things that can change right now is the whole conversation out of 2016 about Russia funding political speech and political advertising on Facebook. I don't know if it was ever answered in a satisfactory way. There are know-your-customer rules that can be required from a liability standpoint, in that we can make a new carve-out from 230 as we have done with copyright infringement. I think that is a step, only one of the realistic steps that can be taken now. The next goal for what could be accomplished is creating liability for bots and other fraudulent actors, though, going back to Elon Musk's struggles identifying how many bots were on Twitter, I don't know if that is answerable in the next five years. The bigger question for Congress is algorithmic liability, and I don't see any way from a technical standpoint to implement liability, because the technology, especially in natural language processing, isn't there. We have seen in copyright that it is a simple technical question: this is the content that is copyrighted, and this content looks a lot like it, even if it has been zoomed in or zoomed out or the color has changed. It can identify that (a minimal sketch of that kind of matching appears after this exchange). I don't think we have anything sophisticated or powerful enough yet to be able to understand the context of political conversation, especially if it is text based. We have a report on that as well, on the limitations of automated content analysis, even for something narrow like child sexual abuse material. There are good tools out there that can identify known child sexual abuse material. People have seen a terrible picture before, it is in a database, and they can see if it has been recirculated, but there aren't good tools for identifying new child sexual abuse material. And once you go beyond that to political discussions, it is almost impossible, because all the contextual factors make such a difference. On the algorithmic point, one of the points I hope will become clear to the court is that this notion that you can separate out amplified content as a small subset of content and leave Section 230 in place for everything else is false. Almost everything on the internet is a product of an algorithm. Google is recommending sites in response to your query, so if you say recommendations are no longer protected by Section 230, what about the Facebook feed, which is engineered in different ways? It is boosting your friends' posts as opposed to random people's. Everything we see on some level is some form of product of an algorithm or recommendation. That reading would essentially gut Section 230. We have seen proposals in recent years to exempt algorithmic amplification in some form from Section 230 protections. The question will always be, how do you define the boundaries? We have a great example of how this can go terribly wrong, which is a law that created carve-outs from 230 protections for content related in some way to sex trafficking. We wouldn't want that content to have protections, but the problem is that the law was written in a way that was incredibly broad and incredibly convoluted at the same time, and platforms didn't know what they needed to take down or not, and they removed a lot that didn't necessarily need to be taken down, which had negative impacts on certain communities.
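The known-content matching described above, for copyrighted works and for previously seen child sexual abuse imagery, is commonly built on perceptual hashing (PhotoDNA is the widely cited example for the latter). Below is a minimal difference-hash ("dHash") sketch in Python; the function names are illustrative only, and the "images" are synthetic grayscale grids rather than real files or any vendor's implementation.

```python
def dhash_bits(gray, hash_w=8, hash_h=8):
    """Difference hash of a grayscale image (2D list of 0-255 values).

    Downsample to (hash_w+1) x hash_h by nearest-neighbor, then record
    whether each pixel is brighter than its right neighbor. Resizing or
    uniform brightness shifts barely change these local comparisons,
    which is why this family of hashes survives zooming and recoloring.
    """
    h, w = len(gray), len(gray[0])
    small = [[gray[y * h // hash_h][x * w // (hash_w + 1)]
              for x in range(hash_w + 1)]
             for y in range(hash_h)]
    return [int(row[x] > row[x + 1]) for row in small for x in range(hash_w)]

def hamming(a, b):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(x != y for x, y in zip(a, b))

if __name__ == "__main__":
    base = [[(x * y) % 256 for x in range(64)] for y in range(64)]
    brighter = [[min(255, v + 40) for v in row] for row in base]      # recolored
    zoomed = [[base[y // 2][x // 2] for x in range(128)] for y in range(128)]
    print(hamming(dhash_bits(base), dhash_bits(brighter)))  # 0 on this data
    print(hamming(dhash_bits(base), dhash_bits(zoomed)))    # 0: survives 2x zoom
```

Because the hash records only local brightness comparisons, a zoomed or recolored copy stays within a small Hamming distance of the original (for real images, small nonzero distances are typical). That is why matching known content is tractable, while judging new, context-dependent content is not.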
Going back to that sex trafficking law, I think it is an example of how, if Congress says we would like to create a carve-out of some kind, there really needs to be specificity about how we define what is carved out, making sure platforms know what they can and cannot do, because otherwise you risk opening a Pandora's box, in the same way I worry the Supreme Court might do in Gonzalez. The challenge of laws being a lagging indicator is something to get in front of. The intent was good, but the ability to figure it all out has become exceptionally challenging. I cannot believe there are no questions. I did get a note saying the internet was telling me I was wrong about something, that Facebook has an election monitoring center. I guess, what do we think, is 2023 going to be a hellscape, or where are we headed? I am optimistic. That is good to know. I think in 2016 the social media platforms got sucker punched. In 2018 they were catching up. In 2020 they did some things right. It is hard to say, because the conversation in the last two years has focused on the most dangerous form of election disinformation possible, which is trying to convince people their vote didn't count. If you break it down to the basic point, democracy is giving people the opportunity to go to the polls every two years and say, well, I generally agree with everything that happened in the last two years, or I generally disagree with it. Saying that ballots were being thrown out in Maricopa County, or that 2,000 votes showed up overnight, goes right at that. But people's trust that their vote counts is actually very solid. If you ask people generally, do you think elections are fair in the United States, you will not get a confidence-building answer, and that conditions or qualifies my optimism. But if you ask them, what is your confidence that your local elections are run well, it is always above 85%. That was true in October 2020 and November 2020, and it is true as of a couple of weeks ago when Pew Research asked the question again. So in that specific situation, I think there is limited ability for disinformation to change people's minds about how the votes are counted. They still trust that they counted. To the point earlier, a lot of disinformation is targeted specifically at individuals; I think your example was candidates of color. There has been turnover in election administrators. But by and large, from some of the polling we are seeing, threats and harassment are not keeping people from serving as election administrators. Do we feel like democracy is being challenged by all of this? We have so many things in play at the same time. Next Tuesday, we have disinformation campaigns going on, and it is hard to tell what is true and not true in campaigns. We also have people showing up to "protect" ballot boxes because they think something is going to happen. We have also seen some states change rules, not trying to make everything happen on one day and being able to use tools so there isn't a concentrated point of people getting into one space that may not be, at the moment, the safest place to be. Any thoughts about this coming Tuesday and what we will see over the weekend? I have to take a little less optimistic view.
I do think there is a chance, and it may well be that people remain truly confident about local elections, but I do think there is a broader skepticism about election integrity more generally, and if we ever get to a point where a big chunk of Americans feel that if their candidate or party has lost, then the election was illegitimate, that undermines a fundamental premise on which our democracy is based. I feel like we may be getting to a point where there is a big chunk of people who, if their candidate loses, think the election is illegitimate. I think the disinformation and misinformation we see online continue to contribute to that belief. I will second that. I do think that the overlay of existing political problems and growing extremism really underlines that content moderation can certainly help, but we are not going to be able to content moderate our way out of this problem. The underlying issues are fundamentally political questions. One of the ways things have changed since 2016: there was a sense, particularly in the context of the Russian interference in that election, that these campaigns were coming from abroad and that if you could bar Russian posters, that would fix the problem. What we have seen in recent years is that that is unfortunately not true, and it is far harder to deal with a Pennsylvania voter who believes the 2020 election was stolen and is posting that on Twitter; moderating that is a very different question from taking down a tweet from someone in St. Petersburg pretending to be a voter. For that reason, these are very tough questions. I always wonder if the Russians even need to keep at it at some point; they have been working on us so long. [laughter] Any questions from the audience? Can you please identify yourself? I am a master's student from Georgetown. Social media platforms allow users to shape feeds based on likes or shares and who they block, in a way that creates the illusion that everyone agrees with them and their opinions. Should social media platforms be obliged to modify the algorithm itself, to temper the AI, to make users experience feeds that are less of an opinion vacuum? It makes me think, it is a good reminder that not everybody sees the same thing at the same time, and we should do this without making it seem like an opposing opinion is just a troll. Someone else's feed might be opinions that are valued to them, and to another person those may seem like spam. Maybe they could recommend articles. It is a good question, because the system is designed to make you see what you want to see. There has been some interesting academic research suggesting that effect may not be as strong as we thought. Perhaps there is some small portion of people who do craft filter bubbles for themselves, but the rest of us not so much; or perhaps the ability to curate your feed doesn't affect your political perceptions as much as you might think. Which underlines that a lot of these are empirical questions that have empirical answers, and the issue is that things are moving so quickly, the content moderation question keeps changing, and it takes a while for researchers to come out with a paper. I do think it is a very good question, and I will be interested to see if the research continues in that direction or if we move back toward the filter bubble idea. We usually find something interesting five years too late. A question over here? I work for a tech startup that focuses on election-related disinformation. I have been focusing on Section 230 and regulatory regimes, and this has been helpful.
Bringing that into context, I have noticed that a lot of 2020 was about online conversation, whereas in 2022 it seems like it is online calls to action, whether it is ballot watchers or people taking photos of people they suspect of being trolls. Have you experienced that, and do you think that real-world impact has a chance of driving the policy discussion in a more urgent fashion, whether in Congress or at the Supreme Court? I think Shane mentioned this earlier about elections, maybe in 2020. Some of the issue was ballots that were not counted early enough. I think that is symptomatic of the larger conversations about content moderation and 230: we have responsibilities as citizens, and citizens are, in a philosophical way, all members of the government who are supposed to be able to enforce these things, and we cannot, because it is hard to see where the stuff is happening. I am saying that because I am thinking forward to Tuesday. Shane mentioned these ballots not being counted early enough; that is true of at least 33 states in the United States, where they aren't allowed to count mail ballots ahead of time. Twenty-three can't do it until the morning of the election, around 7:00 a.m., so next Tuesday, and there are at least 10 still where they cannot count them until after the polls are closed. Disinformation thrives in those information vacuums, and I think that is what created a lot coming out of 2020: things like 2000 Mules, claims the election was stolen, and let's go down there and point cameras at buildings for three weeks. Right? [laughter] That is where that derives. I think it requires a real-world answer. I do not know if we will be able to moderate that out. In terms of the interaction between online and the real world, I think another really terrifying example is actual violence against candidates. Obviously, Paul Pelosi being attacked is a prime example, but you see people feeding on misinformation, disinformation, and conspiracy theories online, and that becoming a motivation for them to actually act offline in violent ways. That is a scary thought; if we see more of that kind of thing happening, it will have to drive the conversation and make it more urgent, I think. We have time for one last question. Mark, communication researcher, and I work for a tech company called Vertical Knowledge; we are responsible for monitoring narratives and those kinds of things, so this is a semi-large, winding question, but I will keep it short. Education: one of the premises of 230 is that the educational value of diverse information offsets the other side of the equation. Is the educational side of the problem brought in enough? For example, young people: as you mentioned, it is a pretty political problem we have fundamentally. If young people don't even have the executive functions working properly to filter, content moderation is also part of protecting their access and entry point into dialogue, into discussion, into this philosophical inclusion in government we share. There is a big educational side to this, yet 2000 Mules and the things we look at are so of-the-moment. Postman and others focused a lot on the communication side of onboarding young people into politics. Is there enough thinking about the cognitive dimension and the educational dimension as we talk about free speech?
In security, counterterrorism, and economics, we have learned that there is a long-term program we need to study, specific to: can people even realistically, cognitively, do their own moderation? When the framers were founding the country, they assumed newspapers and education; Tocqueville, who spent a lot of time on this, assumed a level of knowledge that does not exist now, so we have to think about the cognitive scaffolding to bring the level of understanding up so we can use the internet responsibly, and within that, talk about regulation. I know many books are being written on this all the time, but my fundamental question is: are there enough educational stakeholders in the mix on this specific conversation now to weigh in, or are we getting too lost in the weeds of the technology of the moment and those sorts of things? And will that set us back in five years, so that we are kind of chasing our tail? Whatever the new 2000 Mules is, we end up doing the same thing over and over, and maybe the market fixes it, but maybe there is something more that can be done with educators involved. I think a lot of the correct answers to these questions are out there. I think there are good takes and good evidence for why something like 2000 Mules is not correct. I think there are good takes and evidence in situations like Maricopa County, where the claim was that Sharpies were given only to conservatives so those votes would not be counted. I think it is less the availability of education and more how it is delivered. I think it needs to be something in real time. As the old Twitter would call it, prebunking, where you are trying to educate folks, but they have to pay attention to it. Let's go back to Maricopa County: the administrator there, two weeks before the election, said Sharpies could be used, and they showed the process of a Sharpie-marked ballot going through the counter and working. So they effectively prebunked it, and I looked at the video, I think three days ago, and the election administrator had something like 200 subscribers and I think the video had something like 2,000 views. So it is an availability problem; people don't see it. That is why I am a fan, if we are talking about tools that social media platforms can use, of doing it in real time, meaning we need more information flags. I liked the old Twitter prompt that you have to read the article before you retweet it. It kept me honest sometimes. [laughter] I think the answers are out there; it is about delivering them in real time, and I think that is a place social media platforms can be more engaged. I have seen a lot of conversations happening about the role of civic education as an important factor. I completely agree it is crucial that people understand how elections are administered, down to the nuts and bolts of how government works. I think there is a good argument that it could undercut some of the falsehoods out there. That said, my worry is that civic education sounds great as a solution in the abstract, but the problem is always going to be in the details, especially when you look at the really extreme anger, frustration, and polarization around education in recent years, Critical Race Theory or whatever issue parents are concerned about. I do worry that shifting the focus to civic education will just lead to replaying the same conversations in a different context, which does not mean it is not important, but it does mean I think it is not a cure-all.
I think you raised an interesting point, in the sense that, from a policy standpoint, there is a lot of attention to children online, but it tends to be more from a safety standpoint, and that is an important conversation. It could be that the policy conversation around children and being online should encompass not only the safety aspect but also the educational aspect, thinking about what things we could be doing from an education and literacy standpoint. I think you ask a lot of good questions. It is a reminder to say that AEI has a content moderation project where we are working in collaboration with Brookings, working quite a bit with some of our colleagues in this area. I hope you will continue to stay tuned. Please join in on the conversation, because this is about us trying to figure this out collectively as we are going through it. Thank you all for being in the room and those watching, and thank the panelists for today's conversations. [applause]