The Committee on Homeland Security will come to order. The committee is meeting today to receive testimony on examining social media companies' efforts to counter online terror content and misinformation. In March, a white supremacist terrorist killed 51 people and wounded 49 more at two mosques in Christchurch, New Zealand. Our thoughts and prayers continue to be with the victims and their families. The motive behind the attack is not in question. The terrorist had written an extensive manifesto outlining his white supremacist, white nationalist, anti-immigrant, anti-Muslim, and fascist beliefs. His act was horrifying beyond words, and it shook the conscience. Shockingly, the terrorist was able to live stream the attack on Facebook, and the video and its gruesome contents went undetected initially. Instead, law enforcement officials in New Zealand had to contact the company and ask that it be removed. When New Zealand authorities called on all social media companies to remove these videos immediately, the companies were unable to comply. Even moderators could not keep up with the volume of videos being reposted, and their automated systems were unable to recognize minor changes in the video. So the video spread online, and spread around the world. The fact that this happened nearly two years after Facebook, Twitter, Google, Microsoft, and other major tech companies established the Global Internet Forum to Counter Terrorism, or GIFCT, is troubling to say the least. GIFCT was created for tech companies to share technology and best practices to combat online terrorist content. Back in July 2017, representatives of GIFCT briefed this committee on this new initiative. At the time, I was optimistic about its intentions and goals, and acknowledged that its members demonstrated initiative and willingness to engage on this issue while others had not. But after white supremacist terrorists were able to exploit social media platforms in this way, we all have reason to doubt the effectiveness of GIFCT and the companies' efforts more broadly. On March 27 of this year, representatives of GIFCT briefed this committee after the Christchurch massacre. Myself and other members of this committee asked important questions about the organization and its member companies that have yet to receive satisfactory answers. Today, I hope to get answers regarding your actual efforts to keep terrorist content off your platforms. I want to know how you will prevent content like the New Zealand attack video from spreading on your platforms again. This committee will continue to engage social media companies about the challenges they face in addressing terror content on their platforms. In addition to terror content, I want to hear from our panel about how they are working to keep hate speech and harmful misinformation off their platforms. I want to be very clear: Democrats will respect the free speech rights enshrined in the First Amendment. But much of the content I am referring to is either not protected speech, or violates the social media companies' own terms of service. We have seen time and time again that social media platforms are vulnerable to being exploited by bad actors, including those working at the behest of foreign governments who seek to sow discord by spreading misinformation. This problem will only become more acute as we approach the 2020 elections. We want to understand how companies can strengthen their efforts to deal with this persistent problem.
At the fundamental level, today's hearing is about transparency. We want to get an understanding of whether and to what extent social media companies are incorporating questions of national security, public safety, and the integrity of our domestic institutions into their business models. I look forward to having that conversation with the witnesses here today, and to our ongoing dialogue on behalf of the American people. I thank the witnesses for joining us, and the members for their participation. With that, I now recognize the Ranking Member of the full committee, the gentleman from Alabama, Mr. Rogers, for five minutes for the purpose of an opening statement. Thank you, Mr. Chairman. Concern about violent, terror-related online content has existed since the creation of the internet. It has peaked over the past decade with the growing sophistication with which foreign terrorists and their global supporters have exploited the openness of online platforms to radicalize, mobilize, and promote their violent messages. These tactics proved successful, so much so that we are seeing domestic extremists mimic many of the same techniques to gain followers and spread hateful, violent propaganda. Pressure has grown steadily on the companies to modify their terms of service to limit posts linked to terrorism, violence, criminal activity, and most recently, hateful rhetoric and misinformation. The large and mainstream companies have responded to this pressure in a number of ways, including the creation of the Global Internet Forum to Counter Terrorism, GIFCT. They are also updating their terms of service. Today's hearing is an important opportunity to examine the constitutional limits placed on the government to regulate free speech. Advocating violent action and recruiting terrorists online is illegal. But expressing one's political views, however repugnant they may be, is protected under the First Amendment. I was deeply concerned to hear the recent news reports about Google's policy regarding President Trump and conservative news media. Google's head of responsible innovation, Jen Gennai, recently said: "We all got screwed over in 2016. The people got screwed over. The news media got screwed over. Everybody got screwed over. So, we have rapidly been like, what happened there? How do we prevent this from happening again?" She continued: "Elizabeth Warren is saying that we should break up Google. That will not make it better. It will make it worse. Because all of the smaller companies that don't have the same resources that we do will be charged with preventing the next Trump situation." Ms. Gennai is entitled to her opinion. But we are in trouble if her opinions are Google's policies. This report details a lot of claims about Google's deliberate attempt to alter search results to reflect the reality Google wants to promote, rather than objective facts. This report is a stark reminder of why our founders created the First Amendment. In fact, the video I just quoted from has been removed from YouTube. That platform is owned by Google, which is joining us here today. I have serious questions about Google's ability to be fair and balanced when it appears to be colluding with YouTube to silence press coverage. Regulating speech quickly becomes a subjective exercise, for government or the private sector. Noble intentions often give way to biased and political agendas. The solution to this problem is complex.
It will involve enhanced cooperation between the government, industry, and individuals, while protecting the constitutional rights of all Americans. I appreciate the witnesses' participation here today and hope that today's hearing can be helpful in providing greater transparency on this complex challenge. Thank you very much. Other members of the committee are reminded that under the committee rules, opening statements may be submitted for the record. I welcome our panel of witnesses. Our first witness, Ms. Monika Bickert, is the Vice President of Global Policy Management at Facebook. Next, we are joined by Mr. Nick Pickles, who currently serves as a global senior strategist for public policy at Twitter. The third witness is Mr. Derek Slater, the global director of information policy at Google. Finally, we welcome Ms. Nadine Strossen, who serves as the John Marshall Harlan II Professor of Law at New York Law School. Without objection, the witnesses' full statements will be inserted in the record. I now ask each witness to summarize his or her statement for five minutes, beginning with Ms. Bickert. Thank you, Chairman Thompson, Ranking Member Rogers, and members of the committee, and thank you for the opportunity to appear before you today. I am Monika Bickert, Facebook's Vice President for Global Policy Management, and I am in charge of our product policy and counterterrorism efforts. Before I joined Facebook, I prosecuted federal crimes for 11 years at the Department of Justice. On behalf of our company, I want to thank you for your leadership in combating extremism, terrorism, and other threats to our homeland and national security. I would also like to start by saying that all of us at Facebook stand with the victims, their families, and everyone affected by the recent terrorist attacks, including the horrific violence in Sri Lanka and New Zealand. In the aftermath of these acts, it is even more important to stand together against hate and violence, and we make this a priority in everything that we do at Facebook. On terrorist content, our view is simple: there is absolutely no place on Facebook for terrorists. They are not allowed to use our services under any circumstances. We remove their accounts as soon as we find them. We also remove any content that praises or supports terrorists or their actions, and if we find evidence of imminent harm, we promptly inform authorities. There are three primary ways that we are implementing this approach. First, with our products, which help stop terrorists and propaganda at the gates. Second, through our people, who help us review terrorist content and implement our policies. And third, through partnerships outside the company, which help us stay ahead of the threat. First, our products. Facebook has invested significantly in technology to help identify terrorist content, including the use of artificial intelligence, but also using other automation and technology. For instance, we can now identify violating textual posts in 19 different languages. With the help of these improvements, we have taken action on more than 25 million pieces of terrorist content since the beginning of 2018. Of the content that we have removed from Facebook for violating our terrorism policies, more than 99 percent is content that we have found ourselves, using our own technical tools, before anybody has reported it to us. Second, our people. We now have more than 30,000 people who are working on safety and security across Facebook, across the world.
And that is three times as many people as we had dedicated to those efforts in 2017. We also have more than 300 highly trained professionals who are exclusively or primarily focused on combating terrorist use of our services. Our team includes counterterrorism experts, former prosecutors like myself, former law enforcement officials, and former intelligence officials. Together, they speak more than 50 languages and are able to provide 24-hour coverage. Finally, our partnerships. In addition to working with third-party intelligence providers to more quickly identify terrorist material on the internet, we also regularly work with academics who are studying terrorism and the latest trends, and with government officials. Following the tragic attacks in New Zealand, Facebook was proud to be a signatory to the Christchurch Call to Action, which is a nine-point plan for the industry to better combat terrorist attempts to use our services. We also partner across industry. As the Chairman and Ranking Member mentioned, in 2017 we launched the Global Internet Forum to Counter Terrorism, or GIFCT, with YouTube, Microsoft, and Twitter. The point of GIFCT is that we bring companies together from across industry to share information, and also to share technology and research to better combat these threats. Through GIFCT, we have expanded an industry database for companies to share what we call hashes, which are basically digital fingerprints of terrorist content, so we can all remove it more quickly and help smaller companies do that, too. We have also trained over 110 companies from around the globe in best practices for countering terrorist use of the internet. Facebook took over as the chair of GIFCT in 2019, and along with our fellow members, we have this year worked to expand its capabilities, including making new audio and text hashing techniques available to other member companies, especially the smaller companies. We have also improved our crisis protocols. In the wake of the horrific Christchurch attacks, we communicated in real time across the companies and were able to stop hundreds of versions of the video of the attack, despite the fact that bad actors were actively trying to edit the video and upload it to circumvent our systems. We know adversaries are always evolving their tactics, and we have to improve if we want to stay ahead. And though we will never be perfect, we have made real progress. We are committed to tirelessly combating extremism on our platform. I appreciate the opportunity to be here today, and I look forward to answering your questions. Thank you. Chairman Thompson, Ranking Member Rogers, members of the committee, thank you for the opportunity to appear here today to discuss these important issues of combating terrorism online. We keep the victims... Turn your mic on, please. Is that better? Thank you. We keep the victims, their families, and the affected communities of the attack at Christchurch and around the world in our minds as we undertake this important work. We have made the health of Twitter a top priority, and we measure our success by how well we encourage healthy debate, conversations, and critical thinking on the platform. Conversely, hateful conduct, terrorist content, and deceptive practices detract from the health of the platform. I would like to begin by outlining three key policies. First, Twitter takes a zero-tolerance approach to terrorist content on our platform. Individuals may not promote terrorism, engage in terrorist recruitment, or engage in terrorist acts.
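As an aside for readers, the hash sharing Ms. Bickert describes can be pictured with a small sketch. This is illustrative only: GIFCT-style systems rely on perceptual hashes that tolerate re-encoding and editing, while this toy uses a plain cryptographic digest, and the class and function names here (SharedHashDB, contribute, lookup) are hypothetical, not any company's actual interface.

```python
# Toy sketch of an industry hash-sharing database, assuming exact-match
# fingerprints. Real deployments use perceptual hashing so edited copies
# still match; all names here are hypothetical.
import hashlib

class SharedHashDB:
    """Fingerprints of known violating content, contributed by member companies."""

    def __init__(self) -> None:
        self.fingerprints: dict[str, str] = {}  # digest -> label

    def contribute(self, content: bytes, label: str) -> str:
        digest = hashlib.sha256(content).hexdigest()
        self.fingerprints[digest] = label
        return digest

    def lookup(self, content: bytes) -> str | None:
        return self.fingerprints.get(hashlib.sha256(content).hexdigest())

# One member flags a file; another checks a new upload before it goes live.
db = SharedHashDB()
db.contribute(b"<bytes of a known violating video>", "terrorist_video")
label = db.lookup(b"<bytes of a new upload>")
if label is not None:
    print(f"Blocked at upload: matches shared fingerprint labeled {label!r}")
```

The design choice being illustrated is simply that members exchange fingerprints rather than the underlying media, which is why the testimony describes the shared artifact as a "digital fingerprint" rather than the content itself.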
Since 2015, we have suspended more than 1.5 million accounts for violations of rules related to the promotion of terrorism, and we continue to see more than 90 percent of these accounts suspended through proactive measures. In the majority of cases, we take action at the account creation stage, before an account has even tweeted. The remaining 10 percent is identified through a combination of user reports and partnerships. Secondly, we prohibit the use of Twitter by violent extremist groups. These are defined in our rules as groups that, whether by their statements on or off the platform, promote violence against civilians or use violence against civilians to further their cause, whatever the ideology. Since the introduction of the policy in 2017, we have taken action on 184 groups globally and permanently suspended more than 2,000 unique accounts. Thirdly, Twitter does not allow hateful conduct on its service. An individual on Twitter is not permitted to promote violence against, or directly attack or threaten, people based on protected characteristics. When any of these rules are broken, we will take action to remove the content, and we permanently remove those who promote terrorism or violent extremist groups from Twitter. As you have heard, Twitter is a member of the Global Internet Forum to Counter Terrorism, a partnership between YouTube, Twitter, Facebook, and Microsoft that facilitates information sharing and technical cooperation across industry, as well as providing essential support to smaller companies. We learned a number of lessons from the Christchurch attack. The distribution of media was manifestly different from how ISIS and other terror organizations have worked. This reflects a change in the wider threat environment that requires a renewed approach and focus on crisis response. After Christchurch, an array of individuals online sought to continuously re-upload the content created by the attacker, both the video and the manifesto. The broader internet ecosystem presented then, and still presents, a challenge we cannot avoid. A range of third-party services was used to share content, including some forums and websites that have long hosted some of the most egregious content available online. Our analysis found that 70 percent of the views of the video posted by the Christchurch attacker came from verified accounts on Twitter, including news organizations and individuals posting the video to condemn the attack. We are committed to learning and improving. Every entity had a part to play. We should also take heart from the positive examples we have seen on Twitter around the world as users come together to challenge hate and challenge division, from #PrayForOrlando to, after the Christchurch attack, #HelloBrother, which rebut terrorist narratives and offer a better future for us all. In the months since the attack, governments, industry, and civil society have united behind a mutual commitment to a safe, secure, open, and global internet. In fulfilling our commitment to the Christchurch Call, we are taking a wide range of actions, including continuing to invest in technology so we can respond as quickly as possible to future incidents. Let me now turn to attempts to manipulate the public conversation. As a uniquely open service, Twitter enables the clarification of falsehoods in real time. We proactively enforce policies and use technology to halt the spread of content propagated through manipulative tactics. Our rules clearly prohibit coordinated account manipulation, malicious automation, and fake accounts.
We continue to explore how we may take further action, through policy and products, on these kinds of issues in the future. We continue to critically examine additional safeguards to protect the health of the conversation that is going on on Twitter. We look forward to working with the committee on these enforcement issues. Thank you. Thank you for your testimony. I now recognize Mr. Slater to summarize his testimony for five minutes. Chairman Thompson, Ranking Member Rogers, and distinguished members of the committee, thank you for the opportunity to appear before you today. I appreciate your leadership on the important issues of radicalization and misinformation online, and I welcome the opportunity to discuss Google's work in these areas. My name is Derek Slater, and I am the global director of information policy at Google. In my role, I lead a team that advises the company on public policy frameworks for online content. At Google, we believe that the internet has been a force for creativity, learning, and access to information. Supporting the free flow of ideas is core to our mission: to organize the world's information and make it universally accessible and useful. Yet there have always been legitimate limits, even where laws strongly protect free expression. This is true both online and off, especially when it comes to issues of terrorism, hate speech, and misinformation. We take these issues seriously and want to be part of the solution. My testimony today will focus on two areas in particular where we are making progress. First, on the enforcement of our policies around terrorism and hate speech, and second, on combating disinformation more broadly. On YouTube, we have rigorous policies and programs to defend against the use of our platform to spread hate or incite violence. Over the past two years, we have invested heavily in machines and people to quickly identify and remove content that violates our policies. First, YouTube's enforcement system starts from the point at which a user uploads a video. If it is somewhat similar to videos that already violate our policies, it is sent for humans to review. If they determine that it violates our policies, they remove it, and the system makes a digital fingerprint so it cannot be uploaded again. In the first quarter of 2019, over 75 percent of the more than 8 million videos removed were first flagged by a machine, the majority of which were removed before a single view was received. Second, we also rely on experts to find videos that the algorithms might be missing. Some of these experts sit at our intel desk, which proactively looks for new trends in content that might violate our policies. We also allow expert NGOs and governments to notify us about content in bulk through our trusted flagger program. Finally, we go beyond enforcing our policies by creating programs to promote counter-speech. Examples of this work include our Creators for Change program, which supports YouTube creators who are acting as positive role models. In addition, Alphabet's Jigsaw group has deployed the Redirect Method, which uses targeted ads and videos to disrupt online radicalization. This broad, cross-sectional work has led to tangible results. In the first quarter of 2019, YouTube manually reviewed over 1 million suspected terrorist videos and found that fewer than 10 percent, about 90,000, violated our terrorism policy. As a comparison point, we typically remove between 7 and 9 million videos a quarter, which is a fraction of a percent of YouTube's total views during this time period. Our efforts do not stop there.
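To make the enforcement flow Mr. Slater outlines easier to follow, here is a minimal sketch of the upload pipeline he describes: machine flagging, human review, then fingerprinting so removed content cannot return. It is illustrative only; the matching, classifier, and review steps are placeholder functions with hypothetical names, not YouTube's actual implementation.

```python
# Minimal sketch of an upload-time moderation pipeline: block exact re-uploads,
# send machine-flagged videos to human review, and fingerprint confirmed
# violations so they cannot be uploaded again. Placeholder logic throughout.
import hashlib

known_violating_fingerprints: set[str] = set()

def fingerprint(video: bytes) -> str:
    # Stand-in for a robust perceptual hash of the video.
    return hashlib.sha256(video).hexdigest()

def classifier_flags(video: bytes) -> bool:
    # Stand-in for a machine-learning model that flags likely violations.
    return False  # placeholder: a real model would return a score or decision

def human_review_confirms(video: bytes) -> bool:
    # Stand-in for a trained human reviewer's decision.
    return False  # placeholder

def handle_upload(video: bytes) -> str:
    fp = fingerprint(video)
    if fp in known_violating_fingerprints:
        return "rejected"            # known violating content, blocked at upload
    if classifier_flags(video) and human_review_confirms(video):
        known_violating_fingerprints.add(fp)
        return "removed"             # fingerprinted so it cannot return
    return "published"

print(handle_upload(b"<video bytes>"))  # prints "published" in this toy example
```

The ordering mirrors the testimony: the fingerprint check runs before any model or human sees the upload, which is why the majority of removals can happen before a single view.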
We are constantly learning and reacting to new situations. For example, YouTube recently further updated its hate speech policy. The updated policy specifically prohibits videos that allege a group is superior in order to justify discrimination, segregation, or exclusion based on qualities like age, gender, race, religion, sexual orientation, or veteran status. Similarly, the recent tragic events in Christchurch presented some unprecedented challenges. In response, we took more drastic measures, such as automatically rejecting new uploads of the video without waiting for review to check whether an upload was news content. We are now re-examining our crisis protocols, and we have also signed the Christchurch Call to Action. Finally, we are deeply committed to working with government, the tech industry, and experts from civil society and academia to protect our services from being exploited by bad actors, including during Google's chairmanship of the GIFCT over the last year. On the topic of combating disinformation, we have a natural long-term incentive to prevent anyone from interfering with the integrity of our products. We also recognize it is critically important to combat disinformation in the context of democratic elections, when our users seek accurate, trusted information that will help them make critical decisions. We have worked hard to curb misinformation in our products. Efforts include improving our ranking algorithms and implementing tougher policies against misrepresentative content. At the same time, we have to be mindful that our platforms reflect a broad array of sources and information, and there are important free speech considerations. There is no silver bullet, but we will continue to work to get this right. In conclusion, we want to do everything we can to ensure users are not exposed to harmful content. We understand these are difficult issues of serious interest to the committee. We take them seriously and want to be responsible actors who do our part. Thank you for your time, and I look forward to taking your questions. Thank you for your testimony. I now recognize Ms. Strossen to summarize her statement for five minutes. Thank you so much, Chairman Thompson and Ranking Member Rogers, and members... so sorry. Thank you so much, Chairman Thompson and Ranking Member Rogers, and other members of the committee. My name is Nadine Strossen. I am a professor of law at New York Law School and the immediate past president of the American Civil Liberties Union. Of great pertinence, last year I wrote a book which is directly pertinent to the topic of this hearing, called Hate: Why We Should Resist It with Free Speech, Not Censorship. I know, Mr. Chairman, that you referred to hate speech as problematic content, in addition to terror content and misinformation. All of these kinds of speech, while potentially harmful, present enormous dangers when we empower either government or private companies to censor and suppress the speech, for this reason: the concepts of hate speech, terrorist content, and misinformation are all irreducibly vague and broad, and therefore have to be enforced according to the subjective discretion of the enforcing authorities. And that discretion has been exercised in ways that both under-suppress speech that does pose a serious danger, as the Chairman and the Ranking Member pointed out, and also suppress very important speech, as has also been pointed out, speech that actually counters terrorism and other dangers.
What is worse is that, in addition to violating free speech and democracy norms, these measures are not effective in dealing with the underlying problems. And I thought that was something that was pointed out by my co-panelists; in particular, Nick Pickles's written testimony talked about the fact that if somebody is driven off one of these platforms, they will then take refuge in darker corners of the web, where it is much harder to engage with them or to use them as sources of information for law enforcement and counterterrorism investigations. So, we should emphasize other approaches that are consistent with free speech and democracy but have been lauded as at least as effective, and perhaps even more so, than suppression. I was very heartened that the written statements of my co-panelists all emphasized these other approaches. Monika Bickert's testimony talks about how essential it is to go after the root causes of terrorism, and the testimony of Nick Pickles and Derek Slater also emphasizes the importance of counter-speech, counter-narratives, and redirection. I recognize that every single one of us in this room is completely committed to free speech and democracy, just as every single one of us is committed to countering terrorism and disinformation. After all, the reason we oppose terrorism and disinformation is precisely because of the harm that they do to democracy and liberty. Before I say anything further, I do have to stress something that I know everybody here knows, but many members of the public do not: the social media companies are not bound by the First Amendment free speech guarantee. So, none of us have a free speech right to air any content on the platforms at all. Conversely, the companies have their own free speech rights to choose what will and will not be on their platforms. So, it would be unconstitutional, of course, for Congress to purport to tell them what they must put up and what they must take down, to the extent that the takedowns would go beyond speech that is unprotected by the First Amendment. Chairman Thompson, you did completely accurately, of course, note that much of the content that is targeted as terrorist is unprotected. But much of it is protected under the Constitution, and much, much of it is very valuable, including human rights advocacy that has been suppressed under these necessarily overbroad and subjective standards. Although the social media companies do not have a constitutional obligation to honor freedom of speech, given their enormous power, it is incredibly important that they be encouraged to do so. And, in closing, I am going to quote a statement from the written testimony of Nick Pickles, which I could not agree with more, when he said that we will not solve these problems by removing content alone, and we should not underestimate the power of open conversation to change minds, perspectives, and behaviors. Thank you very much. I thank the witnesses for their testimony, and I remind each member that he or she will have five minutes to question the panel. I now recognize myself for questions. Misinformation is part of this committee's challenge as it relates to this hearing, as well as terrorist content. Let's take, for instance, the recent doctored video of Speaker Nancy Pelosi that made her appear to be drunk or slurring her words. Facebook and Twitter left up the video, but YouTube took it down. Everybody agreed that something was wrong with it. Facebook, again, took a different approach.
I want Ms. Bickert and Mr. Pickles to explain the process for leaving this video up on Facebook and Twitter, and then, Mr. Slater, I want you to explain to me why YouTube decided to take it down. Ms. Bickert? Thank you, Mr. Chairman. Let me first say, misinformation is the top concern for us, especially as we are getting ready for the 2020 elections. We know this is something we have to get right. We are especially focused on what we should be doing with increasingly sophisticated manipulated media. So, let me first speak to our general approach to misinformation, which is this: we remove content when it violates our Community Standards. Beyond that, if we see somebody that is sharing misinformation, we want to make sure that we are reducing its distribution and also providing accurate information from independent fact checking organizations, so that people can put what they see in context. To do that, we work with 45 independent fact checking organizations from around the world, each of which is certified by Poynter as being independent and meeting certain principles. As soon as we find something that those fact checking organizations have labeled false on our platform, we dramatically reduce its distribution, and we put related articles next to it, so that anybody who shares it gets a warning that it has been rated false. Anybody who did share it before we got the fact checkers' rating gets a notification that the content has now been rated false by a fact checker, and we put next to it those related articles from the fact checking organizations. I understand. How long did it take you to do that for the Pelosi video? The Pelosi video was uploaded to Facebook on Wednesday, May 22, around late morning, and on Thursday around 6:30 p.m., a fact checking organization rated it as false, and we immediately down-ranked it and put information next to it. That is something where we think we need to get faster. We need to make sure that we are getting this information to people as soon as we can. So at 6:30 p.m., it took you about a day and a half? Yes, it did, Mr. Chairman. Mr. Pickles? Like I said, we reviewed this against our rules. Any content that breaks the rules, we will remove. We are also very aware that people use tactics to spread this content: fake accounts, automation. We will take action on the distribution, as well as the content. This is a policy area we are looking at right now, not just in the case where the video might be manipulated, but also where the videos are fabricated, and where the whole process of creating the media may be artificial. We think that the best way to approach this is with a policy and product approach that covers, in some cases, removing... I understand. But just get to it. Why you left it up. At present, the video does not break our rules, and the account posting it does not break the rules. But it is absolutely a policy area we are looking at right now, about whether our rules and our products are the correct framework for dealing with this challenge. If it's false, or misinformation, that doesn't break the rules? No. Mr. Slater? On YouTube, we have robust Community Guidelines that lay out the rules of the road, what is allowed on the platform and what is not. Violating content, when it is identified to us via machines or users, we will review and remove. In this case, the video in question violated our policies around deceptive practices, and we removed it.
So, again, our committee is looking at misinformation, among other things. We're not trying to regulate companies, but terrorist content can also be a manipulated document. So, Ms. Strossen, talk to us about your position on that. The difficulty and the inherent subjectivity of these concepts, Chairman Thompson, is illustrated by the fact that we have three companies that have subscribed to essentially the same general commitments, and yet are interpreting the details very differently with respect to specific content. We see that over and over again. Ultimately, the only protection that we are going to have in this society against disinformation is education, starting at the earliest levels of a child's education, in media literacy. Because Congress could never protect against misinformation in traditional media, right? Unless it meets the very strict standards of defamation that is punishable, or fraud that is punishable, content, including the Pelosi video, is completely constitutionally protected in other media. Thank you. I yield to the Ranking Member for his questions. Thank you, Mr. Chairman. Mr. Slater, the video I referenced in my comments, with Ms. Gennai, your employee. Would you like to take this opportunity... have you seen it? Congressman, I have not seen the full video, but I am aware of what you are talking about. Would you like to take an opportunity to respond to the comments I offered about what was said? Could you be specific, Congressman, as to which comments you would like me to respond to? When she basically said, for example, that we can't let Google be broken up, because the smaller companies don't have the same resources we have to stop Trump from getting reelected. Thank you for the clarification. Let me be clear. This employee was recorded without her consent, and I believe the statements were taken out of context. But stepping back to our policies and how we address the issue you are talking about: no employee, whether in the lower ranks or up to senior executives, has the ability to manipulate our search results, or our products or services, based on political ideology. We design and develop our products for everyone, and we mean everyone. We do that to provide relevant, authoritative results. We are in the trust business. We have a long-term incentive to get that right, and we do that in a transparent fashion. You can read more on our How Search Works site. We have guidelines available to the public on the web that describe how we approach ratings, and we have robust systems and checks and balances in place to make sure these are rigorously adhered to as we set up our systems. I recognize that she was being videotaped without her knowledge. But the statements that I quoted were fuller, complete statements that were not edited. So, it is concerning when you see somebody who is an executive with Google, and there was more than one in the video, by the way, making statements that indicate that it is management policy within Google to try to manipulate information to cause one or another candidate for President of the United States, or for that matter, any other office, to be successful or not be successful. So, that is what gave rise to my concern. Do we have reason to be concerned that there is a pervasive culture at Google of trying to push one political party over another in the way it conducts its business? Congressman, I appreciate the concern. But let me be clear again.
In terms of what our policy is from the highest levels on down, and what our practices, structures, and checks and balances are: we do not allow anyone, lower level or higher level, to manipulate our products in that way. Okay. I hope it's not the culture at any of your platforms, because you are very powerful in our country. Ms. Strossen, let me raise a concern. While companies can legally decide what content to allow on their platforms, what are your recommendations for these companies regarding content moderation without censorship? Thank you so much, Ranking Member Rogers. I would, first of all, endorse at least the transparency that both you and Chairman Thompson stressed in your opening remarks, and in addition, other process-related guarantees, such as due process, the right to appeal, and a clear statement of standards. I would also recommend standards that respect the free speech guarantees not only of the United States Constitution, but of international human rights law, which the United Nations Human Rights Council has recommended, in a nonbinding way, that powerful companies adopt. And that would mean that content could not be suppressed unless it posed an emergency: that it directly caused certain specific, serious, imminent harm that cannot be prevented other than through suppression. Short of that, as you indicated, Ranking Member Rogers, politically controversial, even repugnant, speech should be protected. We may very much disagree with the message, but the most effective, as well as principled, way to oppose it is through more speech. I would certainly recommend, as I did in my written testimony, that these companies adopt user-empowering technology that would allow us users to make truly informed, voluntary decisions about what we see and what we don't see, and not manipulate us, as has been reported many times, into increasingly deep rabbit holes and echo chambers, but give us the opportunity to make our own choices and choose our own communities. Thank you. I yield back. Thank you. The chair recognizes the gentlelady from Texas for five minutes. I thank the chair, and I thank the ranking member and committee members for this hearing. Let me indicate that the public has long known of the fourth estate, and I might say that we now have a fifth estate, which is all of you and others who represent the social media empire. And I believe it is important that we work together to find the right pathway for how America will be a leader in how we balance the responsibilities and rights of such a giant entity, and the rights and privileges of the American people, and the sanctity and security of the American people. Social media statistics from 2019 show that there are 3.2 billion social media users worldwide, and this number is only growing. That equates to about 42 percent of the current world population. That is enormous. Certainly, I know the numbers are just as daunting in the United States. So, let me ask a few questions, and I would appreciate brevity, because of the necessity to try to get as much in as possible. On March 15, 2019, worshipers were slaughtered in the midst of their prayers in Christchurch, New Zealand. The gunman live streamed the first attack on Facebook Live. My question to you, Ms. Bickert, is: can you today assure the committee that there will never be another attack of this nature that will be streamed as it is happening over Facebook Live? You mentioned the 30,000 and the 300.
And so, I hope they may contribute to your answer, but I yield to you for your answer. Congresswoman, thank you. The video was appalling. The attack, of course, is an unspeakable tragedy, and we want to make sure we are doing everything to make sure it does not happen again and is not live streamed again. One of the things we have done is we have changed access to Facebook Live, so that people who have a serious content policy violation are restricted from using it. So, the person who live streamed the New Zealand attack... what is the likelihood that this would not happen again, in terms of the new structures that you put in place? Well, the technology we are working to develop... the technology is not perfect. Artificial intelligence is a key component of us recognizing videos before they are reported to us, and this video was not... fewer than 200 people saw it while it was live on Facebook. My time is short. Do you have a percentage? 50? 60? With the technology, I can't give a percentage. I can say that we are working with governments and others to try to improve that technology so that we will be able to better recognize... Mr. Pickles and Mr. Slater, if you would: Ms. Bickert dealt with the question of artificial intelligence. So, if you would, respond as to the utilization of AI and individuals, as briefly as possible, please. So, one of the challenges is that, on Twitter, there is not a lot of content: 280 characters, a maximum of 2 minutes 20 seconds for videos. One of the challenges in Christchurch was that we didn't just see the same video uploaded; we saw different snippets taken at different points. So, we are investing in technology to make sure that people can't re-upload content once it has been removed previously. We are also making changes to make sure that, for example, when people manipulate media, we can move quicker. Using human subjects and AI? It's machine learning, yes. Mr. Slater? Thank you, Congresswoman. We are using a combination of machine learning and people to review. Speaking overall, in the first quarter of 2019, 75 percent of the 8 million videos we removed were first flagged by a machine, and the majority were removed before a single view. When it comes to violent extremism, it is even stronger: over 90 percent of the violent extremist videos uploaded in the past six months were removed before a single human flag, and 88 percent with fewer than 10 views. Thank you. Let me ask a question about deep fakes, because my time is going. For each of you: in the 2020 election, what will you do to recognize the fact that these fakes can be a distortion of an election that is really the premise of our democracy? Can you quickly answer that question? At the same time, I just want to make mention of the fact that free speech does not allow incitement, fighting words, threats, and otherwise. Could you just answer that, please? Yes, Congresswoman. The three of you, on deep fakes, as briefly as you can. Absolutely. We are working with experts outside the company and others to make sure that we understand how deep fakes can be used, and to come up with a policy to address them. In the meantime, we are focused on removing fake accounts, which are disproportionately responsible for this content, and also on improving the speed at which we pair misinformation with actual, factual articles, reduce its distribution, and, where our rules are broken, remove it. Mr. Slater?
We are investing in working with researchers and others to build capacity in this space. We have an intel desk that scans the horizon for new threats and is constantly looking at this sort of issue. Thank you for your courtesy. I yield back, Mr. Chairman. Mr. Walker. As we were sitting here today, I looked it up on the internet and put in "Facebook apologizes," "Google," "Twitter," and there were more pages than I could count going through those apologies. I listened closely to the words and how you phrased them, Mr. Pickles and Mr. Slater. One of you, Mr. Slater, used the expression hate speech, and you listed several different groups of people that were protected. What I did not hear you include in that group of people that you listed were those wanting to express their faith. In April, in one of the larger apologies you have made, Kelly Harkness brought to our attention Abby Johnson's life story in a movie called Unplanned. That has gone on to make 20 million dollars. Google listed that as propaganda. My question for you is: was that a machine that listed it, or an individual? I'm not familiar with the specific video. I would have to go back. It is not a video, it is a movie, one of the larger stories in April, a major motion picture. You're not familiar with that? No, sir, I am not. When we talk about the difference between content and hate speech: I know, Mr. Pickles, in June earlier this year, Marco Rubio brought to attention that Twitter was banning language that may be offensive to China; Twitter later came back and apologized. How does Twitter use its discretion to block information without discriminating against different individuals or groups? Our rules to identify hateful conduct focus on behavior first. We look at that before we look at the speech. And the difference between expressing faith and targeting someone else is the difference between content and conduct. The rules are enforced without regard to ideology, and it is important for us to recognize that. When we remove someone from Twitter and they come back for another purpose, our technology will recognize that person trying to come back, and we don't want people coming back whom we have removed. Sometimes that does catch people who have a different purpose, so there is a value to technology, and we should recognize where we make a mistake. Mr. Slater, how does Google moderate its content policies to ensure they are being followed and not being driven by bias? Thank you for that question. We have a robust system for the development and enforcement of our policies. We are constantly reviewing and analyzing the policies and whether they are drawing the right lines, and reviewers go through extensive training to make sure we have a consistent approach. We draw those reviewers from around the country and around the world, train them deeply, and review them constantly. What type of training, if any, do you provide for human content moderators regarding content? We provide robust training to make sure we are applying consistent rules. Robust. What does that mean? What is robust? When reviewers are brought on board, before they are allowed to review, we provide them with a set of educational materials and training steps; in addition, they are reviewed by managers and others to make sure that they can correct mistakes and learn from them, and so on. Ms. Bickert, do you think AI will get to where you can rely on it solely, or will human moderation always play a role? Thank you for the question. At least for the near future, human moderation is very important.
Technology is good at some things: it is good at matching known images of terrorism or child sexual abuse, and it is not as good at making the nuanced calls on individual pieces of speech. A final couple of questions. Mr. Pickles, do you know how many times Twitter apologizes per month? We take action on appeals regularly. Do you have a number on that? I don't, but I can follow up. Mr. Slater, do you have any idea how often you apologize per month for mismanaging? I do not have a number, but I would be happy to get back to you. You have apologized more than Kanye West to Taylor Swift. With that, I yield back. I recognize the lady from Illinois, Ms. Underwood. Facebook announced it would start directing users searching for extremist content to an organization that works to rehabilitate extremists. Life After Hate is based in Chicago, so I met with them last month. They told me that since Facebook's announcement, they have seen a large bump in activity that hasn't slowed down. Facebook and Instagram have 3 billion users. Life After Hate is a tiny organization whose federal funding was pulled by this administration. They do great work, and they don't have the resources to handle every single neo-Nazi on the internet. Ms. Bickert, has Facebook considered giving them funding? Thank you. They are doing great work with us, and for those that don't know, we are redirecting people who are searching for these terms to this content, and we do this in some other areas as well, such as with self-harm support groups. This is something we can come back to you on. We are committed to making this work. Right now, there is no long-term funding. I'm not sure of the details, but I will follow up. Facebook has made it a significant part of its announcement, and we would appreciate that follow-up information. Mr. Slater, over the years, YouTube has put forth various policy changes to limit how dangerous conspiracy theories spread. YouTube announced it would display information in the form of links to Wikipedia next to conspiracy videos. In the 15 months since this policy was announced, what percentage of users who view videos with this information actually click the link for more information? Thank you for the question. This is a very important issue, and we take a number of other steps. I don't have a specific percentage, but I would be happy to come back to you. Most Wikipedia articles can be edited by anyone, and we've seen some with questionable content. For these articles that YouTube links to, what do you do to ensure the accuracy of the information? We work to raise up authoritative information, ensure that what we are displaying is trustworthy, and correct any mistakes we may make. You have corrected the Wikipedia pages? Before we display such things, we look to ensure we have a robust process to make sure we are displaying accurate information. The question is about what you are linking to. Can you follow up with us in writing on that one? Ms. Bickert, Facebook has done similar things. What has the click-through been? I don't have that for you, I'm sorry, but I will follow up in writing. Mr. Chairman, I would like to show the last chart. Instagram said it would hide hashtags that spread vaccine misinformation. I did a simple search from two different accounts. These are the top results. The majority of these responses display anti-vaccine content and popular accounts with titles like "corrupt vaccine" and "vaccines uncovered," and these are not new terms. This content is not hard to find, and vaccine misinformation is not a new issue. Clearly, the efforts have a gap. Anti-vaccine content is a deadly threat to public health.
What additional steps can you take to ensure this content is not promoted? Congresswoman, thank you for the question. Vaccine hoaxes and misinformation are top of mind, and we have launched recent measures, and I'll tell you how we are working to get better. One thing we are doing is, when accounts are sharing this, we are trying to down-rank them in the search results. That is ongoing and requires some manual review for us to make sure we are doing that, and we are getting better. Another thing is surfacing educational content, and we are working to provide that, so if people go searching, in the search results they will see that informational content. We are working on that right now, and we should have that content up and running soon. I can follow up with the details. It is critically important that that information is shared with users. Everyone in this room appreciates that online extremism and disinformation are difficult problems that require broad, coordinated solutions. These are new challenges, and failing to respond seriously is dangerous. Social media helps extremists find each other, helps make their opinions more extreme, and helps them hurt our communities. We want strong policies from your companies that keep us safe, and while I truly believe the current policies are well intentioned, there is a lot more that needs to be done and, frankly, some of it should have been done already. I'm looking forward to working with you and my colleagues on broad, real solutions. The chair recognizes the gentleman from New York. Thank you, and thank you all for being here today. It is obvious from this conversation that this is a very difficult area to maneuver. I understand your concerns about First Amendment infringements, and I also understand and applaud the companies' desire to try to find that delicate balance. Since you are not a government entity, you have more flexibility in how you do that, and it is up to you as stewards to do the best job you possibly can. I will get back to that in a minute; I want to follow up with Mr. Slater. I want to make sure I am perfectly clear on what you are saying. I am well aware of what the policy and practices are at Google, but that video that Mr. Rogers referenced shows what looks like people talking about a very serious political bias and their attempt to implement that in their job. Whether or not that happens, I wouldn't know. I'm not asking about policy and practices; I am asking if you personally have ever been aware of anyone using political bias to alter content, or whether you have ever heard of that. I know what your policy and practices are. I'm not aware of any situation. We have robust checks and balances to prevent that. You have not heard of that in your time at Google? The movie and the allegation that Congressman Walker referenced about the abortion movie: you haven't heard anything about people having limited content with respect to that as well? I'm not familiar with that video, no. And you've never heard anybody limit content on that? We would remove content where it violates the policy, but... The policy I am aware of. I am asking, you understand the difference, what you are personally aware of. I believe I understand, and I am not aware of any situation like that. I want to talk to all of you about this Global Internet Forum, which is the lamest acronym ever. Can you give me a little detail on what exactly the goal of this is?
Facebook and Google have shared some of this, but I think the critical thing is that GIFCT is bringing four companies together who have investments in counterterrorism and recognizing that the challenge is far bigger. The three strands are: support small companies, because as we remove content it moves across the internet; fund research so we can better understand the problem, so we have a research network; and finally, share technical tools. You have heard people reference the digital fingerprints, and at Twitter we share URLs. If we take down an account for spreading a terrorist manual and we see a link to another company, we will tell that company it is linked to something on your service and you should check it out. It is similar to how malware is handled. The industry collaboration is at the heart of this. What companies are involved? It is Google, Twitter, YouTube, and Facebook, and Dropbox has joined. One of the things we have is a partnership against terrorism, which allows more companies to go through a training process so they learn things like how to write terms of service and how to enforce terms of service, and we mentor them. We are hopeful that we will have more companies joining; we share with around 13 to 15 companies at the moment, so it is broad, and we want to have a high standard. We want membership to be the companies who are doing the best, and we want to keep the bar high. As far as encrypted messaging platforms, I take it they are not all members of this or participants. Do you know, Ms. Bickert? Thank you for the question. The main members are those five companies. In terms of these smaller companies, they include some of the encrypted messaging services, because some of this is about understanding what are the right lines to draw, how to work with law enforcement, and what services can do. One of my biggest concerns is that the big players in this field seem to be endeavoring to do the right thing, especially with counterterrorism, but encrypted messaging is, by and large, a much broader field, and there doesn't seem to be much we can do to stop that content from spreading their filth and their violence. I would love to hear any suggestions. I know my time is up; perhaps in writing, you can suggest how we can entice some of them to be part of this effort. Encryption is obviously a breeding ground for purveyors of violence of all sorts, and trying to get those companies to be more responsible... it would be great to hear from you on that. I wanted to switch gears and talk about the influence and the spread of foreign-based disinformation, foreign-based political ads in particular, in our political process. Many of us read the Mueller Report page by page, and I was interested, Ms. Bickert, that Facebook's general counsel stated for the record that for the low, low price of 100,000 dollars, the Russian-associated Internet Research Agency got to 126 million American eyeballs. I am interested in this because the political ads that they put forward were specifically targeted to 20 states, and in Michigan we saw an overabundance of these ads. They were specifically paid for by foreign entities, and they were advocating for or against a candidate in our political process. I have a serious problem with that. Separate from the issues of speech and what an American does or does not have the right to say, can you speak specifically about the fact that they spread foreign-purchased information, no matter whom it came from, and what steps you have taken since 2016 to prevent the spread of foreign disinformation? Absolutely, thank you for the question.
Compared to where we were in 2016, we are in a much better place, and let me share with you why. First of all, all of those ads came from fake accounts, and we have a policy against fake accounts. We have gotten much better about enforcing that, and now we are stopping more than 1 million fake accounts per day at the time they are created. We publish what we are removing every quarter, and you can see how much better we have gotten. Another thing we are doing with political ads is requiring unprecedented levels of transparency. Now, if you want to run a political ad or a political issue ad in the United States, you have to verify your identity and show you are an American. Because we have seen fake IDs uploaded by advertisers, we send you something through the mail, and you actually get a code and upload it, so we verify you are a real American. We also put a "paid for" disclaimer on the political ad, and we put it in the ads library we have created that is visible to everyone. You can search what type of political ads are appearing, who is paying for them, and other information about how they are targeted. That is good to hear. I would love to see those reports and be directed to them so I could see them. For the others at the table, could you talk about your policies on the issue of foreign ads? The first thing we did was to ban Russia Today and related entities from using any of our advertising products going forward. We took all of the revenue from Russia Today and their entities and are funding research and partnerships to better understand how we can protect against this. We then took the unprecedented step of publishing every tweet, not just the paid-for ones, every tweet that was produced by a foreign influence operation, in a public archive. You can now access more than 30 million tweets, and around a terabyte of videos and photographs, in a public archive, and those include operations from Russia, Iran, Venezuela, and other countries. Thank you for the question. Looking backward, we found very limited improper activity on our platforms; that is a product of our threat analysis group and other tools to halt that sort of behavior. Looking forward, we continue to invest in that, including the election transparency efforts: requiring verification of advertisers for candidate ads, disclosure in the ad, and a transparency report. What about the threat of information spread through bots? What sort of informational climate is there: when someone is receiving or viewing something, do they have some way of knowing whether it is a machine or a human being? Let's start with Facebook. One of our policies is that you have to use your real name on your account, and when we find bot accounts, we remove them; those are all numbers that we publish. Every week we challenge between 8 and 10 million accounts for breaking our rules on suspicious activity, including malicious automation, and we remove about 75 percent of those 8 to 10 million accounts every week. We have strict policies about misrepresentation and impersonation. We are looking out for that through the threat analysis group, and we will take action. My time has expired, thank you. I recognize the gentleman from Louisiana. Mr. Slater, are you ready? Get your scripted answers ready. Google and YouTube are developing quite a poor reputation in our nation: a clear history of repetitively silencing many voices, and we are talking about freedom of speech and access to open communication.
We are here today to discuss extremist content, violent threats, terrorist recruiting tactics and instigation of violence, yet the same justification your platform uses to quell extremism has been used to silence and restrict voices that you disagree with. We don't like it. For example, Prager University, a series of five-minute videos which discuss political issues, religion and economic topics from a conservative perspective, has had over 50 of their videos restricted. Some of those include Why America Must Lead, because of our stance for freedom, for all voices to be heard; The Ten Commandments: Do Not Murder, a video pulled by your people. What is wrong with the Ten Commandments, might I ask? Why Did America Fight the Korean War, a legitimate reflection on a significant part of the history of our nation. You removed a video from Project Veritas which appears to show a senior executive discussing politically motivated search manipulation with intent to influence election outcomes. None of us want that, on either side of this aisle. I don't know a man or woman present who is not a true patriot and does not love their country. We have varying perspectives, yes, but we love our country and we will stand for freedom, including against Google. A frequent reason provided by you is that the content in question harmed the broader community. What could be more harmful to the broader community than the restriction of our free speech and open communication, regardless of our ideological stance? What do you mean by harm to the broader community? Is it limited to physical threats and incitement to violence, as it should be, or is it a convenient justification to restrict content that your team deems should be restricted? Please explain to America how you determine what is harming the broader community. Let's have your scripted answer. Congressman, thank you for the question. I appreciate the concern and the desire to foster robust debate. We want YouTube to be a place where everyone can share their voice and get a view of the world. You don't allow everyone. I have given examples in my brief time. Thank you for recognizing my time. Is something that offends an individual, or something an individual disagrees with, your definition of extreme? We have Community Guidelines that lay out the rules of what is not permitted, including incitement of violence, hate speech, harassment, and so on. If you can clarify what you're asking about specifically. Mr. Slater, Google is in a bind. Today, America is watching. Today, America is taking a step back and looking at the services and the platforms we use, and we are finding, to our horror, that they cannot be trusted. Today, America is looking carefully at Google, and a word reverberates through the minds of Americans: freedom. Shall it be protected and preserved, or persecuted, subject to the whims of massive tech companies? Mr. Chairman, thank you for recognizing my time, and I yield the balance. Thank you very much, Mr. Chairman, and thank you for appearing today. I want to go into the issue of deepfakes, because I have recently introduced legislation, the first-ever House bill to regulate the technology. If my bill passes, what it would do is make sure that those videos include a prominent, unambiguous disclosure as well as a digital watermark that cannot be removed. The question I have is, when it comes to your attention that a video has been substantially altered or entirely fabricated, how does your company decide whether to do nothing, label it, or remove it?
That is for the panel. Thank you for the question. When it comes to deepfakes, this is a top priority because of the coming election. Right now our approach is to use third-party fact checking, and if they rate something as false, they can tell us it has been manipulated, and at that point we will put that information next to it, much like a label approach; this is a way of letting people understand that this is something that is in fact false. We also reduce distribution of it. We are also looking to see if there is something we should do specifically in the area of deepfakes. We want to have a solution, and part of that means we have to get a common definition of what it means to actually be a deepfake. Part of my requirement would be that there is a digital watermark, and, similar to how your company handles terrorist content, if there were a central database of deepfake hashes, would you agree to utilize it? I am happy to pick up on that and the previous question. I was at a conference hosted by the BBC and Witness, who work on issues around verifying media from war zones and war crimes, and this policy area covers a whole spectrum of content, from synthetic to edited to manipulated, and certainly from our point of view every partnership is one we want to explore to make sure we have the information. Your framing is right: in some circumstances there may be situations to remove content, and in others it is about providing context to the user and giving them more information. The best balance is making sure we have the tools available, and that is the approach we are developing now. Time is not your friend. What we are trying to find is something universal that creates transparency, respects the First Amendment, and also makes sure that Americans, whose eyes are constantly on video, can identify it right away. If you have to go through all of these sources to determine it, and each platform has a different way of indicating it, that almost nullifies it. I wanted to put that on your radar, because I think there needs to be some sort of universal way in which Americans can detect immediately that what they see has been altered in some way, and that is what my bill seeks to do. Imagine if, just days before the 2020 election, a fake video appeared of a presidential candidate accepting a bribe or committing a crime. If your companies learn of a deepfake video intended to influence the election, will you commit to removing it? How would you handle such a scenario? Have you thought about it? Give us your thoughts. We do have a real-name requirement and various transparency requirements that reinforce it, so if it is posted by someone not using their real name, we would simply remove it. We have a clear policy, so activity associated with an entity we have removed, and I would say we have removed millions of tweets, we remove activity associated with that organization. Thank you for the question. We would evaluate that under our policies, including the deceptive practices policies, and look at it as we would any foreign interference. I yield back, and I look forward to talking to you about this further. We have to get to it, and we are not there yet. Years ago, required reading for me was the book 1984, and this committee hearing is scaring the heck out of me. I have to tell you, it really is, because here we are talking about, you know, if someone Googles vaccines, the answer was, we are going to put up not necessarily what the person is actually looking for, but what we think it should be.
Who are the people judging what is best, what is accurate? This is really scary stuff, and it really goes to the heart of our First Amendment rights. I don't always agree with the ACLU, and you are its past president, but I agree with you on this. We have to be very careful, because what you deem as inaccurate, I may not deem as inaccurate, and others may not either. In a previous briefing on this issue, one of the members said, I think President Trump's tweets incite terrorism. Are we now going to ban what President Trump says because someone thinks that it incites terrorism? This is some really scary stuff, and I am very concerned, and I am glad I am part of this, because, boy, we need more of us standing up for our rights, whether it is what you believe or what I believe. I have a specific question, and this is to Mr. Slater. In this Project Veritas video, which I did watch last night, they allege that there are internal Google documents, which they put on the video, and this is what one said: for example, imagine that a Google image query for CEOs shows predominantly men; even if it were a factually accurate representation of the world, it would be algorithmic unfairness. In some cases it may be appropriate to take no action if it reflects current reality, while in other cases it may be desirable to consider how we might help society reach a more fair and equitable state through product intervention. What does that mean, Mr. Slater? I'm not familiar with the specific slide. I think what we are getting at is that when we are designing our products, we are designing for everyone, and we have a robust set of guidelines to make sure we are providing relevant and trustworthy information. We work with a set of raters around the world and across the country to make sure those guidelines are followed, and they are transparent and available for you to read. I personally don't think that answered the question at all. Let me go to the next one. You asked Mr. Clay Higgins for a specific example, and he was talking about Prager University. I used Google and it came up, and on that website it says, conservative ideas are under attack. YouTube does not want young people to hear conservative ideas, as over 10 percent of our entire library is in restricted mode. Why are you putting Prager University videos about liberty and those types of things in restricted mode? Thank you, I appreciate the question. To my knowledge, PragerU is a huge success story on YouTube, with millions of views and subscribers, and remains so to this day. There is a mode that users can opt into called restricted mode, where they might restrict the sorts of videos they see. That is something that applies to many different types of videos across the board, consistent not just with political viewpoints but with the Daily Show and other sorts of channels as well, and to my knowledge it has been applied to a very small percentage of the videos on the PragerU channel, and that channel has been a huge success story with a huge audience. Mr. Pickles, regarding Twitter: President Trump has said, I think on multiple occasions, that Twitter has been deleting people from his followers. This happened to my husband. He followed Donald Trump, and all of a sudden it was gone. Can you explain that? What is happening there? Why does that happen? Because a lot of conservatives really think there is a conspiracy against conservatives going on here. I will look into the case to make sure it wasn't an issue, and President Trump is the most followed head of state anywhere in the world.
He is the most talked-about politician anywhere in the world on Twitter, and he lost some followers when we recently undertook an exercise to clean up automated accounts; President Obama lost far more followers. People can look at how widely his tweets are seen and be reassured that the issues you are outlining are not representative. And I ran out of time. If we have another round, I want to hear from Ms. Strossen, because she hasn't had a lot of time to speak. I hope my colleagues ask her. Thank you very much. This is for all of you. I want to talk a little bit about your relationship with civil society groups that represent communities targeted by hateful content, including white supremacist content. I am referring to content that targets religious minorities, ethnic minorities, immigrants, LGBTQ people and others. Can you help by describing your engagement with civil society groups in the United States to understand the issue of such content and your approaches for combating it? Thank you for the question. Any time we are evolving our policies, which we do constantly, we are reaching out to civil society groups around the world. I have a team called Stakeholder Engagement, and that is what they do. When they are doing this, one of their jobs, let's say we are looking at the hate speech policy, is to make sure we are talking to people across the spectrum, so different groups that might be affected, and differing opinions, all have a part in the conversation. We have teams around the world talking to groups every day, and something we are also doing is training them. Twitter is a unique public platform and a public conversation, and when people challenge hatred and offer a counter-narrative, their views can be seen all over the world. This was seen all over the world after an attack in Paris, and similarly after Christchurch. We talk to civil society groups about our policies and also about how we can use the platform to reach more people. I want to make sure you incorporate this: one of my concerns is that the burden of reporting hateful content is placed on those who are targeted by the hateful content. That can compound the harm, so can you also tell us what your companies are doing to alleviate this burden? Speaking to how we enforce the Community Guidelines, we have updated the policy to deal with people expressing superiority or justifying discrimination, and we use a combination of machines and people. Machines scan for broad patterns compared to previous content, and we take our responsibility very seriously, including our ability to detect content first and review it before it has been flagged; we are making great strides there. We also rely on flags from users and from other groups we work with very closely, both in the development of our policy and in flagging those videos. This is something we have said previously placed too much burden on victims. A year ago, 20 percent of the abuse we removed was detected proactively, and that is now 40 percent. In a year we have been able to double it, and we are continuing to raise that number further. Can you provide an example where you had community engagement and, because of that feedback, there was a policy change that you made? Let me share a slightly different example, which is how we write policy to prevent that in the first place.
When we were crafting the policy on intimate imagery, it covers not just media shared by an ex-partner but also things like creep shots, which various countries are asking about, do you have a policy, and because we had spoken to those groups, our policy from the beginning captured not just the original problem but all of those different issues. Let me address the second question you asked about, putting the burden on the victims. We have invested a lot in artificial intelligence, and there are certain areas where it has helped us and other areas where it is in its infancy. With hate speech, over the past few years we have gone from zero proactive detection to now, in the first quarter of this year, finding the majority of the content we remove using artificial intelligence and other technology. Big gains there. There is still a long way to go, because all of those posts, after they are flagged, have to be reviewed by real people who can understand the context. In terms of engagement and what that has led to, the one thing I would point to is the use of hate speech in imagery. The way that we originally had our policy on hate speech, it was focused on what people were saying in text. It was only by working with partners that we were able to see how we needed to refine it to cover images. Another thing: a lot of groups said it was hard to know how we defined hate speech and where we drew the line. That was a contributing factor in why, a couple of years ago, we published a very detailed version of our Community Standards, and people can now see exactly how we define it. Thank you for the discussion on combating terrorism online; it is a worthy debate to be had. There are good questions on whether some of this content provides education or whether it is radicalizing people. Those are discussions to have, and I don't know that we will solve them today. The problem is the testimony doesn't stop there. The policies at social media companies do not stop there. It doesn't stop with a clear-cut line of terrorist propaganda, which is exactly what we should be talking about. It goes much further than that. It goes down the slippery slope of what speech is appropriate and the vague standards you employ to decide what is appropriate. This is especially concerning given the recent news and the leaked email from Google. It shows that labeling mainstream conservative media as Nazis is a premise upon which you operate. Not even a question, according to those emails. Given that Ben Shapiro, Jordan Peterson and the others named are Nazis, given that is a premise, what do we do about it? Two of the three people are Jewish, very religious, and you think they are Nazis. It begs the question, what kind of education do people at Google have, to think religious Jews are Nazis? Ben Shapiro is the number one target of the alt-right, and you work off the premise that he is a Nazi. It is disturbing. It gets to the question: do you believe in hate speech, and how do you define it? Can you give me a quick definition right now? Is it written down somewhere? Can you give me a definition of hate speech? Yes, hate speech, as updated in our guidelines, extends to assertions of superiority over protected groups to justify discrimination, violence and so on, based on a number of defining characteristics, whether that is race, sexual orientation, or veteran status. Do you have an example of them engaging in that speech? We evaluate individual content based on that, rather than based on the speaker. Let's get to the next question. Do you believe speech can be violent? Not, can it incite violence, that is very clearly not protected.
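The proactive-detection workflow described earlier in this exchange, machine flagging followed by human review of context, can be sketched in a few lines. This is a hypothetical simplification: the scoring function, threshold, term list and routing labels are invented stand-ins, not any company's real model, lexicon or pipeline.

    # Hypothetical sketch of the proactive-detection workflow described in the
    # testimony: an automated score flags likely violations, but a person reviews
    # the post in context before any removal decision is made.

    FLAG_THRESHOLD = 0.8
    PLACEHOLDER_TERMS = {"exampleslur1", "exampleslur2"}   # placeholders only

    def classifier_score(post_text: str) -> float:
        # Stand-in for a trained model returning a probability of violation.
        words = post_text.lower().split()
        hits = sum(word in PLACEHOLDER_TERMS for word in words)
        return min(1.0, 10.0 * hits / max(len(words), 1))

    def triage(post_text: str) -> str:
        # Route a post: queue it proactively for human review, or take no action.
        if classifier_score(post_text) >= FLAG_THRESHOLD:
            return "queued_for_human_review"   # a reviewer judges context before removal
        return "no_action"

The point of the two-stage routing is the one made in the testimony: automation supplies the proactive flag, but the contextual judgment that decides removal stays with a human reviewer.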
Can speech just be violent, if it isn't specifically calling for violence? Is that possible? I'm not sure I fully understand the distinction. Incitement to violence or encouraging dangerous behavior are the things that would be against our policy. Here is the thing: when you call someone a Nazi, you can say that they are inciting violence, because there is a common thread in this country that Nazis are bad and evil and should be destroyed. When you are operating off that premise, and it is a good premise to operate on, what you are implying is that it is okay to use violence against them. So when one of the most powerful social media companies in the world labels people as Nazis, you can make the argument that it is wholly irresponsible. And it doesn't stop there. A year ago it was also made clear that your fact-check system blatantly targeted conservative news outlets. Do you know what I'm talking about? I'm not familiar with the specific story. I am aware that from all political viewpoints we sometimes get questions of this sort. I can say that the fact-check labels generally are applied based on markup and follow our policies. For the record, they target conservative news media, and oftentimes they have a fact check on there that doesn't even represent the article, but Google makes sure it is right next to it so people understand that the article is questionable, even though when you actually read through it, it has nothing to do with it. A few days ago, one of my constituents posted photos on Facebook of Republican women daring to say they are Women for Trump. Facebook took that down right away. Is there any explanation for that? Without seeing it, I am happy to follow up on that specific example. If we don't share the values of free speech, I'm not sure where we go from here. This practice of silencing millions and millions of people will create wounds and divisions in this country that we cannot heal from. This is extremely worrisome. You have created amazing platforms that do amazing things, but if we continue down this path, it will tear us apart. You do not have a constitutional obligation to enforce the First Amendment, but I say you have an obligation to uphold American values, and the First Amendment is the underpinning of the American values that we should be protecting until the day we die. I will be careful with the time in recognizing and allowing you to make a comment. Thank you for protecting my free speech. The main point I want to make is that even if we have content moderation that is enforced with the noblest principles, and people are striving to be fair and impartial, it is impossible; these so-called standards are irreducibly subjective. What is one person's hate speech, and an example was given by Congressman Higgins, is someone else's cherished, loving speech. For example, in European countries, Canada, Australia and New Zealand, which generally share our values, people who are preaching religious texts that they deeply believe in, and are preaching out of a motivation of love, are prosecuted and convicted for engaging in hate speech against LGBTQ people. I happen to disagree with those viewpoints, but I absolutely defend their freedom to express those viewpoints.
At best, these so-called standards, and I did read every single word of Facebook's standards, and the more you read them the more complicated it gets, mean that no two Facebook enforcers would agree with each other, and none of us would either. That means that we are entrusting to some other authority the power to make decisions that should reside in each of us as individuals: what we choose to see, what we choose not to see, and what we choose to use our own free speech rights to respond to. I cannot agree more about the positive potential of these platforms, but we have to maximize that positive potential through user empowerment tools and radically increased transparency. Thank you for holding this very critical hearing on important issues. We want to turn a little bit to the Russian interference in 2016. The Mueller report charged three companies with conspiring to subvert our election system; in 2018 the Russians were at it again; and former Secretary of Homeland Security Nielsen, before she resigned, warned that the Russians were targeting 2020 as well, along with other countries trying to affect our election system. And addressing the issue of the First Amendment: does it cover fake videos online? We talked about the Nancy Pelosi fake video, and maybe you say yes and I probably say no, but that is a damaging video in both content and reach. Although you may be private companies, when I hear my children tell me, I saw it on this platform, the assumption is that it is factual. You are essentially a messenger, and when information shows up online, this population believes that you are credible and that the information on there is probably credible. That is what is damaging to our country, to our democracy. Moving forward, we have another election happening now, and if misinformation continues to be promulgated through your social media, through your companies, we have a First Amendment problem and also a democracy problem. Any thoughts? Ms. Bickert. Thank you for the question. We share the focus on making sure we are ready; 24 hours is not fast enough. Are we playing defense or offense? Are you being proactive, so that the next Nancy Pelosi video is something you can take down substantially faster than 24 hours? We are being proactive. I do agree that there is a lot we can do to get faster. Our approach is making sure people have the context to understand it. We don't want people to see it in the abstract; we want to inform people. That is something we are focused on getting better at. Who put the Pelosi video up? It was uploaded by a regular person with a regular account. Someone at home with very good software was able to put together a fake video and put it up? The technique that was used was to slow down the audio, which is the same thing done on a lot of comedy shows. What were the consequences to this individual for putting up, essentially, a video defaming someone and hurting her reputation? Congressman, for that video, our approach to misinformation is that we reduce the distribution and we put content from fact checkers next to it, so people can understand that it is false or has been manipulated. Mr. Pickles. We talked about how you provide context. Are your policies changing so you can take it down next time, or will you let it ride? We are looking at all the policies.
Are you going to let it ride or take it down next time? We are looking at what we will do the next time we see a video like this. With respect to that video, we took it down under our deceptive practices policy. Do you think this false video being online is constitutionally protected? There is a very strict definition of false. The Supreme Court has repeatedly said that blatant, outright lies are constitutionally protected unless... Let me switch: will you write policies so outright lies do not have the devastating effect on our voters that they had in the 2016 election? We are looking at the whole issue. We are making sure we have the right approach for the election. We want to raise up authoritative content and reward it, and demote borderline content or harmful misinformation. If I may say, this is the reason why President Trump wants to change the libel laws, because it is now legal to lie about politicians and government officials. This is an area where we will work together on some issues. Thank you very much. Thank you for coming here. This has been very informative. Let me ask you a quick question, yes or no: in GIFCT, or the collaboration, does keeping your trade secrets interfere with your sharing standards and working together? I don't think so. I know you don't allow terrorists to use the platform; is the platform used at all by hate groups? At present, but after New Zealand we recognized we needed to broaden our approach. In my briefing, dog whistling was mentioned as a certain kind of political messaging strategy that employs coded language to send a message to certain groups while flying under the radar, and white supremacist groups often use it. It is rapidly evolving on social media platforms, and it is used for targeting based on race and other characteristics that we find important in this country. How do you solve the challenge of moderating dog whistle content on your platform, especially when it is being used to encourage these things we abhor so much? Firstly, we enforce our rules. One of the things our rules focus on is behavior. If you are targeting somebody because of their membership in a protected group, that is the important factor. GIFCT funds research, so we can investigate the latest trends, what are the things we need to be learning about those kinds of terms. Finally, when we see different kinds of extremist groups, speaking for Twitter, we have banned more than 180 groups from our platform for violent extremism across the spectrum, both in the U.S. and globally. So we have a policy framework and the industry sharing. Thank you. Thank you, congresswoman. I would echo that a lot of this is about getting to the groups. We do have a hate speech policy. Beyond that, we know that sometimes there are groups that are just engaging in bad behavior, so we ban not only violent groups but also white supremacist groups and hate groups, and we have removed more than 200 from our platform to date. Thank you for the question. We remove hate speech on our platform, and the sorts of concerns you are talking about are what motivated the recent changes. We recognize that things may brush up against those policies, be borderline but not quite cross them. For those, we do work to reduce them, demote them in frequency in recommendations, and so on. Congresswoman, if I could have ten seconds? I am going to ask you a question; you will have a little bit more than that. Thank you. This is a very quick question. Ms. Bickert, did you bring any staff today, any employees from your company? Yes, I did.
Could you please have them stand up? Those who have accompanied Ms. Bickert, please stand up. Two. Thank you very much. Mr. Pickles, you? Thank you. Mr. Slater? Thank you very much. A couple of things that you mentioned. You talked about making sure that people are real and that they are American when they are going to run advertisements, and you said, we are going to send information to you and you have to send it back. It simply proves that someone may be pretending to be an American; really living here, or having a domicile here, an address here, still doesn't necessarily guarantee that they are legitimate. And so that's a challenge, I think, that we might have. Is that understandable, Mr. Slater? Am I confusing you? If you could clarify the question, I would appreciate it. It's not a question, it's a statement. We were talking earlier about making sure that people who are doing political advertising, et cetera, are not foreign nationals, that they are Americans, so that we not have this discussion about this advertisement, and it was stated by somebody there. Thank you. That's right. That you do verification to make sure the person is an American, does live in America, and isn't this false whatever coming from another nation. I said that really doesn't necessarily prove that, as far as I'm concerned. Congresswoman, just to clarify, that is Facebook's approach. The ID, sorry. We look at the government I.D. My question to you: are there trigger words that come out of some of this speech, that you think should be protected, that need to be taken down because they incite? All of them; it's a problem. I wanted to give an example from a story in Bloomberg News today that talked about Twitter's, sorry, YouTube's recent new policy of broadening the definition of unprotected hate speech. On the very first day that it went into effect, one of the people who was suppressed was an online activist in the U.K. against antisemitism, but in condemning antisemitism he was of course referring to Nazi expression and Nazi insignia, and hence he was kicked off. So there are no trigger words. It seems to me, I think it was Mr. Pickles, did you give the definition of hate speech for us earlier? That was the hateful conduct policy under Twitter. Yeah, I think that probably covers the president of the United States of America, unfortunately. Thank you, Mr. Chairman. I yield back. The chair recognizes the gentleman from New York. Thank you for being here. Two months ago, in the immediate aftermath of the Christchurch incident, we sent out a letter to you all asking how much money you are spending on counterterrorist screening and how many people you have allocated to it. We have had interesting conversations over those ensuing months, and the three basic problems that you have brought to me are, one, that the question oversimplifies it because there is also an A.I. component to this. Yesterday we did a hearing that showed A.I. alone cannot solve this problem now and will not into the future. You all agree with that. The second thing, though, that you have all said to me is that this is a collective action problem. We are all in this together, and we have the GIFCT. I have some basic questions about the GIFCT. I would appreciate it if you immediately answer yes or no. First question: does the GIFCT have any full-time employees, Ms. Bickert? [ inaudible ]. No. We do have people at Facebook full time dedicated to GIFCT. Okay. The same; we have people at Twitter working with GIFCT. Yes, our answer is the same.
Does the GIFCT have a brick and mortar structure? If I wanted to visit, could I? No. We host the database fully at Facebook. Okay. Mr. Pickles? No, our collaboration is four companies working together. We meet in person, we have virtual meetings; it's about collaboration, not about a physical building. Mr. Slater? Nothing further. No brick and mortar structure. I presume you have a Google or Facebook hangout. But the Adhesive and Sealant Council, an association located in Bethesda, Maryland, the Adhesive and Sealant Council, has five full-time staff. It has a brick and mortar structure. And you all cannot get your act together enough to dedicate enough resources to put full-time staff in a building dealing with this problem. I think it speaks to the ways in which we are addressing this with this technocratic, libertarian elitism, and all the while people are being killed. All the while, there are things happening that are highly preventable. A.I.: are there any A.I. systems that any of you have that are not available to the GIFCT? Congressman, yes, depending on how our products work, artificial intelligence works differently. We actually worked for some time on this: we had to come up with a common technical solution that everybody could use. We now have that for videos, and we give it for free to smaller companies. That's but one technique we have. Okay. Please keep it short. I want to know if you have any A.I. that the GIFCT does not have available. I would also say this isn't just A.I. That's why we share URLs. It's low tech if you are a small company; when someone gives you a URL to content, you don't need an A.I. to look at it. That's why I think it's a combination solution. Nothing further to add. My understanding is that there were no officially declared POCs for the GIFCT that were made public from each company until after the Christchurch shooting. I know that they were there, but they were not declared, established POCs at each of your companies until after the Christchurch shooting two months ago. Is this the case? Congressman, we have a channel that people can use that gets routed to whoever is on call. Is that the case, that there were no established POCs? This is the information that you all have given me already; I am asking you to put it on the record. No established POCs at the GIFCT until after the Christchurch shooting, is that correct? Perhaps not publicly listed, but certainly people... No established public POCs until after the Christchurch shooting? I draw a distinction between the POCs and the companies. You think... I think the point is crisis response. You are not taking it seriously, because there was no public building, no full-time staff, no public POCs until after the Christchurch shooting. That's what I'm speaking to. How is anyone supposed to think that you all take this collective action problem seriously if you have no one working on it full time? This is not something that technology alone can solve. This is a problem that we are blaming the entire industry for, rightfully so, and there are the smallest of associations in this town and throughout the country that do so much more than you do, and it is insulting, it is insulting that you would not at least apologize for the fact that there were no established POCs. It was a joke of an association. It remains a joke of an association. And we have got to see this thing dramatically improved.
Lastly, if there were terrorist content shown to be on your platforms by a public entity, would you take it down? So, Ms. Bickert, why, when the whistleblower organization reveals that Facebook is creating, through its A.I. platform, al-Qaida community groups such as this one, listed as a local business, al-Qaida in the Arabian Peninsula, with 217 followers, I have it right here on my phone, flagged by the whistleblower organization, and it is considered the most active of al-Qaida's branches or franchises that emerged due to weakening central leadership, it's in Yemen and Saudi Arabia, why is this still up? We have every right right now to feel as if you are not taking this seriously. And by we I do not mean Congress; I mean the American people. Thank you. Thank you. The chair recognizes the young lady from Florida, Ms. Demings, for five minutes. Thank you so much, Mr. Chairman. We have already talked about the massacre at Christchurch, and we also know that it was law enforcement who notified Facebook about what was going on. Ms. Bickert, I would like to know if you could talk a little bit about your work and relationship with law enforcement and share some of the specific things you are doing to further enhance your ability to work with law enforcement to continue to prevent incidents like this from happening again. Thank you, congresswoman. We have special points of contact on our law enforcement engagement team, people from within our company, usually former law enforcement, who are assigned to each agency. Those relationships are well functioning and are the reason that New Zealand law enforcement were able to reach out to us. Once they did, within... You surely believe they would have been able to reach out to you even if you didn't have a law enforcement team, right? Wouldn't that have been part of their responsibility, any law enforcement agency that saw what was happening live on your platform, to notify you? We want to make it easy. So here, with New Zealand, when they reached out to us, we responded within minutes. We also have an online portal through which they can reach us that is manned 24 hours a day. If there is any kind of emergency, we are on it. Finally, if we see that there is an imminent risk of harm, we proactively reach out to them. Also, any time there is a terror attack or mass violence anywhere in the world, we proactively reach out to law enforcement to make sure that if there are accounts we should know about, or names of victims, any sort of action we should be taking, we are on it immediately. Okay. Moving right along. Mr. Pickles, you said that we will not solve the problem by removing content alone. Is that correct, what you said? Okay. And I know that most companies do a pretty good job in terms of combating or fighting child exploitation or pornography, and I would like to hear you talk a little bit about your efforts to combat terrorism and share some of the similarities, because we cannot solve the problem by just taking down content alone. So if you could just show some of the similarities between your efforts at combating terrorism and your efforts to combat child pornography. I know you put a lot of resources into combating child pornography, rightfully so. Could you talk about the similarities in the two goals? Absolutely, the similarities and differences. In the similarity space, we use similar technology to look for an image we have seen before, if it appears again.
We can detect that image and stop it being distributed, and work with law enforcement to bring that person to justice. We work with the National Center for Missing and Exploited Children, who work with law enforcement around the world. So that process of discovering content and working with law enforcement is seamless, particularly for child sexual exploitation but also for violent threats. What about for combating terrorism? In either case, if someone is posting that content, removing the content is our response, but there is a law enforcement response there as well, which holds people to account and potentially prosecutes them for criminal offenses, and that working in tandem between the two is very important. We have a similar industry body that shares information, and we also work with governments to share threat intelligence and analysis of trends so we can make sure we are staying ahead of bad actors. The biggest area of similarity is that the bad actors never stay the same. They are constantly evolving, so we have to constantly be looking for the next opportunity to improve. Thank you. At the beginning of this conversation, the chairman asked a question about, or referenced, the video of the Speaker and why some of you removed it and some did not, and Mr. Slater, I was so pleased to hear your answer, which was that you look for deceptive practices. It was deceptive, so you removed it, correct? Could you talk a little bit more about that? It seemed like such a... and, Ms. Strossen, I believe you said that the social media platforms' free speech right is their ability to decide what is posted and what is not posted? Exactly. It's just that simple, right? They can decide what is posted and what is not posted. So, Mr. Slater, if you could talk a little bit about your process. It was deceptive, you took it down. Happy to, congresswoman. Important question. We have Community Guidelines. One of those guidelines is about deceptive practices. We review content to determine whether it is violative, and so on and so forth, and we do that on an individualized basis to see whether, in context, the guideline has been violated, and we present those guidelines publicly on our website for anyone to read. Thank you very much. Mr. Chair, I yield back. Thank you. The chair recognizes the gentleman from Texas, Mr. Taylor, for five minutes. Thank you, Mr. Chairman. Just a quick question: is Google an American company? Congressman, we are headquartered in California, yes. Are you loyal to the American republic? I mean, is that something you think about, or do you think of yourselves as an international company? We build products for everyone. We have offices all across this country, have invested heavily in this country, and are proud to be founded and headquartered in this country. So if you found out that a terrorist organization was using Google products, would you stop that? Would you end that? We have a policy, congressman, of addressing content from designated terrorist organizations, to prohibit it and make sure it is taken down. I am not asking about content. I am saying, if you found al-Nusra was using email to communicate inside that terrorist organization, would you stop that? Do you have a policy on that? If you don't have a policy, that's fine. Where are you on this? Certainly. Where appropriate, we will work with law enforcement to provide information about relevant threats, illegal behavior and so on. Similarly, we will respond to valid requests for information from law enforcement.
I'm not asking if you respond to subpoenas. I appreciate that; it's good to hear that you deign to be legal. If a terrorist organization uses a Google product and you know about it, do you allow that to continue? Or do you have a policy? Under the appropriate circumstances where we have knowledge, we would terminate a user and provide information to law enforcement. Okay. So you'll forgive me; your answer is a little opaque. I'm still trying to figure this out. So if a terrorist organization is using a Google product, do you have a policy about what to do about that? Thank you, congressman. I'm attempting to articulate that policy. I would be happy to come back with further information if it's unclear. Will the gentleman yield? Sure. I listened to the answer about referring it to law enforcement. I think that's an appropriate response, because if there is a suspicion that criminal activity is afoot, you would want to refer it to law enforcement and let law enforcement make the call on that. So just to maybe help you a little bit with that particular portion. Thanks, chief. Back to you. Appreciate it. Just to follow up on that: the Islamic Republic of Iran is the largest state sponsor of terrorism in the world, right? You know, pieces of the Islamic Republic are terrorist organizations. Do you have a specific ban on those terrorist organizations and their ability to use your Google products? We have prohibitions on designated terrorist organizations using products, uploading content, and so on. So you seek to ban terrorist organizations from using Google products? I'm not trying to put words in your mouth. Terrorist organizations, prohibitions on that sort of organization, correct. I'm not asking about content. I'm asking about services. You provide email, a calendar, a host of services that people can use. I'm asking about the services, not the content. I realize the focus of this hearing is about content, which is why you are here. I am asking about the actual services. To the best of my knowledge, if we have knowledge, and again, as my colleagues have said, these bad actors are constantly changing their approaches, trying to game the system and so on, we do everything we can to prohibit that sort of illegal behavior from those sorts of organizations. Do you have screens set up to figure out who the users are, to pierce the veil, so to speak, of an anonymous account, to figure out where it is sourcing from? Are you looking at that? Is that part of how you operate as an organization, something that Google does? Absolutely, congressman. We use a combination of automated systems and threat analysis to ferret out behaviors that may be indicative in that way. Thank you. I appreciate your answers. With that, Mr. Chairman, I appreciate the panel for being here. This is an important topic. Thank you, Mr. Chairman. The chair recognizes the young lady from Nevada, Ms. Titus, for five minutes. Thank you, Mr. Chairman. We have heard a lot about incidents, but we haven't mentioned one that occurred in my district of Las Vegas. This was the deadliest shooting in the United States in modern history: on October 1st, 2017, a gunman opened fire on a music concert, a festival. After that attack there was a large volume of hoaxes, conspiracy theories and misinformation that popped up all across your platforms, including the misidentification of the gunman, his religious affiliation, and some fake missing victims. Some individuals even called it a false flag.
In addition, when you put up a Safety Check site on Facebook where loved ones could check in to see who was safe and who wasn't, there were all kinds of things that popped up, like links to spam websites that solicited bitcoin donations. They peddled false information claiming that the attack was associated with some anti-Trump army. A lot of mess there where people were trying to make contact. I wonder if you have any specific policies or protocols or algorithms to deal with the immediate aftermath of a mass shooting like this. All three of you. Thank you, congresswoman. Let me say that the Las Vegas attack was a horrible tragedy, and we have improved since then, but I want to explain what our policies were then and how we have gotten better. With the Las Vegas attack, we removed any information praising the attack or the shooter. We also took steps to protect the accounts of the victims; sometimes in the aftermath of these things we will see people try to hack into accounts or do other things like that, so we take steps to protect the victims, and we worked very closely with law enforcement. Since then, one area where we have gotten better is crisis response in the wake of a violent tragedy. For instance, with Christchurch you had the companies at this table and others communicating in real time, sharing with one another URLs, new versions of the video of the attack, and so forth. It was literally a real-time operation for the first 24 hours, and in that first 24 hours, on Facebook alone, we were able to stop 1.2 million versions of the video from hitting our site. We have gotten a lot better technically. This is an area we will continue to invest in. Thank you. And as you have just heard, I think one of the challenges we have in this space is that different actors will change their behaviors to try to get around our rules. One of the things we saw after Christchurch, which was concerning, was people uploading content to prove the event had happened. The suggestion was that because companies like ours were removing content at scale, people called that censorship, so people were uploading content to prove the attack had happened. That is a challenge we haven't had to deal with before. It is something we are very mindful of, and we need to figure out the best way to combat it. We have policies against the abuse and harassment of the survivors and victims and their families. So if someone is targeting a victim or a survivor, denying the event took place or harassing them because of another factor like political ideology, we would take action for the harassment in that space. And then, finally, there is the question of how we work with organizations to spread a positive message going forward. If there are groups in your community who are affected by this and are working with the victims to show the positivity of your community, we work with those organizations, wherever they are in the U.S., to spread that message of positivity. Mr. Slater. Thank you, congresswoman. This is of the utmost seriousness. It was a tragic event for our country, for society. Personally, as someone who has lived in Las Vegas and New Zealand, I hold both of these events deeply in my heart. We take a threefold approach to the sort of misinformation and conduct you are talking about.
We try on YouTube to raise up authoritative sources of information, particularly in a breaking news event, to make sure that authoritative sources outpace those who might wish to misinform and so on. We will strike and remove denials of well-documented violent events, or people who are spreading hate speech toward the survivors of an event, and we will also seek to reduce exposure to content that is harmful misinformation, including conspiracies and the like. Well, these people have already been victimized in the worst sort of way. You hate to see them then become victims of something that occurs over the internet. One thing we heard from law enforcement, and I think this relates to what you were saying, Mr. Slater, is that you might think about using algorithms to elevate posts that come from law enforcement, so people seeking help go to those first, as opposed to other information that just comes in randomly. In your work with law enforcement, would you consider that? I know you were addressing the chief's questions earlier. Ms. Bickert? Thank you. That is something we can explore with law enforcement. We certainly try to make sure that people have accurate information after attacks. Our systems didn't work the way we wanted them to after Las Vegas. We learned from that, and I think we are in a better place today. I would appreciate it if you would look into that. I think law enforcement would, too. Thank you, Mr. Chairman. Thank you. The chair recognizes the gentleman from Mississippi for five minutes. Thank you, Mr. Chairman. First of all, to our representatives from Facebook, Google, and Twitter, I want to thank you for being here today. I want to thank you for previously appearing at a closed briefing that we had earlier this year, as we seek to continue to examine this complex issue of balancing First Amendment rights against making sure that content on social media does not promote terroristic activity. Professor Strossen, you were not here during that closed briefing, so I want to ask you a couple of questions. In your written testimony you highlight the potential dangers associated with content moderation, even when done by private companies and in accordance with their First Amendment rights. You make a case for social media companies to provide free speech protections to users. In the conclusion of your written testimony, you say: how to effectively counter the serious potential adverse impact of terror content and misinformation is certainly a complex problem; while restricting such expression might appear to be a clear, simple solution, it is in fact neither, and, moreover, it is wrong. I know that was the conclusion of an 11-page report that you provided, but could you briefly summarize it for the purposes of this hearing? Thank you so much, congressman. Yes. The problem is the inherent subjectivity of these standards. No matter how much you articulate them, and I think it is wonderful that Facebook and the other companies have now, fairly recently, shared their standards with us, you can see that it is impossible to apply them consistently to any particular content. Reasonable people will disagree. The concept of hate, the concept of terror, the concept of misinformation are strongly debated. One person's fake news is somebody else's cherished truth. Now, a lot of attention has been given to the reports about discrimination against conservative viewpoints in how these policies are implemented.
I want to point out that there have also been a lot of complaints from progressives and civil rights activists and social justice activists that their speech is being suppressed. And what I am saying is that no matter how good the intentions are, no matter who is enforcing it, whether it be a government authority or a private company, there is going to be, at best, unpredictable and arbitrary enforcement and, at worst, discriminatory enforcement. And let me ask you, as an expert on the First Amendment, do you feel that content moderation by social media companies has gone too far? I think that, first of all, they have a First Amendment right; I think that is really important to stress. But given the enormous power of these platforms, which, as the Supreme Court said in a unanimous decision two years ago, are now the most important forum for the exchange of information and ideas, including with elected officials, those who should be accountable to we, the people, if we do not have free and unfettered exchange of ideas on these platforms, then for all practical purposes we do not have it at all. And that is a threat to our democratic republic as well as to individual liberty. And there is a lot that these platforms can do in terms of user empowerment, so that we can make our own choices about what to see and what not to see, and also provide information that will help us evaluate the credibility of the information being put out there. And finally, do you have any recommendations that you feel would help balance individuals' First Amendment rights against trying to protect social media from terrorists being able to use it as a platform, that you would recommend, first, to the social media companies, and then are there any recommendations that you would have for this body, things that Congress should consider, that would help us as we navigate this very difficult situation? I think that congressional oversight, as you are exercising very vigorously, is extremely important. I think encouraging, but not requiring, the companies to be respectful of all of the concerns, human rights concerns of fairness and transparency and due process as well as free speech, but also concerns about potential terrorism and dangerous speech. I actually think that the United States Supreme Court and international human rights norms, which largely overlap, have gotten it right. They restrict the discretion to enforce standards by insisting that before speech can be punished or suppressed, there has to be a specific and direct causal connection between the speech, in that particular context, and an imminent danger. And we can never look at words alone in isolation, to get back to the question I was asked by the congresswoman, because you have to look at context. If in a particular context there is a true threat, there is intentional incitement of imminent violence, there is material support of terrorism, there are defamatory statements, there are fraudulent statements, all of that can be punished by the government and, therefore, those standards should be enforced by social media as well. That, in my view, is exactly the right way to strike the balance here. Thank you, Mr. Chairman. I yield back. Thank you very much. The chair recognizes the gentleman from Missouri for five minutes. Thank you, Mr. Chairman. I am taking a different approach than my colleagues. Ms. Strossen, in 1989 I was a member of the city council in Kansas City.
The Klan had planned a big march; it is still online, you can look it up. I fought against them, and the ACLU supported their right to march and said that if I passed an ordinance, I was also vice mayor at the time, they would challenge it in court. I'm not mad. I'm not upset. I am a former board member of the ACLU. So I think that free speech has to be practiced even when it is hard. Now, for everybody else, in some ways I feel sorry for you, but not enough to let you out of here without beating up on you a little bit, because I am afraid for our country. I mean, we have entered an age where people are respecting alternative truths, and it is just so painful for me to watch. I don't think I am watching it in isolation. Alternative truths: people will say something that is not true and continue to say it; it doesn't matter. I saw it last night, where the President said Barack Obama started this border policy, and I tried to, I'm correcting it. What one of the TV networks did, and this is what I want you to consider, is put up people making statements about what was actually happening. They showed Jeff Sessions when he had first announced the separation policy, and so forth. And the problem is that Churchill said that a lie can travel halfway around the world before the truth puts on its shoes, and that is true. If we started a 20th-century new bible, that should be one of the scriptures, because it is a fact. And the truth cannot always be uncontaminated with sprinkles of deceit. So you have a tough job. I don't want to make it seem like it is something that you can do easily. Our system of government, I think, and even beyond that, our moral connections, depend on something I didn't realize, I spent three and a half years in seminary and didn't realize this until recently, but we depend significantly on shame. There are some things that laws cannot touch, and so our society functions on shame. And when shame is dismembered, I am not sure what else we have left. But what I would like for you to react to, and maybe even consider, is this: instead of taking something down in some instances, why not just put up the truth next to it? I mean, the truth. I'm not talking about somebody else's response. I'm talking about the truth, where, like the video I wish I could have brought to you, here is the lie and here is the truth. Anybody? Help me. Okay. This is a very important issue, congressman. Yes, absolutely. One of the things we have been trying to do is twofold with respect to harmful misinformation. One is, where there is a video that says, say, the moon landing didn't happen, and my grandmother says that, or the earth is flat, the video may be up, but you will see a box underneath it that says here is a link to the Wikipedia page about the moon landing, or the Encyclopedia Britannica page, where you can learn more. That speaks to the feature you were talking about. You do that now? We do that today, yes, sir. And the other thing we try to do is reduce the exposure, the frequency of the recommendations, of information that might be harmful misinformation, such as those sorts of conspiracies. Thank you. There is an interplay between what is on social media companies, the news media, what is on TV, and how that cycle of information works together, and it is a critical part of solving this. For Twitter, because we are a public platform, very, very quickly people are able to challenge, to expose, to say that's not true, here is the evidence, here is the data.
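The put-the-truth-next-to-it approach described in this exchange, attaching a context panel rather than removing the post, can be sketched briefly. This is a hypothetical simplification: the claim list, the placeholder URLs, and the substring matching rule are invented stand-ins, not any platform's real system.

    # Hypothetical sketch of annotating rather than removing a post: when the post
    # matches a known debunked claim, attach a link to authoritative context and
    # mark it for demotion in recommendations. Placeholders throughout.

    DEBUNKED_CLAIMS = {
        "moon landing didn't happen": "https://example.org/context/moon-landing",
        "earth is flat": "https://example.org/context/earth-shape",
    }

    def annotate(post_text: str) -> dict:
        # Return the post plus an optional context link instead of removing it.
        lowered = post_text.lower()
        for claim, source in DEBUNKED_CLAIMS.items():
            if claim in lowered:
                return {"post": post_text, "context_link": source, "demoted": True}
        return {"post": post_text, "context_link": None, "demoted": False}

    print(annotate("I read that the earth is flat."))

The design choice this sketch reflects is the one the witnesses describe: the contested content stays visible, but corrective context travels with it and its reach is reduced.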
There is something incredibly important about these conversations taking place in public. I think that is something we need to bear in mind as we move into the information century. Thank you.

Congressman, thank you. Similar to what my colleague referenced, if there is misinformation that a third-party fact-checking organization has debunked, and we work with 45 of these organizations worldwide that meet objective criteria, we actually take the articles from those fact checkers and put them right next to the false content so that people have that context. If you go to share some of that content, we tell you this content has been rated false by a fact checker, and we link to it. Similarly, when it comes to things like misinformation about vaccines, we are working with organizations like the CDC and the World Health Organization to get content from them that we can put next to vaccine-related misinformation on our site. We think this is a really important approach. It obviously takes a lot of resources. Another thing we are trying to do, and this is similar to what Mr. Pickles mentioned, is empower those with the best voices to reach the right audience. We invest in promoting counter speech and truthful speech. Thank you.

Thank you very much. Before we close, I'd like to insert into the record a number of documents. The first is a series of letters from stakeholders addressed to Facebook as well as Twitter and YouTube about hateful content on their platforms. The second is a joint report from the Center for European Policy Studies and the Counter Extremism Project. The third is a statement for the record from the Anti-Defamation League. The fourth is copies of the community standards, as of this day, for Facebook, Twitter, and Google. Without objection, so ordered. I thank the witnesses for their valuable testimony and the members for their questions. The members of the committee may have additional questions for the witnesses, and we ask that you respond expeditiously in writing to those questions. The other point I'd like to make is for Facebook: you were 30 hours late with your testimony. Staff took note of it, and for a company of your size, that was just not acceptable to the committee. So I want the record to reflect that. Without objection, the committee record shall be kept open for ten days. Hearing no further business, the committee stands adjourned.