A white supremacist terrorist killed 51 people and wounded 39 more at two mosques in Christchurch, New Zealand. Our thoughts and prayers are with the victims and their families. The motive behind the attack is not in question. The terrorist wrote an extensive manifesto outlining his white supremacist, white nationalist, anti-immigrant, anti-Muslim, and fascist beliefs. His act was horrifying beyond words, and it shocked the conscience.

Shockingly, the terrorist was able to live stream the attack on Facebook, where the video and its gruesome content initially went undetected. Instead, law enforcement officials in New Zealand had to contact the company and ask that it be removed. When New Zealand authorities called on all social media companies to remove these videos immediately, they were unable to comply. Human moderators could not keep up with the volume of videos being reposted, and the companies' automated systems were unable to recognize minor changes in the video, so it spread online and around the world.

The fact that this happened nearly two years after Facebook, Twitter, Google, Microsoft, and other major tech companies established the Global Internet Forum to Counter Terrorism, or GIFCT, is troubling, to say the least. The GIFCT was created for tech companies to share best practices to combat the spread of online terrorist content. Back in July 2017, representatives of the GIFCT briefed this committee on this new initiative. At the time, I was optimistic about its intentions and goals and acknowledged that its members had demonstrated initiative and a willingness to engage on this issue while others had not. But after white supremacist terrorists were able to exploit social media platforms in this way, we have doubts about the GIFCT and its effectiveness. Representatives of the GIFCT briefed this committee in March, after the Christchurch massacre. Since then, other members of this committee and I have asked important questions about the organization and have yet to receive satisfactory answers.

Today, I hope to get answers regarding your actual efforts to keep terrorist content off your platforms. I want to know how you will prevent content like the New Zealand attack video from spreading on your platforms again. This committee will continue to engage social media companies about the challenges they face in addressing terror content on their platforms. In addition to terror content, I want to hear from our panel about how they are working to keep hate speech and harmful misinformation off their platforms. I want to be very clear: Democrats will respect the free speech rights found in the First Amendment, but much of the content I am referring to is either not protected speech or violates the social media companies' own terms of service. We have seen time and time again that social media platforms are vulnerable to being exploited by bad actors, including those working at the behest of foreign governments who seek to sow discord by spreading misinformation. This problem will become more acute as we approach the 2020 elections. We want to understand how companies can strengthen their efforts to deal with this persistent problem.

At a fundamental level, today's hearing is about transparency. We want to understand whether and to what extent social media companies are incorporating questions of national security, public safety, and the integrity of our domestic institutions into their business models.
I look forward to having that conversation with the witnesses here today and to our ongoing dialogue on behalf of the American people. I thank the witnesses for joining us and the members for their participation. With that, I now recognize the ranking member of the full committee, the gentleman from Alabama, Mr. Rogers, for five minutes.

Rep. Rogers: Thank you, Mr. Chairman. Concern about online content has existed since the creation of the internet. It has peaked over the last decade, in which foreign terrorists and their global supporters have exploited the openness of online platforms to radicalize, mobilize, and promote their violent messages. These tactics have proved successful, so much so that we are seeing domestic extremists mimic many of the same techniques to gather followers and spread hateful, violent propaganda. Pressure has grown on social media companies to change their terms of service to limit posts linked to terrorism, violent criminal activity, and, most recently, hateful rhetoric and misinformation. Companies have responded to this pressure in a number of ways, including the creation of the Global Internet Forum to Counter Terrorism, or GIFCT. They are also hiring more human content moderators.

Today's hearing is also an important opportunity to examine the constitutional limits placed on the government to regulate or restrict free speech. Advocating violent acts and recruiting terrorists online is illegal, but expressing one's political views, however repugnant they may be, is protected under the First Amendment. I was deeply concerned to hear about Google's policies regarding President Trump and conservative news media. Google's head of responsible innovation, Jen Gennai, recently said, quote, "We all got screwed over in 2016. The people got screwed over, the news media got screwed over, everybody got screwed over, so we rapidly were like, how do we prevent this from happening again?" Then Ms. Gennai wrote that Elizabeth Warren wants us to break up Google: "That will not make it better. It will make it worse. All of these smaller companies without these similar resources will be charged with preventing the next Trump situation." Ms. Gennai is entitled to her opinion, but we are in trouble if her opinions are policy. This report and others like it are a stark reminder of why our founders created the First Amendment. In fact, the video I just quoted from has been removed from YouTube, the platform owned by Google, which is here today. I have serious questions about Google's ability to be fair and balanced when it appears they have colluded to silence negative press coverage.

Regulating speech quickly becomes a subjective exercise for the government or the private sector. Noble intentions often give way to bias and personal issues. The solution to this problem is complex and will involve enhanced cooperation by government, industry, and individuals, while protecting the constitutional rights of all Americans. I appreciate our witnesses' participation today. I hope today's hearing will provide greater transparency and understanding of this complex challenge, and with that, I yield back, Mr. Chairman.

Rep. Thompson: Thank you very much. Other members are reminded that, under committee rules, opening statements may be submitted for the record. I welcome our panel of witnesses. Our first witness is Ms. Monika Bickert, the Vice President of Global Policy Management at Facebook. Next, we are joined by Mr. Nick Pickles, who is Senior Public Policy Strategist at Twitter.
Our third witness is Mr. Derek Slater, Global Director of Information Policy at Google. Finally, we welcome Ms. Nadine Strossen, who serves as a professor of law at New York Law School. Without objection, the witnesses' full statements will be inserted in the record. I will now ask each witness to summarize their statement for five minutes, beginning with Ms. Bickert.

Ms. Bickert: Thank you, Chairman Thompson, Ranking Member Rogers, and members of the committee. Thank you for the opportunity to be with you today. I am Monika Bickert, Facebook's Vice President of Global Policy Management, and I am in charge of our content policies and counterterrorism efforts. Before I joined Facebook, I prosecuted federal crimes for seven years at the Department of Justice. On behalf of our company, I want to thank you for your leadership in combating extremism, terrorism, and other threats to our homeland and national security. I would also like to start by saying that all of us at Facebook stand with the victims, their families, and everyone affected by the recent terror attacks, including the horrific violence in Sri Lanka and New Zealand. In the aftermath of these acts, it is even more important to stand together against hate and violence, and we make this a priority in everything that we do at Facebook.

On terrorist content, our view is simple: there is no place, absolutely no place, on Facebook for terrorists. They are not allowed to use our services under any circumstances. We remove their accounts as soon as we find them. We also remove any content that praises or supports terrorists or their actions, and if we find evidence of imminent harm, we promptly inform authorities. There are three primary ways we are implementing this approach: first, with our products, which help stop terrorists and their propaganda at the gates; second, through our people, who help us review content and implement our policies; and third, through our partnerships outside the company, which help us stay ahead.

So first, our products. Facebook has invested significantly in technology to help identify terrorist content, including through the use of artificial intelligence, but also using other automation and technology. For instance, we can now identify violating textual posts in 19 different languages. With the help of these improvements, we have taken action on more than 25 million pieces of terrorist content since the beginning of 2018. Of the content that we have removed from Facebook for violating our terrorism policies, more than 99 percent is content we found ourselves, using our own technical tools, before anyone reported it to us.

Second, our people. We now have more than 30,000 people working on safety and security across Facebook, across the world, and that is three times as many people as we had dedicated to those efforts in 2017. We also have more than 300 highly trained professionals exclusively or primarily focused on combating terrorist use of our services. Our team includes counterterrorism experts, former prosecutors like myself, former law enforcement officials, and former intelligence officials, and together, they speak more than 50 languages and are able to provide 24-hour coverage.

Finally, our partnerships. In addition to working with third-party intelligence providers to more quickly identify terrorist activity on the internet, we also regularly work with academics who are studying the latest terrorist trends, and with government officials.
Following the attacks in New Zealand, Facebook was proud to be a signatory to the Christchurch Call to Action, a nine-point plan for the industry to better combat terrorist attempts to use our services. We also partner across industry. As the chairman and ranking member mentioned, in 2017 we launched the Global Internet Forum to Counter Terrorism, or GIFCT, with Google, Microsoft, and Twitter. The point of GIFCT is to share information, and also to share technology and research, to better combat these threats. Through GIFCT, we have expanded a database for companies to share what we call hashes, which are basically digital fingerprints of terrorist content, so that we can all remove it more quickly and help smaller companies do that, too. We have also trained over 110 companies from around the globe in best practices for countering terrorists' use of the internet. Facebook took over the chair of GIFCT in 2019, and along with our fellow members, we have worked this year to expand our capabilities, including making technology available to other companies, especially smaller companies, and we have also improved our crisis protocols. In the wake of the horrific Christchurch attacks, we communicated in real time and removed hundreds of versions of the video of the attack, despite the fact that bad actors were actively trying to edit the video before uploading it in order to circumvent our systems. We know that adversaries are always evolving their tactics, and we have to improve if we want to stay ahead. Though we will never be perfect, we have made progress, and we are committed to tirelessly combating extremism on our platform. I appreciate the opportunity to be here today. I look forward to answering your questions. Thank you.

Mr. Pickles: Chairman Thompson, Ranking Member Rogers, and members of the committee, thank you for the opportunity to be here today. We keep the victims...

Rep. Thompson: Turn your mic on, please. Can you pull the mic closer?

Mr. Pickles: Sorry. Is that better? Thank you. We keep the victims, their families, and the affected communities in Christchurch and around the world in our minds as we undertake this important work. We have made this our top priority and measure our efforts by the health of the conversation on the platform. Conversely, hateful conduct, terrorist content, and deceptive practices detract from the health of the platform.

I would like to begin by outlining three key policies. Firstly, Twitter has a zero-tolerance approach to terrorist acts on our platform. Individuals may not engage in terrorism recruitment or terrorist acts. Since 2015, we have suspended more than 1.5 million accounts for violations of our rules related to terrorism, and we have seen more than 90 percent of those accounts suspended through proactive measures. The majority of the time, we take action at the account-creation stage, before the account has even tweeted. The remaining 10 percent is addressed through a combination of user reports and other methods. Secondly, we prohibit violent extremist groups. These are defined as groups that, whether by their statements on or off the platform, promote or use violence against civilians to further their cause, whatever their ideology. Since 2017, we have taken action on 184 groups globally and permanently suspended more than 2,000 unique accounts. Thirdly, Twitter does not allow hateful conduct on its service. An individual on Twitter is not permitted to promote violence against, or directly attack or threaten, people based on protected characteristics.
Wherever these rules are broken, we will take action to remove the content and will permanently remove those who promote terrorism on Twitter. As you have heard, Twitter is a member of the Global Internet Forum to Counter Terrorism, in partnership with Facebook, Google, and Microsoft, sharing information across the industry as well as providing support to smaller companies.

We learned a number of lessons from the Christchurch attack. The distribution of the media was manifestly different from previous terrorist acts. This reflects a change in the wider threat environment that requires a renewed approach and a focus on crisis response. After Christchurch, an array of individuals online sought to continuously re-upload the content created by the attacker, both the video and the manifesto. The broader ecosystem presented then, and still presents, a challenge we cannot avoid. A range of third-party services were used to share content, including forums and websites that have long hosted some of the most egregious content online. Our analysis found that 70 percent of the views of the video posted by the Christchurch attacker came from verified accounts on Twitter, including news organizations and individuals posting the video to condemn the attack. We are committed to learning and improving, but every entity has a part to play. We should also take some heart from the positive examples we have seen on Twitter around the world, as users come together to challenge hate and division. Hashtags like "Pray for Orlando," "Je Suis Charlie," or, after the Christchurch attacks, "Hello Brother," point to a better future for us all. In the months since the attack, governments, civil society, and others have united to commit to a safe, secure, open, and global internet. In fulfilling our commitment to the Christchurch Call, we will take a wide range of actions, including continuing to invest in technology so we can respond as quickly as possible to future incidents.

Let me now turn to our approach to attempts to manipulate the public conversation. As a uniquely open service, Twitter enables the clarification of falsehoods in real time. We proactively enforce policies and use technology to halt the spread of content propagated through manipulative tactics. Our rules clearly prohibit account manipulation, malicious automation, and fake accounts. We continue to explore how we may take further action, through both policy and product, to address these issues in the future. We continue to look at what we can do to safeguard the health of conversations on Twitter. We look forward to working together on these important issues. Thank you.

Rep. Thompson: Thank you for your testimony. I now recognize Mr. Slater to summarize his testimony for five minutes.

Mr. Slater: Chairman Thompson, Ranking Member Rogers, and distinguished members of the committee, thank you for the opportunity to appear before you today. I appreciate your leadership on the important issues of radicalization and misinformation online and welcome the opportunity to discuss Google's work in these areas. My name is Derek Slater, and I am the Global Director of Information Policy at Google. In my role, I lead a team that works on public policy frameworks for online content.
At Google, we believe that the internet has been a force for creativity, learning, and access to information. Supporting the free flow of ideas is core to our mission: to organize the world's information and make it universally accessible and useful. Yet there have always been legitimate limits on speech, even where laws protect free expression, and this is true both online and off, espe