Misinformation in social media: Facebook, Twitter, and Google executives gave opening statements, followed by member questions.

Rep. Thompson: The Committee on Homeland Security will come to order. The committee is meeting today to receive testimony on examining social media companies' efforts to counter online terror content and misinformation. A white supremacist terrorist killed 51 people and wounded 39 more at mosques in Christchurch, New Zealand. Our thoughts and prayers are with the victims and their families. The motive behind the attack is not in question. The terrorist wrote an extensive manifesto outlining his white supremacist, white nationalist, anti-immigrant, anti-Muslim, and fascist beliefs. The attack was horrifying beyond words, and it shocked the conscience. Shockingly, the terrorist was allowed to live stream the attack on Facebook, where the gruesome video initially went undetected. Instead, law enforcement officials in New Zealand had to contact the company and ask that it be removed. When New Zealand authorities called on all social media companies to remove these videos immediately, the companies were unable to comply. Human moderators could not keep up with the volume of videos being reposted, and the companies' automated systems were unable to recognize minor changes in the video, so the video spread online and around the world. The fact that this happened merely two years after Facebook, Twitter, Google, Microsoft, and other major tech companies established the Global Internet Forum to Counter Terrorism, or GIFCT, is troubling, to say the least. The GIFCT's purpose is to share best practices to combat the spread of online terrorist content. Back in July 2017, representatives of the GIFCT briefed the committee on this new initiative.
At the time, I was optimistic about its intentions and goals and acknowledged that some members had demonstrated a willingness to engage on this issue while others had not. But after white supremacist terrorists were able to exploit social media platforms in various ways, we have doubts about the GIFCT. Representatives of the GIFCT briefed this committee in March, after the Christchurch massacre. Since then, I and other members of this committee have asked important questions about the organization and have yet to receive satisfactory answers. Today, I hope to get answers regarding your actual efforts to keep terrorist content off your platforms. I want to know how you will prevent content like the New Zealand attack video from spreading on your platforms again. This committee will continue to engage social media companies about the challenges they face in addressing terror content on their platforms. In addition to terror content, I want to hear from our panel about how they are working to keep hate speech and harmful misinformation off their platforms. I want to be very clear: Democrats will respect the free speech rights enshrined in the First Amendment, but much of the content I am referring to is either not protected speech or violates the social media companies' own terms of service. We have seen time and time again that social media platforms are vulnerable to exploitation by bad actors, including those working at the behest of foreign governments who seek to sow discord by spreading misinformation. This problem will only become bigger as we approach the 2020 elections. We need to deal with this persistent problem. At a fundamental level, today's hearing is about transparency. We need to understand whether and to what extent social media companies are incorporating questions of national security, public safety, and the integrity of our domestic institutions into their business models.
I look forward to having that conversation with the witnesses here today and to our ongoing dialogue on behalf of the American people. I thank the witnesses for joining us and the members for their participation. With that, I recognize Mr. Rogers.

Rep. Rogers: Concern about online content has existed since the creation of the internet. It has peaked over the last decade, in which foreign terrorists and their global supporters have exploited the openness of online platforms to radicalize, mobilize, and share their violent messages. These tactics have proved successful, so much so that we are seeing domestic extremists mimic many of the same techniques to gather followers and spread hateful, violent propaganda. Some companies have changed their terms of service to limit posts linked to terrorism, violence, criminal activity, and, most recently, hateful rhetoric and misinformation. Companies have responded to this pressure in a number of ways, including the creation of the Global Internet Forum to Counter Terrorism, or GIFCT. They are also hiring more human content moderators. Today's hearing is also an important opportunity to examine the constitutional limits placed on the government to regulate or restrict free speech. Advocating violent acts and recruiting terrorists online is illegal, but expressing one's political views, however repugnant they may be, is protected under the First Amendment. I was deeply concerned to hear of Google's policies regarding President Trump and conservative news media. Google's head of responsible innovation, Jen Gennai, recently said, "We all got screwed over. The people got screwed over, the news media got screwed over, everybody got screwed over, so we rapidly were like, how do we prevent this from happening again?" Then Ms. Gennai wrote, "Elizabeth Warren wants us to break up Google. That will not make it better. It will make it worse. All of these smaller companies without these similar resources will be charged with preventing
the next Trump situation. We are in trouble if her opinions are policy." This report and others like it are a stark reminder of why our founders gave us the First Amendment. The video has been removed from YouTube, the platform owned by Google. I have serious questions about Google's ability to be fair and balanced when it appears they have colluded to silence negative press coverage. Regulating speech quickly becomes a subjective exercise, whether for the government or the private sector. Noble intentions also often give way to bias and personal issues. Addressing this challenge will require enhanced cooperation by the government, industry, and individuals, while protecting the constitutional rights of all Americans. I appreciate our witnesses' participation today. I hope it will provide greater transparency and understanding of today's complex challenge, and with that, I yield back, Mr. Chairman.

Rep. Thompson: Thank you. Other members are reminded that opening statements may be submitted for the record. I welcome our panel of witnesses. Our first witness is Monika Bickert, head of global policy management at Facebook. Our second witness is Nick Pickles, Twitter senior public policy strategist. Our third witness is Derek Slater, global director of information policy at Google. And our fourth witness is Nadine Strossen, New York Law School professor. The witnesses' full statements will be inserted in the record. I will now ask each witness to summarize their statement in five minutes, beginning with Ms. Bickert.

Ms. Bickert: Thank you, Chairman Thompson, Ranking Member Rogers, and members of the committee. I am Monika Bickert, head of global policy management at Facebook, and I am charged with our content policy and counterterrorism efforts. Before I joined Facebook, I prosecuted federal crimes for seven years at the Department of Justice. On behalf of our company, I want to thank you for your leadership in combating extremism, terrorism, and other threats to our homeland and national security.
I would also like to start by saying that all of us at Facebook stand with the victims, their families, and everyone affected by the recent terror attacks, including the horrific violence in Sri Lanka and New Zealand. In the aftermath of these acts, it is even more important to stand together against hate and violence, and we make this a priority in everything that we do at Facebook. On terrorist content, our view is simple: there is no place on Facebook for terrorists. They are not allowed to use our services under any circumstances. We remove their accounts as soon as we find them. We also remove any content that praises or supports terrorists or their actions, and if we find evidence of imminent harm, we promptly inform authorities. There are three primary ways we are implementing this approach: first, our products, which help stop terrorists and their propaganda at the gate; second, our people, who enforce our policies; and third, our partnerships outside the company, which help us stay ahead. First, our products. Facebook has invested significantly in technology to help identify terrorist content, including through the use of artificial intelligence, but also using other automation and technology. For instance, we can now identify violating textual posts in 19 different languages. With the help of these improvements, we have taken action on more than 25 million pieces of terrorist content since the beginning of 2018. Of the content we have removed from Facebook for violating our terrorism policies, more than 99 percent is content we found ourselves using our own technical tools, before anyone reported it to us. Second, our people. We now have more than 30,000 people working on safety and security at Facebook across the world, three times as many as we had dedicated to those efforts in 2017. We also have more than 300 highly trained professionals exclusively or primarily focused on combating terrorist use of our services.
Our team includes counterterrorism experts, former prosecutors like myself, and former law enforcement officials. Together, they speak more than 50 languages and are able to provide 24-hour coverage. Finally, our partnerships. In addition to working with third-party intelligence providers to more quickly identify terrorist activity on the internet, we work with academics who are studying the latest trends, and with government officials. Following the attacks in New Zealand, Facebook was proud to be a signatory to the Christchurch Call to Action, a nine-point plan for the industry to better combat terrorist attempts to use our services. We also partner across industry. As the chairman and ranking member mentioned, in 2017 we launched the Global Internet Forum to Counter Terrorism, the GIFCT. Its purpose is to share information, technology, and research to better combat these threats. Through the GIFCT, we have expanded a database for companies to share hashes, which are basically digital fingerprints of terrorist content, so that we can all remove it more quickly and help smaller companies do so, too. We have also trained over 110 companies from around the globe in best practices for countering terrorists' use of the internet. Facebook took over the chair of the GIFCT this year, and along with our members, we have worked to expand our capabilities, including making this technology available to other companies, especially smaller companies, and we have also improved our crisis protocols. In the wake of the horrific Christchurch attacks, we communicated in real time and shared a list of hundreds of versions of the attack video, despite the fact that bad actors were actively editing the video and re-uploading it to try to circumvent our systems. We know that adversaries are always evolving their tactics, and we have to improve if we want to stay ahead.
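The hash-sharing database Ms. Bickert describes can be sketched in a few lines. Everything below — the class name, the choice of SHA-256, the sample bytes — is an illustrative assumption, not the GIFCT's or Facebook's actual design; the sketch also shows why, as the chairman's opening statement noted, exact fingerprints fail when a video is edited even slightly.

```python
# Illustrative sketch of hash-sharing: each participant computes a digital
# fingerprint of known violating media and shares only the fingerprint,
# so other companies can block re-uploads without exchanging the file itself.
import hashlib


class SharedHashDatabase:
    """A minimal shared database of content fingerprints (hypothetical)."""

    def __init__(self):
        self._hashes = set()

    @staticmethod
    def fingerprint(data: bytes) -> str:
        # SHA-256 is used here only for simplicity. A cryptographic hash
        # changes completely if even one byte of the file changes, which is
        # why production systems rely on perceptual hashes that tolerate
        # re-encoding, cropping, and other minor edits.
        return hashlib.sha256(data).hexdigest()

    def add(self, data: bytes) -> None:
        self._hashes.add(self.fingerprint(data))

    def is_known(self, data: bytes) -> bool:
        return self.fingerprint(data) in self._hashes


db = SharedHashDatabase()
db.add(b"known violating video bytes")

print(db.is_known(b"known violating video bytes"))   # exact re-upload: True
print(db.is_known(b"known violating video bytes!"))  # one-byte edit: False
```

In practice, matching edited re-uploads requires perceptual hashing, which fingerprints what media looks like rather than its exact bytes; Facebook has open-sourced such algorithms (PDQ for images, TMK+PDQF for video) for this purpose.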
Though we will never be perfect, we have made progress, and we are committed to tirelessly combating extremism on our platform. I appreciate the opportunity to be here today and look forward to answering your questions. Thank you.

Mr. Pickles: Chairman Thompson, Ranking Member Rogers, members of the committee, thank you for the opportunity. We keep the victims

Rep. Thompson: Turn your mic on, please. Can you pull the mic closer?

Mr. Pickles: Sorry. Is that better? Thank you. We keep the victims, their families, and the affected communities in Christchurch and around the world in our minds as we undertake this important work. We have made this our top priority and the measure of our efforts. Conversely, hateful conduct, terrorist content, and deceptive practices detract from the health of the platform. I would like to begin by outlining three key policies. First, Twitter has a zero-tolerance approach to terrorism. Individuals may not engage in terrorist recruitment or promote terrorist acts. Since 2015, we have suspended more than 1.5 million accounts for violations of our rules related to terrorism, and more than 90 percent of those accounts were suspended through proactive measures. We take action at the account creation stage, before the account has even tweeted. The remaining 10 percent are identified through a combination of user reports and other methods. Secondly, we prohibit violent extremist groups. These are defined as groups that, whether by statements on or off the platform, promote or use violence against civilians, whatever their cause, whatever their ideology. We have taken action on 184 groups globally and permanently suspended more than 2,000 unique accounts. Thirdly, individuals on Twitter are not permitted to promote violence or directly attack or threaten people based on protected characteristics. Where any of these rules are broken, we will take action to remove the content, and we will permanently remove those who promote terrorism from Twitter.
As you have heard, Twitter is a member of the Global Internet Forum to Counter Terrorism, in partnership with Facebook, Google, and Microsoft, as well as providing essential support. We learned a number of lessons from the Christchurch attacks. The distribution of media was manifestly different from how other terrorist actions have worked. This reflects a change in the wider threat environment that requires a new approach and a focus on crisis response. After Christchurch, an array of individuals online sought to continuously re-upload the content created by the attacker, both the video and the manifesto. The broader internet ecosystem presented then, and still presents, a challenge. A range of third-party services were used to share content, including forums that have long hosted some of the most egregious content online. Our analysis found that 70 percent of the views of the video posted by the Christchurch attacker came from verified accounts on Twitter, including news organizations and individuals posting the video to condemn the attack. We are committed to learning and improving, but every entity has a part to play. We should also take some heart from the positive examples we have seen on Twitter around the world, as users come together to challenge hate and division. Hashtags like #PrayForOrlando or, after the Christchurch attacks, #HelloBrother, point toward a better future for us all. In the months since the attack, governments, civil society, and others have united to commit to a safe, secure, open, and global internet. In fulfilling our commitment to the Christchurch Call, we will take a wide range of actions, including continuing to invest in technology so that we can respond as quickly as possible to future incidents. Let me now turn to our approach to attempts to manipulate the public conversation. As a uniquely open service, Twitter enables the clarification of falsehoods in real time.
We proactively use technology to halt the spread of content propagated through manipulative tactics. We prohibit account manipulation, malicious automation, and fake accounts. We continue to explore further actions, through both policy and product, to address these issues in the future. We continue to look at what we can do to safeguard a healthy environment on Twitter. We look forward to working together on these important issues. Thank you.

Rep. Thompson: Thank you for your testimony. I now recognize Mr. Slater to summarize his testimony in five minutes.

Mr. Slater: Chairman Thompson, Ranking Member Rogers, distinguished members of the committee, thank you for the opportunity to appear before you today. I appreciate your leadership on the issue of online radicalization and welcome the opportunity to discuss Google's work in these areas. My name is Derek Slater, and I am the global director of information policy at Google. In that role, I lead a team that develops public policy frameworks for online content. At Google, we believe that the internet has been a force for creativity, learning, and access to information, supporting the free flow of ideas to make the world's information universally accessible and useful. Yet there have always been legitimate limits to free expression, and this is true both online and off, especially when it comes to terrorism, hate speech, and misinformation. We take these issues seriously and want to be part of the solution. In my testimony today, I will focus on two areas where we are making progress to protect our users: first, the enforcement of our policies around terrorism and hate speech, and second, combating misinformation more broadly. On YouTube, we have rigorous policies and programs against those who spread hate or incite violence. We have invested heavily in machines and people. First, YouTube's enforcement system starts at the point at which a user uploads a video.
If it is similar to videos that already violate our policies, it is sent to humans for review. If they determine that it violates our policies, they remove it, and the system makes a digital fingerprint so it cannot be uploaded again. In the