The committee will come to order. Before we begin, I want to remind all members we are in open session; we will discuss unclassified matters only. Please have a seat. Our members may be wandering in a bit late. We were here until 1:00 in the morning, but those in Armed Services were here until about 5:00 or 6:00 in the morning, so there are a few groggy members here on the committee.

In the heat of the 2016 election, as the Russian hacking and dumping operation became apparent, my predominant concern was that the Russians would begin dumping forged documents along with the real ones that they stole. It would have been all too easy for Russia or another malicious actor to seed forged documents among the authentic ones in a way that would make it impossible to identify or rebut the fraudulent material. Even if the victim could expose the forgery for what it was, the damage would be done. Three years later, we are on the cusp of a technological revolution that could enable even more sinister forms of deception and disinformation by malign actors, foreign or domestic. Advances in AI and machine learning have led to new types of media, so-called deepfakes, that enable malicious actors to foment chaos, division, or crisis, and that have the capacity to disrupt entire campaigns, including that for the presidency. Rapid progress in artificial intelligence algorithms has made it possible to manipulate media, video imagery, audio, and text, with incredible, nearly imperceptible results. With sufficient training data, these powerful algorithms can portray a real person doing something they never did or saying words they never uttered. These tools are readily available and accessible to experts and novices alike, meaning that attribution of a deepfake to a specific author, whether a hostile intelligence service or a single internet troll, will be a constant challenge. What's more, once someone views a deepfake or a fake video, the damage is largely done. Even if later convinced that what they have seen is a forgery, that person may never completely lose the lingering negative impression the video has left with them. It is also the case that not only may fake videos be passed off as real, but real information can be passed off as fake. This is the so-called liar's dividend, in which people with a propensity to deceive are given the benefit of an environment in which it is increasingly difficult for the public to determine what is true.

To give our members and the public a sense of the quality of deepfakes today, I want to share a few short examples, and even these are not the state of the art. The first comes from Bloomberg Businessweek. It demonstrates an AI-powered cloned voice of one of their journalists. Let's watch. "Now, to really put my computer voice to the test, I'm going to call my dear, sweet mother and see if she recognizes me. [ringing] Hey, Mom. What are you guys up to today?" "Well, we didn't have any electricity early this morning and we're just hanging around the house." "I'm just finishing up work and waiting for the boys to get home. Okay. I think I'm coming down with a virus." "Oh, well, you feel better." "Hey, you were talking to the computer." "Like I was talking to you." "It's amazing." All right. It's bad enough that that was a fake, but he's deceiving his mother and telling her that he's got a virus. That seems downright cruel. The second comes from Quartz and demonstrates a puppet-master type of deepfake video.
As you can see, these people are able to co-opt the head movements of their targets. If married with convincing audio, you can turn a world leader into a ventriloquist's dummy. Next, a brief CNN clip highlights research from a professor, an acclaimed expert on deepfakes, featuring an example of a face-swap video in which Senator Elizabeth Warren's face is seamlessly transposed onto the body of Kate McKinnon. "I haven't been this excited since I found out my package from L.L. Bean had shipped. I'm ready to fight." Now, the only problem with that video is that Kate McKinnon looks a lot like Elizabeth Warren. But in fact both were Kate McKinnon; the one on the left had Elizabeth Warren's face swapped onto her. It shows you how convincing that kind of technology can be. These algorithms can also learn from pictures of real faces to make completely artificial portraits of persons who do not exist at all. Can anyone here pick out which face is real and which are fake? And of course, as you may have all guessed, all four are fake. All four of those faces are synthetically created; none of those people are real.

Think ahead to 2020 and beyond. One does not need great imagination to envision even more nightmarish scenarios that would leave the government, the media, and the public struggling to discern what is real and what is fake: a state-backed actor creates a deepfake video of a political candidate accepting a bribe with the goal of influencing an election; or an individual hacker claims to have stolen audio of a private conversation between two world leaders when, in fact, no such conversation took place; or a troll farm uses text-generating algorithms to write false or sensational news stories, flooding social media platforms and overwhelming our ability to verify, and users' ability to trust, what they're seeing or reading. What enables deepfakes and other modes of disinformation to become pernicious is the ubiquity of social media and the velocity at which false information can spread. We had an example of that when a doctored Nancy Pelosi video went viral in the span of a few hours. It was a crude manual manipulation that some have called a cheap fake. Nonetheless, the video's virality on social media demonstrates the scale of the challenge we face and the responsibilities that social media companies must confront. Now is the time for social media companies to put in place policies to protect users from this kind of misinformation, not in 2021, after viral deepfakes have polluted the 2020 elections. By then it will be too late. Already the companies have taken different approaches, with YouTube deleting the altered video of Speaker Pelosi, while Facebook labeled it as false and throttled back the speed at which it spread once it was deemed fake by independent fact-checkers. And so, in keeping with the series of open hearings that have examined different strategic challenges to our security and democratic institutions, the committee is devoting this hearing to deepfakes and synthetic media. We need to understand these threats and the internet platforms that give them reach before we consider appropriate steps to mitigate the potential harms. We have a distinguished panel of experts to help us understand the potential threat of deepfakes. Before turning to them, I'd like to recognize Ranking Member Nunes for any opening statement he'd like to give.

Thank you, Mr. Chairman. I join you in your concern about deepfakes, and I want to add to that fake news, fake dossiers, and everything else that we have in politics.
I do think that, in all seriousness, though, this is real. If you get online, you can see pictures of yourself, Mr. Chairman, on there. They're quite entertaining, some of them; I decided maybe they're entertaining only for you, so I decided not to play them on screen today. But in all seriousness, I appreciate the panelists being here and look forward to your testimony. I yield back.

I thank the Ranking Member. Without objection, the opening statements will be made part of the record. I'd like to welcome today's panel. First, Jack Clark, who is the policy director of OpenAI, a research and technology company based in San Francisco, and a member of the Center for a New American Security's task force on artificial intelligence and national security. Next, David Doermann, a professor and the director of the Artificial Intelligence Institute at the University at Buffalo. Until last year, he was the program manager of DARPA's media forensics program. Danielle Citron is a professor of law at the University of Maryland Francis King Carey School of Law. She has coauthored several articles about the impact of deepfakes on national security and democracy. Finally, Mr. Clint Watts, a distinguished research fellow at the Foreign Policy Research Institute and a senior fellow at the GMF Alliance for Securing Democracy, whose recent scholarship addresses social media influence operations. Welcome to all of you. And why don't we start with you, Mr. Clark.

Chairman Schiff, Ranking Member Nunes, and committee members, thank you for the invitation to testify about the national security threats posed by the intersection of AI, fake content, and deepfakes. So what are we talking about when we discuss this subject? Fundamentally, we're talking about digital technologies that make it easier for people to create synthetic media, and that can be video, images, audio, or text. People have been manipulating media for a very long time, as you well know, but things have changed recently. I think there are two fundamental reasons why we're here. One is the continued advancement of computing capabilities, that is, the physical hardware we use to run software on, which has gotten significantly cheaper and more powerful. At the same time, software has become increasingly accessible and capable, and some of that software is starting to incorporate AI, which makes it dramatically easier for us to manipulate media and allows for a step change in functionality for things like audio or video editing, which was previously very difficult. Now, the forces driving cheaper computing and easier-to-use software are fundamental to the economy and to many of the innovations we've had in the last few years. So when we think about AI, one of the confounding factors here is that the same AI technologies used in the production of synthetic media or deepfakes are likely to be used in valuable scientific research. They're used by scientists to allow people with hearing issues to understand what other people are saying to them, or they're used in molecular assays and other things that could revolutionize medicine. At the same time, these techniques can be used for purposes that justifiably cause unease: synthesizing the sound of someone else's voice, impersonating them on video, or writing text in the style they use online. We've seen techniques combining these things, allowing someone to create a virtual person who can say things they haven't said and appear to do things that they haven't necessarily done.
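[The generative systems Mr. Clark describes are typically built on adversarial training, in which a generator network learns to fool a detector network. As a rough, minimal sketch of that idea, the tiny network sizes, dimensions, and training loop below are illustrative assumptions, not any witness's actual system:]

```python
import torch
import torch.nn as nn

# Generator: maps random noise to a flattened 64x64 grayscale image.
G = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),
)
# Discriminator: scores whether an image looks real (1) or fake (0).
D = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch: torch.Tensor) -> None:
    """One adversarial round: D learns to separate real from fake,
    then G learns to fool D. real_batch has shape (batch, 64*64)."""
    b = real_batch.size(0)
    fake = G(torch.randn(b, 100))
    # Update the discriminator on real and (detached) fake samples.
    opt_d.zero_grad()
    d_loss = bce(D(real_batch), torch.ones(b, 1)) + \
             bce(D(fake.detach()), torch.zeros(b, 1))
    d_loss.backward()
    opt_d.step()
    # Update the generator: it wants D to label its output "real".
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(b, 1))
    g_loss.backward()
    opt_g.step()
```

[Trained at scale with convolutional networks and large face datasets, this same two-player loop is what yields the fully synthetic portraits shown earlier in the hearing.]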
I'm sure that members of the committee are familiar, from their run-ins with the media, with how awkward it can be to have words put in your mouth that you didn't say. Deepfakes accelerate that. How might we approach this challenge? I actually think there are several interventions we can make that will improve the state of things. One is institutional intervention. It may be possible for large-scale technology platforms to try to develop and share tools for the detection of malicious synthetic media at both the individual account level and the platform level. We could imagine these companies working together privately, as they do today with cybersecurity, where they exchange threat intelligence with each other and with other actors to develop a shared understanding of what this looks like. We can also increase funding. As mentioned, Dr. David Doermann previously led a program in this area. We have existing initiatives that are looking at the detection of these technologies, and I think it would be judicious to consider expanding that funding further, so that we can develop better insights here. I think we can also measure this. What I mean by measurement is that it's great that we're here now, ahead of 2020, but these technologies have been in open development for several years. It's possible for us to read research papers, read code, and talk to people, and we could have created quantitative metrics for the advance of this technology over several years. I strongly believe that governments should be in the business of measuring and assessing these threats by looking directly at the scientific literature and developing a base of knowledge from which to work out next steps. Being forewarned is being forearmed here, and we can do that. I think we also need to do work at the level of norms. At OpenAI, we've been thinking about different ways to release or talk about the technology that we develop. It's challenging, because science runs on openness, and we need to preserve that so that science can move forward, but we need to consider different ways of releasing technology, or of talking to people about the technology we're creating ahead of releasing it. Finally, I think we need comprehensive AI education. None of this works if people don't know what they don't know. We need to give people the tools to let them understand that this technology has arrived, and that, though we may make a variety of interventions to deal with the situation, they need to know that it exists. So, as I hope this testimony has made clear, I don't think AI is the cause of this. I think AI is an accelerant to an issue that has been with us for some time, and we do need to take steps to deal with this problem, because the pace of this is challenging. Thank you very much.

Thank you. Mr. Doermann.

Thank you. Chairman Schiff, Ranking Member Nunes, distinguished members of the committee, thank you for the opportunity to be here this morning to discuss the challenges of countering media manipulation at scale. For more than five centuries, authors have used variations of the phrase "seeing is believing." But in just the past half decade, we've come to realize that that's no longer always true. In late 2013, I was given the opportunity to join DARPA as a program manager, able to address a variety of challenges facing our military and our intelligence communities. Although I am no longer a representative of DARPA, I did start the media forensics program, MediFor.
The general problem MediFor addresses is our limited ability to analyze, detect, and address manipulated media that, at the time, was being used with increased frequency by our adversaries. It was clear that our manual processes, despite being carried out by exceptionally competent analysts and personnel in the government, could not at the time deal with the problem at the scale at which manipulated content was being created and proliferated. In typical fashion, the government got ahead of this problem, knowing it was a marathon, not a sprint, and the program was designed to address both current and evolving manipulation capabilities, not with a single point solution but with a comprehensive approach. What was unexpected, however, was the speed at which this technology would evolve. In just the past five years, we've gone from a new technology that could produce novel results, but nowhere near what could be done manually with basic desktop editing software, to open-source software such as deepfakes that can take the manual effort completely out of the equation. Now, there's nothing fundamentally wrong or evil about the underlying technology that gave rise to the concerns we're testifying about today. Like basic image and video desktop editors, deepfakes is only a tool, and there are a lot more positive applications of generative networks than negative ones. As of today, there are point solutions that can identify deepfakes reliably, but that's only because the focus of those developing the technology has been on visual deception, not on covering up trace evidence. If history is any indicator, it's only a matter of time before the current detection capabilities are rendered less effective, in part because some of the same mechanisms that are used to create this content can also be used to cover it up. I want to make it clear, however, that combatting synthetic and manipulated media at scale is not just a technical challenge; it's a social one as well, as I'm sure the other witnesses will testify this morning. There's no easy solution, and it's likely to get much worse before it gets better. Yet we have to continue to do what we can. We need to get the tools and the processes into the hands of individuals, rather than relying completely on the government or on social media platforms to police content. If an individual can perform a sniff test, and the media smells of misuse, they should have ways to verify it, disprove it, or easily report it. The same tools should be available to the press, to social media sites, to anyone who shares and uses this content, because the truth of the matter is that the people who share this stuff are part of the problem, even though they don't know it. We need to continue to work toward being able to apply automated detection and filtering at scale. It's not sufficient to analyze questioned content only after the fact; we need to be able to apply detection at the front end of the distribution pipeline. Even if we don't take down or prevent manipulated media from appearing, we should provide appropriate warning labels that suggest the content is not real, not authentic, or not what it's purported to be, independent of whether those decisions are made by humans, machines, or a combination. And we need to continue to put pressure on social media platforms to realize that the way the platforms are being misused is unacceptable. They must do all they can to address today's issues and not allow things to get worse.
Let there be no question that this is a race: the better the manipulators get, the better the detectors need to be, and there are orders of magnitude more of the former. It's also a race that may never end, and may never be won, but it is one where we must close the gap and continue to make it less attractive, financially, socially, and politically, to propagate false information. Like spam and malware, it may always be a problem, but it may be the case that we can level the playing field. When the MediFor program was conceived at DARPA, one thing that kept me up at night was adversaries creating entire events. Such an event might include images of a scene from different angles, video content from different devices, and text, all purporting to show that an event occurred, and it could lead to social unrest or retaliation before it is countered. If the past five years are any indication, that someday is not very far in the future. Thank you.

Thank you. Professor Citron.

Thank you, Chairman Schiff, Ranking Member Nunes, and the committee, for having me here today to talk about the phenomenon of deepfakes, the risks they pose, and what law can and should do about it. I'm a professor of law at the University of Maryland School of Law. There are a few phenomena that come together to make deepfakes particularly troubling when they're provocative and destructive. The first is that, as human beings, video and audio are visceral to us; we tend to believe what our eyes and ears are telling us. We also tend to believe, and tend to share, information that confirms our biases, and that's particularly true when the information is novel and negative. So the more salacious it is, the more willing we are to pass it on. And we're seeing deepfakes, or we'll see them, in social networks that are ad-driven, whose entire enterprise is to have us click and share. When we bring all of these things together, the provocative, salacious deepfake will be spread virally. So let me describe the harms; there are so many that my coauthor and I have written about, but I'm going to focus on the more concrete ones and on what law can and should do about them. There are concrete harms in the here and now, especially for individuals. Rana is an investigative journalist in India who writes about government corruption and the persecution of religious minorities. She's long been used to getting death threats and rape threats; it's par for the course for her. She wrote a provocative piece in April 2018, and what followed was that posters circulated deepfake sex videos of Rana over the internet; her face was morphed into pornography. The first day, it goes viral; it's on every social media site and WhatsApp; as she explained to me, it was on millions of phones in India. And the next day, paired with the deepfake sex video of Rana, were rape threats, her home address, and the suggestion that she was available for sex. Now, the fallout was significant. She had to basically go offline; she couldn't work. Her sense of safety and security was shaken. It upended her life, and she had to withdraw from online platforms for several months. So the economic, social, and psychological harm is profound. And as in my work on cyberstalking, it's true that this phenomenon is going to be increasingly felt by women, minorities, and people from marginalized communities. Of course, it's not just individuals. We can imagine a deepfake timed just right the night before an IPO, with the CEO saying something that he never said or did, basically admitting that the company was insolvent.
And so the deepfake the night before the IPO can upend the IPO; the market will respond far faster than we can debunk it. We can imagine all sorts of scenarios. Mr. Watts and I have talked, and I'm going to let him take some of the national security concerns, like elections and upending public safety. But the next question is, what do we do about it? I feel like our panel is going to be in heated agreement that there's no silver bullet, and that we need a combination of law, markets, and, really, societal resilience to get through this. But law has a modest role to play. There are claims that targeted individuals can bring: they can sue for defamation, intentional infliction of emotional distress, and privacy torts. The hardest thing is that it's incredibly expensive to sue. Criminal law offers too few levers for us to push. At the state level, there are a handful of criminal impersonation statutes, and at the federal level, there's an impersonation statute. Professor Mary Anne Franks and I are writing a model statute, one narrowly tailored to address harmful, false impersonations, that would capture some of the harm here. But, of course, there are practicalities to legal solutions. You have to be able to find the defendant in order to prosecute them, and you've got to have jurisdiction over them. And the platforms, the intermediaries, our digital gatekeepers, are immune from liability, so we can't use the legal incentive of liability to get them on the case. I see my time is running out. I look forward to your questions, and thank you.

Thank you very much. Mr. Watts.

Chairman Schiff, Ranking Member Nunes, members of the committee, thanks for having me here today. All advanced nations recognize the power of artificial intelligence to empower militaries, but those countries with the most advanced AI capabilities and unlimited access to large data troves will gain enormous advantages in information warfare. AI provides purveyors of disinformation the ability to identify psychological vulnerabilities and to create modified content, digital forgeries, advancing false narratives against Americans and American interests. Historically, each advance in media, from text to speech to video to virtual reality, more deeply engages information consumers, enriching and shaping a user's reality. The falsification of these media allows manipulators to dupe audiences, leading to widespread mistrust and, at times, physical mobilizations. False video and audio, once consumed and believed, can be extremely difficult to refute and counter. Moving forward, I'd estimate that Russia, an enduring purveyor of disinformation, is pursuing and will continue to pursue the acquisition of synthetic media capabilities and will employ the outputs against adversaries around the world. I suspect they'll be joined and outpaced by China, whose capabilities rival those of the U.S. and are powered by enormous data troves, including vast amounts of information stolen from the U.S., and which has already shown a propensity to employ synthetic media in broadcast journalism. They'll likely use it as part of disinformation campaigns to, one, discredit foreign detractors; two, incite fear inside Western-style democracies; and three, distort the reality of American audiences and the audiences of America's allies. Deepfake proliferation presents two clear dangers. Deliberate development of false synthetic media will target democratic processes, with the enduring goal of subverting democracy and demoralizing the American constituency. And circulation may incite mobilizations under false pretenses, initiating public safety crises and sparking outbreaks of violence.
The recent spate of false conspiracies spread via WhatsApp in India offers a relevant example of how bogus messages and media can fuel violence. This spread will only increase, and its frequency and intensity will continue to grow. U.S. diplomats and military personnel deployed overseas will be prime targets for deepfake disinformation planted by adversaries. Countries where information consumption has jumped from analog, in-person conversations to social media lacking any form of verification filter will likely be threatened by bogus synthetic media campaigns. Had rumors like those about protests at the U.S. embassy in Cairo or a coup at an air base been accompanied by fake audio or video content, they could have been far more damaging. I'd also point to a story out just hours ago from the Associated Press, which shows the use of a synthetic picture for what appears to be espionage purposes on LinkedIn, essentially a honey-potting attack. Discussions of deepfake employment focus on foreign adversaries, but the greatest threat may not come from abroad, but from home, and from the private sector. Thus far, I've focused on authoritarian nation-states, which will use vast resources to develop deepfakes as needed in pursuit of their goals. But recent examples of disinformation and misinformation suggest it will be oligarchs, political action groups, public relations firms, and others with resources that will seek out these synthetic media capabilities and amplify deepfakes. The net effect will be the same: degradation of democratic institutions and sporadic violence by individuals and groups mobilized under false pretenses. I have several recommendations, but I'll only hit a couple here in the oral remarks. First, Congress should implement legislation prohibiting U.S. officials, elected representatives, and agencies from creating and distributing false and manipulated content. The U.S. government must be the purveyor of facts and truth to constituents, assuring the effective administration of democracy via productive policy debate from a shared basis of reality. Second, policymakers should work jointly with social media companies to develop standards for content and accountability. Third, the U.S. government should partner with the private sector to implement digital verification, designating a date, time, and physical origination of content. Fourth, social media companies should enhance labeling across platforms and work as an industry to determine when synthetic media should be appropriately marked. Not all synthetic media is nefarious in nature, but information consumers should be able to determine the source of the information and whether it's an authentic depiction of people and events. Fifth, and most pressing from a national security perspective, the U.S. government should maintain intelligence on the capabilities of adversaries to conduct such information operations, and the Departments of Defense and State should immediately develop response plans for deepfake smear campaigns and mobilizations overseas, in an attempt to mitigate harm. Last, I echo my fellow panelists: public awareness of deepfakes and their signatures will assist in tamping down attempts to subvert U.S. democracy and incite violence. I would like to see us help the public make better considerations about the content they're consuming and how to judge that content. Thank you.

Thank you, all. I'll proceed with questions, and I recognize myself for five minutes. Two questions, one for Professor Citron and one for Mr. Watts.
Professor, how broad is the immunity that the social platforms enjoy, and is it time to do away with that immunity so that the platforms are required to maintain a certain standard of care? It seems to me not very practical to think about bringing to justice people who are halfway around the world, given the difficulties of attribution, or the fact that, given the cheap cost of this technology now, so many people can employ it. Is it time to take that step? And was it appropriate for one social media company to leave up the Pelosi video, even labeling it in a certain way? Mr. Watts, what's a proportionate response should the Russians start to dump deepfakes, releasing a deepfake of Joe Biden to try to diminish his candidacy? What should the U.S. response be? Should it be a cyber response, not a tit for tat in the sense of a deepfake of Putin, but rather some cyber reaction, or are sanctions a better response? How do we deter this kind of foreign meddling, realizing that's only one part of the problem? Professor?

So I'm going to start with how broad the immunity is, and then with why it is time for us to amend Section 230 of the Communications Decency Act, a law passed in 1996. You can imagine an internet without porn; that was the objective of the Decency Act. Most of the law was struck down. What remains is a provision that provides for Good Samaritan blocking and filtering of offensive content, and it has been interpreted really broadly, to say that if you under-filter content, if you don't engage in any self-monitoring at all, even if you encourage abuse, you're immune from liability for user-generated content. That means that revenge porn site operators can say they're immune while others post exes' photos, and they're right: they're immune from liability, because they are not generating or co-creating the content. So the question is, in a world, and here we are 25 years later, where the internet has dominant players and is no longer in its infancy, is it time to reassess? I think the answer is yes. We should condition the immunity; it shouldn't be a free pass, and it should be conditioned on reasonable content moderation practices. Ben Wittes and I have written a sample statute, which you could adopt if you so chose, that would condition the immunity on reasonable content moderation practices. Then the question, of course, is whether, in any given case, platforms are making the right choices. Under an approach that would have us look to the reasonableness of content practices, we would look at the platform's approach to content moderation in total, generally speaking, not at any given decision about content. Let's take the Pelosi video. I think the answer is that it should have been taken down. Platforms should have a default rule: if we're going to have impersonations or manipulations that do not reflect what we've done or said, platforms should, once they figure that out, take them down. The technology is such that we can't detect it yet, right? We can't automatically filter and block. But once we've figured it out, and we're already in a place where the public has deep distrust of the institutions at the heart of our democracy, there is an audience primed to believe things like manipulated videos of lawmakers. I would hate to see the deepfake in which a prominent lawmaker is purportedly seen taking a bribe that they never took, right?
I hope that platforms, if we can't require them to bear legal liability, come to see themselves as responsible facilitators of discourse online, and recognize their importance to democracy.

Thank you. Mr. Watts.

I'd like to start off with a basic principle of information warfare. R.H. Knapp, a professor who studied wartime rumors, wrote that once rumors are current, they have a way of carrying the public with them: the more a rumor is told, the greater its plausibility. He wrote that in 1944, and that's still the essential thing: whoever is there first, and the most often, wins. That's the danger of social media propaganda amplified with AI. In terms of how we deal with this, there are several parts. One is that you have to have a plan, a multipart plan. The other part is that we have to respond quickly, and this is not the tradition of our government. For example, in Iraq, when fake al Qaeda propaganda was put out to inspire people to show up at places, we had rapid response teams that would show up with video and audio, shoot footage from there, and show that this was not true, that it had been disproven. That's a great example: if this sort of thing starts to get leaked out, what is our plan right now? The U.S. government, for any official government agency, should immediately offer a counter based on fact about what's actually going on. This happened in the summer of 2016 at an air base: Russian state-sponsored propaganda was put out about a potential coup, claiming maybe the base was surrounded. We should be able to turn on the cameras at that air base immediately and show that this is not happening. The faster we do that, the less chance they see it first, and the less chance they see it often and believe it. The next part comes down to the political parties, Republican and Democrat. If they have smears coming through, they should be able to rapidly refute them and put out a basis of truth: this candidate, or these candidates, were not there. Then there's the partnership with the social media companies. I would not go as far as saying every piece of synthetic video that gets uploaded to a social media platform needs to come down. One of the articles about Vice President Biden comes from The Onion; it was that he was waxing his Camaro in the driveway of the White House. It was a comedy bit, and it had manipulated photos and content in it. If we went to that extreme, we would have a country where anything ever changed or modified for any reason would have to be policed, and we'd be asking a private sector company to police that. I would instead offer a different system, which is triage: how do social media companies accurately label content as authentic or not? The danger with social media is that the source is not necessarily there; we saw that in 2016, and we see it today. So they should be able to refer back quickly to whatever the base source is. How do you do that? They should triage. The three areas in which I would suggest they immediately triage are these. If they see something spiking in terms of virality, it should be put in a queue for human review, linked to fact-checkers, down-rated, and kept out of news feeds until the mainstream understands that it is manipulated content; that's the jump we're most concerned about. The others are outbreaks of violence and public safety, and then anything related to elected officials or public institutions, which should immediately be flagged, pulled down, checked, and then given context.
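[To make the triage heuristic Mr. Watts outlines concrete, here is a minimal sketch in Python. The thresholds, topic labels, detector score, and routing actions are hypothetical illustrations of the idea, not anything the witnesses specified:]

```python
import queue

# Hypothetical thresholds; a real platform would tune these empirically.
SHARE_SPIKE_PER_MIN = 400
SENSITIVE_TOPICS = {"election", "public-safety", "public-official"}

review_queue = queue.PriorityQueue()  # lower number = reviewed sooner

def triage(post_id: str, topic: str, shares_per_min: int,
           detector_score: float) -> str:
    """Route a post per the three triage areas: hold viral or sensitive
    items for human review and down-rank them pending fact-checking."""
    spiking = shares_per_min >= SHARE_SPIKE_PER_MIN
    sensitive = topic in SENSITIVE_TOPICS
    likely_synthetic = detector_score >= 0.8  # upstream deepfake detector

    if sensitive and likely_synthetic:
        review_queue.put((0, post_id))   # officials, safety: top priority
        return "suppress-pending-review"
    if spiking and likely_synthetic:
        review_queue.put((1, post_id))   # virality spike: queue + down-rank
        return "down-rank-and-label"
    if likely_synthetic:
        return "label-as-synthetic"      # context, not removal
    return "allow"
```

[The design choice mirrors the testimony: nothing is silently deleted; content is slowed, labeled, and given context, with removal reserved for the flagged sensitive categories after human review.]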
I see it as the public needing to be given context, so that we're not really suppressing all freedom of speech or all development of content. There are legitimate reasons to use synthetic media for entertainment, comedy, and visualizations.

I'm going to go to Mr. Nunes; at some point I'd love to follow up and hear what the response to a foreign adversary should be.

If you give me 20 seconds, I can tell you. Refuting, number one. Number two, offensive cyber, which is in place, and I like what the NSA did in 2018. Number three, more aggressive responses in terms of sanctions, sanctions around where content comes from.

Mr. Nunes.

Thank you, Mr. Chairman. How do you put in filters at the tech oligarch companies? There are only a few of them; you know who they are. How do you do that so the filter isn't developed by the partisan left wing, like it is now, where most of the time it's conservatives who get banned and not Democrats? The Pelosi video was taken down; that's fine, I don't have a problem with that. But I can tell you there are videos up of Republicans that go on and on and on. It's all in who is building the filter, right?

Are you asking... you were talking about filters. What I was suggesting is that it would be impossible to ex ante filter a deepfake. We can't detect it, as far as the state of the art goes now, nor, I think, in the arms race, will we be able to really filter it. What I would say is that there's something like a video that is clearly a doctored impersonation, not satire; there are wonderful uses for deepfakes that people can create about themselves. But rather...

I think I mostly agree with you, other than I just don't know how you... the challenge is, how do you implement it?

These are hard problems of content moderation. I've worked with companies for ten years, particularly on the issues of nonconsensual pornography, threats, and stalking, and it's such a contextual question. You can't proactively filter. But when it's reported, when we see videos going viral, there's a way in which companies should react, and react responsibly. And it absolutely should be bipartisan; there shouldn't be ideology driving the question, but rather: is this a misrepresentation in a defamatory way? If it's a falsehood that is harmful to reputation, an impersonation, then we should take it down. That is the default I'm imagining for social media companies. But it would be ex post.

But it's a challenge. You talked about the '96 law that needs to be changed, and I think it has to be one way or another: either they are a truly open public square, in which case it's very difficult to filter, or developing a filter puts their bias into the filter.

Actually, that 1996 bill did not imagine an open public square where private companies couldn't filter; quite the opposite. It was designed to encourage self-monitoring and to provide an immunity in exchange for Good Samaritan filtering and blocking of offensive content. So the entire premise of Section 230 was to encourage, and provide an immunity for, filtering and blocking, because Congress knew it would be too hard for Congress and the FTC to get ahead of this. That was in '96; imagine now the scale that we face. I think we should preserve the immunity but condition it on reasonable content moderation practices, because there are some sites that literally traffic in abuse, that encourage illegality, and they should not enjoy immunity from liability.

Right. And we're back to where we started.
This is the challenge, right? So how do we draft legislation that would enable that?

Happy to tell you how to do it. Section 230(c)(1) now says that no online service shall be treated as the speaker or publisher of someone else's content. What we can do is change that to say that no online service that engages in reasonable content moderation practices shall be treated as the speaker or publisher of somebody else's content. So we can change Section 230 with some imagination.

That depends on what the definition of reasonable is.

And that's what law does really well. Every time I hear a lawyer say we can't figure out what's reasonable, I say: it's called tort law. Negligence is built on the foundation of reasonableness. We often start with no liability, because we really want to protect businesses, and we should; then we experiment and realize there's a lot of harm; then we often overreact and impose strict liability; and then we get somewhere in the middle. That's where negligence lives: reasonable practices. Content moderation has been going on for the past ten years, and I've been advising Twitter and Facebook all of that time. There are meaningful, reasonable practices that have emerged in the last ten years. We have a guide; it's not as if this is a new issue in 2019. So we can come up with reasonable practices.

Thank you. I yield back.

Mr. Himes.

Thank you, Mr. Chairman. Dr. Doermann, I want to get a quick idea of the status quo, but before I do that, I want to highlight something I think is of immediate and intense interest to the Intelligence Community. You said if something is happening on a base somewhere, we can just turn on the cameras. I'm not sure that's right, because if you can create a deepfake, there's no reason why you can't create a deepfake from that camera to the screen, right? The point I'm trying to make is that our Intelligence Community obviously relies on things like full-motion video and photographs and that sort of thing. One of the threats here is not just that we might be made to look silly on YouTube, but that our Intelligence Community, using its own assets, might not be able to tell fact from fiction. Is that correct?

One of my other recommendations was digital verification; these folks will know better, because they're more technically sound than I am on this. Digital verification of date, time, and origin: part of that would be that if you, as the U.S. government, turn on your cameras, the footage can be verified by news agencies and reporters. We could have it on C-SPAN; we could use it in a lot of different ways. But we have to make sure we have the ability to verify that our content is real. I'll defer to the panel on the technical side, but some of this has already been developed, and I would want our content accompanied by that verification.

I understand there's no silver bullet here; this is going to be a cat-and-mouse game. Take a minute or two and tell us where we are in that cat-and-mouse game. Should we expect to have undetectable deepfakes out there in one year, two years, five years?

I think there is the risk of having undetectable content that gets manipulated and shared online. Right now, things like compression, if you have a very low-resolution version of a video, can destroy the attribution. The camera fingerprint showing where this content came from can be destroyed. A lot of the trace evidence can be destroyed with very simple types of manipulation on top of the deepfake process, or on top of any type of manipulation.
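[The digital verification Mr. Watts describes, binding footage to a signed record of date, time, and origin, can be illustrated with a minimal Python sketch using the cryptography library. The key handling, record format, and field names here are assumptions for illustration, not a described or deployed system:]

```python
from datetime import datetime, timezone
from hashlib import sha256
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The capture device (or agency) holds a private key; verifiers get the public key.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

def sign_capture(video_bytes: bytes, location: str) -> dict:
    """Bind a hash of the footage to a capture time and place, then sign it."""
    record = {
        "sha256": sha256(video_bytes).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "location": location,
    }
    payload = "|".join(record[k] for k in ("sha256", "captured_at", "location"))
    record["signature"] = device_key.sign(payload.encode()).hex()
    return record

def verify_capture(video_bytes: bytes, record: dict) -> bool:
    """A newsroom or fact-checker confirms the bytes match the signed record."""
    if sha256(video_bytes).hexdigest() != record["sha256"]:
        return False  # footage was altered after capture
    payload = "|".join(record[k] for k in ("sha256", "captured_at", "location"))
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload.encode())
        return True
    except InvalidSignature:
        return False
```

[Note the trade-off Dr. Doermann raises just above: a cryptographic hash breaks under any re-encoding, so this kind of provenance only survives if platforms preserve the original bytes alongside the signed record.]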
The challenge that we have, though, is that we do have point solutions for a lot of these components. It's a matter of bringing them together in a useful way and, as I said, getting them into the hands of everyone throughout the pipeline. Imagine if Facebook or YouTube or any of these other companies could have this up front, so that when the human reviewers, and Facebook just reported they hired 30,000 people to review content, have a questioned video or piece of audio to review, they could say, let me go run this algorithm on it. Do that up front so they have a report associated with a particular image or video, and then, if there are questions, put that warning up there. I think the public doesn't know enough about what's possible to demand that. The truth of the matter is that when this stuff gets shared, it gets created once, and then it gets shared across different platforms, by different people, with different media, but the signature for that particular piece of video or audio is there. So there are tools that the social media companies could use to link those together, make a decision, and then share it with everyone, the same way we do with malware and other cyber issues. We've gotten to the point where we're protecting our front door there, and we need to protect our front door from images and video as well.

Thank you, Doctor. Professor Citron, the theme of this hearing is how scary deepfakes are, but one of the scarier things I've heard this morning is your statement that the Pelosi video should have been taken down. I don't have a lot of time; sadly, there won't be a moment for you to answer, but I do want to have this conversation, because, as awful as I think we all thought that Pelosi video was, there's got to be a difference between the Russians putting that up, which is one thing, and Mad magazine doing it as satire. As you know better than me, we don't have a lot of protections as public figures with respect to defamation, and some of the language you've used today makes me worry about the First Amendment and satire. I simply wanted to put that on the record and hope we have an opportunity this morning to hear more about where that boundary lies and how we can protect a long tradition of freedom of expression.

Thank you all for being here. Boy, we've come a long way. I remember Chevy Chase playing Gerald Ford on Saturday Night Live, and he didn't even pretend to look like Gerald Ford. Then we saw Forrest Gump, which was a wonderful movie. But out of everything good there's obviously a chance for people to do something bad, and I think we see that. The way it sounds, with where we're headed, it's like we're all living in The Truman Show or something like that. I think about, in that vein, that out of something good, something bad can happen. I'm sure the Wright brothers, when they learned to fly, didn't think, and maybe we can fly this into a building someday and kill people, right? But that's what happens in this world, unfortunately. And as a result, after 9/11, for example, it takes a lot longer to get on a plane, and for good reason. I think where we need to be headed might be, and I want your opinions on it, that obviously we've got to slow this down before something just hits. I think you're talking about the triage idea. Maybe we label.
Maybe we have to tell people, before they see something, that this is satire, it's not real, and you have to verify it in some way. It's kind of pathetic, but at the same time that may be what we have to do: slow it down, triage it, say this is not verified, this is satire. Maybe, on a global scale, when it comes to punitive measures against people who are doing nefarious things, we have to have international extradition laws, because when something comes from some other country, maybe even a friendly country, that defames and hurts someone here, maybe we agree amongst those nations that we'll extradite those people, and they can be punished in your country for what they did to one of your citizens. I'd love your opinions on those: the triage, the labeling, and extradition. Whoever wants to take it first.

I think that's absolutely right. One of the reasons these types of manipulated images gain traction is that they can be shared around the world, across platforms, almost instantaneously. You can see something on one platform, and there's a button there to post it to another. A lie can go halfway around the world before the truth can get its shoes on, and that's true. Personally, I don't see any reason why not. Broadcast news does it with live broadcasts; they have a seven-second or 15-second delay. There's no reason why things have to be instantaneous. Our social media could instill these types of delays so they can get these things reviewed and decide whether to label them. We still need to put the pressure on for those types of things. And there's a seriousness scale, from satire all the way up through child pornography. We've done it for child pornography; we've done it for human trafficking; the platforms are serious about those things. This is another area that's a little bit more in the middle, but I think they can make the same effort in these areas, to do that type of triage and to say: what you're about to see is satire and has been modified.

Go ahead. I think one thing we're stressing is that we will continue to be surprised by technological progress in this domain. The lure of a lot of this stuff is that all of these people think they're the Wright brothers; they're all busily creating things, and figuring out the effects of what they build is difficult. I think we need to build infrastructure so that you have some third party measuring the progression of these technologies, so you can anticipate what comes next.

And the labeling, I think, is incredibly important. There are times in which that's the perfect solution rather than second best, where we should err on the side of inclusion and require that content be labeled as synthetic. It's true, though, that there are some instances where labeling is just not good enough, where content is defamatory and people will believe the lie. There's really no counter-speech to some falsehoods.

If we get a chance, I'd love to hear back from you on the notion of extradition laws and other punitive measures. Thank you. I yield back.

Ms. Sewell.

Thank you, Mr. Chairman. Dr. Doermann, you didn't really answer my colleague's question about how far away we are from actually being able to detect deepfakes. I know you were working on that. Where are we, whether commercially, in government, or among researchers, in being technologically able to detect deepfakes?

Deepfakes typically refers to a particular technology, and there is certain software out there for doing that; it's not a general concept.
The initial paper that was published that gave rise to this technology came after the start of the MediFor program, and we did adapt to start looking at those things. There are point solutions out there today such that deepfakes coming from those particular software packages can be detected.

Do we have the technology to actually be able to digitally verify the videos, the photographs, et cetera?

The problem is doing it at scale. If you give me a particular video, with high confidence I can tell you whether it is a fake video, and I can also come back and say, okay, here are the videos or images that went into it.

How long does that take? Is that a matter of an hour, 30 minutes?

With the right hardware, it can be done with a constant delay; yes, 15 or 20 minutes.

So, in advance of the 2020 elections, what can campaigns, political parties, and candidates do to prepare for the possibility of deepfake content? Mr. Watts?

One thing, even here on Capitol Hill and with the political parties, is to urge the social media industry to work together to create unified standards. Part of the problem with all of these incidents is that if you're a manipulator, domestic or international, and you're making deepfakes, you're going to go to whatever platform allows you to post anything from inauthentic accounts. They go to wherever the weak point is, and it spreads throughout the system, even if Facebook or Google or Twitter do a good job. So one thing is really pressuring the social media industry to work together; that goes for extremism, disinformation, and political smear campaigns. The other thing is having rapid responses to deal with this stuff. As much as defense is not the best approach, any sort of lag in response allows a conspiracy to grow. Mainstream media outlets can also work to help refute it, and other politicians or elected officials can help with that refutation.

What would you suggest?

I think candidates should have clear policies about deepfakes, a commitment not to use them and not to spread them, and they should have established, early on, relationships with social media companies, so that when a candidate says, I wasn't there, I wasn't doing that or saying that at that point in time, whoever it is at Facebook or Twitter has immediate rapid response teams.

How do we even begin to tackle this sort of liar's dividend?

I love that you're using my phrase. Thank you.

In which politicians can deny the truth by claiming the recording is a deepfake.

I love this; twice we've gotten some play for the liar's dividend, which we conceived in our California Law Review piece. What most worries us is that in an environment of deepfakes, we will have cultured people not to believe their eyes and ears. A person can take a genuine recording of mischief and say, oh, that's not me, that's a deepfake. Part of the answer is education. Part of the robust education we have to have with the public is telling them about the phenomenon of the liar's dividend. It's not that we shouldn't educate people; should we give up? Our response is absolutely not. It must be part of the robust learning curve to say: look, we know that wrongdoers are going to seize on the deepfake phenomenon to escape accountability, and we can't let them do that either. We have to get to the middle, from completely believing everything our eyes and ears tell us to being skeptical without being nihilistic. We don't want to get into that space where we have a non-functioning marketplace of ideas.

Thank you.
Mr. Stewart.

Thank you, Chairman, and to the witnesses, thank you for being here. It's been a helpful panel, although I have to say that I'm a little bit concerned with some of your suggestions. Although in an ideal world they would be helpful, in the real world we live in, I'm afraid some of them are nearly impossible to implement, and some of them have troubling aspects themselves. It's kind of like fact-checkers who aren't really fact-checkers, who insert their opinion. This is just a reality. Sitting on the Intel Committee, I'm often asked in conversations what I think is the greatest threat facing the world. A couple of years ago, I answered that without thinking and said: I think it's that no one knows what is real anymore. I think that is the greatest threat facing our nation. People don't accept basic truths or basic falsehoods anymore, and it's not just deepfakes. RT television is extraordinarily good at propaganda that many people think is perfectly legitimate and real. Fake news is a term that we all, unfortunately, have become very familiar with. Manipulation of intelligence products is extraordinarily troubling to me. We live in a world where black is white and white is black. I could show you evidence that white is black, and a lot of people would believe me that white is black. By the way, I think we can control governments, and I think we can control, to a certain extent, legitimate businesses; we can't control everyone. This is going to be so pervasive and so available that virtually anyone could create it. It's easy enough to control the U.S. government, but you can't control the more than six billion people on the earth. That is my concern: just the pure volume of it. It's like trying to monitor every bumblebee that's flying around America. Last thing: it goes both ways, and this is my concern as well. We could create the impression that a lie is real, but we could also say that something real is a lie. Take a politician caught in a bribe, and by the way, politicians do much worse things than that. A politician is caught in a bribe, and it could actually be true, and he would then say, no, no, it's just a deepfake, that's not real. So you lose credibility both ways, which brings me to my questions. The first is, with the potential for so much harm, should the algorithms that create deepfakes be open source? And if the answer is no, we've got to act right now; we can't wait two or three years, because they'll already be pervasive throughout the world. The second question, and this is almost rhetorical, is how do we prepare people to live in a world of deception? How do we prepare people to live in a world where they genuinely may not know what's real or not? Should the algorithms be open source, or should we control them?

We made a conscious decision to make the MediFor program open. You'll see, even a week and a half from now at a conference, there will be a workshop dealing with this. Even though there's potential for our adversaries to learn from these things, they're going to learn anyway. We need to get this type of stuff out there; we need to get it into the hands of users. There are companies out there that are starting to take up these types of tools. I absolutely think these types of things need to be open-sourced; it's the same deep learning technology being used to create this type of content.

You're saying it should be open source primarily because they'll get access to it anyway. Is that the essence of your response?
Yes, and people need to be able to use it. The more we can use it to educate the community, to educate people, and to give people the tools so they can make the choices for themselves, that's what we're looking for.

I'll accept that with some hesitation; I do think they'd get it anyway. What about suggestions on how we prepare people to live in a world that is so steeped in deception? I'm sure you've thought through that.

We have ten seconds to answer. When Justice Oliver Wendell Holmes came up with the marketplace of ideas, he was a cynic; he worried about humanity. But the broader endeavor at the foundation of our democracy is that we can have a set of accepted truths so that we can have real, meaningful policy conversations. We can't give up on the project.

I agree with you. But that foundation of accepted truths is very shaky at this moment. Thank you, Chairman.

Mr. Carson.

Thank you, Chairman Schiff. In an era of prevailing mistrust of journalists and the media, do deepfakes risk aggravating this kind of distrust?

Prior to working in AI, I was a professional journalist for seven or eight years, so I speak from some experience. Yes, I think this is a very, very severe and potentially under-covered threat, because when you write a story that people don't like, they try to attack you as the author of it, or they try to attack the integrity of the institution. So not only do we see the journalists themselves being attacked, but I think what is so corrosive is the notion that the media is going to sit on real evidence for fear that it's a fake. And we have certainly already seen stings against media organizations, and now they've got to be wary of stings. The corrosive effect, what we call trust decay, affects not only politicians and our view of civic and political institutions, but everything, and certainly journalism and the media.

If I could just add to that: over time, if information consumers do not know what to believe, if they can't tell fact from fiction, then they will either believe everything or they'll believe nothing. If they believe nothing, it leads to long-term apathy, and that is destructive for the United States. I think you could look to Russia as an example of what has happened internally to the Russian people, how the Russian government has used the fire hose of falsehoods approach: if you can't believe anything, you just give up and surrender. The consequences for democracy are long-term apathy, disappointment in officials, not wanting to show up and register for the draft or show up as a volunteer force. That is what I would look at over the next 10 to 15 years; I think that's the long-term corrosive effect. They are just much more patient than we are and willing to wait decades for it to come to fruition.

In addition to that, will technology solutions for authentication be available in sufficient amounts for journalists, media organizations, or fact-checkers to keep up with even validating a piece of media before reporting on it? Like, say, Chairman Schiff crowd-surfing at South by Southwest or premiering his own Netflix special; how can you verify these things as journalists?

It's important to have these tools out there for them. We have point solutions; we don't have a general solution, and we don't have a gatekeeper that can be completely automated. This is a cat-and-mouse game: as the manipulators get better at deceiving visually, they're going to keep getting better, and they're going to move on to covering up their trace evidence.
I think the tools can be put in their hands, and they should be. We've had situations where there are embedded reporters where somebody comes up to them with something on a cell phone and shows an atrocity. They need those tools. They have to know whether to report on that or not. It's a major concern. It could even evolve into some kind of new scam where you have someone with a piece of information selling it to TMZ, or even a so-called credible media outlet, scamming them for $50,000, and the piece is fake. It's possible, sure. If I could add one dimension to this, though: it is how lucky we are to have a very engaged public in terms of actually rebutting things that are false and challenging them. It's not just journalists that are doing it. It's also the public that is challenging back and forth. One of the dangers we don't think of is that information environments where authoritarians control and eliminate all rebuttals could have a very significant backlash for us, which is why I would like to see widespread proliferation of authentication tools, not just here in the United States. Fact checking is expensive and time intensive, and the number of news organizations on the planet who are doing well in economic terms is dwindling over time. So I think if we were to go down this path, you need to find a way to fund that, because of their own volition, because of the economics, they're not going to naturally adopt this stuff, other than a few small trusted institutions. It becomes incredibly difficult to remain a news source if you're having to pay to fact check constantly. Yes. Thank you. I yield back.
Mr. Crawford. Thank you, Mr. Chairman. We've come a long way since Milli Vanilli. Two British artists teamed with an Israeli company, Canny AI. They created a video of Mark Zuckerberg saying he could control the future. They posted that on Facebook specifically to challenge Facebook, and Zuckerberg responded by saying he's not going to take it down. I just wonder if you could comment on that. What do you think this is about, and do you think it's a wise decision for Zuckerberg not to take it down, given what we've talked about? I think that's a perfect example where, given the context, that's satire and parody. That is really healthy for conversation. And all of these questions are hard. Of course our default presumption as we approach speech online is from a First Amendment perspective, which is that we want to keep government out of calling balls and strikes when it comes to ideas in the marketplace. But private companies can make those kinds of choices, and they have an incredible amount of power. They also operate free of liability, and I think they made the right choice to keep it up. It became a conversation about, essentially, the cheap fake of Nancy Pelosi. It seemed to be a conversation about the choices they made and what that means for society. So it was incredibly productive, I think. It seems correct in this instance, but all of these companies are kind of groping in the dark when it comes to what policies they need overall, because it's a really, really hard problem. I think what would be helpful is to have a way for them to share policies across multiple companies and to seek standardization, because these judgment calls are very qualitative in nature. This does point out the idea of context. Part of that video is that it spread for one purpose only, which was to challenge this rule.
But no one really believes Mark Zuckerberg can control the future, because he surely wouldn't want to show up here to testify or be in the quagmire he's in. I'm trying to make a very serious point about context, which is that virality outstrips human curation. We see 4,000 shares in ten minutes; then we see 16,000 shares over 15 minutes. Then we look at labeling. We look at context. How do we inform the public so they make good decisions around it? We had a parallel to this in the analog era. When I was a kid, the tabloid would say aliens landed at Area 51. I'd ask my mother or friends, where does this come from? I like it that Facebook has been consistent in terms of their enforcement, and I'm also not going to say they should never change what those terms are. I think they're looking to Capitol Hill to figure out what it is that we want policed, and what does Europe want policed? I think they would like to hear from legislatures about what falls inside those parameters. The one thing that I do really like that they're doing is going after inauthentic account creation. Is there a particular company or a particular region or particular nation that is especially adept at this technology, that is developing it at a quicker rate, or whatever? It's distributed along the lines you'd expect of prominent research centers in America and China and Europe. It's distributed wherever you have good AI technology. At some point folks at home will be able to access it. It already is accessible. Absolutely. That's one of the big differences. You used to have to go out and buy Photoshop or have some of these desktop editors. Now a high school student with a good computer, and if they're a gamer they already have a good GPU card, can download data and train this type of thing overnight with software that's open and freely available. It's not something you have to be an AI expert to run. A novice can run these types of things. Thank you.
Mr. Quigley. Thank you for your participation. Following up on those points, the themes here: it's getting easier to do, the quality is getting better, it's getting harder to detect. Now, the examples we talk about as victims: elected officials, corporations, this horrible attack on journalists. But what about a small business with limited resources? What about individuals who are victims of the example you gave, professor, revenge porn for example? And doctor, you talked about the scale and widespread authentication. What capabilities might exist, as we go forward, either on social media platforms, for law enforcement, or for individuals themselves, to deal with this detection issue? Well, you know, I envision some time where there's a button on every social media piece, or every time you get even a text message with a video attached to it, that you can hit, and off it goes and gathers information, not necessarily totally automated. It's been vetted by one of many organizations, and if you can identify where it came from, the individual can make those decisions. The problem is that a lot of these types of technologies exist in the labs, in research, in different organizations, but they are not shared and they're not implemented at scale. So say I want to go out and test a picture. There was a very interesting picture of a tornado up in Maryland a couple of weeks ago. It looked surreal. And I immediately thought, oh, that must have been somewhere else, somewhere in the Midwest years ago. So I did a search. There's a reverse image search that you can do.
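A reverse image search of this kind boils down to comparing compact fingerprints of images rather than raw pixels, so that a resized or recompressed copy still matches its original. What follows is a minimal sketch of that idea using perceptual hashing, assuming the real third-party Python packages Pillow and imagehash; the file names are hypothetical.

```python
# A minimal sketch of perceptual-hash matching, the idea behind a
# reverse image search: visually similar images get nearby hashes,
# even after resizing or recompression. Assumes the third-party
# packages Pillow and imagehash; the file names are hypothetical.
from PIL import Image
import imagehash

def looks_like_known_image(candidate_path: str, known_paths: list[str],
                           max_distance: int = 8) -> list[str]:
    """Return the known images whose perceptual hash is within
    max_distance bits of the candidate's hash."""
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    matches = []
    for path in known_paths:
        known_hash = imagehash.phash(Image.open(path))
        # Subtraction gives the Hamming distance between 64-bit
        # pHashes; a small distance means visually similar images.
        if candidate_hash - known_hash <= max_distance:
            matches.append(path)
    return matches

if __name__ == "__main__":
    # Hypothetical files: a viral photo and an archive of older images.
    hits = looks_like_known_image("viral_tornado.jpg",
                                  ["archive/midwest_2013.jpg",
                                   "archive/maryland_2019.jpg"])
    print("Possible prior appearances:", hits)
```

A production search engine indexes billions of such hashes, but the matching principle is the same.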
After doing some research, I found that it was indeed real, and it was practically in my backyard. But not everybody has those types of capabilities. Not everybody thinks to do that type of thing. I know that I have relatives that see something and just want to share it. So I think the education piece and getting these tools out at scale is what we need to work towards. But the key is, even with detection, for the everyday person who has a deepfake sex video prominently featured in the Google search of their name, and a platform refuses to take it down, it is their CV, meaning it's part of what everyone sees about them. So it is incredibly destructive. The same is probably true for the small business, if there is a deepfake that really casts doubt on their business. They may not be able to have it removed even though it is false and it's an impersonation. Even if it's defamation, we know the law moves really slowly. We're in this limbo period where individuals will suffer, and it's incredibly hard to talk to victims because there's so little that I can force anyone to do. We're going to see a lot of suffering. The issues that we just talked about, are you trying to tackle those with the model laws that you're talking about? Yes. I am the vice president of the Cyber Civil Rights Initiative, and we have been working with lawmakers around the country, both at the state level and the federal level, thinking about how we might really carefully and narrowly craft a law that would ban deepfakes, or manufactured videos, that amount to criminal defamation. I think we've got work ahead of us at CCRI and with laws around the country. It can be tackled, but it's going to have a really modest impact, because law moves slowly. When you're doing this, you're talking to local and state law enforcement? Yes. I wrote a book about hate crimes in cyberspace, which was about the phenomenon of cyber stalking and how difficult it is to teach local law enforcement both about the technology and the laws themselves. They're great at street crimes, but when you talk about online crimes they say, I don't know how to really begin. I don't know how to get a warrant to get the ISP. I know Congresswoman Clarke has called for funding some training of local law enforcement on the question of cyber stalking. I'd love to see that not only with regard to cyber stalking and threats but more broadly. Thank you all.
Mr. Hurd. Thank you, chairman. I'm going to try to do something that's probably impossible in the next five minutes and get your perspective on four areas. You touched on authentication as a strategy for this. How do we develop a strategy, in a narrow national security sense, to counter disinformation, and who should be doing that, and, broadly, education? My first question is probably to you, Dr. Doermann. Can you talk to us about the ability to detect and the forensics? Is there an ability to do a pixel-by-pixel analysis, a bit-by-bit analysis? Are there other areas we should be focused on to help with the ability to detect? The approach that's been taken in the community is one of a comprehensive view. Yes, there are pixel-level types of analysis, not necessarily pixel by pixel, but also the metadata that you get and what the algorithms leave behind. You know there is residual information left when you take an image and compress it. How easy is that now, and who should potentially be doing that? Well, the government is putting a lot of money into this piece. As I said, there are a lot more manipulators than there are detectors.
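The "residual information" left by compression can be probed with a classic, if limited, trick known as error level analysis: resave a JPEG at a known quality and look at where the image diverges most, since regions pasted in from elsewhere often recompress differently from the rest of the frame. Here is a minimal sketch with Pillow; it illustrates the general idea and is not the MediFor toolchain or a reliable verdict on its own. The file name is hypothetical.

```python
# Error level analysis (ELA): recompress a JPEG in memory and measure
# where the result diverges from the original. Spliced-in regions often
# show a different error level than the rest of the frame. A rough
# heuristic, not a verdict; file names are hypothetical.
import io
from PIL import Image, ImageChops, ImageEnhance

def error_level_map(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)  # recompress in memory
    recompressed = Image.open(buffer)
    diff = ImageChops.difference(original, recompressed)
    # The raw difference is faint, so amplify it for viewing.
    return ImageEnhance.Brightness(diff).enhance(20.0)

if __name__ == "__main__":
    ela = error_level_map("suspect_photo.jpg")
    # Unusually bright regions in the saved map merit a closer look.
    ela.save("suspect_photo_ela.png")
    print("Per-channel error extrema:", ela.getextrema())
```

Serious forensic systems combine many such signals, plus metadata and model-specific artifacts, precisely because any single cue can be defeated.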
I would hope that behind closed doors the social media sites and the YouTubes of the world are looking into this type of application. I'm not sure. Is the ability to understand the various metadata, or even getting to the point where we can do pixel-by-pixel exploitation, available at scale? Personally, I don't like to use the word authentication. As we know, absolutely everything that goes up online is modified in some way, whether it's cropped or there's a color distribution adjustment. What word do you use? We like to say that things have been modified. But it's a scale. So there's a question of the intent of a modification: if you put a flower in a picture next to someone, that has a very different effect than if you replace somebody's face in a picture. This discussion, this attribution piece, and the actual report that says this is exactly what was done, was a big part of the MediFor program as well. So the closest you're going to get is to say all of these things happened to this image, and the user would have to make the decision on whether this is credible or not? Yes. Even in an automated way. If you are taking an image and you're the FBI and you're going to court, even if you did change one pixel, you lose the credibility. But if you're the FBI and you're doing an investigation and you have a very compressed, grainy surveillance video, it still might give you information. Disinformation is a subsection of covert action. Covert action and counter covert action are the responsibility of the Central Intelligence Agency. Yet the Central Intelligence Agency can't do covert action in the United States. How should we be looking at a government strategy to deal with disinformation, especially in the context of national security? Or whoever else is more appropriate to start with that. I think it's two parts. I would encourage the social media industry and the platforms to focus on methods: who's doing deepfakes, digital forgeries? Can we have a listing of those? They're not always nefarious, but then we know who the people are who are building the equipment. I would encourage the government then to focus on actors. This is, in the case of the CIA, overseas; DHS in terms of protecting the homeland; and the State Department, which used to have the U.S. Information Agency, would be out there outing and overtly going after those actors that are doing the manipulation. I feel like we are still, after several years now, really slow to do this. They're the only ones that can figure it out. When I have worked with social media teams and we spot actors we believe are doing things, we sometimes have to wait years for the government to say, yes, here's the Mueller report, when that had already been out in the news. The more rapidly the government can help, the more the social media companies know what to take down. Thank you. Chairman, I yield back.
Mr. Heck. Thank you, Mr. Chairman. First of all, Professor Citron, if what happened to that reporter in India had happened in America, did I understand correctly that it would not constitute a crime per se? It might be understood as cyber stalking, which is a crime under federal law and most states' laws. The problem is it was sort of like death by a thousand cuts. To constitute cyber stalking you need a course of conduct, a persistent repetition by the same person. So if it were the first time... what happens is it's like a cyber mob coming together. So one person puts up the photo or screenshot. Another person puts up the home address. Yet another person puts up "I'm available." All it says is "I'm available," with a screenshot.
So the person who originated it, under current law, would likely not be subject to criminal prosecution. Right. Did I also understand you to say that even if it were, it would have modest impact? What I said was, if we had criminal laws that combated the sort of deepfake phenomenon, really tailored to falsehoods and impersonations that create harm, I think law is really important, but its overall impact is modest, because we need a partnership. I want to move on, but I also cannot help but have this terrible flash of Dante's Inferno: abandon hope, all ye who enter here. Whose job should it be to label? That wasn't clear. I kind of thought it might be the media platform companies. I think it would be the creator, much as we do in the campaign finance space, where we have certain disclosure rules that say, if it's a political ad, you have to own it. If it's a foreign originator, how is it that we have any jurisdictional reach? We don't. There are no boundaries. As a matter of practical fact, even if it's created in America, transmitted to a foreign person and then retransmitted, we have no means of enforcement. Right. So labeling in and of itself... look, we've got social media platforms. If they had some responsibility, they may, and I'm pretty skeptical about whether we're going to get there in the near future with the technology of detection, but assuming that's possible, a reasonable practice could be disclosure saying, this is a fake, do with it what you will. So we actually have, as it were, a comparable truth verification mechanism currently: Snopes. And yet a member of my immediate family once posted how outrageous it was, and how the Constitution ought to be amended, because members of Congress can retire after one term and immediately collect full pension benefits, have health care free for life, and their children go to college for free. Not one word, not one letter of that assertion is true, which could have easily been verified if they had gone to Snopes. They didn't. And even if they had, in a political context the truth is that the person who's perpetuating that may have a political agenda such that they may also, in a parallel fashion, engage in ad hominem attacks against the reliability of Snopes. I don't have much time left, but I'm really interested in Mr. Himes getting at the issue of political speech and the First Amendment. You mentioned that we are protected against being impersonated. It's not clear to me how we square case law, which has created a very high barrier. It's incredibly important to recognize that everything you've just described is totally protected speech. You know, the United States has made clear, in a case called United States v. Alvarez, look, we protect falsehoods. As Justice Kennedy explained, it reawakens the conscience, it has us engage in counter speech and sort of recommit to citizenship. But there are times when falsehoods create certain kinds of cognizable harm that we should regulate. And that includes defamation, even of public officials, if said with actual malice as to the truth of the matter asserted. There are 21 crimes made up of speech. We can regulate certain words and images if they fall in one of those categories or if we can meet strict scrutiny. Yes, the presumption is that it's protected speech if it's a falsehood, but for falsehood that causes cognizable harm, the Court has said that is a space where we allow regulation. Thank you.
Mr. Welch. Thank you very much. This is very helpful.
There are different categories, and we're all trying to get our arms around them. There's the question of the First Amendment, which Mr. Heck and Mr. Himes were talking about. There's the question of foreign interference, and there's the question of economic harm and reputational harm. The whole world of publishing is upside down. It doesn't exist like it did prior to the internet. The question is whether we want to get back to some of the principles that applied pre social media. It's not like those principles have to be abandoned. They would apply. I just want to ask each of you whether we should get back to the requirement of an editorial function and a reasonable standard of care by treating those platforms as publishers. I know, Ms. Citron, you said yes. I'd be interested in hearing what the others think, just a yes or no on that. I think the horse has sort of left the gate on this. I don't think we're going to be able to get back to that. What about with the statutory change that Ms. Citron was proposing? Who has the duty? May I be clear for one second? Yeah. It wasn't that I was suggesting social media platforms be understood as publishers strictly liable, but rather that we condition their immunity on reasonable practices. Those reasonable practices may be the content moderation practices they use right now. So I'm going to disagree about calling them publishers who would be strictly liable for defamation. That's not what I'm suggesting at all. Thank you for that clarification. That seems to be one fundamental question we would have to ask, because that would be a legislative action. I think you have a whack-a-mole problem here. I do agree that it's very difficult to contemplate controlling speech in this way, because I think the habits of the entire culture have changed. What about this question of somebody going online and putting up a fake video that destroys an IPO? Who has the duty of care with respect to allowing that to be stated on their platform? Nobody has it? I think we can authenticate content and users, and I think you can make users culpable for certain types of content they post. Who would be liable in that case about the destroyed IPO? The speaker, the creator of the deepfake, if the platform had reasonable practices of authentication and ex post moderation. Does the platform under current law have any duty? They have no liability. That seems like an answer to a very direct question. Right. There's a different point of view, often between Republicans and Democrats, about bias and what goes on on the platforms, so there would have to be some standard that wasn't seen as tilting the playing field for Republicans or Democrats. Is that possible to do? And was that something that was true pre social media, in the days when you were the journalist, Jack? I was going to save that for standards. We can actually use technology a bit here to create technological standards for making a judgment call as to whether something is or is not fake. If you have open standards, that seems achievable and reasonable. Thank you all very much. My time is up. I yield back.
Ms. Demings. Thank you so much, Mr. Chairman. Thank you to all of you for being here. This conversation this morning has been pretty disturbing and actually quite scary. The internet is the new weapon of choice.
As I listened to the testimony and the questions here, as we think about how an individual who goes out and violates laws or creates harm would be held accountable, I believe that any individual or entity that bullies or stalks or creates harm or becomes a public safety risk, and any entity that creates an environment for those things to happen, should be held accountable as well. When I think about those around the world who are not our allies: they want to create chaos in this country, and what a wonderful, easy way to be able to do that. The fake information is of course a problem, but the other problem is that it creates an environment where good people no longer believe the good guys. And boy, are we seeing that in our country right now. That's a major problem. Our institutions, which we have grown to depend on and believe in, are no longer being believed. And that can create total chaos. Back to Mr. Heck's statement about, for example, a fake video being created in America but then transmitted to another country. Could not the act of transmitting that video be the violation? I know there's been a lot of discussion about there being no boundaries, and how you hold someone in a foreign land accountable, but I'd love to hear your thoughts. There are two pieces to that. There's the procedural, jurisdictional question of whether it's constitutional to haul them into court. Then there's the extradition question, which I'm going to rely on Mr. Watts for. If you're in America and you transmit a video that creates a public safety concern or a national security risk, could not the very act of transmitting it from America be the violation? It's directed outside the United States. Directed outside, transmitted from Florida. That's a different question than I thought you were asking. Under the 14th Amendment, the way we think about personal jurisdiction, if you're aiming an activity at another state and you're doing it purposefully, we can reach it, as long as there's a long-arm statute. You've confused me a little, because the question is, when it's an American directing harmful activity abroad, that's contingent on that country's extradition treaties and arrangements with us. I'm not a lawyer, and I try to avoid them, but I would generally say there is no specific provision around transmitting that abroad. I think it comes down to whatever country is affected by it, whether it breaks their laws, and whether they have an extradition relationship with the United States. That is probably not worked out. I'm not sure it's ever been executed. It could have been, and I'm not aware of it. It is something that needs to be addressed, because what has been very clear over the last four years is that there is no physical boundary in these communities, in these disinformation networks online. And sometimes the smartest manipulators out there, Russia, Iran, enlist content so that it looks authentic. And they're setting people up. Sometimes people are not aware of it. Sometimes they're aware of it. Those who are aware of it are doing it willingly. If you look at the Macron leaks, it was someone in North America that alerted the world to it and pointed the world to it. We are going out to other countries and asking them to do that for us. I know we've talked quite a bit, too, about the intelligence community and our national security entities. But could you talk just a little bit about how we should task the intelligence community and our national security entities with assessing and forecasting future impacts of deepfake technology?
I think there are two parts. One, who are the purveyors and actors who are going to use it? That's pretty straightforward. From the outside, even where I work, I can see that. I think the part that's missing is where the technologies are being developed. The number one place I would have someone as a liaison of the U.S. government is Tel Aviv. This is a central hub for everything from cyber tools to influence tools and influence operations, both good and bad depending on your perspective. But that is a tech hub. I feel like oftentimes when I talk to the government about that, they're well informed about what nation states are doing but miss what is available in the private sector in terms of AI and other tools out there. Yes, Mr. Clark? Can he respond quickly? Yes. Just quickly, to this point, it's worth repeating that the fundamental techniques and tools for this are published online, and we can easily compile quantitative metrics so we can do that forecasting. I agree with what Mr. Watts said, but it is easy to go discover this information for ourselves. Thank you. Mr. Chairman, I yield back.
Dr. Wenstrup. Thank you, Mr. Chairman. Thank you, Mrs. Demings; you asked the question we ran out of time on, on extradition laws. I appreciate that. Getting to punitive measures we may think about: with the extradition laws, we may have a lot of people hanging out in other people's embassies for many, many years rather than being extradited. As a doctor, I don't often find myself eager to engage with trial lawyers, but that's probably where we need to head with this, as people are harmed through all this. So I'm going through in my mind what kind of punitive measures. Certainly monetary would be included, because people end up, as we pointed out, with huge monetary losses because of these fake stories. And what about prison time? I mean, I think we really need to consider being pretty tough on this if it's to be effective. One thing I would add, which Chairman Schiff brought up, was about sanctions. What you did see in the indictments in July of '18 is that they're essentially being outed. That's very effective, but you could move down the chain of command, such that hackers, influencers, and propagandists don't want to be hired at those firms because they could be individually sanctioned. It would be hard to execute, but once we get good at it, it would be a great facet: if you could tamp down employment so that the best hackers don't want to work with the authoritarian regimes, it could change things. Also, in terms of cyber and hacking tools that are being used for malicious purposes and influence techniques, you could go after companies that are oftentimes international. They're not necessarily tied to a nation state, and that would send downward pressure across the disinformation space. It would push the activity more undercover, into places like the dark web, but that's okay, because that plays to our strengths. We have great intelligence capabilities at that end, and we have sophisticated intelligence agencies. And we would know where it is. Right. It changes the problem, but to our advantage. The other thing is, you mentioned sanctions. That does make a lot of sense, especially if it's a country where there's no way you're going to get some type of extradition agreement in place, right? Right. And I think that's the case with most of these locations, whether it's China, Iran, or Russia. Those are the three big ones. It would also send a message out across the world: if you are pushing on us, there are options we have.
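Mr. Clark's earlier point, that the underlying techniques are published openly and that quantitative metrics can be compiled for forecasting, can be made concrete with a small script. A minimal sketch follows, assuming arXiv's public Atom API behaves as documented; the query term "deepfake" is only an illustrative choice, and the year counts are whatever the API returns, not figures from this hearing.

```python
# A minimal sketch of compiling a quantitative metric from the open
# literature: count arXiv papers matching a query, per year, to track
# the pace of the underlying research. Uses only the standard library
# and arXiv's public Atom API; the query term is an assumption.
import re
import urllib.parse
import urllib.request

API = "https://export.arxiv.org/api/query"

def paper_count(term: str, year: int) -> int:
    """Count arXiv submissions matching `term` in a given year."""
    query = (f"all:{term} AND submittedDate:"
             f"[{year}01010000 TO {year}12312359]")
    params = urllib.parse.urlencode({"search_query": query,
                                     "max_results": 0})
    with urllib.request.urlopen(f"{API}?{params}") as response:
        feed = response.read().decode("utf-8")
    # The Atom feed reports the total hit count in opensearch:totalResults.
    match = re.search(r"<opensearch:totalResults[^>]*>(\d+)<", feed)
    return int(match.group(1)) if match else 0

if __name__ == "__main__":
    for year in range(2015, 2020):
        print(year, paper_count("deepfake", year))
```

A year-over-year series like this is crude, but it is exactly the kind of openly compilable trend line a forecaster could track without any classified access.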
I do think that the time for offensive cyber is at hand. General Nakasone has taken good measures and has talked about the actions they are taking. If these foreign manipulators knew we were going to respond in a very aggressive way, they would move away, whether it's arrest and extradition, sanctions on individuals, or even a cyber response. Right now there's not a lot of assurance. No. It's proliferated because we have not responded. Thank you. I yield back, unless someone else wants my time. I appreciate it. Thank you for the time. Thank you. I have a few questions. Can you talk a little bit about the... oh, I'm sorry. Mr. Castro.
Thank you, chairman. Professor, first, I enjoyed your article with Bobby out of Austin. You mentioned the case law on falsehoods. I think this will be a monumental task, to grapple with how we treat deepfakes. There is some speech, like hate speech and fighting words, that is not as protected as political speech. In making that determination, we have to figure out what the value is of the type of speech or expression. So let me ask you, what is the value of a fake? And just to add to that, thank you so much for reading our piece. Sure. It's the value to the listeners. When we think about free speech theory, it's the value to the autonomy rights of the speaker, for self-governance. The creator in this case. The creator, but also of course the listeners. Sure. The value of the fake could be profound, right? It could be that the deepfake contributes to art. Star Wars, we have Carrie Fisher coming back. I recognize what my co-panelists are suggesting. But we do have guides in the law about impersonations that cause harm, whether it's defamation law or another kind of speech where we say fraud. So you think we may go down the road, or the court may go down the road, where certain speech, like hate speech, is not protected the same way as political speech or even ordinary speech? I think we're going to stay firm on hate speech. But fakes? There will be certain fakes treated differently than other fakes. Right. Depending on the context. This is contextual. I don't think we can have a one-size-fits-all rule for deep synthetic video, even as to impersonations, because you could have satire and parody, which are important. At the same time, we've got to bring context to the fore and say these falsehoods cause real harm, real harm that doesn't enjoy First Amendment protection, or enjoys less rigorous protection, and we can regulate it. Let me follow up. I wanted to ask y'all: one of the big challenges we had with the Russian interference, particularly what they put on Facebook and social media, is that it seemed as though the social media companies were unprepared for it. There was no infrastructure for vetting or moderating those things. So, you know, just my rough sketch, and obviously y'all have thought about this a lot longer: I see there's a creator who uses software, who then posts on social media, and then the traditional media picks it up and further proliferates it into the bloodstream of American society. So where in there do we construct that infrastructure for vetting and for moderating these deepfakes? Who is responsible at each of those levels? You know, again, I'm not the lawyer or the policy maker, but I think there's another piece to that puzzle. Somebody puts something up that's innocent, and it gets used by someone else for a different message.
So, you know, this is almost not even a deepfakes problem, but something that gets put out there and then gets twisted in a certain way somewhere down the line. There are a lot of people that don't realize that The Onion articles are satire. Exactly. We need these things at every level: we need to be able to show the attribution of this information, how it progressed, and be able to make those decisions at every level. I would add, I think that scenario is exactly what will happen going into future elections by foreign adversaries, which will be to use as much organic content that suits their narrative as possible, amplify it, and inject it back. That's a standard disinformation approach. Especially as false content proliferates, more people are able to make it each year. They can make fake content. That means it's more available for adversaries to repurpose and reuse, which is the scenario David just talked about. I think the social media companies need to look at their thresholds, how they do labeling, and triage in terms of severity of impact. We know what some of those are: mobilization to violence, incitement to violence, but also, in terms of effect on democracies and political institutions, things related to elections. Right now I would be very worried about someone making a fake video about electoral systems being out or broken down on election day 2020. We should already be building a battle drill, a response plan, for how we would handle that in the federal government, the state governments, and DHS, as well as with the social media companies. Thank you, chairman. I yield back. Thank you. I just want to ask a few follow-up questions. I don't know if any of you know how many views the doctored video of Speaker Pelosi has received to date. But I wonder if you have a sense, if there are X million views of those videos, how many of those millions will ultimately learn that it was a fake, and how many will be permanently misled? And then, what's more, could you comment on the phenomenon that even if you're later persuaded that what you have seen of the person is not true, I understand that psychologists will tell you that you never completely lose the lingering negative impression you have of the person? So I wonder if you could comment on those two issues. Fact checks and clarifications tend not to travel nearly as far as the initial news. We would expect the same thing here. A tiny minority will be aware it was doctored, would be the assumption. So the assumption would be that if you put this out, a very small minority will learn it's a fake, no matter how well you or the press do at putting that out there. Because the truth in this case, that what you've seen is false, may not be as visually impactful; it may not be visual at all, unlike seeing the video. The way I put it: if you care, you care about clarifications and fact checks. But if you're just enjoying media, you're enjoying media. You enjoy and experience the media, and an absolute minority care whether it's true. It's just a general thing. What should, whether it's journalism or whatnot, what should teachers in schools be educating young people about these days, about whether you can believe what you see? This gets to the liar's dividend. And by the way, in politics there's a saying: the first time you hear an expression or anecdote or story, you make personal attribution. The second time, you say somebody once said. And the third time, it's pretty much yours. So the liar's dividend is now out there.
But, you know, how do we educate young people, or not-so-young people, about how to perceive media now, without encouraging them to distrust everything, in which case there is this liar's dividend? It's true that the more what we're seeing confirms our world view, even if it's totally false, social psychology studies show that we are just going to ignore the correction, so we will believe the lie if it confirms our priors. It's confirmation bias. It becomes hard for the fakery to be debunked, because it's a visceral video and it fits your world view. It's really tough. I guess that's the task of us as parents, as educators, as teachers, as we talk to our students. The critical thinking piece I wrote about ten years ago was about how we teach students to do a Google search, and whether they can believe everything that comes up prominently in whatever search they're doing. And we saw that if you did a search for "Jew," an antisemitic site was the first result to come up. Teachers had to explain that doesn't mean it's the authority. I think we have the same struggle today: yes, we're going to teach them about deepfakes, but I think we've also got to teach them about the misuse of the phenomenon to avoid and escape responsibility. The other challenge, too: we have a White House that has popularized the term fake to describe lots of things that are real, in fact some of the most trusted news sources in the country. So there's already an environment in which there's license to call things fake that are true but are critical. And it seems that that's pretty fertile ground for the proliferation of information that is truly fake. And we find ourselves, frankly, trying to find other words for it: false, fraudulent. Because fake has been so debased as a term, people don't really know what you mean by it. I think it's worth noting, too, that when President Trump referred to the Access Hollywood tape, he said that never really happened; the Holt interview, that wasn't right. We've seen the liar's dividend deployed from the highest of bully pulpits, so I think we've got a real problem on our hands. Do you think there's some optimism for tools? I've been involved in numerous arguments with friends where we've gone and checked something like Wikipedia. You end up using the information sources around you, and you can train people that there are certain sources you can go to to settle an argument, as it were. And I think we can develop such tools for some of this technology. I think that's a great motivation for having this information up front. You know, when Mr. Heck was saying that he had a family member that didn't know about going to Snopes: if that information was attached to the video or email ahead of time, they would have had access to it and wouldn't have had to search for it. I'm thinking of applying in 2020 what we saw in 2016. In 2016, among other things, the Russians mimicked Black Lives Matter to push out content to racially divide people. And you can imagine a foreign bad actor, particularly Russia, dividing us by pushing out fake videos of police violence on people of color. We have plenty of real, authentic ones. But you could certainly push out videos that are enormously jarring and disruptive. And even more so: seeing a false video of someone, and still having that negative impression, you can't unwind the consequences of what happens in the community. It's hard to imagine that not happening, because there are such low barriers to entry.
And there will be such easy deniability. If I could add, there is some good news, in that if you watch Facebook's newsroom, they're doing takedowns nearly every week now. They've sped that up considerably. We have a curriculum for evaluating information sources in the U.S. government. I was trained on it at the FBI Academy. They have it at the Defense Intelligence Agency and the Central Intelligence Agency: how to evaluate information outlets, how to evaluate expertise. It's unclassified. There's no secret course. It's a question of how you adapt that into the online space. The audience I'm most worried about is not young people on social media. It's the older generation that has come to the technology late and doesn't really understand it; they understand the way newspapers are produced, where the outlet is coming from, who the authors are. I was with a group at the New York City Media Lab, and they had a group of students working on how we help the older generation that is new to social media. You can send them tips and cues: do you know where the information is located? Do you know who the author or the content provider is? I think there are simple tools like that we could develop, or the social media outlets could develop, for all audiences. Young people have done this more than their parents have. I think in terms of thinking about approaches, it's: what is the generation, and what are the platforms they are on? Do they understand that places like 8chan are based in the Philippines and not really in the United States, in the sense of our ability to administer these things? There are simple tools we could build that are nothing more than widgets, public awareness campaigns, things we could take from the government that we've already developed and repackage for different audiences in the United States. Dr. Doermann, if I could: is the technology already at the stage where good AI could produce video that is indistinguishable to people with the naked eye? Could AI fool you if you don't have computer analysis as to whether the video is authentic? Yes, I think there are examples out there that, if taken out of context, if they're sent out with a story or message attached, people will believe. And it's not just people that have that agenda. There was a video out there that showed a plane flying upside down, very realistic looking. And I think what people will need to do is get confirmation from other sources that something really happened. So a video in isolation, if that's what you're talking about: you're given it and asked, does this look authentic or not, independent of whether it passes the sniff test, so to speak. It won't always be possible to disprove the audio or video by disproving the circumstances around it. In other words, if there were audio of Dr. Wenstrup on a phone discussing a bribe, he wouldn't be able to say, I was in this place at this time and couldn't have been on the phone, because the call could have taken place at any time. Or if there's a video of Val Demings, it won't always be possible to show she was somewhere else at the time. Do you see the technology getting to the point where, in the absence of the ability to prove externally that the video or audio is fake, the algorithms that produce the content will be so good that the best you'll be able to do is a computer analysis that gives you a percentage, the likelihood this is a forgery is 75 percent, but you will never be able to get to 100 percent?
Are we headed for that day, where it just won't be possible to show that something we've seen or heard is illegitimate? Part of the MediFor program was exactly that: coming up with a quantitative scale of what manipulation or deception is. I don't know if they've gotten there. I left partway through the program. But yes, I think there's going to be a point where we can throw absolutely everything we have at this, at these types of techniques, and there will still be some question about whether it's authentic or not. In the case of the audio, you could do close analysis with tools and voice verification, all of those sorts of things. But just as in a court of law, you're going to have one side saying one thing and the other side saying another, and there are going to be cases where there's nothing definitive. I definitely believe that. Do my colleagues have any further questions? On that optimistic note, we'll conclude. And once again, my profound thanks for your testimony and your recommendations. The committee is adjourned.