In the heat of the 2016 election, as the Russian hacking and dumping operation became apparent, my predominant concern was that the Russians would begin dumping forged documents along with the real ones that they stole. It would have been all too easy to seed forged documents among the authentic ones in a way that would make them almost impossible to identify. Even if a victim could ultimately expose the forgery for what it was, the damage would be done.

Three years later, we are on the cusp of a technological revolution that could enable even more sinister forms of deception and disinformation by malign actors, foreign or domestic. Advances in AI and machine learning have led to the emergence of advanced, digitally doctored types of media, so-called deepfakes, that enable malicious actors to sow chaos and disrupt entire campaigns, including that for the presidency. Progress in artificial intelligence algorithms has made it possible to manipulate media, whether video, imagery, audio, or text, with incredible, nearly imperceptible results. With sufficient training data, these powerful algorithms can portray a real person doing something they never did, or saying words they never uttered. These tools are readily available and accessible to experts and novices alike, meaning that attribution of a deepfake to a specific author, whether a hostile intelligence service or a single internet troll, will be a constant challenge.

What is more, once someone views a deepfake, the damage is largely done. Even if later convinced that what they have seen is a forgery, that person may never completely lose the lingering negative impression the video has left with them. It is also the case that not only can fake videos be passed off as real, but real information can be passed off as fake. This is the so-called liar's dividend, in which people with a propensity to deceive are given the benefit of an environment in which it is increasingly difficult for the public to determine what is true.

To give our members and the public a sense of the quality of deepfakes today, I want to share a few short examples, and even these are not the state of the art. The first comes from Bloomberg Businessweek and demonstrates an AI-powered clone of the voice of one of its journalists. Let's watch.

[video clip] I'm going to call my dear, sweet mother and see if she recognizes me. [phone ringing] Mom? Hey, what are you guys up to today? We didn't have any electricity early this morning, and we are just hanging around the house. I'm just finishing up work and waiting for the boys to get home. OK. I think I'm coming down with a virus. I was messing around with you; you were talking to a computer. I thought I was talking to you.

It is bad enough that was fake, but he is deceiving his mother and telling her that he's got a virus; that seems downright cruel. The second clip demonstrates a puppet-master type of video. As you can see, these people are able to co-opt the head movements of their targets and, with convincing audio, can turn a world leader into a ventriloquist dummy. Next, a brief CNN clip highlighting the research of an acclaimed expert from UC Berkeley and featuring an example of a so-called face swap video, in which Senator Elizabeth Warren's face is seamlessly transplanted onto the body of Kate McKinnon.

[video clip] I haven't been this excited since I found out my package from L.L. Bean had shipped.

So, the only problem with that video is that Kate McKinnon actually looks a lot like Elizabeth Warren. But both were actually Kate McKinnon; the one on the left just had Elizabeth Warren's face swapped onto hers. It shows you how convincing that kind of technology can be. These algorithms can also learn from pictures of real faces to make completely artificial portraits of persons who do not exist at all. Try to pick out which of these faces are real and which are fake. Of course, as you may have guessed, all four are fake. All of them are synthetically created.
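Portraits like these are typically produced by generative adversarial networks (GANs), in which a generator network learns to map random noise to photorealistic images. The sketch below is a minimal, untrained DCGAN-style generator in PyTorch, for illustration only: the class and parameter names are ours, and real face synthesizers are trained at much higher resolution on large face datasets.

```python
# Minimal DCGAN-style generator sketch (PyTorch). Untrained and illustrative:
# production "this person does not exist" portraits come from far larger
# models trained for days on large face datasets.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, img_channels=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            # Project the random latent vector up to a 4x4 feature map,
            # then repeatedly upsample until we reach a 32x32 image.
            nn.ConvTranspose2d(z_dim, base * 8, 4, 1, 0), nn.BatchNorm2d(base * 8), nn.ReLU(True),
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1), nn.BatchNorm2d(base * 4), nn.ReLU(True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1), nn.BatchNorm2d(base * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base * 2, img_channels, 4, 2, 1),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

g = Generator()
z = torch.randn(1, 100, 1, 1)   # a random point in latent space
fake_face = g(z)                # one synthetic 32x32 RGB image
print(fake_face.shape)          # torch.Size([1, 3, 32, 32])
```

The key design point is that every sample of `z` yields a different face, which is why such portraits can be produced in unlimited quantity at negligible cost.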
Thinking ahead to 2020 and beyond, one does not need great imagination to envision even more nightmarish scenarios that would leave the government and the media struggling to discern what is real and what is fake: a state-backed actor creates a deepfake video of a political candidate accepting a bribe with the goal of influencing an election; an individual hacker claims to have stolen audio of a private conversation between two world leaders when no such conversation took place; or a troll farm uses text-generating algorithms to write false and divisive news stories, overwhelming journalists' ability to verify them and users' ability to trust what they are seeing or reading.

What enables deepfakes to become truly pernicious is the ubiquity of social media and the velocity at which false information can spread. We got a preview of what that might look like recently, when a doctored video of Nancy Pelosi went viral on Facebook, receiving millions of views in the span of 48 hours. That video was not a deepfake but a crude manual manipulation. Nonetheless, it demonstrates the scale of the challenge we face and the responsibilities that social media companies must confront. Already, the companies have taken different approaches, with YouTube deleting the altered video while Facebook labeled it as false and throttled back its spread once it was deemed fake. Now is the time for social media companies to put in place policies to protect users from this kind of misinformation, not in 2021, after viral deepfakes have polluted the 2020 elections.

In keeping with a series of open hearings that have examined different challenges to our national security, the committee is devoting this hearing to deepfakes and synthetic media. We need to understand the implications, the underlying technology, and the reach deepfakes have on the internet before we consider appropriate steps to mitigate the potential harms. We have a distinguished panel of experts and practitioners to help us understand and contextualize the potential threat, but before turning to them, I would like to recognize Ranking Member Nunes for any statement he would like to give.

Thank you, Mr. Chairman. I join you in your concern, and I want to add to that fake news, fake dossiers, and everything else we have in politics. I do think that, in all seriousness, this is real. If you get online, you can see pictures of yourself, Mr. Chairman. They are quite entertaining, some of them; there may be some of you as well. I decided not to bring any today to put on the screen. [laughter] With all seriousness, I appreciate the panelists being here.

I thank the Ranking Member. I would like to welcome today's panel. First, Jack Clark, who is the policy director of a research and technology company based in San Francisco and a member of the Center for a New American Security task force on artificial intelligence and national security. Next, a professor and director of the Artificial Intelligence Institute at the University at Buffalo; until last year, he was the program manager of DARPA's Media Forensics program.
Danielle Citron is a professor of law at the University of Maryland, and she has coauthored several notable articles about the potential impacts of deepfakes on national security and democracy. And finally, a distinguished research fellow at the Foreign Policy Research Institute and a senior fellow at the Alliance for Securing Democracy, whose recent scholarship addresses these issues. Welcome to all of you, and why don't we start with you, Mr. Clark?

Mr. Clark: Chairman Schiff, Ranking Member Nunes, and committee members, thank you for the invitation to testify about the national security threats posed by AI-generated fake content. So, what are we talking about when we discuss this subject? Fundamentally, we are talking about digital technologies that make it easier for people to create synthetic media, which can be video, images, audio, or text. People have been manipulating media for a very long time, as you know, but things have changed recently. I think there are two fundamental reasons why we are here. One is the continued advancement of computing capabilities, that is, the physical hardware we use to run software, which has become significantly cheaper and more powerful. At the same time, software has become increasingly accessible, and some of that software is starting to incorporate AI, which makes it dramatically easier for us to manipulate and edit media. It allows for step changes in functionality for video and audio editing that was previously very difficult. The trends driving cheaper computing and easier-to-use software are fundamental to the economy and to many of the innovations we have had in the last few years.

As we think about AI, one of the confounding factors here is that the same AI technology used in the production of synthetic media, or deepfakes, is also likely to be used in valuable scientific research: hearing aids that help people understand what others are saying to them, and other things that may revolutionize medicine. At the same time, these techniques can also be used for purposes that justifiably cause unease, like being able to synthesize someone else's voice, impersonate them on video, or write text in the style they use online. We have also seen researchers develop techniques that combine these things, creating videos of people saying things they have not said and appearing to do things they have never done. I know how awkward it can be to have words put in your mouth that you did not say. Deepfakes take this problem and accelerate it.

I believe there are several interventions we can make that will improve the state of things. One is institutional intervention. It may be possible for large-scale technology platforms to develop tools for the detection of malicious synthetic media at both the individual account level and the platform level. And we could imagine these companies working together privately, as they do today with cybersecurity, where they exchange threat intelligence with each other and develop a shared understanding of what the threat looks like.
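Mr. Clark does not spell out a mechanism for that exchange, but one minimal form it could take, by analogy with existing industry hash-sharing programs for abusive content, is a shared list of fingerprints of confirmed synthetic media. The sketch below is illustrative only: the function names are invented, and a production system would use perceptual hashes robust to re-encoding rather than the exact digests shown here.

```python
# Illustrative sketch of cross-platform "threat intelligence" sharing for
# known synthetic media. Each platform contributes fingerprints of confirmed
# deepfakes (here plain SHA-256 digests; real systems would add perceptual
# hashes that survive re-encoding) to a shared list others check against.
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """Exact-match fingerprint of a media file's bytes."""
    return hashlib.sha256(media_bytes).hexdigest()

# One platform flags a confirmed deepfake and publishes its fingerprint.
shared_blocklist = {fingerprint(b"...bytes of a confirmed deepfake...")}

def screen_upload(media_bytes: bytes) -> str:
    """Check a new upload against fingerprints shared by other platforms."""
    if fingerprint(media_bytes) in shared_blocklist:
        return "match: previously identified synthetic media"
    return "no match: pass to downstream detectors"

print(screen_upload(b"...bytes of a new upload..."))
```

The point of the design is that a fake debunked once, on any platform, need not be re-analyzed from scratch everywhere else.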
We can also increase funding. As mentioned, Dr. David Doermann previously led a program here. We have existing initiatives that are looking at the detection of these technologies, and I think it would be judicious to consider expanding that funding further so we can develop better insights. I also think we can measure this. What I mean by measurement is this: it is great that we are here now, ahead of 2020, but these technologies have been in open development for several years. It has been possible for us to read research papers and code and talk to people, and we could have been creating metrics for the advancement of this technology for several years. I believe that government should be in the business of measuring and assessing these threats by looking directly at the scientific literature and working from that basis to lay out next steps.

I think we also need to work at the level of norms. At OpenAI, we have been thinking about different ways to release or talk about the technology that we develop. It is challenging, because science runs on openness, and we need to preserve that so science continues to move forward, but we need to consider different ways of releasing technology, or of talking to people about the technology we are creating, ahead of releasing it. Finally, I think we need comprehensive AI education. None of this will work if people do not know what they do not know. We need to give people the tools to understand that this technology has arrived, because although we may make a variety of interventions to deal with the situation, people need to know that it exists.

As I hope this testimony has made clear, I do not think AI is the cause of this. I think AI is an accelerant to an issue that has been with us for some time. We do need to take steps here to deal with this problem, because at its base it is challenging. Thank you very much.

Thank you. Dr. Doermann?

Dr. Doermann: Thank you. Chairman Schiff, Ranking Member Nunes, thank you for the opportunity to be here this morning to discuss the challenges of countering media manipulation at scale. Authors have long used the phrase "seeing is believing," but in the past half decade, we have come to realize that is not always true. In late 2013, I was given the opportunity to join DARPA as a program manager, where I was able to address a variety of challenges facing our military and intelligence communities. Although I am no longer a representative of DARPA, I did start the Media Forensics program, MediFor, and it was created to address the many technical aspects of the problems we are talking about today.

The general problem MediFor is addressing is our ability to analyze and detect manipulated media, which was being used with increasing frequency by our adversaries. It was clear that our manual processes, despite being carried out by competent analysts and personnel in the government, could not, at the time, deal with the problem at the scale at which manipulated content was being created and proliferated. The government got ahead of this problem knowing that it was a marathon, not a sprint. The program was designed to address both current and evolving manipulation capabilities, not with a single point solution, but with a comprehensive approach.

Over the past five years, we have gone from a new technology that could produce novel results, at the time, but nowhere near what could be done manually with basic desktop editing software, to open source software that can take the manual effort completely out of the equation. There is nothing fundamentally wrong or evil about the underlying technology that gives rise to the concerns we are testifying about today. Deepfakes are only a tool, and there are more positive applications of these networks than there are negative ones. As of today, there are solutions that can identify deepfakes reliably, but only because their creators have focused on deceiving visual perception rather than on covering up trace evidence.
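Neither witness describes a specific detector, but a common research approach matching Dr. Doermann's description is a binary classifier that scores individual video frames as real or synthetic and aggregates the results. A minimal, untrained PyTorch sketch of that shape follows; all names are ours, and a usable detector would be trained on large corpora of paired real and manipulated video.

```python
# Sketch of a frame-level deepfake detector: a small binary CNN scores each
# sampled video frame as real vs. synthetic, and the per-frame scores are
# averaged into a video-level score. Untrained and illustrative only.
import torch
import torch.nn as nn

class FrameDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(1))

    def forward(self, frames):  # frames: (N, 3, H, W)
        # Sigmoid output is interpreted as P(frame is synthetic).
        return torch.sigmoid(self.head(self.features(frames)))

detector = FrameDetector()
video = torch.rand(8, 3, 224, 224)            # 8 sampled frames, stand-in data
video_score = detector(video).mean().item()   # aggregate over frames
print(f"probability synthetic: {video_score:.2f}")
```

This family of detector works precisely because today's generators leave pixel-level artifacts, which is why, as the testimony goes on to note, it degrades once creators start covering their traces.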
If history is any indicator, it is only a matter of time before the current detection capabilities are rendered less effective, in part because some of the same mechanisms used to create this content are also used to cover it up.

I want to make it clear, however, that combating synthetic and manipulated media at scale is not just a technical challenge; it is a social one as well, as I am sure others will testify this morning. There is no easy solution, and it is likely to get much worse before it gets better. Still, we have to continue to do what we can. We need to get tools and processes into the hands of individuals, rather than relying completely on the government or on social media platforms to police content. If individuals can perform a sniff test and the media smells of misuse, they should have ways to verify it, disprove it, or easily report it. The same tools should be available to the press, to social media sites, and to anyone who shares or uses this content. The truth of the matter is that people who share this material are part of the problem, even when they do not know it.

We also need to apply automated detection and filtering at scale. It is not sufficient to analyze content after the fact; we need to apply detection at the front end of the distribution pipeline. Even if we cannot keep manipulated media from appearing, we should provide appropriate warning labels that suggest the content is not real or authentic, or is not what it purports to be. This is independent of whether the decisions are made by humans, by machines, or by a combination of the two.

We need to continue to put pressure on social media companies to realize that the way their platforms are being misused is unacceptable. They must do all they can to address today's issues and not allow things to get worse. Make no mistake, though: this is a race. The better the manipulators get, the better the detectors need to be, and there are more manipulators than there are detectors. Like spam and malware, it is a race that may never be won, but we must close the gap, make it less attractive to propagate false information, and perhaps at least level the playing field.

One thing that kept me up at night was that someday our adversaries would be able to create entire events with minimal effort. Those events might include images and scenes from different angles, video content appearing to come from different devices, and text, together providing overwhelming amounts of evidence that an event has occurred, which could lead to social unrest or retaliation before it can be countered. If the past five years are any indication, that someday is not far in the future. Thank you.
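A minimal sketch of the front-of-pipeline screening and labeling flow Dr. Doermann describes might look like the following, assuming a hypothetical scoring function and invented thresholds; a real platform's policy would be far more involved and would route decisions to humans, machines, or both.

```python
# Sketch of front-end screening: score media at upload time, attach a warning
# label above one threshold, and hold for human review above a higher one.
# The scorer and both thresholds are stand-ins, not any platform's policy.
from dataclasses import dataclass
from typing import Callable, Optional

LABEL_THRESHOLD = 0.5   # hypothetical: attach "possibly manipulated" label
REVIEW_THRESHOLD = 0.9  # hypothetical: hold for human review

@dataclass
class Decision:
    publish: bool
    label: Optional[str]
    human_review: bool

def screen(media_bytes: bytes, score_fn: Callable[[bytes], float]) -> Decision:
    """Apply detection at the front end of the pipeline, before distribution."""
    score = score_fn(media_bytes)  # probability the media is synthetic
    if score >= REVIEW_THRESHOLD:
        return Decision(publish=False, label=None, human_review=True)
    if score >= LABEL_THRESHOLD:
        return Decision(publish=True, label="possibly manipulated media",
                        human_review=False)
    return Decision(publish=True, label=None, human_review=False)

# Stand-in scorer; a real system would call a trained detector here.
print(screen(b"uploaded video bytes", score_fn=lambda b: 0.72))
```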
Thank you. Professor Citron?

Professor Citron: Thank you very much, Chairman Schiff, Ranking Member Nunes, and the committee for having me here today to talk about the phenomenon of deepfakes, the risks that they pose, and what law can and should do about it. A few things come together that make deepfakes particularly troubling when they are provocative and destructive. We know that as human beings, video and audio are visceral; we tend to believe what our eyes and ears are telling us. We also tend to believe, and to share, information that confirms our biases, and that is particularly true when the information is novel and negative. The more salacious, the more willing we are to pass it on. And we are seeing deepfakes, or will see them, in social networks that are ad-driven, whose entire enterprise is to have us click and share.

When we bring these things together, provocative and salacious deepfakes will be spread virally. There are so many harms that my coauthor and I have written about, but I will focus on the more concrete ones and on what law can and should do about them. There are concrete harms