
CSPAN3 House Intelligence Hearing On National Security Risks Of Artificial Intelligence, July 14, 2024

The committee will come to order. Before we begin, I want to remind all members we are in open session. We will discuss unclassified matters only. Please have a seat. Our members may be wandering in a bit late. We were here until 1:00 in the morning, but those in Armed Services were here until about 5:00 or 6:00 in the morning, so a few groggy members here on the committee.

In the heat of the 2016 election, as the Russian hacking and dumping operation became apparent, my predominant concern was that the Russians would begin dumping forged documents along with the real ones that they stole. It would have been all too easy for Russia or another malicious actor to seed forged documents among the authentic ones in a way that would make it impossible to identify or rebut the fraudulent material. Even if the victim could expose the forgery for what it was, the damage would be done.

Three years later, we're on the cusp of a technological revolution that could enable even more sinister forms of deception and disinformation by malign actors, foreign or domestic. Advances in AI and machine learning have led to new types of media, so-called deepfakes, that enable malicious actors to foment chaos, division, or crisis, and that have the capacity to disrupt entire campaigns, including that for the presidency. Rapid progress in artificial intelligence algorithms has made it possible to manipulate media, video imagery, audio, and text with incredible, nearly imperceptible results. With sufficient training data, these powerful algorithms can portray a real person doing something they never did or saying words they never uttered. These tools are readily available and accessible to experts and novices alike, meaning that attribution of a deepfake to a specific author, whether a hostile intelligence service or a single internet troll, will be a constant challenge.

What's more, once someone views a deepfake or a fake video, the damage is largely done. Even if later convinced that what they have seen is a forgery, that person may never completely lose the lingering negative impression the video has left with them. It is also the case that not only may fake videos be passed off as real, but real information can be passed off as fake. This is the so-called liar's dividend, in which people with a propensity to deceive are given the benefit of an environment in which it is increasingly difficult for the public to determine what is true.

To give our members and the public a sense of the quality of deepfakes today, I want to share a few short examples, and even these are not the state of the art. The first comes from Bloomberg Businessweek. It demonstrates an AI-powered cloned voice of one of the journalists. Let's watch.

[Video clip] Now, to really put my computer voice to the test, I'm going to call my dear, sweet mother and see if she recognizes me. [Ringing] Hey, mom. What are you guys up to today? Well, we didn't have any electricity early this morning and we're just hanging around the house. I'm just finishing up work and waiting for the boys to get home. Okay. I think I'm coming down with a virus. Oh, well, you feel better. Hey, you were talking to the computer? Like I was talking to you? It's amazing. [End of clip]

All right. It's bad enough that was a fake, but he's deceiving his mother and telling her that he's got a virus. That seems downright cruel. The second comes from Quartz and demonstrates a puppet-master type of deepfake video.
As you can see, these people are able to co-opt the head movements of their targets. If married with convincing audio, you can turn a leader into a ventriloquist's dummy. Next, a brief CNN clip highlighting research from a professor, an acclaimed expert on deepfakes, and featuring an example of a face-swap video in which Senator Elizabeth Warren's face is seamlessly transposed onto the body of Kate McKinnon.

[Video clip] I haven't been this excited since I found out my package from L.L. Bean had shipped. I'm ready to fight. [End of clip]

So the only problem with that video is that Kate McKinnon looks a lot like Elizabeth Warren. But both were actually Kate McKinnon; the one on the left had Elizabeth Warren's face swapped onto her. It shows you how convincing that kind of technology can be. These algorithms can learn from pictures of real faces to make completely artificial portraits of persons who do not exist at all. Can anyone here pick out which face is real and which are fake? And of course, as you may have all guessed, all four are fake. All four of those faces are synthetically created. None of those people are real.

Think ahead to 2020 and beyond. One does not need great imagination to envision even more nightmarish scenarios that would leave the government, the media, and the public struggling to discern what is real and what is fake. A state-backed actor creates a deepfake video of a political candidate accepting a bribe with the goal of influencing an election. Or an individual hacker claims to have stolen audio of a private conversation between two world leaders when, in fact, no such conversation took place. Or a troll farm uses text-generating algorithms to write false or sensational news stories, flooding social media platforms and overwhelming journalists' ability to verify them and users' ability to trust what they're seeing or reading.

What enables deepfakes and other modes of disinformation to become pernicious is the ubiquity of social media and the velocity at which false information can spread. We had an example of that when a doctored Nancy Pelosi video went viral in the span of a few hours. It was a crude manual manipulation that some have called a cheap fake. Nonetheless, the video's virality on social media demonstrates the scale of the challenge we face and the responsibilities that social media companies must confront. Already the companies have taken different approaches, with YouTube deleting the altered video of Speaker Pelosi, while Facebook labeled it as false and throttled back the speed at which it spread once it was deemed fake by independent fact-checkers. Now is the time for social media companies to put in place policies to protect users from this kind of misinformation, not in 2021 after viral deepfakes have polluted the 2020 elections. By then it will be too late.

And so, in keeping with the series of open hearings that have examined different strategic challenges to our security and democratic institutions, the committee is devoting this hearing to deepfakes and synthetic media. We need to understand deepfakes and the internet platforms that give them reach before we consider appropriate steps to mitigate the potential harms. We have a distinguished panel of experts to help us understand the potential threat of deepfakes. Before turning to them, I'd like to recognize Ranking Member Nunes for any opening statement he'd like to give.

Thank you, Mr. Chairman. I join you in your concern about deepfakes, and I want to add to that fake news, fake dossiers, and everything else that we have in politics.
I do think that, in all seriousness, though, this is real. If you get online, you can see pictures of yourself, Mr. Chairman, on there. They're quite entertaining, some of them, though maybe not entertaining for you, so I decided not to put them on the screen today. But in all seriousness, I appreciate the panelists being here and look forward to your testimony. I yield back.

I thank the Ranking Member. Without objection, the opening statements will be made part of the record. I'd like to welcome today's panel. First, Jack Clark, who is the policy director of OpenAI, a research and technology company based in San Francisco, and a member of the Center for a New American Security's task force on Artificial Intelligence and National Security. Next, David Doermann, a professor and director of the Artificial Intelligence Institute at the University at Buffalo. Until last year, he was the program manager of DARPA's Media Forensics program. Danielle Citron is a professor of law at the University of Maryland Francis King Carey School of Law. She has coauthored several articles about the potential impact of deepfakes on national security and democracy. Finally, Mr. Clint Watts, a distinguished research fellow at the Foreign Policy Research Institute and a senior fellow at the GMF Alliance for Securing Democracy, whose recent scholarship addresses social media influence operations. Welcome to all of you. And why don't we start with you, Mr. Clark.

Chairman Schiff, Ranking Member Nunes, and committee members, thank you for the invitation to testify about the national security threats posed by the intersection of AI, fake content, and deepfakes. So what are we talking about when we discuss this subject? Fundamentally, we're talking about digital technologies that make it easier for people to create synthetic media, and that can be video, images, audio, or text. People have been manipulating media for a very long time, as you well know. But things have changed recently, and I think there are two fundamental reasons why we're here. One is the continued advancement of computing capabilities, that is, the physical hardware we use to run software on, which has gotten significantly cheaper and more powerful. At the same time, software has become increasingly accessible and capable, and some of that software is starting to incorporate AI, which makes it dramatically easier for us to manipulate media and allows for a step change in functionality for things like audio or video editing, which was previously very difficult.

Now, the forces driving cheaper computing and easier-to-use software are fundamental to the economy and to many of the innovations we've had in the last few years. So when we think about AI, one of the confounding factors here is that the same AI technologies used in the production of synthetic media or deepfakes are likely to be used in valuable scientific research. They're used by scientists to allow people with hearing issues to understand what other people are saying to them, or they're used in molecular assays and other things that could revolutionize medicine. At the same time, these techniques can be used for purposes that justifiably cause unease, like synthesizing the sound of someone else's voice, impersonating them on video, or writing text in the style they use online. We've seen techniques combining these things, allowing someone to create a virtual person who can say things they haven't said and appear to do things that they haven't necessarily done.
I'm sure that members of the committee are familiar with run-ins with the media and know how awkward it can be to have words put in your mouth, something you didn't say. Deepfakes accelerate that. How might we approach this challenge? I actually think there are several interventions we can make that will improve the state of things.

One is institutional intervention. It may be possible for large-scale technology platforms to try to develop and share tools for the detection of malicious synthetic media at both the individual account level and the platform level. And we could imagine these companies working together privately, as they do today with cybersecurity, where they exchange threat intelligence with each other and with other actors to develop a shared understanding of what this looks like.

We can also increase funding. As mentioned, Dr. David Doermann previously led a program here. We have existing initiatives that are looking at the detection of these technologies, and I think it would be judicious to consider expanding that funding further so that we can develop better insights here.

I think we can also measure this. What I mean by measurement is that it's great that we're here now, ahead of 2020, but these technologies have been in open development for several years. It's possible for us to read research papers, read code, and talk to people, and we could have created quantitative metrics for the advance of this technology several years ago. I strongly believe that governments should be in the business of measuring and assessing these threats by looking directly at the scientific literature and developing a base of knowledge from which to work out next steps. Forewarned is forearmed here, and we can do that.

I think we also need to do work at the level of norms. At OpenAI, we've been thinking about different ways to release or talk about the technology that we develop. It's challenging, because science runs on openness and we need to preserve that so that science can move forward, but we need to consider different ways of releasing technology, or of talking to people about the technology that we're creating ahead of releasing it.

Finally, I think we need comprehensive AI education. None of this works if people don't know what they don't know. We need to give people the tools to let them understand that this technology has arrived and, though we may make a variety of interventions to deal with the situation, they need to know that it exists. So, as I hope this testimony has made clear, I don't think AI is the cause of this. I think AI is an accelerant to an issue that has been with us for some time, and we do need to take steps here to deal with this problem, because the pace of this is challenging. Thank you very much.

Thank you. Mr. Doermann.

Thank you. Chairman Schiff, Ranking Member Nunes, distinguished members of the committee, thank you for the opportunity to be here this morning to discuss the challenges of countering media manipulation at scale. For more than five centuries, authors have used variations of the phrase "seeing is believing." But in just the past half decade, we've come to realize that that's no longer always true. In late 2013, I was given the opportunity to join DARPA as a program manager, able to address a variety of challenges facing our military and our intelligence communities. Although I'm no longer a representative of DARPA, I did start the Media Forensics program, MediFor.
The general goal of MediFor is to address our limited ability to analyze, detect, and address manipulated media that, at the time, was being used with increasing frequency by our adversaries. It was clear that our manual processes, despite being carried out by exceptionally competent analysts and personnel in the government, could not at the time deal with the problem at the scale at which the manipulated content was being created and proliferated. In typical fashion, the government got ahead of this problem, knowing it was a marathon, not a sprint, and the program was designed to address both current and evolving manipulation capabilities, not with a single point solution but with a comprehensive approach.

What was unexpected, however, was the speed at which this technology would evolve. In just the past five years, we've gone from a new technology that could produce novel results at the time, but nowhere near what could be done manually with basic desktop editing software, to open-source software such as deepfakes that takes the manual effort completely out of the equation. Now, there's nothing fundamentally wrong or evil about the underlying technology that gives rise to the concerns that we're testifying about today. Like basic image and video desktop editors, deepfakes is only a tool. There are a lot more positive applications of generative networks than negative ones. As of today, there are point solutions that can identify deepfakes reliably, but only because the focus of those developing the technology has been on visual deception, not on covering up trace evidence. If history is any indicator, it's only a matter of time before the current detection capabilities are rendered less effective, in part because some of the same mechanisms that are used to create this content can also be used to cover it up.

I want to make it clear, however, that combating synthetic and manipulated media at scale is not just a technical challenge. It's a social one as well, as I'm sure other witnesses will testify this morning. There's no easy solution, and it's likely to get much worse before it gets much better. Yet we have to continue to do what we can. We need to get the tools and the processes into the hands of individuals, rather than relying completely on the government or on social media platforms to police content. If an individual can perform a sniff test and the media smells of misuse, they should have ways to verify it, prove it, or easily report it. The same tools should be available to the press, to social media sites, to anyone who shares and uses this content, because the truth of the matter is, the people who share this stuff are part of the problem, even though they don't know it.

We need to continue to work toward being able to apply automated detection and filtering at scale. It's not sufficient to only analyze questioned content after the fact. We need to be able to apply detection at the front end of the distribution pipeline. Even if we don't take down or prevent manipulated media from appearing, we should provide appropriate warning labels that suggest that the content is not real, not authentic, or not what it's purported to be. That's independent of whether the decisions are made by humans, machines, or a combination. We need to continue to put pressure on social media platforms to realize that the way the platforms are being misused is unacceptable. They must do all they can to address today's issues and not allow things to get worse. Let there be no question that
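As an illustration of the front-end screening and labeling that Dr. Doermann describes, here is a minimal sketch of a detect-then-label pipeline. It assumes a hypothetical manipulation_score detector stub and illustrative threshold values; it is not any platform's or DARPA's actual system, only a sketch of the label-and-throttle-rather-than-silently-remove approach.

```python
# Minimal sketch of front-end screening: score an upload with a detector
# before distribution and attach a warning label rather than only removing
# the content. The detector below is a placeholder stub; a real system
# would run trained forensic models over frames, audio, and metadata.
from dataclasses import dataclass, field

@dataclass
class Upload:
    media_id: str
    labels: list = field(default_factory=list)

def manipulation_score(upload: Upload) -> float:
    """Placeholder: probability that the media is manipulated.

    In practice this would ensemble several forensic detectors
    (face-warp artifacts, audio splicing, sensor-noise inconsistencies).
    """
    return 0.0  # stub value for the sketch

def screen_before_distribution(upload: Upload,
                               label_threshold: float = 0.7,
                               review_threshold: float = 0.9) -> Upload:
    # Run detection at the front end of the distribution pipeline.
    score = manipulation_score(upload)
    if score >= review_threshold:
        # High-confidence detections are held for human review.
        upload.labels.append("held-for-human-review")
    elif score >= label_threshold:
        # Mid-confidence detections are not blocked, but viewers are warned
        # and amplification can be throttled.
        upload.labels.append("warning: possibly manipulated media")
    return upload

print(screen_before_distribution(Upload(media_id="example-clip")).labels)
```

The design choice worth noting is that labeling sits below the review threshold, so uncertain detections warn viewers and slow spread without forcing an immediate takedown decision, whether the final call is made by humans, machines, or a combination.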
