Transcripts For CSPAN2 House Intelligence Hearing On National Security Risks Of Artificial Intelligence

CSPAN2 House Intelligence Hearing On National Security Risks Of Artificial Intelligence, July 14, 2024

... 6:00 in the morning, so we have a few groggy members here on the committee.

In the heat of the 2016 election, as the Russian hacking and dumping operation became apparent, my foremost concern was that the Russians would begin dumping forged documents along with the real ones they stole. It would have been all too easy for Russia or another malicious actor to seed forged documents among the authentic ones, making it almost impossible to identify or rebut the fraudulent material. Even if a victim could expose the forgery, the damage would be done.

Three years later, we are on the cusp of a technological revolution that could enable even more sinister forms of deception and disinformation by malign actors, foreign or domestic. Advances in AI and machine learning have led to the emergence of advanced digitally doctored media, so-called deepfakes, that enable malicious actors to foment chaos or crisis and disrupt entire campaigns, including that for the presidency. Rapid progress in artificial intelligence algorithms has made it possible to manipulate media, video imagery, audio, and text, with incredible, nearly imperceptible results. With sufficient training data, these powerful deepfake-generating algorithms can portray a real person doing something they never did or saying words they never uttered. These tools are readily available and accessible to experts and novices alike, meaning that attribution of a deepfake to a specific author, whether a hostile intelligence service or a single internet troll, will be a constant challenge.

What is more, once someone views a deepfake or fake video, the damage is largely done. Even if later convinced that what they saw was a forgery, that person may never completely lose the negative impression the video left with them. It is also the case that not only may fake videos be passed off as real, but real information can be passed off as fake. This is the so-called liar's dividend, in which people with a propensity to deceive are given the benefit of an environment in which it is increasingly difficult for the public to determine what is true.

To give our members and the public a sense of the quality of deepfakes today, I want to share a few short examples, and even these are not the state of the art. The first comes from Bloomberg Businessweek and demonstrates an AI-powered clone of a journalist's voice, so let's watch that now.

[Video clip:] To really put my computer voice to the test, I'm going to call my dear sweet mother and see if she recognizes me. Hey, Mom, what are you guys up to today? Well, we didn't have any electricity early this morning and we're just hanging around the house. I'm just finishing up work and waiting for the boys to get home. Okay. I think I'm coming down with a virus. Oh, well, you feel bad, huh? I was messing around with you. You were talking to a computer. I thought I was talking to you. It's amazing. [End of clip.]

It's bad enough that the voice was fake, but deceiving his mother and telling her he has a virus seems cruel. The second clip comes from Quartz and demonstrates a puppet-master type of deepfake video. As you can see, these people are able to co-opt the head movements of their targets; married with convincing audio, this can turn a world leader into a ventriloquist's dummy. Next, a brief CNN clip will highlight research by an acclaimed expert on deepfakes from UC Berkeley, featuring an example of a so-called face-swap video in which Senator Elizabeth Warren's face is seamlessly transposed onto the body of SNL actress Kate McKinnon.
[Video clip:] I haven't been this excited since I found out my package from L.L. Bean had shipped. I was ready to fight. [End of clip.]

Now, the only problem with that video is that Kate McKinnon actually looks like Senator Warren, but both figures were Kate McKinnon, who had Elizabeth Warren's face swapped onto her own. It shows you just how convincing that kind of technology can be. These algorithms can also learn from pictures of real faces to create completely artificial portraits of persons who do not exist at all. Can anyone here pick out which of these faces is real and which are fake? Of course, all four are fake. All four of those faces were synthetically created, and none of those people are real.

Think ahead to 2020 and beyond. One does not need any great imagination to envision even more nightmarish scenarios that would leave the government, the media, and the public struggling to discern what is real and what is fake: a state-backed actor creates a deepfake video of a candidate accepting a bribe; an individual hacker claims to have stolen a conversation between two leaders when in fact no such conversation took place; a troll farm writes false or sensational news stories at scale, flooding social media platforms and overwhelming journalists' ability to verify, and users' ability to trust, what they are seeing or reading.

What enables deepfakes is the ubiquity of social media and the velocity at which false information can spread. We got a preview of that recently when a doctored video of Speaker Nancy Pelosi went viral on Facebook, receiving millions of views in the span of 48 hours. That video was not an AI-assisted deepfake but a rather crude manual manipulation that some have called a cheap fake. Nonetheless, the video's virality represents a preview of the challenge we face and the responsibilities that social media companies must confront. The companies have taken different approaches, with YouTube deleting the altered video of Speaker Pelosi, while Facebook labeled it as false and throttled back the speed with which it spread once it was determined to be fake. Now is the time to put in place policies to protect users from this kind of misinformation, not in 2021 after viral deepfakes have polluted the 2020 elections. By then it will be too late.

So, in keeping with a series of open hearings that examine challenges to national security and our democratic institutions, this hearing is devoted to deepfakes and synthetic media. We need to understand the implications of the AI technology and the internet platforms that give deepfakes their reach before we can consider appropriate steps to mitigate the potential harms. We have a distinguished panel of experts to help us understand and contextualize the potential threat of deepfakes, but first I would like to recognize Ranking Member Nunes for any opening statement he would like to give.

Thank you, Mr. Chairman. I join you in your concern about deepfakes, and I want to add to that fake news and fake dossiers. I do think that, in all seriousness, this is real. If you get online you can see pictures of yourself, Mr. Chairman, on there; some of them are quite entertaining, though maybe not entertaining for you. I decided not to play them today on the screen. But in all seriousness, I appreciate the panelists being here and look forward to your testimony. I yield back.

I thank the Ranking Member. Without objection, the opening statements will be made part of the record.
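The synthetic faces the chairman showed are typically produced by generative adversarial networks (GANs), in which a generator network learns to map random latent vectors to realistic images. As a purely illustrative sketch of that mechanism, not the actual system behind those faces, and with all layer sizes and names chosen arbitrarily, a toy PyTorch generator might look like this:

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy DCGAN-style generator: latent vector -> 64x64 RGB image.

    Illustrative only; production face generators are far larger
    and are trained on millions of real face photographs.
    """
    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            # Project the latent vector up to a 4x4 feature map,
            # then repeatedly upsample until reaching 64x64 pixels.
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z has shape (batch, latent_dim); reshape to (batch, latent_dim, 1, 1).
        return self.net(z.view(z.size(0), -1, 1, 1))

if __name__ == "__main__":
    gen = TinyGenerator()
    z = torch.randn(4, 100)   # four random latent vectors
    faces = gen(z)            # four synthetic 64x64 images (untrained here)
    print(faces.shape)        # torch.Size([4, 3, 64, 64])
```

Because each output is a fresh sample from a learned distribution rather than a photograph of anyone, every face produced this way belongs to no real person, which is why all four of the chairman's example faces were fake.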
I now want to welcome today's panel: Jack Clark, the policy director of OpenAI, a research and technology company in San Francisco, and a member of the Center for a New American Security's task force on artificial intelligence and national security. Next, David Doermann, a professor and the director of the Artificial Intelligence Institute at the University at Buffalo; until recently he was the program manager of DARPA's Media Forensics program. Danielle Citron, a professor of law at the University of Maryland, who has written several articles on the potential impact of deepfakes on national security and democracy. And finally, Mr. Clint Watts, a distinguished research fellow at the Foreign Policy Research Institute and a senior fellow at the German Marshall Fund's Alliance for Securing Democracy, whose recent scholarship has addressed social media influence operations. Welcome to you all, and we will start with you, Mr. Clark.

Chairman Schiff, Ranking Member, and committee members, thank you for the invitation to testify about the national security threats posed by the intersection of AI, fake content, and deepfakes. What are we talking about when we discuss this subject? Fundamentally, we are talking about digital technologies that make it easier for people to create synthetic media: video, images, audio, or text. People have been manipulating media for a very long time, as you well know, but things have changed recently. I think there are two fundamental reasons why we are here. One is the continued advancement of computing capabilities, the physical hardware we use to run software on, which has become significantly cheaper and more powerful. At the same time, software has become increasingly accessible and capable, and some of that software is starting to incorporate AI, which makes it dramatically easier to manipulate media and allows for a step change in functionality for things like video editing or audio editing, which were previously very difficult. The forces driving cheaper computing and easier-to-use software are fundamental to the economy and to many of the innovations we have had in the last few years.

So when we think about AI, one of the confounding factors here is that the same AI technologies used in the production of synthetic media or deepfakes are also likely to be used in valuable scientific research: by scientists to allow people with hearing issues to understand what other people are saying, or in molecular assays. At the same time, these techniques can be used for purposes that justifiably cause unease, such as being able to synthesize the sound of someone's voice, impersonate them on video, and write text in the style they use online. We have seen researchers develop techniques that combine these things, allowing them to create a virtual person who can say things they haven't said and appear to do things they haven't necessarily done. I am sure that members of the committee are familiar with run-ins with the media and know how awkward it is to have words put in your mouth that you didn't say. Deepfakes take these problems and accelerate them.

How might we approach this challenge? I think there are several interventions we can make that will improve the state of things. One is institutional intervention.
It may be possible for large-scale technology platforms to try to develop and share tools for the detection of malicious synthetic media at both the individual account level and the platform level, and we can imagine these companies working together privately, as they do today in cybersecurity, where they exchange threat intelligence with each other and with other actors to develop a shared understanding of what this looks like.

We can also increase funding. As mentioned, Dr. David Doermann led a program here; we have existing initiatives looking at the detection of these technologies, and I think it would be judicious to consider expanding that funding further so we can develop better insights.

I think we can also measure this. What I mean by measurement is that it is great that we are here now, ahead of 2020, but these technologies have been in open development for several years, and it has been possible for us to read research papers, read code, and talk to people; we could have been creating metrics for the advancement of this technology for several years. I strongly believe that government should be in the business of measuring and assessing these threats by looking directly at the scientific literature and developing a base of knowledge from which to work out next steps. Being forewarned is being forearmed here, and we can do that.

I think we also need to do work at the level of norms. At OpenAI we have been thinking about different ways to release or talk about the technology we develop. This is challenging, because science runs on openness and we need to preserve that so science continues to move forward, but we need to consider different ways of releasing technology, or talking to people about the technology we are creating, ahead of releasing it.

Finally, I think we need comprehensive AI education. Underpinning all of this work is that people don't know what they don't know, so we need to give people the tools to understand that this technology has arrived; though we may make a variety of interventions to deal with the situation, people need to know that it exists. As I hope this testimony has made clear, I don't think AI is the cause of this problem. I think AI is an accelerant to an issue that has been with us for some time, and we need to take steps to deal with it because its pace is challenging. Thank you very much.

Thank you. Mr. Doermann.

Thank you, Chairman Schiff, and thank you for the opportunity to be here this morning to discuss the challenges of countering media manipulation at scale. For more than five centuries, authors have used variations of the phrase "seeing is believing," but in just the past half decade we have come to realize that is no longer always true. In late 2013 I was given the opportunity to join DARPA as a program manager, addressing a variety of challenges facing our nation's military and intelligence community. Although I am no longer a representative of DARPA, I did start the Media Forensics program, MediFor, which was created to address the many technical aspects of these problems. The general problem MediFor addresses is our limited ability to analyze, detect, and counter manipulated media, which at the time was being used with increasing frequency by our adversaries. It was clear that our manual processes, despite being carried out by exceptionally competent analysts and personnel in the government, could not at the time deal with the problem at the scale at which manipulated content was being created and proliferated.
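The shared detection tooling Clark describes, and the automated analysis at scale Doermann says manual processes cannot deliver, is at its core a classifier trained to estimate the probability that a piece of media is synthetic. The following is a minimal sketch of that idea, assuming PyTorch; the architecture, names, and threshold are illustrative assumptions, and production forensic detectors (such as those developed under MediFor) are far more sophisticated:

```python
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    """Toy binary classifier: image -> probability it is synthetic.

    A sketch of the kind of shared detection tool discussed in the
    testimony; real detectors use much deeper networks trained on
    large corpora of labeled real and fake media.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling to a 32-d descriptor
        )
        self.head = nn.Linear(32, 1)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.features(images).flatten(1)
        return torch.sigmoid(self.head(feats))  # estimated P(synthetic)

def flag_if_suspect(prob_fake: float, threshold: float = 0.8) -> bool:
    # A platform would tune this threshold against false-positive costs.
    return prob_fake >= threshold

if __name__ == "__main__":
    det = DeepfakeDetector()
    frames = torch.randn(2, 3, 64, 64)  # stand-in for media frames
    print(det(frames).shape)            # torch.Size([2, 1])
```

Platforms exchanging threat intelligence, as Clark suggests, could share not only flagged accounts but the model weights and detection scores such a classifier produces.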
In typical DARPA fashion, the government got ahead of the problem, knowing it was a marathon, not a sprint, and the program was designed to address both current and evolving manipulation capabilities, not with a single point solution but with a comprehensive approach. What was unexpected was the speed at which this manipulation technology would evolve. In just the past five years, we have gone from a new technology that could produce novel results, but nowhere near what could be done manually with basic desktop editing software, to open-source software that can take the manual effort out of the equation.

There is nothing fundamentally wrong with or evil about the underlying technology that gives rise to the concerns we are testifying about today. Like basic image and video desktop editors, deepfake software is only a tool, and there are more positive applications than negative ones. There are solutions to identify deepfakes, but only because the focus of those developing the deepfake technology has been on visual deception, not on covering up trace evidence. If history is any indicator, it is only a matter of time before current detection capabilities become less effective, in part because some of the same mechanisms used to create this content can also be used to cover it up.

I want to make it clear that combating synthetic and manipulated media at scale is not just a technical challenge. It is a social one as well, as I am sure the other witnesses will testify this morning, and there is no easy solution. It is likely to get much worse before it gets better. Yet we have to continue to do what we can. We need to get tools and processes into the hands of individuals rather than relying solely on the government or social media platforms to police content. If individuals perform a sniff test and the media smells of misuse, they should have ways to verify it, disprove it, or easily report it. The same tools should be available to the press, to social media sites, and to anyone who shares and uses this content, because the truth of the matter is that the people who share this material are part of the problem.

We need to continue to work toward applying automated detection and filtering at scale. It is not sufficient to analyze questioned content only after the fact. We need to apply detection at the front edge of the distribution pipeline. Even if we do not take down or prevent manipulated media from appearing, we should provide appropriate warning labels suggesting that the content is not real, not authentic, or not what it is purported to be, and that is independent of whether the decisions are made by humans, machines, or a combination of the two.

We need to continue to put pressure on social media companies to realize that the ways their platforms are being misused are unacceptable, and that they must not allow things to get worse. Let there be no question that this is a race: the better the manipulators get, the better the detectors need to be. It is a race that may never end and may never be won, but it is one where we must close the gap and continue to make it less attractive financially, socially, and politically to propagate false information. Like spam and malware, it may always be a problem, but it may be the case that we can level the playing field.
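Doermann's call to apply detection at the front edge of the distribution pipeline, attaching warning labels rather than only analyzing content after the fact, can be sketched concretely. Everything below (function names, label text, thresholds) is hypothetical, intended only to show the flow he describes: score media at upload time, label rather than necessarily remove, and route uncertain cases to human review:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Upload:
    media_id: str
    payload: bytes
    labels: List[str] = field(default_factory=list)

def ingest(upload: Upload,
           score_authenticity: Callable[[bytes], float],
           fake_threshold: float = 0.9,
           review_threshold: float = 0.5) -> Upload:
    """Front-edge moderation: score media at upload time and attach
    warning labels before it enters the distribution pipeline."""
    p_fake = score_authenticity(upload.payload)
    if p_fake >= fake_threshold:
        # Per the testimony: label rather than necessarily take down,
        # so users are warned the content may not be authentic.
        upload.labels.append("WARNING: likely manipulated media")
    elif p_fake >= review_threshold:
        # Uncertain cases go to reviewers; decisions may be made by
        # humans, machines, or a combination of the two.
        upload.labels.append("pending human review")
    return upload

if __name__ == "__main__":
    # Stand-in detector: a real deployment would call a trained model
    # such as the classifier sketched earlier.
    dummy_detector = lambda payload: 0.95
    item = ingest(Upload("vid123", b"..."), dummy_detector)
    print(item.labels)  # ['WARNING: likely manipulated media']
```

The design choice worth noting is that labeling at ingest time scales to the volume of uploads, while the human-review queue absorbs only the uncertain middle band, which matches Doermann's point that questioned content cannot all be analyzed manually after the fact.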
When the MediFor program was conceived at DARPA, one thing that kept me up at night was the concern that an adversary could create events with little effort: media including video of a scene, audio content, and text, delivered through various mediums, providing an overwhelming amount of evidence that an event occurred. This could lead to social unrest or retaliation before...
