Last night we were here until one in the morning, but those from Armed Services were here until about five or six in the morning, so we may have a few groggy members here on the committee.

In the heat of the 2016 election, as the Russian hacking operation became apparent, my predominant concern was that they would begin dumping forged documents along with the real ones they had stolen. It would have been all too easy for Russia to seed forged documents in a way that made it almost impossible for those reviewing the material to identify the fraudulent items, and by the time the forgeries could be exposed, the damage would be done. Three years later, we are on the cusp of a technological revolution that enables even more sinister forms of deception by actors foreign or domestic: the emergence of doctored media called deepfakes, which give malicious actors the capacity to disrupt entire campaigns, including that for the presidency.

Progress in artificial intelligence algorithms has made it possible to manipulate video imagery with nearly imperceptible results. With sufficient training data, these algorithms can portray a real person doing something they never did, or saying words they never uttered. The technology is readily available and accessible to experts and novices alike, so attribution to a specific author, whether an intelligence service or a single troll, is a constant challenge. And once somebody views a deepfake video, the damage is done; even if you later convince them that they have seen a forgery, they may never lose the negative impression it has left with them. It is also the case that not only can fake videos be passed off as real, but real information can be passed off as fake. This is the so-called liar's dividend, in which people are given the benefit of the doubt in an environment where it is increasingly difficult for the public to determine what is true.

To give members a sense of the quality of this technology today, I want to share some examples, and even these are not state of the art. First, from Bloomberg Businessweek, a demonstration of the cloned voice of one of its journalists. Let's watch.
We will put my voice to the test and call my mother to see if she recognizes me. Hey, mom. What are you guys up to today? We didn't have any electricity this morning, so we are just hanging around the house, finishing up work and waiting for the boys to get home. Okay. I think I'm coming down with a virus. You feel bad? [laughter] I was messing around with you, and you were talking to the computer.

It's bad enough that it is a fake, but that he deceives his mother seems cruel. Next is a puppet-master-style deepfake video. As you can see, its makers can co-opt the head movements of their targets and turn a world leader into a ventriloquist's dummy. Next, I want to highlight research from an acclaimed expert: a face-swap video in which one person's face is transformed onto the body of an actress.

"I have never been so excited for my package from L.L. Bean." [laughter] It is difficult to tell the clip from real footage, and it shows you how convincing that technology can be. There are also algorithms that produce completely artificial portraits of persons. Can anyone here pick out which of these faces are real and which are fake? Of course, all four of those faces are fake and synthetically created; none of them are real.

Looking to 2020 and beyond, it does not take any great imagination to envision even more nightmarish scenarios that would leave the public struggling to discern what is real and what is fake: a hostile state actor releasing a video of a candidate accepting a bribe with the goal of influencing an election; a deepfake purporting to be stolen audio of a private conversation between two world leaders; a troll farm using algorithms to write false or sensational news stories at scale, degrading the ability of media platforms and end users to trust what they see. What makes this disinformation so pernicious is the velocity at which false information can spread, and we got a preview of what that might look like when the doctored Nancy Pelosi video went viral in a matter of 48 hours.
That was a crude, manual manipulation, what some call a "cheap fake," but nevertheless it shows the scale of the challenge we face and the responsibilities that social media companies must confront. Already the companies have taken different approaches to the altered video: Facebook, for example, labeled it as false and slowed its spread once it was deemed fake by independent fact-checkers. Now is the time for social media companies to put in place policies to protect users from this kind of misinformation, not after the 2020 elections; by then it will be too late.

In keeping with our series of hearings on threats to national security and our institutions, the committee is devoting this hearing to deepfakes, the spread of these technologies on social media platforms, and the steps we can take to mitigate the threat. We have practitioners here to help contextualize it. Before turning to them, I would like to recognize the Ranking Member for any opening statement.

Thank you, Mr. Chairman. We have seen fake dossiers and everything else, but I do think, with all seriousness, this is real, as you can see. [laughter] In all seriousness, I appreciate the panelists being here for their testimony. I yield back.

These opening statements are part of the record. Welcome to today's panel. First, the policy director of an AI research and technology company based in San Francisco, who also serves on a data task force. Next, the director of the Artificial Intelligence Institute at the University at Buffalo, who until last year was a program manager at DARPA. Danielle Citron is a professor of law who has coauthored articles on the impact of deepfakes on national security and democracy. And finally, a distinguished research fellow whose recent scholarship has addressed influence operations. Welcome to all of you.

Chairman Schiff, Ranking Member, thank you for the invitation to testify about the national security threats posed by AI-generated fake content, including deepfakes.
What we are talking about when we discuss deepfakes is, fundamentally, digital technology making it easier for people to create synthetic media, whether video, audio, or text. People have been able to manipulate media for a long time, but two fundamental things have changed recently. One is the continuing advancement of computing capability: the physical hardware needed to run this software has gotten cheaper and more powerful. At the same time, the software itself has become increasingly accessible, making it dramatically easier to use and enabling new functionality across video and audio. The forces driving this software forward are fundamental to the economic trends of the last few years.

As with other AI technologies, the techniques behind deepfakes are likely to be used in beneficial research as well, for example allowing people with hearing issues to follow what other people are saying, applications that could revolutionize accessibility. At the same time, these capabilities can be used for purposes that justifiably cause unease: researchers have developed techniques that allow them to impersonate people on video and make them appear to do things they have not necessarily done. These trends are potentially accelerating.

So how might we approach this challenge? I think there are several interventions we can make that would improve the state of things. One is institutional intervention. It may be possible for the large-scale technology platforms to develop and share tools for the detection of malicious synthetic media at both the individual account level and the platform level, and we could imagine the companies working together privately, as they do today in cybersecurity, exchanging threat intelligence with each other and with other actors to develop a shared understanding of what this threat looks like. We can also increase funding: as mentioned previously, there is a program at DARPA looking at the detection of these technologies, and I think it would be judicious to consider expanding that funding further so we can develop better insights here.
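The cybersecurity-style intelligence exchange described in this testimony is commonly built on sharing cryptographic fingerprints of known-bad files. A minimal sketch of the idea in Python, where `FLAGGED_FINGERPRINTS` and the sample byte strings are hypothetical placeholders, not any platform's real API:

```python
import hashlib


def fingerprint(media_bytes: bytes) -> str:
    """Derive a compact, shareable fingerprint of a media file."""
    return hashlib.sha256(media_bytes).hexdigest()


# Hypothetical registry of fingerprints that partner platforms have
# flagged as malicious synthetic media and exchanged with each other.
FLAGGED_FINGERPRINTS = {
    fingerprint(b"bytes-of-a-known-manipulated-video"),
}


def is_known_manipulation(media_bytes: bytes) -> bool:
    """Check an upload against the shared registry before distribution."""
    return fingerprint(media_bytes) in FLAGGED_FINGERPRINTS
```

Exact hashes only match byte-identical copies; a production system would add perceptual hashing and learned detectors, but the sharing mechanism between companies is the same.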
I think we can also measure this. What I mean by measurement is this: it is great that we are here now, ahead of 2020, but these technologies have been in open development for several years. It has been possible all along to read the research papers and code and to talk to the people involved, and we could have been creating quantitative metrics for the advancement of this technology for several years. I strongly believe that the government should be in the business of measuring and assessing these threats by looking at the scientific literature, and working out next steps from a position of being forewarned.

We have also been thinking about different ways to release, or to talk about, technology we have developed, and it is challenging, because science runs on openness. We need to preserve that openness so the science continues to move forward, but we need to consider different ways of releasing technology, or of talking to people about the technology we are creating, ahead of releasing it. Finally, I think we need comprehensive AI education. None of this works if people don't know what they don't know, so we need to give people the tools to understand that this technology has arrived. As my testimony has made clear, I don't think AI is the cause of this problem; I think it is an accelerant of an issue that has been with us for some time, and we need to take steps to deal with it because it is very challenging. Thank you very much.

Thank you, Chairman Schiff, Ranking Member Nunes, for the opportunity to be here this morning to discuss the challenges of countering media manipulation at scale. We are all familiar with variations of the phrase "seeing is believing." In the past decade, I was given the opportunity to join DARPA as a program manager, where I was able to address a variety of challenges facing the military and intelligence communities.
That is because, at the time, manipulated imagery was being used with increasing frequency by our adversaries, and it was clear that the review process, despite being carried out by capable personnel in the government, could not deal with the problem at the scale at which manipulated content was being created and proliferated. In typical fashion, the government got ahead of the problem, knowing that it was a marathon, not a sprint, and the program was designed to address both current and evolving capabilities, not with a single point solution but with a comprehensive approach.

What was unexpected, however, was the speed with which the manipulation technology would evolve. In just the past five years, we have gone from a new technology that could produce plausible results, though nowhere near what could be done manually with basic desktop editing software, to open source software that can take the manual effort completely out of the equation. There is nothing fundamentally wrong with the underlying technology that raises the concerns we are testifying about today. Like the basic desktop editors, it is only a tool, and there are a lot more positive applications of these networks than there are negative ones.

As of today, there are point solutions that can identify these manipulations reliably, but only because the focus of those developing the technologies has been on fooling visual perception, not on covering up trace evidence. I want to make it clear, however, that combating manipulated media at scale is not just a technical challenge; it is a social one as well, as other witnesses will testify this morning. We have to continue to do what we can, and we need to get tools and processes into the hands of individuals rather than relying completely on the government or the social media platforms. If individuals can perform a sniff test, and the media smells of misuse, they should have ways to verify it, prove it, or report it.
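One concrete way individuals and platforms could "verify or prove" a piece of media is cryptographic authentication attached at capture time. A minimal sketch in Python, assuming a hypothetical per-device secret key; a real deployment would use public-key signatures (e.g. Ed25519) so that verifiers need no secret, but the tamper-evidence idea is the same:

```python
import hashlib
import hmac

# Hypothetical secret held by the capture device (illustrative only).
DEVICE_KEY = b"per-device-secret-key"


def sign_media(media_bytes: bytes, key: bytes) -> str:
    """Produce an authentication tag for the media at capture time."""
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()


def verify_media(media_bytes: bytes, tag: str, key: bytes) -> bool:
    """Check downstream that the bytes match what the device signed."""
    return hmac.compare_digest(sign_media(media_bytes, key), tag)
```

The tag travels with the file; any edit to the bytes, including a deepfake substitution, makes verification fail, which gives a sniff test that does not depend on spotting visual artifacts.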
The same tools should be available to the press, social media sites, and anyone who shares and uses this content, because the truth of the matter is that they are part of the problem, even if they don't know it. And for filtering at scale, it is not sufficient to analyze questioned content only after the fact; we need to employ detection at the front end of the distribution pipeline, so that content which is not what it purports to be can be flagged, independent of how or whether takedown decisions are made. There should be no question that this is a race: the better the manipulators get, the better the detectors need to be, and there are orders of magnitude more manipulators than there are detectors. It is also a race that may never end, that may never be won, but it is one where we must close the gap and continue to make it less attractive to propagate false information. Forgery is easy and has always been a problem, but it may be the case that we can level the playing field.

When the program was conceived, one thing that kept me up at night was the concern that someday our adversaries would be able to create entire events with minimal effort. These might include images of scenes from different angles, video content that appears to come from different devices, and text spread through various media, providing overwhelming corroboration of something that never happened.

Thank you for the opportunity to discuss the phenomenon of deepfakes, the risks they pose, and what law can and should do about them. I am a professor at the University of Maryland School of Law. There are a few phenomena that come together here. People tend to believe information that confirms their biases, and that is particularly true when the information is novel and negative; the more salacious it is, the more willing we are to pass it on. There are also enterprises built to have us click and share. When we bring all these things together, the provocative deepfake will spread virally. My co-author and I have written about what the law can and should do about it, because there are concrete harms in the here and now for individuals.
Consider an investigative journalist in India who writes about government corruption and the persecution of religious minorities. She wrote a provocative piece in 2018, and what followed was a deepfake video in which her face was morphed into pornography. The first day, it went viral; it was on every social media site, paired with her home address and the suggestion that she was available. The fallout was significant: she had to withdraw from online platforms for several months, and the economic, social, and psychological harm was profound. And as my work on cyber stalking suggests, this phenomenon is going to be disproportionately felt by women, minorities, and people from marginalized communities.

It is not just individuals who are at risk. We can imagine a deepfake released the night before an IPO, timed just right, with the CEO saying something he never said or did, basically admitting to wrongdoing. The market will respond faster than we can debunk the video. I am going to let my colleague take up some of the national security concerns, like the tipping of an election, but the next question is what we do about it, and I expect the panel will be in heated agreement that there is no silver bullet. We need a combination of law, markets, and resilience to get through this.

There are several claims that targeted individuals can bring: they can sue for defamation, for intentional infliction of emotional distress, and under the privacy torts. Criminal law offers too few levers for us to push. There are criminal defamation and impersonation laws, and there is a statute on impersonating a government official, but they are ill-suited to the problems we face today. So we are in the midst of writing a model statute that might be deployed, one that is narrowly tailored to address harmful, false impersonations and would capture some of the harm here. But of course there are practical hurdles for any legal solution.
You have to be able to find the perpetrators to prosecute them, and you have to have jurisdiction over them. And the platforms, the intermediaries, are immune from liability, so we cannot use the legal incentive of liability to get them on the case. I see my time is running out; I look forward to your questions.

Members of the committee, thanks for having me here today. All nations recognize the power of artificial intelligence to revolutionize economies and empower militaries, but those countries with the most advanced AI capabilities and unlimited access to large data will gain enormous advantages in information warfare. AI provides purveyors of disinformation the ability to rapidly exploit psychological vulnerabilities and to create modified content and digital forgeries advancing false narratives against Americans and American interests.

Historically, each advancement in media, from text to speech to video and on to virtual reality, more deeply engages information consumers, enriching the context of experiences and shaping the user's reality. The falsification of audio and video allows manipulators to dupe audiences in highly convincing ways, provoking emotional responses that can lead to widespread mistrust and, at times, physical mobilization. False video and audio, once consumed, can be extremely difficult to refute and counter.

Moving forward, I estimate that Russia, long a purveyor of disinformation, will continue to pursue the acquisition of these capabilities and employ them against adversaries around the world. China, a rival of the U.S. whose AI efforts are powered by data troves that include vast amounts of information stolen from the U.S., has already debuted synthetic anchors in television broadcast journalism. It will likely use these capabilities as part of efforts to discredit detractors, to undermine Western-style democracies, and to distort the reality of American audiences and those of America's allies.
The proliferation of deepfakes presents two dangers. Over the long term, the deliberate development of false synthetic media targeting officials, institutions, and democratic processes could achieve the goal of demoralizing the American constituency. Conspiracy theories already offer a relevant example of how such messages can fuel violence, and these capabilities will increase the frequency and intensity of such outbreaks. In the near term, the developing world, where media consumption has jumped straight from in-person conversation to social media sharing, lacking any form of editorial filter, will be especially threatened by bogus synthetic media campaigns. The mobilizations at the embassy in Cairo, at the consulate, and the rumors of protest at the air base, had they been accompanied by fake audio or video content, could have been far more damaging. I would also point to a story out just hours ago from the Associated Press describing the use of a synthetic profile picture for what would appear to be espionage purposes.