Committee. In the heat of the 2016 election, as the Russian hacking and dumping operation became apparent, my predominant concern was that the Russians would begin dumping forged documents along with the real ones they had stolen. It would have been all too easy to seed forged documents among the authentic ones in a way that would make them almost impossible to identify. Even if a victim could ultimately expose a forgery for what it was, the damage would be done. Three years later, we are on the cusp of a technological revolution that could enable even more sinister forms of deception and disinformation by malign actors, foreign or domestic. Advances in AI and machine learning have led to the emergence of advanced, digitally doctored types of media, so-called deepfakes, that enable malicious actors to foment chaos and division and to disrupt entire campaigns, including that for the presidency. Progress in artificial intelligence algorithms has made it possible to manipulate media, whether video, imagery, audio, or text, with incredible, nearly imperceptible results. With sufficient training data, these powerful algorithms can portray a real person doing something they never did, or saying words they never uttered. These tools are readily accessible to experts and novices alike, meaning that attribution of a deepfake to a specific author, whether a hostile intelligence service or a single internet troll, will be a constant challenge. What is more, once someone views a deepfake, the damage is largely done. Even if later convinced that what they saw was a forgery, that person may never completely lose the lingering negative impression the video has left with them. It is also the case that not only may fake videos be passed off as real, but real information can be passed off as fake. This is the so-called liar's dividend, in which people with a propensity to deceive are given the benefit of an environment in which it is increasingly difficult for the public to determine what is true.
To give our members and the public a sense of the quality of today's technology, I want to share a few short examples, and even these are not the state of the art. The first comes from Bloomberg Businessweek and demonstrates an AI-powered clone of the voice of one of its journalists. Let's watch. [video clip] I'm going to call my dear, sweet mother and see if she recognizes me. [phone ringing] Mom? Hey, what are you guys up to today? We didn't have any electricity early this morning and we are just hanging around the house. I'm just finishing up work and waiting for the boys to get home. OK. I think I'm coming down with a virus. I was messing around with you, you were talking to a computer. I thought I was talking to you. It is bad enough that the voice was fake, but he is deceiving his mother and telling her that he's got a virus; that seems downright cruel. The second clip demonstrates a puppet-master type of video. As you can see, the researchers are able to co-opt the head movements of their targets, and with convincing audio they can turn a world leader into a ventriloquist dummy. Next, a brief CNN clip highlighting the research of an acclaimed expert from UC Berkeley and featuring an example of a so-called face-swap video, in which Senator Elizabeth Warren's face is seamlessly transplanted onto the body of Kate McKinnon. I haven't been this excited since I found out my package from L.L. Bean had shipped. So, the only problem with that video is that Kate McKinnon actually looks a lot like Elizabeth Warren. Both were Kate McKinnon; the one on the left just had Elizabeth Warren's face swapped onto her. But it shows you how convincing that kind of technology can be. These algorithms can also learn from pictures of real faces to make completely artificial portraits of persons who do not exist at all. Can you tell which of these faces are real and which are fake? Of course, as you may have guessed, all four are fake. All of them are synthetically created.
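Synthetic portraits like the ones just described are typically produced by generative adversarial networks, in which a generator learns to turn random noise into samples a discriminator cannot distinguish from real data. The following is a minimal, illustrative sketch of that two-player training loop, assuming nothing beyond NumPy: real systems train deep networks on millions of photographs, while here the "data" is one-dimensional and both models are single linear units. It is a toy demonstration of the idea, not any production system.

```python
import numpy as np

# Toy GAN: a generator (scale-and-shift of noise) tries to fool a
# logistic-regression discriminator into scoring its samples as "real".
rng = np.random.default_rng(0)

def sigmoid(x):
    # Clipping avoids overflow warnings if the discriminator grows large.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60, 60)))

g = np.array([1.0, 0.0])   # generator params: fake = g[0]*noise + g[1]
d = np.array([1.0, 0.0])   # discriminator params: P(real) = sigmoid(d[0]*x + d[1])
lr = 0.02

for _ in range(3000):
    z = rng.normal(0.0, 1.0, 64)
    fake = g[0] * z + g[1]
    real = rng.normal(4.0, 0.5, 64)      # "real" data clusters near 4.0

    # Discriminator step: push P(real) toward 1 on real data, 0 on fakes.
    for x, label in ((real, 1.0), (fake, 0.0)):
        err = label - sigmoid(d[0] * x + d[1])
        d += lr * np.array([np.mean(err * x), np.mean(err)])

    # Generator step: nudge fakes in the direction the discriminator
    # currently scores as "more real" (chain rule through d).
    err = 1.0 - sigmoid(d[0] * fake + d[1])
    g += lr * np.array([np.mean(err * d[0] * z), np.mean(err * d[0])])

# The generator's output mean (g[1], since the noise has mean 0) drifts
# toward the real data's mean; even this toy can oscillate around it.
print(g)
```

Image GANs apply the same adversarial objective with convolutional networks in place of these linear units, which is why the resulting faces can fool human perception while still leaving statistical traces that detectors look for.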
Thinking ahead to 2020 and beyond, one does not need great imagination to envision even more nightmarish scenarios that would leave the government and the media struggling to discern what is real and what is fake. A state actor creates a deepfake video of a political candidate accepting a bribe, with the goal of influencing an election. An individual hacker claims to have stolen audio of a private conversation between two world leaders, when in fact no such conversation took place. Or a troll farm uses text-generating algorithms to write false news stories at scale, overwhelming journalists' ability to verify them and users' ability to trust what they are seeing or reading. What enables deepfakes to become truly pernicious is the ubiquity of social media and the velocity at which false information can spread. We got a preview of what that might look like recently, when a doctored video of Nancy Pelosi went viral on Facebook, receiving millions of views in the span of 48 hours. That video was not a deepfake but a crude manual manipulation. Nonetheless, it demonstrates the scale of the challenge we face and the responsibilities that social media companies must confront. Already, the companies have taken different approaches, with YouTube deleting the altered video while Facebook labeled it as false and throttled back the speed of its spread once it was deemed fake. Now is the time for social media companies to put in place policies to protect users from this kind of misinformation, not in 2021, after viral deepfakes have polluted the 2020 elections. In keeping with a series of open hearings that have examined different challenges to our national security, the committee is devoting this hearing to deepfakes and synthetic media. We need to understand the implications, the underlying technology, and the reach these fakes can attain on the internet before we consider appropriate steps to mitigate the potential harms.
We have a distinguished panel of experts and practitioners to help us understand and contextualize the potential threat, but before turning to them, I would like to recognize Ranking Member Nunes for any statement he would like to give. Thank you, Mr. Chairman. I join you in your concern, and I want to add to that fake news, fake dossiers, and everything else we have in politics. I do think that, in all seriousness, this is real. If you get online, you can see pictures of yourself, Mr. Chairman, on there. Some of them are quite entertaining. I decided not to bring any today to put on the screen. [laughter] In all seriousness, I appreciate the panelists being here. I thank the Ranking Member, and I would like to welcome today's panel. First, Jack Clark, who is the policy director of a research and technology company based in San Francisco and a member of the Center for a New American Security task force on artificial intelligence and national security. Next, Dr. David Doermann, a professor and director of the Artificial Intelligence Institute at the University at Buffalo. Until last year, he was the program manager of DARPA's Media Forensics program. Danielle Citron is a professor of law at the University of Maryland, and she has coauthored several notable articles about the potential impacts of deepfakes on national security and democracy. And finally, Clint Watts, a distinguished research fellow at the Foreign Policy Research Institute and a senior fellow at the Alliance for Securing Democracy, whose recent scholarship addresses these threats. Welcome to all of you, and why don't we start with you, Mr. Clark. Chairman Schiff, Ranking Member Nunes, and committee members, thank you for the invitation to testify about the national security threats posed by the AI-enabled fabrication of content. So, what are we talking about when we discuss this subject? Fundamentally, we are talking about digital technologies that make it easier for people to create synthetic media, and that can be video, images, audio, or text.
People have been manipulating media for a very long time, as you know, but things have changed recently. I think there are two fundamental reasons why we are here. One is the continued advancement of computing capabilities, that is, the physical hardware we use to run software on, which has become significantly cheaper and more powerful. At the same time, software has become increasingly accessible, and some of that software is starting to incorporate AI, which makes it dramatically easier to manipulate and edit media. It allows for step-change functionality in video or audio editing that was previously very difficult. Cheaper computing and easier-to-use software are fundamental drivers of the economy and of many of the innovations we have had in the last few years. When we think about AI, one of the confounding factors here is that the same AI technology used in the production of synthetic media, or deepfakes, is also likely to be used in valuable scientific research, for example hearing aids that help people understand what is being said to them, and other applications that may revolutionize medicine. At the same time, these techniques can also be used for purposes that justifiably cause unease, like being able to synthesize someone else's voice, impersonate them on video, and write text in the style they use online. We have also seen researchers develop techniques that combine these things, creating digital likenesses of people who say things they have not said and appear to do things they have not necessarily done. I know how awkward it can be to have words put in your mouth that you did not say. Deepfakes take this problem and accelerate it. I believe there are several interventions we can make that will improve the state of things. One is institutional intervention.
It may be possible for large-scale technology platforms to try to develop tools for the detection of malicious synthetic media at both the individual account level and the platform level. And we could imagine these companies working together privately, as they do today in cybersecurity, where they exchange threat intelligence with each other and develop a shared understanding of what the threat looks like. We can also increase funding. As mentioned, Dr. Doermann previously led a program here. We have existing initiatives that are looking at the detection of these technologies, and I think it would be judicious to consider expanding that funding further so we can develop better insights. I also think we can measure this. What I mean by measurement is this: it is great that we are here now, ahead of 2020, but these technologies have been in open development for several years. It is possible for us to read research papers and code and talk to people, and we could have created metrics for the advancement of this technology years ago. I believe that government should be in the business of measuring and assessing these threats by looking directly at the scientific literature and working from that basis to lay out next steps. I think we also need to do work at the level of norms. At OpenAI, we have been thinking about different ways to release or talk about the technology that we develop. It is challenging, because science runs on openness, and we need to preserve that so science continues to move forward, but we need to consider different ways of releasing technology, or of talking to people about the technology we are creating ahead of releasing it. Finally, I think we need comprehensive AI education. None of this will work if people do not know what they do not know. We need to give people the tools to understand that this technology has arrived. Although we may make a variety of interventions to deal with the situation, people need to know that it exists.
As I hope this testimony has made clear, I do not think AI is the cause of this. I think AI is an accelerant to an issue that has been with us for some time. We do need to take steps to deal with this problem, because at its base it is challenging. Thank you very much. Thank you. Dr. Doermann? Dr. Doermann: Thank you. Chairman Schiff, Ranking Member Nunes, thank you for the opportunity to be here this morning to discuss the challenges of countering media manipulation at scale. Authors have long used the phrase "seeing is believing," but in the past half decade we have come to realize that is not always true. In late 2013, I was given the opportunity to join DARPA as a program manager and was able to address a variety of challenges facing our military and intelligence communities. Although I am no longer a representative of DARPA, I did start the Media Forensics program, MediFor, and it was created to address many technical aspects of the problems we are talking about today. The general problem MediFor addresses is our ability to analyze and detect manipulated media, which was being used with increased frequency by our adversaries. It was clear that our manual processes, despite being carried out by competent analysts and personnel in the government at the time, could not deal with the scale at which manipulated content was being created and proliferated. The government got ahead of this problem knowing that it was a marathon, not a sprint. The program was designed to address both current and evolving manipulation capabilities, not with a single point solution but with a comprehensive approach. Over the past five years, we have gone from a new technology that could produce novel results, but at the time nowhere near what could be done manually with basic desktop editing software, to open-source software that can take the manual effort completely out of the equation.
There is nothing fundamentally wrong or evil about the underlying technology that raises the concerns we are testifying about today. Deepfakes are only a tool. There are more positive applications of these networks than there are negative ones. As of today, there are solutions that can identify deepfakes reliably, but only because their creators have focused on visual deception, not on covering up trace evidence. If history is any indicator, it is only a matter of time before the current detection capabilities are rendered less effective, in part because some of the same mechanisms that are used to create this content are also used to cover it up. I want to make it clear, however, that combating synthetic and manipulated media at scale is not just a technical challenge. It is a social one as well, as I am sure others will be testifying this morning. There is no easy solution, and it is likely to get much worse before it gets better. We have to continue to do what we can. We need to get tools and processes into the hands of individuals, rather than relying completely on the government or social media platforms to police content. If individuals can perform a sniff test and the media smells of misuse, they should have ways to verify it, disprove it, or easily report it. The same tools should be available to the press, to social media sites, and to anyone who shares or uses this content. The truth of the matter is that people who share this stuff are part of the problem, even if they do not know it. We need to apply automated detection and filtering at scale. It is not sufficient to analyze content after the fact; we need to apply detection at the front end of the distribution pipeline. Even if we cannot keep manipulated media from appearing, we should try to provide appropriate warning labels that suggest the content is not real or authentic, or not what it purports to be. This is independent of whether the decisions are made by humans, machines, or a combination of the two.
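The "front end of the pipeline" idea above, scoring uploads before distribution and attaching warning labels rather than only removing content, can be sketched as a simple ingestion flow. The detector below is a hard-coded placeholder standing in for real media-forensics models, and every name and threshold is an illustrative assumption, not any platform's actual system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Upload:
    media_id: str
    score: float = 0.0            # estimated probability of manipulation
    label: Optional[str] = None   # warning label attached at ingest, if any

def forensic_score(upload: Upload) -> float:
    # Placeholder for an ensemble of manipulation detectors (splicing
    # analysis, GAN-artifact checks, lip-sync consistency). Values are
    # hard-coded purely for illustration.
    return {"authentic.mp4": 0.03, "doctored.mp4": 0.91}.get(upload.media_id, 0.5)

def ingest(upload: Upload, flag_at: float = 0.8, review_at: float = 0.4) -> Upload:
    """Score media before it is distributed, so a label travels with it,
    instead of analyzing content only after it has gone viral."""
    upload.score = forensic_score(upload)
    if upload.score >= flag_at:
        upload.label = "warning: likely manipulated"
    elif upload.score >= review_at:
        upload.label = "pending human review"
    return upload

print(ingest(Upload("doctored.mp4")).label)   # -> warning: likely manipulated
```

The mid-band "pending human review" path reflects the testimony's point that the decision can be made by humans, machines, or a combination, with automation handling scale and people handling ambiguity.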
We need to continue to put pressure on social media companies to realize that the way their platforms are being misused is unacceptable. They must do all they can to address today's issues and not allow things to get worse. But there should be no question that this is a race. The better the manipulators get, the better the detectors need to be, and there are more manipulators than there are detectors. It may be a race that is never won, but we must close the gap and make it less attractive to propagate false information. Like spam and malware, it is always evolving and always a problem, but it may be the case that we can level the playing field. One thing that kept me up at night was the thought that someday our adversaries would be able to create entire events with minimal effort. These events might include images and scenes from different angles, video content appearing to come from different devices, and text, all providing overwhelming amounts of evidence that an event has occurred, which could lead to social unrest or retaliation before it is countered. If the past five years are any indication, that someday is not far in the future. Thank you. Thank you. Professor Citron? Professor Citron: Thank you very much, Chairman Schiff, Ranking Member Nunes, and the committee, for having me here today to talk about the phenomenon of deepfakes, the risks that they pose, and what law can and should do about it. A few things come together to make deepfakes particularly troubling when they are provocative and destructive. As human beings, video and audio are visceral to us; we tend to believe what our eyes and ears are telling us. We also tend to believe, and to share, information that confirms our biases, and that is particularly true when the information is novel and negative. The more salacious, the more willing we are to pass it on. And we are seeing deepfakes, or will see them, in social networks that are ad-driven, whose business model prizes having us click and share.
When we bring these things together, provocative and salacious deepfakes will be spread virally. There are so many harms, which my coauthor and I have written about, but I will focus on some of the more concrete ones and on what law can and should do. There are concrete harms in the here and now, especially for individuals. An investigative journalist in India writes about government corruption and the persecution of religious minorities. She is used to getting death threats and rape threats; for her, it is par for the course. She wrote a provocative piece in April 2018, and what followed was posters circulating over the internet: deepfake sex videos of her. Her face was morphed into pornography, and it was on every social media site. She explained to me that it was on millions of phones in India. The next day, paired with the deepfake sex video, were rape threats, her home address, and the suggestion that she was available for sex. The fallout was significant. She had to basically go offline. She could not work. Her sense of safety and security was shaken. It upended her life, and she had to withdraw from online platforms for several months. The economic, social, and psychological harm is profound. As is true in my work on cyber stalking, this phenomenon is going to be increasingly felt by women, minorities, and people from marginalized communities. Now imagine a deepfake, timed just right the night before an IPO, with the CEO saying something he never said, or admitting that the company was insolvent. The market will respond to that deepfake far faster than we can debunk it. Mr. Watts will take up some of the national security concerns, like the tipping of elections and the upending of public safety, but the next question is, what do we do about it? It seems like our panel will be in heated agreement that there is no silver bullet. We need a combination of law, markets, and real societal resilience to get through this.
But law has a modest role to play. There are several claims that targeted individuals can bring. They can sue for defamation or intentional infliction of emotional distress, but it is incredibly expensive to sue. Criminal law offers too few levers for us to push. At the state level, there are a handful of criminal defamation and impersonation laws. At the federal level, there is a statute on impersonating a government official, but it does not serve the purposes we need today. A model statute we have developed would narrowly address harmful, false impersonations and would capture some of the harm here. Of course, there are practical hurdles for any legal solution. We have to be able to find the defendants in order to prosecute or sue them, and you have to have jurisdiction over them. And the platforms, the intermediaries, our digital gatekeepers, are immune from liability, so we cannot use the legal incentive of liability to get them on the case. I look forward to your questions, and thank you. Thank you very much. Mr. Watts? Mr. Watts: Chairman Schiff, Ranking Member Nunes, thank you for having me here today. All advanced nations recognize the power of artificial intelligence to revolutionize economies and empower militaries, but the countries with the most advanced AI capabilities and access to large data troves will gain information warfare advantages as well. They will be able to identify psychological vulnerabilities and create modified content and digital forgeries advancing false narratives against Americans and American interests. Historically, each media advancement more deeply engages information consumers, enriching the content of their experiences and shaping each user's reality. The falsification of video allows audience members to be duped in ways that can lead to widespread mistrust and physical mobilizations. Fabricated video and audio can be extremely difficult to refute and counter. Moving forward, I would suggest that Russia will continue to employ these outputs against its adversaries around the world.
I suspect it will be joined, and potentially outpaced, by China, whose artificial intelligence capabilities rival those of the U.S. and which holds vast amounts of information stolen from the U.S. China has already shown a propensity to employ synthetic media in broadcast journalism. It will likely use deepfakes as part of information campaigns seeking to discredit its foreign and domestic detractors and to distort the reality of American audiences and the audiences of American allies. Deepfake proliferation presents two clear dangers. Over the long term, the deliberate development of false synthetic media will target U.S. officials, institutions, and democratic processes, with the goal of subverting democracy and demoralizing the American constituency. In the near and short term, this capability may incite physical mobilizations and spark outbreaks of violence. The recent conspiracies proliferating on WhatsApp in India show how fake media messages can fuel violence, and the spread of deepfake capabilities will only increase the intensity of such violent outbreaks. U.S. military personnel overseas will be prime targets for misinformation planted by adversaries, and in conflict zones will likely be threatened by bogus synthetic media campaigns. Had recent mobilizations, such as those against the U.S. embassy in Cairo or against a U.S. air base overseas, been accompanied by fake audio or video content, they could have been far more damaging. I would also point to a story out just hours ago from the Associated Press that shows the use of a synthetic picture for what appears to be espionage purposes on LinkedIn, a honey-potting attack. Recent discussions have focused on foreign adversaries, but the greatest threat may come not from abroad but from home, and not from nation-states but from private actors. In my written testimony, I have provided a chart of the range of manipulators, from authoritarian nation-states to private actors, who will use or acquire deepfakes as needed.
Recent examples suggest it could be oligarchs, corporations, political action groups, or activists, with public relations and financial support to amplify these deepfakes in international and domestic contexts. The effect will be the same: degradation of democratic institutions and elected officials, and weakened trust in social media platforms as well. First, Congress should implement legislation prohibiting U.S. officials, elected representatives, and agencies from creating and distributing false and manipulated content. The U.S. government must always be the purveyor of facts and truth to its constituents, ensuring the effective administration of democracy via a shared basis of reality. Second, policymakers should work with social media companies to develop standards for content accountability. Third, the U.S. government should partner with the private sector to implement digital verification signatures designating the date, time, and physical origin of content. Fourth, social media companies should enhance labeling across platforms and codify how such content should be appropriately marked. Consumers should be able to determine the source of information and whether it is an authentic depiction of people and events. Fifth, and what is most pressing right now, the U.S. government, from a national security perspective, must maintain intelligence on adversaries' deepfake capabilities and on what they are doing to produce such content. The State Department should immediately develop response plans for deepfake smear campaigns and violent mobilizations overseas, in an attempt to mitigate harm to U.S. personnel and interests. And the last recommendation is public awareness of deepfakes, which will assist in tamping down on attempts at inciting panic and violence. Consumers need to make better decisions about the content they are consuming and how to judge it. Thank you. Thank you all. I will now proceed with questions and recognize myself for five minutes. Professor Citron, how broad is the immunity that social media platforms enjoy?
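The third recommendation above, digital verification signatures recording the date, time, and origin of content, can be sketched with a keyed hash over the media bytes and a provenance record. This is an illustrative stand-in only: a deployed scheme would use asymmetric signatures issued to capture devices, whereas the shared secret here just keeps the sketch self-contained, and all names and values are hypothetical.

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"secret-device-key"   # illustrative only; real schemes use per-device asymmetric keys

def sign(media: bytes, origin: str, timestamp: str) -> dict:
    """Produce a provenance record binding the media bytes to an origin and time."""
    record = {
        "origin": origin,
        "timestamp": timestamp,
        "media_sha256": hashlib.sha256(media).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(media: bytes, record: dict) -> bool:
    """Check that neither the pixels nor the provenance metadata were altered."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if claimed["media_sha256"] != hashlib.sha256(media).hexdigest():
        return False                 # media bytes were altered after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["signature"], expected)

clip = b"\x00\x01raw-video-bytes"
rec = sign(clip, origin="embassy-cam-3", timestamp="2019-06-13T10:00:00Z")
print(verify(clip, rec), verify(clip + b"tamper", rec))   # -> True False
```

This is why the testimony frames it as a public-private partnership: the signature is only meaningful if platforms check it at upload time and surface the origin and timestamp to consumers.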
Is it time to do away with that immunity, so that the platforms are required to maintain a certain standard of care? It seems to me not very practical to think about bringing people to justice who are halfway around the world, or to overcome the difficulties of attribution, and given the cheap cost of this technology now and just how many people can employ it, is it time to take that step? Was it appropriate for one social media company to leave up the Pelosi video, even labeling it a certain way? Mr. Watts, what is a proportionate response should the Russians start to dump deepfakes, say, release a deepfake of Joe Biden to try to diminish his candidacy? What should the U.S. response be? Should it be a cyber response, not a tit-for-tat in terms of doing a deepfake of Putin, but some cyber reaction, or sanctions? How do we deter this foreign meddling, realizing that it is only going to be one part of the problem? Professor Citron: I will start with how broad the immunity is, and whether it is time to amend the law. The law, passed in 1996, was largely an anti-porn provision; that was the objective of the Communications Decency Act. Most of the law was struck down. What remains is a provision called Good Samaritan blocking and filtering of offensive content. It has been interpreted really broadly, to say that whether you filter content or engage in no self-monitoring at all, you are shielded from liability. Site operators can say they are immune from liability even while encouraging users to post their exes' nude photos, and they are. We now have dominant players. The internet is not in its infancy, and is it time to reassess the immunity? The answer is yes. It should not be a free pass; it should be conditioned on reasonable content moderation practices. We have written a sample statute that you could adopt, if you so chose, that would condition the immunity on reasonable content moderation practices. And the question of whether any given platform is making the right choices would go to the reasonableness of its content practices.
We would look at a platform's total approach to content moderation, not any given decision about content. Let's take the Pelosi video. The answer is that it should have been taken down. We should have a default rule that if there is an impersonation or a manipulation that does not reflect what we have done or said, the platforms should, once they figure it out, take it down. The technology is such that we cannot yet detect these fakes automatically, cannot automatically filter and block them. And by the time we have figured one out, we are already in a place where the public has deep distrust of the institutions at the heart of our democracy. We have an audience primed to believe things like manipulated videos of lawmakers. I would hate to see a deepfake in which a prominent lawmaker is seen taking a bribe that he never took, and I hope that the platforms, even if we cannot impose legal liability on them, come to see themselves as the purveyors of responsibly facilitated discourse online, and recognize their importance to democracy. Thank you. Mr. Watts? Mr. Watts: I would like to start off with a basic principle of information warfare. A professor who studied wartime rumors wrote that once rumors are afloat, they have a way of carrying the public with them; the greater the rumor, the more its plausibility. He wrote that in 1944. It comes down to who is there first and who is there the most. We have to have a plan, and we have to respond quickly. This has not been the tradition of our government. In Iraq, when fake al Qaeda propaganda was put out to inspire people to show up at certain places, we had rapid response teams that would shoot footage from there to show that this was not true and had been disproven. That is a great example. If this kind of fake starts to get leaked out, what is our plan right now? The U.S. government, through an official government agency, should offer a counter based on facts in terms of what is actually going on. This happened in the summer of 2016.
Russian-sponsored propaganda was put out about a potential coup, claiming that a base was surrounded. We should be able to turn on the cameras immediately and say, this is not happening. The quicker we do that, the less chance people will see it and believe it. Then it comes down to the political parties, Republican and Democrat; if they have these smears coming through, they should be able to refute them and put out a basis of truth, in partnership with the social media companies. On that point, I would not go as far as saying that every piece of synthetic video that gets uploaded to a social media platform needs to come down. I am glad you brought up former Vice President Biden. One of the classic articles about him comes from The Onion, showing him waxing his Camaro in the driveway. It was a comedy bit, and it had manipulated photos, manipulated content in it. If we went to that extreme, we would have a standard under which everything that has ever been changed or modified would have to be policed, and we would be asking private sector companies to police it. I would offer a different triage. First, how do social media companies accurately label content as authentic or not? The source is often not there; we saw that in 2016, and we see it today. They should be able to refer back to the source quickly. How do you do that? They should triage. There are three areas I would suggest they immediately triage. If they see something spiking in terms of virality, down-rate it, do not let it go into newsfeeds, link it to fact checkers, and help people understand that it is manipulated content; that spread is what we are most concerned about. The other concerns are outbreaks of violence and public safety, and anything related to elected officials or public institutions, which should be flagged, pulled down, fact-checked, and given context.
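The triage just described, down-ranking viral spikes while pulling down public-safety and elected-official content and leaving satire alone, can be expressed as a small routing policy. The category names, the 10x virality threshold, and the action labels below are all illustrative assumptions, a minimal sketch rather than any company's real moderation logic.

```python
from typing import List

def triage(category: str, views_last_hour: int, baseline_views: int) -> List[str]:
    """Route a piece of suspected synthetic media to a set of actions
    based on its subject matter and how fast it is spreading."""
    actions: List[str] = []
    # Crude virality test: current traffic far above the item's baseline.
    spiking = views_last_hour > 10 * max(baseline_views, 1)

    if category in ("public_safety", "elected_official", "public_institution"):
        # Highest-scrutiny path from the testimony: flag, pull down,
        # fact-check, and attach context.
        actions += ["flag", "pull_down", "fact_check", "add_context"]
    elif spiking:
        # Viral but not in a protected category: slow it down rather
        # than remove it outright.
        actions += ["down_rank", "exclude_from_newsfeed", "link_fact_checkers"]
    else:
        # Satire, comedy, entertainment: legitimate uses stay up.
        actions.append("allow")
    return actions

print(triage("satire", 50, 40))            # -> ['allow']
print(triage("unknown", 5000, 40))         # viral spike: down-rank, not remove
print(triage("elected_official", 10, 40))  # protected category: full response
```

Keeping the default branch as "allow" mirrors the free-speech caveat that follows in the testimony: the policy escalates only on spread or on sensitive subject matter, rather than treating all synthetic media as removable.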
We need that context so that we are not suppressing freedom of speech, because there are legitimate reasons we might want to use synthetic media for entertainment, comedy, and other visualizations. At some point I would love to follow up and hear what your response would be to a foreign adversary deploying this. Mr. Watts: I think offensive cyber is one tool in place, and it was used in 2018. And, number three, more aggressive responses in terms of sanctions, sanctions around troll farms and the other places where this content comes from. Thank you. Mr. Nunes? Thank you. How do you put in filters at these tech oligarch companies? There are only a few of them; you know who they are. How do you ensure the filter is not developed by a partisan left wing, like it is now, where most of the time it is conservatives who get banned and not Democrats? The Pelosi video was taken down; that's fine, I don't have a problem with that. But I can tell you there are videos up of Republicans that go on and on and on. It is all in who is building the filter, right? Are you asking me? You were talking about filters. What I was suggesting is that it would be impossible to filter deepfake content ex ante. We cannot detect it, as far as the state of the art goes now, nor, I think, in this arms race will we really be able to filter it. What I would say is that when there is something like a video that is clearly doctored and an impersonation, that is not satire, not parody. There are wonderful uses for deepfakes that are rejuvenating, that people create about themselves. I am not suggesting all deepfakes come down. I think I mostly agree with you, other than I just do not know how you, the challenge is, how do you implement it? These are hard problems of content moderation. I have worked with companies for about ten years, in particular on the issues of nonconsensual pornography, threats, and stalking. It is such a contextual question. You cannot proactively filter.
But when it's reported, the question is, when we see videos going viral, there's a way in which companies should react, and react responsibly. And it absolutely should be bipartisan. There shouldn't be ideology that drives the question, but rather: is this a misrepresentation in a defamatory way, a falsehood that is harmful to reputation? If it's an impersonation, then we should take it down. That is the default I'm imagining. But it would be ex post. It's a challenge. You talked about the 1996 law that needs to be changed. I think it has to be one way or another. Either they have to truly be an open public square, in which case it's very difficult to filter, because whoever is developing the filter puts their bias into the filter. Actually, that 1996 bill did not imagine an open public square where private companies cannot filter. The opposite. It was designed to encourage self-monitoring, and to provide an immunity in exchange for Good Samaritan filtering and blocking of offensive content. So the entire premise of Section 230 is to encourage filtering and blocking, and provide an immunity for it, because Congress knew it would be too hard for Congress and the FTC to get ahead of this. That was in 1996. Imagine now the scale that we face. I think we should preserve the immunity but condition it on reasonable content moderation practices, so that the sites that literally traffic in abuse, that encourage illegality, do not enjoy immunity from liability. Right. And we're back to where we started. This is the challenge, right? So how do we draft legislation that would enable that? Happy to tell you how to do it. Section 230(c)(1) now says that no online service shall be treated as a speaker or publisher of someone else's content.
What we can do is change that to say that no online service that engages in reasonable content moderation practices shall be treated as a speaker or publisher of somebody else's content. So we can change Section 230 with some imagination. That depends on what the definition of reasonable is. That's what law does really well. Every time I hear a lawyer say we can't figure out what's reasonable, I say: it's called tort law. Negligence is built on the foundation of reasonableness. So often law moves in a pendulum. We often start with no liability because we really want to protect businesses, and we should, and we experiment, and we realize there's a lot of harm. Then we often overreact and impose strict liability. And then we get somewhere in the middle. That's where negligence lives: reasonable practices. There is content moderation that has been going on for the past ten years, and I've been advising Twitter and Facebook all of that time. There are meaningful, reasonable practices that have emerged in the last ten years. We have a guide. It's not as if this is a new issue in 2019. So we can come up with reasonable practices. Thank you. I yield back. Mr. Himes? Thank you, Mr. Chairman. Dr. Doermann, I want to get a quick sense from you of the status quo of our ability to detect, and where the arms race is. Before I do that, I want to highlight something I think is of immediate and intense interest to the intelligence community. Mr. Watts, you said if something is happening on a base somewhere, we can just turn on the cameras. I'm not sure that's right, because if you can create a deepfake, there's no reason why you can't create a deepfake from that camera to the screen, right? The point I'm trying to make is that our intelligence community obviously relies on things like full-motion video and photographs and that sort of thing.
One of the threats here is not just that we might be made to look silly on YouTube, but that our intelligence community, using its own assets, might not be able to tell fact from fiction. Is that correct? When you say let's turn on the cameras, I'm not sure that is enough. One of my other recommendations was digital verification. These folks will know better, because they're more technically sound than I am on this: digital verification for date, time, and location, to include real-time content. There are some blockchain registry solutions being developed. Part of that would be that if you as the U.S. government turn on your cameras, the footage can be verified by news agencies and reporters. We could have it on C-SPAN. We could use it in a lot of different ways, but we have to make sure we have the ability to verify that our content is real. If that type of impersonation is done, we can sort it out quickly and people will know which one to believe. I'll defer to the others on the technical side, but some of this has already been developed. It is not quite there yet, but I would pair it with the cameras. That leads into my question. I understand there's no silver bullet here. This is going to be a cat-and-mouse game. Take a minute or two and tell us where we are in that cat-and-mouse game. Should we expect to have undetectable deepfakes out there in one year, two years, five years? Where are we today, and how big of a challenge is this? I think there is the risk of having undetectable content that gets modulated and shared online. Right now, things like compression: if you have a very low-resolution version of a video, the attribution can be destroyed. The camera fingerprint showing where this content came from can be destroyed. A lot of the trace evidence can be destroyed with very simple types of manipulation on top of the deepfake process, or any type of manipulation. The challenge that we have, though, is that we do have point solutions for a lot of these components.
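The witness leaves the mechanics of such a registry unstated. As a rough sketch of what a hash-chained verification log might look like, here is a minimal illustration; the `ProvenanceRegistry` class, its field names, and the sample metadata are all invented for this sketch and do not describe any real system under development:

```python
import hashlib
import json

class ProvenanceRegistry:
    """Toy append-only registry: each entry chains the previous entry's
    digest, so tampering with the recorded history is detectable."""

    def __init__(self):
        self.entries = []

    def register(self, content: bytes, meta: dict) -> str:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        record = {
            "content_hash": hashlib.sha256(content).hexdigest(),
            "meta": meta,  # e.g. date, time, location at capture
            "prev": prev,
        }
        record["entry_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record["entry_hash"]

    def verify(self, content: bytes) -> bool:
        """True only if this exact byte stream was registered at capture."""
        digest = hashlib.sha256(content).hexdigest()
        return any(e["content_hash"] == digest for e in self.entries)

registry = ProvenanceRegistry()
frame = b"raw camera frame bytes"
registry.register(frame, {"time": "2019-06-13T09:00:00Z", "camera": "gate-3"})

print(registry.verify(frame))                          # True: matches the registry
print(registry.verify(b"re-encoded or edited bytes"))  # False: any change breaks the hash
```

The design point the testimony gestures at is the last line: because a cryptographic hash changes under any modification, a news agency checking footage against the registry can distinguish the original capture from anything re-edited afterward, without trusting the person who handed them the file.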
And bringing them together in a useful way and, as I said, getting them into the hands of everyone throughout the pipeline. Imagine if Facebook or YouTube or any of these other companies could have this up front, so that when the human reviewers (Facebook just reported they hired 30,000 people to review content) have a questioned video or piece of audio to review, they could run this algorithm, or this set of algorithms, on it. Do that up front so they have a report associated with a particular image or video. Then, if there are questions, put that warning up there. I think the public doesn't know enough about what's possible to demand it. The truth of the matter is that this stuff gets created once, and when it gets shared, it gets shared across different platforms, by different people, with different media. But the signature for that particular piece of video or audio is there. So there are tools the social media companies could use to link those together, make a decision, and then share it with everyone, the same way we do with malware and other cyber issues. We've gotten to the point where we are protecting our front door, and we need to protect our front door from images and video as well. Thank you, Doctor. Professor Citron, the theme of this hearing is how scary deepfakes are, but one of the more scary things I've heard this morning is your statement that the Pelosi video should have been taken down. I don't have a lot of time, and sadly there won't be a moment for you to answer, but I do want to have this conversation, because as awful as I think we all thought that Pelosi video was, there's got to be a difference if the Russians put that up, which is one thing, versus if Mad Magazine does it as satire.
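The "signature" idea above, sharing fingerprints of known fakes across platforms the way malware hashes are shared, is often built on perceptual hashing rather than exact hashing, so that re-encoded copies still match. Here is a minimal average-hash ("aHash") sketch; the 8x8 frame data is synthetic and the whole thing is an illustration of the concept, not any platform's actual matching system:

```python
def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255). Returns a 64-bit
    fingerprint: each bit records whether that pixel is brighter than
    the image-wide mean, so small re-encoding noise barely moves it."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# A synthetic "frame" and a lightly perturbed copy, standing in for a
# re-encoded re-upload of the same video frame.
frame = [[(x * 7 + y * 13) % 256 for x in range(8)] for y in range(8)]
copy = [[v + 2 for v in row] for row in frame]

h1, h2 = average_hash(frame), average_hash(copy)
# Near-duplicates land within a small Hamming distance of each other, so a
# shared blocklist of fingerprints can match re-uploads across platforms.
print(hamming(h1, h2) <= 10)  # prints True
```

Unlike the exact SHA-256 match used for provenance, a perceptual hash tolerates compression and resizing, which is why it suits the cross-platform takedown-sharing scenario the witness describes.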
As you know better than me, we don't have a lot of protections as public figures with respect to defamation, and some of the language you've used today makes me worry about First Amendment equities and a centuries-long tradition of satirizing people like us, who richly deserve being satirized. I simply wanted to put that on the record and hope we have an opportunity this morning to hear more about where that boundary lies and how we can protect a long tradition of freedom of expression. With that, I yield back. Dr. Wenstrup? Thank you all for being here. Boy, we've come a long way. I remember Chevy Chase playing Gerald Ford on Saturday Night Live, and he didn't even pretend to look like Gerald Ford. Then we saw Forrest Gump, which was a wonderful movie. It was entertainment. I remember thinking, how did they do that? Out of everything bad there's a chance to do something good, but out of everything good there's obviously a chance for people to do something bad. I think we see that. The way it sounds, with where we're headed, it's like we're all living in The Truman Show or something like that. We have got to be careful about that. In that vein, out of something good something bad can happen. I'm sure the Wright brothers, when they learned to fly, didn't think, maybe we can fly this into a building someday and kill people, right? But that's what happens in this world, unfortunately. As a result, after 9/11 for example, it takes a lot longer to get on a plane, and for good reason. I think where we need to be headed, and I want your opinions on it, is that obviously we've got to slow this down before something just hits. I think you're talking about the triage idea. Maybe we label. Maybe, unfortunately, we have to tell people before they see something: this is satire, it's not real, and you have to in some way verify. It's kind of pathetic, but at the same time that may be what we have to do. Slow it down. Triage it. This is not verified; this is satire.
Maybe on a global scale, when it comes to punitive measures against people doing nefarious things, we have to have international extradition laws, because when something comes from some other country, maybe even a friendly country, that defames and hurts someone here, maybe we agree among those nations that we'll extradite those people and they can be punished in your country for what they did to one of your citizens. I'd love your opinion on those: the triage, labeling, and extradition. Whoever wants to take it first. I think that's absolutely right. One of the reasons these types of manipulated images gain traction is that they can be shared around the world, across platforms, almost instantaneously. You can see something on one platform, and there's a button there to post it to another. A lie can go halfway around the world before the truth can get its shoes on, and that's true. Personally, I don't see any reason why things have to be instantaneous. Broadcast news does it with live content: they have a seven-second or 15-second delay. Our social media companies should install these types of delays so they can get ahead of these things before they go online. They can decide whether they should label it. We still need to put the pressure on for those types of things. But there's a seriousness spectrum, from satire all the way up through child pornography. We've done it for child pornography. We've done it for human trafficking. They're serious about those things. This is another area that's a little bit more in the middle, but I think they can make the same effort here to do that type of triage: to say, what you're about to see is satire and has been modified. Go ahead. One thing we're stressing is that we will continue to be surprised by technological progress in this domain. The allure of a lot of this stuff is that all of these people think they're the Wright brothers; they feel that.
They're all busily creating stuff, and figuring out the second-order effects of what they build is difficult. I think we need to build infrastructure so you have some third party measuring the progression of these technologies, so you can anticipate the effects in expectation. The labeling, I think, is incredibly important, and there are times when that's the perfect solution rather than second-best, where we should err on the side of inclusion, label content as synthetic, and require it to be so labeled. It's true that there are some instances, though, where labeling is just not good enough: where it is defamatory, where people will believe the lie. There's really no counter-speech to some falsehoods, some impersonations. If we get a chance, I'd love to hear back from you on the notion of extradition laws and other punitive measures. Thank you. I yield back. Ms. Sewell? Dr. Doermann, you didn't really answer my colleague's question about how far away we are from actually being able to detect deepfakes. I know at DARPA you were working on that. Where are we, commercially, in government, or among researchers, in being technologically able to detect deepfakes? Deepfakes typically refers to a particular technology; there is certain software out there for doing it. It's not a general concept. The initial paper that gave rise to this technology was published after the start of the MediFor media forensics program. We did adapt to start looking at those things. There are point solutions out there today, so deepfakes coming from these particular software packages can be detected. Do we have the technology to actually be able to digitally verify videos, photographs, et cetera? The problem is doing it at scale. If you give me a particular video, with high confidence I can tell you whether it is a fake video, and I can also come back and say, here are the videos or images that went into it. How long does that take?
Is that a matter of an hour, 30 minutes? With the right hardware, it can be done with a constant delay; yes, 15 minutes, 20 minutes. So in advance of the 2020 elections, what can campaigns, political parties, and candidates do to prepare for the possibility of deepfake content? Mr. Watts? Professor? One thing, even here on Capitol Hill and with the political parties, is to urge the social media industry to work together to create unified standards. Part of the problem with all of these incidents is that if you're a manipulator, domestic or international, and you're making deepfakes, you're going to go to whatever platform allows you to post anything from inauthentic accounts. If the platforms can't share across, it is like a cancer: it goes to wherever the weak point is and spreads throughout the system, even if Facebook or Google or Twitter do a good job. So one thing is really pressuring the social media industry to work together. That goes for extremism, disinformation, political smear campaigns, all of it: what is the standard for policing? The other thing is having rapid responses to deal with this stuff. As much as defense is not the best way, any lag in response allows the conspiracy to grow. The quicker you get out on it, the more mainstream media outlets can work to help refute it, and other politicians or elected officials can help you do that refutation. Professor, what would you suggest that political parties and candidates do?
I think candidates should have clear policies about deepfakes, a commitment not to use them and not to spread them, and relationships established early on with social media companies, so that when a candidate says, I wasn't there, I wasn't saying that at that particular time, there's an immediate entree to the folks in content moderation, whoever it is at Facebook, at Twitter, at Microsoft, and they have immediate rapid-response teams. How do we even begin to tackle this sort of liar's dividend? I love that; you're using my phrase. In which politicians can deny the truth by claiming the recording is a deepfake: what do you suggest we do about that conundrum? I love this. Twice we've gotten some play for the liar's dividend, which we conceived in our California Law Review piece. What most worries us is that in an environment of pervasive deepfakes, we've acculturated people not to believe their eyes and ears. The wrongdoer can seize on that when there is a genuine recording of mischief and say, that is not me; that's a deepfake. Part of it is education. Part of the robust education we have to have with the public is telling them about the phenomenon of the liar's dividend. It's not that we shouldn't educate people. So often the response we get is, do we give up? Our response is absolutely not. It must be a robust part of the learning curve: to say, look, we know that wrongdoers are going to seize on the deepfake phenomenon to escape reality, and we can't let them do that either. We have to get to the middle ground between completely believing everything our eyes and ears tell us and being skeptical to the point of nihilism. We do have a marketplace of ideas that is potentially functioning, but what we are saying is that we don't want to get into a space where we have a nonfunctioning marketplace of ideas. Thank you. Mr. Stewart? Thank you, Chairman. And to the witnesses, thank you for being here.
It's been a helpful panel, although I have to say I'm a little bit concerned with some of your suggestions. Although in an ideal world they would be helpful, in the real world we live in I'm afraid some of them are nearly impossible to implement, and some of them have troubling aspects themselves, in the sense that it's kind of like fact checkers who insert their opinion. This is just a reality. Sitting on the Intel Committee, I'm often asked in conversations and casual discussions what I think is the greatest threat facing the world. A couple of years ago I answered that without thinking; I nearly blurted it out: I think it's that no one knows what is real anymore. As I was driving home that evening I started thinking, and I realized that is true. I think that is the greatest threat facing our nation. People don't accept basic truths and basic falsehoods anymore, partly because of their own interests, or because they don't understand what is really true. It's not just deepfakes. RT television is extraordinarily good at propaganda that many people think is perfectly legitimate and real. Fake news, a term we have all unfortunately become very familiar with. Manipulation of intelligence products is extraordinarily troubling to me. And we live in a world where black is white and white is black. I could show you evidence that white is black, and a lot of people would believe me. I think we risk losing that. By the way, I think we can control governments. I think we can control, to a certain extent, legitimate businesses. We can't control everyone. This is going to be so pervasive and so available that virtually anyone could create this. It's easy to control the U.S. government and say you can't create it or use it for political manipulation, but you can't control the over six billion people on Earth. That is my concern: the pure volume of it.
It's like trying to monitor every bumblebee flying around America. Last thing: it goes both ways. This is my concern as well. We could create the impression that a lie is real, but we can also say that something real is a lie. Take a politician caught in a bribe, which, by the way, politicians do much worse things than that; it is so 1970s, but let's go with that example. A politician is caught in a bribe, and it could be actually true, and he would then say, no, no, it's just a deepfake, that's not real. So you lose credibility both ways, which brings me to my questions. The first is, with the potential for so much harm, should the algorithms that create deepfakes be open source? And if the answer is no, we've got to act right now. We can't wait two or three years, because by then they'll already be pervasive throughout the world. The second question, and this is almost rhetorical, is how do we prepare people to live in a world of deception? How do we prepare people to live in a world where they genuinely may not know what's real or not? Anyone want to jump on those? Should the algorithms be open source, or should we control them? I will address the first one. We made a conscious decision to make the MediFor program open. You'll see, even a week and a half from now at the computer vision and pattern recognition conference, there will be a workshop dealing with this. Even though there's potential for our adversaries to learn from these things, they're going to learn anyway. We need to get this type of stuff out there. We need to get it into the hands of users. There are companies out there that are starting to take up these types of things. I absolutely think they need to be open sourced. It's the same technology, in terms of deep learning, being used to create this type of content.
You're saying it should be open sourced primarily because they'll get access to it anyway; is that the essence of your response? Well, people need to be able to use it. The more we can use it to educate the community, educate people, and give people the tools, the more they can make the choices for themselves. That's what we're looking for. I'll accept that with some hesitation. I think they get it anyway. What about suggestions on how we prepare people to live in a world that is so steeped in deception? I'm sure you've thought that through. We have ten seconds to answer. When Justice Oliver Wendell Holmes came up with the marketplace of ideas, he was a cynic. He was not suggesting truth would always win out. He worried about humanity. But the broader endeavor at the foundation of our democracy is that we can have a set of accepted truths so we can have real, meaningful policy conversations. We can't give up on the project. I agree with you. That is our hope, but that foundation of accepted truths is very shaky at this moment. Thank you, Chairman. Thank you. Mr. Carson? Thank you, Chairman Schiff. In an era of prevailing mistrust of journalists and the media, do deepfakes risk aggravating this kind of distrust? Prior to working in AI, I was a professional journalist for seven or eight years and finished up working at Bloomberg Businessweek, so I speak from some experience. Yes, I think this is a very severe and potentially undercovered threat, because when you write a story that people don't like, they try to attack you as the author of it, or they try to attack the integrity of the institution. This makes it trivial to do that: to produce stuff that can convince people you were not being factually accurate. So yes, not only do we see journalists themselves being attacked, but I think what is so corrosive is the notion that the media is going to sit on real evidence for fear that it's a fake.
And we certainly have already seen stings against media organizations, and now they've got to be wary of stings. These are tough to debunk without the legwork, without journalistic effort. The corrosive effect, what we call trust decay, affects not only politicians and our view of civic and political institutions, but everything, and especially journalism and the media. If I could just add to that: over time, if information consumers do not know what to believe, if they can't tell fact from fiction, then they will either believe everything or they'll believe nothing. If they believe nothing, that leads to long-term apathy, and that is destructive for the United States. I think you could look to Russia as an example of what has happened internally to the Russian people, and how the Russian government has used the fire hose of falsehoods approach: if you can't believe anything, you just give up and surrender. The consequences for democracy are long-term apathy toward political participation, disappointment in officials, not wanting to show up and do things like register for the draft or volunteer. That would be one thing I would look at over the next 10 to 15 years. I think that's the long-term corrosive effect. If you look at Russia's long-term doctrine of subversion, that is what they are after. They are just much more patient than we are, and willing to wait decades for it to come to fruition. In addition to that, will technology solutions for authentication be available, in sufficient amounts, for journalists or media organizations or fact checkers to keep up with even validating a piece of media before reporting on it? Like Chairman Schiff crowd-surfing at South by Southwest, or premiering his own Netflix special: how can you verify these things as journalists? It's important to have these tools out there for them. We are not at that point now. We have point solutions. We don't have a general solution. We don't have a gatekeeper that can be automated completely.
This is a cat-and-mouse game. As the tools for visual deception get better, the deceivers are going to get better, and they're going to move on to covering up their trace evidence. I think the tools can be put in journalists' hands, and they should be. We've had situations with embedded reporters where somebody comes up to them with something on a cell phone and shows an atrocity. They need those tools. They have to know whether to report on that or not. Even before automated deepfakes, people were doing these kinds of manipulations. It could even evolve into some kind of new scam, where you have someone with a piece of information selling it to TMZ, or even a so-called credible media outlet, scamming them for 50,000, and the piece is fake. It's possible, sure. If I could add one dimension to this: it is how lucky we are to have a very engaged public in terms of actually rebutting things that are false and challenging them. It's not just journalists doing it; it's also the public challenging back and forth. One of the dangers we don't think of is information environments where authoritarians control and eliminate all rebuttals. That could have a very significant backlash for us, which is why I would like to see widespread proliferation of authentication, not just here in the United States, but wherever a regime controls all the information flow and can suppress reality. Fact checking is expensive and time intensive, and the number of news organizations on the planet doing well in economic terms is dwindling over time. So if we were to go down this path, you need to find a way to fund it, because, of their own volition, given the economics, they're not going to naturally adopt this stuff, other than a few small trusted institutions. It becomes incredibly difficult to remain a news source if you're having to pay to fact check constantly. Yes. Thank you. I yield back. Mr. Crawford? Thank you, Mr. Chairman.
We've come a long way since Milli Vanilli. In the time we have been here, I pulled up a video that was recently posted. Two British artists teamed with an Israeli company, Canny AI, and created a video of Mark Zuckerberg saying he could control the future. They posted it on Facebook specifically to challenge Facebook, and Zuckerberg has responded by saying he's not going to take it down. I wonder if you could comment on that. What do you think this is about, and do you think it's a wise decision for Zuckerberg not to take it down, given what we've talked about? I will start with you, Professor Citron. I think that's a perfect example where, given the context, it's satire and parody. That is really healthy for conversation. All of these questions are hard. Of course, our default presumption as we approach speech online is from a First Amendment perspective: we want to keep government out of calling balls and strikes when it comes to ideas in the marketplace. But private companies can make those kinds of choices, and they have an incredible amount of power. They are also free of liability, and I think they made the right choice to keep it up. It was a conversation about, essentially, the cheap fake of Nancy Pelosi. It seemed to be a conversation about the choices they made and what that means for society. So it was incredibly productive, I think. It seems correct in this instance, but all of these companies are kind of groping in the dark when it comes to what policies they need overall, because it's a really, really hard problem. I think what would be helpful is a way for them to share policies across multiple companies and to seek standardization, because these judgment calls are very qualitative in nature. This does point out the idea of context; the artists were essentially trying to test that. Part of that video is that it spread for one purpose only, which was to challenge this rule.
So we would sort of discuss it in this forum. But no one really believes Mark Zuckerberg can control the future, because he surely wouldn't want to show up here to testify or be in the quagmire he's in. How do you know that? I'm trying to make a very serious point about context, which is that whenever virality strikes, that is when human curation should kick in. We used to see 4,000 shares in ten minutes; now we see 16,000 shares over 15 minutes. That is when it should go to review. Then we look at labeling. We look at context. How do we inform the public so they make good decisions around it? We had a parallel to this in the analog era. When I was a kid, the tabloids would say aliens landed at Area 51. I would ask my mother or friends, where does this come from? They would say, that source is just putting out information for entertainment; that did not really happen. We need to help consumers make better decisions like that. I like that Facebook has been consistent in its enforcement, and I'm also not going to say they should never change what those terms are. I think they're looking to Capitol Hill to figure out: what is it that we want policed? What does Europe want policed? I think they would like to hear from legislatures about what falls inside those parameters. The one thing I really like that they're doing is going after inauthentic account creation and inauthentic content creation. Is there a particular company, region, or nation that is especially adept at this technology, that is developing it at a quicker rate? It's distributed along the lines you'd expect: prominent research centers in America, China, and Europe. It's distributed wherever you have good AI technologists with the capability to create this stuff, which makes it very challenging. At some point folks at home will be able to access it. It already is accessible. Absolutely. That's one of the big differences. You used to have to go out and buy Photoshop or have one of these desktop editors.
Now a high school student with a good computer (and if they're a gamer, they already have a good GPU card) can download data and train this type of thing overnight with software that is open and freely available. It's not something you have to be an AI expert to run. A novice can run these types of things. Thank you. I yield back. Mr. Quigley? Thank you, Mr. Chairman, and thank you for your participation. Following up on those points: the themes here are that it's getting easier to do, the quality is getting better, and it's getting harder to detect. The examples we talk about as victims are democracy, elected officials, corporations, this horrible attack on journalists. But what about a small business with limited resources? What about individuals who are victims of the example you gave, Professor: revenge porn, for example? And Doctor, you talked about scale and widespread authentication. What capabilities might exist, as we go forward, on social media platforms, in law enforcement, or for individuals themselves, to deal with this detection issue? Well, you know, I envision a time when there's a button on every social media post, or on every text message you get with a video attached, that you can hit; it goes off and gathers information, not necessarily totally automated, that has been vetted by one of many organizations, so that if you can identify where it came from, the individual can make those decisions. The problem is that a lot of these technologies exist in labs, in research, in different organizations, but they're not shared and they're not implemented at scale. So, if I want to go out and test a picture: there was a very interesting picture before a tornado up in Maryland a couple of weeks ago. It looked surreal, and I immediately thought, that must have been somewhere else, somewhere in the Midwest years ago. So I did a search. There's a reverse image search that you can do.
After doing some research, I found that it was indeed real and it was practically in my back yard. But not everybody has those types of capabilities. Not everybody thinks to do that type of thing. I know that I have relatives who just see something and want to share it. So I think the education piece, and getting these tools out at scale, is what we need to work towards. But the key is that even with detection, for the everyday person who has a deepfake sex video prominently featured in the Google search of their name, and a platform refuses to take it down, it is their CV, meaning it's part of what everyone sees about them. So it is incredibly destructive. The same is probably true for the small business. If there is a deepfake that really cuts at their business model, they may not be able to have it removed even though it is false and it's an impersonation. Even if it's defamation, we know the law moves really slowly, assuming they can even find the creator to bring a defamation suit. So there is an interim period in which individuals will suffer, and it's incredibly hard to talk to victims because there's so little that we can force anyone to do. We're going to see a lot of suffering. The issues that we just talked about, are you trying to tackle those with the model laws that you're talking about? Yes. I am the vice president of the Cyber Civil Rights Initiative, and we have been working with lawmakers around the country, both at the state level and the federal level, both in terms of nonconsensual pornography and now in thinking about how we might really carefully and narrowly craft a law that would ban deepfakes, or manufactured videos, that amount to criminal defamation. I think we've got work ahead of us at CCRI and with laws around the country. It can be tackled, but it's going to have a really modest impact because law moves slowly. When you're doing this, you're talking to local and state law enforcement agencies? Yes.
I wrote a book about hate crimes in cyberspace, which was about the phenomenon of cyber stalking and how hard it is to teach local law enforcement both about the technology and about the laws themselves. They're great at street crimes, but when you talk about online crimes they say, I don't know how to really begin. I don't know how to get a warrant to an online service provider for an IP address. I know Congresswoman Clark has called for funding some training of local law enforcement on the question of cyber stalking, both as a technical matter and as law. I'd love to see that not only with regard to cyber stalking and threats but more broadly. Thank you all. Mr. Hurd. Thank you, chairman. I'm going to try to do something that's probably impossible in the next five minutes: touch on and get your perspective on four areas. The ability to detect. You touched on authentication as a strategy for this. How do we develop a strategy, in a narrow national security sense, to counter disinformation, and who should be doing that? And, broadly, education. My first question is probably to you, Mr. Clark. Can you talk to us about the ability to detect, and the forensics? Is there an ability to do a pixel-by-pixel analysis, a bit-by-bit analysis? Are there other areas we should be focused on to help with the ability to detect? The approach that's been taken in the community is one of a comprehensive view. Yes, there are pixel types of applications, not necessarily pixel by pixel, but also the metadata that you get on an image. You know what compression algorithms were used. You know there is residual information left when you take an image, modify it, and recompress it. At the digital level, that is where the majority of the work is being done. How easy is that now, and who should potentially be doing that? Well, the government is putting a lot of money into this piece. As I said, there are a lot more manipulators than there are detectors.
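The "residual information left when you recompress" point can be made concrete with a toy model. JPEG-style compression quantizes transform coefficients to a step size; quantizing a second time with a different step leaves a telltale periodic pattern of empty and doubled-up histogram bins, which is one statistical fingerprint forensic detectors look for. This is a one-dimensional sketch of that principle only, not a real JPEG pipeline.

```python
# Toy model of "double quantization" residue, the statistical trace
# left when an image is compressed, modified, and recompressed.
# Not a real JPEG codec; it only illustrates the histogram fingerprint.

def quantize(values, step):
    """Snap each value to the nearest multiple of `step`."""
    return [step * round(v / step) for v in values]

coeffs = list(range(-50, 51))    # stand-in for DCT coefficients
once  = quantize(coeffs, 3)      # first compression, step size 3
twice = quantize(once, 5)        # re-saved with a different step size 5

def histogram(values):
    h = {}
    for v in values:
        h[v] = h.get(v, 0) + 1
    return h

h_once, h_twice = histogram(once), histogram(twice)

# After requantization the surviving bins are multiples of the new step,
# and their counts alternate unevenly: the periodic pattern a detector
# can flag as evidence of recompression.
print(len(h_once), "bins after one pass;", len(h_twice), "bins after two")
```

Real detectors apply the same idea to block-DCT coefficient histograms of a suspect JPEG, which is why the witness can say residual information survives even after an image is edited and re-saved.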
So I would hope that, behind closed doors, the social media sites and the YouTubes of the world are looking into this type of application. I'm not sure. Is the ability to understand the various metadata, or even getting to a point where we can do pixel-by-pixel exploration en masse, going to help us with real authentication? Personally, I don't like to use the word authentication. As we know, absolutely everything that goes up online is modified in some way, whether it's cropped or there's a color histogram distribution adjustment. What word do you use? We like to say that things have been modified. But it's a scale. The question is whether there is modification with intent. If you put a flower in a picture next to someone, that has a very different effect than if you replace somebody's face in a picture. This attribution piece, and the actual report that says this is exactly what was done, was a big part of the MediFor program as well. So the closest you're going to get is to say all of these things happened to this image, and the user would have to make the decision on whether it is credible or not? Yes. Even in an automated way: if you are the FBI and you're going to court, even if you only changed one pixel, you lose the credibility. But if the FBI is doing an investigation and you have a very compressed, grainy surveillance video, it still might give you information. You believe it. Disinformation is a subsection of covert action. Covert action, and countering covert action, is the responsibility of the Central Intelligence Agency. Yet the Central Intelligence Agency, because of the National Security Act of 1947, can't do covert action in the United States. How should we be looking at a government strategy to deal with disinformation, especially in the context of national security? Or whoever is more appropriate to start with that. I think it's two parts.
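The "report everything that happened to this image and let the user judge" approach described above can be sketched as a simple report generator. The operation names and severity weights below are purely illustrative assumptions, not the output of any real forensic tool such as those built under MediFor.

```python
# Sketch of a "modification report" rather than a binary real/fake verdict:
# list the detected edits with a rough severity so a viewer (or a court)
# can weigh credibility themselves. Names and weights are illustrative.

SEVERITY = {
    "recompressed": 1,    # routine: happens to almost everything online
    "cropped": 1,
    "color_adjusted": 1,
    "object_inserted": 3,
    "face_replaced": 5,   # the deepfake case
}

def credibility_report(detected_ops):
    """Return a human-readable summary of detected edits."""
    score = sum(SEVERITY.get(op, 2) for op in detected_ops)
    lines = [f"- {op} (severity {SEVERITY.get(op, 2)})" for op in detected_ops]
    # All-severity-1 edits are the kind of modification every online
    # image undergoes; anything heavier is flagged for human judgment.
    if score <= len(detected_ops):
        verdict = "routine edits only"
    else:
        verdict = "substantive manipulation"
    return "\n".join(lines + [f"overall: {verdict}"])

print(credibility_report(["recompressed", "cropped"]))
print(credibility_report(["recompressed", "face_replaced"]))
```

The design choice matches the testimony: because everything online is modified somehow, a useful tool reports what was done and how consequential it is, instead of pronouncing an image "authentic."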
I would encourage the social media industry and the platforms to focus on method: who is doing deepfakes, digital forgeries? Who is doing computational propaganda? Can we have a listing of those? They're not always nefarious, but then we know who the people are who are building the equipment; this is essentially the weaponry being used. I would encourage the government, then, to focus on actors. This is, in the case of the CIA, overseas; DHS in terms of protecting the homeland; and the State Department, which used to have the U.S. Information Agency, would be out there outing and overtly going after those actors that are doing the manipulation. I feel like we are still, after several years now, really slow to do this, and they're the only ones that can figure it out. When I have worked with social media teams and we spot actors we believe are doing things, we sometimes have to wait years for the government to go, yes, here's the Mueller report, when that had already been out in the news. The more rapidly the government can help, the more social media companies know what to take down. That attribution really only comes down to the U.S. government; they are the only ones with the tools to do it. Thank you, chairman, I yield back. Mr. Heck. Thank you, Mr. Chairman. First of all, Professor Citron, I want to make sure I understood correctly. If what happened to that reporter in India had happened in America, did I understand correctly that that would not constitute a crime per se? It might be understood as cyber stalking, which is a crime under federal law and most states' laws. The problem is it was sort of like death by a thousand cuts. To constitute cyber stalking you need a course of conduct, a persistent repetition by the same person. Any single post might be a first offense; what happens is it's like a cyber mob coming together. So one person puts up the photo or screen shot. Another person puts up the home address. Yet another person puts up "I'm available" with a screen shot, and all it says is "I'm available."
So the person who originated it, under current law, would likely not be subject to criminal prosecution. Right. Did I also understand you to say that even if it were, it would have modest impact? What I said was that if we had criminal laws that combated the deepfake phenomenon, really tailored to falsehoods and impersonations that create harm, I think law would be really important, but modest in its overall impact, because we need a partnership with technologists. I want to move on, but I also cannot help but have this terrible flash of Dante's Inferno: abandon hope, all ye who enter here. Whose job should it be to label? That wasn't clear. I kind of thought it might be the media platform companies. I think it would be the creator. Much as we do in the campaign finance space, where there are certain disclosure rules, we could say that if it's a political ad, you have to own it. If it's a foreign originator, how is it that we have any jurisdictional reach? We don't. There are no boundaries. As a matter of practical fact, even if it's created in America, transmitted to a foreign person, and then retransmitted, we have no means of enforcement. Right. So, on labeling in and of itself: look, we've got social media platforms. If they had some responsibility, they might act. I'm pretty skeptical about whether we're going to get there in the near future with the technology of detection, but assuming that's possible, a reasonable practice could be disclosure saying: this is a fake. Do with it what you will. So we actually have, as it were, a comparable truth verification mechanism currently: Snopes. And yet a member of my immediate family once posted how outrageous it was, and how the Constitution ought to be amended, because members of Congress can supposedly retire after one term and immediately collect full pension benefits. Every member up here has heard this. And have health care free for life, and their children go to college for free.
Not one word, not one letter of that assertion is true, which could have easily been verified if they had gone to Snopes. They didn't. And even if they did, in a political context, the truth is that the person who's perpetuating it may have a political agenda such that they may also, in a parallel fashion, engage in ad hominem attacks against the reliability of Snopes. So, see remarks above. I don't have much time left, but I'm really interested in Mr. Himes getting at the issue of political speech and the First Amendment. You mentioned that we are protected against being impersonated. It's not clear to me how we square case law, which has created a very high barrier. It's incredibly important to recognize that everything you've just described is totally protected speech. You know, the Supreme Court has made clear, in a case called United States v. Alvarez, a plurality and concurrences of the court: look, we protect falsehoods. As Justice Kennedy explained, it reawakens the conscience; it has us engage in counter speech and sort of recommit to citizenship. But there are times when falsehoods create certain kinds of cognizable harm, and there we should regulate. That includes defamation, even of public officials, when said with actual malice, that is, with knowledge of the falsity of the matter asserted. There are 21 crimes made up of speech. We can regulate certain words and images if they fall into one of those categories, or if we can meet strict scrutiny. Yes, the presumption is that it's protected speech if it's a falsehood, but for falsehoods that cause cognizable harm, the entire court has said that is a space in which we allow regulation. Thank you. Mr. Welch. Thank you very much. This is very helpful. There are different categories, and we're all trying to get our arms around them. There's the question of the First Amendment, which Mr. Heck and Mr. Himes were talking about. There's the question of foreign interference, and there's the question of economic harm and reputational harm. We are all learning as we go on this.
I have heard you describing how essentially the whole world of publishing is upside down; it doesn't exist like it did prior to the internet. So the question is whether we want to get back to some of the principles that applied pre social media. It's not that those principles necessarily have to be abandoned; they have to be applied, and they would apply in different ways for each of those categories. I just want to ask each of you whether we should get back to the requirement of an editorial function and a reasonable standard of care by treating these platforms as publishers. I know, Ms. Citron, you said yes. I'd be interested in hearing what the others say, just yes or no on that. Working with people in this area, I think the horse has sort of left the gate on this. I don't think we're going to be able to get back to that type of thing. What about with the statutory change that Ms. Citron was proposing? Who has the duty? May I be clear for one second? It wasn't that I was suggesting social media platforms be understood as publishers, strictly liable, but rather that we condition their immunity on reasonable practices. Those reasonable practices may be the content moderation practices they use right now. So I'm going to disagree with calling them publishers who would be strictly liable for defamation; that's not what I'm suggesting at all. Thank you for that clarification. That seems to be one fundamental question we would have to ask, because that would be a legislative action. Mr. Clark? I think you have a whack-a-mole issue here. They'll compose platforms to evade rules we put against platforms doing certain kinds of things. I do agree that it's very difficult to contemplate controlling speech in this way, because I think the habits of the entire culture have changed. What about this question of somebody going online and putting up a fake video that destroys an IPO? Who has the duty of care with respect to allowing that to be stated on their platform? Nobody has it?
I think we can authenticate content and users, and I think you can make users culpable for certain types of content they post. Who would be liable in that case about the destroyed IPO? I will defer to the lawyer. The speaker, the creator of the deepfake; not the platform, if the platform had reasonable practices of authentication and ex post moderation. Does the platform under current law have any duty? They have no liability. That seems like a very direct answer to a very direct question. One of the other issues that is debated: there's a different point of view about bias and what goes on the platforms. There would have to be some standard that wasn't seen as tilting the playing field for Republicans or Democrats. Is that possible to do? Was that something that was true pre social media, in the days of, well, you're the journalist, Jack. I was going to say it comes down to standards. We can actually use technology a bit here to create technological standards for making a judgment call as to whether something is or is not fake. If you have open standards, created by academia and time-tested, so people can kick the tires, that might take the political aspects out of this and provide assurance. It seems reasonable. Thank you all very much. My time is up. I yield back. Ms. Demings. Thank you so much, Mr. Chairman, and thank you to all of you for being here. This conversation this morning has been pretty disturbing and actually quite scary. The internet is the new weapon of choice. As I listened to the testimony and the questions here: just as an individual who goes out and violates laws or creates harm would be held accountable, I believe that any individual or entity that bullies or stalks or creates harm or becomes a public safety risk, and any entity that creates an environment for those things to happen, should be held accountable as well.
You know, when I think about those around the world who are not our allies, they want to create chaos in this country, and what a wonderful, easy way to be able to do that. The fake information is a problem, but the other problem is that it creates an environment where good people no longer believe the good guys. And boy, are we seeing that in our country. That is a major problem: the institutions that we have grown to depend on and believe in are no longer believed. That could create total chaos. Back to the earlier statement about a fake video being created in America and then transmitted to another country: could the simple act of transmitting the video be the violation? There has been a lot of discussion about there being no boundaries, and how to hold somebody in a foreign land accountable, but I'd love to hear your thoughts. There are two pieces to that: the procedural, jurisdictional question of whether it's constitutional to haul them into your court, and then there's the extradition question, which I will rely on Mr. Watts for. If you're in America and you transmit a video that creates a public safety concern or national security risk, could that be the violation? Is it directed at the United States, or is it in the United States and directed outside, and you transmit from here? That's a different question than I thought you were asking. Under the Fourteenth Amendment and how we think about personal jurisdiction, if you aim your activity at another state purposefully, we can haul you into court. The question is when it's an American directing their activity abroad; I would imagine that's contingent on extradition, and Mr. Watts can take it from there. I'm not a lawyer, and I try to avoid them, but I would say there's no specific provision around transmitting these abroad. It comes down to whatever country is affected, whether it breaks their laws, and the relationship with the United States. I'm not sure if it's ever been executed.
It is something that needs to be addressed, because what has been clear over the last several years is that these actors do not respect physical boundaries, and they are oftentimes the smartest content providers. Russia, China, Iran: they actually look to enlist people to make the content look more authentic, and they set people up. Sometimes those people are aware of it and sometimes not, and those who are aware are doing it willingly. If you look at the leaks, which was another hacking attempt trying to derail an election, it was largely North America that forwarded it to the world. I think we need to figure out those relationships and how we would handle them in terms of our own law enforcement. Thank you. We talked about the intelligence community and the national security entities. How should we task the intelligence community and the national security entities with assessing and forecasting the future impact of this technology? I think there are two parts: one is the purveyors and the actors that are going to use it. That's pretty straightforward from where I work. The part missing from the government perspective is where the technology is being developed. The number one place I would watch outside the U.S. government is Tel Aviv; it is a center for everything from cyber tools to influence tools used to implement operations, both good and bad depending on the perspective. When I talk to the government, they are really well informed about what nation-state actors are doing, but what is missing is the private sector and the openly available tools out there. Quickly, to his point, it is worth repeating: the fundamental techniques and tools are published online, and we compile metrics of rates of improvement, so I agree with what he said; it is easy to go and discover this information for ourselves. Thank you. I yield back. Dr. Wenstrup. Thank you, Mr. Chairman, and thank you for addressing the question that we ran out of time on, about extradition laws.
I appreciate having the opportunity to get to other punitive measures that we may be able to start talking about and thinking about. With extradition laws, we might end up with a lot of people hanging out in other people's embassies for many years rather than being extradited. At the same time, I don't find myself eager to engage with trial lawyers, but that is probably where we need to head with this. What punitive measures? Certainly monetary would be included, because people end up, as we pointed out, with huge monetary losses because of the fake stories. And what about prison time? We really need to consider being tough on this if it's to be effective. One thing I would add, on an open question I ran out of time for, is about sanctions. If you look at the GRU indictment in particular, which is July of '18, they are being sanctioned out of it, and so are the companies out of the February indictment. That's effective, and you can move down the chain of command such that hackers and influencers and propagandists don't want to be hired at those firms, because they know the risk that they can be individually sanctioned. It seems like it would be hard to execute, but once we got good at it, I think it would be a great asset: if you could tamp down the employment pool so that the best hackers and the best propagandists don't want to work for the authoritarian regimes, it would change the nature of things. We could also look at those pushing out tools, in terms of cyber and hacking tools that are being used for malicious purposes. You could go after those companies, which are oftentimes international; they're not necessarily tied to a nation. That would also send downward pressure across the information space and push it more undercover into places like the dark web, and that's okay, because that plays into our strengths. We have great intelligence capabilities and really good, sophisticated intelligence agencies. Then we would know where it is, but it would be a black market.
It changes the problem to our advantage. The other thing: you mentioned sanctions, and that does make a lot of sense, especially if it's a country where there's no way you're going to get some type of extradition agreement. It's proliferated because we have not responded. Thank you. I yield back, in case anyone else wants to respond. I have a few questions; can you talk a little bit about... I'm sorry, Mr. Castro. I enjoyed your article, and we had a chance to visit a few months back. There is hate speech, and there are fighting words, that are not protected the way political speech is; in making that determination, we have had to figure out the value of the type of speech or expression. So let me ask you: what is the value of a deepfake? To add to that, there is the value to the listener when we think about free speech, as well as the value to the autonomy rights of the speaker. The value could be profound. It could be that a deepfake contributes to art. In the last Star Wars we had Carrie Fisher. There is a lot of value in deepfakes, and I recognize what my co-panelists are suggesting. We do have guides in the law about falsehoods and impersonations, whether it's defamation or another kind of speech where we say this is fraud. We may go down the road where certain speech, like hate speech, isn't protected the same as political speech, but all of this is so contextual. I don't think we could have a one-size-fits-all rule; even as to impersonations, we've got to bring context to the fore. Let me follow up. I want to ask you all: one of the challenges we had with the Russian interference, particularly what they put on Facebook, was that the social media companies seemed unprepared for it, and there was no infrastructure for moderating those things. There is a creator who uses software, who then posts on social media, and then the traditional media picks it up and further proliferates it into American society. So where do we construct that infrastructure for vetting and moderating? Sometimes something goes up that's innocent, and then it gets used by someone else for a different message.
It gets out there and is twisted in a certain way down the line. Sometimes articles from The Onion, which are satire, get passed around as real. That is a good example. Adversaries use as much true content that suits their narrative as they can; that is a pretty standard disinformation approach, because repurposing real content is more favorable for adversaries than creating false content, which is the scenario David talked about. So I think the social media companies need to factor in the severity of impact. We know one of the worst scenarios would be a fake video about electoral systems breaking down on election day 2020. I want to ask a few follow-up questions. I don't know if any of you know how many views it has received to date, but I wonder, if there are X million, how many of those people will learn that it was fake and how many will be permanently misled. And furthermore, can you comment on the phenomenon that even if you are later persuaded that what you have seen of a person isn't true, psychologists tell us you may never lose the lingering negative impression of that person? So I wonder if you can comment on those two issues. Corrections tend not to travel as far as the original, so we would expect the same to hold here. No matter how good a job you or the press do putting out that it is a fake, the truth in this case, that what you've seen is false, is not going to be as visually impactful as seeing the video. If you care, you care about clarifications and fact checks; if you are just enjoying the media, you enjoy the media and the speaker you enjoy or experience. I don't know if it is a job for journalism, or whether teachers and schools should be educating people about whether you can believe what you see. This gets to the liars dividend. In politics there is a saying: the first time you hear an expression you make a personal attribution, the second time you say somebody once said it, and the third time it's personally yours.
The liars dividend is now out there, but how do we educate young people, or not so young people, about how to perceive media without distrusting everything, in a world in which there is this liars dividend? The more we see something, even if it is false, the more it confirms our worldview. Social psychology studies show that we will simply ignore a debunking; we will believe a lie if it confirms our worldview, its confirmation bias, which makes it incredibly hard to debunk a lie that confirms a closely held worldview. It's tough. That is the task for parents, educators, teachers. Ten years ago, remember, the critical thinking challenge was how do we teach students to do a Google search and not believe everything that comes up prominently in that search. Teachers struggled to explain that just because it's prominent doesn't mean it's real, and I think we will have the same struggle today; we've got to teach them about the deepfake phenomenon. We have a White House that has popularized calling things fake that are real, and in the country there is an environment in which there is license to call things fake that are credible. It seems that is pretty fertile ground for information that is truly fake. We may need to find other words for it, false, fraudulent, because "fake" has been so debased as a term that people don't know what you mean by it. It's worth noting that we've already seen the liars dividend invoked from the bully pulpit, so I think we have a problem on our hands. Do you think there's optimism for tools where the checking has been done ahead of time, so people would have had access to it and wouldn't have had to go search for it?
Particularly Russia dividing us by pushing out a fake video of police violence against people of color; you could certainly push out videos that are enormously jarring and disruptive. Once you introduce a video like that, with that negative impression, you can't unwind the consequences of what happens in the community. It's hard to imagine that's not going to happen, because of the low barriers to entry, and there will be such easy deniability. If I could add, there is some good news: if you watch Facebook's newsroom, they are taking accounts down, so the spread has dropped precipitously. We have curricula for evaluating sources in the government; I was trained on one at the FBI Academy, and the Central Intelligence Agency has one on how to evaluate information, knowledge, and expertise. They teach this as an unclassified sort of course; the question is how you adapt it back into the online space. It's not just young people on social media; it's the older generation, which has come to this technology late and doesn't understand it. They understand the way newspapers are produced, where the outlet is coming from, who the authors are, but not this. I was with a group of students discussing how we help older generations who are new to social media, or with less experience evaluating sources. You can send them tips: do you know where the outlet is physically located, and who the author is? It's not just the young; there are outlets known for extremism that repackage content for different audiences in the United States. Is the technology already at the stage where it can produce a video that people cannot tell is inauthentic, in other words, where you cannot tell whether the video is authentic? There are examples out there that, if taken out of context and sent out with a story or message attached, people will believe. There is a video out there that showed a plane flying upside down, very realistic looking, and I think what people need to do is get confirmation from other sources that something really happened; a video in isolation is not enough.
But if that's what you're talking about, independent of whether it passes the test, so to speak, yes, that is the type of technology that's out there. Will it always be possible to disprove the audio by disproving the circumstances around it? In other words, if there were audio of me on the phone, to show that I was not in this place at this time; or if there is a video, to show she was somewhere else at the time. Do you see the technology getting to the point where, in the absence of the ability to prove externally that the video or the audio is fake, the algorithms that help produce the content would be so good that the best you will be able to do is a computer analysis that gives you a percentage likelihood that this is a forgery, 75 percent, say, and you will never be able to get to 100? Are we headed to the day when it won't be possible to show that something we have seen or heard is legitimate? The goal was exactly that: coming up with a quantitative scale of manipulation or deception. I don't know if they've gotten there; I left partway through. But there is going to be a point where we can throw everything we have at these types of techniques and there is still a question about whether it's authentic or not. We can do forensic analysis with tools and voice verification, but like a court of law with one side saying one thing and the other side another, there will be cases where there is nothing definitive. On that optimistic note we will conclude, and my thanks for your testimony and recommendations.