
6:00 in the morning, so we have a few groggy members here on the committee. In the heat of the 2016 election, as the Russian hacking and dumping operation became apparent, my primary concern was that the Russians would begin dumping forged documents along with the real ones they stole. It would have been all too easy for Russia or another malicious actor to seed forged documents among the authentic ones, making it almost impossible to identify or rebut the fraudulent material. Even if a victim could expose the foreign forgery, the damage would be done. Three years later, we are on the cusp of a technological revolution that could enable even more sinister forms of deception and disinformation by malign actors, foreign or domestic. Advances in AI and machine learning have led to the emergence of advanced digitally doctored types of media, so-called deep fakes, that enable malicious actors to foment chaos or crisis and disrupt entire campaigns, including that for the presidency. Rapid progress in artificial intelligence algorithms has made it possible to manipulate media, video, imagery, audio, and text, with incredible, nearly imperceptible results. With sufficient training data, these powerful deep fake generating algorithms can portray a real person doing something they never did or saying words they never uttered. These tools are readily available and accessible to experts and novices alike, meaning that attribution of a deep fake to a specific author, whether a hostile intelligence service or a single internet troll, will be a constant challenge. What is more, once someone views a deep fake or fake video, the damage is largely done; even if later convinced that what they saw was a forgery, that person may never completely lose the negative impression the video left with them. It is also the case that not only may fake videos be passed off as real, but real information can be passed off as fake.
This is the so-called liar's dividend, in which people with a propensity to deceive are given the benefit of an environment in which it is increasingly difficult for the public to determine what is true. To give our members and the public a sense of the quality of deep fakes today, I want to share a few short examples, and even these are not the state of the art. The first comes from Bloomberg Businessweek and demonstrates an AI-powered clone of a journalist's voice, so let's watch now. To really put my computer voice to the test, I'm going to call my dear sweet mother and see if she recognizes me. Hey, Mom, what are you guys up to today? Well, we didn't have any electricity early this morning and we're just hanging around the house. I'm just finishing up work and waiting for the boys to get home. Okay. I think I'm coming down with a virus. Oh, well, you feel bad, huh? I was messing around with you. You were talking to a computer. I thought I was talking to you. It's amazing. Bad enough that it was fake, but deceiving his mother and telling her he has a virus seems cruel. The second clip comes from Quartz and demonstrates a puppet-master type of deep fake video. As you can see, these people are able to co-opt the head movements of their targets; married with convincing audio, you can turn a world leader into a ventriloquist's dummy. Next, a brief CNN clip highlights research on deep fakes from an acclaimed expert at UC Berkeley, featuring an example of a so-called face swap video in which Senator Elizabeth Warren's face is seamlessly transposed onto the body of SNL actress Kate McKinnon. I haven't been this excited since I found out my package from L.L. Bean had shipped. I was ready to fight. So, the only problem with the video is that Kate McKinnon actually looks like Senator Warren, but both were Kate McKinnon, with Elizabeth Warren's face swapped onto her. But it shows you just how convincing that kind of technology can be.
These algorithms can also learn from pictures of real faces to make completely artificial portraits of private persons who do not exist at all. Can anyone here pick out which faces are real and which are fake? And of course, all four are fake. All four of those faces are synthetically created, and none of those people are real. Think ahead to 2020 and beyond. One does not need any great imagination to envision even more nightmarish scenarios that would leave the government, the media, and the public struggling to discern what is real and what is fake: a state-backed actor creates a deep fake video of a candidate accepting a bribe; an individual hacker claims to have stolen a conversation between two leaders when in fact no such conversation took place; a troll farm writes false or sensational news stories at scale, overwhelming social media platforms and outpacing journalists' ability to verify and users' ability to trust what they are seeing or reading. What enables deep fakes is the ubiquity of social media and the velocity at which false information can spread. We got a preview of that recently when a doctored video of Speaker Nancy Pelosi went viral on Facebook, receiving millions of views in the span of 48 hours. That video was not an AI-assisted deep fake but a rather crude manual manipulation that some have called a cheap fake. Nonetheless, the video's virality represents the scale of the challenge we face and the responsibilities that social media companies must confront. The companies have taken different approaches, with YouTube deleting the altered video of Speaker Pelosi, and Facebook labeling it as false and throttling back the speed at which it spread once it was determined to be fake. Now is the time to put in place policies to protect users from this kind of misinformation, not in 2021 after viral deep fakes have polluted the 2020 elections. By then it will be too late.
So in keeping with a series of open hearings that examines the challenges to national security and our democratic institutions, this hearing is devoted to deep fakes and synthetic media. We need to understand the implications of the AI technology and the internet platforms that give them reach before we can consider appropriate steps to mitigate the potential harms. We have a distinguished panel of experts to help us understand and contextualize the potential threat of deep fakes, but first I'd like to recognize Ranking Member Nunes for any opening statement he would like to give. Thank you, Mr. Chairman. I join you in your concern about deep fakes, and I want to add to that fake news, fake dossiers, and chicanery in politics. I do think that, in all seriousness, this is real. If you get online you can see pictures of yourself, Mr. Chairman, on there, quite entertaining, some of them. Maybe entertaining for you. I decided not to play them today on the screen. But in all seriousness, I appreciate the panelists being here and look forward to your testimony. I yield back. I thank the Ranking Member. Without objection, the opening statements will be made part of the record. I want to welcome today's panel: Jack Clark, the policy director of OpenAI, a research and technology company in San Francisco, and a member of the Center for a New American Security's task force on artificial intelligence and national security. Next, David Doermann, a professor and director of the Artificial Intelligence Institute at the University at Buffalo; until recently he was the program manager of DARPA's Media Forensics program. Danielle Citron, a professor of law at the University of Maryland, who has written several articles on the potential impact of deep fakes on national security and democracy. And finally, Mr.
Clint Watts, a distinguished research fellow at the Foreign Policy Research Institute and a senior fellow at the GMF's Alliance for Securing Democracy, whose recent scholarship has addressed social media influence operations. Welcome to all of you, and we will start with you, Mr. Clark. Chairman Schiff, Ranking Member Nunes, and committee members, thank you for the invitation to testify about the national security threats posed by the intersection of AI, fake content, and deep fakes. What are we talking about when we discuss this subject? Fundamentally, we are talking about digital technologies that make it easier for people to create synthetic media: video, images, audio, or text. People have been manipulating media for a very long time, as you well know, but things have changed recently. I think there are two fundamental reasons why we are here. One is the continued advancement of computing capabilities; the physical hardware we use to run software has become significantly cheaper and more powerful. At the same time, software has become increasingly accessible and capable, and some of that software is starting to incorporate AI, which makes it dramatically easier to manipulate media and allows for a step change in functionality for things like video editing or audio editing, which was previously very difficult. The forces driving cheaper computing and easier-to-use software are fundamental to the economy and to many innovations we have had in the last few years. So when we think about AI, one of the confounding factors here is that the same AI
technologies used in the production of synthetic media or deep fakes are also likely to be used in valuable scientific research, used by scientists to allow people with hearing issues to understand what other people are saying, or used in molecular assays. At the same time, these techniques can be used for purposes that justifiably cause unease, like being able to synthesize the sound of someone's voice, impersonate them on video, and write text in the style they use online. We have seen researchers develop techniques that combine these things, allowing them to create a virtual person who can say things they haven't said and appear to do things they haven't necessarily done. I'm sure that members of the committee are familiar with their run-ins with the media and know how awkward it is to have words put in your mouth that you didn't say. Deep fakes take this problem and accelerate it. How might we approach the challenge? I think there are several interventions we can make that will improve the state of things. One is institutional interventions. It may be possible for large-scale technology platforms to develop and share tools for the detection of malicious synthetic media at both the individual account level and the platform level, and we can imagine these companies working together privately, as they do today in cybersecurity, where they exchange threat intelligence with each other and with other actors, to develop a shared understanding of what this looks like. We can also increase funding. As mentioned, Dr. David Doermann led a program here; we have existing initiatives looking at the detection of these technologies, and I think it would be judicious to consider expanding that funding further so we can develop better insights.
I think we can also measure this. What I mean by measurement is that it's great that we are here now, ahead of 2020, but these technologies have been in open development for several years. It has been possible for us to read research papers, read code, and talk to people, and we could have been creating metrics for the advance of this technology for several years. I strongly believe that government should be in the business of measuring and assessing these threats by looking directly at the scientific literature and developing a base of knowledge from which to work out next steps; being forewarned is being forearmed here, and we can do that. I think we also need to do work at the level of norms. At OpenAI we have been thinking about different ways to release or talk about technology we develop. I think it's challenging, because science runs on openness and we need to preserve that so that science continues to move forward, but we need to consider different ways of releasing technology, or talking to people about the technology that we are creating ahead of releasing it. Finally, I think we need comprehensive AI education. Underpinning all of this work is that people don't know what they don't know, so we need to give people the tools to understand that this technology has arrived, and though we may make a variety of interventions to deal with the situation, people need to know that it exists. As I hope this testimony has made clear, I don't think AI is the cause of this. I think AI is an accelerant to an issue that has been with us for some time, and we need to take steps to deal with this problem, because the pace of this is challenging. Thank you very much. Thank you. Mr. Doermann. Thank you, Chairman Schiff, and thank you for the opportunity to be here this morning to discuss the challenges of countering media manipulation at scale.
For more than five centuries, authors have used variations of the phrase "seeing is believing," but in just the past half decade we have come to realize that's no longer always true. In late 2013 I was given the opportunity to join DARPA as a program manager, addressing a variety of challenges facing our nation's military and intelligence community. I am no longer a representative of DARPA, but I did start the Media Forensics program, MediFor, which was created to address the many technical aspects of these problems. The general problem MediFor addresses is our limited ability to analyze, detect, and address manipulated media, which at the time was being used with increased frequency by our adversaries. It was clear that our manual processes, despite being carried out by exceptionally competent analysts and personnel in the government, could not deal with the problem at the scale at which manipulated content was being created and proliferated. In typical DARPA fashion, the government got ahead of the problem, knowing it was a marathon, not a sprint, and the program was designed to address both current and evolving manipulation capabilities, not with a single point solution but with a comprehensive approach. What was unexpected was the speed at which this manipulation technology would evolve. In just the past five years, we have gone from a new technology that could produce novel results, but nowhere near what could be done manually with basic desktop editing software, to open source software that can take the manual effort out of the equation. There is nothing fundamentally wrong with or evil about the underlying technology that gives rise to the concerns we are testifying about today. Like basic image and video desktop editors, a deep fake is only a tool, and there are more positive applications than negative ones.
There are currently solutions to identify deep fakes, but only because the focus of those developing deep fake technology has been on visual deception, not on covering up trace evidence. If history is any indicator, it's only a matter of time before the current detection capabilities become less effective, in part because some of the same mechanisms used to create this content can also be used to cover it up. I want to make it clear that combating synthetic and manipulated media at scale is not just a technical challenge; it's a social one as well, as I'm sure other witnesses will testify this morning, and there's no easy solution. It is likely to get much worse before it gets better, yet we have to continue to do what we can. We need to get tools and processes into the hands of individuals, rather than relying solely on the government or on social media platforms to police content. If individuals can perform a sniff test and the media smells of misuse, they should have ways to verify it, disprove it, or easily report it. The same tools should be available to the press, to social media sites, and to anyone who shares and uses this content, because the truth of the matter is that the people who share this material are part of the problem. We need to continue to work towards being able to apply automated detection and filtering at scale. It is not sufficient only to analyze questioned content after the fact; we need to apply detection at the front end of the distribution pipeline. And even if we don't take down or prevent manipulated media from appearing, we should provide appropriate warning labels that suggest the content is not real or not authentic or not what it purports to be, and that's independent of whether those decisions are made by humans, machines, or a combination of the two. We need to continue to put pressure on social media companies to realize that the ways their platforms are being misused are unacceptable, and that they must not allow things to get worse.
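The "detection at the front end of the distribution pipeline" idea above can be sketched in miniature. This is an illustrative example, not anything described in the testimony: a tiny "average hash" perceptual fingerprint, one common building block platforms can use to catch re-uploads of media that has already been flagged as manipulated. Images are modeled here as small 2D grayscale lists, and every function and threshold name is hypothetical.

```python
# Minimal sketch, assuming images arrive as 2D lists of grayscale values
# (real systems decode and downscale actual frames first).

def average_hash(pixels):
    """Return a bit string: '1' where a pixel is above the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(a, b):
    """Count differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

def looks_like_known_fake(pixels, flagged_hashes, max_distance=2):
    """Sniff test: near-match against hashes of already-flagged media."""
    h = average_hash(pixels)
    return any(hamming(h, known) <= max_distance for known in flagged_hashes)

# A flagged 4x4 frame and a lightly re-encoded copy of it.
flagged = [[10, 200, 10, 200],
           [10, 200, 10, 200],
           [10, 200, 10, 200],
           [10, 200, 10, 200]]
reupload = [[12, 198, 11, 201],
            [ 9, 202, 10, 199],
            [10, 200, 10, 200],
            [10, 200, 13, 197]]
known = {average_hash(flagged)}
print(looks_like_known_fake(reupload, known))  # True: a near-duplicate
```

The design choice worth noting is that a perceptual hash tolerates re-encoding noise, which is exactly why it catches re-uploads that a cryptographic hash would miss; it does nothing, however, against a brand-new fake, which is why witnesses pair it with forensic detectors and labeling.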
Let there be no question: this is an arms race. The better the manipulators get, the better the detectors need to be, and the manipulators outnumber the detectors by orders of magnitude. It is a race that may never end and may never be won, but it is one where we must close the gap and continue to make it less attractive financially, socially, and politically to propagate false information. Like spam and malware, it is always going to be a problem, but it may be the case that we can level the playing field. When the MediFor program was conceived at DARPA, one thing that kept me up at night was the concern that an adversary could create entire events with little effort. These events might include video of a scene from multiple angles, manipulated video content and text delivered through various mediums, providing an overwhelming amount of evidence that an event occurred. This could lead to social unrest or retaliation before it is countered. If the past five years are any indication, that day is not very far in the future. Thank you. Thank you. Professor Citron. Thank you. Thank you for having me here today to talk about the phenomenon of deep fakes, the risks they pose, and what law can and should do. I am a professor of law at the University of Maryland School of Law. There are a few phenomena that come together to make deep fakes particularly troubling when they are provocative and destructive. The first is that, as human beings, video and audio are so visceral that we tend to believe what our eyes and ears are telling us, and we tend to believe and share information that confirms our biases. That is particularly true when the information is novel and negative; the more salacious it is, the more willing we are to pass it on. And we are seeing deep fakes, and will see them, in social networks that are ad-driven, whose enterprise is to have us click and share, so the salacious will be spread virally. So let me describe some of the many harms we have written about, focusing on concrete ones and on what law can and should do.
There are concrete harms in the here and now, especially for individuals. Take an investigative journalist in India. She has long been used to getting death threats and rape threats; it's par for the course for her. But she wrote a piece in April 2018, and what followed was that posters circulated over the internet: deep fake sex videos of her. Her face was morphed into pornography, and the first day it went viral it was on every social media site, it was on WhatsApp, on millions of phones in India. The next day, paired with the deep fake sex video were rape threats, her home address, and the suggestion that she was available for sex. The fallout was significant. She had to basically go offline; she couldn't work. Her sense of safety and security was shaken. It upended her life, and she had to withdraw for several months from online platforms. So the economic and the social and the psychological harm is profound. And it's true, as in my work on cyberstalking, that this phenomenon is going to be increasingly felt by women and minorities and people from marginalized communities. As for institutions, one can imagine a deep fake released the night before an IPO, timed just right, with the CEO saying something that he never said or did, basically admitting that the company was insolvent. Released the night before, the deep fake could upend the IPO, and the market will respond far faster than we can debunk it. As we talk, I will let my co-panelists take some of the other national security concerns, like the tipping of an election or the upending of public safety, but the next question is: what do we do about it? I feel like our panel will be in heated agreement that there is no silver bullet. We need a combination of law, markets, and, really, societal resilience to get through this. But law has a modest role to play. There are civil claims that victims can bring.
They can sue for defamation or intentional infliction of emotional distress, but the hard truth is that it is incredibly expensive to sue, and criminal law offers too few levers for us to push. At the state level there are a handful of criminal defamation laws and impersonation laws, and at the federal level there is a prohibition on impersonation of a government official. Professor Chesney and I are writing a model statute, tailored to address harmful, false impersonations, that would capture the harm here. But of course there are practical hurdles for any legal solution: you have to find the defendant in order to prosecute them, and you have to have jurisdiction over them. And the platforms, the intermediaries, our digital gatekeepers, are immune from liability, so we cannot use the legal incentive of liability to get them on the case. I see my time is running out; I look forward to your questions, and thank you. Thank you very much. Mr. Watts. Chairman Schiff, Ranking Member Nunes, thank you for having me here today. All advanced nations recognize the power of artificial intelligence to empower their militaries, but the countries with the most advanced AI capabilities and unlimited access to large data troves will gain enormous advantages in information warfare. AI provides purveyors of disinformation the ability to reconnoiter American social media audiences, identify vulnerabilities, and create modified content and digital forgeries advancing false narratives. Historically, each advance in media, from text to speech to video to virtual reality, more deeply engages consumers, enriching the context of experience and shaping a user's reality. These media can provoke emotional responses. False video and audio, once consumed and believed, can be extremely difficult to refute and counter. Moving forward, I estimate that Russia, an enduring purveyor of disinformation, will continue to pursue the acquisition of synthetic media capabilities, and I suspect it will be joined and outpaced by China, whose artificial intelligence capabilities rival those of the U.S.
, powered by enormous amounts of data, including vast amounts of information stolen from the U.S., and which has shown a propensity to employ synthetic media. These regimes will likely use deep fakes to discredit dissidents, incite fear, and sow discord among American audiences and America's allies. Deep fake proliferation presents two clear dangers. First, the deliberate development of false synthetic media will target U.S. officials, institutions, and democratic processes, with the goal of demoralizing the American constituency. Second, the circulation of deep fakes may incite mobilizations under false pretenses, initiating public safety crises and sparking outbreaks of violence. WhatsApp offers a relevant example of how bogus messages can fuel violence, and with the spread of deep fakes, U.S. diplomats and military personnel will be prime targets. Audiences in the developing world, whose media consumption has jumped from analog, in-person conversations straight to social media sharing, lacking any form of verification filter, will likely be threatened by bogus media campaigns. Recent mobilizations and rumored protests, had they been accompanied by fake audio or video content, could have been far more damaging. I would also point to a story from the Associated Press, which shows the use of a synthetic picture for what appears to be espionage purposes on LinkedIn, a honey-potting technique. Recent public discussion of deep fake deployment focuses on foreign adversaries, but a very serious threat may come not from abroad but from home, and not from nation-states but from the private sector. I have focused on authoritarian nation-states, but a range of advanced persistent manipulators will develop and acquire deep fake capabilities. Recent examples of disinformation and misinformation suggest it could be oligarchs, corporations, political action groups, and activists with significant financial support that will seek out synthetic media capabilities and amplify deep fakes in international or domestic contexts.
The net effect is the same: the degradation of institutions, lowered faith in electoral processes, and violence by individuals and groups mobilized under false pretenses. First, Congress should implement legislation prohibiting U.S. officials, elected representatives, and agencies from creating and distributing false and manipulated content. The U.S. government must always be the purveyor of facts and truth to its constituents, assuring the effective administration of democracy via productive policy debate. Second, policymakers should work with social media companies to develop standards. Third, the U.S. government should partner with the private sector to implement digital verification signatures. Fourth, social media companies should enhance labeling of synthetic content and work out as an industry how and when fake content should be marked. Not all synthetic media is nefarious, but consumers should be able to determine the source of information and whether it is an authentic depiction. Fifth, and most pressing, the U.S. government should maintain intelligence on adversary capabilities for deploying fake content and on the proxies they employ to conduct such information campaigns. The Departments of Defense and State should immediately develop response plans for deep fake smear campaigns and violent mobilizations overseas. And the last thing, echoing my fellow panelists, is public awareness of deep fakes and tamping down on attempts to subvert democracy. I'd like to see us help the public make better decisions about the content they are consuming and how to judge that content. Thank you. Thank you all. We will now proceed with questions, and I recognize myself for five minutes. Two questions, one for Mr. Watts and one for Professor Citron. How broad is the immunity that the social media platforms enjoy, and is it time to do away with that immunity so the platforms are required to maintain a certain standard of care?
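The digital verification signature recommendation in the testimony above can be sketched as follows. This is an illustrative sketch with hypothetical names, not an official standard: a publisher signs media bytes at publish time, and a platform later verifies the tag before labeling the content as source-verified. Real provenance schemes use public-key certificates; HMAC with a shared secret is used here only to keep the sketch to the Python standard library.

```python
import hashlib
import hmac

def sign_media(media_bytes, publisher_key):
    """Publisher side: produce a tag bound to these exact bytes."""
    return hmac.new(publisher_key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes, tag, publisher_key):
    """Platform side: constant-time check that the bytes are unaltered."""
    expected = hmac.new(publisher_key, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

key = b"publisher-secret-key"          # hypothetical shared secret
clip = b"frame-data-of-original-clip"  # stands in for real video bytes
tag = sign_media(clip, key)

print(verify_media(clip, tag, key))              # True: untouched
print(verify_media(clip + b"edited", tag, key))  # False: manipulated
```

The point of the sketch is the asymmetry it creates: a signature cannot prove a video is true, but its absence or failure gives platforms and consumers a cheap, automatic reason to distrust a clip that claims an authoritative source.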
It seems not very practical to think about bringing people to justice who are halfway around the world, given the difficulty of attribution, or the fact that, given the cheap cost of this technology now, just about anyone can employ it. Is it time to take that step? Was it appropriate for one social media company to leave up the Pelosi video, even labeling it in a certain way? And Mr. Watts, what is a proportional response should the Russians start to dump deep fakes, releasing a deep fake of Joe Biden to try to diminish his candidacy? What should the U.S. response be? Should it be a cyber response, not a tit-for-tat in the sense of doing a deep fake of Putin, but rather some cyber reaction, or are sanctions a better response? How do we deter this kind of foreign meddling, realizing that deterrence is only going to be one part of the problem? Professor? I'm going to start with how broad the immunity is and then why it is time for us to amend Section 230 of the Communications Decency Act. It comes from a law passed in 1996, the Communications Decency Act, which was an anti-porn provision; that was the objective of the Communications Decency Act. Most of that law was struck down, but what remains is a provision called Good Samaritan blocking and filtering of offensive content. It has been interpreted really broadly to say that if you under-filter content, if you don't engage in any self-monitoring at all, even if you encourage abuse, you are immune from liability for user-generated content. That means that revenge porn operators can gleefully say they are immune from liability while encouraging people to post their exes' nude photos, because they are not generating or co-creating the content. So the question is, in a world where, more than two decades later, the internet has dominant players and is no longer in its infancy, is it time to reassess? I think the answer is yes: we should condition the immunity.
It shouldn't be a free pass; it should be conditioned on reasonable content moderation practices. We have written a sample statute that you could adopt if you chose, which would condition immunity on reasonable content moderation practices. Then the question, of course, is: in any given case, are platforms making the right choices? Under an approach that looks to the reasonableness of content practices, we would look at a platform's approach to content moderation generally speaking, not any given decision about content. So let's take the Pelosi video. I think the answer is that it should be taken down. We should have a default rule that for impersonations or manipulations that do not reflect what we have done or said, platforms should, once they figure it out, take the content down. The technology is such that we can't detect it yet; we can't automatically filter and block. But even once we have figured it out, we are already in a place where the public has deep distrust of the institutions at the heart of our democracy, and we have an audience primed to believe things like manipulated video of lawmakers. I would hate to see deep fakes where a prominent lawmaker is shown purportedly taking a bribe that they never took, and I hope that platforms, even if we cannot impose legal liability on them, come to see themselves as responsible purveyors. Mr. Watts. I'd like to start with a basic principle of information warfare, from Robert Knapp, a professor who essentially studied wartime rumors. His quote was that once rumors are current, they have a way of carrying the public with them; the more a rumor is told, the greater its plausibility. And he wrote that in 1944. It comes down to who is there first with the most. In terms of how we deal with this, there are several parts. One is that we have to have a plan, and it's a multipart plan. The other part is responding quickly.
This is not the tradition of our government. In Iraq, when fake al Qaeda propaganda was put out to try to inspire people to show up at places, we had rapid response teams that would show up and shoot video and audio to show it was not true. That's a great example of what our plan should be if this sort of material starts to leak out: the U.S. government, or any official government agency, should immediately offer a counter based on fact about what is actually going on. This happened in the summer of 2016 at an air base, with Russian state-sponsored propaganda about a potential coup, that maybe the base was surrounded, maybe there were protests. We should be able to turn on the cameras and say, this is not happening. The faster we do that, the less chance people see it first, see it often, and believe it. The second part, I think, comes down to the political parties, Republican and Democrat. If smears come through, they should rapidly refute them and put out a basis of truth, that this candidate or these candidates were not there, and do that in partnership with the social media companies. I would not go as far as saying every piece of synthetic video that is uploaded to a social media platform needs to come down, and I'm glad you brought up former Vice President Biden. One of the classic articles about former Vice President Biden comes from The Onion: that he was waxing his Camaro in the driveway of the White House. It was a comedy bit, and it had manipulated photos, manipulated content, in it. If we went to that extreme, we would have a country where everything that has ever been changed or modified for any reason would have to be policed, and we would be asking the private sector to police it. I would offer a different system, which is triage: how do social media companies accurately label content as authentic or not? The danger with social media is when the source is not there; you should be able to refer back to a base source. They should be able to triage.
The three areas where I would suggest they immediately triage are these. First, if they see something spiking in terms of virality, have it turned over for human review, downgrade it so it does not move into news feeds, and help the mainstream media understand what is manipulated content; that is the one we are most concerned about. The second is outbreaks of violence and public safety. And third, anything related to elected officials or public institutions should immediately be flagged, pulled down, checked, and then given context. The public needs to be given context, so that we are not really suppressing all freedom of speech or all development in this area, because there are legitimate reasons to use synthetic media for entertainment, comedy, and visualizations. I'd love to follow up about a proportionate response to a foreign adversary. Give me 20 seconds and I can tell you. Refuting, number one. Number two, I think offensive cyber is in place, and I like what the NSA has done since 2018. And then number three, more aggressive responses in terms of sanctions. I like sanctions around troll farms and the cutouts where the content comes from. Mr. Nunes. Thank you, Mr. Chairman. How do you put in filters at these tech oligarch companies that are not developed by partisan left-wing employees, where it's the conservatives who get banned and not the Democrats? The Pelosi video was taken down, that's fine, but I can tell you there are videos up of Republicans that go on and on and on. So it's all in who is building the filter, right? Are you asking... you're the one talking about filters. What I was suggesting is that it would be impossible to ex ante filter the deep fake content. We can't detect it, as far as the state of the art goes, nor in the arms race will we be able to really filter it. What I was saying is that for something like a video that is clearly doctored, an impersonation, not satire, not parody...
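The triage rules described above can be sketched as a small routing function. This is an illustrative sketch, not anything proposed verbatim in the testimony: all field names and thresholds are hypothetical, and a real system would feed far richer signals into the decision.

```python
# Minimal sketch, assuming each post carries its category tags and a
# short history of hourly share counts.

SENSITIVE = {"elected_official", "public_institution", "public_safety"}

def virality_spike(hourly_shares, factor=5.0):
    """True if the latest hour exceeds `factor` times the trailing average."""
    *history, latest = hourly_shares
    if not history:
        return False
    baseline = sum(history) / len(history)
    return latest > factor * max(baseline, 1.0)

def triage(post):
    """Return an action: 'review' queues for humans and downranks meanwhile."""
    if post["categories"] & SENSITIVE:
        return "review"
    if virality_spike(post["hourly_shares"]):
        return "review"
    return "allow"

meme = {"categories": set(), "hourly_shares": [40, 50, 45, 60]}
smear = {"categories": set(), "hourly_shares": [10, 12, 11, 400]}
official = {"categories": {"elected_official"}, "hourly_shares": [3]}

print(triage(meme))      # allow
print(triage(smear))     # review
print(triage(official))  # review
```

Note the design choice this encodes: nothing is judged true or false by the algorithm itself. The automated layer only decides what gets slowed down and escalated to human reviewers, which matches the witness's point that triage, not blanket filtering, is the workable middle ground.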
There are wonderful uses for deep fakes: historical ones, rejuvenating ones, ones people create about themselves. I'm not suggesting all deep fakes come down. I think I mostly agree with you, other than I just don't know how you... I think the challenge is how to implement it, which is what you have been studying. These are really hard problems of content moderation. I have worked with companies for ten years now, in particular on the issues of nonconsensual pornography, threats, and stalking, and it's such a contextual question that you can't proactively filter. But when it's reported, the question is, when we see videos going viral, there are ways in which we should expect companies to react responsibly, and it absolutely should be bipartisan. It shouldn't be ideology that drives the question. The question is: is this a misrepresentation in a defamatory way, such that we would say it's a falsehood that is harmful to reputation, an impersonation, and we should take it down? That is the default I'm imagining for social media companies, but it would be ex post. But it's a challenge. You talked about the '96 law that needs to be changed, and I think it has to be one way or the other. They would have to truly be an open public square, and then it's very risky to filter, because whoever is developing the filter puts their own bias into the filter. The bill did not imagine an open public square where private companies couldn't filter; the opposite. It was designed to encourage self-monitoring and to provide immunity in exchange for Good Samaritan filtering and blocking of offensive content. So the entire premise of Section 230 is to encourage and provide immunity so that there would be filtering and blocking, because Congress knew it would be too hard for Congress or the FTC to get ahead of these problems. And that was 1996. Imagine the scale we face now.
I think we should preserve the immunity but condition it on reasonable content moderation practices, so that sites that literally traffic in abuse, that encourage illegality, do not enjoy immunity from liability. How do we draft legislation that would... Yep. Happy to tell you how. Section 230(c)(1) says that no online service shall be treated as the speaker or publisher of someone else's content. What we can do is change Section 230(c)(1) to say that no online service that engages in reasonable content moderation practices shall be treated as the speaker or publisher of somebody else's content. So we can change Section 230 with imagination. It depends on the definition of reasonable. It's here that lawyers say we can figure out what is reasonable; it's called tort law. Negligence is built on the foundation of reasonableness. So often, law moves in a pendulum. We often start with no liability because we want to protect businesses, and we should. Then we experiment and realize there's a lot of harm, then we overreact and impose strict liability, and then we get to the middle, which is where negligence lives: reasonable practices. And we have an industry where content moderation has been going on for ten years; I have been advising Twitter and Facebook all of that time. There are meaningful, reasonable practices that have emerged and taken effect in those ten years, so we have a guide. It's not a new issue in 2019. So we can come up with reasonable practices. Thank you. I yield back. Mr. Himes. Thank you, Mr. Chairman. Doctor, I want to get a quick sense from you of what the status quo is with respect to our ability to detect, but before I do that, I want to highlight something that I think is of very immediate and intense interest to the intelligence community. Mr. Watts, you said if something is happening on a base somewhere, just turn on the cameras. I'm not sure that's right.
If you can create a deep fake, there's no reason why you can't create a deep fake from the camera to the screen. The point I'm trying to make is that the intelligence community relies on full-motion video and photographs, and one threat is that our intelligence community, using its own assets, might not be able to tell fact from fiction. Is that correct? When you say let's turn on the cameras, one of my recommendations is digital verification. These folks will know better than I do because they're more technically sound than I am, but I mean digital verification of the date, time, and location at which content was recorded. There is already blockchain technology that could support this. If you, the U.S. government, turn on your cameras, it can be verified by news agencies and reported, have it on C-SPAN and distributed different ways, but we have to make sure we have the ability to verify that content, so that if an impersonation is done, people will know which one to trust. I'll defer to them, but some of this is already being done. I would want that as well. That leads into the question. I understand there's no silver bullet here; it's a cat and mouse game. Tell us, where are we in that cat and mouse game? Should we expect to have undetectable deep fakes in a year, two years, ten years? Where are we today, and how imminent is the challenge? I think there is the risk of having undetectable content that gets manipulated and shared online. Right now, things like compression, a very low resolution version of a video, mean the artifacts can be destroyed; the camera fingerprint, the trace evidence, can be destroyed with very simple types of manipulation on top of the deep fake process, or any type of manipulation. The challenge we have, though, is that we do have point solutions for a lot of these components.
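The digital-verification idea raised here, binding date, time, and location to footage at the moment of capture so later copies can be checked, can be sketched minimally with a keyed digest. This is an illustrative simplification: a real deployment would use an asymmetric signature held in the camera's secure hardware rather than the shared `DEVICE_KEY` assumed below, and `sign_capture`/`verify_capture` are hypothetical names, not any real standard's API.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; in practice this would be a private key
# living in the recording device's secure hardware.
DEVICE_KEY = b"example-device-key"

def sign_capture(media_bytes, timestamp, location):
    """Bind the pixels to capture metadata with a keyed digest at record time."""
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "timestamp": timestamp,
        "location": location,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(media_bytes, record):
    """Re-derive the digest; any edit to the pixels or the metadata breaks it."""
    claim = {k: v for k, v in record.items() if k != "signature"}
    if claim["sha256"] != hashlib.sha256(media_bytes).hexdigest():
        return False
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

A news agency holding the verification key could then confirm that footage of, say, an air base really was shot where and when claimed, which is the property the witnesses are after.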
What remains is bringing them together in a useful way and, as I said, getting them into the hands of everyone through the pipeline. Imagine if Facebook or YouTube or these other companies could have this up front, so the human reviewers (Facebook, I think, just reported it has hired 30,000 people to review content) could have it ahead of time, rather than saying, I have a questioned video or a questioned piece of audio that I need to review, now let me go run this algorithm on it. Do that up front, so they have a report associated with a particular image or video, and then, if there are questions, put that warning up. I think the public doesn't yet know enough about what is possible to demand this. But the truth of the matter is that when this stuff is shared, it's created once, and then it gets shared across different platforms, shared by different people, with different media. The signature for that particular piece of video or audio is there, and so there are tools the social media companies could use to link those copies together, make a decision, and then share it with everyone, the same way we do with malware, for example, on cyber issues. We have gotten to the point where we're protecting our front door, and we need to protect the front door from images and video as well. Thank you, Doctor. Professor Citron, I don't have time to cover this topic, but I want to express myself. The theme of the hearing is how scary deep fakes are, but one of the scarier things to me is your statement that the Pelosi video should have been taken down. I don't have a lot of time, and sadly there won't be a moment for you to answer, but I want to have this conversation, because as awful as I think we thought the Pelosi video was, there has to be a difference between the Russians putting that up, which is one thing, and Mad magazine doing it as satire.
As you know, we don't have a lot of protections as public figures with respect to defamation, and some of the language used here today makes me worry about First Amendment equities and free expression, the centuries-long tradition of satirizing people, me included. I hope we can hear more about where the boundary lies and how to protect the long tradition of freedom of expression. With that, I yield back. Thank you, Mr. Chairman. Thank you all for being here. I think we have come a long way. I remember Chevy Chase playing Gerald Ford on Saturday Night Live, and then Forrest Gump, a wonderful movie, entertainment. And I remember sitting there thinking, how did they do that? The problem we have, and I've always said this, is that out of everything bad there's a chance to do something good, and out of everything good there's a chance for people to do something bad. And I think we see that. The way it sounds, with where we're headed, it's like we are all living in The Truman Show or something like that, and we have to be careful. But I think about it in that vein: out of something good, something bad can happen. When the Wright Brothers learned to fly, they didn't think, maybe someone will fly this into a building and kill people. That's what happens in this world, unfortunately. But as a result of that, after 9/11, it takes longer to get on a plane, and for good reason. And I think where we need to be headed, and I want your opinions, is that obviously we have to slow this down before something just hits. I think you were talking about this, the triage idea. Maybe we have to tell people this is satire, it's not real, and you have to in some way verify it, which is kind of pathetic, but at the same time that may be what we have to do: slow it down, triage it, say this is not verified, this is satire.
And maybe on a global scale, when it comes to punitive measures for the people doing nefarious things, maybe we have to have international extradition laws, because when something comes from some other country, maybe even a friendly country, that defames and hurts someone here, maybe we agree among those nations that we'll extradite those people so they can be punished. So I'd love your opinions on those: the triage labeling, and extradition. Whoever wants to take it first. I think that's absolutely right. One of the reasons these types of misleading images and videos gain traction is that they can be shared almost instantly, shared around the world, shared across platforms. You can see something on one platform, and there's a button there to post it to another. There's an old adage that a lie can go halfway around the world before the truth can get its shoes on, and that's true. I personally don't see any reason why this has to be instant. Broadcast news does it with live feeds: they have a seven-second delay or a 15-second delay. There's no reason these things have to be instant. Our social media platforms should instill these types of delays so they can get these checks done, so they can decide whether they should label it. We still need to keep the pressure on for these types of things, because there's a seriousness scale, from satire up through child pornography. We have done it for human trafficking. Those are serious. This is another area that's a little bit more in the middle, but I think they can make the same effort in these areas and do that type of triage: what you are about to see is satire, or has been modified. Go ahead, Jack.
I think one thing we're stressing is that we will continue to be surprised by technological progress in this domain, because all of these people think they're the Wright Brothers, and feel that, and they're all busily creating stuff, while their ability to figure out the second-order effects of what they build is lacking. We need to build infrastructure so that some third party is measuring the progression of these technologies, so you can anticipate the other things that will come. It is true there are some instances where we say maybe it's not just good enough, it is inflammatory. I would love to hear back from you on those extradition laws. Thank you, Mr. Chairman. Dr. Doermann, you didn't really answer my colleague's question: how far away are we from actually being able to detect? I know at DARPA you were working on that. Where are we, either commercially or among governmental researchers, in being technologically able to detect? With that technology, there is certain software out there. The initial paper that gave rise to this technology came out of that program, and we did adapt, so content coming from these particular software packages can be detected. Do we have the technology to verify videos and photographs? If you give me a particular video, with high confidence I can tell you whether it is a fake video; I can also come back and say, here are the images that have gone into it. How long does that take? An hour? 30 minutes? It can be done in about 15 or 20 minutes. In advance of the 2020 election, what can campaigns or political parties do to prepare for the possibility of this fake content? One thing, here on Capitol Hill, is to urge the social media industry to create unified standards, because if you are a manipulator, domestic or international, you will go to whatever platform allows you to post anything from inauthentic accounts, and from there it gets shared across accounts and spreads throughout the system to the point where it really cannot be policed.
So one thing is for social media to work together. The same goes for political smear campaigns: what is the standard for policing them? And have rapid responses ready, because any lag in response allows it to grow; the mainstream media outlets can help refute it, as can other politicians or officials, through that refutation. What would you suggest for political parties and candidates? Pledges around a commitment not to use them or spread them, and relationships established early on with the social media companies, so that there is immediate entree to folks, whoever it is at Facebook or Twitter or Microsoft, and they have an immediate rapid response. How do we tackle the case where someone may be recorded committing illegal acts? And then there is the liar's dividend, which was conceived in California. What most worries us is that we have acculturated people not to believe their eyes and ears, and wrongdoers then seize on that to take a genuine recording of mischief and say, that's not me. So I have a twofold answer, and part of it is education. We talk about the phenomenon, but so often the response to the liar's dividend is that we give up and stop educating. Absolutely not: education must be a robust part of the learning curve. Wrongdoers will seize on it, and we cannot let them do that either. Going from completely trusting what our eyes and ears tell us to being skeptical can still be functional, but a world where we believe nothing at all, that nonfunctioning marketplace of ideas, is what we don't want. Thank you, Mr. Chairman. I will say I'm a little bit concerned with your suggestions; although in an ideal world they might work, it could be nearly impossible, and some of them have troubling aspects, like a fact checker who is not really a fact checker. I'm often asked in conversations what I think is the greatest threat, and once, without thinking, I blurted out: nobody knows what is real anymore.
As I was driving home that evening, I thought, that is true; that is the greatest threat: people just don't accept a basic truth any longer, partly because they just don't understand what is true. Television is extraordinarily good at propaganda. Fake news is a term, as has been indicated, but the manipulation of intelligence products is extraordinarily troubling to me: black is white and white is black, and I can show you the difference between white and black and people will not believe me. I just think about what it means for us to lose that. By the way, we can control government and legitimate businesses, but we cannot control everyone. This will be so pervasive and available that virtually anyone can create it. You can say you cannot use it for political manipulation, but you cannot control the other 6 billion people on the earth, and that is my concern: the sheer volume. So it goes both ways, and this is my concern. To pass off something fake as real, or to say of something real that it is fake, is a lie. To use your example of the politician caught in a bribe: even if they really are caught in a bribe, they will say no, it's fake, it is not real. So then you lose credibility on both sides. So first my question: should we have these algorithms? Should they be open sourced? If the answer is no, we have to deal with that. And the second question is almost rhetorical, but I'd love your answers: how do we prepare people to live in a world of deception, where they generally do not know if something is real or not? Should they be open sourced? I will address the first one. A week and a half from now, at a conference, there is a workshop on this, and even though there is the potential for bad actors to learn from these things, they will learn anyway. We need to get this out there, into the hands of users. Companies are starting to use it, so I think it needs to be open source, that same technology of learning to create this content. Mainly because they get access anyway? Yes, and people need to use it.
And then give people the tools to make the choices themselves. I will accept that, with some hesitation. So what about suggestions, I'm sorry, on how we prepare people to live in a world so steeped in deception? I'm sure you have thought through that. When Justice Holmes came up with the marketplace of ideas, he was a cynic, but he didn't suggest otherwise: that broader endeavor lays the foundation of our democracy, to have a set of accepted truths so we can have meaningful conversation, and we cannot give up on that project. That foundation of accepted truth is very shaky at this moment. Thank you. With that prevailing distrust, does the deep fake risk aggravate this distrust? As a professional journalist working with Bloomberg Businessweek, I speak with some experience: yes, this is a very severe and under-covered threat. If you write something they don't like, they try to attack the integrity of the institution, and this makes it trivial to do that, or to convince people you are not being factually accurate. Not only do we see journalists confronting the notion that they might sit on real evidence for fear it is fake; we also see media organizations now being wary of what is authentic without serious journalistic effort. So there is that corrosive effect, what we call trust decay, running from civil and political institutions on down, infecting everything. Over time, if the consumer does not know what to believe, they will either believe everything or nothing at all, and if it is nothing at all, that is a long-term apathy that is destructive for the United States. You can look to Russia as an example of how the Russian government has used that approach. The consequences for democracy are disappointment that anything can be achieved, not wanting to show up for the draft or for the all-volunteer force. That is what I would look at over the next ten or 15 years: a long-term corrosive effect. If you look at the long-term doctrine of subversion, they are much more patient and willing to wait decades for it to come to fruition.
So are there technology solutions for authentication, a way for a journalist or a media organization or a fact checker to validate a piece of media? Crowdsourcing helps, but how do you verify those things as a journalist? There are tools out there, but it's not the case now that we have a general solution or a gatekeeper that can be automated completely. This is cat and mouse: as the detectors get better, the manipulators move on. We have had situations where people come up to journalists and show them what purports to be an atrocity, and the journalists need to know whether to report on it or not. Even before these manipulations, people were doing these kinds of things, and it is a concern. There are scams, selling such material to media outlets for 50,000 dollars. If I could add one dimension: how lucky we are to have an engaged public to review things that are false and to challenge them, not just the journalists but the public. One of the dangers, when we think about information environments that eliminate all rebuttals, is the significant backlash you get in regimes that control all the information flow. Fact checking is expensive, and the economics are hard: you have to find a way to fund those doing it well, because it is incredibly difficult when you have to pay to fact check. Thank you, Mr. Chairman. In the time we have been here, I pulled up a video recently posted by an Israeli company that created a video of Mark Zuckerberg saying he could control the future, and put it on Facebook specifically to challenge Facebook. Zuckerberg has responded, saying he will not take it down. So do you think that is a wise decision, to not take it down? I will start with you. That is a perfect example, given the context. It is really healthy for the conversation, and all of these questions are hard. Of course, the default presumption from a First Amendment perspective is that you want to keep the government from controlling the marketplace of ideas.
Private companies can make those choices; they have an incredible amount of power, and I think they made the right choice. That is essentially what happened with the Nancy Pelosi video: a conversation about the choices they make and what they mean for society. That is productive. It seems correct that all these companies are groping toward rules, because the judgment calls have become more numerous over time. I would just add, while that comparison stands (the Pelosi video made her appear to be inebriated), this does point out that part of the Zuckerberg video's purpose was to challenge the rules and discuss them in a forum like this, and nobody believed Zuckerberg controls the future; he surely wouldn't want to show up to testify. There is a very serious point about context in terms of triage before human curation. If we see 4,000 shares in ten minutes, versus 16 over ten minutes, we then look at labeling and context so reviewers can make good decisions around it. We have a parallel in the analog era: if I said aliens landed at Area 51, I would ask friends or family, and they would say that was put out for entertainment, it didn't really happen. We need to help the consumer make a better decision around that. I like that Facebook is consistent with its enforcement, and I will also not say it should never change those terms. I think they are looking to Capitol Hill to figure out what we want policed, or at least what Europe wants policed. I think they want to hear from legislators what falls inside those parameters. The one thing I really do like is their work against inauthentic account creation and content generation; they have increased that, and how they scale it is really good. It's not perfect, but it's better. Is there a particular company or region or nation that is especially adept at this technology, that is developing it at a quicker rate? It is distributed along the lines you would expect, among America and China and Europe; the capability to create this is everywhere, which makes it very challenging.
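The triage signal described in this exchange, content jumping to something like 4,000 shares in ten minutes rather than 16, amounts to a sliding-window rate check. A minimal sketch follows; the `ViralityTriage` class name and the thresholds are illustrative assumptions, not any platform's actual mechanism.

```python
import time
from collections import deque

class ViralityTriage:
    """Flag content whose share rate spikes past a threshold for human review."""

    def __init__(self, max_shares_per_window, window_seconds=600):
        self.max_shares = max_shares_per_window
        self.window = window_seconds
        self.share_times = {}  # content_id -> deque of recent share timestamps

    def record_share(self, content_id, now=None):
        """Record one share; return True when the item should be routed to review."""
        now = time.time() if now is None else now
        times = self.share_times.setdefault(content_id, deque())
        times.append(now)
        # Drop shares that have aged out of the sliding window.
        while times and times[0] < now - self.window:
            times.popleft()
        return len(times) > self.max_shares

# 4,000 shares inside a ten-minute window triggers review; 16 would not.
triage = ViralityTriage(max_shares_per_window=4000, window_seconds=600)
```

In the spirit of the witnesses' proposal, a flag like this would feed downranking, labeling, and human review rather than automatic removal.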
But this will be available off the shelf; folks can access this? Absolutely, and that's one of the big differences. You used to have to go out and buy Photoshop, but now a high school student already has this: they can download data and do this overnight, with software that is open and freely available. It's not something you have to be an AI expert to run; a novice can run it. I yield back. Thank you for your participation. Following up on those points: it's getting easier to do but harder to detect. The examples we talk about concern democracy, elected officials, attacks on journalists, but what about a small business with limited resources, or individuals who are victims? You talked about the scale and the widespread authentication capabilities that might exist as we go forward on social media platforms. How do we deal with the detection issue for them? I envision a time when there is a button on every social media console, so every time you get a message with a video attached, you can hit it, and it goes off to gather information, not necessarily totally automated, but vetted by one of many organizations, and if you can identify where it came from, the individual can make those decisions. The problem is that a lot of these technologies exist in the labs of researchers at different organizations; they are not shared or implemented. So, say I want to test an interesting picture. I saw a picture of a tornado that looked real, and I immediately thought it must have been taken somewhere in the Midwest years ago. So I did a reverse image search, which you can do, and after doing some research I found it was real, and practically in my backyard. But not everybody has that capability or knows to do that. I have relatives who just share whatever they see. So the education piece, getting that to scale, is what we need to work towards.
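The reverse-image-search idea in this exchange, and the earlier point that a shared video keeps a recognizable signature as it hops across platforms, both rest on perceptual hashing: reducing media to a small fingerprint that survives recompression and brightness tweaks, so near-duplicates can be linked. A toy average-hash sketch on a grayscale image represented as a 2D list of 0-255 values; the function names are illustrative, and production systems use more robust variants.

```python
def average_hash(pixels, hash_size=8):
    """Shrink the image to hash_size x hash_size by block averaging, then
    threshold each cell against the mean. Near-duplicate images produce
    near-identical bit patterns."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for r in range(hash_size):
        for c in range(hash_size):
            rows = range(r * h // hash_size, (r + 1) * h // hash_size)
            cols = range(c * w // hash_size, (c + 1) * w // hash_size)
            block = [pixels[i][j] for i in rows for j in cols]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if v > mean else 0 for v in cells]

def hamming(h1, h2):
    """Count differing bits; a small distance suggests the same underlying media."""
    return sum(a != b for a, b in zip(h1, h2))
```

A platform could compute such a fingerprint once, at upload, and use small Hamming distances to attach an existing review verdict to every re-shared copy, which is the "link those copies together" workflow described above.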
But even with detection, think of the everyday person who has a fake video prominently featured in the Google search results for their name, and the platform refuses to take it down. That is part of what everybody sees about them, so it is incredibly destructive. The same goes for the small business that cannot afford a fight: if there is a deep fake aimed at their business, they need to be able to have it removed, given that it is false and an impersonation. Defamation is a law, but that is a slow suit, assuming they can even find the creator. So we are in a period, and I'd say it could last years, where individuals will suffer, and it's incredibly hard to talk to victims, because there is so little I can force anyone to do, and we will see a lot of suffering. Are states trying to tackle this with the laws you are talking about? Yes. I am the vice president of the Cyber Civil Rights Initiative, and we have been working with lawmakers around the country at the state and federal level, both on nonconsensual pornography and on thinking carefully about narrowly crafting a law to ban harmful digital impersonations. So we have our work ahead of us with laws around the country; it can be tackled, but it will have a modest impact, slowly. You talked about state and local enforcement agencies. Yes. I wrote an article on the phenomenon of cyberstalking and how hard it is to teach local law enforcement about this technology and the laws themselves. When you talk to them about online crimes, they say, I don't know where to begin; I don't know how to get a warrant for an online service provider or an IP address. So we do have some education to do. There is a congressman's bill that would provide funding and training for local law enforcement on cyberstalking. I would love to see that, not only for threats but more broadly. Thank you. I will try to do something that's probably impossible: get your perspective on four areas. We have touched on authentication.
How do we develop a strategy, in a narrow national security sense, and then education? Can you talk about the capability of forensics? Is there pixel-by-pixel analysis? Are there other areas of basic research needed in order for us to detect? There is the pixel-level approach, but there is also the metadata on the image. With those compression algorithms, there is residual information left when an image is created, and at the digital level that is where the majority of the work is done. How easy is that, and potentially who should do it? The government puts a lot of money into this piece, and there are more manipulators than detectors. I would hope that behind closed doors the YouTubes of the world are looking into this type of application; I'm not sure. Will the availability of pixel-level examination at scale help us with real authentication, so that every time you put up a video or picture there is a check mark? I don't like to use the word authentication, because everything that goes online is modified in some way, whether it is cropped or color adjusted for Instagram distribution. We like to say things have been modified, but it is a scale, and there is modification with intent: if you put a flower in the picture next to someone, that has a different effect than if you replaced their face. So the attribution piece, a report that says this is exactly what was done to the image, is a big part of the program as well. The focus is to lay out everything that happened to the image, so the user can decide whether it is credible, and yes, that can be done in an automated way. If you are the FBI going to court, even if you changed one pixel, you lose credibility; but if you are doing an investigation with compressed, grainy surveillance video, it could still give you information. This is a subset of covert action, which is the responsibility of the CIA, but because of the National Security Act they cannot do this inside the United States of America.
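The residual-information point made here, that manipulation leaves statistical traces at the digital level even when it looks seamless, can be illustrated with a toy check: compare the variance of pixel-difference residuals in two regions, since a spliced-in patch often carries a different noise and compression history than its surroundings. This is a deliberately crude stand-in for real forensic methods, and the function names and threshold are assumptions.

```python
def residual_variance(pixels, r0, r1, c0, c1):
    """Variance of horizontal first differences inside a region of a
    grayscale image (2D list of values), a crude proxy for the sensor-noise
    and compression residuals that forensic tools examine."""
    diffs = [pixels[r][c + 1] - pixels[r][c]
             for r in range(r0, r1) for c in range(c0, c1 - 1)]
    mean = sum(diffs) / len(diffs)
    return sum((d - mean) ** 2 for d in diffs) / len(diffs)

def splice_score(pixels, region_a, region_b):
    """Ratio of residual variances between two regions; a ratio far from 1
    hints that the regions had different capture or compression histories."""
    va = residual_variance(pixels, *region_a)
    vb = residual_variance(pixels, *region_b)
    return max(va, vb) / max(min(va, vb), 1e-9)
```

Real systems combine many such signals (camera fingerprints, double-compression traces, metadata consistency) into the kind of automated report the witness describes.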
So how should we be looking at a government strategy to deal with this information, especially in the context of national security? It is two parts. I encourage the social media industry to focus on the methods: who is doing deep fakes or propaganda? Can we have a listing of those? Who is building the equipment? That is the weapons side. And there is the agencies' role in terms of protecting the homeland, and the State Department would be out there too, overtly going after the actors doing the manipulation. We are slow to do this. When we spot actors doing things, we have to wait years for the government to say, yes, here is the Mueller report, but by then it has already been out in the news. The more rapidly the government can do that, the more the social media companies know what to take down, because attribution comes down to the U.S. government; it is the only one that has the tools to do it. Copy. I yield back. Thank you, Mr. Chairman. First of all, I want to make sure I understood correctly: if something like what happened to the reporter in India happened in America, do I understand that would not constitute a crime per se? It might be understood as cyberstalking, which is a crime under federal law and most state laws, but the problem is that it was death by a thousand cuts. To constitute cyberstalking you need a course of conduct, repetition by the same person. What happens is that it comes together: one person posts the photo or the screenshot, the next one posts the home address, another one says "I'm available" alongside the screenshot. The person who originated it, under current law, would likely not be subject to criminal prosecution. Right. Do I also understand that you said even if it were, it would have modest impact? What I said was that if we had criminal laws to combat this deep fake phenomenon, tailored to the impersonations, the law would be important, but its overall impact would be modest, because we do need that broader partnership. I want to move on.
But also, I cannot help but have this terrible flash of Dante's Inferno: abandon hope, all you who are here. So whose job should it be to label? That is not clear; I thought it could be the media platform. I think it is the creator. That's what we do with campaign finance, where we say there are certain disclosure rules: you have to own it. If it is a foreign originator, how do we have any jurisdictional reach? We don't. There are no boundaries. So as a practical fact, something created in America, transmitted to a foreign person, then retransmitted, leaves us with no means of enforcement without labeling. Look at the social media platforms: they have some responsibility, and I'm skeptical whether we will get there in the near future with this detection technology, but assuming that is possible, a reasonable practice could be disclosure, saying this is a fake. As it were, we actually have a comparable truth verification in use currently, which is Snopes. But yet a member of my family, my immediate family, who shall go unnamed, once posted a claim, outrageous as it was, about how the Constitution supposedly entitles certain people to have their children go to college for free, with health care for life. Not one letter of that is true, which could have easily been verified had they gone to Snopes, but they didn't. And even if they had, in a political context, the person who perpetuates such a claim could have a political agenda and, in a parallel fashion, could engage in ad hominem attacks against the reliability of Snopes. I don't have much time left, but I'm interested in getting at the issue of political speech and the First Amendment. You say we are protected against being impersonated, but it's not clear how we square that with case law that sets a very high bar. It is incredibly important: everything you just described is protected speech. The United States has made it clear that we protect falsehoods; they enjoy First Amendment protection because countering them reawakens the conscience and recommits us to citizenship.
But there are times when falsehoods create harms, and we can and should regulate those. That includes defamation, even of public officials, with actual malice, knowing the truth of the matter asserted. There are 21 crimes made up of speech; you can regulate certain words and images if they fall into one of those categories or meet strict scrutiny. So that is the presumption: a falsehood is protected speech, but that which causes harm, the court has explicitly said, the entire court has said, is a space where we allow regulation. Thank you very much. It is very helpful. There are different categories, and we try to get our arms around them: the question of the First Amendment, a question of foreign interference, and a question of economic harm and reputational harm. We all learn as we go, but I have heard you describing how the whole world of publishing is upside down. It doesn't exist like it did prior to the internet. So the question is whether we want to get back to the principles that applied. It is not that those necessarily have to be abandoned; they just have to be applied, in ways suited to those categories. So I want to ask each of you: should we get back to the requirement of the editorial function, with a reasonable standard of care, by treating the platforms as publishers? You said yes; I'm interested in the others. Working with a number of people in this area, I think the horse has left the gate; I don't think we can get back to that model. What about statutory change? Let me go on a little bit, because who has the duty? It wasn't that I was suggesting the social media platforms be understood as publishers strictly liable, but rather that we condition their immunity on reasonable practices. Those may be the content moderation practices they use right now, so I will disagree; calling them publishers for defamation purposes is not what I'm suggesting at all.
Thank you for the clarification, because that is a quick question we have to ask; that would be a legislative action. I think you have the whack-a-mole issue with these platforms; they can close accounts very quickly. I do agree with the doctor here, but it's very difficult to contemplate controlling speech in this way, because the habits of the entire culture have changed. What of the question of somebody going online to put up a fake video that destroys an IPO? Isn't there a duty of care not to allow that to be on their platform? I think we can authenticate content and users if you make users culpable for certain types of content that they post. Who would be liable for the false statement that destroyed the value? The creator of the deepfake, the speaker, and as Mr. Clark suggested, the platform has legal practices of education. But does the platform, under current law? There is no liability. That is a very direct question. But one that is debated; there is a different point of view about bias and what goes on in the platforms, so does there have to be a standard for a level playing field for Republicans and Democrats? Is that possible, or is it something that was only possible pre social media? Yes. I would say that if we are using technological standards, with that judgment pool to show something is not fake or synthetic, that could take the political aspect out, if you have open standards for companies to chime in on and those arbiters to provide assurance. Thank you all; my time is up. I yield back. Thank you, Mr. Chairman. Thank you to all of you for being here. This conversation has been disturbing and quite scary. I think about the internet as the new weapon of choice. In the testimony, we think about how an individual who creates harm would be held accountable. I think not only an individual but any entity that creates harm, or that creates an environment that is a public safety risk, should be held accountable as well.
Thinking about those around the world that are not our allies: they want to create chaos, and what a wonderful way to be able to do that. So of course the disinformation itself is a problem, but it also creates an environment where people no longer believe, and we see that in our country right now. That's a major problem: when the institutions we depend on or believe in are no longer believed in, that creates total chaos. So with that, would making a fake video and then transmitting it to another country, which then retransmits that video, be the violation? I know there is a discussion about boundaries; I would love to hear your thoughts, and whether it is constitutional, and then the extradition question. If a video that poses a public safety concern or a national security risk is transmitted from America, would that be the violation? If it is directed at the United States; here it is directed outside, transmitted from Florida. That's different than what I thought you were asking. Here is how we think about jurisdiction: where you aim your activity at another state, and you do that purposely, we can haul you into court. But now you confuse me a little, because if it is America directing activity abroad, I would imagine it is contingent on that country's jurisdictional rules.
I'm not a lawyer, I try to avoid them, but generally I would say there is no specific provision around transmitting that abroad; it is whatever country is affected, if it breaks their laws, and whether they have an extradition relationship with the United States, which has probably not been worked out or executed by the government. It is something that needs to be addressed, because what has been very clear over the last four years is that there is no physical boundary in these communities or information online, and oftentimes the smartest manipulators out there, Russia, China, Iran, enlist people in foreign lands to make it look more authentic. Sometimes those people are aware, sometimes they are not, and those that are aware are doing it willingly. Look at another hacking attempt to drive the election: someone in North America alerted the world to point in that direction. So we have to figure out those relationships and how they are handled with their own law enforcement, because now we are going to other countries and asking them to do that for us. We have talked about the intelligence community and national security, but how should we task the national security entities to forecast the future impact of this deepfake technology? As to the purveyors and actors, that is straightforward from the outside, but the part that might be missing is the technologies as they are developed. This has been central, from cyber tools to influence operations, both good and bad depending on your perspective, and oftentimes they are well informed about the nation-state actors and what they are doing, and about what is openly available in terms of AI or what is out there. Quickly, it is worth repeating: the fundamental techniques are such that we can easily compile the metrics of those improvements to do that forecasting, so I agree with what Mr. Watts said, but it's easy to discover this for ourselves. I yield back.
And on that question, we ran out of time on extradition laws, so I want to give you the opportunity to address that, along with other punitive measures; with extradition laws, people have hung out in other countries' embassies for many years rather than being extradited. At the same time, as a doctor, I find myself not eager to engage with trial lawyers. But if people are harmed, monetary damages? Because there are future monetary losses because of these stories. What about prison time? We need to consider being tough on this to be effective. We ran out of time in the opening question, but what about sanctions? Look at the GRU: in July 2018, essentially, those companies tied to the February 2018 troll farm indictment were sanctioned. That is very effective, and you can move down the chain so that hackers and influencers don't want to be hired, because they know the risk that they could be individually sanctioned. That is a technique; it seems it would be hard to execute, but if we got good at it, I think it would be a great asset if you can turn down the employment pool, so they don't want to work for those authoritarian regimes. That could change the nature of it. Also, for those that push out tools, cyber and hacking tools, to be used for malicious purposes, you could go after those companies, which are international and not necessarily tied to a nation-state, and we have great intelligence collection capabilities at that end and good, sophisticated agencies. Now that we know where it is, it is a black market, but it is to our advantage; we are moving in the right direction. Mentioning sanctions, that does make sense, especially if there's no way you will get an extradition agreement in place, which is the case with most locations like China or Iran, but it sends a message across the world: if you are pushing on us, there are options that we have. I do think offensive cyber is at hand; there have been some briefings talking about that recently.
If the foreign manipulators, the makers of deepfakes, knew we were going to respond in an aggressive way, they would move away, whether through extradition, sanctions on individuals, or a cyber response; right now there's not a lot of concern. No. It has proliferated because we have not responded. I yield back. Thank you for the time. So can you talk, I'm sorry, Mr. Castro. Thank you, Chairman. Professor, I had a chance to visit your article on these issues a few months back, talking about falsehoods. This would be a monumental task for the judicial branch to grapple with: how we treat deepfakes. Hate speech and fighting words are not as protected as political speech. In making that determination, to figure out the value of the type of speech or expression: what is the value of a fake? Just to add, thank you for reading our piece. There is the value to the speaker and to self-governance, to the creator of course, so the value could be profound; a deepfake could contribute to our Star Wars, Carrie Fisher comes back. There is little value in others. But also, to recognize, as the panel suggests, we do have guides about falsehoods and impersonations, whether defamation or another kind of speech that we say is fraud. Are you saying we could go down the road where certain speech is not protected the same way as political speech or ordinary speech? I have a feeling. But certain fakes treated differently? This is also contextual; I don't think it is one-size-fits-all for synthetic video or impersonations, but at the same time we bring context to the fore, to say there are times these falsehoods cause real harm that doesn't enjoy First Amendment protection, and that is what we could regulate. One of the challenges we had with the Russian interference on Facebook and social media is that it seems the social media companies were not prepared for it. There was no infrastructure to vet or moderate those.
So with my rough sketch, I see there is a creator who uses the software, then proliferation, then someone to moderate. I'm not the lawyer or the policymaker, but there is another piece of that puzzle: content getting used by someone else for a different message is not even the deepfake problem, and then it gets twisted in a certain way down the line; people don't realize where those articles came from. Exactly. That is a good example, showing that attribution requires tracing how it progresses, with decisions at every level. I think that scenario is what will happen going into future elections, with much more content; that's a pretty standard approach, especially with false content, to proliferate it. They can take fake content that is already available for adversaries to repurpose and reuse. I think the social media companies need to work in terms of virality, and then within that, the severity of impact, like mobilization, also against political institutions; right now I would be very worried about somebody making a fake video about the electoral systems being out or broken down on election day 2020. We should already have a response plan for how we would handle that. It travels as far as the initial news, so would we expect that people who have seen the doctored video will be aware it was doctored? That is the assumption. The assumption will be, if you put this out, that only a very small minority will actually learn it's a fake, no matter how good a job you or the press do of putting that out there? Because the truth in this case, that what you've seen is false, is not going to be as visual, it may not be visual at all, as seeing the video. The way I quickly put it is: if you care, you care about clarifications and fact checks, but if you're just enjoying media, you enjoy media. So the experience is that you enjoy or experience that media, and an absolute minority care about whether it's true beyond the entertainment value extracted from it, as a general thing.
And, you know, whether it's journalism or whatnot, what should teachers in schools be educating people these days about whether you can believe what you see? This gets to the liar's dividend. By the way, in politics there's a saying that the first time you hear an expression or an anecdote or a story, you make personal attribution; the second time you say somebody once said; and the third time it's pretty much yours. So the liar's dividend is now out there. But how do we educate young people, or not so young people, about how to perceive media now without encouraging them to distrust everything, in which case there is this liar's dividend? It's true that the more what we're seeing confirms our world view, even if it's totally false, social psychology studies show we are just going to ignore the falsity. We will believe the lie if it is confirmed; it's confirmation bias theory. So you're right, it becomes incredibly hard for the fakery to be debunked, because video is so visceral, and because if it confirms your world view, it's really tough. I guess that's the task of parents, of educators, as we teachers talk to our students about critical thinking. Ten years ago, remember, critical thinking was about how we teach students to do a Google search, and whether they believe everything that's prominent in a search, whatever they're doing. And we saw that if you did a search for the term "jew," what would come up first was a site called Jew Watch. And teachers struggled to explain to students that just because it's prominent doesn't mean it's real, doesn't mean that's the authority. I think we're going to have the same struggle today, right? That, yes, we're going to teach them about deepfakes, but I think we've also got to teach them about the misuse of the phenomenon, to avoid the escape it offers.
Well, I mean, the other challenge too is that we have a White House that has popularized the term "fake" to describe lots of things that are real, in fact some of the most trusted news sources in the country. So there's already an environment in which there's license to call things fake that are true but are critical. And it seems that that's pretty fertile ground for the proliferation of information that is truly fake. And we find ourselves, frankly, trying to find other words for it: false, fraudulent. Because "fake" has now been so debased as a term, people don't really know what you mean by it. I think it's worth noting too that when President Trump referred to the Access Hollywood tape, he said that never really happened, the whole interview, oh, that wasn't right. We've already seen the liar's dividend happen in practice from the highest of bully pulpits. So I think we've got a real problem on our hands. I do think there's some optimism for tools. I've been involved in numerous arguments with friends where we've gone and checked, though it's imperfect, something like Wikipedia. You end up using the information sources around you, and you can train people that there are certain sources you can go to to settle an argument, as it were. And I think we can develop such tools through some of this technology. And I think that's a great motivation for having this information up front. When Mr. Heck was saying that he had a family member, if that information had been attached to the video or the email or whatever ahead of time, they would have had access to it, and they wouldn't have had to go search for it. Well, I'm just thinking of applying in 2020 what we saw in 2016. In 2016, among other things, the Russians mimicked Black Lives Matter to push out content to racially divide people.
And you can imagine a foreign bad actor, particularly Russia, dividing us by pushing out fake videos of police violence against people of color. We have plenty of real, authentic ones, but you could certainly push out videos that are enormously jarring and disruptive, and even more so than seeing a false video of someone and still retaining that negative impression, you can't unwind the consequences of what happens in a community. It's hard to imagine that not happening, because the barriers to entry are so low, and there will be such easy deniability. If I could add, there is some good news, in that social media, if you watch Facebook's newsroom, they're doing takedowns nearly every week now, so they've sped that up precipitously. We actually have the curriculum for evaluating sources in the U.S. government. I was trained on it at the FBI academy. They have it at the Defense Intelligence Agency and the Central Intelligence Agency: how to evaluate information, how you evaluate expertise. They teach you this; it's unclassified, there's no secret sort of course. But it's about how you adapt that into the online space. The audience I'm most worried about is actually not young people on social media; it is the older generation who came to technology late and doesn't really understand it. They understand the way newspapers are produced, where the outlet is coming from, who the authors are. So I was with a group at the New York City Media Lab, and they actually had a group of students working on how we help older generations who are new to social media or have less experience evaluating these sources. You can send them tips and cues: do you know who the author is, who the actual content provider is, or, again, the social media company? I think there are simple tools like that that we could develop, or the social media companies could develop, for all audiences, because it's not just the young people.
Young people have more iterations of information evaluation digitally than their parents do; they have actually done this more at times. So I think in terms of approaches, it's about what generation it is, what platforms they are on, whether they understand that a site known for extremism is based in the Philippines and not really in the United States. And the sense of our ability to administer these things? There are some tools, I think, that are nothing more than widgets, public awareness campaigns, things we can take from what the government has already developed and really repackage for different audiences in the United States. Doctor, if I could: is the technology already at the stage where good AI can produce a video that is indistinguishable from real to people with the naked eye? In other words, could AI right now fool you if you don't have access to computer analysis of whether the video is authentic? Yes. I think there are examples out there that, taken out of context, if they're sent out with a story or a message, people believe. And it's not just people that have that agenda. There was a video out there that showed a plane flying upside down, very realistic looking. And I think what people will need to do is get confirmation from other sources that something really happened. So a video in isolation, if that's what you're talking about, a video in isolation, where you're asked, does this look authentic or not, independent of whether it passes the sniff test, so to speak, yes, I think that type of technology is out there. And it won't always be possible to disprove a video or audio by disproving the circumstances around it. In other words, if there were an audio of Dr. Wenstrup purportedly on a phone discussing a bribe, Dr.
Wenstrup wouldn't be able to say, I was in this place at this time and I couldn't possibly have been on the phone, because the call could have taken place at any time. Or if there's a video of Val Demings, it won't always be possible for Val to show that she was somewhere else at the time. Do you see the technology getting to the point where, in the absence of the ability to prove externally that the video or audio is fake, the algorithms that produce the content will be so good that the best you'll be able to do is a computer analysis that will give you a percentage, the likelihood this is a forgery is 75 percent, but you won't get 100? Part of the MediFor program was exactly that, coming up with a quantitative scale of what manipulation or deception is. I don't know if they've gotten there partway through the program, but yes, I think there is going to be a point where we can throw absolutely everything that we have at these types of techniques and there is still some question about whether it's authentic or not. In the case of the audio, you could do a close analysis with tools and voice verification, all of those sorts of things. But just like in a court of law, you're going to have one side saying one thing and another side saying the other, and there are going to be cases where there's nothing definitive. I definitely believe that. Do colleagues have any further questions? On that optimistic note, we'll conclude. And once again, my profound thanks for your testimony and your recommendations. The committee is adjourned. [inaudible conversations]
