
CSPAN2 The Communicators July 13, 2024

Host: This week on The Communicators. Professor, in a recent article in Science magazine, you open by asking to what extent democratic elections are vulnerable to social media manipulation. What's the short answer?

Guest: The short answer to the question is that we don't know. Robert Mueller, who is possibly the most understated man in Washington, has said election manipulation is among the most serious challenges to democracy he has seen in his entire career. FBI Director Christopher Wray says the threat is only escalating. What we do know is that Facebook found 126 million people were exposed to Russian manipulation attempts in the 2016 election, along with 20 million people on Instagram and 10 million tweets on Twitter from accounts with 6 million followers. We know Russia attacked voting systems in all 50 states. We know they targeted misinformation at specific people, and we know 27 percent of voting-age Americans saw some Russian misinformation in the final weeks leading up to the election. That's what we know. What we don't know is what effect, if any, any of this had on the 2016 election or the 2018 midterms, or what effect it will have in 2020. And not just in the United States, but in liberal democracies around the world. The Brexit vote has been questioned in the same way, as have elections in Brazil, India, and Sweden. The reason we don't know, and the reason the short answer is "we don't know," is that we are not measuring it. The point of this paper is to say that it is eminently measurable. We just need the data to understand what the threat to our democracy is and to harden our democracy against future manipulation.

Host: How do you propose to get that data, and how are you going to measure it?
Guest: The measurement is a simple four-step procedure. The techniques for measuring, within a certain degree of statistical confidence, what the impact of these types of manipulative messages is on an election, or on other types of behavior, have already been developed. In fact, they have been developed over the last ten years and successfully applied to many different types of behavior, for instance shopping behavior, exercise behavior, and voting behavior, in two papers that have been peer-reviewed and published with authors from Facebook attached. The methods exist. The first question is, how do we get the data? That's an important question. We have essentially no access to this kind of data today. However, there are initiatives like Social Science One, an industry-academic consortium that is attempting to secure access to this type of political communication data, but they are having trouble actually getting it. Craig Silverman at BuzzFeed reported in the past two weeks that not only is Facebook delaying releasing the data, but the funders of Social Science One, whose projects had already been approved, are threatening to pull out by the end of September if Facebook doesn't release the data it promised to release. Facebook has responded that it is having difficulty releasing the data in a timely manner because of concerns about the security and privacy of the individual data it intends to release. This brings up what I've described as a transparency paradox: these platforms face tremendous pressure to be more open and transparent about what's happening and how they're affecting our democracy and other things in our society, but they simultaneously face a significant amount of pressure to be more secure and more private with individuals' data.
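[Editor's note: The kind of impact estimate the guest describes, measuring with statistical confidence whether exposure to manipulative messages shifted a behavior such as voting, can be sketched as an exposed-versus-control comparison of proportions. This is an illustrative simplification, not the four-step procedure from the paper, and all counts below are hypothetical.]

```python
import math

def lift_with_ci(exposed_n, exposed_acted, control_n, control_acted, z=1.96):
    """Estimate the effect of message exposure on a binary behavior
    (e.g. turning out to vote) as a difference in proportions, with
    an approximate 95% confidence interval."""
    p1 = exposed_acted / exposed_n
    p0 = control_acted / control_n
    diff = p1 - p0
    # Standard error of the difference between two independent proportions
    se = math.sqrt(p1 * (1 - p1) / exposed_n + p0 * (1 - p0) / control_n)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical data: 10,000 users exposed to a manipulative message, 10,000 not
effect, (lo, hi) = lift_with_ci(10_000, 5_300, 10_000, 5_000)
print(f"estimated lift: {effect:.3f}, 95% CI: ({lo:.3f}, {hi:.3f})")
```

[If the interval excludes zero, as it does for these made-up counts, exposure is associated with a statistically detectable shift in behavior; causal attribution of course requires the careful study designs the guest alludes to.]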
Guest: The Cambridge Analytica scandal was a revelation about millions of people's data, and it created a call for greater privacy and security for people's data. These demands are in conflict: platforms are asked to be more transparent, but they're also asked to be more secure and private at the same time. The only way to solve this problem is to thread the needle of the transparency paradox and become more secure and more transparent at the same time. What does that mean? Technically, it means using techniques like differential privacy to anonymize the data before it is released to researchers to analyze what the effect might be on our democracy.

Host: Maggie Miller of The Hill newspaper covers cybersecurity there, and she's here to help us explore some of these issues. Professor, thank you for being with us today. You noted social media platforms really need to thread the needle between securing data and making it more transparent in the lead-up to the 2020 elections. How have social media platforms such as Facebook and Twitter done in terms of addressing this vital balancing act? How prepared are they for 2020?

Guest: I'll tell you two things about that. Number one, we don't know enough about what's happening internally at these platforms to prepare for 2020. There hasn't been enough transparency simply in terms of policy, never mind releasing data. We don't really know what kind of preparations are being made behind the scenes to prevent this kind of manipulation and to protect election integrity in 2020. But number two, it is a difficult problem, but one that can be solved. What I hear from Facebook is that the people responsible for, for instance, giving data to Social Science One are working around the clock. They have a lot of people and computing resources dedicated to it, but they are having trouble doing it in a timely manner.
That having been said, I will say that I'm aware of more people at Google, for instance, who have been researching differential privacy than I was ever aware of researching it at Facebook. It could be that they are a little bit behind the curve on that particular front. That said, I don't have any inside information about the preparations, and there haven't been any public declarations along the lines of "here is our plan for protecting 2020."

Host: You note in your paper that legislators need to be careful when drafting privacy legislation, as several committees on Capitol Hill are currently doing, in order not to limit the amount of analysis that could potentially be done on data from social media companies. What would you recommend lawmakers keep in mind as they come back from recess this week and continue pursuing this topic?

Guest: I have one message for lawmakers as they pursue regulating social media platforms going forward, and that is: consult experts. I watched Mark Zuckerberg testify in front of Congress, and I've watched many other instances of congressional testimony where the tech platforms sent representatives to the Hill, and the legislators are ill-prepared to create legislation without consulting experts beforehand. There are a tremendous number of very important tradeoffs that need to be managed in pursuing any type of regulation of the social media platforms, or the tech platforms in general. I'll give you two examples. The first is the one you mentioned, between privacy and election integrity. Obviously we want to explore reasonable privacy legislation in the United States. I am an advocate for privacy legislation. I do think that we need some sort of legislation that regulates how private individual data is used. That's an important right that emanates from a number of rights in the Constitution.
But at the same time, a broad, sweeping illegalization of data retention would make it very difficult to audit what social platforms do to our democracies and our society in general, and we're going to want to be able to audit those kinds of things going forward. So the legislation has to be designed in a way that protects individual privacy, perhaps through anonymization, but also retains the ability to secure transparency at the same time. Another important tradeoff is, for instance, the tradeoff between free speech and harmful speech. We certainly want to prevent the live streaming of mass murders or terrorist attacks on Facebook, but we also want to protect free speech in this country. There's a clear tradeoff between regulation that would require platforms to quell speech or to shield speech, and free speech itself. These types of tradeoffs must be carefully thought out before legislatures act.

Host: Sinan Aral is a professor at the Institute for Data, Systems, and Society at the Massachusetts Institute of Technology. He's also a professor of management there. Professor, if you could, will you go back and define differential privacy again?

Guest: Differential privacy is a technique from computer science that anonymizes individual-level data such that it can't be reverse-engineered: you cannot discover who a person in the data set was from the data that you have. It is a set of techniques that guarantee, with some confidence, the inability of the possessor of the information to figure out what any individual's identity is.

Host: Is it your goal to explore what happened in 2016 or to prevent it from happening in 2020?

Guest: We are much more interested in preventing it from happening in the future, but I think what we need to understand is that we have a broad, comprehensive, and sweeping instance of manipulation in 2016 that is ripe for study.
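[Editor's note: The differential privacy the guest defines is often implemented with calibrated noise. Below is a minimal sketch of the classic Laplace mechanism applied to a count query; the query, the epsilon value, and the counts are hypothetical illustrations, not anything Social Science One or Facebook actually uses.]

```python
import math
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count under epsilon-differential privacy using the
    Laplace mechanism. A counting query has sensitivity 1 (adding or
    removing one person changes it by at most 1), so the noise scale
    is 1 / epsilon: smaller epsilon means stronger privacy, more noise."""
    # Sample Laplace(0, 1/epsilon) noise by inverse transform sampling
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Hypothetical query: how many users in a data set saw a given message?
print(dp_count(12_345, epsilon=0.5))
```

[The released value stays close to the true count, so aggregate research remains possible, but no observer can confidently infer whether any single individual was in the data set, which is the property the guest describes.]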
In other words, understanding how it was done in the past will help us understand how we can prevent it in the future. And I think the goal is to prevent it from happening again. It's not about doing a retrospective on the 2016 election; it's about understanding how to harden our democracy against future attacks and to protect election integrity going forward.

Host: Isn't that refighting the last war, though?

Guest: What do you mean?

Host: Fighting something that happened in 2016. Haven't techniques moved on since then?

Guest: Yes, indeed. You are completely right that future manipulation attempts will be different: more sophisticated, perhaps more broad-based and more sweeping. They will certainly involve more innovative methods of misinformation, for instance synthetic video and synthetic audio deepfakes, which are very troubling to me because this technology is advancing rapidly. It's becoming much more convincing, and they say that seeing is believing for a reason. The potential for it to be more convincing than textual misinformation is great, and so yes, things will be different in the future. However, it's also the case that we can learn from the past, and those who don't are doomed to repeat it.

Host: In terms of deepfakes, this is an issue that has been discussed by legislators on Capitol Hill. What can be done by Congress, or at least by the social media platforms, to try to mitigate deepfakes ahead of 2020, or is it already too late?

Guest: I don't think it's ever wise to say that it's too late and that we should throw up our hands. That is certainly not the answer. Again, you face tradeoffs. I don't think it's appropriate to put chains on this technology. There are tremendous benefits to technological innovation. Synthetic video, synthetic audio, and synthetic data generation, realistic data generation, are used in high-energy physics experiments and in medical testing and training.
There are all sorts of legitimate applications of the technology. We should not shackle the technology in a way that prevents that innovation. However, we should be regulating uses of the technology that create harm, and the ways in which we do that have to be sensitive to the tradeoff between not preventing technological innovation on the one hand and preventing nefarious, harmful uses of the technology on the other. We know that it will be used in political circles, in elections. But we also know that it is already being used for commercial fraud. There have been instances, the Symantec CEO has reported, of multiple clients being conned into transferring millions of dollars through synthetic audio used to mimic the voice of a company's CEO calling the CFO and requesting large sums of money be transferred immediately, and that has been successful in the past. So there are commercial threats and there are democratic threats, and we need to regulate in a way that doesn't shackle the technology but controls its nefarious uses.

Host: Another potential disinformation threat was highlighted a few weeks ago when Jack Dorsey's account was hacked and, unfortunately, some posts went up under his name briefly before being removed. What is the threat, in terms of disinformation, of verified users, those who are verified by Twitter or Facebook or other social media platforms, being hacked and spreading news that may not be true?

Guest: I talk about this in an upcoming book, The Hype Machine, which is about how social media is disrupting our world. It was ironic that at the exact moment Jack Dorsey's account on Twitter was being hacked, I was on television talking about the danger to democracy from digital manipulation. The issue here is that there are ramifications for society that we may not even be thinking of.
For instance, in 2013 Syrian hackers hacked the AP News Twitter handle and put out a tweet saying that Barack Obama had been injured in two explosions at the White House that day. What happened was that automated trading algorithms that trade on sentiment in social media were observing this sentiment in an automated fashion and started selling stocks. That's big news, if the president has been injured in an explosion inside the White House; that's potentially very destabilizing. It created a market crash that erased 140 billion dollars of equity value in a single day. So the threat from mimicry or an inauthentic hack of an individual verified user's account is real and potentially much greater than simply reputational damage to the individual. In the case of Jack Dorsey, obviously he is the CEO of a major company. When that kind of fake news gets out, it could really damage the reputation of the company; it could create downward trends in the stock price, and so on. So it's difficult to theorize about what types of potential harm there could be, but we have several examples of serious harm that could come from these types of attacks.

Host: I want to go back to something you said earlier. Are the social media companies helping in this effort, or are you finding resistance?

Guest: I think they are genuinely interested in helping. I also think that they have natural incentives to drag their feet at times, because there's a lot of potential exposure to commercial harm for them from being transparent. And so it takes motivation on the part of the senior leadership team to understand that the impact of these technologies on our society is more important than the potential commercial harm of transparency or of any given disclosure. Now, obviously they do need to protect individuals' private data. I do know some of the people working on this at Facebook, and I believe they have tremendous integrity.
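[Editor's note: The automated, sentiment-driven trading the guest describes can be caricatured in a few lines. The keyword list, threshold, and SELL/HOLD signal below are hypothetical illustrations of the mechanism, not any real trading system's logic; the point is how little it takes for one hacked, verified account to move an algorithm.]

```python
# Crude proxy for negative sentiment: count alarming keywords in a post.
NEGATIVE_KEYWORDS = {"explosion", "attack", "injured", "crash"}

def sentiment_score(post: str) -> int:
    """Return a non-positive score; more keyword hits = more negative."""
    text = post.lower()
    return -sum(1 for keyword in NEGATIVE_KEYWORDS if keyword in text)

def trading_signal(post: str, threshold: int = -2) -> str:
    """Emit SELL when sentiment is strongly negative, otherwise HOLD.
    A rule this blunt sells on a single convincing fake tweet."""
    return "SELL" if sentiment_score(post) <= threshold else "HOLD"

print(trading_signal("Breaking: two explosions at the White House, Obama injured"))  # SELL
print(trading_signal("Markets steady ahead of earnings season"))  # HOLD
```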
I also believe that they are genuinely interested in enabling research on this point, but that's not the senior management of the company. Those people, Mark Zuckerberg, Sheryl Sandberg, and so on, have a responsibility to the shareholders, which puts them in a tight spot about how quickly to be how transparent.

Host: Ninety percent of our conversation today has been about Facebook and Twitter. Should we be including anybody else?

Guest: Everyone else. We make this point in the paper. Facebook, and Twitter, are sort of the poster children for this conversation at the moment, but there are a number of other platforms that are involved and are important in this conversation. For instance, messaging services are very important because a lot of them are encrypted. Take, for instance, WhatsApp or Telegram. These encrypted services could be spreading, and we know by examining public WhatsApp messaging groups that they are spreading, disinformation that has been linked to genocidal killings in India and to fake news in Brazil. An Oxford study noted that a full third of the information spread online before the Swedish election was false misinformation on social media platforms. So we have to be thinking about all of the potential platforms. Each one of them has a different role to potentially play in this. I describe many of them one by one in my book, and describe why they are important to these types of problems.

Host: When does The Hype Machine come out?

Guest: The fall of 2020, hopefully in September or October, right before the election.

Host: Disinformation is part of a wider debate around election security that has been taking place on Capitol Hill really since the 2016 election.
That quite heated debate has resulted in dozens of bills at this point, floated on various aspects of election security, including the Honest Ads Act, which is sponsored by Senators Klobuchar, Warner, and Graham and would basically require that those buying ads on social media be transparent about who they are and why they're buying these advertisements. Do you think legislation like this, or any other type of legislation Congress can push through prior to 2020, will have a major impact on disinformation efforts?

Guest: I do think so. I think the Honest Ads Act is a step in the right direction. I also think that California is making steps in a similar direction. Let me make a broad statement about legislation, which I think is important. There are, as you say, dozens, but at least four bills in front of Congress right now that have to do with election reform, and it is surprising to me that discussion of these bills is being blocked by the majority in the Senate; Mitch McConnell obviously has received a lot of negative publicity around his desire to block this type of legislation. H.R. 1, the sweeping election reform legislation, may be controversial in the sense that it has lots of different elements that people disagree about, but surely we can discuss things like the secure democracy act or the cybersecurity voting act, I don't remember the exact names. These are very short bills with very limited scope that have to do
