
C-SPAN: Hillary Clinton, July 3, 2024

Students and talking to faculty about a lot of these AI issues that we have surfaced during our panels today. Of course, he wrote a very important book with the late Dr. Henry Kissinger on artificial intelligence. So we're ending our afternoon with Eric and trying to see if we can pull together some of the strands of thinking and challenges and ideas that we've heard. So Eric, thank you for joining us. You look like you're in a very comfortable but snowy place. I wanted to start by asking you, what are you most worried about with respect to AI in the 2024 election cycle?

Well, first, Madam Secretary, thank you for inviting me to participate in all the activities. I'm at a tech conference in snowy Montana, which is why I'm not there. If you look at misinformation, we now understand extremely well that virality, emotion, and particularly powerful videos drive voting behavior, human behavior, moods, everything. And the current social media companies are weaponizing that, because they respond not to the content but rather to the emotion, because they know the things that are viral are outrageous, right? Crazy claims get much more spread. It's just a human thing. So my concern goes something like this. The tools to build really, really terrible misinformation are available today, globally. Most voters will encounter them through social media. So the question is, what have the social media companies done to make sure that what they are promoting, if you will, is legitimate under some set of assumptions?

You know, I think that you did an article in the MIT Technology Review fairly recently, maybe at the end of last year, and you put forth a six-point plan for fighting misinformation and disinformation. I want to mention both because they are distinct. What were your recommendations in that article, to share with our audience in the room and online? What are the most urgent actions that tech companies, particularly, as you say, the social media platforms, could and should take before the 2024 elections?

Well, first, I don't need to tell you about misinformation, because you have been a victim of that, and in a really evil way, by the Russians. When I look at the social media platforms, here is the plain fact: if you have a large audience, people who want to manipulate your audience will find it, and they'll start doing their thing. They'll do it for political reasons, economic reasons, or they're simply nihilists. They don't like authority. And they'll spend a lot of time doing it. So you have to have some principles. One is you have to know who's on the platform, in the same sense that if you have an Uber driver, you don't know his name or details, but Uber has checked them out because of all the problems they've had in the past. So you trust that Uber will give you a driver that's a legitimate driver. The platform needs to know, even though they don't know who they are, that they're real human beings. The other thing they have to know is, where did the content come from? We can technologically put watermarks in; the technical term is steganography. Then you know roughly how it entered your system. You know how the algorithms work. We know it's very important that you work on age-gating so you don't have people below 16. So those are sensible ways of taking the worst parts of it out. I think one of the things that I wrote about is, if you look at the success of Reddit and their IPO, what they did, which they were initially reluctant to do, was active moderation. It improved the overall discourse. The lesson I learned is, if you have a large audience, you have to be an active manager of people who are trying to distort what you as a leader are trying to do.
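To make the watermarking point above concrete: below is a minimal sketch of the simplest steganographic scheme, hiding a short provenance tag in an image's low-order bits. The function names and the least-significant-bit approach are illustrative assumptions, not how any real platform marks content; production provenance marking, such as C2PA signed manifests, is designed to be cryptographically verifiable and to survive re-encoding.

```python
# Minimal sketch of least-significant-bit (LSB) steganography, the simplest
# version of the watermarking idea above. Everything here (function names,
# the red-channel LSB scheme) is an illustrative assumption, not any
# platform's actual mechanism. Requires Pillow (pip install pillow).
from PIL import Image

def embed_watermark(image_path: str, out_path: str, tag: str) -> None:
    """Hide a short provenance tag in the low-order bit of the red channel."""
    img = Image.open(image_path).convert("RGB")
    pixels = img.load()
    payload = tag.encode("utf-8")
    # A 16-bit length header, then the tag itself, as a string of bits.
    bits = f"{len(payload):016b}" + "".join(f"{b:08b}" for b in payload)
    w, h = img.size
    assert len(bits) <= w * h, "image too small for this tag"
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the red LSB
    img.save(out_path, "PNG")  # lossless format, so the hidden bits survive

def extract_watermark(image_path: str) -> str:
    """Read back the tag written by embed_watermark."""
    img = Image.open(image_path).convert("RGB")
    pixels = img.load()
    w, _ = img.size

    def bit_at(i: int) -> str:
        return str(pixels[i % w, i // w][0] & 1)

    length = int("".join(bit_at(i) for i in range(16)), 2)
    raw = "".join(bit_at(16 + i) for i in range(length * 8))
    return bytes(int(raw[j:j + 8], 2) for j in range(0, len(raw), 8)).decode("utf-8")
```

A naive LSB mark like this disappears under JPEG compression or cropping, which is exactly why real deployments sign the tag and use transform-robust embeddings.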
That Reddit example is a good one, because I don't have anything like the experience you do. But just as an observer, it seems to me that there's been a reluctance on the part of some of the platforms to actually know. It's kind of like they want deniability: I don't want to look too closely, because I don't want to know, and I can tell people I didn't know, and maybe I won't be held accountable. But actually, I think there's a huge market for having more trust in the platforms because they are taking off, you know, certain forms of content that are dangerous, however you define that. And your recommendations in your article focus on the role of distributors. Maybe go first, Eric, in explaining to us what should we think about and, more importantly, what should we expect from AI content creators and from social media platforms that are either utilizing AI themselves or being the platforms for the use of generative AI? How do we protect it, even with AI or open source developers? Is there a way to distinguish that?

It's sort of a mess. There are many, many different ways in which information gets out. So if you go through the responsibility chain, the legitimate players, those offering tools and so forth, all have the responsibility to mark where the content came from and to mark that it's synthetically generated. In other words, we started with this, and we made it into that. There are all sorts of cases like, I touched up the photo. But you should record that it was touched up, so you know it's an altered photo. It doesn't mean it was altered in an evil way. The real problem has to do with a confusion over free speech. So I'll say my personal view, which is I'm in favor of free speech, including hate speech, that is done by humans, because then we can say to that human, you are a hateful person, and we can criticize them and listen to them and then hopefully correct them. That's my personal view. What I'm not in favor of is free speech for computers. The confusion here is you get some idiot, right, who is just literally crazy, who is spewing all this stuff out, whom we can ignore, but the algorithm can boost them. There should be liability, the platforms' responsibility for what they're doing. Unfortunately, although I agree with what you said, the trust and safety groups in some companies are being made smaller and/or are being eliminated. I believe at the end of the day these systems are going to get regulated, and pretty hard. You have a misalignment of interests. If I'm the CEO of a social media company, I make more revenue with engagement. I get more engagement with outrage. Why are we so outraged online? It's because the media algorithms are boosting that stuff. Most people, it is believed, are more in the center, and yet we focus on the edges. This is true of both sides. Everybody's guilty. I think what will happen with AI, just to answer your question precisely, is AI will get even better at making things more persuasive, which is good in general for understanding and so forth, but it's not good from the standpoint of election truthfulness.

Hillary: Yeah, that is exactly what we've heard this afternoon, that, you know, the authoritativeness and the authenticity issues are going to get more difficult to discern. And it will be a more effective message.
You know, I was struck by one of your recommendations, which is kind of like a recommendation that can only be made at this point in human history, and that is to use more human beings to help. And it's almost kind of absurd that we're sitting around talking about, well, maybe we can ask human beings to help human beings figure out what is and isn't truthful. How do we incentivize companies to use human beings? And how do we avoid the exploitation of human beings? Because there have been some pretty troubling disclosures about the sweatshops of human beings in certain countries in the Global South who are, you know, driven to make these decisions, and it can be quite, you know, quite overwhelming. So when you've got companies, as you just said, cutting trust and safety, how do we get people back to some kind of system that will make the kinds of judgments you're talking about?

Well, speaking as a former CEO of a large company, companies tend to operate on fear of being sued, and Section 230 is a pretty broad exemption. For those in the audience, Section 230 is sort of the governing rule on how platforms are held liable for content. And it's probably time to limit some of the broad protections that Section 230 gave. There are plenty of examples where somebody was shot and killed over some content where the algorithm enabled this terrible thing to occur. There is some liability. We can try to debate what that is. If you look at it as a human being, somebody was harmed, and there was a chain of liability, but the system made it worse. So that's an example of a change. But I think the truth, if I can just be totally blunt, is that ultimately information and the information space that we live in, you can't ignore it. I used to give this speech and say, you know how we solve these problems? Turn your phone off. Eat dinner with your family. And have a normal life. Unfortunately, my industry, and I'm happy to have been part of it, made it impossible for you to escape all of this. As a normal human being, you're exposed to all of this terrible filth. That's going to get fixed by the industry acting collaboratively, or it's going to get fixed by regulation. Let's think about TikTok, because TikTok is very controversial. It is alleged that a certain kind of content is being spread more than others. TikTok isn't social media. It's really television. And when you and I were younger, there was this huge fracas over how to regulate television. It was a rough balance where we said, fundamentally, it's OK if you present one side as long as you present the other side in a roughly equal way. That's how societies resolve these information problems. It's going to get worse unless you do something like that.

Well, I agree with you 100 percent in both your analysis and your recommendations, and not for the very first time, we talked about the need to revisit, and if not completely eliminate, certainly dramatically revise, Section 230. It's outlived its usefulness. It was an appealing idea back in the late 1990s, when this industry was so much in its infancy. But we've learned a lot since then, and we've learned a lot about how we need to have some accountability, some measure of liability, for the sake of the larger society, but also to give direction to the companies. These are very smart companies. You know that. You spent many years at Google. They're going to figure out how to make money. But let's have them figure out how to make a whole lot of money without doing quite so much harm. That partly starts with dealing with Section 230. You know, when we were talking earlier about, you know, what AI
is aiming at, you know, the panelists were all, you know, very forthcoming. And we said, you know, we know there are problems. We're trying to deal with these problems. We know, even from the public press, that a number of AI companies have invented tools that they've not disclosed to the public, because they themselves assess that those tools would make a difficult situation a lot worse. Is there a role, Eric, I know there's the Munich statement negotiated at the Munich Security Conference as a start, but is there more that could be done with a public-facing statement, some kind of agreement by the AI companies and the social media platforms, you know, to really focus on preventing harm going into the election? Is that something that's even feasible?

It should be. The reason I'm skeptical is that there's not agreement among political leaders, of course, you're a world expert on that, and the companies on what defines harm. I have wandered around Congress for a few years on these ideas, and I'm waiting for the point where the Republicans and the Democrats are in agreement, from their local and individual perspectives, that there's harm on both sides. We don't seem to be quite at that point. This may be because of the nature of how President Trump works, which is always sort of baffling to me. But there's something in the water that's causing a nonrational conversation. So I'm skeptical that that's possible. I obviously support your idea. The other thing I would say, and I don't mean to scare people, is that this problem is going to get much worse over the next few years, maybe or maybe not by November, but certainly in the next cycle, because of the ability to write programs. I'll give you an example. I was recently doing a demo. The demo consists of, you pick a stereotypical voter. Let's say it's a Hispanic woman with two kids. She has these two interests. You create a whole interest group around her. She doesn't exist. It's fake. Then you use Python to make five different variants of her, with different ages and backgrounds, echoing the same voices. So the ability to have AI, broadly speaking, generate entire communities of pressure groups that are, in fact, virtual. It's very hard for the systems to detect that these people are fake. There are clues and so forth. But to me, this question about the ability to have computers generate entire networks of people who don't exist, to act for a common cause, which may or may not be one that we agree on, but probably influenced by the national security services of North Korea or China, or influenced by some business objective from the tobacco companies, or you name it, I worry a lot about that. And I don't think we're ready. It's possible, just to hammer on this point, for the evil person who inevitably is sitting in the basement of their home, whose mother leaves them food at the top of the stairs, to use these computers. That's how powerful these tools are. OK. [laughter]
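To illustrate how little machinery the demo described above actually requires: the sketch below fans a single invented persona out into five variants. The Persona fields, names, and values are all assumptions made up for illustration; there is no generation or posting capability here, only the data skeleton, which is the point: synthetic "communities" are cheap to specify.

```python
# A minimal sketch of the kind of demo described above: one invented voter
# persona, fanned out into five variants. All names, fields, and values are
# illustrative assumptions, not the actual demo code.
from dataclasses import dataclass, replace

@dataclass
class Persona:
    name: str            # she doesn't exist; it's fake
    age: int
    background: str
    interests: tuple     # the shared talking points each variant "echoes"

base = Persona(
    name="(invented)",
    age=34,
    background="suburban parent of two",
    interests=("school funding", "grocery prices"),
)

# Five variants with different ages and backgrounds, same interests/voice.
variants = [
    replace(base, age=age, background=bg)
    for age, bg in zip(
        (26, 34, 41, 53, 67),
        ("urban renter", "rural homeowner", "retired veteran",
         "college student", "small-business owner"),
    )
]
for v in variants:
    print(v)
```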
Well, let's try to bring it back a little bit to where we are here at the university, in this, you know, great setting of so many people who have a lot to contribute, and working in partnership with Aspen Digital, which similarly has a lot of convening and outreach potential. What can universities do? What can we do in research, particularly on AI? How do we create a kind of, you know, broad network of partners, like we're doing here between IGP and Aspen Digital, and begin to try to do what's possible to educate ourselves, educate our students, in combating mis- and disinformation with respect to elections?

So the first thing we need to do is to show people how easy it is. I would encourage every university program to try to figure out how to do it. Obviously, don't actually do it. But it's relatively easy, and it's really quite an eye-opener. And I've done this for as long as I've been alive. The second thing I would do is, there's an infrastructure that would be very helpful. The best design that I'm familiar with is blockchain-based. It's a name and origin for every piece of content, independent of where it showed up. So if everyone knew that this piece of information showed up here, you could then have provenance and understand, how did it get there? Who pushed it? Who amplified it? That would help our security services, our national security people, to understand: is this a Russian influence campaign, or is it something else? So there are technical things, and there are also educational things. I think this is only going to get fixed if there is a bipartisan, broad consensus about taking the edges, the crazy edges, the crazy people, and you know who I'm talking about, and basically taking them out. I'll give you an example. There was an analysis during COVID that the number one spreader of misinformation about COVID online was a doctor in Florida, responsible for something like 13 percent of all of it. He had a whole influence campaign of lies to try to convince you to buy his supplements instead of a vaccine. That's just not OK, in my view. The question for me is, why was that allowed by that particular social media company to exist, even after he was pointed out? You have a moral, legal, and technical framework. But it has to be seen as not OK to allow this evil doctor, for profit, to mislead people on vaccinations.

Just to follow up on that, I mean, I won't disagree about what has to happen if we're going to end up with some kind of legislation or regulatory framework from the government. But if they were willing, is there anything that the companies themselves could do, as I say, if they were willing to, that would lay out some of the guardrails that need to be considered before we get to the consensus around legislation?

Of course. The answer is yes. But the way it works in the companies, you don't get to talk to the engineers. You get to talk to the lawyers. And the lawyers are very conservative, and they won't make commitments. It's going to require some kind of agreement among the leadership of the companies on what's in bounds and what's out of bounds, right? And getting to that is a process of convening and conversations. It's also informed by examples. So I would assert, for example, that every time someone is physically harmed by something, we need to figure out how we can prevent that. That seems like a reasonable principle if you're in the digital world now. Working from those principles is the way it's going to get started. It's not going to happen unless it's forced by the government. The best way to make it happen, in my view, is to make a credible and feasible proposal about where the guardrails are. We've been working on this. And you have to have content moderation. When you have a large community, these groups will show up. They will find you, because their only goal is to find an audience to spread their evil, whatever the evil is. And I'm not taking sides here.
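The blockchain-based provenance infrastructure described a few turns back, a name and origin for every piece of content, can be sketched as an append-only, hash-chained log. The class and record fields below are assumptions for illustration, a toy stand-in for a real distributed ledger, not a production design.

```python
# A toy sketch of content provenance: an append-only, hash-chained log that
# records who created, pushed, or amplified each piece of content. The field
# names and class are illustrative assumptions, not a real ledger design.
import hashlib
import json
import time

def content_id(data: bytes) -> str:
    """Stable identifier for a piece of content: the hash of its bytes."""
    return hashlib.sha256(data).hexdigest()

class ProvenanceLog:
    def __init__(self) -> None:
        self.entries = []           # append-only
        self.prev_hash = "0" * 64   # genesis link

    def record(self, data: bytes, origin: str, action: str) -> dict:
        """Append who did what to which content, chained to the prior entry."""
        entry = {
            "content_id": content_id(data),
            "origin": origin,    # e.g., the account or publisher acting
            "action": action,    # "created", "pushed", "amplified", ...
            "ts": time.time(),
            "prev": self.prev_hash,
        }
        self.prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

# Usage: reconstruct how a claim traveled; who pushed it, who amplified it.
log = ProvenanceLog()
log.record(b"some viral claim", origin="creator@example", action="created")
log.record(b"some viral claim", origin="account123", action="amplified")
trail = [e for e in log.entries
         if e["content_id"] == content_id(b"some viral claim")]
```

The hash chaining is what makes the log tamper-evident: altering any past entry breaks every subsequent link, which is the property a real blockchain provides at scale.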
Well, I think the guardrail proposal is a really good one. Obviously, you know, we here at IGP, Aspen Digital, the companies who are here, others, researchers who are here, maybe people should take a run at that. I mean, I'm not naive. I know how difficult it is. But I think this is a problem we all recognize. It's not going to get better if we keep wringing our hands and fiddling on the margins. We have to try something different.

So let me just be obnoxious. I sat through all these safety discussions for a long time, and these are very, very thoughtful analyses. They're not producing solutions in their analysis that are implementable by the companies in a coherent way. Here's my proposal. Identify the people. Understand the provenance of the data. Publish your algorithms. Be held, as a legal matter, to your algorithms being what you said they are. Reform Section 230. Make sure you don't have kids on the platform, and so forth. Etcetera. You know, make your proposals, but make them in a way that's implementable by the team. If there's a particular kind of piece of information that you think should be banned, write a specification well enough that, under your proposal, the computer company can stop that, right? That's where it all fails, because the engineers are busy doing whatever they understand. They're not talking to the lawyers.
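The closing demand, that a policy ask be written precisely enough for an engineering team to implement, can be illustrated with a toy rule. The schema and the two boolean signals below are invented for this example; real moderation pipelines rely on trained classifiers and human review rather than hand-written predicates, but the discipline is the same: every clause of the trigger must map to something the system can actually compute.

```python
# A toy illustration of the point above: a content-policy ask only becomes
# implementable once every clause maps to a computable signal. The rule
# schema and boolean signals are invented for illustration.
from dataclasses import dataclass

@dataclass
class PolicyRule:
    name: str
    applies_to: str   # the content category in scope
    trigger: str      # the testable condition, stated precisely
    action: str       # "label", "downrank", or "remove"

rule = PolicyRule(
    name="health-misinfo-for-profit",
    applies_to="health claims",
    trigger="contradicts current public-health guidance AND links to a product for sale",
    action="remove",
)

def enforce(contradicts_guidance: bool, links_to_storefront: bool) -> str:
    """Apply the rule. Each trigger clause is a computable signal; if a
    clause can't be computed, the specification isn't implementable yet."""
    if contradicts_guidance and links_to_storefront:
        return rule.action
    return "allow"

# The for-profit vaccine-misinformation case above would trip both signals:
assert enforce(contradicts_guidance=True, links_to_storefront=True) == "remove"
```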
