
BBC News, July 3, 2024

AI has already begun to play a substantial role in elections, and concerns are growing about how to protect the public from deepfakes that could sway their vote, especially as the technology becomes more sophisticated and convincing. In Pakistan, an AI version of jailed Imran Khan was released by his own team, claiming victory in February. And in Argentina's elections, the New York Times wrote an article showing multiple images released by both campaigns were digitally doctored. Here in the US, voters in New Hampshire received a fake robocall from President Joe Biden telling them not to vote in the state's primary back in January. Take a listen.

What a bunch of malarkey. We know the value of voting Democratic. When our votes count, it is important you save your vote for the November election. We will need your help in electing Democrats up and down the ticket. Voting this Tuesday only enables the Republicans in their quest to elect Donald.

Obviously not Joe Biden there, but an AI-generated deepfake. However, tech giants are pushing back. In February, 20 major companies including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok and X signed the Tech Accord at the Munich Security Conference, an agreement to join forces to combat deepfakes and protect elections. More recently, TikTok announced on Thursday that it will begin labelling AI-generated content. Meanwhile, Meta said last month it will do the same. The Google-owned video-sharing platform YouTube says it requires creators to disclose when realistic content is AI-generated. This week Microsoft and OpenAI announced a $2 million Societal Resilience Fund to further AI literacy. Just before coming on air I spoke to Ginny Badanes, head of Microsoft's Democracy Forward initiative, about how tech companies can protect against bad actors.

Thank you for joining us on BBC News. If we look at this year, with so many elections, and at the issue of deepfakes and AI-generated content, some of it is very hard to tell that it is not true. How can people know that what they are listening to, what they are seeing, is authentic?

It is a great question. This is a huge year for elections; something like 2 billion people are going to the polls to vote in consequential elections. With generative AI on the rise at the same time, it is important people have tools they can use to decipher what AI is, what they can trust, and they also need to be able to access authoritative information.

Can they look at an image and know right away if it is real or not?

Not necessarily, not yet, but there are some indicators. If it is telling you a story that is maybe not true, you can go look for the original source of the information to see if you can get additional context around it.

That puts a lot of emphasis on the user. Should tech companies be doing more?
Absolutely. There is a big role for the tech companies in this. There are lots of promises around indicators of trust, labelling and watermarking that the companies are investing in and which they are starting to make more available and visible to the end user. The reality is, given where that labelling technology is now, we also need to invest in AI literacy and AI training, so that people on the other side know what questions they should be asking.

Your department tracks and charts how much of this is out there. How much of this material are you finding, especially material aimed at destabilising democracies or interfering with elections?

We have been tracking especially what nation states are doing in this space. There have been so many elections we can look at so far and see what has happened there. So far we have not seen massive amounts of AI used to deceive people around elections, but that doesn't mean it isn't coming, and it doesn't mean others aren't planning for that. It is important for us to continue reporting on what we are seeing, what we're seeing these adversaries do, and equipping people, especially political campaigns and election officials, with what they should do if they encounter it.

Can you say your own Microsoft products are or are not being used to cause harm and contribute to this?

We are doing the best we can to hopefully not have that be the case. We have different policies and restrictions on our products to ensure they are not being used for those purposes. And we have policies and enforcement if we determine that somebody has used them in a deceitful way or a way that goes around what our policies state.

If an election campaign, if a candidate in any country, and I am not talking specifically about the US, wants to target their opponents with deceptive AI and share that with their own existing mailing list of supporters, is there anything any tech company can do about that?

If they are using it in that way and we are able to determine they have done so, as the provider of the technology, we can cut off usage for sure. So there are some ways to stop it in the first place. There are also tools to prevent that kind of use in the beginning. I think what we are looking for is less likely to be campaigns using it for this purpose, and more likely actors outside the political space looking to influence elections.
That is where we are looking at our threat intelligence team, which can be really helpful, to see what these actors are doing, how they are trying to manipulate the public, and sharing that information with governments and with the public to know what to look out for.

Are there particular things being targeted?

So far we see Russia, for example, is continuing to put propaganda about Ukraine out into the space, including places where elections are coming up, in order to potentially influence those people around votes. It is hard to say for sure what their intentions are. We have seen different techniques like deepfakes being used. They are not particularly effective at the moment, but these technologies are improving and these actors are looking to leverage the best technology out there. So it could continue over time.

Microsoft and about 20 other companies have obviously signed the Tech Accord, but that is self-policing, self-regulating. There is no legislation in this country. The EU has its AI Act. Would you like to see legislation? Should there be legislation?

There should be. The Tech Accord was a moment in time when the tech companies came forward to acknowledge the challenges we are facing in this election cycle and the reality of the timeline, which is that it takes a while to create that legislation and enforce it. But we are already in the middle of an election cycle. The voluntary commitments were the tech industry's way of saying: we are going to step into this gap, we are identifying these challenges and committing to what we can do about it. That is not in lieu of regulation. We absolutely want to see additional regulation from the US and other governments.

What about freedom of expression? There has been political satire since the beginning of time, and some would say some of this AI-generated stuff is just that.

AI-generated material is not necessarily bad; in fact, political campaigns using AI material is not necessarily bad. It is when it comes with the intent to deceive that we are drawing a line. If you put your words in somebody else's mouth, that is fraud, and that is definitely crossing a line. If you use AI to make a funny video about yourself, that is absolutely acceptable from our perspective. So there is a difference between using the technology to further a message and using it to deceive others.

OK. We will leave it there for the moment. Thanks for joining us on BBC News.

I spoke to Adam Presser, TikTok's head of operations and trust and safety.
I asked him what they are doing specifically to detect and stop deceptive actors from targeting online platforms when it comes to elections.

Keeping our communities safe is our top priority, especially through these really important processes. We have navigated over 150 elections over the course of the last several years, all over the world, so we work with partners like Democracy Works to provide reliable information to people searching for information about the election in the United States, for example. We also have 18 fact-checking partners to ensure the content people are seeing is reliable and has been verified. So it really requires that multipronged approach to ensure the experience on TikTok is as safe as possible.

In the US, tackling the threat of artificial intelligence before the November presidential election has become a significant priority across the government. The FBI has warned that foreign adversaries could use AI to interfere in the election and spread disinformation. One senior official says it is a concern that we will probably see grow in the coming years. The Department of Homeland Security has also warned election officials that generative AI could be used to enhance foreign influence campaigns targeting the 2024 cycle. And in Congress, lawmakers are discussing legislation on AI's impact in federal elections, but it is unlikely any will pass by November. One of those proposals is the Protect Elections from Deceptive AI Act, a bill that would ban the intentional publishing of materially deceptive AI-generated political ads which are intended to influence an election or indeed to raise money. Earlier I spoke to one of the bill's authors, Democratic Senator Chris Coons from Delaware.

You are not facing the electorate yourself this year, but plenty of your Democratic colleagues are, including President Joe Biden, a close confidant of yours. How concerned is he, is the campaign, is the party as a whole, about the role that deceptive AI may play in this year's election?

Well, thank you for a chance to be on, and thanks for your focus on this important subject. I am very concerned about the rapidly growing capabilities of AI to carry out, for example, fake telephone calls mimicking somebody's voice. We already saw an episode of this in the New Hampshire primary here. We saw deepfakes used to try to influence the election in Taiwan.
We have seen it have an impact on the outcome of the election in Slovakia. So there is already electioneering afoot, where AI fakes, whether they are video or audio, can and I think increasingly will be used to try to deceive the public, to dissuade them from voting, or to spread lies about what different candidates for office are doing or what they stand for.

Senator, have you seen any evidence of that coming from foreign actors in this country?

Yes.

Would you like to elaborate?

Well, things that I know from classified briefings I am not going to talk about in detail. I will just say that the previous segment, where the person being interviewed was saying that a number of active foreign players had been using artificial intelligence to strengthen their disinformation campaigns with an intention to influence our elections, that is quite accurate. And in testimony, the director of the FBI has said publicly that we have a number of adversaries, whether it is Russia, China, Iran or the DPRK, North Korea, that are using disinformation and now artificial intelligence to seek to interfere with our elections.

You are a cosponsor of the act. It has broad bipartisan support, one of the few issues in this country on which there is agreement. Why can that not be rushed through? Why are we unlikely to see that in action before November to protect against what you are describing there?

I was encouraged that the Rules Committee, which Senator Klobuchar chairs, has it scheduled for a markup this coming week. Frankly, we have struggled to enact reasonable, responsible, meaningful protections in privacy generally, in data regulation more broadly, and in artificial intelligence. Senators Klobuchar and Hawley are leading this bill and doing a good job, and if there is one piece of legislation I would like to see passed before our elections, it would be this one.

Do you think it might?
It might. We are having difficulty getting along in Congress and moving things through the Senate and the House, but just the very breadth of ideological spread between Senator Hawley, who is on the far right, and Senator Klobuchar, who
