
BBC News, July 3, 2024

And the EU will head to the polls in 2024, and many countries already have. Half the world's voting population, about 2 billion people, are expected to cast ballots. Artificial intelligence has already begun to play a substantial role in elections, and concerns are growing about how to protect the public from deepfakes that could sway their vote, especially as the technology becomes more sophisticated and convincing. In Pakistan, an AI version of jailed Imran Khan was released by his own team claiming victory in February. And in Argentina's elections, the New York Times reported that multiple images released by both campaigns had been digitally doctored. Here in the US, voters in New Hampshire received a fake robocall impersonating President Joe Biden, telling them not to vote in the state's primary back in January. Take a listen.

"What a bunch of malarkey. You know the value of voting Democratic when our votes count. It's important that you save your vote for the November election. We'll need your help in electing Democrats up and down the ticket. Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again."

However, tech giants are pushing back. In February, 20 major companies including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok and X signed the Tech Accord at the Munich Security Conference, an agreement to join forces to combat deepfakes and protect elections. More recently, TikTok announced on Thursday that it will begin labelling AI-generated content. Meanwhile, Meta said last month it will do the same. The Google-owned video-sharing platform YouTube says it requires creators to disclose when realistic content is AI-generated. This week, Microsoft and OpenAI announced a $2 million Societal Resilience Fund to further AI literacy. Just before coming on air, I spoke to Ginny Badanes, head of Microsoft's Democracy Forward initiative, about how tech companies can protect against bad actors.

Thank you for joining us on BBC News. As we look this year at so many elections, and at the issue of deepfakes and AI-generated content, some of which is very, very hard to tell is not real, how can people know that what they are listening to and what they are seeing is authentic?

It's a great question. As you mentioned, this is a huge year for elections, and some 2 billion people are going to go to the polls to vote. So when you have generative AI on the rise at the same time, it's important that people have tools they can use to decipher what is AI and what they can trust, and they also need to be able to access authoritative information.

So can they look at an image and know right away whether it's real or not?

Not necessarily, not quite yet, but there are some indicators they can look for. For example, if an image is telling them a story that may not be true, they can go look for the original source of information to see if they can get additional context around what the picture is telling them.

That's putting a lot of the emphasis on the user though, isn't it? Should tech companies do more?
Absolutely, this is a huge role for tech companies. There is a whole lot of promise around indicators of trust, so the labelling and watermarking the companies are investing in is starting to be made more available and more visible to the end user. But the reality is, wherever the technology and the labelling are right now, we need to invest in AI literacy and training so the people on the other side know what questions they should be asking.

As part of your company, your department, do you track and chart how much of this is out there? How much harmful material are you finding that is destabilising democracy or interfering with elections?

We have been tracking what nation-states are doing in this space. There have been so many elections already that we can look at to see what has happened, and so far we haven't seen massive amounts of AI used to deceive people around elections. But that doesn't mean it's not coming, and it doesn't mean that others aren't planning it, so it's important for us to continue reporting on what we are seeing our adversaries do and, again, equipping people, particularly political campaigns and election officials, with what they should do if they encounter it.

Can you say that your own Microsoft products are not being used to cause harm and contribute to this?

We are doing the best we can to hopefully not have that be the case. We have policies and restrictions on our products to ensure that they are not being used for those purposes, and again, policies and enforcement if we determine that someone has used them in a deceitful way, or a way that goes around what our policy states.

If an election campaign, if a candidate in any country, not talking specifically about the US, wants to target their opponents with deceptive AI and show that to their own existing mailing list of supporters, is there anything that any tech company can do about that?
If they are using it in that way and we determine that they have done so, as a provider of technology we can cut off usage, for sure. But there are ways you can stop it in the first place, and some tools in place to prevent that kind of use at the beginning. I think what we're looking at is less likely to be campaigns using it for this purpose, and more likely actors outside of the political space who want to influence the election. That is where working with our threat intelligence teams comes in, to see what these actors are doing and how they are trying to manipulate the public, and to share that with the public and with governments so they know what to look out for.

Are there things that have been targeted so far?

So far we see Russia, for example, continuing to put propaganda about Ukraine out into the space, including into places where elections are coming up, to influence those people. It's hard to say for sure what their intentions are. Different techniques like deepfakes are being used and are not super effective at the moment, but again, this technology is improving and these actors are looking to leverage the best technology out there, so we expect it to continue to improve over time.

When it comes to regulation, Microsoft and 20 other tech companies have signed the Tech Accord, but that is self-policing, self-regulating. There is no legislation in this country; the EU has its AI Act. Would you like to see legislation? Should there be legislation?

There should be. The Tech Accord was a moment in time when tech companies came together in recognition of the challenges we are facing in this election cycle and the reality of the timeline, which is that it takes a while to create legislation and enforce it, and we are in the middle of an election cycle. So the voluntary commitments were the tech industry's way of saying, we will step into this gap, identify the challenges and commit to what we will do about it. That is not instead of regulation; we want to see additional regulation from the US and from other governments.

What about freedom of expression? There has been political satire since the beginning of time, and some would say some of this AI-generated material is just that.

AI-generated material is not bad in itself. In fact, a political campaign using AI material is not bad; it's when it comes to the intent to deceive that we are really drawing a line. If you put your words in someone else's mouth, that's fraud, and that is definitely going over a line. If you use AI to make a funny video about yourself, that's absolutely acceptable from our perspective. So there is a difference between using the technology to further your message and using it to deceive others.
OK, we will leave it there, thank you so much.

In the US, tackling the threat of artificial intelligence before the November presidential election has become a significant priority across the government. On Thursday, the FBI warned that foreign adversaries could use AI to interfere in the election and spread disinformation. One senior official says it is a concern that will probably grow over the coming years. The Department of Homeland Security has also warned election officials that generative AI could be used to enhance foreign influence campaigns targeting the 2024 election cycle. And in Congress, lawmakers are currently discussing legislation on AI's impact in federal elections, however it is unlikely any will pass by November. One of those proposals is the Protect Elections from Deceptive AI Act, a bill that would ban the intentional publishing of materially deceptive AI-generated political ads that are intended to influence an election or raise money. Joining me to discuss the effort is one of the bill's authors, Democratic senator from Delaware Chris Coons.

Thank you so much for joining us on BBC News. Let's talk about this year's election and your Democratic Party colleagues, including President Joe Biden, a close confidant of yours. How concerned is he, is the campaign, is the party as a whole, about the role deceptive AI may play?

Thank you for the chance to be on, and thank you for your focus on this challenging and important subject. I'm very concerned about the rapidly growing capability of AI to carry out, for example, fake telephone calls mimicking someone's voice. We saw an episode of this in the New Hampshire primary here, we saw deepfakes used to try and influence the election in Taiwan, and we have seen it have an impact on the outcome of the election in Slovakia. There is already electioneering afoot where AI fakes, whether video or audio, can and will be used to try and deceive the public, to dissuade them from voting or to spread lies about what different candidates for office are doing and what they stand for.

Have you seen any evidence of that coming from foreign actors in this country?

Yes.

Would you like to elaborate?
Well, things that I know from classified briefings I won't talk about in detail. I'll just say that the previous segment, where the person being interviewed said that a number of active foreign players have been using artificial intelligence to strengthen their disinformation campaigns with an intention to influence our election, is quite accurate. And in testimony, the director of the FBI has said publicly that we have a number of adversaries, whether it's Russia, China, or the DPRK, North Korea, that are using disinformation and artificial intelligence to seek to interfere with our elections.

You're a cosponsor of the Protect Elections from Deceptive AI Act. It has bipartisan support, and it is one of the issues on which there is agreement. Why can that not be rushed through? Why are we likely to see inaction before November on protecting against what you're describing?

I was encouraged that the Rules Committee, which the senator chairs, has scheduled a markup for this coming week. We have struggled, frankly, to enact reasonable, responsible, meaningful protections in privacy generally, in data regulation more broadly, and in artificial intelligence. The senators leading this bill are doing a good job, and if there is one piece of legislation I'd like to see passed before our elections, it would be this one.

Do you think it might, then?

We are having difficulty getting along in Congress and moving things through the Senate and the House, but just the very breadth of the ideological spread between a senator who is on the far right and someone like me, a Democrat, gives you some indication of how much support there might be for this. I had a hearing last week on another bipartisan artificial intelligence related bill that seeks to create a private right of action, a legal protection for anyone, whether they're a politician or an artist or an actor or just an average citizen, who is the victim of a deepfake or an AI avatar being used to make them say or do things they've never approved of or don't agree with.

As we heard earlier, deepfakes are being used when it comes to conflicts as well, in propaganda, and so I want to ask you about ano...
