
CSPAN3 CEOs July 3, 2024

Corporate executives discuss the latest developments in technology, including artificial intelligence and clean energy, at a recent summit hosted by Axios in Washington, D.C. This is about 45 minutes.

Good afternoon, and welcome to the third annual Axios What's Next Summit. We appreciate all of you for being here, guests joining us virtually and from around the world, and thank you to the Axios events team, which has spun up this amazing space and series of events. I am Mike Allen, co-founder of Axios, here in Washington with editor-in-chief Sara. This is one of our favorite days of the year. Axios experts are in from around the country; this is like a reunion, and we have been having fun together. It's an amazing program, covering topics from AI to how business is changing to how policy is changing. Axios tries to get to what matters and what is coming; that is why we love the What's Next sensibility: tech, media and politics, and how they intersect, connect and collide. This gives us a chance to gather insights from the transformative leaders we're going to hear from today, from across policy, business and technology, and allows us to go a little deeper into how they are affecting what is next for how we work, play and live our lives. If you've been here in previous years, our summits have featured amazing leaders and speakers such as the renowned chef and humanitarian José Andrés, music producer Timbaland, and the heads of CBS, GM and YouTube, just to name a few. Sorry for my voice. This year we're going to hear from another emerging set of amazing leaders; we have included two cabinet officials on the stage today, Slack's new CEO Denise Dresser, and AI thought leader Alondra Nelson. We call this our crystal ball event. In addition to the great programming here on the main stage, we would love you to check out the Innovation Hub downstairs, where we have demonstrations of virtual reality and of electric air flight. I tried the AI robot that sorts your trash as you dispose of it.
And it's called, of course, Oscar. I also tried out the fully electric Honda scooter and Prologue SUV. You can't really pack the SUV in your luggage, but the Motocompacto scooter folds down this small. We're going to put a video of me on the scooter on YouTube; I was like, you can put this in the overhead, but it turns out the battery is too big for the overhead. The scooter is fun, and I was happy to see a scooter that has a seat. Also today we are honored to have Frank McCourt here, such a great pioneer. He is here with his book, Our Biggest Fight, which we were honored to debut at SXSW last week. Frank is an optimist and has great ideas about how we can improve tech, and he and Project Liberty are here along with its new president. We appreciate you being here; Frank will be signing books in the upstairs lounge. We appreciate Project Liberty, and thank you very much to you and your team for being here to tell us more about Our Biggest Fight: Reclaiming Liberty, Humanity, and Dignity in the Digital Age. And as blurbs go, it's a little tough to top this one from Bruce Springsteen; the Boss calls this essential reading for our times. I have a few logistics for us all, to give you a sense of how the day is going to go. To provide a quick rundown of the afternoon: we are going to have a short break in the middle of the programming today, so be sure to make it back here in time once that's over, because we have a full slate of programming after the break. Otherwise, you're welcome to stretch your legs, take a call, or get a coffee, whatever you need to do in the lounge upstairs. If you need to refer to the summit agenda, it is on the badge you are wearing or on the screens upstairs. After the programming concludes, we encourage you to take the short commute upstairs for a reception where you can connect with fellow attendees and enjoy some bites with us. Thank you so much.
As you post on social media, we would love you to use the hashtag #AxiosWNS. Now, on to the show with chief technology correspondent Ina Fried.

Thank you, Mike and Sara. This is my third time at the What's Next Summit, and I am so excited for our first speaker. This is a critical time for society. The advent of AI technology will reshape things; how it reshapes them is yet to be determined, and I am excited to start with Alondra Nelson, who co-authored the Blueprint for an AI Bill of Rights during her tenure leading the White House Office of Science and Technology Policy. Please help me in welcoming Dr. Nelson.

I want to start with the discussion around AI. Often when I hear it talked about, I hear it talked about as if it is going to be good or bad. Correct me if I'm wrong, but my thinking is it is going to process ones and zeros, and whether it is good or bad depends on what we tolerate, incentivize and regulate. Some of that thinking is key to your work, but how do you think about these questions? How do we make sure we have good outcomes, and not bad outcomes?

I'm so glad to be here. I think that is the profound question of our time. It is the case that this is a very powerful technology. It is a bunch of zeros and ones, but it's also complicated, and there's a reason there is a kind of enchantment around it; we should stipulate that. It's also the case that these are tools we create, and they don't have their own agency, even though people are building AI agents. What they are and what they become is up to us. I think we very much have to remain in the space of appreciating that the work we do now, in regulation and education, all throughout society, is what is going to create the future that we want. So, you know, when people say AI is going to cure cancer and mitigate climate change and all of these things, all of that may be true, I don't know; they are all open questions. But to your point, none of it is going to just happen.
We have the benefit of being pretty early in the moment of a new technology, so that we can begin to put in place, hopefully, some of the infrastructure, some of the norms and regulations, some of the standards we need to have the potential good outcomes, which are still, even if we do some of this or a lot of this, only potentials. And both can happen: it's not as if using AI to achieve some good outcome prevents the bad outcome.

I want to rewind a bit. When you were in the White House crafting this AI Bill of Rights, what were the key things you wanted to make sure got enshrined? Obviously it wasn't law; it was a first step. What were you hoping to achieve with that bill of rights?

We were taking up some of the work of prior administrations. The Obama administration had done work on AI and the policy issues the nation should think about. Under the Trump administration, there was AI legislation passed, a national AI act, that said things like: we are going to use AI in government, and it will be done with democratic values; it will be trustworthy. So the question for us in the Biden-Harris administration was: what does it mean to have AI like that? Very early on in the administration we spun up a process; as you say, the Blueprint for an AI Bill of Rights is not law or legislation. We were coming into office in the middle of a pandemic, at a time when there was quite a lot of distrust about science, government and technology. So, how could we have a process in which we could engage people in thinking about questions about AI? We wrote an op-ed, put a White House email address at the bottom, and said anyone can write to us about this: what are the things the American public should be asking for, and how should we be thinking about AI technology in the context of American society? We spent almost a year talking to schoolteachers and high school students and lots of experts, and folks in local and state government.
We wanted to get a sense of what they needed and what they were worried about, and also to distill, from some really good business partners in some instances and a lot of technological experts, what is the best in class we know we can do. The subtitle of the AI Bill of Rights is about making the principles into practices. The systems should be safe and effective. They shouldn't discriminate against you; there should be privacy. We were basically distilling the common-sense things people were telling us they wanted, and then trying to flag the initial things we know we can do: what we call red teaming, broadly, risk assessment, third-party audits of algorithms, as ways to flag how we might move ahead. So that is what we were trying to accomplish: moving from lofty principles to beginning to operationalize them in research, in industry and in government.

It was kind of a rude awakening, I imagine, when you started on your latest projects and looked at where the technology is right now. I think that is an important question. We need to look at the existential risks, but we have this set of technologies, it is here, it is powerful, and it is flawed. Talk about what you see in the work you have been doing. You've been doing some amazing work; what have you found in terms of the impacts these tools can have today, and might have in a year, when not only are Americans going to the polls but billions of people around the world?

It's a real challenge. Julia Angwin and I started something called the AI Democracy Projects. We are studying, in lots of different ways, the impacts of algorithms on elections, on democracy, and, back to the AI Bill of Rights, on rights. How are they impacting these sets of things? The first study we did was working with election officials from red and blue states, a bipartisan effort, because we realized that so many of the large language models hallucinate, and they do so plausibly, giving information that looks right.
So unless you know specifically about a particular voting district or voting law or issue, it looks fine; it sounds great. So we did what we have been calling an all-day testing event with AI experts, people from civil society, election officials, colleagues from Maricopa County, the secretary of state from Nevada, and we tested the chatbots on basic election information. I do appreciate that, I think, some of the data says maybe 25% of Americans have used chatbots; I'm not suggesting everyone is using them for election information. But to the extent that they are, can they get valid and reliable information? We had the experts rate the outputs, and we found that more than half were inaccurate. Thirteen percent were rated to be biased, and others were rated to be harmful, or inaccurate or incomplete. Inaccuracies could mean telling you the wrong day or the wrong procedures for voting by mail.

Given where the technology is today, am I right in thinking that anyone that runs a chatbot should actually reroute election questions some other way? Can you really trust a chatbot, with today's generative AI, for election information?

Sadly, you cannot. We had teams of people doing the testing, and to me, or to someone even working at an election civil society organization, when it says, go to www.icanvote.org, it looks credible; you go there and think it's good information. These URLs didn't exist in some cases. In some instances, communities were told, we asked about a predominately African American community in Philadelphia, and the result was that the community didn't exist and didn't have a polling site, because it couldn't be found. So, we have challenges with elections. I think a lot of our discourse and conversation around this is about deceptive content, about deliberate disinformation with deepfakes and voice cloning and those sorts of things. That is surely a problem, and we are already seeing it rise up as a problem.
That's what I was going to ask. The problems you outlined in that report were mostly the system, by itself, with no bad actor, giving bad and unreliable misinformation. We have had, for the last couple of election cycles, outside interests, people trying to sow chaos. How powerful are the tools today as engines of disinformation? How worried are you that we are going to see a flood of social media posts, either just poisoning the information landscape or deliberately trying to change things one way or another?

It's already happening, right? We have already seen it: if we think about the New Hampshire Democratic primary, think about the robocall voice cloning of President Biden's voice. It is a problem. We know it has happened in past elections; it happened in the 2016 election for sure. But they didn't have these tools then. The barrier to entry is much lower; whether it's for the laughs or for malign actors, it is much easier to do it now. And you compound that with the tools being much easier to use, having more dissemination vehicles across lots of other kinds of social media, and also having the misinformation piece, which is the kind of death by a thousand cuts of tools that actually just don't work. They don't even work for what they claim to be able to do, which is to provide fairly accurate information about the world. So, it is pretty worrisome. I think we are right to be worried. We can't anticipate all the challenges we will have in the election. I think of it as a kind of cybersecurity of society, because we are going to just have to try to mitigate in place and try to forecast to the extent we can. But it's going to be a constant back and forth dealing with these issues.

Do you have a sense of where the tech companies are on this? My sense is that leading up to the 2020 election, and certainly afterward, there was a lot of focus on election security and integrity and protecting against these things, because they knew that four years earlier there had been such efforts.
My sense is that a lot of the companies have quietly scaled back, the big ones, the Facebooks of the world. That is one thing. And then you have the new AI companies saying: we are very committed, we have four people on our election integrity team. Are you worried that the people controlling the technology aren't devoting enough resources?

Yes. There have been, as we have seen across the last year, voluntary commitments from the companies around AI and elections in particular, and lots of well-meaning discourse about wanting to be responsible actors. But we really just cannot afford to mess this up. Even going back: Julia and her team recently returned to some of the chatbots and tested them with some of the same questions, and they hadn't been fixed. When we released the report, we went to the companies and said, this is what we found. They said, we made promises in December and January, this wouldn't happen, or we fixed it, it's not going to happen again. And it is still a problem. It is a problem for the chatbots, but also for the business model. One of the hopes for AI, if we mitigate the risks and get it right, is that you can build whole worlds on these foundation models. But the foundations are creaky. This is not a foundation on which to build an enterprise business ecosystem, or any kind of knowledge system, if you can't get basic and accurate information.

To the extent that things like election information in large language models, and various AI systems, are being built into Microsoft Office and other products, it's a real challenge.

If we take literally the use of the word foundation: the foundation of the AI ecosystem you're building is creaky and brittle. And it's a real problem for democracy. Certainly, people use the chatbots or AI systems to generate information, but the social media layers are where it gets broadcast.

We are hearing a lot right now about TikTok. I am curious, what do you think?
My sense is there is an existential-threat framing. Just like with AI, there is a bunch of near-term concerns, misinformation, bias, discrimination, wealth concentration, and then there is the existential: robots might kill us all. Today we have to pay attention to both, and we have two separate conversations. It feels like there's something similar in social media: we have one conversation about TikTok and the threat that someday the Chinese might use it to influence us. But are we not in a moment where all sorts of countries are using the American social media we have? How important is it that we regulate what is happening on Facebook, and whatever used to be Twitter, and all those places, in addition to paying attention to TikTok?

I would slightly quibble and say it's not two different conversations. This is why we are having a challenge mitigating it. The challenge we face with TikTok, potentially, is who owns the data and where is it going? Some of the data we are concerned about is being sold by American data broker companies that are not regulated. So the issue that really sits under all of it is data. We talk about AI being dual use or general purpose, but data, data flows and the data ecosystem are the ultimate dual use. And so the data is circulating. If we care about data privacy, and we care about not capitalizing on American data by selling it all over the place, then even accounting for export controls and rules that say you can sell American data to these countries, or people in these countries, but not those countries, that is putting a bandaid on a larger issue, which is that we fundamentally need a federal data privacy regulation. We got close last year. That is not going to solve everything, but it will create a baseline of expectations that would make it far more difficult for some of the concerns that we have, both from malign actors and in the day-to-day, to materialize.
So we have to do all those things.

Would you also ban TikTok?

I think it matters who owns TikTok, and so I think we have to take that quite seriously. I think we should regulate social media companies. I think we should regulate platforms. That can be done, and we still have not done that, and some of the
