The real risks and real opportunities related to AI. I frame it in those terms because for the last six months we have had a very excitable AI debate, dominated by extreme voices around extreme fears or frankly unrealistic utopian visions about what AI can do for us, and there is a really important middle in between. Those are the nuts and bolts I would like us all to dive into. If there is anyone who feels they have not been keeping up to speed in the debate, do not worry; this technology is transformative and it will be here forever. You have not missed much. Number two, I firmly believe that because this will transform all of our lives, we all have a stake and a voice in this discussion.

I am going to dive in with you. You have said that the most important first step in dealing with AI is understanding and managing the risks. Not everyone agrees. Sam Altman was not sitting there consulting the White House's blueprint for AI when he released his ChatGPT product into the world. How would the White House like to shape this revolution?

Thank you, Ryan. Amazing to be here with you. This topic of AI is an active, urgent area of work at the White House, and I appreciate the chance to kick off this discussion. President Biden has been very clear, and many of you must have heard him talk about how we are at an inflection point in history. He very much talks about AI in that context, as one of the most powerful forces at play today, and the choices we make today, including about AI, are going to change the arc of the next few decades. That is why AI is such a high priority in the work we are doing. Our work starts by recognizing the phenomenal breadth of this technology as the most powerful technological force of our time. We all know what human history tells us about what happens with powerful technologies.
We know that humans will use them for good and for ill, so the approach we have taken at the White House on artificial intelligence is that we absolutely want to seize its benefits, but the way to do that is to start by managing its risks. Because AI is so broad and its applications are vast, I will briefly give you four categories of risk that we think about; we need to untangle this, and I am sure you have heard the cacophonous talk about AI. One is the broad category of risks to safety and security, everything from self-driving cars to cybersecurity to biosecurity concerns. Another is the risk to civil liberties, including bias that can be embedded in algorithms. And then there are risks to jobs and our economy. That starts to give you a sense of how incredibly broad the challenge is with AI.

What you will see from us is ongoing work. The week I arrived to join the White House in October of last year, we released the AI Bill of Rights. I think when you are in choppy waters, with AI moving as rapidly as it is, there is no more important time to be clear about your values. That is the important foundation. You will continue to see many actions. Today we are working closely with the leading AI companies, helping them step up to their responsibility. We are working across agencies in government on everything we can do through executive action to get AI on a good track. We will definitely continue to work with Congress on a bipartisan basis as they start laying out their legislative agenda. And then finally, we are working with our international allies and partners. You will see all of those lines of effort.

I want to step back and say we know we are in a time when every nation in the world is trying to use AI to shape a future that reflects its own core values.
We can all disagree about many things, but one of the things I know we agree upon is that none of us wants to live in a future driven by technology shaped by authoritarian regimes. That is why, at this moment in time, American leadership in the world depends on American leadership in AI. I think that is what we will keep our eyes on as we do our work.

Speaking of values, one American value is opportunity. On the other hand, Google is trying not to rush products out the door. How do you walk the tightrope? How do you be a partner to the White House while making sure you are innovating and not missing out on those opportunities?

We talk about the notion of innovating boldly and responsibly, and doing that together, in a way that is inclusive and brings in a lot of different views. That is challenging: maximizing the benefit of the technology while minimizing the likelihood that it is misused. For us, that breaks down to three categories, many of them paralleling what was talked about before. This will accelerate areas like quantum computing, but also things that make a difference in people's lives, precision agriculture, many more. Many people in computer science have never seen anything like this in their careers. But that has to be balanced with a responsibility agenda, and many of the comments that were made before go exactly to this. Making sure that we get fairness right; we have had a formal fairness program at Google since 2014. How are we thinking about the ways AI will change the future of work? Are we making sure we are staying grounded and factual when misinformation can be challenging? You have heard about machine hallucinations. That is a big research agenda we are working on comprehensively: goal alignment, safety, many other areas. But also security: we have to think about the challenges AI poses for cybersecurity, but also the potential advances in cybersecurity. This draws on zero-trust computing but also adds to threat intelligence.
And now there is the notion of red teaming and adversarial review that we have started to work on throughout the industry. How do we make sure we are fine-tuning these models in a way that minimizes the harms and maximizes the benefits?

If I can just do a follow-up with you there: not all AI models are created equal. There are different levels of risk and different use cases, and people have been afraid about letting some of these things out into the wild. I don't want to get into too technical a debate, but when you release a model openly, anybody can use it without any restrictions, and it can get used in a lot of different ways. How worried are you about some of those?

I think you are on an incredibly important point. A few months ago we would have said that progress in AI is purely dependent on more compute, and now there is this proliferation. When I was a venture capitalist I would have said it is democratizing the technology; when I was in the Defense Department I would have said it is proliferating, and both of those things are true. We want AI to be safe and effective before it is released, whether it is a proprietary model or one anyone can use. I think we should be clear that we actually don't have the tools to know when something is safe and effective. If we can't know, then by definition it is not safe and effective. That is the work we have to do.

All the questions go back to you, because you have the impossible job of figuring this out. For those of you who are not following this closely: these large models cost $100 million or more to train, which is a huge amount of compute. Those are the ones you might be able to regulate, because they are attached to big companies we know. Then there is a proliferation of smaller models, some open source, some not. Figuring out how to ever create the standards, is that fair?

I think that is the dynamic landscape we are in. The notion of case-by-case evaluation will be critical. It is very hard to have a general-purpose rule, but we do have decades, even centuries, of experience regulating transportation and other sectors.
As we start to fine-tune those specific use cases, we can develop benchmarks for evaluation, draw on regulatory expertise, and get to a better outcome that is more fine-grained and more nuanced.

Why do so many people seem to be so afraid of AI, and how do we go about creating the building blocks of trust? It is not going away; we have to find some way to have AI we can trust.

People say there are threats out there. You can see Arnold Schwarzenegger coming down and wreaking havoc. When we look at it, the problem is that AI has been around a long time, and no one is putting this new technology in context. Every time you say "Hey Google" or "Alexa," you are using an AI mechanism. You are using a digital assistant that will help you get information, set your alarm, do whatever you are going to do. We have been told this is some new, emerging, scary Jurassic Park-type technology, as opposed to an iteration of what we have been doing in the past. Yes, there are new threats, but it is not as scary as what people think. I think the media has driven this narrative of saying boo every time you say AI. Be very afraid, it is going to get you, it is going to take away your job, it will do this. And we know, looking at technology and innovation over time, that we actually create jobs with innovation. They are different jobs, and there is a need to have a transformative policy that deals with that. That is what is happening. When you look at the knowledge people have, they don't know what this is. And when you don't know what something is, and you are told it is going to take your job and it will make your baby have two heads, you worry, because you don't have a context in which to judge the honesty of those claims.

I just want to say something about my concern. I will tell you, I talked with a US senator recently.
Their staff put together a clip of him saying something he would never say. Does that energize politicians to get involved in the AI debate? You better believe it. So now they are in this process of trying to understand how that can happen, how you protect against it, and what you do about it. I think there are some really smart, directional things being done in the Senate, being done in the House, being done by the industry. I think we are moving in the right direction, but we have to ratchet down the rhetoric about being afraid and amp up the rhetoric about how exciting this technology is for human civilization.

What do you think of Senator Schumer's plan? He announced on Monday that he wants to do a series of AI insight forums to get senators up to speed so they are more informed before they regulate. Is that a model that can work in broader society? The point is, this is an extremely complicated, transformational part of our lives. Do we need AI insight town halls all across the country?

I think there is a range of ability to understand this, and a range of current understanding of the technology. One of the really good pieces of news is that it is bipartisan. There is the concern that we need to do something, but also the humility to step back and say, I don't know everything I need to know to make the right decisions. They are stepping back; they are going to analyze this. Typically what you would see in Washington would be a big food fight over which committee had jurisdiction, whether it is Commerce, Judiciary, or Treasury. I think what Chuck is trying to do is get out ahead of jurisdictional fights and say this belongs to all of us. Let's elevate our understanding so we can be rational and have a reasonable debate about the level of regulation.
It sounds like you all feel comforted by the bipartisanship, but also by the measured approach Congress is taking to advancing some kind of framework in which to regulate and assess AI. Two things that were bubbling up there were jurisdiction and know-how. Are we going to end up needing a particular AI agency, or are we trying to build up know-how across every agency in order to be able to deal with this?

This is, again, about recognizing how astonishingly broad the applications of this technology are. To me a single AI agency is not a workable model, certainly in the work we are doing in the executive branch. There is not just one action that will get this right; you really have to understand it as a mosaic and look at all of it. I see that very much reflected in what Senator Schumer is doing on the Hill. He has run two briefings so far. One was a general briefing for senators to learn about the technology, and then I was able to participate as one of five people who spoke on national security; this was about one and a half weeks ago. We ended up covering a lot of territory, not just national security. But the thing I really want to say is that it was very bipartisan. We had people from both sides of the aisle, and it was the second time I had been with a large group of senators talking about AI. I have to share that the quality of the questions being asked is on an upward slope. I think that learning process is underway. While we do what we are going to do from the executive branch, we very much want to maintain that good partnership and get to some good bipartisan solutions.

That is virtually unheard of; it is not as if there is a lot of bipartisanship going on these days. I am going to assume you would consider yourself a partner with the U.S. government in exploring AI territory. But not all partnerships are perfect, and there has been a rough ride for big tech the last few years in Washington.
Is there something you could nominate that you wish Congress or the executive branch were doing in this field to make it a more productive partnership?

I appreciate that they are taking the time to get up to speed. There are broad areas here, not just in the United States but internationally: trying to provide more transparency, figuring out what benchmarks make sense in specific areas, having a risk-based approach. You can look at very high-risk sorts of applications, but all of you have been using AI for many years if you use Maps or Gmail, and I think most people would say those are relatively low risk. This comes out of getting people in a room and debating how the trade-offs work, how we draw on important principles: privacy, nondiscrimination, openness, security. How do we get that right? That requires getting experts in the room.

We have seen a range of CEOs express their willingness to embrace regulation. I have not seen them invite your team to look under the hood of their models. Is that something you would like? For someone to say, we will come and check out how this stuff is coded?

I want to step back from that question.

But you will have to answer it. [laughter]

When we use the phrase "regulate AI," we use the words together, but we do not have a model of what we are talking about. The applications are very broad, and a lot of the harms we are concerned about are already illegal: when you use AI to commit fraud, or to accelerate your ability to commit cyber crimes, those are already things that are not OK. There is an important issue, which is that even where laws exist, our ability to regulate and enforce has to keep up as AI changes how people do these things. That is the issue. A very important step in that direction: the Consumer Financial Protection Bureau, the FTC, and the Department of Justice put out a statement reminding people that these things are still illegal, and that if you are using AI to do them, the agencies will be enforcing against it. That is a great example of an essential step.
We will need to do work keeping up with those concerns, with that kind of accelerated malfeasance, being able to spot new forms of problems when you see them, and handling a scale issue we are not ready for. Those are things we are working on right now, important actions that can start getting put in place. The question of what you do about the core technology itself is what people want to talk about, and that is not yet clear.

Again, I want to keep coming back. Heidi, I love your point about Terminator and Jurassic Park. We are living in a time in which there are a lot of science-fiction conversations about AI, and a lot of philosophy conversations; I sometimes feel like I am in a freshman dorm room at midnight. There are marketing conversations. All of those should go on, but to make sensible policies that change outcomes, we are going to stay anchored in what human beings and corporations actually do: what human data are these models being trained on, how do humans and corporations decide to use this technology, and what impact does that have in the real world? If you stay anchored in that and start working through how we mitigate these risks, you get to practical solutions. That has to be the benchmark against which we weigh any regulatory proposal; if it does that, it needs to be considered. The benchmark will always be: did we reduce biosecurity risks? Did we reduce the risk of misinformation?

If I could jump in: in essence, as we externalize these models, we will be doing external red teaming, really sophisticated folks trying to break the systems, which is a collaborative learning exercise. How do we collectively learn from it? What kinds of attacks work well? This has to be a layered system of governance. You have to have companies taking responsibility, security by default and by design, and cross-industry groups to work out what the standards are. Those can be faster and more nimble than what governments can come up with.
And in some cases provide a starting point. You are going to need forms of government regulation, and you will probably need international frameworks to deal with security risks; that work has already started. It is all of the above.

We have been talking about government regulation, but there is a rule-of-law piece of this we have not been talking about. Look at Section 230 of the Communications Decency Act, which says that these platforms, the systems we have created, will be treated like bulletin boards: we will not sue the bulletin board for what is posted on the bulletin board. That has given them a mammoth ability to grow free of any kind of civil liability. That is not true in my opinion