Transcripts For CSPAN Senate 20240704 : vimarsana.com

CSPAN Senate July 4, 2024

Because I think you've provided objective, fact-based views on what the dangers and risks are, potentially even human extinction, an existential threat mentioned by many more than just the three of you, experts who know firsthand the potential for harm. But these fears need to be addressed, and I think they can be addressed through many of the suggestions that you are making to us and others as well. I've come to the conclusion that we need some kind of regulatory agency, not just a reactive body, not just a passive rules-of-the-road maker issuing edicts on what guardrails should be, but one actually investing proactively in research so that we develop countermeasures against the kinds of autonomous, out-of-control A.I. scenarios that are a potential danger: an artificial intelligence device that is, in effect, programmed to resist any turning off; a decision by A.I. to begin a nuclear reaction to a nonexistent attack. The White House certainly has recognized the urgency with a historic meeting of the seven major companies, which made voluntary commitments. I commend and thank the President of the United States for recognizing the need to act. But we all know, and you have pointed out in testimony, that these commitments are unspecific and unenforceable. A number of them, on the most serious issues, say only that they will give attention to the problem. All good, but it's only a start. I know the doubters about Congress and about our ability to act. But the urgency here demands action. The future is not science fiction. It's not even the future; it's here and now. A number of you have put the timeline at two years before we see some of the most severe biological dangers. It may be shorter, because the pace of development is not only stunningly fast, it is also accelerating because of the quantity of chips, the speed of chips, and the effectiveness of algorithms. That is the flow of development. We can condemn it. We can regret it. But it is real.
And the White House's principles aptly align with a lot of what we have said among ourselves in Congress, notably in the last hearing we held. We are here now because A.I. is already having a significant impact on our economy, safety, and democracy. The dangers are not just extinction but the loss of jobs, potentially one of the worst nightmares that we have. Each day, these issues are more common, more serious, and more difficult to solve. And we can't repeat the mistakes we made on social media, which was to delay and disregard the danger. So the goal for this hearing is to lay the ground for legislation, to go from general principles to specific recommendations, to use this hearing to write real laws, enforceable laws. In our past two hearings, we heard from panelists that Section 230, the legal shield that protects social media, should not apply to A.I. Based on that feedback, Senator Hawley and I introduced the No Section 230 Immunity for AI Act. Building on our previous hearings, I think there are core standards that we are building bipartisan consensus around, and I welcome hearing from many others on the potential rules: establishing a licensing regime for companies that are engaged in high-risk A.I. development; a testing and auditing regimen by objective third parties or, preferably, by the new entity that we will establish; imposing legal limits on certain uses, related to elections, a danger Senator Klobuchar has raised directly, and related to nuclear warfare, where China apparently agrees that A.I. should not govern the use of nuclear weapons; and requiring transparency about the limits and use of A.I. models, which includes watermarking, labeling, disclosure when A.I. is being used, and data access for researchers. So I appreciate the commitments that were made by Anthropic, OpenAI, and others at the White House related to security testing last week. It shows these goals are achievable.
And they will not stifle innovation, which has to be a goal. We need to be creative about the kind of agency or entity, the body, the administration, the office. I think less important than the language is the enforcement power and the resources invested in it. We are really lucky, very fortunate, to be joined by three true experts today, one of the most distinguished panels I've seen in my time in the United States Congress, which is only about 12 years: the head of one of the leading A.I. companies, which was founded with the goal of developing A.I. that is helpful, honest, and harmless; a researcher whose groundbreaking work led him to be recognized as one of the fathers of A.I.; and a computer science professor whose publications and testimony on the ethics of A.I. have shaped regulatory efforts like the EU AI Act. Welcome to all of you, and thank you so much for being here. I turn to the ranking member, Senator Hawley. Thank you very much, Mister Chairman. Thanks to all our witnesses for being here. I want to start by thanking the chairman, Senator Blumenthal, for his terrific work on this hearing. It's been a privilege to work with him. These have been incredibly substantive hearings, and I'm looking forward to hearing from each of you today. I want to thank his staff for their terrific work; it took a lot of effort to put together hearings of such substance. And I want to say thank you to Senator Blumenthal for being willing to do something about this problem. As he alluded to a moment ago, he and I a few weeks ago introduced the first bipartisan bill to put safeguards around A.I. development, and the first bill to be introduced in the United States Senate that will protect the rights of Americans to vindicate their privacy, their personal safety, and their interests in court against any company that would develop or deploy A.I. This is an absolutely critical, foundational right. You can give Americans paper rights, parchment rights, as our founders said, all you want.
If they can't get into court to enforce them, they don't mean anything. I think it is significant that our first bipartisan effort guarantees every American the right to vindicate their rights, their interests, their privacy, their data protection, their kids' safety in the courts. I look forward to more to come with Senator Blumenthal and the other members I know are interested in this. For my part, I have expressed my own sense of what our priorities ought to be when it comes to legislation, and it's very simple: workers, kids, consumers, and national security. As A.I. develops, we've got to make sure we have safeguards in place that will ensure this new technology is actually good for the American people. I'm confident it will be good for the companies; I have no doubt about that. The biggest companies in the world, who are currently making money hand over fist in this country and benefit from our laws, I know they will be great. Google, Microsoft, Meta, many of whom have invested in the companies we will talk to today, and we will get to that a bit more in just a minute. I'm confident they will do great. What I'm less confident of is that the American people are going to do all right. So I'm less interested in the corporations' profitability; in fact, I'm not interested in that at all. I am interested in protecting the rights of American workers and American families and American consumers against these massive companies that threaten to become a law unto themselves. You want to talk about dystopia? Imagine a world in which A.I. is controlled by one or two or three corporations that are basically governments unto themselves, bigger than the United States government or any foreign entity. Talk about a massive accretion of power from the people to the powerful. That is the true nightmare, and for my money, that is what this body has got to prevent. We want to see technology developed in a way that actually benefits the people, the workers, the kids, the families of this country.
And I think the real question before Congress is: will Congress actually do anything? As Senator Blumenthal put his finger on precisely, look at what this Congress did, or did not do, with regard to the very same companies, the same behemoth companies, when it came to social media. It's all the same players, let's be honest. We are talking about the same people. A.I. is like social media: it's Google again, it's Microsoft, it's Meta, all the same people. And what I notice, in my short time in the Senate, is there's a lot of talk about doing something about Big Tech and absolutely zero movement to actually put meaningful legislation on the floor of the United States Senate and do something about it. I think the real question is: will the Senate actually act? Will the leadership in both parties, both parties, actually be willing to act? We've had a lot of talk, but now is the time for action. If the urgency of this new generative A.I. technology does not make that clear to folks, then you will never be convinced. To me, that really defines the urgency of this moment. Thank you, Mister Chairman. I am going to turn to Senator Klobuchar, in case she has some remarks. Thank you. A woman of action, I hope, Senator Hawley, and someone who has invested a lot of time. I want to thank both of you for doing this. I mostly did want to hear from the witnesses, but I do agree with both Senator Blumenthal and Senator Hawley: this is the moment. Consider the fact that this has been bipartisan so far, in the work that Senator Schumer and Senator Young are doing, the work that is going on in this subcommittee with the two of you, and the work Senator Hawley and I are also engaged in on some of the other issues related to this. I actually think that if we don't act soon we could decay into not just partisanship but inaction. The point Senator Hawley just made is right.
We didn't get ahead of it; the Congress didn't get ahead of Section 230 and the like, things that were done for maybe good reason at the time, and then didn't do anything. Now you've got kids getting addicted to fentanyl that they got online, you've got privacy issues, you've got kids being exposed to content they shouldn't see, you've got small businesses that have been pushed out, and the like. I think we can fix some of that still. But this is certainly a moment to engage. I'm actually really excited about what we can get done, the potential for good here: what we can do to put in guardrails and have an American way of putting things in place, and not just defer to the rest of the world, which is what is starting to happen on some of the other topics I raised. I'm particularly interested, though it's not as much our focus today, in the election side and democracy: making sure that we do not have these fake ads and fake depictions of real people. I don't care what political party people are with; we should give voters the information they need to make a decision, and we should be able to protect our democracy. There is good work being done on that front. So thank you. Let me introduce the witnesses and seize this moment to let you have the floor. We will be joined by Dario Amodei, who is the CEO of Anthropic, an A.I. safety and research company. It's a public benefit corporation dedicated to building reliable A.I. systems that people can rely on and generating research about the opportunities and risks of A.I. Anthropic's A.I. assistant is based on its research into training helpful, honest, and harmless A.I. systems. We will also hear from Yoshua Bengio, who is recognized worldwide as a leading expert in artificial intelligence. He is known for his conceptual and engineering breakthroughs in artificial neural networks, which helped lead to where we are today.
He is a full professor in the Department of Computer Science at the University of Montreal and the founder and scientific director of Mila, the Quebec Artificial Intelligence Institute, one of the largest academic centers in deep learning and one of the three federally funded centers of excellence in A.I. research and innovation in Canada. I'm not going to repeat all the awards and recognition that you've received, because it would probably take the rest of the afternoon. We are also honored to be joined by Stuart Russell. He received his B.A. with first-class honors in physics from Oxford University in 1982 and a Ph.D. in computer science from Stanford in 1986. He joined the faculty of the University of California, Berkeley, where he is a professor and formerly chair of Electrical Engineering and Computer Sciences, the holder of the Smith-Zadeh Chair in Engineering, director of the Center for Human-Compatible AI, and director of the Kavli Center for Ethics, Science, and the Public. He has also served as an adjunct professor of neurological surgery at UC San Francisco. Again, many honors and recognitions, all of you. In accordance with the custom of our committee, I'm going to ask you to stand and take an oath. Do you solemnly swear that the testimony you are about to give is the truth, the whole truth, and nothing but the truth, so help you God? Thank you. Mr. Amodei, we will begin with you. Excuse me. Chairman Blumenthal, Ranking Member Hawley, and members of the committee, thank you for the opportunity to discuss the risks and oversight of A.I. with you. Anthropic is a public benefit corporation that aims to lead by example in developing techniques to make A.I. systems safer and more controllable, and in deploying these safety techniques in state-of-the-art models. Research conducted by Anthropic includes constitutional A.I., a method for training A.I. systems according to explicit principles; early work on adversarial testing of A.I.
systems to uncover bad behavior; and foundational work in A.I. interpretability, a science that tries to understand why A.I. behaves the way it does. This month, after extensive testing, we were proud to launch our A.I. model for U.S. users, putting many of these safety improvements into practice. While we are the first to admit that our measures are still far from perfect, we believe they are an important step forward in a race to the top on safety, and we hope we can inspire other researchers and companies to do even better. A.I. will help our country accelerate progress in medical research and many other areas; as was said in the opening remarks, the benefits are great. I would not have founded Anthropic if I did not believe A.I.'s benefits could outweigh its risks. However, it is critical that we address those risks. My written testimony covers three categories of risk: short-term risks we face right now, such as bias, privacy, and misinformation; medium-term risks related to misuse of A.I. systems as they become better at science and engineering tasks; and long-term risks related to whether models might threaten humanity as they become truly autonomous. These were also mentioned in the opening statements. In these short remarks, I want to focus on the medium-term risks, which present an alarming combination of imminence and severity. Specifically, Anthropic is concerned that A.I. could empower a much larger set of actors to misuse biology. Over the last six months, Anthropic, in collaboration with world-class biosecurity experts, has conducted a study of the potential for A.I. to contribute to the misuse of biology. Today, certain steps in bioweapons production involve knowledge that can't be found on Google or in textbooks and require a high level of specialized expertise; this is one of the things that currently protects us from attacks. We found that today's A.I. tools can fill in some of these steps, albeit incompletely and unreliably. In other words, they are showing the first signs of danger.
However, a straightforward extrapolation from today's systems to those we expect to see in two to three years suggests a substantial risk that A.I. systems will be able to fill in all the missing pieces, enabling many more actors to carry out large-scale biological attacks. We believe this represents a grave threat to U.S. national security. We have instituted mitigations against these risks in our own deployed models and briefed a number of U.S. government officials, all of whom found the results disquieting, and we are piloting a responsible disclosure process with other A.I. companies to share information on this and similar risks. However, private action is not enough. This risk, and many others like it, requires a systemic policy response. We recommend three broad classes of actions. First, the U.S. must secure the A.I. supply chain in order to maintain its lead while keeping these technologies out of the hands of bad actors. That supply chain runs from semiconductor manufacturing equipment to chips, and even includes the security of A.I. models stored on the servers of companies like ours. Second, we recommend a testing and auditing regime for new and more powerful models. Similar to cars or airplanes, the A.I. models of the near future will be powerful machines that possess great utility but can be lethal if designed incorrectly or misused. New A.I. models should have to pass a rigorous battery of safety tests before they can be released to the public at all, including tests by third parties and by national security experts in government. Third, we should recognize that the science of testing and auditing for A.I. systems is in its infancy. It is not currently easy to detect all the bad behaviors an A.I. system is capable of without first broadly deploying it to users, which is what creates the risk. Thus, it is important to fund both measurement and research on measurement, to ensure a testing and auditing regime that is actually effective. Funding this work and the National A.I. Research Resource are two examples of ways to ensure America leads here. The three directions I have described, responsible supply chain policies, testing and auditing, and measurement research, help give America rigorous standards for our own companies without giving up our national lead to adversaries, and it makes these rigorous
