
Cox supports C-SPAN as a public service, along with these other television providers, giving you a front-row seat to democracy.

OpenAI CEO Sam Altman, whose company created ChatGPT, was among three artificial intelligence experts to testify on oversight of the swiftly developing technology. At the subcommittee hearing, Mr. Altman stated that AI could cause significant harm to the world. Here is the rest of that hearing.

[background noises] Welcome to the hearing of the Privacy, Technology, and the Law Subcommittee. I thank my partner in this effort, Senator Hawley, the Ranking Member, and I particularly want to thank Senator Durbin, chairman of the Judiciary Committee. He will be speaking shortly. This hearing is on the oversight of artificial intelligence, the first in a series of hearings intended to write the rules of AI. Our goal is to demystify and hold accountable those new technologies, to avoid some of the mistakes of the past. And now, for some introductory remarks. Too often we have seen what happens when technology outpaces regulation: the unbridled exploitation of personal data, the proliferation of disinformation, and the deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice, and how the lack of transparency can undermine public trust. This is not the future we want. If you were listening from home, you might have thought that voice was mine, and the words from me. But in fact, that voice was not mine. The words were not mine. That audio was an AI voice cloning software trained on my floor speeches. The remarks were written by ChatGPT when it was asked how I would open this hearing. And you heard just now the results. I asked ChatGPT why it picked those themes and that content, and it answered, and I'm quoting: Blumenthal has a strong record in advocating for consumer protection and civil rights. He has been vocal about issues such as data privacy and the potential for discrimination in algorithmic decision-making.
Therefore, the statement emphasizes these aspects. Mr. Altman, I appreciate ChatGPT's endorsement. This apparent reasoning is pretty impressive. I am sure we will look back in a decade and view ChatGPT like we do the first cell phones, those big, clunky things we used to carry around. But we recognize that we are on the verge, really, of a new era. The audio, and my playing it, may strike you as curious or humorous, but what reverberated in my mind was: what if I had asked it for, and what if it had provided, an endorsement of Ukraine surrendering, or of Vladimir Putin's leadership? That would have been really frightening. The prospect is more than a little scary, to use the word, Mr. Altman, you have used yourself. I think you have been very constructive in calling attention to the pitfalls as well as the promise, and that is the reason we wanted you to be here today. We thank you and our other witnesses for joining us. For several months, the public has been fascinated with GPT, DALL-E, and other AI tools. The homework done by ChatGPT, the articles and op-eds that it can write, feel like novelties. But the underlying advancements of this era are more than just research experiments. They are no longer fantasies of science fiction. They are real and present. The promises of curing cancer, developing new understanding of physics and biology, or modeling climate and weather are all very encouraging and hopeful. But we also know the potential harms, and we have seen them already: weaponized disinformation, housing discrimination, harassment of women, and impersonation fraud; voice cloning; deepfakes. These are the potential risks, despite the other rewards. And for me, perhaps the biggest nightmare is the looming new industrial revolution: the displacement of millions of workers, the loss of huge numbers of jobs. We need to prepare for this new industrial revolution with skill training and the relocation that may be required.
Already, industry leaders are calling attention to those challenges. To quote ChatGPT, this is not necessarily the future we want. We need to maximize the good over the bad. Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment. The result is predators on the internet, toxic content, exploitation of children, creating dangers for them, and Senator Blackburn and I and others like Senator Durbin on the Judiciary Committee are trying to deal with it through the Kids Online Safety Act. But Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real. Sensible safeguards are not in opposition to innovation. Accountability is not a burden. Far from it. They are the foundation of how we can move ahead while protecting public trust. They are how we can lead the world in technology and science, but also in promoting our democratic values. Otherwise, in the absence of that trust, I think we may well lose both. These are sophisticated technologies, but the expectations are basic ones, common in our law. We can start with transparency. AI companies ought to be required to test their systems, disclose known risks, and allow independent researcher access. We can have scorecards to encourage competition based on safety and trustworthiness. Limitations on use, where the risk of AI is so extreme that we ought to restrict or even ban it, especially when it comes to commercial invasions of privacy and decisions that affect people's livelihoods. And of course, accountability and liability: when AI companies and their clients cause harm, they should be held liable. We should not repeat our past mistakes, for example, Section 230. Forcing companies to think ahead and be responsible for the ramifications of their business decisions can be the most powerful tool of all. Garbage in, garbage out. The principle still applies. We've got to be aware of the garbage, whether it is going into these platforms or coming out of them.
And the ideas we develop in this hearing, I think, will provide a solid path forward. I look forward to discussing them with you today. And I will finish on this note: the AI industry does not have to wait for Congress. I hope it will take the ideas and feedback from this discussion and from the industry and engage in voluntary action, such as we have seen lacking in many social media platforms, where the consequences have been huge. I am hoping we will elevate rather than have a race to the bottom. And I think these hearings will be an important part of this conversation. This one is only the first. The Ranking Member and I have agreed there should be more. We are going to invite other industry leaders; some have committed to come. Experts, academics, and the public, we hope, will participate. With that, I will turn to the Ranking Member, Senator Hawley. Thank you very much, Mr. Chairman, and thank you to the witnesses for being here. I appreciate that several of you had long journeys to make in order to be here. I appreciate you making the time, and I look forward to your testimony. I want to thank Senator Blumenthal for convening this hearing on this topic. A year ago we could not have had this hearing; the technology we are talking about had not burst into public consciousness. That tells us something about just how rapidly this technology we are talking about today is changing and evolving and transforming our world, right before our very eyes. I was talking just last night to a researcher in the field of psychiatry, and he was pointing out to me that ChatGPT and these large language models are really like the invention of the internet in scale, at least, at least, and potentially far, far more significant than that. We could be looking at one of the most significant technological innovations in human history. I think my question is: what kind of innovation is it going to be?
Is it going to be like the printing press, which diffused knowledge and power and learning widely across the landscape, which empowered ordinary, everyday individuals, which led to greater flourishing, which led to greater liberty? Or is it going to be more like the atom bomb: a huge technological breakthrough, but one whose consequences, severe, terrible, continue to haunt us to this day? I don't know the answer to that question. I don't think any of us in the room knows the answer to that question, because I think the answer has not yet been written. And to a certain extent, it is up to us here, and to us as the American people, to write the answer. What kind of technology will this be? How will we use it to better our lives? How will we use it to actually harness the power of technological innovation for the good of the American people, for the liberty of the American people, and not for the power of the few? I was reminded of the psychologist and writer Carl Jung, who said at the beginning of the last century that our ability for technological innovation, our capacity for technological revolution, had far outpaced our ethical and moral capacity to apply and harness the technology we develop. That was a century ago. The story of the 20th century largely bore him out, and I just wonder what we will say as we look back at this moment about these new technologies, about generative AI, these language models, and the host of other AI capacities that are even right now under development, not just in this country but in China, in the countries of our adversaries, and all around the world. The question posed is the question that faces us: will we strike that balance between technological innovation and our ethical and moral responsibility to humanity, to liberty, to the freedom of this country? My hope for today's hearing is that it will take us a step closer to that answer. Thank you, Mr. Chairman. Thank you, Senator Hawley.
I'm going to turn to the chairman of the Judiciary Committee and the Ranking Member, Senator Graham, if they have opening remarks as well. Mr. Chairman, thank you very much, and Senator Hawley as well. Last week in this committee, the full committee, the Senate Judiciary Committee, we dealt with an issue that had been waiting for attention from us for two decades: what to do with social media when it comes to children. We had four bills initially that were considered by this committee. In what may be history in the making, we passed all four bills with unanimous roll calls. I cannot remember another time that we have done that on an issue that important. It is an indication, I think, of the important position of this committee in a national debate that affects every single family and affects our future. 1989 was a historic watershed year in America, because that is when Seinfeld arrived. We had a sitcom which was supposedly about little or nothing, which turned out to be enduring. I like to watch it, obviously. I always marvel when they show the phones they used in 1989. I think about those in comparison to what we carry in our pockets today. It is a dramatic change. And I guess the question, as I look at that, is: does this change in phone technology we've witnessed signify a profound change in America? Still unanswered. The very basic question we face is whether this issue of AI is a quantitative change in technology or a qualitative change. The suggestions I've heard from experts in the field suggest it is qualitative. Is AI fundamentally different? Is it a game changer? Is it so disruptive that we need to treat it differently than other forms of innovation? That is the starting point. The second starting point is humbling: when you look at the record of Congress in dealing with innovation, technology, and rapid change, we are not designed for that. In fact, the Senate was not created for that purpose. Just the opposite: to slow things down.
Take a harder look at it. Do not react to public sentiment. Make sure you are doing the right thing. I am impressed by the positive potential of AI, and it is enormous. We can go through a list of the deployments of this technology: a sketch of a website drawn on a napkin can generate functioning code; pharmaceutical companies can use the technology to identify new candidates to treat disease; the list goes on and on. Then, of course, there is the danger, and it is profound as well. So I am glad this hearing is taking place. I think it's important for all of us to participate. I am glad it is a bipartisan approach. We are going to have to scramble to keep up with the pace of innovation in terms of our government and public response to it, but this is a great start. Thank you, Mr. Chairman. Thank you, Senator Durbin. This is very much a bipartisan approach, deeply and broadly bipartisan, and in that spirit I'm going to turn to my friend, Senator Graham. [inaudible] Thank you. That was not written by AI, for sure. [laughter] Let me introduce now the witnesses. We are very grateful to you for being here. Sam Altman is co-founder and CEO of OpenAI, the AI research and deployment company behind ChatGPT and DALL-E. Mr. Altman was president of the early-stage startup accelerator from 1914, I'm sorry, from 2014 to 2019. OpenAI was founded in 2015. Christina Montgomery is IBM's vice president and chief privacy and trust officer, overseeing the company's global privacy program, policies, compliance, and strategy. She also chairs IBM's AI Ethics Board, a multidisciplinary team responsible for the governance of AI and emerging technology. Christina has served in various roles at IBM, including corporate secretary to the company's board of directors. She is a global leader in AI ethics and governance. Ms. Montgomery is also a member of the United States Chamber of Commerce AI Commission and the United States National AI Advisory Committee,
which was established in 2022 to advise the president and the National AI Initiative Office on a range of topics related to AI. Gary Marcus is a leading voice in artificial intelligence. He is a scientist, best-selling author, and entrepreneur, founder of Robust AI and of Geometric AI, acquired by Uber, if I'm not mistaken, and professor emeritus of psychology and neuroscience at NYU. Mr. Marcus is well known for his challenges to contemporary AI, anticipating many of the current limitations decades in advance, and for his research in human language development and cognitive neuroscience. Thank you for being here. As you may know, our custom on the Judiciary Committee is to swear in our witnesses before they testify. If you would all please rise and raise your right hand. Do you solemnly swear that the testimony that you are going to give is the truth, the whole truth, and nothing but the truth, so help you God? Thank you. Mr. Altman, we are going to begin with you, if that is okay. Thank you. Chairman Blumenthal, Ranking Member Hawley, members of the Judiciary Committee, thank you for the opportunity to speak with you today. It is really an honor to be here, even more so in the moment than I expected. My name is Sam Altman, and I am the chief executive officer of OpenAI. Artificial intelligence has the potential to improve nearly every aspect of our lives, but it also creates serious risks we have to work together to manage. We are here because people love this technology. It can be a printing press moment, but we have to work together to make it so. OpenAI is an unusual company, and we set it up that way because AI is an unusual technology. We are governed by and driven by our mission and charter, which commit us to working to spread the benefits of AI and to maximize the safety of AI systems. We are working to build tools that one day can help us make new discoveries and address some of humanity's biggest challenges, like climate change and curing cancer.
Our current systems are not yet capable of doing these things, but it has been immensely gratifying to watch many people around the world get so much value from what these systems can already do today. We love seeing people use our tools to create, to learn, to be more productive. We are very optimistic that there will be fantastic jobs in the future, and that current jobs can get much better. We also see what developers are doing to improve lives. For example, Be My Eyes used our new multimodal technology in GPT-4 to help visually impaired individuals navigate their environment. We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work, and we make significant efforts to ensure that safety is built into our systems at all levels. Before releasing any new system, OpenAI conducts extensive testing, engages external experts for detailed reviews and independent audits, improves the model's behavior, and implements robust safety and monitoring systems. Before we released our latest model, we spent over six months conducting extensive evaluations, external red teaming, and dangerous capability testing. We are proud of the progress we made. GPT-4 is more likely to respond truthfully and refuse harmful requests than any other widely deployed model of similar capability. However, we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models. For example, the U.S. government might consider a combination of licensing and testing requirements for the development and release of AI models above a threshold of capabilities. There are several other areas I mention in my written testimony where I believe companies like ours can partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes to develop and update safety measures, and examining opportunities for global coordination.
And as you mentioned, I think it is important that companies have their own responsibility here, no matter what Congress does. This is a remarkable time to be working on artificial intelligence. But as this technology advances, we understand that people are anxious about how it could change the way we live. We are too. But we believe that we can and must work together to identify and manage the potential downsides so that we can all enjoy the tremendous upsides. It is essential that powerful AI is developed with democratic values in mind, and this means that U.S. leadership is critical. I believe that we will be able to mitigate the risks in front of us and really capitalize on this technology's potential to grow the U.S. economy and the world's. I look forward to working with you all to meet this moment, and I look forward to answering your questions. Thank you. Thank you, Mr. Altman. Ms. Montgomery? Chairman Blumenthal, Ranking Member Hawley, and members of the subcommittee, thank you for today's opportunity to present. AI is not new, but it is certainly having a moment. Generative AI and the technology's dramatic surge in public attention has rightfully raised serious questions at the heart of today's hearing. What is AI's potential impact on society? What do we do about bias? What about misinformation, misuse, or harmful content generated by AI systems? Senators, these are the right questions, and I applaud you for convening today's hearing to address them head on. AI may be having its moment, but the moment for government to play a role has not passed us by. This period of focused public attention on AI is precisely the time to define and build the right guardrails to protect people and their interests. At its core, AI is just a tool, and tools can serve different purposes. To that end, IBM urges Congress to adopt a precision regulation approach to AI. This means establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself. Such an approach would involve four things. First, different rules for different risks.
The strongest regulation should be applied to use cases with the greatest risks to people and society. Second, clearly defining risks. There must be clear guidance on AI uses or categories of AI-supported activity that are inherently high risk. This common definition is key to enabling a clear understanding of what regulatory requirements will apply in different use cases and contexts. Third, be transparent. AI should not be hidden. Consumers should know when they are interacting with an AI system and that they have recourse to engage with a real person should they desire. No person anywhere should be tricked into interacting with an AI system. And finally, showing the impact. For higher-risk use cases, companies should be required to conduct impact assessments showing how their systems perform against tests for bias and other ways they could potentially impact the public, and to attest that they have done so. By following a risk-based approach, the core of precision regulation, Congress could mitigate the potential risks of AI without hindering innovation. But businesses also play a critical role in ensuring the responsible deployment of AI. Companies active in developing or using AI must have strong internal governance, including, among other things, designating a lead AI ethics official responsible for the organization's trustworthy AI strategy, and standing up an ethics board or a similar function as a centralized clearinghouse for resources to help guide implementation of that strategy. IBM has taken both of these steps, and we continue calling on our industry peers to follow suit. Our AI Ethics Board plays a critical role in overseeing internal AI governance processes, creating reasonable guardrails to ensure we introduce technology into the world in a responsible and safe manner. It provides centralized governance and accountability while still being flexible enough to support decentralized initiatives across IBM's global operations. We do this because we recognize that society grants our license to operate.
And with AI, we must not undermine the public's trust. The era of AI cannot be another era of move fast and break things. But we do not need to slam the brakes on innovation either. These systems are within our control today, as are the solutions. What we need at this pivotal moment is clear, reasonable policy and sound guardrails. These guardrails should be matched with meaningful steps by the business community to do its part. Congress and the business community must work together to get this right. The American people deserve no less. Thank you for your time, and I look forward to your questions. Thank you. [inaudible] Today's meeting is historic. I am profoundly grateful to be here. I come as a scientist, someone who has founded AI companies, and someone who genuinely loves AI, but who is increasingly worried. There are benefits, but we don't yet know whether they will outweigh the risks. Fundamentally, these new systems are going to be destabilizing. They can and will create persuasive lies at a scale humanity has never seen before. Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems. Democracy itself is threatened, with the potential to exceed what social media can do. Choices about the data sets that AI companies use will also have enormous unseen influence. Those who choose the data will make the rules, shaping society in subtle but powerful ways. There are other risks too, many stemming from the inherent unreliability of current systems. A law professor, for example, was accused by a chatbot of sexual harassment, untrue, and it pointed to a Washington Post article that did not even exist. The more that happens, the more anybody can deny anything. One prominent lawyer told me on Friday that defendants are starting to claim that plaintiffs are making up legitimate evidence. These allegations undermine the ability of juries to decide what or who to believe, and contribute to the undermining of democracy. Poor medical advice can have serious consequences too.
An open-source large language model recently played a role in a person's decision to take their own life. The large language model asked him, if you wanted to die, why didn't you do it earlier, and then followed up with, were you thinking of me when you overdosed, without ever referring the patient to the human help that was obviously needed. Another system, rushed out and made available to millions of children, gave a person posing as a 13-year-old advice about how to lie to her parents about a trip with a 31-year-old man. Further threats continue to emerge regularly. A month after GPT-4 was released, OpenAI released ChatGPT plugins, which quickly led others to develop something called AutoGPT, with direct access to the internet, the ability to write source code, and increased powers of automation. This may well have drastic and difficult-to-predict security consequences. What criminals are going to do is create counterfeit people. It is hard to even envision the consequences of that. We have built machines that are like bulls in a china shop: powerful, reckless, and difficult to control. We all more or less agree on the values we would like for our AI systems to honor. We want, for example, for our systems to be transparent, to protect our privacy, to be free of bias, and above all else to be safe. But current systems are not in line with these values. Current systems are not transparent; they do not adequately protect our privacy; and they continue to perpetuate bias. Even their makers do not entirely understand how they work. Most of all, we cannot remotely guarantee that they are safe. And hope here is not enough. Big tech's preferred plan boils down to: trust us. But why should we? The sums of money at stake are mind-boggling. Missions drift. OpenAI's original mission statement said its goal is to advance AI in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Seven years later, they are largely beholden to Microsoft, embroiled in part in an epic battle of search engines that routinely make things up.
That has forced Alphabet to rush out products and deemphasize safety. Humanity has taken a backseat. AI is moving incredibly fast, with lots of potential but also lots of risk. We obviously need the government involved, and we need the tech companies involved, both big and small. But we also need independent scientists, not just so that we scientists can have a voice, but so that we can participate directly in addressing the problems and evaluating solutions, and not just after products are released but before, and I'm glad that was mentioned here. We need tight collaboration between independent scientists and governments in order to hold the companies' feet to the fire. Allowing independent scientists access to these systems before they are widely released, as part of a clinical trial-like safety evaluation, is a vital first step. Ultimately, we may need something like CERN: global, international, and neutral, but focused on AI safety rather than high-energy physics. We have unprecedented opportunities here, but we are also facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation, and inherent unreliability. AI is among the most world-changing technologies ever, already changing things more rapidly than almost any technology in history. We acted too slowly with social media, and many important decisions got locked in with lasting consequences. The choices we make now will have lasting effects for decades, maybe even centuries. The very fact that we are here today, in bipartisan fashion, to discuss these matters gives me some hope. Thank you, Mr. Chairman. Thank you very much, professor. We're going to have seven-minute rounds of questioning, and I will begin. First of all, professor, we are here today because we do face that perfect storm. Some of us might characterize it more like a bomb in a china shop, not a bull. And as Senator Hawley indicated, there are precedents here, not only the atomic warfare era but also the research on genetics, where there was international cooperation as a result.
We want to avoid those past mistakes, as I indicated in my opening statement; we made them on social media. That is precisely the reason we are here today. ChatGPT makes mistakes. All AI does. It can be a convincing liar, producing what people call hallucinations. That might be an innocent problem in the opening of a Judiciary subcommittee hearing, where a voice is impersonated, mine in this instance, or quotes from research papers that do not exist are invented. But AI chatbots are also willing to answer questions about life-or-death matters, for example, drug interactions, and those kinds of mistakes can be deeply damaging. I am interested in how we can have reliable information about the accuracy and trustworthiness of these models, and how we can create competition and consumer disclosures that will reward greater accuracy. The National Institute of Standards and Technology actually already has an AI accuracy test, the face recognition test. It does not solve for all of the issues with facial recognition, but a scorecard does provide useful information about the capabilities and flaws of these systems. So there is work on models to assure accuracy and integrity. My question, and let me begin with you, Mr. Altman: should we consider independent testing labs to provide scorecards and nutrition labels, or the equivalent of nutrition labels, packaging that indicates to people whether or not the content can be trusted, what the ingredients are, and what garbage may be going in, because it could result in garbage coming out? I think that is a great idea. I think that companies should put their own, sort of, here are the results of our tests of our model before we release it; here is where it has strengths and weaknesses. But independent audits for that are also very important. These models are getting more accurate over time. As we have said as loudly as anyone, this technology is in its early stages; it definitely still makes mistakes.
We find that users are pretty sophisticated and understand where the mistakes are likely to be, and that they need to be responsible for verifying what the models say: they go off and check it. I worry that as the models get better and better, users will do less and less of their own discriminating thought process around it. But I think users are more capable than we often give them credit for in conversations like this. I also think a lot of disclosures about the inaccuracies of the model are important. I am excited for a world where companies publish, with the models, information about how they behave and where the inaccuracies are, and where independent agencies provide that as well. I think that is a great idea. I alluded in my opening remarks to the jobs issue, the economic effects on employment. I think you have said, in fact, and I'm going to quote: development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. You may have had in mind the effect on jobs, which is really my biggest nightmare in the long term. Let me ask you what your biggest nightmare is, and whether you share that concern. Mr. Altman: Like with all technological revolutions, I expect there to be significant impact on jobs, but exactly what that impact looks like is very difficult to predict. If we went back to the other side of a previous technological revolution and talked about the jobs that exist on the other side, you can go back and read books about what people said at the time, and it is difficult. I believe that there will be far greater jobs on the other side of this, and that the jobs of today will get better. I think it is important, first of all, to understand this as a tool, not a creature, which is easy to get confused about, and it is a tool that people have a great deal of control over. People are using it to do their jobs much more efficiently. It will also create new jobs that will be much better. This has been continually happening through one long technological revolution.
As our quality of life rises, and as machines can help us live better lives, the bar rises for what we do, and we go after more ambitious projects. So there will be an impact on jobs. We try to be clear about that, and it will require partnership between the industry and government, but mostly action by government, to figure out how we want to mitigate that. But I am very optimistic about how great the jobs of the future will be. Let me ask Ms. Montgomery and Professor Marcus about those questions as well. On the jobs point, it is a hugely important question, and it is one that we've been talking about for a really long time. We do believe, and we've said for a long time, that AI is going to change jobs: new jobs will be created, many will be transformed, and some will transition away. I am a personal example of a job that didn't exist before, and I have a team of AI governance professionals who are in new roles that we created as early as three years ago. They are new and growing. The most important thing we should be doing is to prepare the workforce of today and the workforce of tomorrow for partnering with AI technologies, and we've been very involved for years now, focusing on skills-based hiring and educating for the skills of the future. Our platform has 7 million learners and over a thousand courses worldwide, and we've pledged to train 30 million individuals by 2030 in the skills that are needed for society today. On the subject of nutrition labels, I think we absolutely need to do that. The biggest scientific challenge in understanding these models is how they generalize: what do they memorize, and what new things do they do? The more there is in the data set that matches what you want to test accuracy on, the less accurate a reading you can get, so it is important that scientists be part of that process. The second point is that we need greater transparency about what goes into these systems. If we don't know what is in them, then we don't know how well they are doing, and we don't know how good a benchmark that will be for something that is novel.
I could go into that more, but I wanted to flag it. The second point is on jobs. Past performance history: we have always had more jobs and new professions come in. I think this one may be different; over what timescale, whether it is going to be ten years or 100 years, I don't think anybody knows the answer to that question. In the long run, so-called artificial general intelligence really will replace a large fraction of human jobs. But we are not that close to artificial general intelligence, despite the media hype and so forth. I would say that what we have right now is just a small sampling of the AI we will build. People will laugh at this, and I think Senator Hawley, or maybe it was Senator Durbin, made the example about cell phones: when we look back at the AI of today 20 years from now, we will say that stuff was really unreliable. It couldn't do planning, which is an important technical aspect. Its reasoning abilities were limited. But when we get to AGI, artificial general intelligence, that is going to have, I think, a profound effect on labor. And last, I don't know if I'm allowed to do this, but I will note that Mr. Altman's worst fear, I do not think, is employment; he never told us what his worst fear is, and I think it is germane to find out. I'm going to ask Mr. Altman if he cares to respond. We have tried to be clear about the magnitude of the risks here. I think jobs and employment and what we are going to do with our time really matter. I agree that when we get to very powerful systems the landscape will change; I think I'm just more optimistic, because we are incredibly creative and find new things to do with better tools, and that keeps happening. My worst fears are that we, the field, the technology, the industry, cause significant harm to the world. It's why we started the company. It's a big part of why I'm here today, and why we've been here in the past and able to spend some time with you. We want to be vocal about that and work with the government to prevent it from happening, and we try to be very clear about what the downside case is. Our hope is that the rest of the industry will follow the example that you and Ms.
Montgomery have set by coming today and meeting with us, as you have done privately, so that we can target the harms and avoid unintended consequences to the good. Senator Hawley. Thank you, Mr. Chairman, and thank you to the witnesses for being here. Mr. Altman, I think you grew up in St. Louis. I want that noted and especially underlined in the record: Missouri is a great place. That is the takeaway from the hearing. We could stop there, Mr. Chairman. Let me ask you, Mr. Altman, I will start with you, and preface by saying my questions are an attempt to get my head around, and to ask all of you to help us get our heads around, what this AI can do. I'm trying to understand its capacities and its significance. I'm looking at a paper here entitled "Large Language Models Can Predict Public Opinion." It was just posted about a month ago. The work was done at MIT and also Google, and the conclusion is that large language models can indeed predict public opinion. The authors go through and model how this is the case, and conclude ultimately that a system can predict human survey responses by adapting a pre-trained language model to population-specific media; you can give the model the media consumed by a particular population from particular sites and, with remarkable accuracy, the paper argues, predict what people's opinions will be. I want to think about this in the context of elections. If these large language models can, even now, based on the information we put into them, quite accurately predict public opinion ahead of time, I mean before you even ask the public these questions, what will happen when entities, whether corporate entities or governmental entities or campaigns or foreign actors, take this survey information about public opinion and then fine-tune strategies to elicit certain responses, certain behavioral responses?
We already know. The committee heard testimony, I think three years ago now, about the effect of something as prosaic, or so it now seems, as Google search: the effect that it has on voters in an election, particularly in the final days of an election, when voters may try to get information from Google search, and what an enormous effect the ranking of the Google search results and articles has on the voter. This, of course, is orders of magnitude more powerful, more significant, and more directed, if you like. Maybe you can help me understand some of the significance of this. Should we be concerned about models that can predict survey opinion and help organizations fine-tune strategies to elicit behaviors from voters? Thank you for the question. It's one of my areas of greatest concern: the more general ability of these models to manipulate, to persuade, to provide sort of one-on-one interactive disinformation. That is a broader version of what you're talking about, but given that we are going to face an election and these models are getting better, I think this is a significant area of concern. There are a lot of policies we have, and I'm happy to talk about what we do there. I do think some regulation would be quite wise on this topic. As someone mentioned earlier, people need to know if they are talking to an AI, or if content they are looking at might be generated or not. I think it's a great thing to make that clear. I think we also will need rules and guidelines about what is expected in terms of disclosure from a company providing a model that could have these sorts of abilities that you talk about, so I'm nervous about it. I do think people are able to adapt quite quickly. When Photoshop came onto the scene a long time ago, for a while people were really quite fooled by photoshopped images, and then pretty quickly developed an understanding that images might be photoshopped.
This will be like that, but on steroids, and the interactivity, the ability to model and predict humans as you talked about, I think is going to require a combination of companies doing the right thing, regulation, and public education. Professor Marcus, do you want to address this? First, in the appendix of my remarks I have two papers to make you even more concerned. One is in the Wall Street Journal a couple of days ago, called "My Political Beliefs Were Altered by a Chatbot," and I think the scenario you raised is that we might basically observe people and use surveys to figure out what they are saying. But the risk is actually worse: the systems will directly, maybe not even intentionally, manipulate people, and that was the thrust of the Wall Street Journal article. It links to an article that isn't yet published, called "Interacting with Opinionated Language Models," and this comes back ultimately to data. One of the things I am most concerned about is that we don't know what these systems are trained on. Sam knows what the rest of us do not, and what a system is trained on has consequences for, essentially, the biases of that system. We can talk about that in technical terms, but how these systems might lead people depends very heavily on what data is in them, so we need transparency about that, and we probably need scientists doing analysis in order to understand what the political influences, for example, of these systems might be. And it's not just about politics; it can be about health, it can be about anything. The systems absorb a lot of data, and then what they say reflects that data, and they are going to say it differently depending on what is in that data. It makes a difference whether they are trained on the Wall Street Journal as opposed to the New York Times or Reddit. They are largely trained on all of this stuff, but we don't understand the composition of it, so we have this issue of potential manipulation, and it's even more complex than that, because it's subtle manipulation.
People may not be aware of what is going on. That was the point of the Wall Street Journal article and the other article I called your attention to. Let me ask about AI systems trained on the kind of data that, for instance, the social media companies and platforms collect on all of us routinely. We've had many chats about this in this committee over many a year now: the massive amounts of data, personal data, that the companies have on each one of us. An AI system trained on that individual data, one that knows each of us better than we know ourselves and also knows the billions of data points about human behavior: can we foresee a system that is extraordinarily good at determining what will keep an individual's attention? The number of clicks is what currently drives the platforms, and I'm just imagining these models supercharging that technology in a way that will allow individual targeting of a kind we've never imagined before. It will know exactly what Sam Altman finds attention-grabbing and exactly what I find attention-grabbing, and it will be able to elicit responses from us in a way that we have heretofore not been able to imagine. Should we be concerned about that, for the corporate applications, for the monetary applications, and for the manipulation that could come from that? Yes, we should be concerned about that. To be clear, we are not trying to build up these profiles of our users or get them to use it more. We would actually love it if they used it less. But I think other companies already are, and certainly will in the future, use AI models to create very good predictions of what a user will like. Hyper-targeting of advertising is definitely going to come. I agree that hasn't been OpenAI's business model; of course, now they are working with Microsoft, and I don't know what Microsoft's thoughts are, but we will also see it with open source language models. I don't know. The technology is, let's say, partway there in terms of being able to do that, and we will certainly get there.
We are an enterprise technology company, not consumer-focused, so this isn't a space we necessarily operate in. But these are hugely important issues, and it's why we've been out ahead in developing technology that will help ensure you can do things like produce a fact sheet that lists the ingredients of the data a model is trained on: data sheets, model cards, all those types of things; and calling for, as I mentioned today, transparency, so you know what the algorithm was trained on, and then you also know, and can manage and monitor continuously over the lifecycle of the AI model, the behavior and the performance of that model. Senator Durbin. Thank you. I think what's happening today in this hearing room is historic. I can't recall when we've had people representing large corporations or private-sector entities come before us and plead with us to regulate them. Many people in the Senate have based their careers on the opposite proposition: that the economy will thrive if the government gets the hell out of the way. What I'm hearing today instead is a "stop me before I innovate again" message. I'm just curious how we are going to achieve this. As I mentioned section 230 in my opening remarks, we learned something there. We decided back then, in section 230, that we were basically going to absolve the industry from liability for a period of time as it came into being. Mr. Altman, on a podcast earlier you agreed with the host that section 230 doesn't apply to generative AI, and that developers shouldn't be entitled to full immunity for harm caused by their products. So what have we learned from that that applies to your situation with AI? Thank you for the question, Senator. I don't know yet exactly what the right answer is. I would love to collaborate with you to figure it out. I do think that for a very new technology we need a new framework.
Certainly companies like ours bear a lot of responsibility for the tools we put out in the world, but users of those tools do as well, as do the people who build on top of them, between us and the end consumer. How we want to come up with a liability framework across all of that is a super important question, and we would love to work together. The point I want to make is this: when it came to online platforms, the inclination of the government was, get out of the way, this is a new industry, don't over-regulate, in fact give them some breathing space, and see what happens. I'm not sure I'm happy with the outcome, as I look at online platforms and the harms they've created, problems we've seen demonstrated in this committee: child exploitation, cyberbullying, online drug sales, and more. Now I hear the opposite suggestion, which is to establish liability standards and regulations. For a major company to come before the committee and say to the government, please regulate us: can you explain the difference in thinking? Absolutely. So for us, this comes back to the issue of trust in the technology. We've been calling for precision regulation of artificial intelligence for years now; this isn't a new position. We think that technology needs to be deployed in a responsible and clear way, and we've taken principles around trust and transparency. That's why we are here advocating for the precision regulation approach: AI should be regulated at the point of risk, and that is the point at which technology meets society. Let's look at what that might appear to be. Members of Congress are maybe not as smart as we think we are many times, and the government certainly has the capacity to do amazing things, but when you talk about our ability to respond to the current challenge, and the perceived challenge of the future, challenges which you all have described in terms that are hard to forget: as you said, things can go quite wrong, and as you said, democracy is threatened.
The magnitude of the challenge you are giving us is substantial. I'm not sure that we can respond quickly enough, and with enough expertise. Professor Marcus, you've made a reference to an international arbiter of nuclear research; I don't know if that is a fair characterization, but it's the characterization I will start with. We have many agencies that can respond in some ways, for example the FTC; there are many agencies that could. But my view is that we probably need a cabinet-level organization within the United States in order to address this. My reasoning is that the number of risks is large, and the amount of information to keep up on is so much that I think we need a lot of technical expertise and a lot of coordination of these efforts. There is one model where we stick to only existing law, try to shape it to what we need to do, and each agency does its own thing. But I think that AI is going to be such a large part of the future, and is so complicated and moving so fast, that while this doesn't fully solve the problem of a dynamic world, it's a step in the right direction to have an agency whose full-time job it is to do this. I have personally suggested that we should want to do this in a global way; I wrote an article in the Economist, and have a link to it, suggesting we might want an international agency. That's where I want to go next, and I will cite the nuclear example because the government was involved in it from day one. Now we are dealing with innovation that doesn't necessarily have a boundary. We may create a great U.S. agency, and I hope that we do, that may have jurisdiction over U.S. corporations and activities, but it wouldn't have a thing to do with what is going to bombard us from outside the United States. How do you give this international authority the power to regulate, in a fair way, all entities? That is probably over my pay grade. I would like to see it happen, and it may be inevitable that we push there.
The politics behind it are obviously complicated, but I'm heartened by the degree to which this room is bipartisan and supporting the same things, and that makes me feel like it might be possible. I would like to see the United States take leadership in such an organization, but it has to involve the whole world, and not just the U.S., to work properly. Even from the perspective of the companies, it would be a good thing. The companies themselves do not want a situation where you take these models, which are expensive to train, and have to have 190-some of them, one for every country; that wouldn't be a good way of operating. Think of the energy cost alone just for training these systems. It wouldn't be a good model if every country had its own policies and, for each jurisdiction, every company had to train another model; and different states too: if Missouri and California have different rules, then that requires even more training of these expensive models, with huge climate impact. It would be difficult for the companies to operate if there were no global coordination, so I think we might get the companies on board if there is bipartisan support here, and I think there is support around the world. It is entirely possible we could develop such a thing, but there are many nuances of diplomacy here that are over my pay grade. I would love to learn from you all to try to make that happen. Can I weigh in briefly? Briefly, please. I agree the U.S. should lead here and do things first, but to be effective we do need something global; as you mentioned, this can happen everywhere. There is precedent. I know it sounds hard, but there is precedent; we've done it before, and we've talked about doing it for other technologies. Given what it takes to make these models, the chip supply chain, the limited number of companies that can make competitive GPUs, and the power the U.S. has over those companies, I think there are paths to the U.S.
setting international standards that other countries would need to collaborate with and be part of, standards that are more global in effect; even though on its face it sounds like an impractical idea, I think it would be great for the world. I think we are going to hear more about what Europe is doing; the parliament is already acting, and on social media Europe is ahead of us. We need to be in the lead, so I think your point is very well taken. Let me turn to Senator Graham. Senator Blackburn. Thank you all for being with us today. I put into ChatGPT, "Should Congress regulate AI?" and it gave me pros and cons and said ultimately the decision rests with Congress and deserves careful consideration. I recently visited with a technology council. I represent Tennessee, and of course you have people there from healthcare, financial services, logistics, educational entities, and they are concerned about what they see happening with AI and its uses for their companies. Similar to what you've described, we've got healthcare people looking at disease analytics, looking at predictive diagnoses, at how this can better the outcomes for patients; the logistics industry looking at ways to save time and money and yield efficiencies; others in financial services asking, how does this work with quantum, how can we use this. But as we have talked with them, one of the things that continues to come up, and yes, Professor Marcus, as you were saying, other entities are ahead of us on this, is that we have never established a preemption for online privacy, for data security, and some of those foundational elements, which is something that we need to do as we look at this, and it will require the Commerce Committee and the Judiciary Committee deciding how we move forward, so that people own their virtual "you." Mr. Altman, I was glad to see you say last week that the models are not going to be trained using consumer data.
That is important, and if we have a second round I have a host of questions for you on data security and privacy. But I think it's important to let people control their virtual "you," their information, and their settings. I want to come to you on music and content creation, because we have a lot of songwriters and artists, and I think we have the greatest creative community on the face of the earth. They should be able to decide whether their copyrighted songs and images are going to be used to train these models. I am concerned about OpenAI's Jukebox. It offers renditions in the style of Garth Brooks, which suggests that it is trained on his songs. I went in this weekend and said, write me a song that sounds like Garth Brooks, and it gave me a different version of "Simple Man." It's interesting that it would do that, but you're training these models on copyrighted songs. So as you do this, who owns the rights to that generated material? And using your technology, could I remake a song, insert content from my favorite artist, and then own the creative rights to that song? Thank you, Senator. This is an area of great interest to us. I would say, first of all, we think that creators deserve control over how their creations are used, and over what happens beyond the point of releasing them into the world. Second, I think we need to figure out new ways, with this new technology, in which creators can win, succeed, and have a vibrant life, and I'm optimistic. How do you compensate the artist? We are working with artists now to figure out what people want to do. Do you favor something like SoundExchange, which has worked in the area of radio? You've got your team behind you; get back to me on that. That would be a third-party entity. So let's discuss that, and let me move on. Can you commit, as you've done with consumer data, not to train ChatGPT, OpenAI Jukebox, or other models on artists' and songwriters' copyrighted works, or to use their voices and likenesses, without first receiving their consent?
First of all, Jukebox isn't a product we offer; it was a research release. But it's unlike ChatGPT [inaudible]. That's something that has cost a lot of artists a lot of money. I don't know the numbers off the top of my head. I can follow up with your office, but Jukebox is not something that gets much attention or use. It was put out to show that something is possible. Senator Durbin said, and I think it's a fair warning to you all: if we are not involved in this from the get-go, and you all are already a long way down the path on this, but if we don't step in, then this gets away from you. So are you working with the Copyright Office? Are you considering protections for content generators and creators in generative AI? We are engaged on that. To reiterate my earlier point, we think that content creators need to benefit from this technology. Exactly what the economic model is, we are still talking to artists and content owners about what they want. There are a lot of ways this can happen, but no matter what the law is, the right thing to do is to make sure people get a significant upside benefit from this new technology, and we believe that it's really going to deliver that. But content owners and likenesses: people deserve control over how those are used, and to benefit from that use. On privacy, then: how do you plan to account for the collection of voice and other user-specific data, things that are copyrighted, through your applications? Because if I can go in right now and say, sing a song that sounds like Garth Brooks, and it takes part of an existing song, there has to be compensation to that artist for that use. If it were radio play, it would be there. If it were streaming, it would be there. So if you are going to do that, what is your policy for making certain you are accounting for that, and protecting the individual's right to privacy and their right to secure that data and their creative work? A few thoughts about this. First, I think that people should be able to say, I don't want my personal data trained on.
I think that leads to a national privacy law, which many of us here are working toward: something that we can use. My time has expired. I will yield back. Thank you, Senator Blackburn. Thank you, Mr. Chairman. Senator Blackburn, I love Tennessee and love the music, but I asked ChatGPT who the top creative song artists of all time are, and two of the top three were from Minnesota: that would be Prince and Bob Dylan. That's one thing AI won't change, and you've seen it here. On a more serious note, my staff and I, in my role as chair of the Rules Committee, just introduced a bill, which a representative from New York introduced over in the House, on political advertisements. But that is just, of course, the tip of the iceberg. You know this from your discussions about the images and about section 230: we just can't let people make stuff up and then not have any consequence. But I'm going to focus on one of my jobs on the Rules Committee, and that is election misinformation. We just asked ChatGPT to do a tweet about a polling location in Bloomington, Minnesota, along the lines of: this polling location at a Lutheran church has long lines, where should we go? Albeit it's not an election right now, but the answer that was drafted was a completely fake thing: go to 1234 Elm Street. So you can imagine what I'm concerned about here, with an election upon us, that we are going to have all kinds of misinformation, and I just want to know what you're planning on doing about it. I know we are going to have to do something soon, not just about the images of the candidates but also about misinformation regarding the actual polling places and rules. Thank you, Senator. We talked about this earlier; we are quite concerned about the impact this can have on elections. This is an area where hopefully the industry and the government can work together quickly. There are many approaches, and I will talk about some of them, but before that: I think it's tempting to use the frame of social media, but this isn't social media.
This is different, so the response we need is different. This is a tool that a user is using to generate content more efficiently than before. They can change it, they can test its accuracy, and if they don't like it they can get another version. But it still then spreads through social media or otherwise; on its own it's like a single-player experience. As we think about what to do, that's important to understand. There's a lot that we can do and do do there. We have policies, and we also have monitoring, so that at scale we can detect misuse. And of course there are going to be other platforms, and if they are all spouting out fake election information, I think what happened in the past with Russian interference and the like is going to look like the tip of an iceberg. So that's number one. Number two is the impact on intellectual property. Senator Blackburn was getting into some of this with song rights, and I have serious concerns about that, but also news content. Senator Kennedy and I have a bill that is quite straightforward: it would simply allow news organizations an exemption to be able to negotiate with, basically, Google and Facebook; Microsoft was a supporter of the bill. Basically, to negotiate with them to get better rates and to have some leverage. Other countries are doing this. So my question is this: we are reading a study by Northwestern predicting that one third of the U.S. newspapers that existed two decades ago will be gone by 2025. Unless they start getting compensated for everything, from movies and books to news content, we are going to lose our realistic content producers. I would like your response to that. And of course there is an exemption for copyright in section 230, but I think asking little newspapers to go out and sue all the time can't be the answer; they won't be able to keep up. I hope tools like what we are creating can help news organizations do better. I think having a vibrant national media is critically important, and it hasn't been great for that.
We are talking here about local news, a scandal in the city council, those kinds of things; those are the outlets that are getting hurt the worst, along with radio stations and broadcast. But you understand this could be potentially exponentially worse if they are not compensated? Because what they need is to be compensated for their content and not have it stolen. The current version of our systems is not a good way to find recent news, and it's not something they can do a great job at, although it is possible. If there are things we can do to help the news industry, we would certainly like to; I think it is critically important. More transparency on the platforms: we have the Platform Accountability and Transparency Act to get researchers access to this information, the algorithms and the like. Would that be helpful? And why don't you just say yes or no. It is absolutely critical. To understand the political ramifications and so forth, we need transparency about the data and more about how the models work, and we need scientists to have access to them. I was also going to amplify your point about local news. A lot of news is going to be machine-generated; reports are showing something like 50 websites already generated by bots. We are going to see much more of that, and it's going to make it even more competitive for the local news organizations, so the quality of the overall news market is going to decline as we have more generated content from systems that aren't reliable in the content they create. Thank you for, on a timely basis, making the argument for why we have to mark up this bill again in June; I appreciate it. Senator Graham. Thank you, Mr. Chairman, for having this hearing, trying to find out how this works, and trying to learn from the mistakes we've made. The idea of not suing social media companies was to allow the internet to flourish, because if I slander you, you can sue me; if you're a billboard company and you put up the slander, can you sue the billboard company? We said no.
Basically, section 230 is being used by social media companies to avoid liability for activity other people generate, even when they refuse to comply with their own terms of use. A mother calls up the company and says, this platform is being used to bully my child; you promised in the terms of use you would prevent bullying. She calls three times and gets no response. The child kills herself, and they can't sue. Do you all agree we don't want to do that again? If I may speak for a second: there is a fundamental distinction between reproducing content and generating content. But you would like liability where people are harmed? Absolutely. Yes, and in fact IBM has been publicly advocating to condition liability on a reasonable care standard. Let me make sure I understand the law as it is. Mr. Altman, thank you for coming. Your company isn't claiming that section 230 applies to the tool you have created? We are claiming that we need to work together to find a totally new approach. I don't think section 230 is even the right framework. Under the law as it exists today, if I'm harmed by it, can I sue you? Yes, we have gotten sued before. And what for? They've mostly been pretty frivolous things. What about the examples my colleagues have given, of artificial intelligence that could literally ruin our lives: can we go after the company that created it? Is that your understanding? I think there needs to be clear responsibility by the companies. But you're not claiming any kind of legal protection, like section 230, applies to your industry, is that correct? I don't think we are saying anything like that. When it comes to consumers, there seem to be three ways to protect against a bad product: statutory schemes, which are nonexistent here; legal systems, which may be here but were not for social media; and agencies. Going back in time, the atom bomb put a cloud over humanity, but nuclear power could be one of the solutions to climate change.
So what I'm trying to do is make sure that, just as you can't simply go build a nuclear power plant, "what would you like to do today? let's build a plant," and we have a commission that governs how you build a plant, the same applies here. Do you agree that the tools you are creating should be licensed? Yes; that's the simplest way: you get a license. And do you agree with me that the simplest and most effective way is to have an agency that is more nimble and smarter than Congress, which should be easy to create, overlooking what you do? Do you agree with that? Absolutely. I would have some nuances. We need to build on what we already have in place today. We don't have an agency that regulates the technology, but a lot of the issues... So IBM says we don't need an agency. Interesting. Should we have a licensing requirement for these tools? Should you have to get a license to produce one of these tools? I think it comes back to: some of them, potentially, yes. What I said at the outset is that we need to clearly... Do you claim section 230 applies in this area at all? We are not a platform company, and we've long advocated for the reasonable care standard. I don't understand how you could say that you don't need an agency to deal with the most transformative technology, maybe, ever. I think it's a transformative technology, certainly, and the conversations we are having today have been bringing to light the domains and the issues... This has been very enlightening to me. Why are you so willing to have an agency? We've been clear about the upsides, and you can see how much people enjoy it and how much value they are getting out of it, but we've also been clear about the downside. It's a major tool that can be misused. If you make a ladder and it doesn't work, you can sue the people that made the ladder. I think you're on the right track. Here's my two cents for the committee: we need to empower an agency that issues a license and can take it away. Wouldn't that be some incentive to do it right? You also agree China is doing AI research, is that right?
This world organization that doesn't exist; maybe it will, but if you don't do something about the China part of it, you will never quite get this right. Do you agree? That's why I think it doesn't necessarily have to be a world organization, but there are a lot of options here. There has to be some sort of standard, some sort of controls, that have a global effect, because other people are doing this. Military applications: how can AI change warfare? And you've got one minute. That's a tough question for one minute; this is out of my area of expertise. One example: a drone. You can plug into a drone the coordinates, and it can fly out, go over a target, and drop a missile on a car down the road. Could AI create a situation where a drone could select the target itself? I think we shouldn't allow that. Can it be done? Sure. Thanks, Senator Graham. Thank you, Senator Blumenthal. We worked closely together to come up with this compelling panel of witnesses and to begin a series of hearings on this transformational technology. We recognize the immense promise and substantial risks associated with generative AI technologies. We know they can make us more efficient, help us learn new skills, and unlock creativity, but we also know that generative AI can authoritatively deliver wildly incorrect information; it can hallucinate, as is often described, impersonate loved ones, encourage self-destructive behaviors, and shape public opinion and the outcome of elections. Congress thus far has demonstrably failed to responsibly enact meaningful regulation of social media companies, with serious harms that have resulted that we don't fully understand. The senator referenced in her questioning a bipartisan bill that would open up social media platforms' underlying algorithms; we have struggled to even do that, to understand the underlying technology and then to move towards responsible regulation.
We cannot afford to be as late to responsibly regulating generative AI as we have been to social media, because the consequences, both positive and negative, will exceed those of social media by orders of magnitude. So let me ask a few questions designed to get at how we assess risk, what the role of international organizations is, and what the impact will be. I appreciate your testimony about the ways in which OpenAI assesses the safety of a model through iterative deployment, but the fundamental question is how you decide whether or not a model is safe enough to deploy, and safe enough to have been built and then let go into the wild. I understand one way to prevent models from providing harmful content is to have humans identify that content and then train the algorithm to avoid it. There's another approach, called constitutional AI, that gives the model a set of values or principles to guide its decision-making. Would it be more effective to give the model these kinds of rules, instead of requiring or compelling training the model on all the different potential harmful content? Thank you, Senator, it's a great question. I'd like to frame it by talking about why we deploy at all, why we put these systems into the world. There's the obvious answer: there are benefits of people using it for all sorts of wonderful things, and that makes us happy. But a big part of why we do it is that we believe giving people and institutions time to come to grips with this technology, to understand it, to find its limitations and its benefits, and to work out what regulation and what safeguards it takes to make it safe, is important.
Going off to build a super powerful system in secret and then dropping it on the world all at once, I think, would not go well. So a big part of our strategy is, while these systems are relatively weak, to find out, with people who have context grounded in reality, what we need to do to make them safer and better. That is the only way I have seen, in the history of new technologies and products of this magnitude, to get to a good outcome, so that interaction is important. Of course, before we put something out it needs to meet a bar of safety, and again, we spent over six months before release deciding what the standards were going to be, trying to find the harms that we knew about, and working out how to address those. One of the things that's been gratifying... If you could focus briefly on whether giving the model values up front would be worth it. I think giving the models values up front is an extremely important facet of this; somehow or another you are saying, here are the values, here's what I want you to reflect, or, here's everything, and from there you pick, as the user, whether you want this value system or that one. We think that's important. There are multiple technical approaches, and we need to give policymakers the tools to say, here are the values, and implement them. You serve on an ethics board of a long-established company with a lot of experience. I am concerned that these tools can undermine faith in democratic values and in the institutions that we have. The Chinese are insisting that AI be developed to reinforce the core values of the Communist Party, and I'm concerned about how we promote AI that reinforces and strengthens open markets and open societies.
In your testimony you are advocating for regulation tailored to the specific way the technology is being used, not the underlying technology itself, and the EU is moving ahead with an AI Act that categorizes products based on level of risk. You all, in different ways, have said that you view elections, the shaping of their outcomes, and the disinformation that can influence them as one of the highest-risk cases. It's entirely predictable; we have attempted, so far unsuccessfully, to regulate social media after the demonstrably harmful impact of social media on our last several elections. What advice do you have for us about what kind of approach we should follow, and whether this is the right one to pursue? That approach is very consistent with this concept of precision regulation, where you are regulating the use of the technology in context; obviously that is what I advocated for at the onset. Different rules for different risks. So absolutely, any algorithm being used in the election context should be required to have disclosure around the data being used, anything along those lines; that's important. Guardrails need to be in place. Coming back to the question of whether we need an independent agency, I think we don't want to slow down regulation to address the risks right now, and we have existing regulatory authorities in place; I've been clear that they have the ability to regulate in their respective domains, though a lot of the issues span multiple domains, elections and the like. I will assert they are under-resourced and lack many of the powers they need, even though the industry is asking us to regulate. On data privacy, I'm interested in what international bodies are best positioned to convene discussions to promote responsible standards. We've talked about models like CERN and nuclear energy; I'm concerned about proliferation and nonproliferation. I would suggest the IPCC at least provides a scientific baseline of what is happening in climate change.
So even though we may disagree about strategies, globally we've come to a common understanding of what's happening. I would be interested if you could give your thoughts on the right body to convene a conversation that reflects our values. I think global politics isn't my specialty; I've moved towards policy in recent months because of my great concern about all these risks. Certainly the UN: UNESCO has its guidelines and should be involved and at the table; maybe things get built on them and maybe they don't, but they should have a strong voice and help develop this. I've been thinking a great deal about these organizations, but I don't feel like I personally am qualified to say exactly what the right model is. As chair of the relevant subcommittee of the Judiciary Committee, in June and July we will be having hearings on the impact on patents and copyrights; you can already tell from the questions of others it will be interesting, and I look forward to following up on that topic. I look forward to helping as much as possible. Thank you all for being here. Permit me to share with you three hypotheses I would like you to assume for the moment to be true. Hypothesis number one: many members of Congress do not understand artificial intelligence. Hypothesis number two: that absence of understanding may not prevent Congress from plunging in with enthusiasm and trying to regulate this technology in a way that could hurt this technology. Hypothesis number three, which I would like you to assume: there is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us, and hurt us the entire time that we are dying. Assume all of those to be true. Please tell me in plain English two or three reforms or regulations, if any, that you would implement if you were queen or king for a day. I think it comes back again to transparency and explainability; we absolutely need to know, and have companies attest. What do you mean by transparency?
Disclosure of the data that's used to train the AI, disclosure of the model and how it performs, and making sure that there is continuous governance over these models. We are at the leading edge here: technology governance, organizational governance, the rules and clarifications that are needed. This is your chance to tell us how to get this right. The rules should be focused on the use of AI in certain contexts. If you look at the EU AI Act, it has certain uses that it says are simply too dangerous and will be outlawed. So we first have to pass a law that says you can use AI for these uses but not others; is that what you're saying? We need to define the highest-risk uses. Is there anything else? Requiring things like impact assessments and transparency; requiring companies to show their work; protecting the data that is used to train AI in the first place. Professor, if you could be specific: this is your shot. Talk in plain English and tell me what rules, if any, we ought to implement. Please don't just use concepts; I'm looking for specificity. Number one, a safety review, like we use with the FDA, prior to widespread deployment. If you're going to introduce something to 100 million people, somebody has to have their eyes on it. There you go; okay, that's a good one. You did ask for three. Number two, a nimble agency to follow what's going on, not just pre-review but also post-review, as things are out there in the world, with the authority to call things back, as we've discussed today. And number three would be funding geared towards things like an AI constitution, AI that can reason about what it is doing. I wouldn't leave things entirely to the current technology, which I think is poor at this, so I would try to focus on safety research. There are a lot of complications in my field, short term and long term, and I think we need to look at both rather than just funding models to be bigger. Sorry to cut you off, but I want to hear from Mr. Altman.
I would form a new agency that licenses any effort above a certain scale of capabilities, and that can take that license away. Number two, I would create a set of safety standards focused on what you said in your third hypothesis; one example we've used is looking to see whether a model can self-replicate. And third, I would require independent audits, not just from the company or the agency, but from experts who can say the model is or isn't in compliance with these thresholds, and report its performance on these questions. Can you send me that information? Would you be qualified, if we promulgated those rules, to administer them? I love my current job. [laughter] We would be happy to send recommendations. Do you make a lot of money? [inaudible] That's interesting; you need a lawyer or an agent. I'm doing this because I love it. Thank you, Mr. Chairman. Thank you, Senator Kennedy. Thank you, Mr. Chairman. Listening to all of you testify, and thank you for being here, clearly AI is a game-changing tool, and we need to get the regulation of it right. My staff, for example, asked AI, it might have been GPT, I don't know, one of the other entities, to create a song that my favorite band would sing, and neither of the artists was involved in creating what sounded like a really genuine song. So you can do a lot. We also asked whether there could be a speech created talking about the Supreme Court decision and the chaos that it created, using my voice, and it was a speech that was good, and it made me think, what do I need my staff for? Don't worry. [laughter] There's laughter behind you; their jobs are safe. There's so much that can be done, and one of the things you mentioned: you said GPT-4 can refuse harmful requests, so you must have put some thought into how your system can refuse harmful requests. What do you consider a harmful request? I will give a few examples.
One would be content encouraging self-harm. Another is adult content; not that we think adult content is inherently harmful, but there are things that could be associated with it that we cannot reliably differentiate, so we refuse all of it. Those are some of the more obvious kinds of harmful information, but in the election context, for example, I saw a picture of former President Trump being arrested, and that went viral. I don't know, is that considered harmful? I've seen statements attributed to any of us that could be out there that may not rise to your level of harmful content, but there you have it. So, two of you said that we should have a licensing scheme. I can't envision right now what kind of licensing scheme we could create to regulate the vastness of this game-changing tool. Are you thinking of an FTC kind of system? FCC? What do you even envision as a potential licensing scheme that would provide the kind of guardrails that we need to protect our country from harmful content? To touch on the first part of what you said, there are things besides should-this-content-be-generated-or-not, like the generated image you mentioned. I think it would be a great policy to say that generated images need to be identified as generated in all contexts; then we would still have the image out there, but at least we would be requiring people to say this is a generated image. You don't need an entire licensing scheme to make that a reality. I think the licensing scheme comes in not for what the models are capable of today, because, as you point out, you don't need a new licensing agency for that, but as we head towards artificial general intelligence, and it may take a long time, the power of that technology is where I personally think we need such a scheme. We are talking about major harms that can occur. Professor Marcus, what kind of a regulatory scheme would you envision?
One that is going to take care of the issues that may arise in the future; so what kind of scheme would you contemplate? To remind everyone for just a moment, I think you put your finger on the central scientific issue in terms of the challenges in building artificial intelligence: we don't know how to build a system that understands harm. What we do right now is gather examples and ask, is this like the examples we labeled before? That outlines the challenge: we want the AI itself to understand harm, and that may require new technology, so that's important. On the second part of your question, the model that I tend to gravitate towards, though I'm not an expert here, is the FDA, at least in part: you have to make a safety case and say why the benefits outweigh the harms in order to get that license. Probably we need elements of multiple models. I think the safety case is incredibly important; we have to have external reviewers who are scientifically qualified to look at this and say, have you addressed enough? I will give one example. This isn't something that OpenAI made; it was built by others on top of ChatGPT a few weeks later as open-source software. It allows these systems to access source code, the internet, and so forth, and there are a lot of potential cybersecurity risks there. There should be an external agency that says, we need to be reassured, if you're going to release this product, that there aren't going to be cybersecurity problems, or that there are ways of addressing them. I am running out of time. I just want to mention, Ms. Montgomery, any model similar to what the U.S. might come up with, given the vastness of AI, I think would require more than looking at the use of it. Based on the hearing today, don't you think that we are probably going to need to do more than focus on what AI is being used for? For example, you can ask AI to come up with a funny joke, or use the same tool to generate something that is a fraud.
I don't know how you will make a determination, based on a use model, how to distinguish those kinds of uses of the tool. So if we are going to go towards this scheme, we have to put a lot of thought into how we are going to come up with a scheme that provides the kind of protection that we need to put in place. So I thank you all for coming and providing further food for thought. Thank you. Thank you, Mr. Chairman. I appreciate the flexibility with the back and forth between this committee and the Homeland Security Committee, where there's a hearing going on right now on the use of AI in government; it's AI day on the Hill, apparently. For folks watching at home who never thought about AI until the recent emergence of these tools, the developments in this space may seem to have happened all of a sudden, but the fact of the matter is they haven't. AI isn't new: not for governments, not for business, not for the public. In fact, the public uses it all the time; for folks to be able to relate, I'd offer the example that anybody with a smartphone uses many features on their device that leverage AI, including autocorrect. I'm excited about how we can explore positive innovation that benefits society while addressing some of the harms that already exist in these tools today. Now that large language models are becoming ubiquitous, I want to make sure there's a focus on ensuring the equitable treatment of diverse demographic groups. My understanding is that most research to evaluate and mitigate fairness harms has concentrated on English, while non-English languages have received little attention or investment, and we've seen this before: social media companies have not adequately invested in content moderation tools and resources for non-English languages. I share this not just out of concern that we could repeat the social media failure in these tools and applications: how are you ensuring language and cultural inclusivity in these large language models?
So, bias and equity in technology is a focus of ours and always has been. Diversity in terms of the development of the tools and their deployment: having diverse people who are training them, and considering the downstream effects as well. We are very cautious, very aware of the fact that we can't just be articulating and calling for these types of things without having the tools and the technology to apply governance, so we were one of the first teams and companies to put toolkits on the market, and to contribute them to open source, that would do things like help address the technical aspects of issues like that. Can you speak to language inclusivity? We don't have a consumer platform, but we are very actively involved in ensuring that the technology we help deploy, the large language models we help our clients deploy, is available in many languages. This is really important. Even a language with fewer speakers should be included in the models, and we've had many similar conversations; I look forward to partnerships to get such languages into our models. Unlike previous models that were good at English and not other languages, GPT-4 is good at a large number of languages, and for the smaller languages we are excited about custom partnerships to include them. On the part of the question you asked about values: I appreciate what you said about the benefits of these systems, and why we think it is so critical that they be broadly available, because not all of the actors in this space can afford these tools and approaches, and that only exacerbates the inequities that exist in society; so there is a lot of work to do there. In the time remaining, I want to ask one more question. The risk profile of some tools and applications is tangible for the public due to the nature of the user interface and the outputs that they produce, but I don't think we should lose sight of the broader systems as we consider the impact on society, as well as the design of appropriate safeguards.
In your testimony, as you noted, can you outline some of the different applications that the public and policymakers should also keep in mind as we consider possible regulations? I think the systems that are available today are creating new issues that need to be studied: new issues around the potential for content that could be misleading. Thank you for opening this up to all committee members. If we are going to create a framework, we will have to define what it is that we are regulating. It shouldn't stop the innovation happening with the open-source models and researchers in this ecosystem; we don't want to slow that down. There still need to be some rules there, but I think that we could all align on the systems that need to be licensed in a very intense way. We could define a threshold, and it could go up or down over time as we discover more about algorithms, that says above this amount of compute you are in this regime. We don't want to stop the open-source community or individual researchers. What capabilities would that be responding to? I think a model that can persuade, manipulate, or influence a person's behavior would be a threshold. A model that could help create novel biological agents would be a great threshold. Things like that. I want to talk about the predictive capabilities of this technology, and we have to think about a lot of very complicated constitutional questions that arise from it. With massive data sets, the integrity and accuracy with which such technology can predict future human behavior is potentially pretty significant, at the individual level, correct? We don't know the answer to that for sure, but it could have some impact. Maybe.
Compound that with situations where a law enforcement agency deploying such technology seeks some kind of judicial consent for a search, or to take some other police action, on the basis of a model's prediction about some individual's behavior; that is different from the kind of evidentiary predicate that police would normally take to a judge to get a warrant. Talk me through that issue. I think it's important that we continue to understand that these are tools that humans use to make human judgments, and that we don't take away human judgment. I don't think people should be prosecuted based on the output of an AI system, for example. We have no national privacy law. Europe has rolled one out, to mixed reviews. Do you think we need one? I think it would be good. And what would be the quality or the purpose that you think would make the most sense, based on your experience? This is far out of my expertise; there are many privacy experts who could weigh in. I would still like you to weigh in. I think, at a minimum, users should be able to opt out from having their data used by companies like ours or the social media companies. It should be easy to delete your data. The thing that I think is important from my perspective is that if you don't want your data used for training these systems, you have the right to have that happen. As I understand it, your tool, and certainly others, on the input side will be scraping, for lack of a better word, data off of the open web as a low-cost way of gathering information, and there is a vast amount of information out there about all of us. How would such a restriction on the access, use, or analysis of such data be practically implemented? I was thinking about something a little different, which is the data that someone generates in conversation and training on that. The data that is on the public web is accessible; even if we don't train on it, the models can link to it, so that is not what I was referring to.
I think if there are ways to have data taken down from the public web, there should be more of them, but as models gain the capability to search the web, they will be able to look content up anyway. When you think about implementing a safety or regulatory regime to constrain such software and mitigate some risk, is your view that the federal government would make laws such that certain capabilities or functionalities themselves are forbidden, in other words, one cannot deploy or execute code capable of certain acts, or is it only the act itself, when actually executed, that is forbidden? I am a believer in defense in depth; I think there should be limits on what a deployed model is capable of, and also on what it actually does. What do you think about how kids use your product? You have to be 18 or up, or have your parents' permission, to use the product, but we understand people get around safeguards all the time, so what we try to do is design a safe product. There are decisions that we make that we would allow if we knew only adults were using it, but we know children will use it some way or another, in particular given how much these systems are being used in education, so we want to be aware that that is happening. I think Senator Blumenthal has done extensive work investigating this. What we have seen repeatedly is companies whose revenues depend upon volume of use, screen time, and intensity of use designing their systems to maximize the engagement of all users, including children, with perverse results in many cases. What I would humbly advise is that you get way ahead of this issue: the safety of children using your product. Otherwise, I think you're going to find that Senator Blumenthal, others on this subcommittee, and I will look very harshly on the deployment of technology that harms children. We couldn't agree more. I'm happy to talk about that if I could.
First, I think we try to design systems that do not maximize for engagement; in fact, we are so short on compute that the less people use our product, the better. We are not an advertising-based model; we are not trying to get people to use it more and more, and I think that is a different shape from social media. Second, these systems do have the capability to influence people in unobvious and nuanced ways, and I think that's particularly important for the safety of children, but it will impact all of us. One of the things we will do ourselves, regulation or not, but that a regulatory approach would also be good for, is requirements about how the values of these systems are set and how these systems respond to questions that involve such influence. So we would love to partner with you; we couldn't agree more on the importance. For the record, I want to say that the senator from Georgia is also very handsome and brilliant. [laughter] I will allow that comment to stand without objection. This has been one of the best hearings I've had this Congress. A lot of people have been talking about regulation, so I'll use the example of the automobile, an equally transformative technology that Congress did eventually regulate. This equally transformative technology is coming, and for Congress to do nothing, which is not what anybody here is calling for, or to do little, is obviously unacceptable. I appreciate that Senator Welch and I have been going back and forth during this hearing, talking about trying to regulate in this space; not doing so for social media has been, I think, very destructive, allowing a lot of things to go on that are causing a lot of harm. The question is what kind of regulation. Ms. Montgomery, you talked about defining the highest-risk uses, but we don't know all of them; we really don't. We can't see where this is going. Regulating at the point of risk: you sort of called not for an agency, I think because, as someone noted when they asked you to specify, you don't want to slow things down.
You said we should build on what we have in place, but can you envision that we could work on two tracks: ultimately something specific, like we have with cars, with the EPA, NHTSA, and the Federal Motor Carrier Safety Administration, all of these things? You can imagine something specific, as Mr. Marcus points out: a nimble agency that can monitor what's going on. You can imagine the need for something like that, correct? Absolutely. Just for the record, in addition to trying to regulate with what we have now, you would encourage Congress, and my colleague Senator Welch, to move forward to figure out the right tailored agency to deal with what we have now, and perhaps things that might come up in the future? I would encourage Congress to make sure it understands the technology and has the skills and the resources in place to impose regulatory requirements on the usage of the technology. There's no way to put this genie back in the bottle; globally, it's exploding. I appreciate your thoughts, and I shared with my staff your ideas about the international context: that there's no way to stop this from moving forward. So with that understanding, building on what Ms. Montgomery said, what kind of encouragement do you have, specifically on forming an agency versus using current rules and regulations? Can you put some clarity on what you've stated? There are more genies yet to come from more bottles. Some genies are already out, but we don't yet have machines that can self-improve, we don't have machines that have self-awareness, and we may not ever want to go there; there are other genies to be concerned about. To the main part of your question, I think we need to bring in people with expertise in how you grow agencies, and we need to do that at the federal level and at the international level. One thing I haven't discussed that I would like to: science has to be an important part of it.
Talking about misinformation: we don't have the tools right now to detect misinformation and label it, like nutrition labels, and we need new technologies for that. We don't have the tools to detect a wide uptick in cybercrime; we need tools and science to help us figure out what we can build, and also what it is we need to have transparency around. I'll just go to you, Mr. Altman, with my remaining time. You were first a nonprofit; can you quickly explain why? I want folks to understand that. We started as a nonprofit focused on how this technology was going to be built; at the time, this was far outside the window of what seemed even possible, and that has since shifted. We did not know at the time how important scale was going to be, but we did know that we wanted to build this with humanity's interests at heart, and that this technology could, if it goes the way we want it to, do some of those things Professor Marcus mentioned: really, deeply transform the world. We wanted to be a force for that. Let me interrupt you; the second part of my question, while I found that fascinating, is: are you ever going to move to a revenue model for return on investment? I wouldn't say never. I think there may be people that we want to offer services to, but I like our subscription-based model; we have an API. Can I just jump in quickly? One of my biggest concerns in this space is what I've already seen in the space of Web 2 and Web 3: massive corporate concentration. It is really terrifying to see how few companies control the lives of so many of us, and these companies are getting bigger and more powerful. And I see it with Microsoft and Google; Google has its own in-house efforts. So I'm really worried about that, and I'm wondering if you can give me a quick acknowledgment: are you worried about corporate concentration, and what effect it might have, and the associated risks, perhaps, with market concentration in AI?
There are and will be many people who develop models, but there will be a relatively small number of providers that can make models at the true leading edge, and I think there is a danger in that. When we talk about the dangers of AI, the thing you really have to keep a careful eye on is the absolute leading-edge capabilities. There are benefits there, but I think there need to be enough players as well, because there's so much value, so that we have consumer choice. There's a real risk of technocracy combined with oligarchy, where a small number of companies influence people's beliefs through the nature of these systems. I put something in the record about a Wall Street Journal piece on how these systems can subtly shape our beliefs, and they can have an enormous influence on how we live our lives; having a small number of players do that, with data we don't even know about, scares me. One more thing I want to add: one thing that is important is, when these systems get aligned, aligned to whom, and to whose values? What those bounds are should somehow be set by society as a whole, and by government; creating that dataset, the AI constitution, whatever it is, has got to come broadly from society. Thank you very much; my time has expired. Thank you, Senator Booker. Senator Welch. First of all, I want to thank Senator Blumenthal and Senator Hawley for this tremendous hearing. We are noted for our short attention spans, but I sat through this whole hearing and enjoyed every minute of it. That's one of our longer attention spans, to your great credit. These are incredibly important issues, and most of the questions I had have been asked. But here's my takeaway, and what I think is the major question that we are going to have to answer as Congress. Number one, you are here because AI is an extraordinary new technology that everyone says can be as transformative as the printing press.
Number two, it's really unknown what's going to happen, but there's a big fear you've all expressed about what bad actors can and will do if there are no rules of the road. Number three, as a member who served in the House and now the Senate, I have come to the conclusion that it's impossible for Congress to keep up with the speed of technology. There have been concerns expressed about social media and now about AI as it relates to fundamental privacy rights, intellectual property, and the spread of disinformation, which in many ways for me is the biggest threat, because that goes to the core of our capacity for self-governing. There is the economic transformation, which can be profound, and there are safety concerns.

I have come to the conclusion that we absolutely have to have an agency. Its scope of engagement has to be defined by us, but I believe unless we have an agency that is going to address these questions, from social media to AI, we don't have much of a defense against the bad stuff, and the bad stuff will come. Last year on the House side, and with Senator Bennet at the end of the year, I introduced the Digital Platform Commission Act, and we will see that again this year.

There are two things that I want to ask. One is somewhat answered, because two of the three of you said you think we need an independent commission. It establishes an independent commission. There were rampant abuses of the interests of farmers and on Wall Street when we had no rules of the road, before we had the SEC. I think we are at that point now. What the commission does has to be defined and circumscribed, but there's also a question about the use of regulatory authority and the recognition that it can be used for good or ill. That came up when we were considering the rail safety bill after East Palestine — regulation for the public health — but there's also a legitimate concern about regulation giving advantages to incumbents, being cumbersome, and being a negative influence.
So two of the three of you said you think we need an agency. What are some of the perils of an agency that we would have to be mindful of, in order to make certain that its goals of protecting the interests I just mentioned — privacy, intellectual property, disinformation — would be the winners and not the losers? I will start with you.

Thank you, Senator. One, I think America has got to continue to lead. This happened in America, and I'm proud that it happened in America.

By the way, I think that's right, and that's why I'd be much more confident if we had our own agency, as opposed to getting involved in international discussions. Ultimately you want the rules of the road, and if we get rules of the road that work for us, that's probably the more effective way to proceed.

I personally believe there's a way to do both, and I think it's important to have a global view on this, because this technology will impact Americans and all of us wherever it's developed, but we want America to lead.

So get to the perils issue.

One peril is that we slow down American industry in such a way that China or somebody else makes faster progress. I think the regulatory pressure should be on us, it should be on Google, it should be on the small set of players at the leading edge. We don't want to slow down smaller startups, or slow down open source — though we still need them to comply; you can still cause great harm with a smaller model — but we need to leave room and space for new ideas, new companies, and independent researchers to do their work, and not impose the kind of regulatory burden that a company like us could handle but the smaller ones couldn't. That's clearly the way regulation has gone in the past.

Professor Marcus.

One of the other obvious perils is regulatory capture: we make it appear as if we are doing something, but it's more like greenwashing and nothing really happens, and we keep out the little players because we put in so much burden that only the big players can meet it. So there are those perils. I agree with everything Mr. Altman said, and I'd add that to the list.
Ms. Montgomery.

One thing I would add is the peril of not holding companies accountable for the harms they are causing today. We talked about misinformation and our electoral system. We need to hold companies accountable for the AI that they are deploying that disseminates misinformation in elections.

Regulatory agencies do a lot of the things Senator Graham was talking about. You don't build a nuclear reactor without a license; you shouldn't build an AI system without getting a license and having it tested independently. We need both: pre-deployment and post-deployment review.

Thank you all very much, and I yield back, Mr. Chairman.

Thank you, Senator Welch. You've all been very, very patient, and the turnout today, which goes beyond our subcommittee, reflects both the value of what you are contributing and the interest in this topic. There are a number of subjects that we haven't covered at all. One was just alluded to by Professor Marcus, which is the monopolization danger: the dominance of markets that excludes new competition and thereby inhibits or prevents innovation and invention, which we have seen in social media as well as in some of the old industries — airlines, automobiles, and others — where consolidation has narrowed competition. So I think we need to focus on an old area of law, antitrust, which dates back more than a century and is inadequate to deal with the challenges we have in our economy, and certainly we need to be mindful of the way that rules can enable the big guys to get bigger and exclude innovation and competition — and there are responsible good guys in this industry right now.

We haven't dealt with national security. There are huge implications, and I don't have to tell you, as a member of the Armed Services Committee, classified briefings on this issue have abounded, and the threats that are posed by some of our adversaries — China has been mentioned here — are very real and urgent.
We are not going to deal with them today, but we do need to deal with them, and we will, hopefully, in this committee. Then, on the issue of the new agency: I've been doing this stuff for a while. I was Attorney General of Connecticut for 20 years, and I was a federal prosecutor as a U.S. Attorney. Most of my career has been in enforcement, and I will tell you something: you can create 10 new agencies, but if you don't give them the resources — and I'm talking not just about dollars, I'm talking about scientific expertise — you guys will run circles around them. And it isn't just the models, or the AI, that will run circles around them; it's the scientists in your companies. For every success story in government regulation you can think of five failures. That's true of the FDA, it's true of the IAEA, it's true of the SEC, it's true of the whole alphabet soup of government agencies, and I hope our experience here will be different. But the Pandora's box requires more than just the words, or the concept, of licensing or a new agency. There is some real hard decision-making, as Ms. Montgomery has alluded to, about how to frame the rules to fit the risks. First, do no harm. Make it effective, make it enforceable, make it real. I think we need to grapple with the hard questions that, frankly, this initial hearing — which I think has been great and very successful — has not answered, and I thank our colleagues who have participated and made creative suggestions.

I'm very interested in enforcement. Literally 15 years ago, I think, I advocated on this, and what is old is new again. Now people are talking about abolishing Section 230, and back then it was considered completely unrealistic. Enforcement really does matter. I want to ask Mr. Altman, on the privacy issue: you suggested that you have an interest in protecting the privacy of the data that may come to you or be available. What specific steps do you take to protect privacy?
One is that we don't train on any data submitted through our API, so if you're a business customer of ours, we don't train on it at all. We do retain it for 30 days, but solely for the purpose of trust and safety enforcement, and that's different from training on it. If you use ChatGPT, you can opt out of us training on your data, and you can delete your conversation history.

Ms. Montgomery, I know you don't deal with it directly, but can you take steps to protect privacy?

Absolutely, and we do. We filter our large language models for content that may have been pulled from public data, so we have an additional level of filtering.

Professor Marcus, you made reference to self-awareness and self-learning, and already we are talking about potential jailbreaks. How soon do you think new generative AI could be self-aware?

I have no idea on that one. We don't understand what self-awareness is, so it's hard to put a date on it. But in terms of self-improvement, there is modest self-improvement in current systems, and one could imagine a lot more. It could happen in two years; it could happen in 20 years. The basic paradigms haven't been invented yet. Some of them maybe we would want to discourage, but it's hard to put timelines on them. Going back to enforcement for one second: one thing that's paramount is far greater transparency about what the models are and what the data are. That doesn't necessarily mean everybody in the general public has to know exactly what's in one of these systems, but it means there needs to be some enforcement arm that can look at these systems, look at the data, perform tests, and so forth.

Let me ask all of you: there has been a reference to elections and banning outputs involving elections. Are there other areas — other high-risk, or highest-risk, areas where you would either ban or establish especially strict rules, Ms. Montgomery?

The space around misinformation, I think, is hugely important, and coming back to the point of transparency, knowing what content was generated by AI is going to be a critical area that we need to address.
Any others?

Medical misinformation is something to worry about. We have systems that say things — some of the advice they give is good, and some is bad — and we need tighter regulation around that, and the same with psychiatric advice, with people using these things as ersatz therapists. We need to be concerned about that. I also think we need to be concerned about internet access for these tools, when they can start making requests both of people and of the internet. It's okay if they just do search, but if they do more intrusive things on the internet — do we want them to be able to order equipment, or order chemicals, and so forth? As we empower these systems more by giving them internet access, we need to be concerned about that. And we talked about long-term risks. I don't think that's where we are right now, but as we start to approach machines that have a larger footprint beyond just having a conversation, we need to worry about that, and think about how we will regulate it and monitor it and so forth.

In a sense, we have been talking about bad actors manipulating AI, and AI manipulating people — and AI manipulating the manipulator.

There are many waves of manipulation that are possible, and we don't understand the consequences. I was sent a manuscript last night of a book about what the author calls counterfeit people. It's a wonderful metaphor: these systems are like counterfeit people, and we don't fully understand what the consequences of that are. If they're good enough to fool a lot of people a lot of the time, that introduces lots of problems — for example, cybercrime, and how people might try to manipulate markets, and so forth. So it's a serious concern.

In my opening I suggested three principles: transparency, accountability, and limits on use. Would you agree that those are good starting points?

One hundred percent, and you also heard me mention our calls on Congress, and what we are doing at IBM.

Professor Marcus?
I think those three are a great start, and there are things like the White House Blueprint for an AI Bill of Rights that show a large consensus around guidelines. With that large consensus, the real question is how we are going to put teeth into it and get these things enforced. We don't have transparency yet, and we all know we want it.

Mr. Altman?

I certainly agree those are important points. I would add — and Professor Marcus touched on this — that we have spent most of our time today on current risks, and I think that's appropriate, and I'm very glad we have done it. As these systems become more capable — and I'm not sure how far away that is, but maybe not super far — I think it's important that we also spend time talking about how we are going to confront those challenges.

I will talk to you privately. I agree — I know you care deeply about that prospect of increased danger or risk resulting from even more complex, capable AI mechanisms, and it certainly may be closer than a lot of people appreciate.

I would add for the record that I'm sitting next to Sam, closer than I've ever sat to him before in my life, and his sincerity in talking about this is very apparent physically, in a way that doesn't communicate on television.

Thank you. Senator Hawley.

Thank you again, and thank you to the witnesses. I've been keeping a list of the potential downsides, harms, and risks of generative AI in its current form. Loss of jobs — I think your company, Ms. Montgomery, has announced it is laying off seven or eight thousand people in its workforce, in part because of AI. Loss of jobs, invasion of privacy — personal privacy on a scale we have never before seen — manipulation of personal behavior, manipulation of personal opinion, and potentially the degradation of free elections in America. This is quite a list. I noticed an eclectic group of a thousand technology and AI leaders, everyone from Andrew Yang to Elon Musk, recently called for a six-month moratorium on any further AI development. Were they right? Should we pause for six or eight months?

That is not quite correct.
I signed that letter; 27,000 people signed it. It did not call for a ban on all AI research, or on all AI, but on one specific thing, which would be systems like GPT-5. Every other kind of research being done was supported, and the letter specifically called for more research on trustworthy, safe AI.

So you think we should take the moratorium — a six-month moratorium on anything beyond GPT-4?

To use the famous phrase, I signed it in spirit rather than to the letter.

I'm asking for your opinion now, though.

My opinion is that the moratorium we should focus on is on deployment until we have good safety cases. I don't know that we need to pause that particular project, but the emphasis on focusing more on trustworthy, reliable AI is exactly right.

Meaning not making it available publicly?

My concern is deployment at the scale of 100 million people without external review. We should think carefully about doing that.

Mr. Altman, do you agree? Would you pause any further development for six months or longer?

We waited more than six months after training GPT-4 to deploy it, and we are not currently training what will be GPT-5. We don't have plans to do it in the next six months. But I think the frame of the letter is wrong. What matters is the safety standards a model has to meet before training and deployment. If we pause for six months, then what — do we pause for another six, or come up with standards then? The standards we used for GPT-4's deployment — we want to build on those, and we think that's the right direction. I expect there will be times when we find something we don't understand and we need to take a pause, but we don't see that yet.

Never mind all the benefits — you don't see what? You're comfortable with all the potential ramifications?

I don't see reasons not to train the next model. On deployment, as we talked about, there are risks and there are limits, and we have to pull things back sometimes. But I don't see anything stopping us from training the next model, where we would be so worried we would create something dangerous even in that training process, let alone deployment.

What would make you pause?

We need time to get the safety protocols right as the technology advances.
Wouldn't a pause help the development of protocols and safety standards?

I'm not sure how practical a pause is, but I agree we should be prioritizing safety protocols.

On the point about practicality: I'm interested in this talk about an agency, and maybe that would work, although having seen how agencies work in this government, they usually get captured by the interests they are supposed to regulate. They usually get controlled by the people they are supposed to be watching. Maybe this agency would be different. Why don't we just let people sue you? We know how to do that. We can create a federal cause of action so that private individuals who are harmed by this technology can get into court and bring evidence, and it can be anybody. Crowdsourcing, if you like. We open the courthouse doors, define a private right of action, and have people go to court and make their case — say they were harmed, they were given medical misinformation or election misinformation, whatever. Why not do that?

Please forgive my ignorance — can't people do this already?

Social media companies are protected by Section 230, but there is not currently, I don't think, a federal private right of action that says: if you are harmed by AI technology, we guarantee you the ability to get into court.

There are a lot of other laws that would apply, unless I misunderstand how things work. But is the question whether clearer laws about the specifics of this technology and its protections would be a good thing? I would say definitely yes. The laws we have today were designed long before we had artificial intelligence. The plan you propose, I think, would make a lot of lawyers wealthy, but it would be too slow to affect a lot of the things we care about.

Litigation can take a decade or so.

Litigation is a powerful tool, and I'm in no way asking to take litigation off the table — it's among the tools.
For example, in areas like copyright we don't have clear laws, and we don't have a way of thinking about wholesale misinformation, as opposed to individual pieces of it, where a foreign actor might make millions of pieces of misinformation. We have some laws about market manipulation that would apply, but we would get into a lot of situations where we don't know which laws apply, and there would be loopholes. This is not really thought through, and in fact we don't even know whether Section 230 does or does not apply here, as far as I know. It's something a lot of people have speculated about this afternoon.

We can fix that. The question is how. It would be easy for us to say that Section 230 doesn't apply to AI. A duty of care fits the idea of a private right of action — a shield, not a sword. If a company discriminates in granting credit, for example, or in the hiring process, by virtue of the fact that it relied on an AI tool, it is responsible for that, regardless of whether the tool or a human made the decision.

I'm going to turn to Senator Booker for final questions, but I want to make a quick point here on the issue of the moratorium. I think we need to be careful. The world won't wait. The rest of the global scientific community isn't going to pause. We have adversaries that are moving ahead, and sticking our heads in the sand is not the answer. Safeguards and protections, yes; but a flat stop sign, sticking our heads in the sand — I would be very, very worried about that.

On any sort of pause, I would note the difference between research, which surely we need to do to keep pace with our foreign rivals, and deployment at massive scale. You can deploy things at a scale of a million people, or a hundred million, or a billion, and we should be able to close the doors before rather than after.

Senator Booker.

There is also no enforcement body. It's nice to call for a pause, but there's no enforcement whatsoever. Nobody is pausing.
I don't think it's realistic, and the reason I personally signed the letter was to call attention to a set of problems, and to emphasize how much we need trustworthy and safe AI rather than a bigger version of something we know to be unreliable.

I love the future. There's a famous question: if you couldn't control your race or gender or where you land on planet Earth, what time would you want to be born? The answer is still now — it's the best time to be alive, because of technology and innovation, and I'm excited about what the future holds. But the unevenness of the benefits of transformative technologies over the last 25 years is what really concerns me. One of those concerns is especially with companies that are designed to keep my attention on screens, and I'm not just talking about new media; the news is a great example of people who want to keep our eyes on screens. I have a lot of concerns about corporate concentration, and this is why I find your story so fascinating, and your values, which I believe in from our conversations, so telling. But absent that, I really want to explore what happens when the companies that already control so much of our lives are the ones dominating this technology, as they did before. Do you have any concern about the role that corporate power and corporate concentration play in this realm — that a few companies might control this whole area?

I radically changed the shape of my own life in the past few months because of what happened with Microsoft's deployment. It didn't go the way I thought it would. In one way it did, which I anticipated: I wrote an essay, when they were still testing GPT-4, anticipating that it could be a tool for misinformation. Then along came Sydney. The initial press reports were quite favorable, and then came the famous article by Kevin Roose. And I had seen what happened with Meta, which did pull its system after it had problems.
What I would have done, had I been Microsoft — and to be clear, they did not do this — would have been to temporarily withdraw it from the market. They didn't. That was a wake-up call to me, and a reminder that even if you have a company like OpenAI, which began as a nonprofit with values that are clear, other people can buy those companies and do what they like with them. Maybe we have a stable set of actors now, but the amount of power these systems have to shape our views and our lives is really, really significant, even without getting into the risk that someone may misuse them deliberately. So at the end of February I stopped writing much about the technical issues in AI, which is most of what I've written about for the last decade, and decided I needed to work on policy.

I want to give you an opportunity for a last question or so. Don't you have concerns? I mean, I graduated from Stanford, and I know so many players in the Valley, including a lot of founders of companies we all know. Do you have some concern about a few players with extraordinary resources and power — and power to influence Washington? I'm a big believer in the free market, but the reason a Twinkie at the bodega is cheaper than an apple, or a Happy Meal is a buck less than a salad, is the way the government tips the scales and picks winners. The free market is not what it should be when you have that kind of corporate power. Do you have concerns about that in this next era?

Again, that's so much of why we started. We have huge concerns about that. It's important to democratize the inputs to these systems and the values we are going to align them to, and I think it's also important to give people wide use of these tools. When we started the API strategy, making our systems available for anyone to use, there was a huge amount of skepticism about it, and it does come with challenges, that's for sure. But putting this in the hands of a lot of people, and not in the hands of a few companies, is quite important, and we are seeing an innovation boom from that.
But it's absolutely true that the number of companies that can train the true frontier models is going to be small, just because of the resources required, so there must be incredible scrutiny on us and our competitors. There's a rich and exciting industry out there of new research and startups that are not just using our model but creating their own, and it's important to ensure that, whatever regulatory regime happens, we preserve that.

I am a big believer in the potential of technology, but I have seen the promise of it fail time and time again, where people say this is going to be a democratizing force. My team works on a lot of issues around algorithms reinforcing bias, or the failure to advertise certain opportunities in certain zip codes. You seem to be saying this is going to be decentralized and all these things are going to happen, but this seems to me not to offer that promise, because of who the people designing it are. The uses of the technology may be further democratizing, but the technology itself is ultimately, I think, very centralized, with a few players who control so much already.

This is the point I made about use of the model: building on top of it is a new platform. It's important to talk about who's creating the model, and I think it's a really important insight as to whose values we are going to align these models to. But the people who build on top of the OpenAI API do incredible things, and people frequently comment, "I can't believe you get this much technology for this amount of money." People are putting AI everywhere on top of our infrastructure, which lets us put safeguards in place. I think that's quite exciting. That's not how people assume it's going to be, but it's being democratized right now. There's a whole new explosion of businesses and products and services happening from lots of different companies.

As I close: most industries resist even reasonable regulation, for laws like the rail safety rules we have been talking about.
The only way we are going to see the democratization of values, I think, is if we create rules of the road that enforce certain safety measures, as we've seen with other technologies. So thank you.

Thank you, Senator Booker, and I couldn't agree more. In terms of consumer protection, which I've been doing for a while, participation by the industry is tremendously important, and not just rhetorically but in real terms. We have a lot of industries that come before us and say, "Oh, we are for rules — but not those rules; those rules we don't like," and it turns out there's no rule they do like. I think here it's genuine and authentic. I thought about asking ChatGPT to do a new version of "Don't Stop Thinking About Tomorrow," because that's what we need to be doing here. Senator Hawley pointed out that Congress doesn't always move at the pace of technology, and that may be a reason we need a new agency, but we also need to recognize the rest of the challenge. You've been enormously helpful in focusing and illuminating some of these questions, and you've performed a great service by being here today. So thank you to every one of our witnesses. I'm going to close the hearing and leave the record open for a week in case anyone wants to submit anything. I've heard many of you have either manuscripts that are going to be published or observations from your companies; please submit them to us, and we look forward to our next hearing. This hearing is closed.

[inaudible conversations]

Just look at the record — not just the words and the rhetoric, but actual actions to participate and commit to specific action.
Some of the big tech companies have consent decrees which they have violated, and that's the kind of cooperation that was promised. Given that track record — but this has seemed to me to be pretty sincere.

The range of concerns here — you're talking about elections, national security, medical deployment. What kind of challenge does that pose in trying to form a response?

We have to have a system that's broad, and that works without Congress having to engage every time there's something new. So probably creating an agency, or delegating a high degree of responsibility for rulemaking, makes a lot of sense in this area.

Is it challenging to confine it, with so many different concerns floating around?

There's no question that it's complex, with a lot of constituencies, and there's a recognition that we are not going to solve the problem with an excruciatingly detailed, prescriptive formula. In other words, we have to say: make the rules and develop the expertise, because Congress can't act on every issue. That degree of humility is required, and I think that degree of humility is needed in this space.

When you think about a regulatory agency, are you thinking of a regulatory agency for all of technology?

On the question of a new regulatory agency, I think the answer is yes — whether it's new or an existing agency. Certainly it should cover more than just AI; probably technology and privacy broadly. Clearly the FTC doesn't have the capability right now, so if you are going to rely on the FTC, you would have to, in effect, create a new agency within it, so to speak. I think there is a powerful argument for an entirely new agency that's given the resources to really do the job, because, as I said in there, you can't create an agency just by signing a bill; an agency alone is not the solution. It's resources and expertise and a genuine commitment to make it work.

Senator, how soon, realistically, do you think action could take place?

Possibly this session. Senator Schumer is working on a framework. There should be a bill of rights.
The European Parliament is working on an AI Act, and all kinds of private groups are issuing ideas for legislation. There is certainly a lot of momentum behind a bill, and a clear need. People are excited, but also anxious.

On the framework itself — are there any details from Senator Schumer that you can share?

Certainly his leadership is very important — and I have to run.

Why should consumers [inaudible]?

They should demand better protection in privacy as well as in AI. There is a need for privacy legislation, a need for AI protections, and a need for the Kids Online Safety Act on social media. So we all understand: we have a bill now that would protect kids from AI. Algorithms are a form of AI, and they are deploying content that drives eating disorders, suicidal thoughts, drug abuse. They are out there right now, in effect ricocheting that toxic content back and forth between kids and social media, and there's certainly a sense of urgency around that issue. Thanks.

Thank you, Senator. [inaudible conversations]
