
OpenAI C. E. O. Sam altman, whose Company Created chatgpt, was one of three experts to testify on oversight of the swiftly developing technology at a Senate Judiciary subcommittee hearing. Mr. Altman stated that a. I. Could, quote, cause significant harm to the world. Heres the rest of that hearing. Welcome to the hearing of the privacy, technology, and the law subcommittee. I thank my partner in this effort, senator hawley, Ranking Member, and senator durbin, who will be speaking shortly. This hearing is on the oversight of Artificial Intelligence, the first in a series of hearings intended to write the rules of a. I. Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past. And now for some introductory remarks. Too often we have seen what happens when Technology Outpaces regulation. The unbridled exploitation of data. The deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice and how the lack of transparency can undermine public trust. This is not the future we want. If you were listening from home, you might have thought that voice was mine and the words from me, but in fact that voice was not mine, the words were not mine, and the audio was generated by a. I. Voice cloning software trained on my floor speeches. The remarks were written by chatgpt when it was asked how i would open this hearing. And you heard just now the result. 
I asked chatgpt why did you pick those themes and that content and it answered, and im quoting, blumenthal has a strong record in advocating for Consumer Protection and civil rights, he has been vocal about issues such as data privacy and the potential for discrimination in algorithmic decision making. Mr. Altman, i appreciate chatgpts endorsement. In all seriousness, this apparent reasoning is pretty impressive. I am sure that well look back in a decade and view chatgpt and gpt4 like the first cell phones, those big, clunky things we used to carry around. But we recognize that we are on the verge, really, of a new era. The audio and my playing it may strike you as curious or humorous, but what reverberated in my mind was, what if i had asked it, and what if it had provided, an endorsement of ukraine surrendering, or putins leadership . That would have been really frightening. And the prospect is more than a little scary, to use the word, mr. Altman, you have used yourself. And i think you have been very constructive in calling attention to the pitfalls as well as the promise, and thats the reason why we wanted you to be here today, and we thank you and our other witnesses for joining us. For several months now, the public has been fascinated with g. P. T. And other a. I. Tools. These examples, like the homework done by chatgpt, or the articles and opeds it can write, feel like novelties, but the underlying advances of this era are more than Just Research experiments. They are no longer fantasies of science fiction. They are real and present. The promises of curing cancer, developing new understandings of physics and biology, modeling climate and weather, all very encouraging and hopeful. But we also know the potential harms. And we have seen them already. Weaponized disinformation. Housing discrimination. Impersonation fraud. Voice cloning. Deep fakes. These are the potential risks despite the other rewards. 
And for me, perhaps the biggest nightmare is the looming new Industrial Revolution. The displacement of millions of workers, the loss of huge numbers of jobs, the need to prepare for this new Industrial Revolution in skill training and relocation that may be required. And already Industry Leaders are calling attention to those challenges. To quote chatgpt, this is not necessarily the future that we want. We need to maximize the good over the bad. Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment. The result is predators on the internet, toxic content, exploiting children, creating dangers for them. And senator blackburn and i and others like senator durbin on the Judiciary Committee are trying to deal with it. The kids Online Safety act. But congress failed to meet the moment on social media. Now we have the obligation to do it on a. I. Before the threat and the risk become real. Sensible safeguards are not in opposition to innovation. Accountability is not a burden. Far from it. They are the foundation of how we can move ahead while protecting public trust. They are how we can lead the world in technology and science. But also in promoting our Democratic Values. Otherwise, in the absence of that trust, i think we may well lose both. These are sophisticated technologies, but there are basic expectations common in our laws. We can start with transparency. A. I. Companies ought to be required to test their systems, disclose known risks, and allow independent researcher access. We can establish scorecards and nutrition labels to encourage competition based on safety and trustworthiness. Limitations on use. There are places where the risk of a. I. Is so extreme that we ought to impose restrictions or even ban their use. Especially when it comes to commercial invasions of privacy for profit and decisions that affect peoples livelihoods. 
And of course accountability or liability. When a. I. Companies and their clients cause harm, they should be held liable. We should not repeat our past mistakes. For example, section 230. Forcing companies to think ahead and be responsible for the ramifications of their Business Decisions can be the most powerful tool of all. Garbage in, garbage out, the principle still applies. We ought to beware of the garbage, whether its going into these platforms or coming out of them. And the ideas that we develop in this hearing, i think, will provide a solid path forward. I look forward to discussing them with you today and i will just finish on this note. The a. I. Industry doesnt have to wait for congress. I hope there will be ideas and feedback from this discussion, and voluntary action from the industry, such as weve seen lacking in many social media platforms, where the consequences have been huge. So im hoping that we will elevate, rather than having a race to the bottom. And i think these hearings will be an important part of this conversation. This one is only the first. The Ranking Member and i have agreed there should be more and were going to invite other Industry Leaders. Some have committed to come. Experts, academics, and the public, we hope, will participate. And with that, i will turn to the Ranking Member, senator hawley. Sen. Hawley thank you very much. Thanks to the witnesses for being here. I appreciate some of you had long journeys to be here. Thank you for taking the time. I look forward to your testimony. I want to thank senator blumenthal for convening this hearing, being a leader on this topic. A year ago we couldnt have had this hearing because the technology were talking about had not burst into public consciousness. That gives us a sense, i think, of just how rapidly this technology that were talking about today is changing and evolving and transforming our world right before our very eyes. 
I was talking with someone just last night, a researcher in the field of psychiatry, who was pointing out to me that chatgpt and generative a. I. , these Large Language Models, its like the invention of the internet in scale. At least. And potentially far more significant than that. We could be looking at one of the most significant Technological Innovations in human history. And i think my question is, what kind of an innovation is it going to be . Is it going to be like the Printing Press that diffused knowledge and power and learning widely across the landscape, that empowered ordinary, everyday individuals, that led to greater flourishing, that led above all to greater liberty . Or is it going to be more like the atom bomb . Huge technological breakthrough, but the consequences, severe, terrible, continue to haunt us to this day. I dont know the answer to that question. I dont think any of us in the room know the answer to that question because i think the answer has not yet been written. To a certain extent its up to us here and to us as the American People to write the answer. What kind of technology will this be . How will we use it to better our lives . How will we use it to actually harness the power of technological innovation for the good of the American People . For the liberty of the American People. Not for the power of the few. You know, i was reminded of the psychologist and writer, carl jung, who said, at the beginning of the last century, that our ability for technological innovation, our capacity for technological revolution, had far outpaced our ethical and moral ability to apply and harness the technology that we developed. That was a century ago. I think the story of the 20th century largely bore him out. And what will we say when we look back at this moment, about these language models and about the host of other a. I. 
Capacities that are even right now under development, not just in this country, but in china, the countries of our adversaries and all around the world. I think the question that jung posed is really the question that faces us. Will we strike that balance between technological innovation and our ethical and moral responsibility to humanity, to liberty, to the freedom of this country . And i hope that todays hearing will take us a step closer to that answer. Thank you, mr. Chairman. Sen. Blumenthal thanks. Im going to turn to the chairman of the Judiciary Committee, senator durbin, and then the Ranking Member, senator graham. Sen. Durbin thank you very much, mr. Chairman, and senator hawley as well. Last week this committee, the full committee, the Senate Judiciary committee, dealt with an issue that had been waiting for attention for almost two decades and that is what to do with social media when it comes to the abuse of children. We had four bills initially that were considered by this committee and, in what may be history in the making, we passed all four bills with unanimous roll calls. I cant remember another time when weve done that. On an issue that important. Its an indication, i think, of the important position of this committee in the National Debate on issues that affect every Single Family and affect our future in a profound way. 1989 was a historic watershed year in america because thats when seinfeld arrived. And we had a sitcom which was supposedly about little or nothing, which turned out to be enduring. I like to watch it, obviously, and i always marvel when they show the phones they used in 1989; compared to those we carry around in our pockets today, its a dramatic change. I guess the question when i look at that is, does this change in phone technology that we witnessed through this sitcom, really, exemplify a profound change in america . Still unanswered. 
But the basic question we face is whether or not the issue of a. I. Is a quantitative change in technology or a qualitative change. The suggestions ive heard from experts in the field suggest its qualitative. Is a. I. Fundamentally different . Is it a game changer . Is it so disruptive that we need to treat it differently than other forms of innovation . Thats the starting point. And the second starting point is, when you look at the record of congress in dealing with innovation, were not designed for that. In fact, the senate is designed for the opposite. Ive heard of the potential, the positive potential of a. I. And it is enormous. You can go through lists of the deployment of technology to say that an idea for a website, sketched on a napkin, can generate functioning code, pharmaceutical companies could use the technology to identify new candidates to treat disease, the list goes on and on. And then of course the danger, and its profound as well. So im glad that this hearing is taking place. I think its important for all of us to participate. Im glad that its a bipartisan approach. Were going to have to scramble to keep up with the pace of innovation in terms of our governments public response to it, but this is a great start. Thank you, mr. Chairman. Sen. Blumenthal thanks, senator durbin. It is very much a bipartisan approach. Very deeply and broadly bipartisan. And in that spirit im going to turn to my friend, senator graham. [indiscernible] sen. Blumenthal thank you. That was not written by a. I. , for sure. [laughter] let me introduce now the witnesses. Were very grateful for you being here. Sam altman is the cofounder and c. E. O. Of openai, the a. I. Research and Deployment Company behind chatgpt and dalle. Mr. Altman was president of the early stage Startup Accelerator from 2014 to 2019. Openai was founded in 2015. Christina montgomery is i. B. M. 
s Vice President and chief privacy and trust officer, overseeing the companys Global Privacy Program, policies, compliance and strategy. She also chairs i. B. M. s a. I. Ethics board, a Multidisciplinary Team responsible for the governance of a. I. And emerging technology. Christina has served in various roles at i. B. M. , including corporate secretary to the companys board of directors. Shes a Global Leader in a. I. Ethics and governance, and ms. Montgomery is also a member of the United States chamber of commerce a. I. Commission and the United States national a. I. Advisory committee, which was established in 2022 to advise the president and the national a. I. Initiative office on a range of topics related to a. I. Gary marcus is a leading voice in Artificial Intelligence. Hes a scientist, best selling author and entrepreneur. Founder of robust a. I. And geometric intelligence, acquired by uber, if im not mistaken. And emeritus professor of psychology and neural science at n. Y. U. Mr. Marcus is well known for his challenges to contemporary a. I. , anticipating many of the current limitations decades in advance, and for his research in human language development and cognitive neuroscience. Thank you for being here and, as you may know, our custom on the Judiciary Committee is to swear in our witnesses before they testify. So if you would all please rise and raise your right hand. Do you solemnly swear that the testimony that you are going to give is the truth, the whole truth and nothing but the truth, so help you god . Thank you. Mr. Altman, were going to begin with you, if thats ok. Mr. Altman thank you. Thank you, chairman blumenthal, Ranking Member hawley, members of the Judiciary Committee. Thank you for the opportunity to speak to you today about this. Its really an honor to be here. My name is sam altman. Im the chief executive officer of openai. 
Openai was founded on the belief that Artificial Intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks we have to Work Together to manage. Were here because people love this technology. We think it can be a Printing Press moment. We have to Work Together to make it so. Openai is an unusual company, and we set it up that way because a. I. Is an unusual technology. We are governed by a nonprofit and our activities are driven by our mission and our charter, which commit us to working to ensure the broad distribution of the benefits of a. I. And to maximizing the safety of a. I. Systems. We are working to build tools that one day can help us make new discoveries and address some of humanitys Biggest Challenges, like Climate Change and curing cancer. Our current systems arent yet capable of doing these things, but it has been immensely gratifying to watch many people around the world get so much value from what these systems can already do today. We love seeing people use our tools to create, to learn, to be more productive. Were very optimistic that there are going to be fantastic jobs in the future and current jobs can get much better. We also love seeing what developers are doing to improve lives. For example, be my eyes used our new technology in gpt4 to help visually impaired people navigate their environment. We believe the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work. And we make significant efforts to ensure that safety is built into our systems at all levels. Before releasing any new system, openai conducts extensive testing, engages external experts, and improves the models behavior. Before we released gpt4, our latest model, we spent over six months conducting extensive evaluations, external red teaming and dangerous capability testing. 
We are proud of the progress that we made. Gpt4 is more likely to respond helpfully and truthfully, and refuse harmful requests, than any other model of similar capability. However, we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models. For example, the u. S. Government might consider a combination of licensing and testing requirements for development and release of a. I. Models above a threshold of capabilities. There are several other areas i mentioned in my written testimony where i believe that Companies Like ours can partner with governments, including ensuring that the most powerful a. I. Models adhere to a set of safety requirements, facilitating processes to develop and update Safety Measures, and examining opportunities for global coordination. And as you mentioned, i think its important that companies have their own responsibility here, no matter what congress does. This is a remarkable time to be working on Artificial Intelligence. But as this technology advances, we understand that people are anxious about how it could change the way we live. We are too. But we believe we can and must Work Together to identify and manage the potential downsides so that we can all enjoy the tremendous upsides. It is essential that powerful a. I. Is developed with Democratic Values in mind, and this means that u. S. Leadership is critical. I believe that we are able to mitigate the risks in front of us and capitalize on the technologys potential to grow the u. S. Economy and the worlds, and i look forward to working with you to meet this moment. I look forward to answering your questions. Thank you. Sen. Blumenthal thank you. Ms. Montgomery. Ms. Montgomery chairman blumenthal, Ranking Member hawley and members of the subcommittee, thank you for todays opportunity to present. A. I. Is not new but it certainly is having a moment. Recent breakthroughs in generative a. I. 
And the dramatic surge in the publics attention have rightfully raised serious questions at the heart of todays hearing. What are a. I. s potential impacts on society . What do we do about misinformation, misuse or harmful content generated by a. I. Systems . Senators, these are the right questions and i applaud you for convening todays hearing to address them headon. While a. I. May be having its moment, the moment for government to play its role has not passed us by. This period of focused public attention on a. I. Is precisely the time to define and build the right guardrails to protect people and their interests. But at its core, a. I. Is just a tool, and tools can serve different purposes. To that end, i. B. M. Urges congress to adopt a precision regulation approach to a. I. This means establishing rules to govern the deployment of a. I. In specific use cases, not regulating the technology itself. Such an approach would involve four things. First, different rules for different risks. The strongest regulation should be applied to use cases with the greatest risks to society. Second, clearly defining risks. There must be clear guidance on a. I. Uses or categories of a. I. Supported activities that are inherently highrisk. This common definition is key to enabling a clear understanding of what regulatory requirements will apply in different use cases and contexts. Third, be transparent. A. I. Shouldnt be hidden. Consumers should know when theyre interacting with an a. I. System and that they have recourse to engage with a real person should they so desire. No person anywhere should be tricked into interacting with an a. I. System. And finally, showing the impact. For higher risk use cases, companies should be required to conduct impact assessments that show how their systems perform against tests for bias and other ways that they could potentially impact the public, and to attest that theyve done so. 
By following a riskbased, usecasespecific approach at the core of precision regulation, congress can mitigate the potential risks of a. I. Without hindering innovation. But businesses also play a Critical Role in ensuring the responsible deployment of a. I. Companies active in developing or using a. I. Must have strong internal governance, including, among other things, designating a lead a. I. Ethics official responsible for an organizations trustworthy a. I. Strategy, and standing up an ethics board or a similar function as a centralized clearinghouse for resources to help guide implementation of that strategy. I. B. M. Has taken both of these steps and we continue calling on our industry peers to follow suit. Our a. I. Ethics board plays a Critical Role in overseeing internal a. I. Governance processes, creating reasonable guardrails to ensure we introduce technology into the world in a responsible and safe manner. It provides centralized governance and accountability while still being flexible enough to support decentralized initiatives across i. B. M. s global operations. We do this because we recognize that society grants our license to operate. And with a. I. The stakes are simply too high. We must build, not undermine, the publics trust. The era of a. I. Cannot be another era of move fast and break things. But we dont have to slam the brakes on innovation either. These systems are within our control today, as are the solutions. What we need at this pivotal moment is clear, reasonable policy and sound guardrails. These guardrails should be matched with meaningful efforts by the Business Community to do their part. Congress and the Business Community must Work Together to get this right. The American People deserve no less. Thank you for your time and i look forward to your questions. Sen. Blumenthal thank you. Professor marcus. Mr. Marcus thank you, senator. [indiscernible] thank you, senators. Todays meeting is historic. Im profoundly grateful to be here. 
I come as a scientist, as someone who has founded a. I. Companies, and as someone who genuinely loves a. I. But who is increasingly worried. There are benefits, but we dont yet know whether they will outweigh the risks. Fundamentally, these new systems are going to be destabilizing. They can and will create persuasive lies at a scale humanity has never seen before. Outsiders will use them to affect our elections, insiders to manipulate our markets and political systems. Democracy itself is threatened. Chatgpt will also shape our opinions, potentially exceeding what social media can do. Choices about data sets that a. I. Companies use will have enormous unseen influence. Those who choose the data will make the rules, shaping society in subtle but powerful ways. There are other risks, too, many stemming from the inherent unreliability of current systems. A law professor, for example, was accused by chatgpt of sexual harassment, untrue. And it pointed to a Washington Post article that didnt even exist. The more that that happens, the more that anybody can deny anything. As one prominent lawyer told me friday, defendants are starting to claim that plaintiffs are making up legitimate evidence. These sorts of allegations undermine the abilities of juries to decide what or who to believe and contribute to the undermining of democracy. Poor medical advice could have serious consequences too. An open source large language model recently seems to have played a role in a persons decision to take their own life. The large language model asked the human, if you wanted to die, why didnt you do it earlier . Then followed up, were you thinking of me when you overdosed, without ever referring the patient to the human help that was obviously needed. Another system, rushed out and made available to millions of children, told a person posing as a 13yearold how to lie to her parents about a trip with a 31yearold man. Further threats continue to emerge regularly. 
A month after gpt4 was released, openai released plugins, which allowed others to write source code with increased powers of automation. This may well have drastic and difficult to predict security consequences. What criminals are going to do here is create counterfeit people. Its hard to even envision the consequences of that. We have built machines that are like bulls in a china shop. Powerful, reckless and difficult to control. We all more or less agree on the values wed like for our a. I. Systems to honor. We want, for example, for our systems to be transparent, to protect our privacy, to be free of bias and to be safe. But current systems are not in line with these values. Current systems are not transparent. They do not adequately protect our privacy and they continue to perpetuate bias. And even their makers dont entirely understand how they work. Most of all, we cannot remotely guarantee that they are safe, and hope here is not enough. The Big Tech Companies preferred plan is, trust us. But missions drift. Openais original Mission Statement proclaimed, our goal is to advance a. I. In the way that is most likely to benefit humanity as a whole. Seven years later, theyre largely beholden to microsoft, embroiled in part in an epic battle of Search Engines that routinely make things up, and thats forced alphabet to rush out products. Humanity has taken a back seat. A. I. Is moving incredibly fast, with lots of potential but also lots of risk. We obviously need government involved and we need the Tech Companies involved, both big and small. But we also need independent scientists. Not just so that we scientists can have a voice, but so that we can participate directly in addressing the problems and evaluating solutions. And not just after products are released but before. And im glad that sam mentioned that. We need tight collaboration between independent scientists and governments in order to hold the companies feet to the fire. 
Allowing independent scientists access to these systems before they are widely released, as part of a clinical triallike safety evaluation, is a vital first step. Ultimately we may need Something Like cern, global, International and neutral, but focused on a. I. Safety rather than high energy physics. We have unprecedented opportunities here, but we are also facing a perfect storm of corporate irresponsibility, widespread deployment and inherent unreliability. A. I. Is among the most world changing technologies ever, already changing things more rapidly than almost any technology in history. We acted too slowly with social media. Many unfortunate decisions got locked in, with lasting consequences. The choices we make now will have lasting effects for decades, maybe even centuries. The very fact that we are here today in bipartisan fashion to discuss these matters gives me some hope. Thank you, mr. Chairman. Sen. Blumenthal thank you very much, professor marcus. Were going to have sevenminute rounds of questioning and i will begin. First of all, professor marcus, we are here today because we do face that perfect storm. Some of us might characterize it more like a bomb in a china shop, not a bull. And as senator hawley indicated, there are precedents here. Not only the atomic warfare era, but also the research on genetics, where there was International Cooperation as a result. And we want to avoid those past mistakes, as i indicated in my opening statement, that were committed on social media. Thats precisely the reason were here today. Chatgpt makes mistakes, as all a. I. Does, and it can be a convincing liar. What people call hallucinations. That might be an innocent problem in the opening of a judiciary subcommittee hearing, where a voice is impersonated, mine, in this instance. 
Or quotes from Research Papers that dont exist. But chatgpt is willing to answer questions about life or death matters. For example, drug interactions. And those kinds of mistakes can be deeply damaging. Im interested in how we can have reliable information about the accuracy and trustworthiness of these models, and how we can create competition and consumer disclosures that will reward greater accuracy. The National Institute of standards and Technology Already has an a. I. Accuracy test, the face recognition vendor test. It doesnt solve for all the issues with facial recognition, but the scorecard does provide useful information about the capabilities and flaws of the systems. So theres work on models to assure accuracy and integrity. My question, let me begin with you, mr. Altman, is, should we consider independent testing labs to provide scorecards and nutrition labels, or the equivalent of nutrition label packaging, that indicates to people whether or not the content can be trusted, what the ingredients are, and what the garbage going in may be, because it could result in garbage coming out . Mr. Altman yeah, i think thats a great idea. I think that companies should put out their own sort of, here are the results of our tests of the model before we release it, heres where it has weaknesses, heres where it has strengths. But also independent audits for that are very important. The models are getting more accurate over time. You know, this is, as we have, i think, said as loudly as anyone, this technology is in its early stages. It definitely still makes mistakes. We find that users are pretty sophisticated and understand where the mistakes are, or are likely to be, and that they need to be responsible for verifying what the models say, that they go off and check it. 
I worry that as the models get better and better, the users can have sort of less and less of their own discriminating thought process around it, but users are more capable than we often give them credit for in conversations like this. I think a lot of disclosures, which, if you use chatgpt, youll see about the inaccuracies of the model, are also important. And im excited for a world where companies publish, with the models, information about how they behave, where the inaccuracies are, and independent agencies or companies provide that as well. I think its a great idea. Sen. Blumenthal i alluded in my opening remarks to the jobs issue. The economic effects on employment. I think you have said, in fact, and im going to quote, development of superhuman Machine Intelligence is probably the greatest threat to the continued existence of humanity, end quote. You may have had in mind the effect on jobs, which is really my biggest nightmare in the longterm. Let me ask you what your biggest nightmare is and whether you share that concern. Mr. Altman like with all technological revolutions, i expect there to be Significant Impact on jobs. But exactly what that impact looks like is very difficult to predict. If we went back to the other side of a previous technological revolution, talking about the jobs that exist on the other side, you know, you can go back and read books about this, what people said at the time. Its difficult. I believe that there will be far greater jobs on the other side of this and the jobs of today will get better. First of all, i think its important to understand and think about gpt4 as a tool, not a creature, which is easy to get confused about, and its a tool that people have a great deal of control over in how they use it. Second, gpt4 and other systems like it are good at doing tasks, not jobs. So you see already people that are using gpt4 to do their job much more efficiently, by helping them with tasks. 
Now, gpt4 will totally take over some jobs. This has been continually happening. As our quality of life raises and as machines and tools that we create can help us live better lives, the bar raises for what we do, and our human ability and what we spend our time on goes after more satisfying projects. So there will be an impact on jobs. We try to be very clear about that, and i think it will require partnership between the industry and government, but mostly action by government, to figure out how we want to mitigate that. But im very optimistic about how great the jobs of the future will be. Sen. Blumenthal thank you. Let me ask ms. Montgomery and professor marcus for your reactions to those questions as well. Ms. Montgomery on the jobs point, i mean, its a hugely important question. And its one that weve been talking about for a really long time at i. B. M. We do believe that a. I. , and weve said it for a long time, is going to change every job. New jobs will be created. Many more jobs will be transformed and some jobs will transition away. Im a personal example of a job that didnt exist when i joined i. B. M. And i have a team of a. I. Governance professionals who are in new roles that we created as early as three years ago. Theyre new and theyre growing. So i think the most important thing that we could be doing, and can and should be doing now, is to prepare the work force of today and the work force of tomorrow for partnering with a. I. Technologies and using them, and weve been very involved for years now in doing that. In focusing on skillsbased hiring, in educating for the skills of the future. Our skills platform has seven million learners and over a thousand courses worldwide focused on skills, and weve pledged to train 30 million individuals by 2030 in the skills that are needed for society today. Sen. Blumenthal thank you. Professor marcus . Mr. Marcus may i go back to the first question as well . Sen. Blumenthal absolutely. Mr. 
Marcus on the subject of nutrition labels, i think we absolutely need to do that. I think there are some technical challenges, and building proper nutrition labels goes hand in hand with transparency. The biggest scientific challenge in understanding these models is how they generalize. What do they memorize and what new things do they do . The more that theres in the data set, for example, the thing that you want to test accuracy on, the less you can get a proper read on that. So its important, first of all, that scientists be part of that process and, second, that we have much Greater Transparency about what actually goes into these systems. If we dont know whats in them, then we dont know exactly how well theyre doing when we give them something new, and we dont know how good a benchmark it will be for something entirely novel. Second is on jobs. Past Performance History is not a guarantee of the future. It has always been the case in the past that we have had more jobs and new professions come in as new technologies come in. I think this ones going to be different, and the real question is over what time scale . Is it going to be 10 years, 100 years . I dont think anybody knows the answer to that question. I think in the long run, socalled artificial general intelligence really will replace a large fraction of human jobs. Were not that close to artificial general intelligence. Despite all of the media hype and so forth, i would say that what we have right now is just a small sampling of the a. I. That we will build. In 20 years people will laugh at this; i think it was senator durbin who made the example about cell phones. When we look back at the a. I. Of today 20 years from now, well be like, wow, that stuff was really unreliable, it couldnt really do planning, which is an important technical aspect; its reasoning abilities were limited. But when we get to a. G. I. 
, artificial general intelligence, lets say 50 years, that really is going to have, i think, profound effects on labor. Theres just no way around that. Last, i dont know if im allowed to do this, but ill note that sams worst fear, i do not think, is employment, and he never told us what his worst fear actually is, and i think its germane to find out. Sen. Blumenthal thank you. Im going to ask mr. Altman if he cares to respond. Mr. Altman yeah. Look, we have tried to be very clear about the magnitude of the risks here. I think jobs and employment and what were all going to do with our time really matters. I agree that when we get to a very powerful system, the landscape will change. I think im just more optimistic that we are incredibly creative and we find new things to do with better tools and that will keep happening. My worst fears are that we cause significant, we, the field, the technology, the industry, cause significant harm to the world. I think that could happen in a lot of different ways. Its why we started the company. Its a big part of why im here today. And why weve been here in the past and been able to spend some time with you. I think if this Technology Goes wrong, it can go quite wrong and we want to be vocal about that. We want to work with the government to prevent that from happening. But we try to be very cleareyed about what the downside case is and the work that we have to do to mitigate that. Sen. Blumenthal thank you. And our hope is that the rest of the industry will follow the examples that you and i. B. M. , ms. Montgomery, have set by coming today and meeting with us, as you have done privately, in helping to guide what were going to do so that we can target the harms and avoid unintended consequences to the good. Thank you. Senator hawley. Sen. Hawley thank you, mr. Chairman. Let me start with you, mr. Altman. 
Ill preface by saying, my questions here are an attempt to get my head around, and to ask all of us to help us get our heads around, what this generative a. I. , particularly the Large Language Models, what it can do. Im trying to understand its capacities and then its significance. So im looking at a paper here entitled Large Language Models trained on media diets can predict Public Opinion. This was just posted about a month ago. There are two authors, and their conclusion, this work was done at m. I. T. And also at google, the conclusion is that Large Language Models can indeed predict Public Opinion. They go through and model why this is the case, and they conclude ultimately that an a. I. System can predict human survey responses by adapting a pretrained language model to subpopulationspecific media diets. In other words, you can feed a model a particular set of media inputs and it can, with remarkable accuracy, predict what peoples opinions will be. I want to think about this in the context of elections. If these Large Language Models can, even now, based on the information we put into them, quite accurately predict Public Opinion ahead of time, i mean, predict it before you even ask the public these questions, what will happen when entities, whether its corporate entities or whether its Government Entities or whether its campaigns or whether its foreign actors, take this survey information, these predictions about Public Opinion, and then fine tune strategies to elicit certain responses, certain behavioral responses . We already know, this committee has heard testimony, three years ago now, about the effect of something as prosaic as Google Search. The effect that this has on voters in an election, particularly undecided voters in the final days of an election, who maybe try to get information from Google Search, and what an enormous effect the ranking of the Google Search results has on an undecided voter. 
Maybe you can help me understand here what some of the significance of this is. Should we be concerned about Large Language Models that can predict survey opinion and then can help organizations fine tune strategies to elicit behaviors from voters . Should we be worried about this for our elections . Mr. Altman thank you, senator hawley. This is one of the areas of concern, the ability to manipulate, to persuade, to provide sort of oneonone, you know, interactive information. Thats a broader version of what youre talking about. Given that were going to face an election next year and these models are getting better, i think this is a significant area of concern. I think theres a lot of policies that companies can voluntarily adopt, and im happy to talk about what we do there. I do think some regulation would be quite wise on this topic. Someone mentioned earlier, something we agree with, people need to know if theyre talking to an a. I. , if their content is generated or not. I think its a great thing to make that clear. I think we also will need rules, guidelines, about whats expected in terms of disclosure from a Company Providing a model that could have these sorts of abilities that you talk about. So i am nervous about it. I think people are able to adapt quite quickly. When photoshop came onto the scene a long time ago, people were quite fooled by Photoshopped Images and then pretty quickly understood that images might be photoshopped. This is like that, but on steroids. The interactivity, the ability to really model and predict humans, as you talked about, i think is going to require a combination of companies doing the right thing, regulation, and public education. Senator hawley professor marcus, do you want to address this . Mr. Marcus yes. In the appendix to my remarks, i have papers that will make you more concerned. One is in the wall street journal, called help, my political beliefs are altered by a chat bot. 
The scenario you raised is that we might basically observe people and use surveys to figure out what theyre saying, but as sam just acknowledged, the risk is actually worse. The systems will directly, maybe not even intentionally, manipulate people. That was the wall street journal piece, and it linked to an article, not yet published, not yet peer reviewed, called interacting with opinionated models changes views. This goes back to data. One of the things with gpt4 is we dont know what its trained on. What it is trained on has consequences for, essentially, the biases of the system. We can talk about that in technical terms. But how these systems might lead people along depends very heavily on what data they are trained on. We need transparency about that, and we probably need scientists in there doing analysis in order to understand what the political influences, for example, of these systems might be. Its not just about politics. It can be about health. It can be about anything. These systems absorb a lot of data and what they say reflects that data, and theyre going to do it differently depending on whats in that data. So it is different if theyre trained on the wall street journal or the New York Times or reddit. They are trained on all of this stuff. We dont understand all of that. So we have potential manipulation. Its even more complex than that, because its subtle manipulation. People may not be aware of whats going on. That was the point of both the wall street journal article and the other article i called your attention to. Senator hawley let me turn to the data that major platforms, google, meta, etc. , collect on all of us routinely. We have had many a chat about this in this committee for a year now. The amount of data, personal data, that each company has on us. An a. I. 
System thats trained on that individual data, that knows each of us better than we know ourselves and also knows the billions of data points about human behavior, human language, interaction generally, cant we foresee an a. I. System that is extraordinarily good at determining what will grab human attention and what will keep an individuals attention . For the war for attention, the war for clicks thats currently going on on all of these platforms, this is how they make their money, im just imagining an a. I. System, these a. I. Models, supercharging that war for attention, such that we now have technology that will allow individual targeting of a kind we have never even imagined before, where the a. I. Will know what sam altman finds attention grabbing, will know what josh hawley finds attention grabbing, and it will elicit responses from us that we have not been able to imagine. Should we be concerned about that, for the corporate applications, for the monetary applications, for the manipulation that could come from that, mr. Altman . Mr. Altman yes. We should be concerned about that. Open a. I. Does not, were not, we are not trying to build up these profiles of our users. Were not trying to get them to use it more. Actually, wed love it if they use it less, because we dont have enough g. P. U. s. I think companies are, and will in the future, use a. I. Models to create, you know, very good ad predictions of what a user is like. I think its already happening in many ways. Mr. Marcus perhaps ms. Montgomery will want to weigh in as well. Hyper targeting of advertising is definitely going to come. I agree thats not openais model. But they are working with microsoft. I dont know what microsofts thoughts are. Maybe it will be with open source language models. I dont know. But the technology is, lets say, part way there to being able to do that, and well certainly get there. Ms. Montgomery so were an Enterprise Technology company, not consumer focused, so the space is not one we necessarily operate in. 
But these issues are hugely important issues, and thats why weve been out ahead in developing the technology that will help to ensure that we can do things like produce a fact sheet that has the ingredients of what your data is trained on. Data sheets, model cards, all those types of things, and calling for, as i mentioned today, transparency. So you know what the algorithm was trained on. And then you also know, and can manage and monitor continuously over the life cycle of an a. I. Model, the behavior and performance of that model. Chair blumenthal senator durbin. Senator durbin i think whats happening in this room is historic. I cant recall when we have had people representing large corporations or private sector entities come before us and plead with us to regulate them. In fact, many people in the senate have based their careers on the opposite, that the economy will thrive if government gets the hell out of the way. And what im hearing instead today is that stop me before i innovate again message. Im just curious as to how were going to achieve this. I mentioned section 230 in my opening remarks, and we learned something there. We decided in section 230 that we were basically going to absolve the industry from liability for a period of time as it came into being. Well, mr. Altman, on a podcast earlier this year you agreed with kara swisher that section 230 doesnt apply to a. I. And those like openai should not be open to immunity for harms from their products. What have we learned from 230 that applies to a. I. . Mr. Altman thank you, senator. I dont know what the answer is. Id like to collaborate with you to figure it out. I think for a very new technology we need a new framework. Certainly, Companies Like ours bear a lot of responsibility for the tools that we put out in the world, but tool users do as well. 
And also people that will build on top of it. Between them and the end consumer, how we want to come up with a liability framework there is a super important question. And wed love to Work Together. Senator durbin the point i want to make is this. When it came to Online Platforms, the inclination of the government was get out of the way. This is a new industry. Dont overregulate it. In fact, give them some breathing space and see what happens. I am not sure im happy with the outcome as i look at Online Platforms and the harms theyve created. Problems that weve seen demonstrated in this committee. Child exploitation, cyber bullying, online drug sales and more. I dont want to repeat that mistake again. What i hear is the opposite suggestion from the private sector. And that is, come in at the front end of this thing and have precision regulation. For a Major Company like i. B. M. To come before the committee and say, government, please regulate us, can you explain the difference in thinking from the past and now . Ms. Montgomery yeah, absolutely. So for us, this comes back to the issue of trust, and trust in the technology. Trust is our license to operate, as i mentioned in my opening remarks. So we firmly believe, and weve been calling for precision regulation of Artificial Intelligence for years now, this is not a new position, we think that Technology Needs to be deployed in a responsible and clear way. Weve adopted principles around that, trust and transparency we call them. Thats why were thinking a. I. Should be regulated at the point of risk, essentially, and thats the point at which Technology Meets society. Senator durbin lets take a look at what that might appear to be. Members of congress are smart, a lot of smart people, maybe not as smart as we think we are sometimes, and government certainly has the capacity to do amazing things. 
But when you talk about our ability to respond to the current challenge, and the perceived challenge of the future, challenges which you all have described in terms which are hard to forget, as you said, mr. Altman, things can go quite wrong. As you said, mr. Marcus, democracy is threatened. I mean, the magnitude of the challenge youre giving us is substantial. Im not sure we can respond quickly and with enough expertise. Professor marcus, you made a reference to cern, the International Arbiter of nuclear research. I dont know if thats a fair characterization. But one ill start with. What agency of this government do you think exists that can respond to the challenge that you laid down today . Mr. Marcus we have many agencies that respond in some ways. For example, the f. T. C. , the f. C. C. , many agencies. My view is we probably need a cabinet Level Organization within the United States in order to address this. My reasoning for that is that the number of risks is large. The amount of information to keep up on is so much, i think we need a lot of technical expertise. I think we need a lot of coordination of these efforts. There is one model here where we stick to only existing law, each agency does their own thing, and we try to shape all of what we need to do that way. But i think that a. I. Is going to be such a large part of our future, and so complicated and moving so fast, that it makes sense to have an agency whose fulltime job is to do this. It does not fully solve your problem of a dynamic world, but its a step in that direction. I personally have suggested that we should want to do this in a global way. I wrote an article in the economist, an invited essay, suggesting we might want an international agency. Senator durbin thats where i wanted to go next. Ill set aside the cern and nuclear examples; government was involved in that from day one, at least in the United States. Now we are dealing with innovation which doesnt necessarily have a boundary. We may create a great u. S. 
Agency, i hope we do, that may have jurisdiction over u. S. Corporations and u. S. Activity. But it doesnt have a thing to do with whats going to bombard us from outside the United States. How do you give this International Authority the authority to regulate in a fair way for all entities involved . Mr. Marcus i think thats probably over my pay grade. The politics behind it are obviously complicated. Im really heartened that this room is bipartisan in supporting the same things. That makes me feel like it might be possible. I would like to see the United States take leadership in such an organization. It has to involve the whole world and not just the u. S. To work properly. I think even from the perspective of the companies, it would be a good thing. The companies themselves do not want a situation where you take these models, expensive to train, and you have to have 190some of them, one for every country. That wouldnt be a good way of operating. When you think about the energy costs alone for training the systems, it would not be a good model. If every country has its own policy, then for each jurisdiction every company has to train another model. Maybe different states are different. Missouri and california have different rules. So then that requires even more training of these expensive models, with huge climate impact. It would be very difficult for the companies to operate if there was no global coordination. So i think we might get the companies onboard. If there is bipartisan support here, and i think there is support around the world, it is entirely possible we could develop such a thing. Obviously there are many nuances here, diplomacy over my pay grade. I would love to learn from you all to help make that happen. Senator durbin mr. Altman. Mr. Altman can i weigh in briefly . I want to echo support for what mr. Marcus said. The u. S. Should lead. But to be effective we need something global. This can happen everywhere. There is precedent. It sounds hard. 
There is precedent. We have done it before with the iaea. We have talked about doing it for other technologies. Given what it takes to make these models, the chip supply chain, the limited number of competitive g. P. U. s, the power the u. S. Has over these companies, i think there are paths to the u. S. Setting international standards that other countries would need to collaborate with and be part of, that are actually workable, even though it sounds on its face like an impractical idea. It would be great for the world. Chair blumenthal thanks, senator durbin. I think, to your point, the European Parliament already is passing an a. I. Act. On social media, europe is ahead of us. We need to be in the lead. I think your point is very well taken. Lets turn to senator graham. Senator blackburn. Senator blackburn thank you, mr. Chairman. And thank you all for being here with us today. I put to my chatgpt account, should congress regulate a. I. , chatgpt . It gave me four pros, four cons, and said ultimately the decision rests with congress and deserves careful consideration. So on that, it was very balanced. I recently visited with the Nashville Technology council. I represent tennessee. Of course you had people there from health care, Financial Services, logistics, educational entities. They are concerned about what they see happening with a. I. And the utilizations for their companies. Ms. Montgomery, similar to you, the Health Care People are looking at disease analytics. They are looking at predictive diagnosis, how this can better the outcomes for patients. The logistics industry is looking at ways to save time and money and yield efficiencies. Youve got Financial Services that are saying, how does this work with quantum . With blockchain . How can we use this . But i think as we have talked with them, mr. Chairman, one of the things that continues to come up is, yes, professor marcus, as you were saying, the e. U. , different entities, are ahead of us in this. 
But we have never established federal preemption for Online Privacy or Data Security, and put some of those foundational elements in place, which is something that we need to do as we look at this. It will require that the commerce committee and the Judiciary Committee decide how we move forward, so people own their virtual you. Mr. Altman, i was glad to see last week that your openai models are not going to be trained using consumer data. I think that is important. If we have a second round, i have a host of questions for you on Data Security and privacy. But i think its important to let people control their virtual you, their information, in these settings. I want to come to you on music and content creation, because we have a lot of songwriters and artists. I think we have the best Creative Community on the face of the earth. They are in tennessee. And they should be able to decide if their copyrighted songs and images are going to be used to train these models. Im concerned about openais jukebox. It offers some rerenditions in the sound of garth brooks, which suggests that openai is trained on garth brooks songs. I went in this weekend and i said, write me a song that sounds like garth brooks, and it gave me a different version of simple man. Its interesting that it would do that. But you are training it on these copyrighted songs, these midi files, these sound technologies. As you do this, who owns the rights to that a. I. Generated material . And using your technology, could i remake a song, insert content from my favorite artist, and then own the creative rights to that song . Mr. Altman thanks, senator. This is an area of great interest to us. I would say, first of all, we think that creators deserve control over how their creations are used and what happens beyond the point of them releasing it into the world. 
Second, i think that we need to figure out new ways, with this new technology, that creators can win, succeed, have a vibrant life. Im optimistic senator blackburn how do you compensate the artist . Mr. Altman thats what i was going to say. We are working with artists now, visual artists, musicians, to figure out what people want. There are a lot of different opinions, unfortunately. Senator blackburn do you favor Something Like soundexchange . That has worked in the area of radio. Mr. Altman im not familiar with that. Senator blackburn you have your team behind you. Get back to me on that. That would be a third party entity. Lets discuss that. Let me move on. Can you commit, as you have done with consumer data, not to train chatgpt, openai jukebox, or other a. I. Models on artists and songwriters copyrighted works, or use their voices and their likenesses, without first receiving their consent . Mr. Altman first of all, jukebox is not a product we offer. That was a research release, unlike chatgpt or dalle. Senator blackburn napster was something that cost a lot of artists a lot of money. Mr. Altman i understand. Sen. Blackburn in the digital distribution era. Mr. Altman i dont know the numbers on the top of my head for jukebox. Jukebox doesnt get much attention or usage. It was put out to show something is possible. Senator blackburn as senator durbin just said, a fair warning to you all: if we are not involved in this from the getgo, and you all already are a long way down the path on this, but if we dont step in, then this gets away from you. So are you working with the Copyright Office . Are you considering protections for content generators and creators in generative a. I. . Mr. Altman we are engaged on that. To reiterate my earlier point, we think content creators, content owners, need to benefit from this technology. What the economic model is, we are talking to artists about what they want. 
No matter what the law is, the right thing to do is to make sure people get upside benefit from this technology. We believe its going to deliver that. But content owners totally deserve control over how their content is used. Senator blackburn on privacy, how are you going to protect users and their specific data, things that are copyrighted, userspecific data, through your a. I. Applications . If i can say, write me a song that sounds like garth brooks, and it takes an existing song, there has to be compensation to that artist for that utilization. If it was streaming, it would be there. So if you are going to do that, what is your policy for making certain you are accounting for that and you are protecting that individuals right to privacy, and their right to secure that data and that Creative Work . Mr. Altman number one, we think people should be able to say, i dont want my personal data trained on. Senator blackburn that gets to a National Privacy law, which many of us are working towards getting, something we can use. My time has expired. Senator klobuchar i love tennessee and love your music, and i used chatgpt, and two of the top three songwriters it listed were from minnesota, prince and bob dylan. Lets hope that is one thing that wont change. My staff and i, in my role as chair of the rules committee and leading election bills, have introduced bills, including one that representative clarke is doing on political advertisements, and this is the tip of the iceberg, and you know that, about the images. My own view, and senator grahams on section 230, is that a company just cant make stuff up and not have any consequence. One of my jobs on the rules committee is election misinformation. And we asked chatgpt to write a tweet about a polling location in bloomington, minnesota, and to say there are long lines at this location, where should we go . It is not an election right now. The tweet that was drafted was a completely fake thing. Go to 1234 elm street. 
You can imagine what im concerned about here. With the primary elections upon us, we will have all kinds of misinformation, and i want to know what you are doing about it . We have to do something soon, not just for the images of the candidates but also about misinformation about the actual polling places and election rules. Mr. Altman we are quite concerned about the impact this can have on elections and hope we can Work Together quickly. There are many approaches, and ill talk about some of the things we do, but before that, it is tempting to use the frame of social media, but this is not social media. This is different, and so the response is different. This is a tool that a user is using to help generate content more efficiently. They can test it, and if they dont like it they can get another version, but it can still then spread through social media. Chatgpt is a Single Player experience where they are just using this. We think a lot about what to do. Senator klobuchar our bill gives news organizations an exemption to negotiate with google and facebook, and microsoft is supportive of the bill, to basically negotiate with them to get better rates and not be at a leverage disadvantage. Other countries are doing this. And so my question is, when we have onethird of the newspapers that existed two decades ago going to be gone by 2025, unless you start compensating for everything from books, movies and news content, we are going to lose these content producers. I would like your response to that. There is an exemption to that in section 230, but the little newspapers cannot keep up. Mr. Altman we would like the tools that we are creating to help news organizations have a vibrant future. Senator klobuchar local news, reporting high school sports and scandals in city council. But do you understand that this could be worse in terms of local news content if they are not compensated . What they need is to be compensated for their content and not have it stolen. Mr. 
Altman the current version [indiscernible] is not a good way to find recent news, and not a service that can do a great job of that. Its possible. There are things to do to help local news, and we would like to. It is critically important. Mr. Marcus can i add something there . Senator klobuchar yes, and on more transparency on the platforms, senator coons and senator cassidy have the Transparency Act to give researchers access to the algorithms. Would that be helpful . Mr. Marcus this is absolutely critical to understand the political ramifications. We need transparency about how these models work and to have scientists have access to them. And on your point about local news, a lot of news is already not reliable. There was a report showing that Something Like 50 web sites are generated by bots. So the quality of the overall news market is going to decline as we have more generated content from systems that are not reliable. Senator klobuchar thank you for making the argument for why we have to mark this bill up in june. Senator graham thank you for having this hearing, and lets learn from the mistakes we have made with social media. The idea of not suing social Media Companies was to allow the internet to flourish, because if i slander you, you can sue me. If you are a Billboard Company and put up the slander, you can be sued. But for social media, we said no. Section 230 is used to avoid liability for activity that other people generate, even when the companies refuse to comply with their own terms of use. A mother calls up the company and says, this app is being used to bully my child, you promised you would prevent bullying. She calls three times and gets no response, the child kills herself, and she cant sue. Do you agree we dont want to do that again . Yes. If i may speak, there is a distinction between reproducing content and generating content. Senator graham you would like liability where people are harmed . Yes, and ibm has been advocating to condition liability. Senator graham [indiscernible]. We are claiming we need to Work Together to find a new approach. I dont think section 230 is the right framework. 
Under the law that exists today, if I'm harmed by this tool you created, can I sue you? Mr. Altman: That is beyond my area of expertise. Senator Graham: Have you ever been sued? Mr. Altman: Yeah, we have been sued before. Senator Graham: What for? Mr. Altman: Frivolous things. Senator Graham: If artificial intelligence does the kinds of things my colleagues have described that could ruin our lives, can we sue? There should be clear responsibility. But you are not claiming that any kind of legal protection such as Section 230 applies to your industry, correct? Mr. Altman: No. Senator Graham: When it comes to consumers, there are three time-tested ways to protect them: statutory schemes, which are nonexistent here; the legal system, which may apply here but does not for social media; and agencies. Going back to Senator Hawley's point: nuclear power could be one of the solutions to climate change, but you can't just go build a nuclear power plant — "Hey, Bob, let's build a nuclear power plant" — you have to go through the Nuclear Regulatory Commission. Do you agree that these tools you are creating should be licensed? Mr. Altman: Yes. Senator Graham: You get a license. And do you agree with me that the simplest and most effective way is an agency that is more nimble and smarter than Congress overseeing what you do? Professor Marcus: Yes. Ms. Montgomery: I would have some nuances there. Senator Graham: We don't have an agency that regulates this technology, do we? Ms. Montgomery: No, we don't have an agency that regulates the technology. Senator Graham: Should we have one? IBM says we don't need an agency — interesting. Simple question: should you need a license to produce one of these tools? Ms. Montgomery: Potentially, yes. Senator Graham: Do you claim Section 230 applies in this area? Ms. Montgomery: We are not a platform company, and we have long advocated for accountability. Senator Graham: You don't need an agency to deal with the most transformative technology, maybe, ever? Ms. Montgomery: This technology is transformative — we know that, for good or bad. The conversations we are having here today have been bringing to life the domains and the issues. Mr.
Altman, why are you so willing to have an agency? Mr. Altman: We can see from users how much value these tools are going to have, and what the downsides are; a tool this major needs standards. Senator Graham: If you make a ladder and the ladder doesn't work, you can sue the people that made the ladder — but there are also ladder standards. Mr. Altman: We agree with you. Senator Graham: My two cents' worth: we need to empower an agency that issues a license and can take it away. Wouldn't that be an incentive to do it right — that you can be taken out of business? Mr. Altman: Yes. Senator Graham: You agree that China is doing A.I. research? This world organization that doesn't exist may be aspirational, but if you don't do the China part of it, you'll never get this right — do you agree? Mr. Altman: It doesn't have to be one world organization; there are a lot of options. But there have to be some sort of standards and controls, and other people are doing this too. Senator Graham: Military applications — how can A.I. change warfare? You've got one minute. Mr. Altman: That's a tough question for one minute; this is very far out of my area of expertise. Senator Graham: Take a drone: you can plug A.I. into a drone so it flies out, goes over a target, and drops a missile. Could A.I. create a situation where a drone can select a target itself? Can it be done? Mr. Altman: Sure. Senator Coons: Thank you for convening this hearing, assembling this compelling panel of witnesses, and beginning a series of hearings on this technology. We recognize the promise and the substantial risks associated with generative A.I. technologies. They can help us learn new skills and open whole new vistas of creativity, but generative A.I. can also deliver wildly incorrect information — it can "hallucinate," as is often said — impersonate loved ones, and shape public opinion and the outcome of elections. Congress has so far failed to responsibly enact legislation addressing social media companies' serious harms. Senator Klobuchar mentioned a bipartisan bill that would open up platforms.
We have struggled to understand the underlying technology and to move toward responsible legislation. We cannot afford to be as slow in responding to generative A.I. as we have been to social media; the consequences would exceed those by orders of magnitude. Let me ask a few questions about how we assess the risks and what the role of regulation should be. Mr. Altman, I appreciated your testimony, in which you describe how OpenAI assesses the safety of your models before deployment. The question is how you decide whether a model is safe enough to deploy — and safe enough to have been built at all — before it is let go into the wild. I understand one way to prevent harmful content is to have humans identify that content and train the model to avoid it; then there is constitutional A.I., which gives the model values or principles up front. Would it be more effective to give the model these kinds of rules instead of training it on all the different kinds of potentially harmful content? Mr. Altman: Let me start with why we deploy at all — why we put these systems out into the world. The obvious reason is that there are benefits: wonderful things of great value that make people happy. But a big part of why we do it is that we believe iterative deployment — giving people, institutions, and all of you time to come to grips with this technology, to understand it, to find the benefits and the regulations needed to make it safe — is important. Going off to build a super-powerful system in secret would not go well. While these systems are still relatively weak, finding ways for people to gain experience with them, and figuring out what we need to do to make them safer and better, is the only path I have seen work in the history of new products. Before we put something out, it needs to meet a bar of safety. We spent well over six months after we finished training it going through all of these different things, deciding what the standards were going to be, trying to find the harms we knew about and how to address them.
One of the things gratifying to us is how critics looked at it. Senator Coons: Focus briefly on whether a constitutional model would be of benefit. Mr. Altman: Giving the model values up front is extremely important. Somehow or another — with synthetic data or another technique — saying "here are the values, here is what I want you to reflect," within the wide bounds of what society will allow, and then within those bounds giving choice to the user: we think that's very important. There are multiple technical approaches, but we need to give policymakers the tools to say "here are the values" and have them enforced. Senator Coons: Ms. Montgomery, you serve on the ethics board of a large company. I'm concerned that these technologies can undermine faith in democratic values. The Chinese are insisting that A.I. reinforce the core values of the Chinese system, and I'm concerned about how we promote A.I. that strengthens democracy. In your testimony you advocate for regulation tailored to the way the technology is used, and the E.U. is moving ahead with an A.I. Act based on levels of risk. You view elections and the shaping of election outcomes as one of the highest-risk use cases. We have attempted, so far unsuccessfully, to regulate social media after seeing its harmful impacts. What advice do you have for us on what kind of approach we should follow, and is the E.U. direction the right one to pursue? Ms. Montgomery: The conception of the E.U. A.I. Act is consistent with this idea of regulating by context. We also don't want to slow down regulation addressing real risks right now: we have existing regulatory authorities in place that have been clear that they have the ability to regulate in their respective domains — elections and the like. Senator Coons: I will assert that those regulatory bodies are under-resourced and lack many of the regulatory powers they need. We have failed to deliver data privacy legislation even though industry is asking us to regulate. What international bodies are best positioned to convene multilateral discussions to set standards?
We have talked about a model like nuclear energy — I'm concerned about proliferators and non-proliferators — and there is the U.N. body assessing what is happening in climate change: even though we may disagree about strategies, we have come to a common understanding of what is happening and what the direction of intervention should be. Give us your thoughts on what the right body is to convene this conversation, one that reflects our values. Professor Marcus: Global politics is not my specialty, but I have moved toward policy in recent months because of my great concern about all of these risks. The U.N. and UNESCO should be involved and at the table; maybe things work under them, or maybe they should simply have a strong voice. I don't think I'm qualified to say what the right model is. Senator Coons: Mr. Chairman, I yield back. Senator Blumenthal: Thank you very much. Senator Kennedy. Senator Kennedy: Thank you all for being here. Permit me to share with you three hypotheses that I would like you to assume to be true. Number one, many members of Congress do not understand artificial intelligence. Number two, that absence of understanding may not prevent Congress from plunging in with enthusiasm and trying to regulate this technology in a way that could hurt it. Number three — and I would like you to assume this as well — there is likely a berserk wing of the artificial intelligence community that intentionally or unintentionally could use artificial intelligence to kill all of us, and hurt us the entire time we are dying. Assuming all of those to be true, please tell me in plain English two or three reforms or regulations, if any, that you would implement if you were king or queen for a day. Ms. Montgomery. Ms. Montgomery: I think it comes back to transparency and explainability in A.I. We absolutely need to know — Senator Kennedy: What do you mean by transparency? Ms. Montgomery: Disclosure of the data used to train the A.I., disclosure of the model and how it performs, and making sure there is continuous governance over these models. Senator Kennedy: Governance by whom? Ms.
Montgomery: By whatever rules and clarifications are needed. Senator Kennedy: I mean, this is your chance, folks, to tell us how to get this right. Please use it. Ms. Montgomery: The rules should be focused on the use of A.I. in certain contexts. If you look at the E.U. A.I. Act, it says certain uses of A.I. are simply too dangerous. Senator Kennedy: So we ought to say you can use A.I. for these uses and not others — is that what you are saying? Ms. Montgomery: We need to define the highest-risk uses and require impact assessments and transparency — requiring companies to show their work, and protecting the data that is used to train A.I. in the first place. Senator Kennedy: Professor Marcus, this is your shot, man. Talk in plain English and tell me what rules, if any, we ought to implement — and don't just use concepts, I'm looking for specificity. Professor Marcus: Number one, a safety review, like we use with the F.D.A.: somebody has to have their eyeballs on it. Senator Kennedy: That's a good one. What else? Professor Marcus: Number two, a nimble monitoring agency to follow what is going on, with the authority to call back things that are out in the world. And number three would be funding geared toward things like an A.I. that can reason about what it is doing. I would not leave things to current technology, which I think is poor at behaving in an ethical fashion. Senator Kennedy: Mr. Altman, here's your shot. Mr. Altman: Number one, I would form a new agency. Number two, I would create a set of safety standards focused on what you described as the dangerous capabilities — one example of a test we have used is whether the model can self-replicate, and we can give your office a longer list of the specific tests a model would have to pass before it is deployed in the world. And number three, independent audits — not just from the company or the agency — of performance against those standards. Senator Kennedy: Can you send me that information? Mr. Altman: We will do that. Senator Kennedy: Would you be qualified to administer those rules? Mr. Altman: I love my current job. Senator Kennedy: Are there people who are qualified? Mr.
Altman: We would be happy to send recommendations. Senator Kennedy: You make a lot of money? You need a lawyer or an agent. Mr. Altman: I'm doing this because I love it. Senator Kennedy: Thank you, Mr. Chairman. Senator Blumenthal: Senator Hirono. Senator Hirono: I have been listening to you, and thank you all very much for being here. Clearly A.I. truly is a game-changing tool, and we need to get the regulation of this tool right. I myself, for example, asked A.I. — it might have been GPT or one of the other tools — to create a song that my favorite band, BTS, would sing, in the style of somebody else's song. Neither of the artists was involved, and yet it sounded like a genuine song. These tools can do a lot. I also asked for a speech about the Supreme Court's decision in Dobbs, and it created one — which made me think, what do I need my staff for? Don't worry, they are behind you. Senator Hirono: They are safe. One of the things you mentioned, Mr. Altman: you said GPT-4 can refuse harmful requests, so you must have put some thought into what counts as a harmful request. What do you consider a harmful request? Mr. Altman: One example is content that encourages self-harm; another is adult content — though there are things that could be associated with those categories that we can't always differentiate. Senator Hirono: Those are some of the most obvious. But in the election context, for example, I saw a picture of former President Trump being arrested by the NYPD, and I don't know — is that considered harmful? And there are all kinds of statements attributed to one of us that may not rise to your level of harmful content. Two of you said we should have a licensing scheme. I can't envision or imagine what kind of licensing scheme we would be able to create to regulate the vastness of this game-changing tool. Mr. Altman: As we head — and it may take a long time — toward artificial general intelligence, given the impact that will have and the power of that technology, I think we need to treat it as seriously as we treat other very powerful technologies, and we need such a scheme. Senator Hirono: I agree. And talking about A.G.I.
— and the major harms that could occur through the use of A.G.I. — what kind of regulatory scheme would you envision? We can't just come up with something that fails to take account of the issues that will arise in the future. What kind of scheme would you contemplate? Professor Marcus: I think you have put your finger on the central scientific issue in building artificial intelligence that can reason about harm. We don't know how to build such a system yet. Current systems gather examples and ask, "is this like the examples we have labeled before?" — but that's not broad enough. Your questioning beautifully outlined the challenge: we want A.I. that can itself understand harm, and that may require new technology. On the second part of your question: the model that I gravitate toward is the F.D.A., at least in part — you have to make a safety case showing why the benefits outweigh the harms in order to get a license. We may need multiple agencies, but I think the safety-case part of it is incredibly important, with review boards looking at this. I'll give one example: plug-ins led to Auto-GPT, which allows systems to run code and access the internet, and there are cybersecurity risks there. There should be an external agency, and we should be assured, before something like that is released, that there aren't going to be cybersecurity problems. Senator Hirono: I am running out of time. Ms. Montgomery, your model is a use-based model, but the vastness of A.I. and the complexities involved will require more than looking at the use of it. Based on what I'm hearing today, don't you think we will need to do a heck of a lot more than assess what A.I. is being used for? You can ask a tool to come up with a funny joke, and ask the same A.I. tool to generate something that amounts to election fraud. I don't know how you make a determination about where you are going to draw the line between those kinds of uses of this tool.
I think if we are going toward a licensing scheme, we need to put a lot of thought into coming up with an appropriate scheme — one that provides the kind of future-proofing we need. I thank you all for coming in and providing food for thought. Thanks, Mr. Chairman. Senator Blumenthal: Senator Padilla. Senator Padilla: I appreciate the flexibility in the back-and-forth between this committee and the Homeland Security Committee, where there is a hearing on the use of A.I. in government — A.I. is all over the Hill today. For folks watching at home: if you never thought about A.I. until the recent emergence of generative tools, the developments in this space may feel like they happened all of a sudden. But the fact of the matter is that A.I. is not new — not for government, not for business, not for the public. In fact, the public uses A.I. all the time. To offer people an example: anyone with a smartphone uses features leveraging A.I., including suggested replies for texting, autocorrect, and other text applications. So I am excited to explore how to facilitate positive A.I. innovation that benefits society while addressing the already known harms that stem from the development and use of these tools today. Language models are becoming increasingly ubiquitous, and I want to make sure they serve all communities well. Most research on evaluating and mitigating harms has been concentrated on the English language, while non-English languages have received little attention or investment. We have seen this before: social media companies have not adequately invested in content moderation tools and resources for non-English languages, and I fear that reflects a lack of concern for non-U.S.-based users, since most U.S.-based users prefer English. I'm concerned about repeating social media's failure in A.I. tools and applications. Ms. Montgomery, how are language inclusivity and bias an area of focus in IBM's development of A.I.? Ms. Montgomery: Bias is a focus of ours.
Diversity matters in the development of the tools and in deployment — having diverse people training the models and considering the downstream effects as well. We are also very aware that we can't just articulate and call for these things without having the tools and technology to apply governance across the life cycle. So we were among the first companies to put out toolkits and contribute them — things that help address the technical aspects of issues like that. Senator Padilla: Can you speak to language inclusivity? Ms. Montgomery: We don't have a consumer language platform, but the large language models we use to help our clients are focused on being available in many languages. Mr. Altman: We think this is really important. We worked with the government of Iceland — a language that is less well represented on the internet — to make sure their language was included in our model, and we have had many similar conversations. I look forward to similar partnerships with lower-resourced languages to get them into our models. GPT-4 is pretty good at a large number of languages — you can go quite far down the list ranked by number of speakers — and for small languages, we are excited about partnerships to include them. And on the part of the question you asked about values and cultures: we are excited to work with people around the world toward a collective set of values. I appreciate what you said about the benefits of these systems and making sure we get them to as wide a group as possible; in particular, for groups underrepresented in technology, this technology seems like it can be a big lift up. Senator Padilla: I know my question was specific to languages, but let me point to a broader commitment to diversity and give a couple more reasons why it is so critical. The largest actors can afford the maximum amount of data and have the resources to develop complex A.I. systems.
But in this space we haven't seen, from a workforce standpoint, gender equity in the United States of America, and that risks contributing to tools and approaches that only exacerbate the biases and inequities that exist in our society. I want to ask one more question. This committee and the public are right to look at generative A.I. tools as having a different risk profile than other A.I. tools — their risks are tangible because of the user interface and the outputs they produce — but we shouldn't lose sight of the broader ecosystem and A.I.'s impact on society, as well as the design of appropriate safeguards. Ms. Montgomery, you highlighted some of the different A.I. applications; what should policymakers keep in mind as we consider possible regulations? Ms. Montgomery: Generative A.I. systems are creating new issues that need to be studied — around the potential to generate content that could be misleading, deceptive, and the like. But we should remember that A.I. has capabilities beyond just generative capabilities, and I think going back to the approach of regulating where A.I. touches people and society is the way to address it. Senator Ossoff: This is a very big deal. I have a meeting at noon and am grateful to you, Senator Booker, for yielding your time — you are always brilliant and handsome. Thank you to the panelists for joining us, and to the subcommittee leadership for opening this up to all subcommittee members. If we are going to contemplate a regulatory framework, we have to define what we are regulating; any such bill will include a section that defines the covered technology, tools, and products. Just take a stab at it. Mr. Altman: I think there are very different levels here, and it is important that any new approach, any new law, does not stop innovation by researchers working at a smaller scale — that's a wonderful part of the ecosystem of America. There may need to be some rules there, but I think we could draw a line at systems that need to be licensed in an intense way.
The easiest way would be to talk about the amount of compute that goes into training the model, so we can define which models are covered; the threshold could go up or down over time as we learn more. That approach says: above this amount of compute, you are in this regime. What I would prefer is to define capability thresholds — a model that can do x, y, and z is in the licensing regime, while for less capable models, we don't want to stop individual researchers, and startups can proceed under a different framework. Senator Ossoff: Which capabilities would you propose we consider for the purposes of this definition? Mr. Altman: Rather than answer off the cuff, I would like to follow up with your office. Senator Ossoff: With the understanding that you are just opining. Mr. Altman: In the spirit of just opining: a model that can persuade, manipulate, or influence a person's beliefs would be one good threshold; I think a model that could help create novel biological agents would be another great threshold. Senator Ossoff: I want to talk about predictive technology and the constitutional questions that arise from it — models trained on massive data sets, and the integrity and accuracy with which such technology can predict future behavior. Is that correct? Mr. Altman: We don't know the answer, but it could have an impact. Senator Ossoff: We may be confronted with a law enforcement agency that deploys such technology and seeks judicial consent to execute a search or other action based on a model's prediction of an individual's behavior. But that's very different from the kind of evidentiary predicate that police would take to a judge to get a warrant. Talk me through how you think about that issue. Mr. Altman: It is very important to understand that these are tools that humans use, and we should not take away human judgment; I don't think decisions about people should be made solely on the basis of an A.I. system. Senator Ossoff: We have no national privacy law. Europe has rolled one out, to mixed reviews. Do you think we need one? Mr. Altman: I think it would be good. Senator Ossoff: What would be the qualities of such a law that would be good? Mr. Altman: This is past my expertise. Senator Ossoff: I still would like you to weigh in. Mr.
Altman: I think, at a minimum, users should be able to opt out of having their data used, and it should be easy to delete your data. If you don't want your data used for training these systems, you should have the right to do that. Senator Ossoff: As I understand it, with your tool — and certainly similar tools — one of the inputs is scraping, for lack of a better word, data off the open web: a low-cost way of gathering information, and there is a vast amount of information out there about all of us. How would such a restriction on the access to or use of that data be practically implemented? Mr. Altman: I was thinking about the information someone provides to the service, though the models can link out to the web. Senator Ossoff: That is not what I was referring to. Mr. Altman: I think there are ways to have your data taken down from the public web, but models can still search the web. Senator Ossoff: When you think about implementing a safety or regulatory regime to constrain such software and mitigate some risk, is your view that the federal government should make laws such that certain capabilities or functionalities themselves are forbidden — in other words, that one cannot deploy or execute code that could do X — or that the act is addressed only when actually executed? Mr. Altman: I think both: there should be limits on what a deployed model is capable of, and also on what it actually does. Senator Ossoff: How are you thinking about kids using your product? Mr. Altman: You have to be 18 or up, or have parental permission at 13 and up — but we know people get around that. So we try to design a safe product, and there are decisions we would make differently if we knew only adults were using it. Because we know children will use it — and given how much these systems are being used in education — we want to be aware that that is happening.
And Senator Blumenthal has done extensive work here. What we have seen is that companies whose revenues depend on intensity of use and screen time design these systems to maximize engagement, with perverse results in many cases. My advice is that you get way ahead of this issue — the safety of your product for children — or I think you are going to find that Senator Blumenthal, others on the subcommittee, and I will look harshly on the deployment of technology that harms children. Mr. Altman: We try to design systems that do not maximize for engagement. [indiscernible] We are not trying to get people to use it more and more, and I think that's a different shape than ad-supported media. These systems do have the ability to influence in very obvious ways, and I think that matters for the safety of children but will impact all of us. One of the things we will need, regulation or not, is requirements about how the values of these systems are set and how the systems respond to questions of influence. We would love to partner with you; I couldn't agree with you more. The senator from Georgia is also very handsome and brilliant, too. Senator Blumenthal: I will allow that comment to stand, without objection. Senator Booker: Nice that we finally got down here to the bald guys. This is one of the best hearings I have had in this Congress, a testament to the challenges and opportunities that A.I. presents. Thinking very broadly — and I will get more narrow — technology has been moving fast, and a lot of people have been talking about regulation. I use the example of the automobile: what an extraordinary piece of technology. New York City didn't know what to do with horse manure, and the automobile ended that problem — but then we had tens of thousands of people dying on the highways, and multiple federal agencies were created or specifically focused on regulating cars. So the idea that this equally transformative technology is coming, and that Congress would do nothing — which is not what anyone here is calling for — is unacceptable.
I would appreciate it — Senator Welch and I have been talking about trying to regulate in this space. Not doing so for social media has been, I think, very destructive, and has allowed a lot of things to go on that are causing a lot of harm. The question is what kind of regulation. You have spoken to my colleagues, Ms. Montgomery — and full disclosure, I'm the child of two IBM parents — and you talked about defining the highest-risk uses. But we don't know all of them, and we can't see where this is going; that's the challenge of regulating only at the point of risk. You sort of called not for an agency — when someone asked you to specify, you said you don't want to slow things down — but you can envision that we could work in two different ways. Ultimately, like we have for cars — the E.P.A., NHTSA, the Federal Motor Carrier Safety Administration, all of these things — you can imagine, as Mr. Marcus points out, a nimble agency. You could imagine the need for something like that, correct? Ms. Montgomery: Absolutely. Senator Booker: So in trying to regulate what we have now, you would encourage Congress — and Senator Welch — in trying to figure out the right agency to deal with what we know, and with things that might come up in the future? Ms. Montgomery: I would encourage making sure we have the skills and resources in place to impose regulatory requirements on the uses of the technology, and to understand emerging risks as well. Senator Booker: Globally, this is exploding. I appreciate your thoughts, and I have shared my ideas on what the international context is, but there is no way to stop this from moving forward. With that understanding, and building on what Ms. Montgomery said: what would it take to form an agency, and can you add some clarity, Professor Marcus? Professor Marcus: Some genies are already out of the bottle. We don't yet have machines with self-awareness, and there are other genies to be concerned about. On to the main part of your question: I think we need to have international meetings very quickly with people who have expertise in how you grow agencies.
We need to do that at the federal level and at the international level. I haven't thought about it as much as I would like to, but science has to be an important part of it. We don't have the tools right now to detect and label misinformation — with nutrition labels, for example — and we probably don't have the tools to detect wide upticks in cybercrime. We need new tools there; we need science to help us figure out what we need to build, and also what we need to have transparency around. Senator Booker: Understood, understood. You're a bit of a unicorn — could you explain why OpenAI started as a nonprofit? I want folks to understand that. Mr. Altman: We started as a nonprofit focused on how this technology would be built; at the time, the idea that something like A.G.I. was possible seemed far outside the mainstream. We didn't know then how important scale was going to be, but we did know we wanted to build this with humanity's best interest at heart, and with a belief that this technology — if it goes the way we want, if we can do some of those things Professor Marcus mentioned — could really, deeply change the world. Senator Booker: I'm going to interrupt you — that's all good, but get to the second part of my question as well, which I found fascinating: for a revenue model, for return on investment, will you ever do ads or something like that? Mr. Altman: I wouldn't say never, but I really like our subscription-based model. Senator Booker: One of my biggest concerns about this space — and I've already seen it in web2 and web3 — is massive corporate concentration. It is terrifying to see how few companies control and affect the lives of so many of us, and they're getting bigger and more powerful. I see OpenAI backed by Microsoft; Google has its own in-house efforts. I'm worried about that. I'm wondering if you can give me a quick acknowledgment: are you worried about the corporate concentration in this space, and what effect it might have?
And the associated risks, perhaps, of more concentration in A.I.? And Mr. Marcus, could you answer that as well? Mr. Altman: I think there will be many people who develop models — what's happened in the open-source community is amazing — but there will be a relatively small number of providers who can make models at the true leading edge. There are benefits and dangers to that: if we're talking about dangerous A.I., the fewer of us you have to keep a careful eye on at the absolute leading edge of capability, there are benefits there. But I think there needs to be enough — and there will be, because there's so much value — that consumers have choice and we have different ideas. Senator Booker: Mr. Marcus? Professor Marcus: There's a risk of an oligarchy, where a small number of companies influence people's beliefs. I put something in The Wall Street Journal about how these systems can shape our beliefs and have enormous influence on how we live our lives. Having a small number of players do that worries me. One thing I think is very important is who these systems get aligned to — whose values — and that that is set by society as a whole, by governments as a whole. So creating the alignment data set — it could be an A.I. constitution, whatever it is — has to come from society broadly. Senator Booker: Thank you, Mr. Chairman. My time has expired. Senator Blumenthal: And I guess the best for last: Senator Welch. Senator Welch: Senators are noted for short attention spans, but I have sat through this entire hearing and enjoyed it. Senator Blumenthal: You do have one of the longer attention spans in the country, to your credit. Senator Welch: All the questions I have have been asked, but here's my takeaway — the major questions I think we're going to have to answer as a Congress. Number one, you're here because A.I. is a new technology that everybody says can be as transformative as the printing press. Number two, it's unknown what's going to happen, but there's a big fear you've all expressed about what bad actors can do, and will do, if there are no rules of the road.
Number three, as a member who served in the House and now in the Senate, I've come to the conclusion that it's impossible for Congress to keep up with the speed of technology. And there have been concerns expressed about social media, and now about A.I., that relate to fundamental privacy rights, bias, intellectual property, and the spread of disinformation — which in many ways, for me, is the biggest threat, because it goes to the core of our capacity for self-governing. There's the economic transformation, which can be profound. There are safety concerns. And I've come to the conclusion that we absolutely have to have an agency. What its scope of engagement is has to be defined by us. But I believe that unless we have an agency that is going to address these questions, from social media and A.I., we really don't have much of a defense against the bad stuff. And the bad stuff will come. So last year I introduced on the House side, and Senator Bennet did on the Senate side — it was the end of the year — the Digital Commission Act. We'll be reintroducing that this year. Two things I want to ask. One you've somewhat answered: two of the three of you said you think we do need an independent commission. Congress established an independent commission when railroads were running roughshod over farmers, and the F.C.C. set rules of the road for communications. Its scope would have to be described. But there are also questions about the use of regulatory authority. It can be used for good — J.D. Vance mentioned that when we were considering rail regulations after the train crash in East Palestine. Regulations can be good, but regulations can also get too cumbersome. So: two of the three of you think we do need an agency. What are some of the perils of an agency we'd have to be mindful of, to ensure its goals — privacy, bias, disinformation — would be the winners and not the losers? I'll start with you, Mr. Altman. Mr. Altman Thank you, Senator. One, I think America has got to continue to lead.
This happened in America. I'm very proud it happened in America. Sen. Welch I think that's right. That's why I'd be much more confident if we had our own agency, as opposed to getting involved in international discussions. Ultimately you want the rules of the road, but I think if we lead, and get rules of the road that work for us, that's probably a more effective way to proceed. Mr. Altman I personally believe there's a way to do both, and I think it is important to have the global view on this, because this technology will impact Americans and all of us wherever it's developed. But I think we want America to lead. We want — Sen. Welch Get to the perils issue. Mr. Altman One is the way China or someone else could outpace it. But also, the regulatory pressure should be on us, on Google; we don't want to slow down smaller startups or open-source companies. They can comply with things — you can still cause great harm with a smaller model — but leaving room and space for new ideas, new companies, and independent researchers to do their work, and not putting a regulatory burden on them that, say, a company like us could handle but a smaller one couldn't — I think that's another peril, and clearly a way that regulation has gone before. Sen. Welch Professor Marcus? Professor Marcus The other obvious peril is regulatory capture — if we make it appear we're doing something, but it's like greenwashing, and we just keep out the little players because so much is required that they can't do it. I agree with what Mr. Altman said and would add that. Sen. Welch Ms. Montgomery? Ms. Montgomery I would add the risk of not holding companies accountable for the harms they're causing. Like misinformation in voting systems — we need to hold companies responsible and accountable for the A.I. they're deploying that disseminates misinformation on things like elections. Sen. Welch A regulatory agency would do a lot of what Senator Graham was talking about. You don't build a nuclear plant without a license.
We need both pre-deployment and post-deployment review. Sen. Blumenthal Thank you, Senator Welch. You have all been very, very patient. The turnout today, which goes beyond our subcommittee, I think reflects both the value of what you're contributing and the interest in this topic. There are a number of subjects that we haven't covered at all. One was just alluded to by Professor Marcus: the monopolization danger — the dominance of markets that excludes new competition and thereby inhibits or prevents innovation and invention, which we have seen in social media as well as in some of the old industries — airlines, automobiles, and others — where consolidation has narrowed competition. And so I think we need to focus on kind of an old area of antitrust, which dates back more than a century. It's still inadequate to deal with the challenges we have right now in our economy, and certainly we need to be mindful of the way that rules can enable the big guys to get bigger and exclude innovation and competition by responsible good guys, such as are represented in this industry right now. We haven't dealt with national security. There are huge implications for national security. I can tell you, as a member of the Armed Services Committee, classified briefings on this issue have abounded, and the threats posed by some of our adversaries — China has been mentioned here — but the sources of threats to this nation in this space are very real and urgent. We're not going to deal with them today, but we do need to deal with them, and we will, hopefully, in this committee. And then, on the issue of a new agency: I've been doing this stuff for a while. I was attorney general of Connecticut for 20 years. I was a federal prosecutor, a U.S. attorney. Most of my career has been in enforcement. I will tell you something: you can create 10 new agencies, but if you don't give them the resources — and I'm talking not just dollars, but scientific expertise — you guys will run circles around them.
And it isn't just the models or the generative A.I. that will run circles around them — it is the scientists in your companies. For every success story in government regulation, you can pick five failures. That's true of the F.D.A., it's true of the I.A.E.A., it's true of the S.E.C., it's true of the whole alphabet list of government agencies. I hope our experience here will be different. But the Pandora's box requires more than just the words or the concepts — licensing, a new agency. There's some real hard decision-making, as Ms. Montgomery has alluded to, about how to frame the rules to fit the risks. First, do no harm. Make it effective. Make it enforceable. Make it real. I think we need to grapple with the hard questions here that, frankly, this initial hearing has raised very successfully, but not answered. I thank our colleagues who have participated and made very creative suggestions. I'm very interested in enforcement. Literally 15 years ago, I think, I advocated abolishing Section 230. What's old is new again: now people are talking about abolishing Section 230, but back then it was considered completely unrealistic. Enforcement really does matter. I want to ask Mr. Altman, because of the privacy issue — and you've suggested that you have an interest in protecting the privacy of the data that may come to you or be available — what specific steps do you take to protect privacy? Mr. Altman One is that we don't train on data submitted by business customers. If you're a business customer of ours and submit data, we don't train on it. If you use ChatGPT, you can opt out of us training on your data. You can also delete your history and your whole account. Sen. Blumenthal Ms. Montgomery, I know you don't deal directly with consumers, but do you take steps to protect privacy? Ms. Montgomery Absolutely. We apply additional levels of filtering. Sen. Blumenthal Professor Marcus, you made reference to self-awareness, self-learning.
We're talking about the potential for jailbreaks. How soon do you think that new kind of generative A.I. will be usable, will be practical? Professor Marcus? Professor Marcus New A.I. that's self-aware? I don't know. It could happen in two years, or 20 years. It requires basic paradigms that haven't been invented yet. Some of them we may want to discourage. It's hard to put timelines on them. Going back to enforcement for a second: one thing that's paramount is greater transparency about what the models are. That doesn't mean everyone has to know exactly what's in one of these systems, but there needs to be some enforcement arm that can look at the systems, look at the data, and perform tests and so forth. Sen. Blumenthal Let me ask you, all of you: I think there's been reference to elections and banning outputs involving elections. Are there other areas — what are the other high-risk or highest-risk areas — where you would either ban or establish especially strict rules? Ms. Montgomery? Ms. Montgomery The space around misinformation is a hugely important one. It comes back to the point on transparency: knowing what content was generated by A.I. is going to be a really critical area we need to address. Sen. Blumenthal Any others? Professor Marcus I think medical information is an important one. We have systems that hallucinate things, and we need to know what they've hallucinated and what's accurate. I think we need to be concerned about that. I also think we need to be concerned about internet access for these systems, when they can start making requests, both of people and of internet services. It's probably OK if they just do search, but as they do more intrusive things on the internet — do we want them to be able to order equipment, or order things, and so forth? — we should be careful as we empower these systems by giving them internet access. And we haven't talked about long-term risk. Sam alluded to it briefly.
But as we move to machines that have a larger footprint on the world, we need to think about how we're going to regulate that, and monitor it, and so forth. Sen. Blumenthal In a sense, we have been talking about bad guys — certain bad actors — manipulating A.I. to do harm. Professor Marcus And manipulating people. Sen. Blumenthal But generative A.I. can manipulate the manipulators. Professor Marcus It can. Dan Dennett sent me a manuscript last night on what he calls counterfeit people. It's a wonderful metaphor: these systems are like counterfeit people, and we don't really understand what the consequences of that are. They're not perfectly humanlike yet, but they're good enough to fool a lot of the people a lot of the time, and that introduces a lot of problems — for cybercrime, and for how people may try to manipulate marks, and so forth. It's a serious concern. Sen. Blumenthal In my opening, I suggested three principles: transparency, accountability, and limits on use. Would you agree that those are a good starting point? Ms. Montgomery? Ms. Montgomery 100 percent. And as you mentioned, industry shouldn't wait for Congress. That's what we're doing at I.B.M. Sen. Blumenthal There's no reason to wait for Congress. Professor Marcus I think those three are a great start. There are things like the White House A.I. Bill of Rights and the UNESCO guidelines that show what we need. The question is how we try to make these things enforceable. We don't have transparency yet; we all know we want it, but we're not doing enough to enforce it. Sen. Blumenthal Mr. Altman? Mr. Altman I agree that those are important points. I would add — and Professor Marcus touched on this — that we've spent most of today on current risks, and I think that's appropriate; I'm glad we have done it. As these systems do become more capable — and I'm not sure how far away that is, but maybe not super far — I think it's important that we also spend time talking about how we're going to confront those challenges. Sen. Blumenthal Having talked to you privately, Mr.
Altman, you know how much I care. I see that you care — but also that the increased danger or risk resulting from even more complex and capable A.I. mechanisms certainly may be closer than a lot of people appreciate. Professor Marcus Let me just add: I'm sitting next to Sam, closer than I've ever sat to him in my life, and his sincerity is evident in a way that doesn't show on the television screen. Sen. Blumenthal Senator Hawley? Sen. Hawley I've been keeping a list of the harms and risks of generative A.I. Loss of jobs — this isn't speculative; your company, Ms. Montgomery, announced it's laying off 7,800 people, a third of your non-consumer-facing workforce, because of A.I. Loss of jobs. Invasion of privacy — personal privacy on a scale we've never before seen. Manipulation of personal behavior. Manipulation of personal opinions. Potentially the degradation of free elections in America. Did I miss anything? This is quite a list. I notice that an eclectic group of about 1,000 technology and A.I. leaders, everybody from Andrew Yang to Elon Musk, called for a six-month moratorium on any further A.I. development. Are they right? Do you join those calls? Should we pause for six months? Professor Marcus Your characterization is not correct. I signed that letter. It didn't call for a ban on all A.I. research, or all A.I., but only on a very specific thing: systems like GPT-5. It was supportive or neutral about every other piece of research that has been done. It called for more A.I. research — it specifically called for more research on trustworthy and safe A.I. Sen. Hawley You think we should take a six-month moratorium on anything beyond GPT-4? Professor Marcus I took the phrase spiritually, not literally. My opinion is that the moratorium we should focus on is deployment, until we have good safety cases. I don't know that we need to pause that particular project, but I do think the letter's emphasis on focusing more on A.I. safety — on trustworthy, reliable A.I. — is exactly right. Sen.
Hawley So deployment means making it available to the public? Professor Marcus Yes — making it available to the public without external review. Mr. Altman After we finished training GPT-4, we waited months to deploy it. We're not currently training GPT-5, and we have no plans to do that soon. But I think the frame of the letter is wrong. What matters is audits, red-teaming. If we pause for six months, I'm not sure what we do then — do we pause for another six? Do we come up with rules then? The standards that we have developed, and that we used for GPT-4's deployment — we want to build on those. We think that's the right direction, not a calendar-clock pause. There may be times — I expect there will be times — when we find something we don't understand and we do need to take a pause, but we don't see that yet. Never mind all the benefits. Sen. Hawley You're comfortable with the potential ramifications? Mr. Altman I don't think that's a reason not to train the next one. There are always risks we don't see. What would stop us training the next model is if we were so worried we'd create something dangerous in that process, let alone the deployment. Sen. Hawley What about you, Ms. Montgomery? Ms. Montgomery I think we need to use the time to develop the protocols, rather than pausing development. Sen. Hawley Wouldn't pausing development allow more time for that? Ms. Montgomery I'm not sure how practical it is to pause. Sen. Hawley I hear talk about the agencies, and that could work, but having seen agencies, they usually end up controlled by the people they're supposed to be watching. That's been our history for 100 years. Why don't we just let people sue you? Why don't we just make you liable in court? We can do that. We know how to do that. We can pass a statute, create a federal right of action, that will allow people harmed by this technology to get into court, and bring evidence into court, and it can be anybody.
I mean, you want to talk about crowdsourcing? We'll just open the courthouse doors. We'll define a broad private right of action. Class actions. We'll just open it up. Allow people to go to court, present evidence. They'll say they were harmed — they were given medical misinformation, they were given election misinformation, whatever. Why not do that? Mr. Altman? Mr. Altman I mean, please forgive my ignorance: can't people sue us? Sen. Hawley You're not protected by Section 230, but there's not currently a federal right of action — a private right of action — that says if you're harmed by generative A.I. technology, we guarantee you the ability to get into court. Mr. Altman I think there are a lot of other laws under which, if technology harms you, there are standards we could be sued under — unless I'm really misunderstanding how things work. But if the question is whether clearer laws about the specifics of this technology, and consumer protections, are a good thing: definitely yes. Professor Marcus The laws we have today were designed long before we had artificial intelligence. I think the plan you propose would make a lot of lawyers wealthy, but it would be too slow to protect the things we care about. Sen. Hawley You think it would be slower than Congress? Professor Marcus Litigation can take a decade or more. Sen. Hawley Litigation can be a powerful tool. Professor Marcus I'm not asking to take litigation off the table among the tools, but if I can continue: there are areas like copyright where we don't really have law. We don't have a way of thinking about wholesale misinformation, as opposed to individual pieces of it, where a foreign actor may make millions of pieces of misinformation. We have some laws around market manipulation. We get into a lot of situations where we don't really know which laws apply. There would be loopholes. The system has not been thought through. In fact, we don't even know whether Section 230 does or does not apply here, as far as I know.
I think that's something a lot of people have speculated about this afternoon. But it's not — Sen. Hawley We can fix that. Professor Marcus The question is, how? Sen. Hawley It would be easy to say Section 230 doesn't apply to generative A.I. You talk about a right of action. Ms. Montgomery If a company discriminates in granting credit, or in the hiring process, by virtue of the fact that it relied too significantly on an A.I. tool, it's responsible for that, regardless of whether it used a tool or a human to make that decision. Sen. Blumenthal I'm going to turn to Senator Booker for more questions, but on the issue of the moratorium: we need to be careful. The world won't wait. The rest of the global scientific community isn't going to pause. We have adversaries that are moving ahead, and sticking our head in the sand is not the answer. Safeguards and protections, yes — but a flat stop sign, sticking our head in the sand, I would be very, very worried about. Professor Marcus Without militating for any sort of pause, I would emphasize that there's a difference between research — which surely we need to do to keep pace with our foreign rivals — and deployment at massive scale. You could deploy things at a scale of a million people, or 10 million people, but not 100 million or a billion, and if there are risks, you might find them out sooner and be able to close the barn doors before the horses leave, rather than after. Sen. Booker There'll be no pause. It's nice to call for it, but nobody is pausing. Professor Marcus I agree — I don't think it's a realistic thing. The reason I personally signed the letter was to call attention to how serious the problems are, and to emphasize spending more of our efforts on trustworthy, safe A.I., rather than making a bigger version of something we already know to be unreliable. Sen. Booker I'm a futurist. I love the excitement of the future.
I guess there's a famous question: if you couldn't control for your race or your gender, where on planet Earth — at what time and place — would you want to be born? Everyone would say right now; it's still the best time to be here, because of technology and everything else. I'm excited about what the future holds. But the destructiveness I have seen, as a person who has watched a lot of the technologies of the last 25 years, is what concerns me — especially with companies that are designed to want to keep my attention on screens, and I'm not just talking about new media; 24-hour cable news is an example of people who want to keep your eyes on screens. I have a lot of concerns about corporate intention. And Sam, this is again why I find your story so fascinating, and your values — which I believe in from our conversations — so compelling. But absent that, I really want to explore what happens when these companies that already control so much of our lives — a lot has been written about the FANG companies — what happens when they are the ones dominating this technology, as they did before? So, Professor Marcus, do you have any concern about the role that corporate power and corporate concentration have in this realm — that a few companies might control this whole area? Professor Marcus I radically changed the shape of my own life in the last few months because of what happened with Microsoft releasing Sydney. It didn't go the way I thought it would. I anticipated the hallucinations; I wrote an article about that. I said it would have trouble with physical reasoning and psychological reasoning — and along came Sydney, and the initial press reports were favorable. Then there was the famous article by Kevin Roose, in which it recommended he get a divorce. I had seen Tay and others, and those had been pulled after they had problems. Sydney clearly had problems.
What I would have done, had I run Microsoft — which I do not — was temporarily withdraw it from the market. And they didn't. That was a wake-up call to me, and a reminder that even when you have a company like OpenAI that's a nonprofit — and Sam's values have become clear today — other people can buy those companies and do what they like with them. Maybe we have a stable set of actors now, but the amount of power these systems have to shape our views and our lives is significant. And that doesn't even get into the risks that someone may repurpose them deliberately for all kinds of bad purposes. In the middle of February, I stopped writing much about technical issues in A.I. — which is most of what I'd written about for the last decade — and said, I need to work on policy. This is frightening. Sen. Booker I want to give you an opportunity, as my sort of last question or so. Don't you have concerns about — I graduated from Stanford; I know so many players in the Valley, from angel investors to a lot of founders of companies we all know. Do you have some concern about a few players with extraordinary resources and power — power to influence Washington? I'm a big believer in the free market, but the reason I walk into a bodega and a Twinkie is cheaper than an apple, or a Happy Meal costs less than a bucket of salad, is because of the way the government tips the scales. So the free market is not what it should be when you have large corporate powers that can influence the game here. Do you have concerns about that in the next era of technological innovation? Mr. Altman Yeah — and again, that's so much of why we started OpenAI. I think it's important to democratize the inputs to these systems — the values we're going to align them to. I think it's also important to give people wide use of these tools. When we started the A.P.I. strategy — a big part of how we make our systems available for anyone to use — there was a huge amount of skepticism.
That does come with challenges, that's for sure, but we think putting this in the hands of a lot of people, not in the hands of a few companies, is quite important, and we are seeing the innovation boom from that. But it is absolutely true that the number of companies that can train the true frontier models is going to be small, just because of the resources required. I think there needs to be incredible scrutiny on us and our competitors. I also think there is a rich and exciting industry happening, with incredibly good research and new startups — not only using our models, but creating their own. Whatever regulatory things happen, I think it's critical that we preserve that fire. Sen. Booker I'm a big believer in the democratizing potential of technology, but I've seen the failure time and time again. My team works on a lot of issues about the reinforcing of bias through algorithms, the failure to advertise certain opportunities in certain ZIP codes. You seem to be saying — and I've heard this with the web — this will be decentralized, all these things are going to happen. But this seems to me not to offer that promise, because the people who are designing these systems take so much power, energy, and resources. Are you saying that my dreams of technology further democratizing opportunity and more are possible within a technology that ultimately, I think, could be very centralized to a few players who already control so much? Mr. Altman So, this is the point I made about use of the model and building on top of it. This is a new platform, right? It is definitely important to talk about who will create the models — I want to do that — and I think it's also important to decide whose values we're going to align these models to. But in terms of using the model, the people who build on top of the OpenAI A.P.I. do great things. People comment, "I can't believe you get this much technology for this little money." People are building, putting A.I. everywhere, using our A.P.I., which lets us put safeguards in place.
I think that's exciting. That's how it is being democratized — not how it's going to be, but how it is being democratized right now: a whole new explosion of new businesses, products, and services, by lots of different companies. Sen. Booker So, as I close: most industries resist even reasonable regulation, from seat-belt laws to — we've been talking a lot recently about rail safety. The only way we're going to see the democratization of values, I think — and while there are noble companies out there — is if we create rules of the road and enforce certain safety measures, as we've seen with other technologies. Sen. Blumenthal Senator Booker, I couldn't agree more, in terms of consumer protection, which I've been doing for a while. Participation by the industry is tremendously important — and not just rhetorically, but in real terms. We have a lot of industries that come before us and say, oh, we're all in favor of rules, but not those rules; those rules we don't like. And it's every rule, in fact, that they don't like. I sense that there's a willingness to participate here that is genuine and authentic. I thought about asking ChatGPT to do a new version of "Don't Stop Thinking About Tomorrow," because that's what we need to be doing here. Senator Hawley has pointed out that Congress doesn't always move at the pace of technology, and that may be a reason why we need a new agency. But we also need to recognize that the rest of the world will be moving as well. You've been enormously helpful in focusing us and illuminating some of these questions, and you've performed a great service by being here today. So thank you to every one of our witnesses. I'm going to close the hearing and leave the record open for one week, in case anyone wants to submit anything, and I encourage any of you who have manuscripts that are going to be published, or observations from your companies, to submit them to us. We look forward to our next hearing. This one is closed. [captions Copyright National Cable Satellite Corp. 2023] Thank you very much. Thank you for including me in this. I'm here all week if you want to talk one-on-one. Thank you. Would you make any comparisons between Sam Altman's testimony here and other testimony? Well, um, you know, just looking at the record, Sam Altman is night and day compared to most C.E.O.s — not just in the words and rhetoric, but in actions, and in his willingness to participate and commit to specific actions. So, you know, some of the big tech companies are violating regulations; that's very different from what Sam Altman promises, and given his track record, he seems to be pretty sincere. The hearing reflected the range of concerns here — elections, national security, medical. What kind of challenge does that pose in trying to craft a response? It means we have to construct a system that is broad, and that doesn't make Congress the gatekeeper every time there's some new technological advance. So probably creating an agency, or delegating a high degree of responsibility for the rulemaking, makes a lot of sense in this area. It's difficult to come up with consensus on a bill when you have so many different constituencies or concerns. There's no question it's complex, with a lot of constituencies. But there's a recognition that we're not going to solve the problem with an excruciatingly detailed, prescriptive formula answering every one of these questions — in other words, that we're going to have to say to an agency: look, do the rulemaking here. Make the rules, take the risks. You're going to have to develop the expertise; Congress is not going to have it, and Congress can't act quickly enough. That degree of humility is required here. Are you thinking about a regulatory agency for all of technology, or A.I. particularly?
You know, the question of a new regulatory agency, I think, is still to be answered — whether it's a new or an existing agency. It should cover more than just A.I. — probably technology, privacy, you know. But clearly the F.T.C. doesn't have the capability right now. So if you're going to rely on the F.T.C., you've got to, in effect, clone it within itself, so to speak. I think there's a powerful argument for an entirely new agency that, given the resources, could really do the job. Because, as I said here, you can create an agency just by signing bills, but an agency alone is not the solution. It's resources, and expertise, and the commitment to make it work. What are the ways you think action could take place? Sen. Blumenthal We're working on a framework. The president of the United States has spoken on this, the European Parliament has an A.I. Act, and all kinds of private groups are issuing ideas for legislation. There is certainly a lot of support for a bill and a lot of momentum. People are excited, but also anxious. Are there details on Leader Schumer's bill you can share? Sen. Blumenthal Not beyond what he's said. Certainly it is very, very important. I have to run. Why should consumers think A.I. would get regulation faster than privacy? Sen. Blumenthal They should demand that protection on privacy as well as A.I. There's a need for privacy legislation. There's a need for A.I. protection. And there's a need for the Kids Online Safety Act. So we all understand: we've got a bill now that will protect kids. That's a form of A.I. that's out there right now — algorithms deploying bullying, eating disorders, suicidal thoughts, drug abuse, in effect ricocheting that toxic content back and forth from kids to social media. And there is certainly a sense of urgency around that issue. Thanks. Thank you.