We take you live to Capitol Hill, where Microsoft's president is expected to be joined by a law professor and a scientist to testify about proposed legislation that regulates artificial intelligence. Live coverage on C-SPAN 3. And also to Chairman Durbin, whose support has been invaluable in encouraging us to go forward here. I am grateful especially to my partner in this effort, Senator Hawley, the ranking member. He and I, as you know, have produced a framework, basically a blueprint for a path forward to achieve legislation. Our interest is in legislation, and this hearing, along with the two previous ones, has to be seen as a means to that end. We are very results-oriented. I know you are, from your testimony. And I have been enormously encouraged and emboldened by the response so far, just in the past few days, and by my conversations with leaders in the industry like Mr. Smith. There is a deep appetite, indeed a hunger, for rules and guardrails, basic safeguards for businesses and consumers, for people in general, from potential perils. There is also a desire to make use of the tremendous potential benefits. Our effort is to provide for regulation in the best sense of the word: regulation that permits and encourages innovation, new businesses and technology and entrepreneurship, while at the same time providing those guardrails, enforceable safeguards that can encourage trust and confidence in this growing technology. It is not an entirely new technology; it has been around for decades. But artificial intelligence is regarded as entering a new era. And make no mistake: there will be regulation. The only question is how soon and what kind.
And it should be regulation that encourages the best in American free enterprise, but at the same time provides the kind of protections that we do in other areas of our economic activity. To my colleagues who say there is no need for new rules, that we have enough laws protecting the public: yes, we have laws that prohibit unfair and deceptive competition. We have laws that regulate airline safety and drug safety. But nobody would argue that simply because we have those rules, we don't need specific protections for medical device safety or car safety. Just because we have rules that prohibit discrimination in the workplace does not mean we don't need rules that prohibit discrimination in voting. And we need to make sure that these protections are framed and targeted in a way that applies to the real risks involved. Risk-based rules: managing the risks is what we need to do here. So, our principles are pretty straightforward, I think. We have no pride of authorship. We have circulated this framework to encourage comment. We won't be offended by criticism from any quarter. That is the way we can make this framework better and eventually achieve legislation, we hope, I hope, at least by the end of this year. And the framework is basically: establishing a licensing regime for companies that are engaged in high-risk AI development; creating an independent oversight body that has expertise with AI and works with other agencies to administer and enforce the law; protecting our national and economic security, to make sure we aren't enabling China or Russia and other adversaries to interfere in our democracy or violate human rights; requiring transparency about the limits and use of AI models, which at this point includes rules like watermarking, digital disclosure when AI is being used, and data access for researchers; and ensuring AI companies can be held liable when their products breach privacy, violate civil rights, and endanger the public. Deepfakes, impersonation: we've all heard those terms.
We need to prevent those harms. Senator Hawley and I, former attorneys general of our states, have a deep and abiding affection for the potential enforcement powers of those officials, state officials. But the point is, there must be effective enforcement. Private rights of action, as well as federal enforcement, are very, very important. So let me just close by saying, before I turn it over to my colleague: we are going to have more hearings. The way to build a coalition in support of these measures is to disseminate as widely as possible the information that is needed for our colleagues to understand what is at stake here. We need to listen to the kind of industry leaders and experts that we have before us today. And we need to act with dispatch, more than just deliberate speed. We need to learn from our experience with social media: if we let this get out of the barn, it will be even more difficult to contain than social media. And we are still seeking to act on social media and the harms that it portends right now, as we speak. We are literally at the cusp of a new era. I asked Sam Altman, when he sat where you are, what his greatest fear was. I said mine, my nightmare, is the massive unemployment that could be created. That is an issue that we don't deal with directly here, but it shows how wide the ramifications may be. And we do need to deal with potential worker displacement and training. This new era is one that portends enormous promise but also perils. We need to deal with both. I will turn it now to Ranking Member Senator Hawley. Thank you for organizing this hearing. This is now the third of these hearings that we have done, and I've learned a lot in the previous couple. I think some of what we are learning about the potential of AI is exhilarating. Some of it is horrifying.
And I think what I hear the chairman saying, and what I agree with, is that we have a responsibility here, now, to do our part to make sure that this new technology, which holds a lot of promise but also peril, actually works for the American people. That it is good for working people. That it is good for families. That we don't make the same mistakes that Congress made with social media. Thirty years ago now, Congress basically outsourced social media to the biggest corporations in the world. That has been, I would submit to you, nearly an unmitigated disaster. We had the biggest, most powerful corporations, not just in America but in the globe, and in the history of the globe, doing whatever they want with social media. Running experiments basically every day on America's kids. Inflicting mental health harms the likes of which we have never seen. Messing around in our elections in a way that is deeply, deeply corrosive to our way of life. We cannot make those mistakes again. So we are here, as Senator Blumenthal said, to try to find answers, to try to make sure that this technology is something that actually benefits the people of this country. I have no doubt, with all due respect to the corporate leaders in front of us, the heads of these corporations, I have no doubt it will benefit your companies. What I want to make sure of is that it benefits the American people. That is the task we are engaged in, and I look forward to this today. Back to you, Mr. Chairman. I want to introduce our witnesses. As is custom, I will swear them in and ask them to submit their testimony. Welcome to all of you. Mr. Dally is NVIDIA's chief scientist. He joined in January 2009 as chief scientist after spending 12 years at Stanford University, where he was chairman of the computer science department. He has published over 250 papers. He holds 120 issued patents. He is the author of four textbooks. Brad Smith is the vice chair and president of Microsoft.
As Microsoft's vice chair and president, he is responsible for spearheading the company's work on, and representing it publicly in, a wide variety of critical issues involving the intersection of technology and society, including artificial intelligence, cybersecurity, privacy, environmental sustainability, human rights, digital safety, immigration, philanthropy, and products and business for nonprofit customers. We appreciate your being here. Professor Hartzog is professor of law and Class of 1960 Scholar at Boston University School of Law. He is also a nonresident fellow at the Cordell Institute for Policy in Medicine and Law at Washington University, a faculty associate at the Berkman Klein Center for Internet and Society at Harvard University, and an affiliate scholar at the Center for Internet and Society at Stanford Law School. I could go on about each of you at much greater length, with all of your credentials, but suffice it to say they are very impressive. If you would now stand, I will administer the oath. Do you solemnly swear the testimony you are about to give is the truth, the whole truth, and nothing but the truth? Why don't we begin with you, Mr. Dally. Chairman Blumenthal, Ranking Member Hawley: thank you for the privilege of testifying today. I'm delighted to discuss artificial intelligence's journey and its future. NVIDIA is at the forefront of accelerated computing and generative AI, technologies with the potential to transform industries, address global challenges, and profoundly benefit society. Since our founding in 1993, we have been committed to developing technology to empower people and improve the quality of life worldwide. Today, over 40,000 companies use NVIDIA's platforms across media and entertainment, scientific computing, healthcare, financial services, internet services, automotive, and manufacturing to solve the world's most difficult challenges and bring new products and services to consumers worldwide.
At our founding in 1993, we were a 3D graphics startup, one of dozens of startups competing to create an entirely new market for accelerators to enhance computer graphics for games. In 1999 we invented the graphics processing unit, or GPU, which could perform calculations in parallel. We recognized that the GPU could theoretically accelerate any application that could benefit from parallel processing. This bet paid off. Researchers worldwide innovate on NVIDIA GPUs, and through collective efforts we have made advances in AI that will bring tremendous benefits to society across sectors such as healthcare, medical research, education, business, cybersecurity, climate, and beyond. However, we also recognize that, like any new product or service, AI products and services have risks, and those who make, use, or sell AI-enabled products and services are responsible for their conduct. Fortunately, many uses of AI applications are subject to existing laws and regulations that govern the sectors in which they operate. AI-enabled services in high-risk sectors could be subject to enhanced licensing and certification requirements when necessary, while other applications with less risk of harm may need less stringent licensing and regulation. With clear, stable, and thoughtful regulation, AI developers can work to benefit society while making products and services as safe as possible. For our part, we are committed to the safe and trustworthy development and deployment of AI. For example, NeMo Guardrails, our open-source software, empowers developers to guide generative AI applications to produce accurate, appropriate, and secure text responses. NVIDIA has implemented model risk management guidance, ensuring a comprehensive assessment and management of the risks associated with NVIDIA-developed models. Today, NVIDIA announced that it is endorsing the White House's voluntary commitments on AI. We can and will continue to identify and address the risks. No discussion of AI would be complete without addressing what is often described as frontier AI models.
Some have expressed fear that frontier models will evolve into uncontrollable artificial general intelligence, which could escape our control and cause harm. Fortunately, uncontrollable artificial general intelligence is science fiction, not reality. At its core, AI is a software program that is limited by its training, the input provided to it, and the nature of its output. In other words, humans will always decide how much decision-making power to cede to AI models. As long as we are thoughtful and measured, we can ensure trustworthy and ethical deployment of AI systems without suppressing innovation. We can spur innovation by ensuring AI tools are widely available to everyone, not concentrated in the hands of a few powerful firms. I will close with two observations. First, the genie is already out of the bottle. AI algorithms are widely published and available to all. AI software can be transmitted anywhere in the world at the press of a button, and many AI development tools, frameworks, and foundational models are open source. Second, no nation, and certainly no company, controls a chokepoint to AI development. Leading U.S. computing platforms are competing with companies from around the world. While U.S. companies may currently be the most energy efficient, cost efficient, and easiest to use, they are not the only viable alternatives for developers abroad. Other nations are developing AI systems with or without U.S. components, and they will offer those applications in the worldwide market. Safe and trustworthy AI will require multilateral and multi-stakeholder cooperation, or it will not be effective. The United States is in a remarkable position today, and with your help we can continue to lead on policy and innovation into the future. NVIDIA stands ready to work with you to ensure that the development and deployment of generative AI and accelerated computing serve the best interests of all. Thank you for the opportunity to testify before this committee. Thank you very much. Mr. Smith.
Chairman Blumenthal, Ranking Member Hawley, members of the subcommittee: my name is Brad Smith. I'm the vice chair and president of Microsoft. Thank you for the opportunity to be here today. More importantly, thank you for the work that you have done to create the framework you have shared. I think you put it very well: first, we need to learn, and then act with dispatch. Ranking Member Hawley, I think you offered real words of wisdom: let's learn from the experience the whole world had with social media, and let's be clear-eyed about the promise and the peril in equal measure as we look to the future of AI. I would first say I think your framework does that. It doesn't attempt to answer every question, by design, but it is a very strong and positive step in the right direction, and it puts the U.S. government on the path to be a global leader in ensuring a balanced approach that will enable innovation to go forward with the right legal guardrails in place. As we all think about this more, I think it is worth keeping three goals in mind. First, let's prioritize safety and security, which your framework does. Let's require licenses for advanced AI models and uses in high-risk scenarios. Let's have an agency that is independent and can exercise real and effective oversight over this category. And then let's couple that with the right kinds of controls that will ensure safety, of the sort we've already seen start to emerge in the White House commitments that were launched on July 21st. Second, let's prioritize, as you do, the protection of our citizens and consumers. Let's prioritize national security, always, in a sense, the first priority of the federal government. But let's think as well, as you have, about protecting the privacy, the civil rights, and the needs of kids, among many other ways of working to ensure we get this right. Let's take the approach that you are recommending: namely, focus not only on the companies that develop AI, like Microsoft, but also on the companies that deploy AI, like Microsoft.
In different categories we are going to need different levels of obligations. And as we go forward, let's think about the connection between, say, the role of a central agency that will be on point for certain things, and the obligations that frankly will be part of the work of many agencies, and indeed our courts as well. And let's do one other thing as well; maybe it is one of the most important things we need to do, so that we ensure the threats that many people worry about remain part of science fiction and don't become a new reality. Let's keep AI under the control of people. It needs to be safe. And to do that, as we have encouraged, there need to be safety brakes, especially for any AI application or system that can control critical infrastructure. If a company wants to use AI to, say, control the electrical grid, or all of the self-driving cars on our roads, or the water supply, we need to learn from so many other technologies that do great things but can also go wrong. We need a safety brake, just like we have a circuit breaker in every building and home in this country to stop the flow of electricity if that is needed. Then I would say let's keep a third goal in mind as well. This is the one where I would just urge you to maybe consider doing a bit more to add to the framework: let's remember the promise that this offers. Right now, if you go to state capitals, if you go to other countries, I think there is a lot of energy being put on that. When I see what Governor Newsom is doing in California, or Governor Burgum in North Dakota, I see them at the forefront of figuring out how to use AI to, say, improve the delivery of healthcare, advance medicine, improve education for our kids, and, maybe most importantly, make government services more efficient, or use the savings to provide more and better services to our people. That would be a good problem to have the opportunity to consider. In sum, Professor Hartzog has said this is not a time for half measures. It is not. He is right.
Let's go forward as you have recommended. Let's be ambitious and get this right. Thank you. Thank you very much. Professor Hartzog, I read your testimony, and you are very much against half measures. We look forward to hearing what the full measures you recommend are. That is correct, Senator. Chair Blumenthal and members of the committee, thank you for inviting me to appear before you today. I am a professor of law at Boston University. My comments today are based on a decade of researching law and technology issues, and I'm drawing from research on artificial intelligence policy that I conducted as a fellow with colleagues at the Cordell Institute at Washington University in St. Louis. Committee members, up to this point AI policy has largely been made up of industry-led approaches like encouraging transparency, mitigating bias, and promoting principles of ethics. I would like to make one simple point in my testimony today: these approaches are vital, but they are only half measures. They will not fully protect us. To bring AI within the rule of law, lawmakers must go beyond these half measures to ensure that AI systems, and the actors that deploy them, are worthy of our trust. Half measures like audits, assessments, and certifications are necessary for data governance, but industry leverages procedural checks like these to dilute our laws into managerial box-checking exercises that entrench harmful surveillance-based business models. A checklist is no match for the staggering fortune available to those who exploit our data and labor to develop and deploy AI systems. It is no substitute for meaningful liability when AI systems harm the public. Today I would like to focus on three popular half measures and why lawmakers must do more. If you are doing that and you don't understand the content, you can ask the AI tutor to help you solve the problem, and I think it's not only for the kids; it's usable for the parents as well, and I think that's good.
Let's just say a 14-year-old, or whatever the age of eighth-grade algebra is. I try to help them with their homework, and I think we want kids, in a controlled way, with the safeguards, to use something that way. I'm not talking about tutors. I'm talking about your AI chatbot. Famously, earlier this year, your chatbot, a technology journalist wrote about this, your chatbot was urging this person to break up their marriage. Do we want them to be having those conversations? Of course not. Would you commit to raising that age? I don't want the chatbot to break up anybody's marriage. I don't either. Yeah, but we are not going to make the decision based on the exception. It goes to, we have multiple tools. Age is a very bright red line. It is. That's why I like it. My point is, there is a safety architecture that we can apply. Your safety architecture: did it stop an adult, did it stop the chatbot from having the discussion with an adult in which it said, you don't really love your wife, your wife isn't good for you, she doesn't really love you? This is an adult. Can you imagine the kinds of things that your chatbot would say to a 13-year-old? I'm serious about this. Do you think it's a good idea? Wait a second. Let's put that into context. At a point when the technology was in the hands of only 20,000 people, the journalist spent two hours of the evening ignoring his wife and interacting with a computer, trying to break the system, which he managed to do. We didn't envision that use. The next day, we had fixed it. Are you telling me that you've envisioned all the questions that 13-year-olds will ask, and that their parents should be absolutely fine with that and just trust you, the same way the New York Times writer did? As we go forward, we have an increasing capability to learn from the experience of real people. That's what worries me. That's exactly what worries me. What you are saying is, we have to have some failures. I don't want 13-year-olds to be your guinea pig. I don't want 14-year-olds to be your guinea pig.
I don't want you to learn from their failures. You have to learn from the failures? Go right ahead. Let's not learn from the failures of America's kids. This is what happened with social media: the social media companies made billions of dollars giving us a mental health crisis in this country. They got rich, and kids got depressed and committed suicide. Why would we want to run that experiment again with AI? Not just raise the age? We shouldn't want anybody to be the guinea pig, regardless of age or anything. Let's rule kids out right now. Let's also recognize that technology does require real users. What's different about this technology, from my view, versus the social media experience, is that we not only have the capacity, but we have the will, and we are applying that will, to fix things in hours and days. Yes, to fix things after the fact. I mean, I'm sorry, this sounds to me like you are going to say, trust us, we are going to do well with this. I'm just asking why we should trust you with our children. I'm not asking you to trust us, although I hope we will work every day to earn it, and that's why you have the licensing obligation; there is in the framework a licensing obligation. I'm asking you to make a choice now, to say, you can go to every parent in America now: Microsoft is going to protect your kids. We would never use your kids as a science experiment, ever. Never. Therefore, we are not going to target your kids, and we are not going to allow your kids to use our chatbots as a source of information if they are younger than 18. I think you are conflating two things; there are two things that you are talking about. I'm talking about kids. It's very simple. We don't want to use kids for monetizing, et cetera, but I am equally of the view, I don't want to cut off an eighth grader today from the ability to use this tool that will help them learn algebra or math in a way that they couldn't a year ago.
With all due respect, it wasn't algebra or math that your chatbot was recommending that we're talking about; it was trying to break up some reporter's marriage. We are mixing things. No, we are not. We're talking about your chatbot. We are talking about the chatbot, and I'm talking about the protection of children and how we make technology better. And yes, there was that episode back in February, on Valentine's Day, and six months later, if that journalist tried to do the same thing again, it would not happen. Do you want me to be done? I just don't want to miss my vote. You are very kind. Thank you. Some of us haven't voted yet, and I wanted to turn to you. In March, NVIDIA announced a partnership with Getty Images to develop models and generate images in a way that provides royalties to content creators. Why was it important to the company to partner with Getty, and pay for the use of its library, in developing generative AI models? We believe in respecting people and their rights: the rights of the photographers who produced the images that our models are trained on. We didn't want to infringe on those rights. We did not just grab a bunch of images off the web to train our model. We partnered with Getty, and we trained our models, and when people use Picasso to generate images, the people who provided the original content are remunerated. We see this as a way of going forward in general, where the people who provide the IP that trains these models should benefit from the use of that IP. Today the White House announced eight more companies that are taking steps to move toward safe, secure, and transparent development of AI, and NVIDIA is one of those companies. Can you talk about the steps that you have taken, and what steps you plan to take, for the responsible development of AI? We've done a lot already.
We have implemented our NeMo Guardrails software, so that we can basically put guardrails around our large language models: inappropriate prompts to the model don't get a response, and if the model inadvertently were to generate something inappropriate, it is detected and intercepted before it can reach the user of the model. We have a set of guidance that we provide for all of our generative models and how they should be appropriately used. We provide documentation to say where the model came from and what it is trained on, and we test these models very thoroughly; the testing depends on the use. For certain models, we test them for bias: we want to make sure that when you refer to a doctor, you don't automatically assume it's a him. We test them for safety. We have a variant of our NeMo model called BioNeMo that is used in the medical profession, and we want to make sure that the advice it gives is sound. There are a number of other metrics. Very good. Thank you. Professor, do you think Congress should be more focused on regulating the inputs and design of generative AI, or focus more on outputs and capability? Certainly. I think the area that has been ignored up to this point has been the design of, and inputs to, a lot of these tools. To the extent that area could use some revitalization, I would encourage attention to inputs and design as well as outputs and use. I suggest you look at these election bills, because, as we have been talking about, I think we have to move quickly on those, and the fact that they are bipartisan has been a very positive thing. Absolutely. I want to thank Mr. Smith for wearing a purple Vikings tie. I know that was an AI-generated message that you got, too. I know this would be a smart move with me after their loss on Sunday. I will remind you on Thursday night. I can assure you that it was an accident. Very good. Thank you, all. I see we have a lot of work to do. Thanks. Thank you, Mr. Chairman. Mr.
Smith, I want to come to you first to talk about China and the Chinese Communist Party and the way that they have gone about this. We have seen a lot of it on TikTok, because they have these influence campaigns that they are running to influence certain thought processes of the American people. I know that you all just did a report on China, and you covered some of the disinformation and some of the information that you have obtained. Talk to me a little bit about how Microsoft, and the industry as a whole, can combat some of these campaigns. I think there are a couple of things that we can think more about and do more about. The first: we all should want to ensure that our own products and systems and services are not used by foreign governments in this manner, and I think that there is room for the evolution of export controls, and next-generation export controls, to help prevent that. I think there is also room for a concept that has worked since the 1990s in the world of banking and financial services: know-your-customer requirements. We have been advocates for those, and we have put such requirements in effect ourselves, so that we know the data centers to which our services are deployed. Let me come to you. I think that one of the things, if we look at AI, is the detrimental impacts. We don't always want to look at the doomsday scenarios, but we are looking at some of the reports on surveillance: the CCP surveilling the Uyghurs, and Iran surveilling women. I think there are other countries that are doing this same type of surveillance. What can you do to prevent that? How do we prevent that? Senator, I've argued in the past that facial recognition technologies are a potent tool of surveillance, that they are fundamentally dangerous, and that there is no world in which they would be safe for any of us.
We should prohibit them outright: a ban on biometric surveillance in public spaces, and a prohibition on emotion recognition. This is what I refer to as strong, bright-line measures that draw absolute lines in the sand, rather than the procedural ones that are entrenching this kind of harmful surveillance. Mr. Chairman, can I take another 30 seconds? Mr. Dally was shaking his head in agreement on some things, and I was catching that. Do you want to weigh in before I close my questioning, on either of these topics? I was in general agreement. We need to be very careful about who we sell our technology to, and we try to sell to people who are using it for good commercial purposes and not to suppress others, and we will continue this, because we don't want to see this technology misused to oppress anybody. Awesome. Thank you. Thank you, Senator Blackburn. My colleague, Senator Hawley, mentioned that we have a forum tomorrow, which I welcome. I think anything to aid in our education and enlightenment is a good thing, and I just want to express the hope that some of the folks who are appearing in that venue will also cooperate here before the subcommittee. We will certainly be inviting more than a few of them. And I want to express my thanks to all of you for being here, but especially Mr. Smith, who will be here tomorrow to talk to my colleagues privately. Our efforts are complementary, not contradictory.
I'm very focused on election interference, because elections are upon us, and I want to thank my colleague for taking a first step toward addressing the harms that can result from all of the potential perils that we have identified here. It seems to me that authenticating the truth, that is, verifying true images and voices, is one approach, and banning the deepfake impersonations is another approach. Obviously, banning anything in the public realm and in the public discourse endangers the First Amendment, which is why exposure is often the remedy that we see, especially in campaign finance. So maybe I should ask all of you whether you see banning certain kinds of election interference as workable. And I would raise the specter of foreign interference, and the frauds and scams that could be perpetrated, as they were in 2016; I think it is one of those nightmares that should keep us up at night. We are an open society, and we welcome free expression, and AI is a form of expression, free or not, whether it is AI-generated, or high risk, or simply touching up some of the background in a TV ad. Maybe each of you could talk a little bit about what you see the potential remedies are. It is a great concern that the American public may be misled by fakes of various kinds. As you mentioned, the use of technology to authenticate a true image and voice at its source, and tracking that deployment, will let us know what the real images are. If we insist that AI-generated content be labeled as such, then people are tipped off that this is generated and not the real thing. You know, I think we need to avoid having some foreign entity interfering in our elections, but at the same time, AI-generated content is speech, and I think it would be a dangerous precedent to try to ban something. I think it is much better to have exposure, as you suggested, than to ban something outright. Three thoughts. Number one, this is a critical year for elections.
It's not only for the United States; it is for the United Kingdom, India, across the European Union, and over 2 billion people. This is a global issue for the world's democracies. Number two, I think you are right to focus on the First Amendment, because it is such a critical cornerstone for American political life and the rights that we all enjoy. I will also be quick to add, I don't think the Russian government qualifies for protection, and if they are seeking to interfere in our elections, then I think that the country needs to take a strong stand, and a lot of thought needs to be given to how to do that effectively. But this goes to the heart of your question, and why it is such a good one. I think it is going to require some real thought, discussion, and an ultimate consensus to emerge. Around one specific scenario: let's imagine, for a moment, that there is a video that involves a presidential candidate. And then let's imagine that someone uses AI to put different words into the mouth of that candidate, and uses AI technology to perfect the fake to a level that it is difficult for people to recognize as fraudulent. Then you get to this question: what should we do? And, at least as I've been trying, and we've been trying, to think this through, I think we have two broad alternatives. One is, we take it down, and the other is, we relabel it. If we do the first, then we are acting as censors, and I do think that makes me nervous. I don't think that's really our role, to act as censors, and the government really cannot, I think, under the First Amendment. But relabeling to ensure accuracy, I think that is probably a reasonable path. But really, what this highlights is the discussion still to be had, and I think the urgency for that conversation to take place. And I will just say, and then I want to come to you, Professor Hartzog, that I agree emphatically with your point about the Russian government or the Chinese government or the Saudi government as potential sources of interference.
They're not entitled to the protection of our Bill of Rights when they are seeking to destroy those rights, and purposefully trying to take advantage of a free and open society to, in effect, decimate our freedoms. So I think there is a distinction to be made there in terms of national security, and I think that rubric of national security, which is part of our framework, applies with great force in this area. And that is different from a presidential candidate putting up an ad that, in effect, puts words in the mouth of another candidate. And as you may know, we began these hearings with introductory remarks from me that were an impersonation, taken from my comments on the floor, taking my voice from speeches that I made on the floor of the United States Senate, with content generated by ChatGPT that sounded exactly like something I would say, in a voice that was indistinguishable from mine. And, obviously, I disclosed that fact at the hearing. But in real time, as Mark Twain famously said, a lie travels halfway around the world before the truth gets out of bed, and we need to make sure that there is action in real time if you're going to do the kind of identification that you suggested. Real time meaning real time in a campaign, which is measured in minutes and hours, not in days and months. Professor Hartzog? Thank you, Senator. Like you, I'm nervous about just coming out and saying we're going to ban all forms of speech, particularly when you're talking about something as important as political speech. And like you, I also worry about disclosure alone as a half measure. Earlier in this hearing, it was asked, what is a half measure? And I think that goes toward answering your question today. I think the best way to think about half measures is an approach that is necessary but not sufficient, that risks giving us the illusion that we've done enough. But ultimately, I think this is the critical point.
It doesn't really disrupt the business model and the financial incentives that have gotten us here in the first place. And so, to help answer your question, one thing that I would recommend is bringing lots of different tools to bear on this problem, which I applaud your bipartisan framework for doing. Think about the role that surveillance advertising plays in powering a lot of these harmful technologies and ecosystems. That doesn't just allow the lie to be created, but to flourish and to be amplified. And so I think about rules and safeguards that we could adopt to help limit those financial incentives, borrowing from standard principles of accountability: use disclosure where it's effective; where it's not effective, you have to make it safe; and if you can't make it safe, it shouldn't exist. I think I'm going to turn to Senator Hawley for more questions, but I think this is a real conundrum. We need to do something about it. We need more than half measures. We can't delude ourselves into a false sense of comfort that we've solved the problem if we don't provide effective enforcement. And, to be very blunt, the Federal Election Commission often has been less than fully effective, a lot less than fully effective, in enforcing rules relating to campaigns. And so, there again, an oversight body with strong enforcement authority, sufficient resources, and the will to act is going to be very important if we're going to address this problem in real time. Senator Hawley? Mr. Smith, let me just come back to something you said, thinking about, now, workers. You talked about Wendy's, I think it was; they're automating the drive-through, and talking about, you know, this was a good thing. I just want to press on that a little bit.
Is it a good thing that workers lose their jobs to AI, whether it's at Wendy's or whether it's at Walmart or whether it's at the local hardware store? I mean, you pointed out, your comment was that there's really no creativity involved in taking orders in the drive-through. But that is a job, oftentimes a first job for younger Americans. And, hey, in this economy, where the wages of blue-collar workers have been flat for 30, 40 years and running? What worries me is that oftentimes, what we hear from the tech sector, to be honest with you, is that jobs that don't have creativity, as tech defines it, don't have value. I'm, frankly, scared to death that AI will replace lots of jobs that tech types don't think are creative, and will leave more blue-collar workers without any place to turn. But my question for you is, can we expect more of this, and is it really progress for folks to lose the kind of jobs that, you know, I expect that's not the best-paying job in the world, but at least it's a job? And do we really want to see more of these jobs lost? Well, to be clear, first, I didn't say whether it was a good or bad thing. I was asked to predict what jobs would be impacted, and identified that job as one that likely would be. So, but let's, I think, step back, because I think your question is critically important. Let's first reflect on the fact that, you know, we've had about 200 years of automation that have impacted jobs, sometimes for the better, sometimes for the worse. In Wisconsin, where I grew up, or in Missouri, where my father grew up, if you go back 150 years, it took 20 people to harvest an acre of wheat or corn, and now it takes one. So 19 people don't work on that acre anymore. And that's been an ongoing part of technology. The real question is this.
How do we ensure that technology advances so that we help people get better jobs, get the skills they need for those jobs, and hopefully do it in a way that broadens economic opportunity rather than narrows it? I think the thing we should be the most concerned by is that since the 1990s, and I think this is the point you're making, if you look at the flow of digital technology, you know, fundamentally, we've lived in a world that has widened the economic divide. Those people with a college or graduate education have seen their incomes rise in real terms. Those people with, say, a high school diploma or less have seen their income level actually drop, compared to where it was in the 1990s. So what do we do now? Well, I'll at least say what I think our goals should be. Can we use this technology to help advance productivity for a much broader range of people, including people who didn't have the good fortune to go, say, where you or I went to college or law school? And can we do it in a way that not only makes them more productive, but actually reaps some of the dividends of that productivity for themselves in a growing income level? I think it's that conversation that we need to have. I agree with you, and I hope that that is, I hope that that's what AI can do. You talked about the farm; it used to take 20 people to do what one person can do. It used to take thousands of people to produce textiles or furniture or other things in this country. We're now at zero. So we can tell the tale in different ways. I'm not sure that seeing working-class jobs go overseas, or be replaced entirely, is a success story. In fact, I'd argue it's not at all. It's not a success story. I'd argue more broadly that our economic policy of the last 30 years has been downright disastrous for working people. And tech companies and financial institutions, and certainly banks, Wall Street, they have reaped huge profits, but blue-collar workers can barely find a good-paying job.
I don't want AI to be the latest accelerant of that trend. And so I don't really want every service station in America to be manned by some computer such that nobody can get a job anymore, get their foot in the door, start their climb up the ladder. That worries me. Let me ask you about something else here, in my expiring time. You mentioned national security. Critically important. Of course, there's no national security threat that is more significant for the United States than China. Let me just ask you, is Microsoft too entwined with China? You have the Microsoft Research Asia lab that was set up in Beijing back in the late 1990s. You've got centers now in Shanghai and elsewhere. You've got all kinds of cooperation with Chinese state-owned businesses. I'm looking at an article here from Protocol magazine, where one of the contributors said that Microsoft had been the alma mater of Chinese big tech. Are you concerned about your degree of involvement with the Chinese government? Do you need to be decoupling in order to make sure that our national security interests aren't fatally compromised? I think it's something that we need to be, and are, focused on. To some degree, in some technology fields, Microsoft is the alma mater of the technology leaders in every country in the world because of the role that we played over the last 40 years. But when it comes to China today, we have, and need to have, very specific controls on who uses our technology, and for what, and how. That's why we don't, for example, do work on quantum computing, or we don't provide facial recognition services, or focus on synthetic media, or a whole variety of things, while, at the same time, when Starbucks has stores in China, I think it's good that they can run their services in our data center rather than a Chinese company's data center. But just on facial recognition: back in 2016, your company released this database, 10 million faces, without the consent of the folks who were in the database.
You eventually took it down, although it took three years. China used that database to train much of its facial recognition software and technology. I mean, isn't that a problem? You said that Microsoft might be the alma mater of many companies' AI, but China's unique, no? I mean, China is running concentration camps using digital technology like we've never seen before. And isn't that a problem for your company to be in any way involved in that? We don't want to be involved in that in any way, and I don't believe we are. Are you going to close your centers in China, your Microsoft Research Asia in Beijing, your center in Shanghai? I don't think that will accomplish what you're asking us. You're running thousands of people through your centers out into the Chinese government and Chinese state-owned enterprises. Isn't that a problem? First of all, there's a big premise there, and I don't embrace the premise that that is, in fact, what we're doing. Well, which part is wrong? The notion that we're running thousands of people through, and then they're going into the Chinese government. Is that not right? I thought you had 10,000 employees in China whom you recruited from Chinese state-owned agencies, Chinese state-owned businesses. They come work for you, and then they go back to the state-owned entities. We have employees in China; in fact, we have about that number. To my knowledge, that is not where they're coming from, and that is not where they're going. We are not running that kind of revolving door. And it's all about what we do, and who we do it with. That, I think, is of paramount importance, and that's what we're focused on. But you condemn what the Chinese government is doing, and all of that? We do. We do everything we can to ensure that our technology is not used in any way for that kind of activity, in China and around the world, by the way. But you condemn it, to be clear. Yes.
What are the safeguards that you have in place such that your technology is not further enabling the Chinese government, given the number of people you employ there and the technology developed there? Well, you take something like facial recognition, which is a part of your question. We have very tight controls that limit the use of facial recognition in China, including controls that, in effect, make it very difficult, if not impossible, to use it for any kind of real-time surveillance at all. And, by the way, the thing we should remember: the U.S. is a leader in many AI fields. China is the leader in facial recognition technology and the AI for it. Well, in part because of the information that you helped them acquire, no? No. It's because they have the world's most data. Well, yeah, but you gave them... No, I don't think that. You don't think you had anything to do with it? I don't think, when you have a country of 1.4 billion people, and you decide to have facial recognition used in so many places, it gives that country a massive data advantage. But are you saying that the database that Microsoft released in 2016, you're saying that it wasn't used by the Chinese government to train their facial recognition? I am not familiar with that. I'd be happy to provide you with information. But, my goodness, the advance in that facial recognition technology: if you go to another country where they're using facial recognition technology, it's highly unlikely it's American technology. It's highly likely that it's Chinese technology, because they are such leaders in that field, which I think is fine. I mean, if you want to pick a field where the United States doesn't want to be a technology leader, I'd put facial recognition technology on that list. But let's recognize it's homegrown. How much money has Microsoft invested in AI development in China? I don't know, but I will tell you this.
The revenue that we make in China, which accounts for, what, about 1 out of every 6 humans on this planet, you know, it's 1.5% of our global revenue. It's not the market for us that it is for other industries or even some other tech companies. Sounds, then, like you can afford to decouple. But is that the right thing to do? Yes. And again, for a regime that is fundamentally evil, that is inflicting the kind of atrocities on its own citizens that you alluded to, that it is doing to the Uyghurs, that is running modern-day concentration camps, yeah, I think it is. But there are two questions that I think at least are worthy of thought. Number one, do you want General Motors to sell American-manufactured cars, let's just say, to sell cars in China? Do you want to create jobs for people in Michigan or Missouri so that those cars can be sold in China? If the answer to that is yes, then think about the second question. How do you want General Motors in China to run its operations, and where would you like it to store its data? Would you like it to be in a secure data center run by an American company, or would you like it to be run by a Chinese company? Which will better protect General Motors' trade secrets? I'll argue we should be there so that we can protect the data of American companies, European companies, Japanese companies. Even if you disagree on everything else, that, I believe, serves this country well. You know what, I think you're doing a lot more than just protecting data in China. You have major research centers, thousands, tens of thousands of employees. And to your question, do I want General Motors to be building cars in China? No, I don't. I want them to be making cars here in the United States with American workers. And do I want American companies to be aiding, in any way, the Chinese government and their oppressive tactics? I don't. Would you like me to yield to you now? You ready?
I have been very hesitant to interrupt; the discussion, the conversation here has been very interesting. I'm going to call on Senator Ossoff, and then I have a couple of follow-up questions. Thank you, Mr. Chairman, and thank you all for your testimony. Just getting down to the fundamentals, Mr. Smith: if we're going to move forward with a legislative framework, a regulatory framework, we have to define clearly in legislative text precisely what it is that we're regulating. What is the scope of regulated activities, technologies, and products? So how should we consider that question? And how do we define the scope of technologies, the scope of services, the scope of products that should be subject to a regime of regulation that is focused on artificial intelligence? I think there are three layers of technology on which we need to focus in defining the scope of legislation and regulation. First is the area that has been the central focus of 2023, in the executive branch and here on Capitol Hill. It's the so-called frontier, or foundation, models that are the most powerful, say, for something like generative AI. In addition, there are the applications that use AI, or, as Senators Blumenthal and Hawley have said, the deployers of AI. If there is an application that calls on that model in what we consider to be a high-risk scenario, meaning it could make a decision that would have an impact on, say, the privacy rights, the civil liberties, the rights or needs of children, then I think we need to think hard and have law and regulation that is effective to protect Americans. And the third layer is the data center infrastructure, where these models and where these applications are actually deployed. And we should ensure that those data centers are secure, that there are cybersecurity requirements that the companies, including ours, need to meet.
We should ensure that there are safety systems at one, two, or all three levels if there is an AI system that is going to automate and control, say, something like critical infrastructure, such as the electrical grid. So those are the areas where we would say to start, with some clear thinking, and a lot of effort to learn and apply the details, but focus there. As more and more models are trained and developed to higher levels of power and capability, there will be a proliferation, there may be a proliferation, of models. Perhaps not the frontier models, perhaps not those at the bleeding edge that use the most compute of all, but powerful enough to have serious implications. So is the question, which models are the most powerful at a moment in time? Or is there a threshold of capability or power that should define the scope of regulated technology? I think you've just posed one of the critical questions that, frankly, a lot of people inside the tech sector and across the government and in academia are really working to answer. And I think the technology is evolving, and the conversation needs to evolve with it. Let's just posit this. There's something like GPT-4 from OpenAI. Let's just posit it can do 10,000 things really well. It's expensive to create, and it's relatively easy to regulate in the scheme of things, because there's one or two or ten. But now let's go to where you're going, which I think is right. What does the future bring in terms of proliferation? Imagine that there is an academic, a professor at Professor Hartzog's university, who says, I want to create an open-source model. It's not going to do 10,000 things well, but it's going to do four things well. It won't require as many NVIDIA GPUs. It won't, you know, require as much data. But let's imagine that it could be used to create the next virus that could spread around the planet. Then you'd say, well, we really need to ensure that there is safety architecture and control around that as well.
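An editor's note on the threshold question raised here: one concrete way policymakers have since discussed drawing the line is a training-compute threshold. A rough sketch, using the common rule of thumb that training costs about 6 floating-point operations per model parameter per training token; the specific cutoff below is a hypothetical policy choice, not anything from the hearing.

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough rule of thumb: ~6 floating-point operations per parameter per token."""
    return 6.0 * n_params * n_tokens

# Hypothetical threshold; the actual number is a policy choice, not a law of nature.
FRONTIER_THRESHOLD = 1e26

def is_frontier_model(n_params: float, n_tokens: float) -> bool:
    """Would this training run fall under a compute-based licensing regime?"""
    return training_flops(n_params, n_tokens) >= FRONTIER_THRESHOLD
```

Under these assumptions, the professor's small four-task model (say, 7 billion parameters on 2 trillion tokens) falls far below the line, which is exactly the gap Mr. Smith identifies: a compute threshold alone would not catch a small but dangerous model.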
And that's the conundrum. That's why this is a hard problem to solve. It's why we're trying to build safety architecture in our data centers, so that open-source models can, say, run in the cloud and still be used in ways that will prohibit that kind of harm from taking place. But as you think about a licensing regime, this is one of the hard questions. Who needs a license? You don't want it to be so hard that only a small number of big companies can get it, but then you also need to make sure that you're not requiring people to get it when they really, we would say, don't need a license for what they're doing. And, you know, the beauty of the framework, in my view, is it starts to frame the issue. It starts to define the question. Let me ask this question. Is it a license to train a model to a certain level of capability? Is it a license to sell, or license access to, that model? Or is it a license to purchase, or deploy, that model? Who is the licensed entity? That's another question that is key, and it may have different answers in different scenarios. But mostly, I would say, it should be a license to deploy. You know, I think that there may well be obligations to disclose to, say, an independent authority when a training run begins, depending on what the goal is, and when the training run ends, so that an oversight body can follow it. Just the way, say, it might happen when a company is building a new commercial airplane. And then, the good news is, there's an emerging foundation of, call it, best practices for how the model should be trained, what kind of testing there should be, what harms should be addressed. That's a big topic that needs discussion. When you say a license to deploy, do you mean, for example, that if a Microsoft Office product wishes to use a GPT model for some user-serving purpose within your suite, you would need a license to deploy GPT in that way?
Or do you mean that GPT would require a license to be offered to Microsoft? And putting aside whether or not this is a plausible commercial scenario, the question is, what's the structure of the licensing arrangement? In this case, it's more the latter. Imagine, look, think about it like Boeing. Boeing builds a new plane. Before it can sell it to United Airlines, and United Airlines can start to fly it, the FAA is going to certify that it's safe. Now imagine we're at, call it, GPT-12, whatever you want to name it. You know, before that gets released for use, I think you can imagine a licensing regime that would say that it needs to be licensed after it's been, in effect, certified as safe. And then you have to ask yourself, well, how do you make that work so that we don't have the government slow everything down? And what I would say is, you bring together three things. First, you need industry standards, so that you have a common foundation and well-understood way as to how training should take place. Second, you need national regulation. And third, if we're going to have a global economy, at least in the countries where we want these things to work, you probably need a level of international coordination. And I'd say, look at the world of civil aviation. That's fundamentally how it has worked since the 1940s. Let's try to learn from it and see how we might apply something like that, or other models, here. Mr. Dally, how would you respond to the question in a field where the technical capabilities are accelerating at a rapid rate, future rate unknown? Where, and according to what standard or metric or definition of power, do we draw the line for what requires a license for deployment, and what can be freely deployed without oversight by the government? You know, I think it's a tough question, because I think you have to balance two important considerations.
The first is, you know, the risks presented by a model of whatever power, and then on the other side is the fact that, you know, we would like to ensure that the U.S. stays ahead in this field. And to do that, we want to make sure that, you know, individual academics and entrepreneurs with a good idea can, you know, move forward and innovate and deploy models without huge barriers. So it's the capability of the model, it's the risk presented by its deployment without oversight. The thing is, we're going to have to write legislation, and the legislation is going to have to, in words, define the scope of regulated products. And so we're going to have to bound that which is subject to a licensing arrangement, or wherever we land, and that which is not. I think it is dependent on the application, because if you have a model which is, you know, basically determining a medical procedure, there's a high risk with that, you know, depending on the patient outcome. If you have another model which is, you know, controlling the temperature in your building, if it gets it a little bit wrong, maybe, you know, it uses a little bit too much power, or maybe, you know, you're not as comfortable as you would be, but it's not a life-threatening situation. So you need to regulate the things that have high consequences if the model goes awry. And I'm on the chairman's borrowed time, so just tap the gavel when you want me to stop. You had to wait, so will I. Okay, Professor Hartzog, and I'd be curious to hear from others, concisely, with respect for the chairman's follow-ups: how does any of this work without international law? I mean, isn't it correct that a model, potentially a very powerful and dangerous model, for example, one whose purpose is to unlock CBRN or mass-destructive virological capabilities for a relatively unsophisticated actor, once trained, is relatively lightweight to transport?
And without, A, an international legal system, and, B, a level of surveillance that seems inconceivable into the flow of data across the internet, how can that be controlled and policed? It's a great question, Senator, and, with respect to being efficient in my answer, I'll simply say that there are going to be limits, even assuming that we get international cooperation, which, I would agree with you, we've already started thinking about. For example, within the EU, which has already deployed some significant AI regulation, we might design frameworks that are compatible with that, that require some sort of interaction. But ultimately, what I worry about is actually deploying a level of surveillance that we've never before seen in an attempt to perfectly capture the entire chain of AI, and it's simply not possible. And I share the concern about privacy, which is in part why I raised the point about, how can we know what folks are loading? A lightweight model, once trained, onto perhaps a device that's not even online anymore. Right. There are limits, I think, to what we'll ever be able to know. Either of you want to take a stab before I get gaveled out here? I would just say you're right, there's going to be a need for international coordination. I think it's more likely to come from like-minded governments than, perhaps, global governance, at least in the initial years. I do think there's a lot we can learn. We were talking with Senator Blackburn about the SWIFT system for financial transactions. And, you know, somehow we've managed, globally, and especially in the United States, for 30 years, to have know-your-customer requirements and obligations for banks. Money has moved around the world. Nothing is perfect. I mean, that's why we have laws. But it's worked to do a lot of good, to protect against, say, terrorist or criminal uses of money that would cause concern. Well, I think you're right in that these models are very portable.
You could put the parameters of most models, even the very large ones, on a large USB drive and, you know, carry it with you somewhere. You could also train them in a data center anywhere in the world, so, you know, I think it's really the use of the model, and the deployment, that you can effectively regulate. It's going to be hard to regulate the creation of it, because if people can't create them here, they'll create them somewhere else. I think we have to be very careful if we want the U.S. to stay ahead. We don't want the best models created wherever, you know, the regulatory climate has driven them. Thank you. Thank you, Mr. Chairman. Thank you, Senator Ossoff. I hope you are okay with a few more questions. We've been at it for a while. You've been very patient. Do we have a choice? No. But thank you very much. It's been very useful. I want to follow up on a number of the questions that have been asked. First of all, on the international issue: there are examples and models for international cooperation. Mr. Smith, you mentioned civil aviation. The 737, the 737 MAX, I think I have it right. When it crashed, it was a plane that had to be redone, in many respects. And companies, airlines around the world looked to the United States for that redesign, and that approval. Civil aviation, atomic energy: not always completely effective, but it has worked in many respects. And so I think there are international models here, where, frankly, the United States is a leader by example, and best practices are adopted by other countries when we support them. And, frankly, in this instance, the EU has been ahead of us, in many respects, regarding social media, and we are following their leadership by example. I want to come to this issue of having centers, whether they're in China or, for that matter, elsewhere in the world.
Requiring safeguards so that we are not allowing our technology to be misused in China against the Uyghurs, and preventing that technology from being stolen, or people we trained there from serving bad purposes. Are you satisfied, Mr. Smith, that it is possible, in fact, that what you are doing in China is preventing the evils that could result from doing business there in that way? I would say two things. First, I feel good about our track record and our vigilance, and the constant need for us to be vigilant about what services we offer, to whom, and how they're used. It's really those three things. And I would take from that what I think is probably the conversation we'll need to have as a country about export controls, more broadly. There are three fundamental areas of technology where the United States is today, I would argue, the global leader. First, the GPU chips from a company like NVIDIA. Second, the cloud infrastructure from a company like, say, Microsoft. And the third is the foundation model from a firm such as OpenAI, and, of course, Google and AWS and other companies are global leaders as well. And I think if we want to feel good about creating jobs in the United States by inventing and manufacturing here, as you said, Senator Hawley, which I completely endorse, and feel good that technology is being used properly, we probably need an export control regime that weaves those three things together. For example, there might be a country in the world, let's just set aside China for a moment, leave that out. Let's just say there's another country where you all, and the executive branch, would say, we have some qualms, but we want U.S. technology to be present, and we want U.S. technology to be used properly, in a way that would make you feel good.
You might say, then: well, let NVIDIA export chips to that country to be used in, say, a data center of a company that we trust, one that is licensed even here for that use, with the model being used in a secure way in that data center, with a know-your-customer requirement, and with guardrails that put certain kinds of use off-limits. That may well be where government policy needs to go, and how the tech sector needs to support the government and work with the government to make it a reality. I think that answer is very insightful, and it raises other questions. I would analogize this situation to nuclear proliferation. We cooperate over safety, in some respects, with other countries, some of them adversaries. But we still do everything in our power to prevent American companies from helping China or Russia in their nuclear programs. Part of that nonproliferation effort is through export controls. We impose sanctions; we have limits and rules around selling and sharing certain chokepoint technologies related to nuclear enrichment, as well as biological warfare, surveillance, and other national security risks. And our framework, in fact, envisions sanctions and safeguards precisely in those areas, for exactly the reasons we've been discussing here. Last October, the Biden administration used existing legal authorities as a first step in blocking the sale to China of some high-performance chips, and of the equipment to make those chips, and our framework calls for export controls and sanctions and legal restrictions. So I guess a question that we will be discussing, we're not going to resolve it today, regrettably, but we would appreciate your input going forward, and I'm inviting anyone in the listening audience, here in the room or elsewhere, to participate in this conversation on this issue and others: how should we draw a line on the hardware and technology that American companies are allowed to provide to anyone else in the world, any other adversaries or friends?
Because, as you observed, Mr. Dally, and I think all of us accept, it's easily proliferated. If I could comment on this. Sure. You drew an analogy to nuclear regulation and mentioned the word chokepoint. And I think the difference here is that there really isn't a chokepoint. And I think there's a careful balance to be made between limiting where our chips go and what they're used for, and disadvantaging American companies and the whole food chain that feeds them, because we're not the only people who make chips that can do AI. I wish we were, but we're not. There are companies around the world that can do it. There are other American companies, there are companies in Asia, there are companies in Europe. And if people can't get the chips they need to do AI from us, they will get them somewhere else. And what will happen then is, it turns out that the chips aren't really the thing that makes them useful. It's the software. And if all of a sudden the standard chips for people to do AI become something from, pick a country, Singapore, then all of a sudden all the software engineers will start writing all the software for those chips. They'll become the dominant chips, and the leadership of that technology will shift from the U.S. to Singapore, or whatever other country becomes dominant. So we have to be very careful to balance the national security considerations and the abuse-of-technology considerations against preserving the U.S. lead in this technology area. Mr. Smith? Yeah. It's a really important point, and what you have is the argument and counterargument. Let me, for a moment, channel what Senator Hawley often voices, which I think is also important. Sometimes you can approach this and say, look, if we don't provide this to somebody, somebody else will, so let's not worry about it. I get it.
But at the end of the day, whether you're a company or a country, I think you do have to have clarity about how you want your technology to be used. And I fully recognize that there may be a day in the future, after I retire from Microsoft, when I look back, and I don't want to say: oh, we did something bad, because if we didn't, somebody else would have. I want to say: no, we had clear values, and we had principles, and we had in place guardrails and protections, and we turned down sales so that somebody couldn't use our technology to abuse other people's rights. And if we lost some business, that's the best reason in the world to lose business. And what's true of a company is true of a country. And so I'm not trying to say that your view shouldn't be considered. It should. That's why this issue is complicated: how to strike that balance. Professor Hartzog, do you have any comment? I think that was well said, and I would only add that it's also worth considering, in this discussion about how we safeguard these incredibly dangerous technologies and the risks that could follow if they, for example, proliferated: if it's so dangerous, then we need to revisit the existential question again. And I just bring it back to thinking not only about how we put guardrails on, but how we lead by example, which I think you brought up, which is really important. And we don't win the race to violate human rights, right? That's not one that we want to be running. And it isn't simply Chinese companies importing chips from the United States and building their own data centers. Most AI companies get their computing capabilities from cloud providers. We need to make sure that the cloud providers are not used to circumvent our export controls or sanctions. Mr. Smith, you raised the know-your-customer rules. Knowing your customers would require AI cloud providers, whose models are deployed, to know what companies are using those models.
If you're leasing out a supercomputer, you need to make sure that your customer isn't the People's Liberation Army, that it isn't being used to subjugate Uyghurs, that it isn't being used to do facial recognition on dissidents or opponents in Iran, for example. But I do think that you've made a critical point, which is that there is a moral imperative here. And I think there is a lesson in the history of this great country, the greatest in the history of the world: when we lose our moral compass, we lose our way. And when we simply pursue economic or political interests, sometimes it's very shortsighted, and we wander into a geopolitical swamp and quicksand. So I think these kinds of issues are very important to keep in mind, because we lead by example. I want to just make a final point, and then, if Senator Hawley has questions, I'm going to let him ask. But on this issue of worker displacement, I mentioned at the very outset, I think we are on the cusp of a new industrial revolution. We've seen this movie before, as they say, and it didn't turn out that well, in the industrial revolution, where workers were displaced en masse. The textile factories and the mills in this country, and all around the world, essentially replaced their workers with automation and machinery. And I would respond by saying: we need to train those workers. Education, you alluded to it, and it needn't be a four-year college. In my state of Connecticut, Electric Boat, Pratt & Whitney, Sikorsky, defense contractors are going to need thousands of welders, electricians, tradespeople of all kinds, who will have not just jobs, they'll have careers, careers that require skills that, frankly, I wouldn't begin to know how to perform, and haven't the aptitude for. And that's no false modesty.
So I think there are tremendous opportunities here, not just in the creative spheres that you have mentioned, where we may think the higher human talents come into play, but in all kinds of jobs that are being created daily, already, in this country. And as I go around the state of Connecticut, the most common comment I hear from businesses is: we can't find enough people to do the jobs we have right now. We can't find people to fill the openings that we have. And that is, in my view, maybe the biggest challenge for the American economy today. I think that is such an important point. It's really worth putting everything we think about jobs in context, because I will certainly endorse, Senator Hawley, what you were saying before: we want people to have jobs, we want them to earn a good living, et cetera. First, let's consider the demographic context in which jobs are created. The world has just entered a shift of a kind that it literally hasn't seen since the 1400s, namely, populations that are leveling off or, in much of the world now, declining. One of the things we look at, for every country, measured over five-year periods, is whether the working-age population is increasing or decreasing, and by how much. From 2020 to 2025, the working-age population in this country, people aged 20 to 64, is only going to grow by 1 million people. The last time it grew by that small a number, you know who was President of the United States? John Adams. That's how far back you have to go. And if you look at a country like Italy, take that group of people: over the next 20 years, it's going to decline by 41 percent. And what's true of Italy is true almost to the same degree in Germany; it's already happening in Japan and Korea. So we live in a world where many countries are suddenly encountering labor shortages. What you actually find, I suspect, when you go to Hartford or St. Louis or Kansas City, is that people can't find enough police officers, enough nurses, enough teachers.
And that is a problem we desperately need to focus on solving. So how do we do that? I do think AI is something that can help, even in something like a call center. One of the things that's fascinating to me: we have more than 3,000 customers around the world running proofs of concept. One fascinating one is a bank in the Netherlands. You go into a call center today, and the desks of the workers look like a trading floor on Wall Street. They have six different terminals. Somebody calls, and they're desperately trying to find the answer to a question. With something like GPT-4, with our services, six terminals can become one. Somebody who's working there can ask a question, the answer comes up, and what they're finding is that the person who's answering the phone, talking to a customer, can now spend more time concentrating on the customer and what they need. And I appreciate all the challenges. There's so much uncertainty. We desperately need to focus on skilling. But I really do hope that this is an era where we can use this technology to, frankly, help people fill jobs, get training, and focus more on what I think is America's economy of the future, and AI can help promote the development of that workforce. Senator Hawley, anything? You all have been really patient, and so has our staff. I want to thank our staff for this hearing. But, most important, we're going to continue these hearings. This one has been so helpful to us. I can go down our framework and tie the proposals to specific comments made by Sam Altman or others who have testified before us. We will enrich and expand our framework with the insights that you have given us. So I want to thank all of our witnesses and, again, look forward to continuing our bipartisan approach here. You made that point, Mr. Smith: we have to be bipartisan, and adopt full measures, not half measures. Thank you all. This hearing is adjourned.
Next, efforts to improve VA financial management. Officials from the VA spoke specifically about its Financial Management Business Transformation program, which is the VA's third attempt to modernize its financial and accounting systems.
Good afternoon. The subcommittee will come to order. We are here to review VA's progress in the Financial Management Business Transformation program. FMBT is the third attempt to replace VA's hodgepodge of aging financial systems. These systems are a serious problem: every year VA's financial audit produces a clean opinion despite the same material weaknesses and deficiencies persisting for a decade. At the same time, purchase card spending continues to look much like the Wild West. It has been 10 years since a former executive blew the whistle on billions of dollars of unauthorized commitments. Nothing has fundamentally changed: with so many purchase cards at so many different facilities, and no central tracking, the department is practically helpless to enforce its policies, much less root out waste and fraud. VA struggles with basic financial management functions that stretch the capabilities of its systems, like maintaining records when VA transferred CARES and ARP funds. During last month's hearing, VA struggled to answer basic questions about how that funding was handled. Additionally, one witness showed contempt for members for even trying to perform our basic oversight duties. This situation is untenable, and I appreciate that our witnesses today are attempting to solve it. Simply put, the FMBT program has to succeed. After a false start in 2016 and 2017, VA relaunched the effort in 2018. Since then, the integrated Financial and Acquisition Management System has been implemented in the National Cemetery Administration, a few offices within the Veterans Benefits Administration, the Office of Information and Technology, the Office of the Inspector General, and part of the Office of Acquisition, Logistics, and Construction. From the information we have, the system seems to be relatively successful in those offices.
There is still reason to be concerned. These organizations only add up to a few thousand users and a small fraction of VA's budget. Implementing the system in a major organization like the Veterans Health Administration, and among the big spenders within the Veterans Benefits Administration, keeps getting pushed out; now it is not scheduled for rollout until 2024. Meanwhile, the program's implementation cost continues to rise. I'm not suggesting that we have another EHRM on our hands. Let me be clear: I believe most of the premises of FMBT are sound. But the program is suffering from similar problems, like poor coordination between organizations within VA, struggles to reconcile VA's processes with commercial software, and extremely long schedules. It's been three and a half years since the subcommittee last examined the FMBT program, and I think veterans and taxpayers are overdue for an update. I appreciate the witnesses joining us today to help us better understand the challenges that you face. I look forward to working to overcome the difficulties and deliver this system successfully. With that, I will yield opening statement time to the Ranking Member. I'm happy to say we are having this hearing on the modernization program that is so pivotal to the future of VA. The use of aging financial management systems has led to manual workarounds, which impede VA's and Congress's ability to conduct oversight of spending. VA is the second largest federal agency, and it relies on an infrastructure that is decades old. This program has largely gone unnoticed. This is a good thing: when IT
programs go well, you usually don't hear about them. Unfortunately, this program is now experiencing delays. Given the importance of this program, this committee needs to understand the underlying issues. This program is foundational to creating not only financial efficiency but also the department's accountability and Congress's oversight. I hope to hear from VA and CGI Federal today how we can assure successful and timely development. I will relay a point I make at every hearing: VA obviously does not have the management infrastructure in place to coordinate and ensure the success of these large modernization efforts. There are bills with cosponsors that would at the very least start moving VA in the right direction, and I hope that we can start acting on them soon here in the House. Modernization is mandatory, not optional. It is in everyone's interest to do this in a way that does not harm veterans and employees or waste billions of dollars. Commitment to management and standardization of processes is essential to our future success. Thank you again, Chairman; I look forward to hearing from our witnesses this morning. Thank you. I will now introduce the witnesses on our first panel. First, from the Department of Veterans Affairs, we have Ms. Teresa, the Deputy Assistant Secretary for Financial Management. We also have Mr. Charles Tapp, the Chief Financial Officer for the Veterans Health Administration, and Mr. Daniel McCue, Deputy Chief Information Officer for Software Product Management at the Office of Information and Technology. Next we have Mr. Sydney Goetz, Senior Vice President for CGI Federal. Then we have Mr. Nick Dall, Deputy Assistant Inspector General for Evaluations at the Office of Inspector General of the Department of Veterans Affairs. If you would please rise and raise your right hand. Do you solemnly swear, under penalty of perjury, that the testimony you provide is the truth, the whole truth, and nothing but the truth?
Thank you. Let the record reflect that all witnesses have answered in the affirmative. You are now recognized for five minutes to deliver your opening statement. Good afternoon, Chairman, and all members of the subcommittee. Thank you for the opportunity to testify today about the Department of Veterans Affairs Financial Management Business Transformation program and its implementation of the integrated Financial and Acquisition Management System. I am accompanied by Daniel McCue, Deputy Chief Information Officer for Product Management. VA cannot continue to rely on its legacy financial management system due to the enormous risk it presents to VA operations: it is becoming increasingly difficult to support from a technical and functional standpoint, it cannot correct new audit findings, and it is not compliant with internal control standards. I am proud to report that iFAMS is no longer a proof of concept. It is successfully replacing the 1980s-era management system and has been successfully up and running for almost three years. We have completed six successful deployments of iFAMS, with 4,700 users across the enterprise, that is, the National Cemetery Administration, a portion of the Veterans Benefits Administration, and major staff offices, including the Office of Information and Technology and the Office of Inspector General. iFAMS users have collectively processed 3.5 million transactions representing almost 10 billion dollars in Treasury disbursements. The system is stable, achieving 99.9 percent uptime. On June 12, VA went live with its largest deployment to date, increasing the current user base by 60 percent.
It was also the first time VA went live simultaneously with both the finance and acquisition components of iFAMS, which demonstrates that iFAMS is a viable solution capable of becoming the next-generation financial and acquisition solution for VA. It is important to understand that iFAMS is not just a new core accounting and acquisition system; it is crucial to transforming our business processes and capabilities so we can meet our goals and objectives in compliance with financial management legislation and continue to successfully execute our mission to provide veterans with the health care and benefits they have earned and deserve. With so much at stake, both in terms of taxpayer dollars and the department's ability to serve veterans, it is vital that VA accurately track and report how funds are used. Fortunately, iFAMS significantly improves fund tracking abilities; among many other benefits, it will ensure proper tracking of expenditures. The rollout of iFAMS is increasing the transparency, accuracy, timeliness, and reliability of financial information. VA is gaining enhanced planning, analysis, and decision-making because of improved data integrity, functionality, and business intelligence, and we are demonstrating these achievements through a range of metrics based on industry best practices. The changes iFAMS brings are part of VA's strategy to resolve long-standing financial material weaknesses and strengthen internal controls. For example, in contrast to our current system, which cannot capture transaction approvals, iFAMS routes documents to approving officials and supports documentation being attached to the transaction; additionally, iFAMS requires additional levels of approval. It also eliminates the need for an external tool to adjust financial reporting. Perhaps most importantly, iFAMS complies with reporting requirements of the Department of the Treasury to capture various account attributes and conform to the U.S. Standard General Ledger.
VA's current system is unable to meet those requirements, which has led to extended and inefficient manual workarounds, and iFAMS remediates all of these. Our success has been, and continues to be, built on partnership, mutual respect, and two-way collaboration with our users. iFAMS has established a dedicated chief experience officer to coordinate user interactions and change management activities. The change management practice places a heavy emphasis on improvement, using customer feedback, our own observations, audit findings, and industry best practices. We capture lessons learned from each wave and incorporate those lessons into the next wave's deployment. FMBT continues to remain on budget. Our successes would not be possible without ongoing support from Congress, and we appreciate the opportunity today to discuss this important initiative. We will continue to work judiciously to modernize VA's financial and acquisition management systems for veterans and to provide you with updates as we make further progress. Although we are encouraged by our success, we are keenly focused on the difficult work that lies ahead and our steadfast commitment to see this initiative through. Chairman Rosendale, Ranking Member McCormick, and subcommittee members, this concludes my opening statement. I would be happy to answer any questions. Thank you; this will be entered into the hearing record. Mr. Goetz, you are now recognized for five minutes to deliver your opening statement. Chairman Rosendale, Ranking Member McCormick, and other distinguished members, thank you for the opportunity to appear today. My name is Sydney Goetz. I am a Senior Vice President at CGI Federal. For the last five years, I have served as project integrator on contract with VA for the Financial Management Business Transformation program, known as FMBT. Following the subcommittee's invitation, I am here to provide the requested status update and to underscore CGI Federal's ongoing commitment to the success of the VA FMBT program.
As you know, in 2016 VA established the FMBT program to modernize its 30-year-old legacy core financial management system, FMS, in compliance with applicable regulatory requirements. To accomplish this complex modernization effort, VA selected CGI Federal to deploy its Momentum enterprise resource planning product. Momentum, known to VA as the integrated Financial and Acquisition Management System, or iFAMS, is a financial management system that is operational in many government agencies. To mitigate program risk, the FMBT program is migrating users from VA's legacy financial and acquisition systems to iFAMS using an incremental deployment approach. Each deployment, or wave, delivers specifically configured capabilities to a defined set of VA organizations using an agile-based implementation methodology. To date, the FMBT program has completed six waves, deploying iFAMS to 4,700 users at 20 different VA offices. While there are still milestones and challenges ahead, to be sure, we are already delivering benefits to VA's finance and acquisition user community. These benefits include improved strategic and daily decision-making, process automation, compliance with federal accounting regulations, maintaining clean audits, and accommodating new regulatory requirements. A prime example of how iFAMS is helping the VA user community is the power of its real-time transaction processing and on-demand reporting. Today, iFAMS users can easily generate financial and acquisition reports and drill down into current, accurate data on demand. This is because when transactions are entered, they are first verified to meet VA standards and then automatically update budgets and the general ledger in real time. iFAMS users also have the capability to refresh reports hourly rather than daily, and can run most reports at the enterprise level, the administration level, or lower levels of the VA organization.
Before iFAMS, some similar reports took days to produce through manual, resource-intensive, spreadsheet-based processes. As with other complex programs, success often depends on the stakeholders' focus on key performance factors. The same holds true here: the team's focus on collaboration and transparency, enterprise-wide standardization, continuous improvement, diligent change management, and execution of its risk-based incremental delivery approach has kept the program moving forward. To illustrate this point, let me share how the team is maximizing the value of user acceptance testing. In the first few waves, users performed hands-on testing toward the end of each wave. This is a common and standard approach. Lessons learned told us that we would improve user adoption by having users perform iterative, hands-on testing of iFAMS functionality much earlier in each wave. We refined our program implementation methodology and applied this approach to the three most recent waves. By helping users gain an earlier appreciation of the system, the team gains useful feedback that informs change management and training. It has also allowed us to identify and resolve issues earlier, saving both time and resources. I end this testimony where I began, by reiterating CGI Federal's unwavering commitment to collaborating with the FMBT program to deliver iFAMS to the entire user base for the advancement of our veterans. I look forward to answering your questions. Your written statement will be entered into the hearing record. Mr. Dall, you are now recognized for five minutes to deliver your opening statement. Subcommittee members, thank you for the opportunity to discuss our oversight of VA's financial management challenges. Since 2015, the audit of VA's financial statements has reported a material weakness due to problematic financial management systems.
Full implementation of iFAMS could help resolve this persistent material weakness and increase the transparency, accuracy, timeliness, and reliability of financial information across VA. Accordingly, we began oversight of the implementation shortly after it went live in November 2020. Prior modernization efforts failed in part because of poor planning and flawed execution, combined with challenges in transitioning from legacy systems. Decentralized oversight, unrealistic timelines, inadequate engagement of stakeholders and end users, and minimal testing have plagued IT programs, resulting in changes in direction and vendors, and in steep costs. In the most recent audit of VA's financial statements, the auditor found three material weaknesses and two significant deficiencies. The material weakness most pertinent to this testimony focuses on the limited functionality of the current system to meet VA's financial management and reporting needs. Over time, VA's complex and antiquated financial system has deteriorated and no longer meets the increasingly stringent requirements mandated by the Treasury Department and OMB. Deficiencies in VA's financial management system are illustrated in findings we made on VA's use of COVID-19 funding, which showed VA lacked assurance that those funds were spent as intended. Generally, our three reports found VA was complying with transparency reporting requirements; however, we identified concerns with the completeness and accuracy of VA's reporting. A major cause for this is reliance on several systems, from payroll to purchase card transactions, requiring manual entries by staff, which increases the risk of reporting errors. The practice of manual expenditure transfers led to a lack of transparency and accountability over VHA purchases, as seen in our audit. We found VHA staff not properly documenting the transfers, and inadequate guidance from VHA's office of finance.
This happened because of financial reporting system limitations and a lack of oversight, which resulted in VHA medical facility staff determining on their own what constituted adequate procurement documentation. Additionally, staff did not follow basic controls, like documenting purchasing authority, splitting duties between requesting and purchasing items, and verifying goods were received. As a result, we reported an estimated $187 million