
CSPAN3 Federal July 3, 2024

Welcome back, everyone. Pursuant to the previous order, the chair declares the committee in recess. [Recess.] The committee will come back to order. The chair recognizes Representative Langworthy for five minutes.

Thank you, Mr. Chairman. I would like to thank all of our witnesses for being here today to continue driving the AI conversation forward. The opportunity the federal government has to implement AI into its everyday operations is potentially exciting for the future of the country and for the modern workforce. However, I would like the subcommittee, and all of us, to consider the impact of AI and other emerging technologies on our younger generations. While AI has numerous benefits that I'm sure will be discussed here today, there are serious implications for our youth, especially when it comes to generative images and child exploitation. I would be more than happy to work with the rest of the Oversight Committee to address these concerns. Before we do that, I want to speak about some of the AI frameworks that have been developed, specifically by the National Institute of Standards and Technology. NIST has a well-established track record of developing frameworks and recommendations to improve cybersecurity outcomes in the federal government. Earlier this year it published a groundbreaking AI framework, which was developed at Congress's direction in an open, multi-stakeholder process. Leading companies are already using that framework for managing AI risks, just as they use the Cybersecurity Framework and other recommendations. Doctor, I would like to ask whether you see the AI framework being taken up by the federal government in the same way the cybersecurity work is being used today, and what steps, if any, your office is taking to implement it.

Thank you so much. I had the great pleasure of leading that organization many decades ago, when my hair was still black, and I share your point about the important role it has played in cybersecurity and other areas. In AI, the Risk Management Framework, when they put it out, was one important step in a longer journey toward getting to where we can actually have safe and effective AI, whether in private or public sector use. As you have seen with industry, I see that approach also now starting to be used within government. What it allows people to do is know what questions to ask about how to make an AI system safe and effective. Depending on the application, the questions will be different and the process they go through will be different, but that is a starting point. To me, it is just important to know whether your organization is using that framework; it is table stakes to know you are actually asking the questions. I want to step back, though, and be clear that we all understand what we need: a future where AI systems do not do dangerous or inappropriate things you do not want them to do. I think we should all be very clear that the companies, the researchers, nobody really quite knows yet how to do that. So the technology community's work that is still ahead is to continue developing tools and methods so we can get as good an understanding of whether an AI system is safe and effective as we have for physical products and many other areas. That is some of the work that still remains to be done.
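The NIST AI Risk Management Framework discussed above is organized around four core functions: Govern, Map, Measure, and Manage. The framework itself is a process document, not software, but as a minimal sketch of the witness's point about "knowing what questions to ask," the Python example below tracks a checklist keyed to those functions. The specific questions and field names are illustrative assumptions, not official NIST text.

```python
# Illustrative sketch only: a minimal checklist keyed to the NIST AI RMF's
# four core functions (Govern, Map, Measure, Manage). The questions and the
# data layout are editorial assumptions, not official framework content.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    function: str       # one of "Govern", "Map", "Measure", "Manage"
    question: str       # a question the organization should be able to answer
    answered: bool = False
    notes: str = ""

def default_checklist() -> list[ChecklistItem]:
    """Return a starter set of questions, one per RMF function."""
    return [
        ChecklistItem("Govern", "Who is accountable for this AI system's outcomes?"),
        ChecklistItem("Map", "What is the intended use case, and who could be affected?"),
        ChecklistItem("Measure", "How are accuracy, bias, and robustness being tested?"),
        ChecklistItem("Manage", "What is the process for responding when the system fails?"),
    ]

def open_questions(items: list[ChecklistItem]) -> list[str]:
    """List the questions the organization has not yet answered."""
    return [item.question for item in items if not item.answered]

if __name__ == "__main__":
    checklist = default_checklist()
    checklist[0].answered = True
    print("Still open:", open_questions(checklist))
```

The point of the sketch is the witness's "table stakes" remark: the value is simply in making the questions explicit and visible, whatever tooling an agency actually uses.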
I want to follow up and ask about criticism of the AI Bill of Rights blueprint. The blueprint has been criticized for being in conflict with the framework. Can you address this?

I would be happy to. The AI Bill of Rights focused on our values, which are so important. These are very choppy times and choppy waters with this technology. If you go back and look at the Bill of Rights, it talks about how important it is to make sure people are not discriminated against and have access to safe and secure systems. Those are a lot of the same things you will find in the Risk Management Framework and in everything we have been talking about today, so it is very consistent with the Bill of Rights. That work was developed by OSTP people working closely with others across government, with many, many inputs from private organizations, companies, civil society organizations, and academics. Then the Risk Management Framework came on the heels of that, and there was a lot of close communication and coordination. To me, part one was values; part two was the initial steps of how an organization starts grappling with this and what processes it needs to put in place to manage these risks.

Unfortunately, I am out of time and yield back. I will be following up with questions.

The gentleman from New York yields. I would like to yield my five minutes back to Mr. Langworthy.

Thank you very much. I also wanted to bring up an executive order issued by the last administration requiring federal agencies to post for public view most of their AI use cases. This is intended to give the public a view into the administration's current and planned systems. Many of these agency inventories are missing or incomplete, according to a Stanford University AI institute white paper issued last December. Do you agree the public has a right to know for what purposes AI is being used by federal agencies, and that it is important for these inventories to be done consistently, completely, and accurately? Will you pledge to continue working to ensure that is the case?

Thank you. I share your focus on the value of those use-case inventories, for the reasons you mentioned. It is important for the public to know, and for people across government to understand, how AI is being used. There is important progress we are making, and will continue to make, as the federal government on those use cases.

Thank you. Transparency, I think, is something we all need to fight for, especially as this emerging technology is coming at us so quickly. I want to see if regulatory sandboxes have been part of your conversations. The European Parliament approved its AI Act, which includes setting up coordinated AI regulatory sandboxes to foster innovation in AI across the EU. Do you see regulatory sandboxes having success in the EU, and do you think they would be successful in the United States?

My colleagues may have answers on that; I do not think I have enough information to give you a complete answer. I will just note that we continue to work with our colleagues and allies in Europe and around the world, simply because AI is happening everywhere and different regions are taking different approaches. We are finding that with our like-minded allies we all share this focus on getting to an effective AI future, and I think it will be an important collaboration.

We think being able to work effectively with AI alongside our partners and allies is extremely important. We have been focusing a lot not only on data sharing and how we do that effectively, but also on how we build models together and evaluate the effectiveness of those models. We have a number of initiatives.

Nothing to add.
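Earlier in this exchange the member raised the executive order requiring agencies to publish inventories of their AI use cases. As a hedged sketch only, assuming illustrative field names rather than the official reporting schema, a single inventory entry might be represented like this:

```python
# Hedged sketch of an agency AI use-case inventory entry. The fields are
# illustrative assumptions for discussion, not the official reporting schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIUseCase:
    agency: str
    name: str
    purpose: str            # plain-language description of what the AI does
    lifecycle_stage: str    # e.g. "planned", "pilot", "in production"
    uses_personal_data: bool
    public_facing: bool

def to_public_record(use_case: AIUseCase) -> str:
    """Serialize a use case for public posting; reject incomplete entries."""
    record = asdict(use_case)
    missing = [key for key, value in record.items() if value in ("", None)]
    if missing:
        raise ValueError(f"Incomplete inventory entry, missing: {missing}")
    return json.dumps(record, indent=2)

if __name__ == "__main__":
    example = AIUseCase(
        agency="Example Agency",    # hypothetical
        name="Website search assistant",
        purpose="Summarize and rank public documents for citizen queries",
        lifecycle_stage="pilot",
        uses_personal_data=False,
        public_facing=True,
    )
    print(to_public_record(example))
```

The completeness check mirrors the member's point that inventories are only useful if they are done consistently, completely, and accurately.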
In the meantime, I want to focus on DHS. Are you concerned that AI systems, as they become more mature and complicated, will give criminals greater opportunity to commit heinous crimes like child exploitation?

We absolutely are concerned there. We are also looking to harness AI to combat those crimes. I shared earlier our work with Homeland Security Investigations on an operation in which we hope we can use AI to help rescue victims from active abuse as well as to arrest suspected perpetrators. So as we look to better defend against the use of AI to commit these crimes, we are also using it to defend against them.

The protections have to be built at the same time as all of the fruits that AI can bring us. Our most vulnerable, I believe, are those most likely to be harmed by a lot of this AI technology. If I can expand the scope to include American adversaries unleashing powerful cyber attacks against U.S. critical systems, what is DHS doing in preparation to respond?

We are, and have been, concerned about adversarial use of AI against federal and critical infrastructure networks. The department has established the AI Task Force, which I co-lead, and looking at the use of AI to secure critical infrastructure is one of our critical objectives. We are working with the Cybersecurity and Infrastructure Security Agency to look at how we can effectively partner with critical infrastructure organizations regarding their uses of AI and on strengthening their cybersecurity practices to defend against threats.

I yield back.

The gentleman yields. The chair recognizes the honorable Mr. Connor of California for five minutes.

Thank you. Doctor, I thought your description of AI as statistics at scale was one of the best I have heard. Was that your phrase?

I think it is mine, but it might not be, so I do not want to claim it if it is not. It is one I have been using for a while.

I appreciate it. I do not always agree with him, but I thought his op-ed in the New York Times, where he talked about human intelligence, what it entails, and how it is different from a predictive model that takes a lot of data and produces probabilistic outcomes, was very thoughtful. One of the concerns I have is that there has been an overhyping of AI as a form of human intelligence; I think that gives our species less credit than we deserve, so I appreciate your clarification. Nancy Mace and I have a bill called the SEARCH Act, which would basically require government agencies to use AI technology to help improve the search functions on their own websites and in collecting data. Could you describe what the benefits of having AI do that kind of search for government agencies could be?

Thank you for your leadership on that matter, as well as on other issues related to AI. I think you have described it very clearly. If you step back and think about how much of what the government does is about interacting with citizens, providing information and taking information, those are areas where this new generation of language-based AI can have tremendous benefits, but it has to be used thoughtfully and carefully. Search is a great example. People are starting to use generative AI to summarize complex documents, to synthesize arguments from across many different perspectives, and to draft responses. I emphasize "draft," because anyone who has worked with these technologies knows their limits. I think what we are seeing in the public and private sectors is that there are few cases where we are simply relying on a chatbot to solve a problem.
But there are many cases where that interaction might be the beginning of accelerating and improving the work. I think those are interesting examples. They are different from, and build on top of, the many ways government agencies are already using AI on sensor data or other data they collect that is not language-based. I think this is the next chapter that is starting to unfold, and I appreciate your focus.

When you look at AI, and I realize these things are hard to predict, how do you think it will have an impact on jobs over the next ten years? Is it a case of augmenting people's talent? My concern is not that, if they had AI bots write all the scripts for Hollywood, they would not be able to do it. My concern is that it would be terrible. They will not produce Hamlet; it will just be the further evolution of entertainment. Many times I have used ChatGPT, and I have encouraged my staff to use it for speeches, and it is not as good as CliffsNotes. If professors are finding that students using it are getting good grades, it is probably because they are not asking the right questions; the class is probably not challenging enough. My point is, where is it going to displace things, how do we prepare for that, and where will it be creating opportunities?

The focus on the impact of AI technology on jobs is critically important. We have a long history; we know technology does change work, in all kinds of ways. Let me start by saying it is very early. Right now we do not fully know; it has not fully played out how this new generation of language-based AI will blossom and what effects it will have. The best understanding many experts have in this area is that there will be things that look a lot like prior changes when a technology comes in, and there are things that will not look the same. What I think we can expect is that some jobs may get upskilled, become more available, and allow people to earn more for their labor. Other jobs will get displaced. That has happened with every wave of technology, not just for decades but probably for millennia. What I think is very different about this new generation is the fact that it can be used to do administrative tasks and creative tasks, everything from graphic design and image generation to writing documents to even legal analysis. So a lot of the kinds of professions that years ago I think people imagined were not going to be touched by AI technology will now come into the limelight.

My time has expired, but I am still waiting for ChatGPT to come up with "statistics at scale." We will see.

I do not think that is possible.

With that, the chair now recognizes the honorable Representative Timmons from South Carolina for five minutes.

Thank you. Doctor, I want to start with you. You define AI in a way that I have never heard before, in a way that is not really how other sources would define it. Can you elaborate on that definition?

Sure. It actually does not define all of it; it is modern AI. There are a lot of older rule-based expert systems where they have written a bunch of statements, and I would not call that statistics. But modern AI is based on gathering massive amounts of data from the past; that is our lens into the world. Particularly highly curated, labeled data that represents the task at hand, and using that to build a model. If you can think back to any simple statistics class you had, the regression is the model. So it builds the model, and then it uses that model to predict the future. I do not think anyone in the scientific community would disagree with that as a general characterization.
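A minimal illustration of the "statistics at scale" characterization the witness gives: fit a model to past observations, then use it to predict, remembering that every answer is statistical and will sometimes be wrong. The sketch below uses only the Python standard library, and the data points are invented for the example.

```python
# "Statistics at scale" in miniature: fit a simple linear model to past
# observations, then use it to predict a future value. Every prediction is
# statistical and can be wrong; the data here are invented for illustration.

def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

if __name__ == "__main__":
    # "Data from the past": noisy observations of some quantity over time.
    past_x = [1.0, 2.0, 3.0, 4.0, 5.0]
    past_y = [2.1, 3.9, 6.2, 8.1, 9.8]
    slope, intercept = fit_line(past_x, past_y)
    # "Predict the future": extrapolate to a point the model has never seen.
    future_x = 6.0
    prediction = slope * future_x + intercept
    print(f"model: y ~ {slope:.2f}*x + {intercept:.2f}; prediction at x={future_x}: {prediction:.2f}")
```

Modern language-based AI replaces this two-coefficient regression with billions of parameters fit to enormous datasets, but the structure the witness describes is the same: curate past data, fit a model, predict.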
I appreciate you elaborating, and I see your point. I agree. Can we talk about possible uses of AI within DOD, or about adversaries' capabilities?

One of the reasons, if I may, that I describe it like that is that it helps people realize AI is not monolithic. When we say AI, what we really mean is a specific AI-based technology for a specific use case, and it is important to differentiate that. We might be doing really well for one use case and poorly in another, and that may be so for our adversaries as well. If we focus on AI mostly as a monolithic thing, as if we win and then they lose or vice versa, we lose sight of the particular capabilities we want to deliver or what we want to defend against. We spend a lot of energy characterizing those, and that is a conversation I would be happy to have with you in a different venue. There are lots of use cases within the business aspects of the Department of Defense. The analysis of documents with modern language-based AI is really effective, and understanding the environment using computer vision is really helpful. In those cases, when you think about a document or an image being analyzed and an action being taken from that analysis, it is really important to remember the answer is statistical, and it is really important to us that the action is not simply dependent on that algorithm. A human could say we got it wrong. It is statistical, so every model will sometimes get it wrong. You need to have a human-machine structure so a human can correct the system and make it better.

One of the benefits is the speed at which it can act. If you have a drone swarm with AI, how do you incorporate the human aspect? The whole benefit of using AI would be the speed at which it can react.

That is an excellent question; thank you. One thing the military does well is train with technology. You can think about the way our training works, over and over again, as a way to justify confidence in a tool. If you have confidence in your weapon, even though sometimes it will jam, you still get a sense of the conditions in which it might, and you learn how to use it.

I see where you are going: using that component to make sure you have answered the question 4,000 times before it has dealt with live fire. Sometimes it will get it wrong, and whoever made the decision to deploy it will be responsible.

There always is a responsible agent making decisions.

I am concerned that while our military will make sure of the human component, an adversary that does not care about collateral damage or the consequences of its actions may be using the same technology without regard for collateral damage.

That is 100 percent correct.

What are the tools and countermeasures we need for that situation?

It is really important not to think about it as monolithic, but...

Thank you for being here.

The gentleman yields.
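The preceding exchange about building confidence through repeated testing and keeping a responsible human in the loop can be sketched in code. This is a hedged illustration only: the toy classifier, the confidence threshold, and the 4,000-case test loop are assumptions made for the example, not any actual DOD system or policy.

```python
# Hedged sketch of a human-in-the-loop pattern: estimate a model's error rate
# by testing it many times, then route low-confidence outputs to a human.
# The toy model, threshold, and test data are illustrative assumptions.
import random

def toy_model(x: float) -> tuple[int, float]:
    """Pretend classifier: returns (label, confidence). Stands in for a real model."""
    confidence = min(0.99, abs(x))      # contrived confidence score
    label = 1 if x > 0 else 0
    return label, confidence

def estimated_error_rate(cases: list[tuple[float, int]]) -> float:
    """Run the model over many labeled test cases, like repeated training exercises."""
    wrong = sum(1 for x, truth in cases if toy_model(x)[0] != truth)
    return wrong / len(cases)

def decide(x: float, threshold: float = 0.8) -> str:
    """Act automatically only when confident; otherwise escalate to a human."""
    label, confidence = toy_model(x)
    if confidence < threshold:
        return f"escalate to human reviewer (confidence {confidence:.2f})"
    return f"automated decision: {label} (confidence {confidence:.2f})"

if __name__ == "__main__":
    random.seed(0)
    # 4,000 repetitions, echoing the member's point about testing before live use.
    test_cases = []
    for _ in range(4000):
        x = random.uniform(-1.0, 1.0)
        truth = 1 if x > 0 else 0
        if random.random() < 0.05:      # simulate label noise / genuinely hard cases
            truth = 1 - truth
        test_cases.append((x, truth))
    print(f"estimated error rate over 4,000 test cases: {estimated_error_rate(test_cases):.2%}")
    print(decide(0.95))   # confident -> automated
    print(decide(0.10))   # not confident -> human review
```

The design choice mirrors the witness's point: because every statistical model will sometimes be wrong, the system needs a measured error rate and a path for a responsible human to intervene and correct it.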
The chair recognizes Representative Higgins from Louisiana for five minutes.

Thank you, Mr. Chairman. I appreciate the committee waiving me on to address this topic, and thank you for being here. Doctor, you have a lovely name, and we appreciate you being here. Madam, in your opening statement you say that AI also will bring risks to privacy as more and more sensitive information is used to train AI systems, and that governments are already using AI to censor and repress expression and abuse human rights. Is that part of your statement?

Yes, sir, it is.

Okay, I am clarifying. I have a concern I would like to focus on regarding government's use of AI in the enforcement of laws and regulations; I am strongly against that. I will ask you, and law enforcement is my background, as you may not know, that I appreciate the work that goes on at the ground level of enforcement, but I have my concerns there. You referenced authoritarian governments using AI. Talk to us about criminal enterprises or state-sponsored cyber threat enterprises, how that...
