Transcript: C-SPAN3 Senate Commerce Hearing on Tech Companies' Use of Algorithms (2024-07-14)

Each witness has expertise on internet platforms' use of algorithms and machine learning, as well as engagement and persuasion, and brings a unique perspective to these matters. Your participation in this hearing is greatly appreciated as this committee continues to work on drafting data privacy legislation. I convened this hearing in part to inform legislation I'm developing that would require internet platforms to give users the option to engage with the platform without being steered by algorithms driven by their personal data.

The vast majority of content on these platforms is, at its best, entertaining and beneficial to the public. However, these platforms also have the ability, or at least the potential, to influence the thoughts and behaviors of literally billions of people. That is one reason why there's widespread unease about the power of these platforms, and why it's important for the public to understand how they use algorithms to make inferences from the reams of data about us, inferences that affect behavior and influence outcomes. Without safeguards such as real transparency, there is a risk that some internet platforms will seek to benefit their own interests and not necessarily the consumer's interest.

In 2013, former Google chairman Eric Schmidt wrote that modern technology platforms are even more powerful than most people realize, and that our future world will be profoundly altered by their adoption and success throughout society. Since that time, algorithms and artificial intelligence have become an important part of our lives. Large technology companies rely on AI-powered automation to select and display content that will optimize engagement. Unfortunately, the use of artificial intelligence can have an unintended and possibly dangerous downside. In April, Bloomberg reported that YouTube spent years chasing engagement while ignoring calls to monitor content. Earlier, it was reported that YouTube's recommendation system was showing a video of children playing in their backyard pool to users who had watched sexually themed content. That is troubling, and it indicates the real risk in a system that relies on algorithms and artificial intelligence to optimize for engagement. These are not isolated examples. Some have suggested that the filter bubble created by social media platforms like Facebook may contribute to our political polarization by encapsulating users in their own comfort zones, or echo chambers.

Congress has a role to play in ensuring that companies can continue to innovate while keeping consumers' interests at the forefront of their progress. While there must be a healthy dose of responsibility, companies should provide greater transparency about how the content we see is filtered. We should be able to see content that is not curated by algorithms. We are convening this hearing to examine whether explanation and transparency are policy options Congress should be considering. My hope is that at the hearing today we are able to understand how internet platforms use algorithms and machine learning to influence outcomes.

And we have a distinguished panel before us. Today we are joined by Mr. Tristan Harris, Ms. Maggie Stanfield, Dr. Stephen Wolfram, and Ms. Rashida Richardson. Thank you all again for participating on this important topic, and I want to recognize Senator Schatz for any opening remarks that he may have.

Thank you, Mr. Chairman. Social media and other internet platforms make their money by keeping users engaged, so they hired the greatest engineering and tech minds to get users to stay longer inside their apps and on their websites.
They discovered that one way to keep us all hooked is to use algorithms that feed us a constant stream of increasingly extreme and inflammatory content. This content is pushed out with very little transparency or oversight by humans. This setup, together with basic human psychology, makes us vulnerable to lies, hoaxes, and misinformation. The Wall Street Journal found that YouTube's recommendation engine leads users to conspiracy theories, partisan viewpoints, and misleading videos, even when users aren't seeking that kind of content. We saw that YouTube's algorithms were recommending videos of children after users watched sexualized content that did not involve children. And this isn't just a YouTube problem. We saw the biggest platforms struggle to contain the spread of videos of the Christchurch massacre and its anti-Muslim propaganda. It was live-streamed on Facebook, and over a million copies were uploaded. Many people reported seeing it on autoplay in their social media feeds without realizing what it was. Last month we saw a fake video of the Speaker of the House go viral.

I want to thank the chairman for holding this hearing, because things are going wrong here. I think it's this: Silicon Valley has a premise, that society would be better, more efficient, and smarter if we eliminated steps that involve human judgment. But if YouTube, Facebook, or Twitter employees, rather than computers, were making the recommendations, would they have recommended these awful videos in the first place? Now, I'm not saying that employees need to make every little decision, but companies are letting algorithms run wild and only using humans to clean up the mess. Algorithms are amoral. Companies design them to optimize for engagement as their highest priority, and by doing so they have eliminated human judgment from their business models. As algorithms take on an increasingly important role, we need them to be more transparent, and companies need to be more accountable for the outcomes they produce. Imagine a world where pharmaceutical companies weren't responsible for their medicines, or where engineers weren't held responsible for their work and we couldn't view their blueprints. Right now, social media is missing that accountability. Right now we have useful tools, but they're inadequate for the rigorous studies we need about the societal effects of algorithms. These are conversations worth having because of the influence that algorithms have on people's daily lives, and this only becomes more important as technology continues to advance. So thank you, Mr. Chairman.

We do have a great panel to hear from today, and we're going to start on my left, your right, with Mr. Tristan Harris; Ms. Maggie Stanfield, as I mentioned, a Google user experience director; Dr. Stephen Wolfram; and Ms. Rashida Richardson, director of policy research at the AI Now Institute. If you would confine your oral remarks to as close to five minutes as possible, it maximizes the chance for members to ask questions. Thank you all for being here; we look forward to hearing from you. Mr. Harris.

Thank you. Everything you said is sad to me, because it's happening not by accident but by design, because the business model is to keep people engaged. In other words, this hearing is about persuasive technology, and persuasion is about an invisible asymmetry of power. When I was a kid, I was a magician, and magic teaches you that you can have power without the other person realizing it. You say, "Pick a card, any card," while meanwhile you know exactly how to get that person to pick the card you want.
What we're experiencing with technology is an increasing asymmetry of power that has been masquerading as an equal or contractual relationship in which the responsibility is on us. Let's walk through why that is happening. In the race for attention, because there's only so much attention, companies have to get more of it. It starts with techniques like pull-to-refresh, which operates like a slot machine; it has the same addictive qualities that keep people in Las Vegas hooked on slot machines. Another example is removing stopping cues: if I take the bottom out of this glass and keep refilling it, you won't know when to stop drinking. That's what we do with feeds, and it keeps people scrolling.

The race for attention has to get more aggressive, so it's not enough to predict your behavior; we have to keep you hooked in a different way. So the race crawled deeper down the brain stem, to our social validation. That was the introduction of likes and followers: it was cheaper, instead of getting your attention, to get you addicted to getting attention from other people. This has created the mass narcissism and the cultural problems happening with young people today. After two decades in decline, self-harm rates among 10- to 14-year-old girls have shot up 170 percent in the last eight years. That has been attributed to social media.

And in the race for attention, it's not enough to get people addicted to attention; the race has to migrate to AI: who can build a better predictive model of your behavior? On YouTube, you hit play on a video; you think you're going to watch one video, and you wake up two hours later, and what happened? The answer is that you had a supercomputer pointed at your brain. The moment you hit play, it wakes up an avatar version of you, and that avatar is based on everything you have ever watched and liked, everything that makes the avatar look like you, so that inside a server they can simulate more and more possibilities: if I show you this video, or this video, how long would you stay? And the business model maximizes one thing: watch time. This is what caused 70 percent of YouTube's traffic to be driven by recommendations, not by human choice but by the machines. It's a race between Facebook's voodoo doll and Google's voodoo doll. These are metaphors that apply to the whole tech industry; it's a race to better predict your behavior. Facebook can predict for an advertiser when you're about to become disloyal to a brand: if you're a mother who buys Pampers diapers, Facebook can tell Pampers that this user is about to become disloyal. So they can predict things about us that we don't know about ourselves. That's a new level of asymmetric power.

We have a name for this kind of relationship: a duty of care, the same standard we apply to doctors, priests, and lawyers. Imagine a world in which priests only made their money by selling access to the confession booth to someone else, except Facebook listens to two billion people's confessions and is calculating and predicting the confessions you're going to make. I want to finish by saying that this affects everyone, even if you don't use the products: you still send your kids to a school where other people believe anti-vaccine theories, and other people vote in your elections. In 2011, Marc Andreessen, the founder of Netscape, wrote that software is going to eat the world. What he meant was that software can do every part of society more efficiently than non-software, so we were going to allow software to eat up our elections, our media, our taxis, our transportation.
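To make the mechanism Mr. Harris describes concrete, here is a minimal sketch, in Python, of an engagement-maximizing recommender: a toy predictive model of the user (the "avatar") scores each candidate video by expected watch time, and the system recommends whichever scores highest. All names and the scoring function here are hypothetical illustrations, not any platform's actual code.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    topic: str
    intensity: float  # 0.0 = mild, 1.0 = extreme or inflammatory

def predicted_watch_time(history: list[Video], candidate: Video) -> float:
    """Toy stand-in for a learned model of the user (the 'avatar')."""
    # Affinity: how often the user has watched this topic before.
    affinity = sum(1 for v in history if v.topic == candidate.topic)
    # In this toy model, higher intensity predicts longer watch time,
    # which is exactly the dynamic the testimony warns about.
    return (1 + affinity) * (1 + candidate.intensity)

def recommend(history: list[Video], candidates: list[Video]) -> Video:
    # Pure engagement maximization: no term for accuracy or wellbeing.
    return max(candidates, key=lambda c: predicted_watch_time(history, c))

history = [Video("Moon landing footage", "space", 0.1)]
candidates = [
    Video("How rockets work", "space", 0.2),
    Video("The moon landing was faked!", "space", 0.9),
]
print(recommend(history, candidates).title)  # -> The moon landing was faked!
```

Note that nothing in this objective penalizes extremity or rewards accuracy; the more inflammatory video wins purely because it is predicted to hold attention longer.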
And the problem was that software was eating the world without taking responsibility for it. We had rules and standards around Saturday morning cartoons; when YouTube gobbles up that part of society, it takes away all of those protections. Fred Rogers testified before this committee 50 years ago, concerned about the animated bombardment we were showing children. I think he would be horrified by what we are doing now. He talked to the committee, and the committee chose differently. I'm hopeful we're able to do that today. Thank you.

Thank you.

Chairman, members of the committee, thank you for inviting me to testify today on Google's efforts to improve the digital wellbeing of our users. I appreciate the opportunity to outline our programs and discuss our research in this space. My name is Maggie Stanfield, and I lead the cross-Google digital wellbeing initiative. The initiative is a top company goal, and we focus on providing users with insights about their tech habits and the tools to support an intentional relationship with technology.

At Google, we have heard from many of our users all over the world that technology is a key contributor to their sense of wellbeing: it connects them to those they care about and provides information and resources that build their sense of safety and security. It has provided services for billions of users around the world. For most people, their interaction with technology is positive, and they're able to make healthy choices about screen time and overall use. But as technology becomes increasingly prevalent in our day-to-day lives, for some people it can distract from what matters most. We believe technology should play a useful role in people's lives, and we've committed to helping people strike the balance that is right for them. This is why our CEO first announced the digital wellbeing initiative, with features across our platforms to help people better understand their tech usage and focus on what matters most. In 2019 we applied what we learned from users and experts. I'd like to go into more depth about the products and tools we have developed for our users.

On Android, the latest version of our mobile operating system, we added key capabilities to help users find a better balance with technology, focused on raising awareness of tech usage and providing controls to help them oversee their use. This includes a dashboard showing time spent on the device, and app timers to set time limits on specific apps. It includes an improved do-not-disturb function to silence phone calls and texts, as well as the notifications that pop up. And we introduced a wind-down feature that puts the phone into night-light mode. Finally, we have a new setting called focus mode, which allows pausing specific apps and notifications that users might find distracting.

On YouTube, we have launched a series of updates to help our users define their own sense of wellbeing, including reminders to take a break and the option to combine all app notifications into one. We've also listened to the feedback about the YouTube recommendation system. Over the past year, we've made a number of improvements to these recommendations, raising up reliable content when people are coming to YouTube for news, as well as reducing content that comes close to violating our policies or spreads harmful misinformation. When it comes to children, we believe the bar is higher. That's why we created Family Link, to help parents stay in the loop as their child explores on Android.
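As a rough illustration of the bookkeeping behind the app timers and focus mode just described (a sketch under stated assumptions, not Google's implementation), per-app usage can be tracked against a user-set daily limit, with launches blocked once the limit is reached. All class and method names here are hypothetical.

```python
from datetime import timedelta

class AppTimer:
    """Toy model of a per-app daily time limit (hypothetical, for illustration)."""

    def __init__(self) -> None:
        self.limits: dict[str, timedelta] = {}   # user-set daily limits
        self.usage: dict[str, timedelta] = {}    # time spent today

    def set_limit(self, app: str, minutes: int) -> None:
        self.limits[app] = timedelta(minutes=minutes)

    def record_usage(self, app: str, minutes: int) -> None:
        self.usage[app] = self.usage.get(app, timedelta()) + timedelta(minutes=minutes)

    def can_launch(self, app: str) -> bool:
        """False once today's usage has reached the daily limit."""
        limit = self.limits.get(app)
        return limit is None or self.usage.get(app, timedelta()) < limit

timer = AppTimer()
timer.set_limit("video_app", 30)
timer.record_usage("video_app", 30)
print(timer.can_launch("video_app"))  # -> False: the app is paused for today
```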
On Android Q, parents will be able to set time limits and remotely lock their child's device. YouTube Kids was designed with the goal of giving parents control over the content their children watch. We use a mix of filters, user feedback, and moderators, and we offer parents the option to take full control over what their children watch by hand-selecting the content that appears in the app.

We're also conducting our own research and engaging in partnerships with independent researchers to build a better understanding of the many impacts of digital technology. We believe this knowledge can shape and drive the entire industry toward creating products that support a better sense of wellbeing. To make sure we're evolving the strategy, we've launched a longitudinal study to assess the effectiveness of digital wellbeing tools. We believe this is just the beginning. As technology becomes more integrated into people's daily lives, we will ensure our products support digital wellbeing. We're committed to investing more, optimizing our products, and focusing on these experiences. Thank you for the opportunity to outline our efforts in this space; I'm happy to answer any questions you might have.

Thank you.

Thanks for inviting me here today. I have to say this is pretty far from my usual venue, but I have spent my life working on computation and AI, and perhaps some of what I know can be helpful today. First, here is the way one can frame the issue: many successful internet companies, like Google, Facebook, and Twitter, are what one can call automated content selection businesses. They ingest lots of content and then essentially use AI to select what to show to users. How does that AI work, and how can one tell whether it's doing the right thing? People assume computers run algorithms that someone sat down and wrote. Modern AI doesn't work that way; it is usually constructed automatically, learning from a massive number of examples, and there's embarrassingly little of it that we humans can understand. Here is the problem: it's a fact of basic science that if you insist on explainability, you can't get the full power of the system or the AI.

If you can't open up the AI and understand what it's doing, how about putting external constraints on it? Can you write a contract that says what the AI is allowed to do? Partly through my own work, we're starting to be able to formulate computational contracts: contracts written not in legalese but in a precise computational language suitable for an AI to follow. But what should the contract say? What's the right answer for what should be at the top of someone's news feed, or the right rule for balance of content? Well, as AI starts to run more and more of our world, we're going to have to develop a whole network of AI laws, and it's going to be super important to get this right, starting off by agreeing on the right AI constitution. It's going to be a hard thing, kind of like writing down computationally how people want the world to work, and right now that's still in the future.

So what can we do now about people's concerns with automated content selection? I have to say I don't see a purely technical solution, but I didn't want to come here and say everything is impossible, especially since I personally like to spend my life solving seemingly impossible problems. I think, if we want to do it, we actually can use technology to set up a market-based solution. I've got a couple of concrete suggestions for how to do that, based on giving users a choice about whom to trust for the final content they see: one uses final ranking providers, the other uses constraint providers.
In both cases, these are third-party providers who basically insert their own little AIs into the pipeline of delivering content to users. The point is that users can choose which of these providers they want to trust. The idea is to leverage everything the big automated content selection businesses have, but to essentially add a new market layer, so that by picking a particular provider, users get a say in how content is selected for them. You avoid all-or-nothing banning of content, you don't have a single point of failure for spreading bad content, and you open up a new market potentially delivering higher value for users. Of course, for better or worse, unless you decide to force certain content or a diversity of content, which you could, people could live in their own bubbles.
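A minimal sketch of the pipeline Dr. Wolfram proposes, assuming a simple functional decomposition: the platform generates candidate items at scale, a user-chosen constraint provider filters them, and a user-chosen final ranking provider orders what remains. All provider names and data shapes here are hypothetical.

```python
from typing import Callable

Item = dict  # e.g. {"title": ..., "source": ..., "timestamp": ...}
Ranker = Callable[[list], list]
Constraint = Callable[[Item], bool]

def deliver_feed(candidates: list, ranker: Ranker, constraints: list) -> list:
    """Platform pipeline with user-chosen third-party providers plugged in."""
    # Constraint providers act as filters; the ranking provider orders the rest.
    allowed = [item for item in candidates if all(c(item) for c in constraints)]
    return ranker(allowed)

# Example providers a user might pick from a marketplace:
def chronological_ranker(items: list) -> list:
    return sorted(items, key=lambda it: it["timestamp"], reverse=True)

def verified_source_constraint(item: Item) -> bool:
    return item["source"] == "verified"

candidates = [
    {"title": "Breaking rumor", "source": "unverified", "timestamp": 3},
    {"title": "Local news", "source": "verified", "timestamp": 2},
    {"title": "Analysis piece", "source": "verified", "timestamp": 1},
]
feed = deliver_feed(candidates, chronological_ranker, [verified_source_constraint])
print([item["title"] for item in feed])  # -> ['Local news', 'Analysis piece']
```

The design point is that candidate generation stays with the platform, while trust in selection is delegated to interchangeable third parties chosen by the user, which is what avoids both a single point of failure and all-or-nothing banning.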
