Transcript: C-SPAN3 Senate Commerce Hearing on Tech Companies' Use of Algorithms (20240714)

I want to thank everyone for being here today to examine the use of persuasive technologies on internet platforms. Each of our witnesses today has a great deal of expertise with respect to algorithms, as well as in the narrower context of engagement and persuasion, and brings unique perspectives to these matters. Your participation in this hearing is appreciated. I convened this hearing in part to inform legislation I am developing to give consumers the option to engage with a platform without having the experience shaped by algorithms that use user-specific data.

Internet platforms have transformed the way we interact and have benefited society in ways too numerous to count. The vast majority of content is innocuous, and at its best it is entertaining, educational, and beneficial to the public. However, the powerful mechanisms behind these platforms also have the ability, or at least the potential, to influence the thoughts and behaviors of literally billions of people. That is one reason why there is widespread unease about the power of these platforms, and why it is important for the public to better understand how these platforms use artificial intelligence and opaque algorithms to influence outcomes. Without safeguards such as real transparency, there is a risk that some internet platforms will seek to optimize engagement to benefit their own interests and not the consumer's interest.

In 2013, Eric Schmidt wrote that modern technology platforms "are even more powerful than most people realize, and our future will be profoundly altered by their adoption and successfulness in societies everywhere," end quote. Since that time, algorithms have become a large part of our lives, largely without us realizing it. As online content continues to grow, internet companies rely on A.I.-powered automation. Unfortunately, the use of artificial intelligence can have an unintended and possibly even dangerous downside. In April, Bloomberg reported that YouTube has spent years chasing engagement. Earlier this month, the New York Times reported that YouTube's automated system was found to be playing videos of children in their backyard pools to users who had watched sexually themed content. That is truly troubling, and it indicates the real risk in a system that relies on algorithms and artificial intelligence to optimize for engagement. These are not isolated examples. For instance, some have suggested that the filter bubble created by platforms like Facebook may contribute to our political polarization by encapsulating users in their own comfort zones, or echo chambers.

Congress has a role to play in ensuring that companies have the freedom to innovate, but in a way that keeps consumers' interests and wellbeing at the forefront of their progress. While there must be a healthy dose of personal responsibility when users participate in seemingly free online services, companies should also provide greater transparency about how exactly the content we see is being filtered. Consumers should have the option to engage with a platform without being manipulated by algorithms powered by their own personal data, especially if those algorithms are opaque to the average user. We are convening this hearing to consider whether algorithmic transparency is an option Congress should be considering. Ultimately, my hope is that at this hearing today we are able to better understand how internet platforms use algorithms, artificial intelligence, and machine learning to influence outcomes. We have a very distinguished panel before us.
Today we are joined by Tristan Harris, the cofounder of the Center for Humane Technology; Ms. Maggie Stanphill; Dr. Stephen Wolfram; and Ms. Rashida Richardson, director of policy research at the AI Now Institute. Thank you all again for participating on this important topic. I want to recognize Senator Schatz for any opening remarks that he may have.

Thank you, Mr. Chairman. Social media and other internet platforms make their money by keeping users engaged, and so they have hired the greatest engineering and tech minds to get users to stay longer inside their apps and on their websites. They discovered that one way to keep us all hooked is to use algorithms that feed us a constant stream of increasingly more extreme and inflammatory content. This content is pushed out with very little transparency or oversight by humans. This setup, and also basic human psychology, makes us vulnerable to lies, hoaxes, and misinformation.

A Wall Street Journal investigation last year found that YouTube's recommendation engine often leads users to conspiracy theories, partisan viewpoints, and misleading videos, even when users aren't seeking out that kind of content. We saw that YouTube's algorithms were recommending videos of children after users watched sexualized content that did not involve children. This isn't just a YouTube problem. We saw all of the biggest platforms struggle to contain the spread of videos of the Christchurch massacre and its anti-Muslim propaganda. The shooting was live-streamed on Facebook, and over a million copies were uploaded across platforms. Many people reported seeing it on autoplay in their social media feeds without realizing what it was. Just last month we saw a fake video of the Speaker of the House go viral.

I want to thank the chairman for holding this hearing because, as these examples illustrate, something is really wrong here. I think it is this: Silicon Valley has a premise. It is that society would be better, more efficient, smarter, more frictionless if we would just eliminate steps that include human judgment. But if YouTube, Facebook, or Twitter employees, rather than computers, were making the recommendations, would they have recommended these awful videos in the first place? Now, I'm not saying that employees need to make every little decision, but companies are letting algorithms run wild and only using humans to clean up the mess. Algorithms are amoral. Companies design them to optimize for engagement as their highest priority, and by doing so they have eliminated human judgment from their business models.

As algorithms take on an increasingly important role, we need them to be more transparent, and companies need to be more accountable for the outcomes they produce. Imagine a world where pharmaceutical companies were not responsible for the long-term impacts of their medicines and we couldn't test their efficacy, or where engineers were not responsible for the safety of the structures they design and we couldn't review the blueprints. We are missing that kind of accountability for internet platform companies. Right now all we have are representative sample sets, data scraping, and anecdotal evidence. These are useful tools, but they are inadequate for the rigorous, systematic studies that we need about the societal effects of algorithms. These are conversations worth having because of the significant influence that algorithms have on people's daily lives. This is a policy issue that will only grow more important as technology continues to advance. So thank you, Mr. Chairman, for holding this hearing, and I look forward to hearing from our experts.
Thank you, Senator Schatz. We do, as I understand, have a great panel to hear from today. We're going to start on my left and your right with Mr. Tristan Harris, who is cofounder and executive director of the Center for Humane Technology; Ms. Maggie Stanphill from Google, Inc.; Dr. Stephen Wolfram, the founder and chief executive officer of Wolfram Research; and Ms. Rashida Richardson. If you would confine your oral remarks to as close to five minutes as possible, it will maximize the opportunity for members to ask questions. Thank you all for being here; we look forward to hearing from you. Mr. Harris.

Thank you, Senator Thune and Senator Schatz. Everything you said is sad to me, because it is not happening by accident but by design: the business model is to keep people engaged. In other words, this hearing is about persuasive technology, and persuasion is about an invisible asymmetry of power. When I was a kid, I was a magician. Magic teaches you that you can have asymmetric power without the other person realizing it. You can masquerade that power while looking like you have an equal relationship. You say, pick a card, any card, while knowing how to get the person to pick the card you want. What we are experiencing is an increasing asymmetry of power that has been masquerading as an equal and contractual relationship in which the responsibility is on us.

Let's walk through why that is happening. In the race for attention, companies have to get more of it by being more and more aggressive. I call it the race to the bottom of the brain stem. It starts with techniques like pull-to-refresh, which operates like a slot machine: it has the same kind of addictive qualities that keep people in Las Vegas hooked to the slot machines. Other examples are removing stopping cues. If I take the bottom out of this glass and keep refilling the water or the wine, you won't know when to stop drinking. That is what happens with infinitely scrolling feeds.

But the race for attention has to get more and more aggressive, so it is not enough just to predict your behavior; companies have to figure out how to keep you hooked in a new way. So the race crawled deeper down the brain stem, into our social validation. That was the introduction of likes and followers: how many followers do I have? It was much cheaper to get you addicted to getting attention from other people. That has created the mass narcissism that is happening with young people today. After two decades in decline, mental health problems among 10-to-14-year-old girls have shot up 170 percent in the last eight years. Social media has been identified as a key cause.

In the race for attention, it is not enough just to get people addicted to attention; the race migrates to A.I.: who can build a better predictive model of your behavior? You are about to play a YouTube video. You hit play. You think you are going to watch one video, and you wake up two hours later asking, what happened? The moment you hit play, it wakes up an avatar, a voodoo doll-like version of you, inside a server. That avatar is based on all the clicks and likes you have ever made, which make it look and act like you, so the system can simulate more and more possibilities: if I prick you with this video, how long will you stay? The business model maximizes watch time. This leads to the kind of algorithmic extremism you have pointed out, and it is what has caused 72 percent of YouTube's traffic to be driven by recommendations, not by human choice but by the machines.
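[Harris's "avatar" metaphor describes a recognizable pattern in recommender systems: score each candidate item with a model of the individual user, learned from that user's behavioral history, and surface whichever item maximizes predicted watch time. A minimal sketch of that selection loop follows, with entirely hypothetical names, an invented "topic:video" ID format, and a toy heuristic standing in for what would really be a trained model; it is an illustration of the idea, not any platform's actual system.]

```python
# Hypothetical sketch of an engagement-maximizing selection loop.
# All names, data formats, and scoring logic are invented for illustration.
from dataclasses import dataclass


@dataclass
class UserProfile:
    """Stand-in for the 'avatar': features derived from past behavior."""
    watch_history: list[str]  # e.g. "news:clip1" (invented topic:video format)
    likes: list[str]


def predicted_watch_seconds(user: UserProfile, video_id: str) -> float:
    """Toy stand-in for a learned model that simulates how long this user
    would stay on this video. A real system would use a trained ranking
    model; here, familiarity with the topic simply bumps the score."""
    topic = video_id.split(":")[0]
    seen = sum(1 for v in user.watch_history + user.likes
               if v.split(":")[0] == topic)
    return 30.0 + 60.0 * seen


def recommend_next(user: UserProfile, candidates: list[str]) -> str:
    # The objective is predicted watch time and nothing else: the system
    # picks whichever candidate it predicts will hold this user longest.
    return max(candidates, key=lambda vid: predicted_watch_seconds(user, vid))


user = UserProfile(watch_history=["news:clip1", "news:clip4"],
                   likes=["gaming:clip2"])
print(recommend_next(user, ["news:clip9", "cooking:clip3", "gaming:clip7"]))
# Prints "news:clip9": the topic the model predicts will keep this user longest.
```

[The point of the sketch is the objective function: nothing in `recommend_next` asks whether the chosen video is true, healthy, or wanted. It asks only which candidate the model predicts will keep the user engaged longest, which is the dynamic Harris is describing.]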
It is a race between Facebook's voodoo doll and Google's voodoo doll: which can better predict what to show you next? And this applies to the whole tech industry: who can better predict your behavior? Facebook has something called loyalty prediction, where it can predict for an advertiser when you are about to become disloyal to a brand. If you are a mother who buys Pampers diapers, Facebook can tell Pampers, hey, this user is about to become disloyal to this brand. These systems can predict things about us that we do not know about ourselves. That is a new level of asymmetric power.

We have a standard for this kind of asymmetry, the same standard we apply to doctors, to priests, to lawyers. Imagine a world in which priests only made their money by selling access to the confession booth to someone else. Except in this case, Facebook listens to two billion people's confessions, has a supercomputer next to them, and is calculating and predicting the confessions you are going to make before you know you are going to make them. That is what is causing all this havoc.

I would love to talk about all of these things later, but I want to finish by saying that this affects everyone, even if you do not use these products. You still send your kids to a school where other people believe anti-vaccine conspiracy theories, and other people vote in your elections. In 2011, Marc Andreessen, the founder of Netscape, wrote that "software is going to eat the world." What he meant was that software can run every part of society more efficiently than non-software, because it adds efficiencies, so we were going to allow software to eat up our elections, our media, our taxis, our transportation. The problem is that software ate the world without taking responsibility for it. We used to have rules and standards around Saturday morning cartoons; when YouTube gobbles up that part of society, it takes away all of those protections. I want to finish by noting that Mr. Fred Rogers once testified before this committee, concerned about the bombardment we were showing children. I think he would be horrified by what we are doing now. But he was able to talk to the committee, and that committee made a different choice, so I am hoping we can talk more about that today. Thank you.

Thank you, Mr. Harris. Ms. Stanphill.

Chairman Thune, Ranking Member Schatz, members of the committee, thank you for inviting me to testify today on Google's efforts to improve the digital wellbeing of our users. My name is Maggie Stanphill, and I am a user experience director at Google. Google's Digital Wellbeing Initiative is a top company goal, and through it we focus on providing users with insights about their individual tech habits and the tools to support an intentional relationship with technology.

At Google, we have heard from many of our users all over the world that technology is a key contributor to their sense of wellbeing. It connects them to those they care about, it provides information and resources that build their sense of safety and security, and it democratizes access to information. For most people, their interaction with technology is positive, and they are able to make healthy choices about screen time and overall use. But as technology becomes increasingly prevalent in our day-to-day lives, for some people it can distract from the things that matter most. We believe technology should play a useful role in people's lives, and we have committed to helping people strike a balance that feels right for them.
This is why our CEO first announced the Digital Wellbeing Initiative, with several new features across YouTube, Gmail, and other products, all to help people better understand their tech usage and focus on what matters most. In 2019 we introduced new features to support the initiative, and I would like to go into more depth about the products and tools we have developed for our users.

On Android, the latest version of our mobile operating system, we added key capabilities to help users strike a better balance with technology, raising awareness of their tech usage and providing controls to help them oversee it. This includes a dashboard that shows information about their time on devices. It includes app timers, so people can set time limits on specific apps. It includes a do not disturb function to silence phone calls and texts, as well as the visual interruptions that pop up. We have introduced a new Wind Down feature that puts the user's display into night light mode, which reduces blue light and the temptation to scroll. And we have a new setting called focus mode, which allows users to pause apps and notifications that they might find distracting.

On YouTube, we have shipped updates to help our users define their own sense of wellbeing. These include time-watched profiles, take-a-break reminders, the ability to disable audible notifications, and the option to combine all YouTube app notifications into one notification. We have also listened to feedback about the recommendation system. Over the past year we made a number of improvements to these recommendations, raising up content from authoritative sources when people come to YouTube for news, and reducing recommendations of content that comes close to violating our policies or spreads harmful misinformation.

When it comes to children, we believe the bar is even higher. That is why we created Family Link, to help parents stay in the loop as their child explores on Android. On Android Q, parents can supervise their child's device, set screen time limits and bedtimes, and remotely lock the device. Similarly, YouTube Kids was designed with the goal of ensuring that parents have control over the content their children watch. To keep the videos family friendly, we use a mix of filters, user feedback, and moderators. We also offer parents the option to take full control over what their children watch by hand-selecting the content that appears in the app.

We are conducting our own research and engaging in important expert partnerships with independent researchers to build a better understanding of the many personal impacts of digital technology. We believe this can help shape new solutions and drive the entire industry toward creating products that support a better sense of wellbeing. To make sure we are evolving the strategy, we have launched a study to better understand the effectiveness of our digital wellbeing tools. We believe this is just the beginning. As technology becomes more integrated into people's daily lives, we have a responsibility to ensure our products support digital wellbeing. We are investing more, optimizing our products, and focusing on quality experiences. Thank you for the opportunity to outline our efforts in this space. I am happy to answer any questions you may have.

Thank you. Mr. Wolfram.

Thank you for inviting me here today. I have spent my life working on the science and technology of computation and A.I., and perhaps some of what I know can be helpful here today. Here is a way I think one can frame the issue.
Many of the most successful internet companies, like Google, Facebook, and Twitter, are what one can call automated content selection businesses: they ingest lots of content and automatically select what to show to whom. How does the A.I. work? How can one tell if it is doing the right thing? People often assume that computers just run algorithms that someone sat down and wrote, but modern A.I. systems do not work that way. Instead, lots of examples are provided, and the system in effect learns its behavior from them.
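[The contrast Wolfram is drawing, rules a human wrote down versus behavior induced from examples, can be made concrete. Below is a toy Python sketch with invented data and an invented spam-flagging task, using scikit-learn purely for illustration; it does not represent Wolfram Research's or any platform's actual systems.]

```python
# Toy contrast: a hand-written algorithm versus a model learned from examples.
# The task, data, labels, and thresholds are all invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# 1) The classical picture: a human writes the decision rule explicitly,
#    so anyone can read it and say exactly why a message was flagged.
def handwritten_rule(message: str) -> bool:
    return "free money" in message.lower() or message.count("!") > 3

# 2) The modern machine-learning picture: nobody writes the rule.
#    The decision boundary is fitted automatically from labeled examples
#    and lives implicitly in the model's learned parameters.
train_texts = [
    "WIN FREE MONEY NOW!!!",
    "free money, click here",
    "lunch at noon tomorrow?",
    "see you at the meeting",
]
train_labels = [1, 1, 0, 0]  # 1 = unwanted, 0 = fine

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(train_texts)
model = MultinomialNB().fit(features, train_labels)

test = ["claim your free money today"]
print(handwritten_rule(test[0]))                  # True: the explicit rule fires
print(model.predict(vectorizer.transform(test)))  # e.g. [1]: the learned verdict
```

[The hand-written rule can be audited line by line; the learned model's "reasoning" is distributed across fitted weights. That difference is exactly why the question Wolfram raises, how one can tell whether such a system is doing the right thing, is hard.]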
