Each witness has expertise on internet platforms' use of algorithms and machine learning, as well as engagement and persuasion, and brings unique perspectives to these matters. Your participation in this hearing is appreciated, particularly as this committee continues to work on drafting data privacy legislation. I convened this hearing in part to inform legislation I am developing that would require internet platforms to give users the option to engage with the platforms without algorithms driven by user-specific data. The vast majority of content on these platforms is, at its best, entertaining and beneficial to the public. However, they also have the ability, or at least the potential, to influence the thoughts and behaviors of literally billions of people. That is one reason why there is widespread unease about the power of these platforms, and why it is important for the public to understand how they use algorithms to make inferences from the reams of data about us that affect behavior and influence outcomes. Without safeguards such as real transparency, there is a risk some internet platforms will seek to benefit their own interests and not necessarily the consumer's interest. In 2013, former Google chairman Eric Schmidt wrote that modern technology platforms are even more powerful than most people realize, and that our future world will be profoundly altered by their adoption and successfulness in societies everywhere. Since that time, algorithms and artificial intelligence have become an important part of our lives. Large technology companies rely on AI-powered automation to select and display content to optimize engagement. Unfortunately, the use of artificial intelligence can have an unintended and possibly dangerous downside. In April, Bloomberg reported that YouTube spent years chasing engagement while ignoring calls to monitor content. Earlier it was reported that YouTube's recommendation system was found to be showing a video of children playing in their backyard pool to other users who had watched sexually themed content. That is troubling. It indicates the real risk in a system that relies on algorithms and artificial intelligence to optimize for engagement. These are not isolated examples. Some have suggested that the filter bubble created by social media platforms like Facebook may contribute to our political polarization by encapsulating users in their own comfort zones or echo chambers. Congress has a role to play in ensuring companies have room to innovate while keeping consumers at the forefront of their progress. While there must be a dose of healthy personal responsibility, companies should provide greater transparency about how the content we see is filtered. We should be able to see content not powered by algorithms. We are convening this hearing to examine whether explanation and transparency are policy options Congress should be considering. My hope is that at the hearing today we are able to understand how internet platforms use algorithms and machine learning to influence outcomes. And we have a distinguished panel before us. Today we are joined by Mr. Tristan Harris, Ms. Maggie Stanphill, Dr. Stephen Wolfram and Ms. Rashida Richardson. Thank you all again for participating on this important topic, and I want to recognize Senator Schatz for any opening remarks that he may have. Thank you, Mr. Chairman. Social media and other internet platforms make their money by keeping users engaged, so they hire the greatest engineering and tech minds to get users to stay longer inside their apps and on their websites.
They discovered one way to keep us all hooked is to use algorithms that feed us a constant stream of increasingly more extreme and inflammatory content. This content is pushed out with very little transparency or oversight by humans. This setup, and also basic human psychology, make us vulnerable to lies, hoaxes and misinformation. The Wall Street Journal found that YouTube's recommendation engine leads users to conspiracy theories, partisan viewpoints and misleading videos even when users aren't seeking that kind of content. We saw that YouTube's algorithms were recommending videos of children after users watched sexualized content that did not involve children. This isn't just a YouTube problem. We saw the biggest platforms struggle to contain the spread of videos of the Christchurch massacre and its anti-Muslim propaganda. It was live streamed on Facebook and over a million copies were uploaded. Many reported seeing it on autoplay in their social media feeds, not realizing what it was. Last month we saw a fake video of the Speaker of the House go viral. I want to thank the chairman for holding this hearing, because things are going wrong here. I think it's this: Silicon Valley has a premise, that society would be better, more efficient and smarter if we eliminated steps that involve human judgment. But if YouTube, Facebook or Twitter employees, rather than computers, were making the recommendations, would they have recommended these awful videos in the first place? Now, I'm not saying that employees need to make every little decision, but companies are letting algorithms run wild and only using humans to clean up the mess. Algorithms are amoral. Companies design them to optimize for engagement as their highest priority, and by doing so, they eliminated human judgment as part of their business models. As algorithms take on an increasingly important role, we need them to be more transparent, and companies need to be more accountable for the outcomes they produce. Imagine a world where pharmaceutical companies weren't responsible for their medicines, or engineers weren't held responsible and we couldn't view their blueprints. Right now social media is missing that. Right now we have useful tools, but they're inadequate for the rigorous studies we need about the societal effects of algorithms. These are conversations worth having because of the influence that algorithms have on people's daily lives. This becomes more important as technology continues to advance. So thank you, Mr. Chairman. We do have a great panel to hear from today. And we're going to start on my left, your right, with Mr. Tristan Harris; Ms. Maggie Stanphill, as I mentioned, a Google user experience director; Dr. Stephen Wolfram; and Ms. Rashida Richardson, director of policy research at the AI Now Institute. If you would confine your oral remarks to as close to five minutes as possible, it gives us an opportunity to maximize the chance for members to ask questions. Thank you all for being here; we look forward to hearing from you. Mr. Harris. Thank you. Everything you said, it's sad to me because it's happening not by accident but by design, because the business model is to keep people engaged. In other words, this hearing is about persuasive technology, and persuasion is about an invisible asymmetry of power. When I was a kid, I was a magician, and magic teaches you that you can have power without the other person realizing it. You say pick a card, any card, while meanwhile you know exactly how to get that person to pick the card you want.
What we're experiencing with technology is an increasing asymmetry of power that has been masquerading as an equal or contractual relationship where the responsibility is on us. Let's walk through why that is happening. In the race for attention, because there's only so much attention, companies have to get more of it. It starts with techniques like pull-to-refresh, which operates like a slot machine; it has the same addictive qualities that keep people in Las Vegas hooked to the slot machine. Other examples: removing stopping cues. If I take the bottom out of this glass and keep refilling it, you won't know when to stop drinking. That's what we do with the feeds, and it keeps people scrolling. The race for attention has to get more aggressive, so it's not enough to predict your behavior; we have to predict how to keep you hooked in a different way. So it crawled deeper down the brain stem to our social validation; that was the introduction of likes and followers. It was cheaper, instead of getting your attention, to get you addicted to getting attention from other people. This has created the mass narcissism and the cultural thing happening with young people today, and after two decades in decline, mental-health harms among 10-to-14-year-old girls have shot up 170 percent in the last eight years. That has been attributed to social media. And in the race for attention, it's not enough to get people addicted to attention; the race has to migrate to AI, to who can build a better predictive model of your behavior. On YouTube, you hit play on a video, you think you're going to watch one video, and you wake up two hours later and ask what happened. The answer is that you had a supercomputer pointed at your brain. The moment you hit play, it wakes up an avatar version of you, and that avatar is based on everything you have ever watched and liked, everything that makes the avatar look like you, so inside a server they can simulate more and more possibilities: if I show you this video, or this video, how long will you stay? And the business model maximizes, what, if anything? Watch time. This is what caused 70 percent of YouTube's traffic to be driven by recommendations, not by human choice but by the machines. It's a race between Facebook's voodoo doll and Google's. These are metaphors that apply to the whole tech industry; it's a race to better predict your behavior. Facebook can predict to an advertiser when you're about to become disloyal to a brand. If you're a mother and you use Pampers diapers, they can tell Pampers this user is about to become disloyal. So they can predict things we don't know about ourselves. That's a new level of asymmetric power. We have a name for this, which is a duty-of-care relationship, the same standard we apply to doctors, priests and lawyers. Imagine a world in which priests only make their money by selling access to the confession booth to someone else, except Facebook listens to 2 billion people's confessions and is calculating and predicting the confessions you're going to make. I want to finish up by saying this affects everyone, even if you don't use the products: you send your kids to a school where other people believe anti-vaccine theories, or other people vote in your elections. In 2011, the quote was that software is going to eat the world. What he meant by that, and he was the founder of Netscape, was that software can do every part of society more efficiently than non-software. So we're going to allow software to eat up our elections, our media, our taxis, our transportation.
And the problem was that software was eating the world without taking responsibility for it. We had rules and standards around Saturday morning cartoons; when YouTube gobbles up that part of society, it takes away all those protections. I know Fred Rogers testified before this committee 50 years ago, concerned about the animated bombardment we were showing children. I think he would be horrified about what we are doing now. And he talked to the committee, and the committee made a different choice. I'm hopeful we're able to do that today. Thank you. Thank you. Thank you, members of the committee, thank you for inviting me to testify today on Google's efforts to improve the digital wellbeing of our users. I appreciate the opportunity to outline our programs and discuss our research in this space. My name is Maggie Stanphill; I lead the cross-Google digital wellbeing initiative. It's an initiative that is a top company goal, and we focus on providing users with insights about their tech habits and the tools to support an intentional relationship with technology. At Google we have heard from many of our users all over the world that technology is a key contributor to their sense of wellbeing: it connects them to those they care about, provides information and resources, and builds their sense of safety and security. It has provided services for billions of users around the world. For most people, their interaction with technology is positive, and they're able to make healthy choices about screen time and overall use. But as technology becomes increasingly prevalent in our day-to-day lives, for some people it can distract from what matters most. We believe technology should play a useful role in people's lives, and we have committed to helping people strike the balance that is right for them. This is why our CEO first announced the digital wellbeing initiative, with features across our platforms to help people better understand their tech usage and focus on what matters most. In 2019 we applied what we learned from users and experts. I'd like to go into more depth about the products and tools we have developed for our users. On Android, the latest version of our mobile operating system, we added key capabilities to help users find a better balance with technology, focused on raising awareness of tech usage and providing controls to help them oversee their tech use. This includes a dashboard showing time on devices, and app timers to set time limits on specific apps. It includes a do-not-disturb function to silence phone calls and texts, as well as the notifications that pop up. And we introduced a wind-down feature that puts the device into night light mode. Finally, we have a new setting called focus mode. This allows pausing specific apps and notifications that users might find distracting. On YouTube we have launched a series of updates to help our users define their own sense of wellbeing, including reminders to take a break and the option to combine all app notifications into one. We have also listened to the feedback about the YouTube recommendation system. Over the past year we have made a number of improvements to these recommendations, raising up content when people are coming to YouTube for news, as well as reducing content that comes close to violating our policies or spreads harmful misinformation. When it comes to children, we believe the bar is higher. That's why we created Family Link, to help parents stay in the loop as their child explores on Android.
On Android Q, parents will be able to set limits and remotely lock their child's device. YouTube Kids was designed with the goal that parents have control over the content their children watch. We use a mix of filters, user feedback and moderators. We offer parents the option to take full control over what their children watch by hand-selecting the content that appears in the app. We are conducting our own research and engaging in important partnerships with independent researchers to build a better understanding of the many impacts of digital technology. We believe this knowledge can shape and drive the entire industry toward creating products that support a better sense of wellbeing. To make sure we are evolving the strategy, we have launched a longitudinal study to assess the effectiveness of digital wellbeing tools. We believe this is just the beginning. As technology becomes more integrated into people's daily lives, we will ensure our products support digital wellbeing. We are committed to investing more, optimizing our products and focusing on quality experiences. Thank you for the opportunity to outline our efforts in this space; I am happy to answer any questions you might have. Thank you. Thanks for inviting me here today. I have to say this is pretty far from my usual venue, but I have spent my life working on computation and AI, and perhaps some of what I know can be helpful today. First, here is the way one can frame the issue. Many successful internet companies, like Google and Facebook and Twitter, are what one can call automated content selection businesses. They ingest lots of content, then essentially use AI to decide what to show to users. How does the AI work, and how can one tell it is doing the right thing? People assume computers run algorithms someone sat down and wrote. Modern AI doesn't work that way; it is usually constructed automatically, learning from a massive number of examples. There is embarrassingly little of it that we humans can understand. Here is the problem: it is a fact of basic science that if you insist on explainability, you can't get the full power of the system or the AI. If you can't open up the AI and understand what it is doing, how about putting external constraints on it? Can you write a contract that says what the AI is allowed to do? Partly through my own work, we are actually starting to be able to formulate computational contracts, contracts written not in legalese but in a precise computational language suitable for an AI to follow. What should the contract say? What's the right answer for what should be at the top of someone's news feed, or the rule for balance of content? Well, as AI starts to run more and more of our world, we are going to have to develop a whole network of AI laws, and it's going to be super important to get this right, starting off by agreeing on the right AI constitution. It's going to be a hard thing, kind of making computational how people want the world to work. Right now that is still in the future. What can we do about people's concerns now about automated content selection? I have to say I don't see a purely technical solution, but I didn't want to come here and say everything is impossible, especially since I personally like to spend my life solving supposedly impossible problems. I think if we want to do it, we actually can use technology to set up a market-based solution. I've got a couple of concrete suggestions for how to do that, based on giving users a choice about who to trust for the final content they see. One uses final ranking providers, the other uses constraint providers.
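A minimal sketch of the market-layer idea Dr. Wolfram names here and elaborates below, under simplifying assumptions: the platform produces a pool of candidate items with its own engagement estimates, and a user-chosen third-party "final ranking provider" supplies the function that decides the final ordering. The item names, scores and provider behaviors are hypothetical illustrations, not drawn from his testimony.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Item:
    title: str
    source: str
    predicted_engagement: float  # the platform's own engagement estimate

# The platform keeps doing the heavy lifting: ingesting content and producing candidates.
def platform_candidates() -> List[Item]:
    return [
        Item("Outrage headline", "tabloid", 0.92),
        Item("Local council budget report", "city paper", 0.31),
        Item("Science explainer", "university blog", 0.55),
    ]

# Two hypothetical third-party "final ranking providers" a user could choose between.
def engagement_ranker(items: List[Item]) -> List[Item]:
    # Mirrors the default behavior: order purely by predicted engagement.
    return sorted(items, key=lambda i: i.predicted_engagement, reverse=True)

def civic_ranker(items: List[Item]) -> List[Item]:
    # A provider that pushes tabloid sources down, regardless of engagement score.
    return sorted(items, key=lambda i: (i.source == "tabloid", -i.predicted_engagement))

def deliver_feed(ranker: Callable[[List[Item]], List[Item]]) -> List[str]:
    # The new market layer: the user picks which ranker sits at the end of the pipeline.
    return [item.title for item in ranker(platform_candidates())]

print(deliver_feed(engagement_ranker))
print(deliver_feed(civic_ranker))
```

The design point the sketch tries to capture is that ingestion and candidate generation stay with the platform, while the last-mile ranking becomes a pluggable component the user can swap.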
In both cases these are third-party providers who basically insert their own little AIs into the pipeline of delivering content to users. The point is that users can choose which of these providers they want to trust. The idea is to leverage everything the big automated content selection businesses have, but to essentially add a new market layer, so users get to know, by picking a particular provider, how content is selected for them. You avoid all-or-nothing banning of content, you don't have a single point of failure for spreading bad content, and you open up a new market potentially delivering higher value for users. Of course, for better or worse, unless you decide to force certain content or a diversity of content, which you could, people could live in their own content bubbles, though importantly they get to choose those themselves. Well, there are lots of technical details about everything I'm saying, as well as some deep science about what's possible and what's not. I've tried to explain more about that in my written testimony, and I'm happy to try to answer whatever questions I can here. Thank you. Thank you, Mr. Wolfram. Ms. Richardson. Members of the subcommittee, thank you for inviting me to speak today. My name is Rashida Richardson, and I am director of policy research at the AI Now Institute at New York University, which is the first university research institute dedicated to understanding the social implications of artificial intelligence. Part of my role includes researching our increasing reliance on AI and algorithmic systems and crafting policy and legal recommendations to address issues identified in research. The use of data-driven technologies, predictive analytics and inference systems is expanding in the consumer and government sectors. They determine where our children go to school, whether someone will receive Medicaid benefits, who is sent to jail before trial, which news articles we see and which job seekers are offered an interview; thus they have a profound impact on our lives and require immediate attention and action by Congress. Though these technologies affect every American, they are primarily developed and deployed by a few powerful companies and are therefore shaped by those companies' incentives, values and interests. These companies have demonstrated limited insight into whether their products will harm consumers and even less experience in mitigating those harms. So while most technology companies promise their products will lead to broad societal benefits, there is little evidence to support these claims, and in fact mounting evidence points to the contrary. For example, IBM's Watson supercomputer was designed to improve patient outcomes, but recently internal IBM documents showed it actually provided unsafe and erroneous cancer treatment recommendations. This is just one of numerous examples that have come to light in the last year showing the difference between the marketing companies use to sell these technologies and the stark reality of how these technologies ultimately perform. While many powerful industries pose potential harms to consumers with new products, the industry producing algorithmic and AI systems poses three particular risks that current laws and infrastructure fail to address. The first risk is that AI systems are based on compiled data that reflect historical and existing social and economic conditions. This data is neither neutral nor objective; thus AI systems tend to reflect and amplify cultural biases, value judgments and social inequities.
Meanwhile, most existing laws and regulations struggle to account for or adequately remedy these disparate outcomes, as they focus on intentional acts of discrimination or bias coded in the development process. The second risk is that many AI and internet platforms are optimization systems that prioritize monetary interests, which results in products being designed to keep users engaged while often ignoring social costs, like how the product may affect non-users, environments and markets. An example of this optimization model: the AI-based navigation app Waze came under public scrutiny following many incidents across the U.S. where the application redirected highway traffic through residential neighborhoods unequipped for the influx of vehicles, which increased accidents and risk to pedestrians. The third risk is that most of these technologies are black boxes, technologically and legally. Technologically they are black boxes because their internal workings are hidden away inside the companies. Legally, technology companies obstruct accountability efforts through claims of proprietary or trade secret legal protections, even though there is no evidence that inspection, auditing or oversight poses a competitive risk. These technologies are becoming increasingly common and serve narrow goals like engagement, speed and profit at the expense of social and ethical considerations like safety and accuracy. We are at a critical moment where Congress is in a position to act on some of the most pressing issues, and by doing so pave the way for a technological future that is safe, accountable and equitable. With these concerns in mind, I offer the following recommendations, which are detailed in my written statement: require technology companies to waive trade secrecy and other legal claims that hinder oversight and accountability mechanisms; require public disclosure of technologies involved in any decision about consumers, by name and vendor; empower consumer protection agencies to apply truth-in-advertising laws; revive the congressional Office of Technology Assessment to perform premarket review and post-market monitoring of technologies; enhance whistleblower protections for technology company employees who identify unethical and unlawful uses of AI or algorithms; require any transparency or accountability mechanisms to include detailed reporting of the full-stack supply chain; and require companies to perform and publish algorithmic impact assessments prior to public use of products and services. Thank you. Thank you, Ms. Richardson. We know internet platforms like Facebook have data about users. What can companies predict about users based on that data? Thank you for the question. There's an important connection between privacy and persuasion that often isn't linked, so maybe it's helpful to link that. Cambridge Analytica was an event in which, based on 150 of your Facebook likes, I could predict your political personality and do things with that. The reason I said in my opening statement that this is about an increasing asymmetry of power is that, without any of your data, I can predict increasing features about you using AI. There was a paper recently showing that with 80 percent accuracy I can predict the same Big Five personality traits that Cambridge Analytica got from your data, without any of your data. All I have to do is look at your mouse movements and click patterns.
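To make the kind of inference Mr. Harris describes concrete, a toy sketch, not the actual study he cites: training a classifier to guess a trait from behavioral signals alone, with no profile data. The feature names, the synthetic data and the resulting accuracy are all illustrative assumptions.

```python
# Toy illustration of inferring a trait from behavioral signals alone (no profile data).
# The synthetic "sessions" and feature names are made up; real studies of this kind use
# logged mouse movements and click patterns at much larger scale.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

def synthetic_session():
    # Features: average mouse speed, rapid clicks per minute, scroll reversals.
    impulsive = random.random() < 0.5
    speed = random.gauss(1.4 if impulsive else 0.9, 0.2)
    rapid_clicks = random.gauss(12 if impulsive else 5, 2)
    reversals = random.gauss(8 if impulsive else 3, 1.5)
    return [speed, rapid_clicks, reversals], int(impulsive)

sessions = [synthetic_session() for _ in range(500)]
X = [features for features, _ in sessions]
y = [label for _, label in sessions]

# Fit a simple classifier that predicts the (synthetic) trait from behavior only.
model = LogisticRegression().fit(X, y)
print("training accuracy:", round(model.score(X, y), 2))
```

The only point the sketch illustrates is that behavioral traces can stand in for data a user never volunteered; real systems work from far richer logs.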
In other words, it's the end of the poker face. Your behavior is your signature, and we can know your political personality based on clicks; we can identify that you're a homosexual before you know you're a homosexual; we can predict with 95 percent accuracy that you're going to quit your job, according to IBM, or that you're pregnant; we can read microexpressions better than a human being can. Microexpressions are your subtle reactions to things, barely visible to another person, but computers can detect them. As you keep going, you can start to deepfake things, generate new synthetic media, a synthetic face with these characteristics. The reason I opened my statement this way is that we have to recognize what this is all about: a growing asymmetry of power between technology and the limits of the human mind. My favorite sociobiologist, Wilson, said the fundamental problem of humanity is that we have paleolithic emotions, medieval institutions and godlike technology. So we're chimpanzees with nukes, with our paleolithic brains. The reason this matters is that when power is this asymmetric, designed to extract things from you, the relationship has to be a fiduciary one; you can't have asymmetric power designed to extract things from you. You can't have lawyers or doctors whose entire business model is to take everything they learned and sell it to someone else, except in this case the level of things we can predict about you is far greater than in those fields combined, when you add up the data that assembles a more and more accurate voodoo doll for each of us. There's one for roughly one in every four people on earth; YouTube and Facebook each reach more than 2 billion people. Ms. Stanphill, in your prepared testimony you note companies like Google have a responsibility to ensure products support users' digital wellbeing. Does Google use persuasive technology, meaning technology designed to change people's attitudes and behaviors? If so, how do you use it? Do you believe persuasive technology supports a user's digital wellbeing? Thank you, Senator. No, we do not use persuasive technology at Google. Our foremost principles are transparency and security around data; those are the principles through which we design products at Google. Dr. Wolfram, you write in your testimony that it's impossible to expect a useful form of general explainability for automated content selection systems. If this is the case, what should policymakers require or expect of internet platforms with respect to algorithmic explanation or transparency? I don't think explaining how algorithms work is a great direction. The basic issue is that if the algorithm is doing something really interesting, then you're not going to be able to explain it, because if you could explain it, it would be like saying you could jump ahead and say what it's going to do without letting it do what it's going to do. It's kind of a scientific issue that if you're going to have something explainable, it isn't getting to use the full power of computation to do what it does. My own view, disappointing for me as a technologist, is that you have to put humans in the loop. The thing to understand about AI is that we can automate many things about how things get done. What we don't get to automate is the goals of what we want to do. The goals of what we want to do aren't definable as a thing we can automate; they are something humans have to come up with. The most promising direction is breaking the AI pipeline apart and figuring out how you can put into the AI pipeline the right level of human input.
My own feeling is that the most promising possibility is to kind of insert, while leaving the great value that's been produced by the current automated content selection companies, ingesting large amounts of data, being able to monetize large amounts of content, et cetera, a way for users to be able to choose who they trust about what finally shows up in their news feed or search results or whatever else. I think there are technological ways to make that kind of insertion that will actually, if anything, add to the richness of potential experience for users and possibly the financial returns for the market. Very quickly, Ms. Richardson, what are your views about whether algorithm explanation or algorithm transparency are appropriate policy responses, in response to Dr. Wolfram? I think it is a good interim step, in that transparency is necessary to understand what these technologies are doing and to assess benefits and risks. I don't think transparency or explainability is an end goal, because I still think you're going to need some level of legal regulation to impose liability so that bad or negligent actors act in a proper manner, but also to incentivize companies to do the right thing and apply due diligence. In a lot of the cases I cite in my written testimony, there are public relations disasters on the back end. Many of them could have been assessed or anticipated during the development process, but companies are not incentivized to do that. In some ways transparency and explainability can give legislators and the public more insight into the choices companies are making, and whether liability should be attached or different regulatory enforcement needs to be pursued. Thank you. Senator Schatz. Thank you. First, a yes or no question: do we need more human supervision of algorithms on online platforms? Mr. Harris. Yes. Yes. Yes, although I would put some footnotes on it. Sure, yes with footnotes. I want to follow up on what Dr. Wolfram said in terms of the unexplainability of these algorithms and the lack of transparency sort of built into what they are foundationally. The reason I think that's an important point you're making is that you need a human circuit breaker at some point to say, no, I choose not to be fed things by an algorithm, I choose to jump off of this platform. That's one aspect of humans acting as a circuit breaker. I'm more interested in the human employee, either at the line level or at the supervisory level, who takes some responsibility for how these algorithms evolve over time. Ms. Richardson, I want you to maybe speak to that question. It seems to me, as policymakers, that's where the sweet spot is: to find an incentive or requirement so that these companies will not allow these algorithms to run essentially unsupervised and not even understood by the highest echelons of the company except in their output. Ms. Richardson, can you help me flesh out what that would look like in terms of enabling human supervision? So I think it's important to understand some of the points about the power asymmetry that Mr. Harris mentioned. I definitely do think we need a human in the loop, but we also need to be cognizant of who has power in those dynamics, and you don't necessarily want a frontline employee taking full liability for a decision about a system they had no input into designing or into how it is currently used. So I think it needs to go all the way up, in that if you're thinking about liability or responsibility in any form, it needs to attach to
those making decisions about the goals, the designs and ultimately the implementation and use of these technologies, and then figuring out what are the right pressure points and dynamics to encourage companies to make the right choices to benefit society. I think that's right. None of this ends up coming to much unless executives feel a legal and financial responsibility to supervise these algorithms. Ms. Stanphill, I was a little confused by one thing you said. Did you say Google doesn't use persuasive technology? That's correct, Senator. Mr. Harris, is that true? It's complicated. Persuasion is happening throughout the ecosystem. In my mind it's less about accusing one company, Google or Facebook; it's about understanding every company. I get that, but she's here and she said they don't use persuasive technology. I'm trying to figure out: the Google suite, YouTube, the whole outfit, the pantheon of companies, you don't use persuasive technologies? Either I misunderstand your company or the definition of persuasive technology. Can you help me understand what's going on here? Sure. My response related to dark patterns and persuasive technology: that is not core to how we design our products at Google, which are built around transparency, whether YouTube or the whole family of companies. The whole family of companies, including YouTube? You don't want to clarify that a little further? We build our products with privacy, security and control for the users. That is what we build for. Ultimately this builds a lifelong relationship with the user, which is primary. I don't know what any of that meant. Ms. Richardson, can you help me? I think part of the challenge, as Mr. Harris mentioned, is how you're defining persuasive. As both witnesses mentioned, a lot of systems and internet platforms are a form of optimization system, which is optimizing for certain goals, and there you could say that is a persuasive technology that is not accounting for certain social risks, but I think there's a business incentive to take a more narrow view of that definition. I can't speak for Google, because I don't work for them, but the reason you're confused may be that you need to clarify definitions of what is persuasive in the way you're asking the question and what Google is suggesting doesn't have persuasive characteristics in its technology. Thank you. Thank you, Senator Schatz. Senator Fischer. Thank you. As you know, I introduced the DETOUR Act with Senator Warner to curb some of the manipulative user interfaces. We want to be able to increase transparency when it comes to behavioral experimentation online. Obviously we want to make sure children are not targeted with some of the dark patterns that are out there. In your perspective, how do dark patterns thwart user autonomy online? Yeah. So persuasion is so invisible and so subtle that oftentimes, when I use this language of crawling down the brain stem, people think you're overreacting. It's a design choice. For background, I studied at a lab called the Persuasive Technology Lab at Stanford that taught engineering students essentially about this whole field. My friends in the class were the founders of Instagram. Instagram as a product copied from Twitter the technique, what you could call a dark pattern, of displaying the number of followers you have, to get people to follow each other. There's a follow button on each profile, and that doesn't seem so dark; that's what's so insidious about it, you're just giving people a way to follow each other.
What it's actually doing is causing you to come back every day to see if you have more followers today than yesterday. How are these platforms getting our personal information? How much choice do we really have? I thought the doctor's comment, that the goals are something we as humans have to come up with, means we have to get involved in this. Your introductory comments are basically telling us that everything about us is already known. It wouldn't be hard to manipulate what our goals are at this point. The goal is to subvert your goals. If your goal is to delete your Facebook account, you hit delete. They say, are you sure you want to delete your Facebook account, and then put up the faces of your friends. Did I ask those friends whether they will miss me? No. Did Facebook ask those friends whether they will miss me? No. They are calculating which five friends would be most likely to persuade you not to delete your Facebook account. Or would you consent to giving your data or your location to Facebook? Oftentimes there will be a big blue button; they have had a hundred engineers behind the screen testing a hundred different generations of where that button should be, and a small gray link people don't know is there. What we are calling a free human choice is a manipulated choice. Like a magician: pick a card, any card, but in fact there's an asymmetry of power. When you're on the internet and you try to look something up and you have something pop up on the screen, this is so irritating, and you have to hit okay to get out of it because you don't see the other choice on the screen. It's very light, it's gray. But now I know that if I hit okay this is going to go on and on, and whoever it is is going to get more and more information about me. They are really invading my privacy, but I can't get rid of this screen otherwise, unless you turn off your computer and start over, right? There are all sorts of ways to do this. If I'm a persuader and I really want you to hit okay on my dialogue to get your data, I'll wait until the day you're in a rush to get to that place you're looking for, and that's the day I'll put up the dialogue, hey, are you going to give me your information, and of course you're going to say, yes, whatever. In persuasion there are hot states and cold states. In hot states people are impulsive, and persuading them is easier than when they are in a cold, reflective state. Technology can manufacture those hot states, or wait until you're in them. How do we protect ourselves and our privacy, and what role does the federal government have to play in this, besides getting our bill passed? At the end of the day, the reason we go back to the business model is that it's about alignment of interests. You don't want a system designed to manipulate people, and you're going to have that insofar as the business model is manipulative rather than regenerative; with a subscription style, Facebook would have fewer dark patterns, because it would be in a subscription relationship with users. When Facebook says, how else can we give people a free service, it's like a priest saying, how else could I serve so many people? How do we keep our kids safe? There's so much to that. I think what we need is a mass public awareness campaign so people understand what's going on. One thing I have learned is that if you tell people it's bad for you, they won't listen, but when you tell people this is how you're being manipulated, no one wants to feel manipulated. Thank you. Thank you, Senator Fischer. Senator Blumenthal. Thanks, Mr. Chairman. Thank you to all of you for being here today. I was struck by what Senator Schatz said in his opening statement: algorithms are not only running wild, but they are running wild in secrecy.
They are cloaked in secrecy, in many respects from the very people who are supposed to know about them. Ms. Richardson referred to the black box here. That black box is one of our greatest challenges today. I think we are at a time when algorithms, AI, and the exploding use of them is almost comparable to the beginnings of atomic energy in this country. We now have an Atomic Energy Commission. Nobody can build nuclear bombs in their backyard because of the dangers of nuclear fission and fusion, which is comparable to what we have here: systems in many respects beyond our human control, affecting our lives in very direct, extraordinarily consequential terms, beyond the control of the user and maybe the builder. On the use of persuasive technology, I find, Ms. Stanphill, your contention that Google does not build systems with the idea of persuasive technology in mind somewhat difficult to believe, because I think Google tries to keep people glued to its screens, at the very least. That persuasive technology is operative. It's part of your business model: keep the eyeballs. It may not be persuasive technology designed to push them to the far left or the far right, but at the very least the technology is designed to promote usage. YouTube's recommendation system has a notorious history of pushing dangerous messages and content, promoting radicalization, disinformation and conspiracy theory. Earlier this month Senator Blackburn and I wrote to YouTube on reports that its recommendation system was promoting videos that sexualized children, effectively acting as a shepherd for pedophiles across its platform. Now, you say in your remarks that you've made changes to reduce the recommendation of content that, quote, violates policies or spreads harmful misinformation. According to your account, the number of views from recommendations for these videos has dropped by over 50 percent in the United States. I take those numbers as you've provided them. Can you tell me what specific steps you have taken to end your recommendation system's practice of promoting content that sexualizes children? Thank you, Senator. We take our responsibility for supporting child safety online extremely seriously, and these changes are in effect. As you stated, they have had a significant impact. But what specifically? They result in actually changing which content appears in the recommendations. This content is now classified as borderline content, which includes misinformation and child exploitation content. I'm running out of time. I have so many questions. I would like each of the witnesses to respond to the recommendations Ms. Richardson made, which I think are extraordinarily promising and important. I'm not going to have time to ask you about them here, but I would like the witnesses to respond in writing, if you would, please. Second, let me just observe, on the topic of human supervision, I think that human supervision has to be independent. As on the topic of arms control, we have a situation here where we need some kind of independent supervision, some kind of oversight. Yes, regulation. I know it's a dirty word these days in some circles, but protection will require intervention from some independent source here. I don't think trust-me can work anymore. Thank you, Mr. Chairman. Thank you, Senator Blumenthal. Senator Blackburn. Thank you, Mr. Chairman, and thank you to our witnesses. We appreciate that you are here. I enjoyed visiting with you for a few minutes before the hearing began. Mr.
Wolfram, I want to pick up where we left off in our discussion. In your testimony you describe computational irreducibility. Let's look at that for just a moment. As we talk about this, does it make algorithmic transparency sound increasingly elusive, and would you consider moving toward transparency a worthy goal, or should we be asking another question? Yeah, I think there are different meanings of transparency. If you are asking, tell me why the algorithm did this versus that, that's really hard. If we really want to be able to answer that, we're not going to be able to have algorithms that do anything powerful. In a sense, to be able to say this is why it did that, we might as well just follow the path of the explanation rather than have the algorithm do what it needed to do itself. So on transparency, what we can do is try to get a pragmatic result. Yeah, we can't go inside, open up the hood and say why this happened. The other problem is knowing what you want to have happen. Say you claim this algorithm is bad, it gives bad recommendations; what do you mean by bad recommendations? You have to define it in a way that says it's biased this way, or it's producing content we don't like. You have to have a way to describe those bad things. Ms. Richardson, I can see you're making notes and want to weigh in on this. You talked about compiled data and bias getting the algorithm to yield a certain result. So let's say you build this algorithm, and you build this box to contain this data set or to make certain it is moving in this direction. Then, as the algorithm self-replicates and moves forward, does it move further in that direction, or does data inform it and pull it in a separate direction, if you're building it to yield a specific result? So it depends on what type of technical system we're talking about. To unpack what I'm saying: the problem with a lot of these systems is that they are based on data sets which reflect all of our current conditions, which also means any imbalances in our conditions. One of the examples I give in my written testimony references Amazon's hiring algorithm, which was found to have gender-disparate outcomes; that's because it was learning from prior hiring practices. There are also examples of other similar hiring algorithms, one of which found that if you had the name Jared and you played lacrosse, you had a better chance of getting a job interview. It's not that the correlation between your name being Jared and playing lacrosse means you're necessarily a better employee than anyone else; the system is looking at patterns in the underlying data, but that doesn't mean the patterns the system is seeing reflect reality, or in some cases they do, and it's not necessarily how we want to view reality, and it shows the skews we have in society. Got it. Mr. Wolfram, you mentioned in your testimony there could be a single content platform but a variety of final ranking providers. Are you suggesting it would be wise to prohibit companies from using cross-business data flows? I'm not sure how that relates. The thing I think is the case is that it's not necessary for the final ranking of content to be done by the platform. There's a lot of work to be done to have content ranked for a news feed and so on; that's heavy lifting. But the choice, often made separately for each user, about how to rank content, I don't think has to be made by the same entity. I think if you break that apart, you change the balance between what is controllable by users and what is not. And I think, yeah, I would like to say, one of the questions about a data set is that it implies certain things, and we say we don't like that, and so on.
One of the challenges is to define what we want. One of the things happening is that, because these are AI systems, computational systems, we have to define it more precisely than before. It's necessary to write these computational rules. That's a tough thing to do. It's something that cannot be done by computer and can't necessarily be done from prior data. It's something people like you have to decide what to do about. Okay. Thank you. Mr. Chairman, I would like unanimous consent to enter the letter that Senator Blumenthal and I sent earlier this month. I know my time has expired, but I will simply say the evasiveness in answering Senator Blumenthal's question about what they are doing is inadequate when we are looking at the safety of children online; just saying they are changing the content that appears in the list is inadequate. Mr. Harris, I will submit a question to you about what we can look at on platforms for combating this bad behavior. Thank you, Senator. Senator Peters. Thank you. Fascinating discussion. I'd like to address an issue I think is of profound importance to our democratic republic, and that's the fact that in order to have a vibrant democracy, you need to have an exchange of ideas and an open platform. Certainly part of the promise of the internet, as it was first conceived, is that we would have this incredible universal commons where a wide range of ideas would be discussed and debated, and it would be robust, and yet it seems as if we're not getting that. We're actually getting more and more silos. Dr. Wolfram, you talked about how people can make choices and live in a bubble, but at least it would be their bubble they live in. That's what we're seeing throughout society: as polarization increases, more and more folks are resorting to tribal-type behavior. Mr. Harris, you talk about medieval institutions and stone-age minds; tribalism was alive and well in the past, and we're seeing advances in technology in a lot of ways bring us back into that kind of tribal behavior. So my question is, to what extent is this technology accelerating that, and is there a way out? Mr. Harris. Thank you. I love this question. There's a tendency to think this is human nature, that people are just polarized, and that it's holding up a mirror to society. What it's really doing is acting as an amplifier for the worst parts of us. So in the race to the bottom of the brain stem to get attention, let's take an example like Twitter. It's calculating, what is the thing I can show you that gets the most engagement? It turns out that outrage, moral outrage, gets the most engagement. A study found that for every word of moral outrage you add to a tweet, it increases your retweet rate by 7 percent. In other words, the polarization of society is part of the business model. The other thing is that shorter, briefer content works better than longer, nuanced ideas that take a long time to talk about. That's why you get 140 characters dominating our social discourse. Reality and most important topics are complex, and saying simple things about them automatically creates polarization. You can't say something simple about something complicated and have everybody agree with you; some will hate you for it, and it's easy to retweet it and have a mob coming after you. That's call-out culture, chilling effects, and other downstream effects of polarization, amplified by the fact that these platforms are rewarded for giving you the most sensational stuff. One last example of this is on YouTube. Let's say we equalized it, equal representation of left and right in the media; let's say we get that perfectly right.
A month ago on YouTube, if you did a map of the 15 most frequently mentioned verbs or keywords in recommended videos, it was hates, debunks, obliterates, destroys: Jordan Peterson destroys social justice warriors in video, and that kind of thing is the background radiation we are dosing 2 billion people with. They hire content moderators in English, but 2 billion people in hundreds of languages are using this; how many people at YouTube speak the 22 languages of India, where there's an election coming up? That's some context on that. A lot of context. Fascinating. I'm running out of time. I took particular note in your testimony when you talked about how technology will eat up elections; you're referencing another writer on that issue. In the remaining brief time I have, what's your biggest concern about the 2020 elections and how technology may eat up this election coming up? That comment was another example of protections we used to have that technology took away: equal-priced campaign ads, which cost the same at 7:00 p.m. for every candidate running in an election. When Facebook gobbles up that part of the media, it eats up those protections. There is no equal pricing. In terms of what I'm worried about, I'm mostly worried that none of these problems have been solved. The business model hasn't changed. The reason you see Christchurch happen and the video show up everywhere, or any of these examples, is that fundamentally there's no easy way for these platforms to address the problem, because the problem is the business model. There are some small interventions: fast lanes for researchers, spotting misinformation. But the real example is that instead of NATO or the Department of Defense protecting us in global information warfare, we have a handful of 10 or 15 security engineers at Facebook and Twitter. They were woefully unprepared, especially in the last election, and I'm worried they still might be. Thank you. Thank you, Senator Peters. Senator Johnson. Thank you, Mr. Chairman. Mr. Harris, I agree with you when you say that our best line of defense as individuals is exposure. People need to understand that they are being manipulated, and a lot of this hearing has been talking about the manipulation of algorithms, artificial intelligence. I want to talk about manipulation by human intervention, human bias. We don't allow, or we certainly put restrictions through the FCC on, an individual's ownership of TV stations, radio stations and newspapers, because we don't want a monopoly on content in a community, much less, you know, Facebook and Google, with access to billions of people, hundreds of millions of Americans. So I had staff on Instagram go to the Politico account. By the way, I have a video of this, so I'd like to enter that into the record. They hit follow, and this is the list they were given, in this exact order, and I would ask the audience and the witnesses to just see if there is a conservative in here, or how many there are. Here is the list: Elizabeth Warren, Kamala Harris, New York Times, Huffington Post, Bernie Sanders, The Economist, Nancy Pelosi, The Daily Show, Washington Post, Covering POTUS, NBC, Wall Street Journal, Pete Buttigieg, Time, New Yorker, Reuters, Kirsten Gillibrand, The Guardian, BBC News, ACLU, Hillary Clinton, Real Time with Bill Maher, UN, Guardian, HuffPost Women's, Late Show with Stephen Colbert, MoveOn.org, Washington Post Opinion, USA Today, New Yorker, Women's March, Late Night with Seth Meyers, The Hill, CBS, Justin Trudeau. It goes on. These are five conservative staff members.
If there were algorithms suggesting content that they might actually want or would agree with, you would expect to see maybe Fox News, Breitbart, Newsmax. You might even see a really big name like Donald Trump, and there wasn't one. So my question is, who is producing that list? Is that Instagram? Is that a Politico site? How is that being generated? I have a hard time believing that's being generated, rather than manipulated, by an algorithm or by AI. I don't know; I would be really curious to know what the click pattern was. In other words, you open up an Instagram account and it's blank. You're saying, if you just ask from an empty account, who do I follow, you're given suggestions for you to follow. I honestly have no idea how Instagram ranks those things, but I would be curious to know what the original clicks were that produced that list. Can anybody else explain that? I mean, I don't believe that's AI trying to give a conservative staff member content they may want to read. This to me looks like Instagram, if they are actually the ones producing that list, trying to push a political bias. Mr. Wolfram, you seem to want to weigh in. You know, the thing that will happen is, if there's no other information, it will tend to be just where there is the most content or what the most people on the platform in general have clicked. So it may simply be a statement, in that particular case, and I'm really speculating, that the users of that platform tend to like those things. So, again, you would have to assume, then, that the vast majority of users of Instagram are liberal progressives. That might be evidence of that. Ms. Stanphill, is that what your understanding would be? Thank you, Senator. By the way, we can probably do that on Google, too; it would be interesting. I can't speak for Twitter. I can speak to Google's stance just generally with respect to AI, which is that we build products for everyone. We have systems in place to ensure no bias is introduced. But, I mean, you won't deny the fact that there are plenty of instances of content being pulled off of conservative websites, and of trying to repair the damage of that, correct? I mean, what's happening here? Thank you, Senator. I wanted to quickly remind everyone that I am a user experience director and I work on digital wellbeing, which is a program to ensure that users have a balanced relationship with tech. Mr. Harris, what's happening here? Again, I think conservatives have a legitimate concern that content is being pushed from a liberal progressive standpoint to the vast majority of users of these social sites. I mean, I really wish I could comment, but I don't know much about why that's happening. Ms. Richardson? So there has been some research on this, and it showed that when you're looking at engagement levels, there are no partisan disparities; in fact, it's equal. So I agree with Dr. Wolfram that what you may have seen is just what was trending. Even in the list, you mentioned the Southern Poverty Law Center, and they were simply trending because their executive director was fired, so that may just be a result of the news, not necessarily the organization. But it's also important to understand that research has also shown that when there is any type of disparity along partisan lines, it's usually dealing with the veracity of the underlying content, and that's more of a content moderation issue rather than what you're shown. Okay.
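A minimal sketch of the cold-start behavior Dr. Wolfram speculates about in this exchange: with no click history from a brand-new account, a suggestion system has nothing to personalize on and can fall back to platform-wide popularity. The account names and follow counts are invented for illustration; this is not a description of Instagram's actual system.

```python
# Hypothetical platform-wide follow counts. A brand-new account has no click history,
# so the only available signal is aggregate popularity across all users.
global_follow_counts = {
    "news_outlet_a": 9_200_000,
    "politician_b": 7_500_000,
    "comedian_c": 6_800_000,
    "local_club_d": 4_100,
}

def suggestions_for_new_account(already_follows=(), k=3):
    candidates = [a for a in global_follow_counts if a not in already_follows]
    # No personal signal yet: rank purely by how many people on the platform follow each account.
    return sorted(candidates, key=global_follow_counts.get, reverse=True)[:k]

print(suggestions_for_new_account())  # the globally most-followed accounts, whoever the new user is
```

Under this assumption the same list would be served to any empty account, which is consistent with Dr. Wolfram's speculation but says nothing about whether that is what actually happened here.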
Anyway, I'd like to get that video entered into the record, and we will keep looking at this. Without objection. I think, to the Senator's point, if you google yourself, you will find that most of the things that pop up right away will be from news organizations that tend to be to the left. I have had that experience as well. It seems like if that were actually based upon a neutral algorithm or some other form of artificial intelligence, then since you are the user and they know your habits and patterns, you might see something pop up from Fox News or the Wall Street Journal instead of from the New York Times. That to me has always been hard to explain. Let's work together to try to get that explanation, because it's a valid concern. Senator Tester. Thanks. Thank you, Mr. Chairman. Thank all the folks who have testified here today. Ms. Stanphill, does YouTube have access to personal data in a user's Gmail account? Thank you, Senator. I am an expert in digital wellbeing at Google, so, I'm sorry, I don't know that in depth and I don't want to get out of my depth, so I can take that back for folks to answer. Okay. So when it comes to Google search history, you wouldn't know that either? I'm sorry, Senator, I'm not an expert in search and I don't want to get out of my depth, but I can take it back. Okay. All right. Let me see if I can ask a question that you can answer. Do you know if YouTube uses personal data in shaping recommendations? Thank you, Senator. I can tell you that I know YouTube has done a lot of work to ensure that they are improving recommendations. I do not know about privacy and data, because that is not necessarily core to digital wellbeing. I focus on helping provide users with balanced technology usage. So on YouTube, that includes a time-watched profile, and it includes a reminder where, if you want to set a time limit, you will get a reminder. I've got it. Ultimately we give folks the power to basically control their usage. I understand what you're saying. I think what I'm concerned about is that, and it doesn't matter if you're talking Google or Facebook or Twitter, whoever it is has access to personal information, which I believe they do. Mr. Harris, do you think they do? I wish that I really knew the exact answer to your question. Does anybody know the answer to that question? I mean, the general premise is that with more access to personal information, Google can provide better recommendations; that is usually the talking point. That's correct. And that's the business model, because they're competing over who can better predict what will keep your attention, my eyes, on that website. Yeah, they would use as much information as they can, and usually the way they get around this is by giving you an option to opt out, but of course the default is usually to opt in. And that's, I think, what's leading to what you're talking about. Yes. So I am 62 years old, getting older every minute the longer this conversation goes on, but I will tell you that it never ceases to amaze me that my grandkids, from the oldest one who is about 15 or 16 down to about 8, when we are on the farm, are absolutely glued to this. Absolutely glued to it. To the point where, if I want to get any work out of them, I have to threaten them. Okay? Because they are riveted. So, Ms. Stanphill, when you are in your leadership meetings, do you actually talk about the addictive nature of this? Because it's as addictive as a cigarette, or more. Do you talk about the addictive nature?
Do you talk about what you can do to stop it? I will tell you that I'm probably going to be dead and gone, and I will probably be thankful for it, when all this comes to fruition, because this scares me to death. Senator Johnson can talk about the conservative websites. You guys could literally sit down at your board meeting, I believe, and determine who is going to be the next President of the United States. I personally believe that you have that capacity. Now, I could be wrong, and I hope I'm wrong. And so, do any of the other folks that are here, I will go with Ms. Richardson, see it the same way, or am I overreacting to a situation that I don't know enough about? No, I think your concerns are real, in that the business model that most of these companies are using, and most of the optimization systems, are built to keep us engaged, to keep us engaged with provocative material that can skew in the direction your concerns lead to. And I don't know the answer to this, but do you think that the boards of directors for any of these companies actually sit down and talk about the impacts that I'm concerned about, or are they talking about how they continue to use what they've been doing to maximize their profit margin? I don't think they're talking about the risks you're concerned about, and I don't even think that's happening at the product development level, in part because a lot of teams are siloed, so I doubt these conversations are happening in a holistic way that would address your concerns. Okay, that's good; I don't want to get into a fistfight on this panel. Ms. Stanphill, do the conversations you have, since you couldn't answer the previous questions, indicate that she's right, that the conversations are siloed? Is that correct? No, that's not correct, Senator. So why can't you answer my questions? I can answer the question with respect to how we think about digital well-being at Google. It's a cross-company OKR, so it's a goal we work on across the company. I have the novel duty of connecting those dots, but we are doing that, and we have incentive to make sure we make progress. Okay. I just want to thank you all for being here, and hopefully you all leave friends, because I know there are certain senators, including myself, that have tried to pit you against one another. That's not intentional. I think this is really serious. I have exactly the opposite opinion from Senator Johnson, in that I think there's a lot of driving to the conservative side, so it shows you that when humans get involved in this we're going to screw it up; but by the same token, there need to be circuit breakers, as Senator Schatz talked about. Thank you very, very much. Thank you to the old geezer from Montana. Senator Rosen. Thank you, all of you, for being here today. I have so many questions as a former software developer and systems analyst. I see this really as three issues and one question. Issue one is that there's a combination happening of machine learning, artificial intelligence, and quantum computing all coming together that exponentially increases the capacity of predictive analytics. It grows on itself. This is what it's meant to do. Issue two is the monetization and data brokering of these analytics, and the bias in all areas with regard to the monetization of this data. And then, as you spoke earlier, where does the ultimate liability lie?
With the scientists who craft the algorithm, with the computer that potentiates the data and the algorithm, or with the company or persons who monetize the end use of the data, for whatever means? Right? So, three big issues; many more, but those on their face. But my question today is on transparency. In so many sectors we require transparency, and we are used to it every day. Think about this for potential harm. Every day you go to the grocery store, the market, the convenience store, and in the food industry we have required nutrition labeling on every single item; it clearly discloses the nutrition content. We even have it on menus now, calorie counts. Oh my, maybe I won't have that alfredo, right? You will go for the salad. We have accepted this, all of our companies have done this; there isn't any food that doesn't have a label. Maybe there is some food, but basically we have it. So, to empower consumers, how do you think we could address some of this transparency that maybe, at the end of the day, we are all talking about with regard to these algorithms, the data, what happens to it, how we deal with it? It's overwhelming. I think with respect to things like nutrition labels, we have the advantage that we are using 150-year-old science to say what the chemistry of what is contained in a food is. Things like computation and AI are a bit of a different kind of science, and they have this feature that this phenomenon of computational irreducibility happens, and it's not possible to just give a quick summary of what the effect of a computation is going to be. But I know, having written algorithms myself, that I have kind of an expected outcome. I have a goal in there. You talk about no goals; there is a goal. Yeah. Whether you meet it or not, whether you exceed it or not, whether you fail or not, there is a goal when you write an algorithm: to give somebody who is asking you for this data what they want to do with it. The confusing thing is that in current practice the software learns. That's correct. They can create their own goals. Machine learning doesn't exactly set its own goals. It's rather that when you write an algorithm, and when I started using computers a ridiculously long time ago you would write a small program, you would know what every line of code was supposed to do. But you still should have some ability to control the outcome. Well, yes, you can put constraints on the outcome. The question is, how do you describe those constraints? You have to essentially have something like a program to describe those constraints. Let's say you want to say, we want to have balanced treatment. So let's take it out of technology and just talk about transparency in a way we can all understand. Can we put it in English terms: that we're going to take your data, your well-being, how you use it, do you sleep or don't you sleep, how many hours a day, think about your Fitbit, and say who it is going to? We can bring it down to those English-language parameters people could understand. Part of it you could. But say we want to make this give unbiased treatment of, say, political directions or something. I'm not even talking about unbiased treatment in political directions. There's going to be bias in age, in sex, in race; there's inherent bias in everything. So given that, you can still have other conversations.
My feeling is that rather than labeling, rather than saying we'll have a nutrition-label-like thing that says what this algorithm is doing, the better strategy is to say, let's give some third party the ability to be the brand that finally decides what you see. Just like with different newspapers: you can decide to see your news through the Wall Street Journal or through the New York Times or whatever else. Who is ultimately liable if somebody gets hurt by this data? That's a good question. It would help to break apart the underlying platform, something like Facebook, for example. You kind of have to use it; there's a network effect. And it's not the case that you can say, let's break Facebook into a thousand different Facebooks and you can pick which one you want to use. That's not really an option. What you want to do is say: when there's a news feed being delivered, is everybody seeing a news feed with the same set of values, the same brand, or not? And I think the realistic thing is to have separate providers for that final news feed, for example. I think that's a possible direction. There are a few other possibilities. And so your sort of label says, this is the such-and-such branded news feed, and people then get a sense of, is that the one I like? Is that the one doing something reasonable? If it's not, they'll just, as a market matter, reject it. That's my thought. I think I'm way over my time. We could all have a big conversation here. I'll submit more questions for the record. Thank you, Senator Rosen. My apologies to the Senator from New Mexico, whom I missed; you were up actually before the Senator from Nevada, but Senator Udall is recognized. Thank you, Mr. Chairman, and thank you to the panel. Very, very important topic here. Mr. Harris, I am particularly concerned about the radicalizing effect that algorithms can have on young children, and it's been mentioned here today in several questions. I'd like to drill down a little deeper on that. Children can inadvertently stumble on extremist material in a number of ways: by searching for terms they don't know are loaded with subtext, by clicking on content designed to catch the eye, and by getting unsolicited recommendations on content meant to engage their attention and maximize their viewing time. It's a story told over and over by parents who don't understand how their children have suddenly become engaged with the alt-right and white nationalist groups or other extremist organizations. Can you provide more detail on how young people are uniquely impacted by these persuasive technologies, and the consequences if we don't address this issue promptly and effectively? Thank you, Senator. Yes, this is one of the issues that most concerns me. As I think Senator Schatz mentioned, as recently as the last month, though these issues have been reported on for years, there was a pattern found on YouTube where young girls who had taken videos of themselves dancing in front of cameras were linked, in usage patterns, to other videos like that, which went further and further into that realm. And that was just identified by YouTube's supercomputer as a pattern, a pattern of, this is a kind of pathway that tends to be highly engaging. The way we tend to describe this is: if you imagine a spectrum on YouTube, on one side is the calm Walter Cronkite section of YouTube, and on the right-hand side are UFOs, conspiracy theories, Bigfoot, whatever. And you take a human being and drop them anywhere, in the calm section or in crazy town.
If I'm YouTube, which direction from there am I going to send you? I'm never going to send you to the calm section; I'm always going to send you toward crazy town. Now you imagine 2 billion people, like an ant colony, and it's tilting the playing field toward the crazy stuff. And the specific examples of this: a year ago, a teen girl who looked at a dieting video on YouTube would be recommended anorexia videos, because that was the more extreme thing to show to the voodoo-doll model that looked like that teen; the next thing to show it is anorexia. If you looked at a NASA moon landing, it would show flat-earth conspiracy theories. I wrote down another example: 50 percent of white nationalists in a study said it was YouTube that had red-pilled them. That's the term for the opening of the mind. The best predictor of whether you'll believe in a conspiracy theory is whether I can get you to believe in one conspiracy theory, because one conspiracy sort of opens up the mind and makes you doubt and question things and get really paranoid. The problem is that YouTube is doing this en masse, and it's created sort of 2 billion personalized Truman Shows, right? Each channel has that radicalizing direction. And if you think about it from an accountability perspective: back when we had Janet Jackson on one side at the Super Bowl and 60 million Americans on the other, we had a five-second TV delay and a bunch of humans in the loop for a reason. But what happens when you have 2 billion Truman Shows, 2 billion possible Janet Jacksons, and 2 billion people on the other end? It's a digital Frankenstein that's really hard to control. And so that's the way we need to see it. From there we can talk about how to regulate it. And Ms. Stanphill, you have heard him just describe what Google does with young people. What responsibility does Google have if its algorithms are recommending harmful videos to a child or a young adult that they otherwise would not have viewed? Thank you, Senator. Unfortunately, the research and information cited by Mr. Harris is not accurate. It doesn't reflect current policies nor the current algorithm. What the team has done, in an effort to make sure these advancements are made, is take such content out of recommendations, for instance; that limits its views by more than 50 percent. So are you saying you don't have any responsibility? Thank you, Senator. Because, clearly, young people are being directed toward this kind of material; there's no doubt about it. Thank you, Senator. YouTube is doing everything it can to ensure child safety online, and it works with a number of organizations to do so, and we'll continue to do so. Do you agree with that, Mr. Harris? I don't, because I know the researchers, who are unpaid, who stay up until 3:00 in the morning trying to scrape the data sets to show what these actual results are, and it's only through huge amounts of public pressure that they've tackled, bit by bit, issue by issue, bits and pieces of it. If they were truly acting responsibly, they would be doing so preemptively, without the unpaid researchers staying up until 3:00 in the morning doing that work. Thank you, Mr. Chairman. Thank you, Senator Udall. Senator Sullivan? Thank you, Mr. Chairman, and I appreciate the witnesses being here today. Very important issues that we're all struggling with. Let me ask Ms. Stanphill: I had the opportunity to engage in a couple of rounds of questions with Mr. Zuckerberg from Facebook when he was here.
One of the questions I asked, which I think we're all trying to struggle with, is this issue of what you, and when I say you, I mean Google or Facebook, what you are, right? There's this notion that you're a tech company, but some of us think you might be the world's biggest publisher. I think about 140 million people get their news from Facebook. When you combine Google and Facebook, I think somewhere north of 80 percent of Americans get their news there. So what are you? Are you a publisher? Are you a tech company? And are you responsible for your content? I think that's another really important issue. Mark Zuckerberg did say he was responsible for their content, but at the same time, he said they're a tech company, not a publisher. And as you know, whether you are one or the other is really critical, almost the threshold issue, in terms of how and to what degree you would be regulated by federal law. So which one are you? Thank you, Senator. As I might remind everybody, I am a user experience director for Google, so I support our digital well-being initiative. With that said, I know we're a tech company. That's the extent to which I know this definition you're speaking of. So do you feel you're responsible for the content that comes from Google on your websites, when people do searches? Thank you, Senator. As I mentioned, this is a bit out of my area of expertise as the digital well-being expert. I would defer to my colleagues to answer that specific question. Maybe we can take those questions for the record. Of course. Anyone else have a thought on that pretty important threshold question? Mr. Harris, if it's okay if I jump in. Thank you, Senator. The issue here is Section 230 of the Communications Decency Act. It's all about Section 230. It's obviously made it so the platforms are not responsible for any content that is on them, which freed them up to do what we've created today. The problem is: is YouTube a publisher? They're not generating the content; they're not paying journalists or doing that. But they are recommending things. I think we need a new class in between. The New York Times is responsible if they say something that defames someone else and it reaches a certain number, 100 million or so people. When YouTube recommends flat-earth conspiracy theories hundreds of millions of times, and consider that 70 percent of YouTube's traffic is driven by recommendation, driven by what they're recommending, when an algorithm is choosing what to put in front of the eyeballs of a person, then if you were to backwards-derive a motto, it would be: with great power comes no responsibility. Let me follow up on that, two things real quick, because I want to make sure I don't run out of time here. It's a good line of questioning. When I asked Mr. Zuckerberg, he actually said they were responsible for their content. That was in a hearing like this. Now, that actually starts to get close to being a publisher, from my perspective. So I don't know what Google's answer is, or others', but I think it's an important question. And, Mr. Harris, you just mentioned something I actually think is a really important question. I don't know if some of you saw Tim Cook's commencement speech at Stanford a couple of weeks ago. I happened to be there and saw it; thought it was quite interesting. He was talking about all the great innovations from Silicon Valley. But then he said, quote, lately it seems this industry is becoming better known for a less noble innovation:
the belief that you can claim credit without accepting responsibility. And he talked about a lot of the challenges, and then he said: it feels a bit crazy that anyone should have to say this, but if you have built a chaos factory, you can't dodge responsibility for the chaos. Taking responsibility means having the courage to think things through. So I'm going to open this up as kind of a final question, and maybe we start with you. What do you think he was getting at? It was a little bit generalized, but he obviously put a lot of thought into his commencement speech at Stanford, this notion of building things, creating things, and then going, whoa, I'm not responsible for that. What's he getting at? And then I'll open that up to any other witnesses. I thought it was a good speech, but I'd like your views on it. Yeah, I think it's exactly what everyone has been saying on this panel: these things have become digital Frankensteins that are terraforming the world in their image, whether it's the mental health of children or our politics and political discourse, without taking responsibility for taking over the public square. So it comes back to, who do you think is responsible? I think we have to have the platforms be responsible. When they take over election advertising, they are responsible for protecting elections. When they take over the mental health of kids or Saturday morning, they're responsible for protecting Saturday morning. Anyone else have a view on the quotes I gave from Tim Cook's speech? I think one of the questions is, what do you want to have happen? That is, when you say something bad is happening, that it's giving the wrong recommendations, by what definition of wrong? Who is deciding? Who is the moral arbiter? If I were running one of these automated content selection companies, and my company does something different, I would not want to be a moral arbiter for the world, which is what's effectively having to happen when decisions are being made about what content will be delivered and what will not be delivered. My feeling is the right thing to have happen is to break that apart, to have a more market-based approach, to have third parties be the ones responsible for that final decision about what content is delivered to which users. The platforms can do what they do very well, which is the kind of large-scale engineering and large-scale monetization of content, but somebody else gets to be somebody that users can choose from. A third party gets to be the one who decides the final ranking of content shown to particular users, so users can give brand allegiance to particular content providers that they want and not to other ones. Thank you, Mr. Chairman. Thank you, Senator Sullivan. Senator Markey. Thank you, Mr. Chairman, very much. YouTube is far and away the top website for kids today. Research shows a whopping 80 percent of 6-through-12-year-olds use YouTube on a daily basis. But when kids go on YouTube, far too often they encounter inappropriate and disturbing video clips that no child should ever see. In some instances, when kids click to view cartoons and characters in their favorite games, they find themselves watching material promoting self-harm and even suicide. In other cases, kids have opened videos featuring beloved Disney princesses and, all of a sudden, see a sexually explicit scene.
Videos like this shouldn't be accessible to children at all, let alone systematically served to children. Mr. Harris, can you explain how, once a child consumes one inappropriate YouTube video, the website's algorithms begin to prompt the child to watch more harmful content of that sort? Yes, thank you, Senator. If you watch a video about a topic, let's say it's that cartoon character, the Hulk or something like that, YouTube picks up some pattern that maybe Hulk videos are interesting to you. The problem is there's a dark market of people, the ones you are referencing from that long article that was very famous, who actually generate content based on the most-viewed videos. They'll look at the thumbnails and say there's a Hulk in that video, a Spider-Man in that video, and then they have machines manufacture auto-generated content, upload it to YouTube, and tag it in such a way that it gets recommended near those content items. And YouTube is trying to maximize traffic for each of these publishers, so when these machines upload the content, it tries to dose them with some views, saying, well, maybe this video is really good. It ends up gathering millions of views because kids like them. And the key thing going on here is that, as I said in the opening statement, this is about an asymmetry of power being masked as an equal relationship. Technology companies claim we're giving you what you want, but a 6-year-old to 12-year-old just keeps getting fed the next video, the next video, the next video. Correct. And there's no way that can be a good thing for our country over a longer period of time, especially when you realize the asymmetry: YouTube is pointing a supercomputer at that child's brain and calculating. An 8-year-old, a 10-year-old. It's wrong. Clearly the way the websites are designed can pose serious harm to children, and that's why, in the coming weeks, I will be introducing the Kids Internet Design and Safety Act, the KIDS Act. Specifically, my bill will combat amplification of inappropriate and harmful content on the internet; online design features like autoplay that coerce children and create bad habits; and commercialization and marketing that manipulate kids and push them into consumer culture. So, to each of today's witnesses: will you commit to working with me to enact strong rules that tackle the design features and underlying issues that make the internet unsafe for kids? Mr. Harris? Yes. Ms. Stanphill? Yes; it's a terrific goal, but it's not particularly my expertise. Okay. Yes. Okay, thank you. Ms. Stanphill, recent reporting suggests that YouTube is considering significant changes to its platform, including ending autoplay for children's videos, so that when one video ends, another doesn't immediately begin, hooking the child into long viewing sessions. I call for an end to autoplay for kids. Can you confirm that YouTube is getting rid of that feature? Thank you, Senator. I cannot confirm that as a representative from digital well-being. Thank you. I can get back to you, though. I think it's very important that that happen, voluntarily or through federal legislation, to make sure that the internet is a healthier place for kids. And Senators Blunt and Schatz and myself, Senator Collins, and Senator Bennet are working on the bipartisan Children and Media Research Advancement Act, which will commission a five-year, $95 million research initiative at the National Institutes of Health to investigate the impact of tech on kids.
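A minimal sketch of the up-next loop Mr. Harris describes above, in which a recommender that scores candidates only by predicted watch time keeps reinforcing whatever pattern it just detected. The titles and numbers are hypothetical, and the scoring function is an assumption for illustration, not YouTube's actual system.

```python
def watch_next(candidates, predicted_watch_time, already_seen):
    """Pick the unseen candidate the model expects to hold attention longest.
    Nothing in this objective asks whether the video is appropriate."""
    unseen = [v for v in candidates if v not in already_seen]
    return max(unseen, key=lambda v: predicted_watch_time[v])

# Hypothetical scores: knockoff content tagged to resemble what was just
# watched tends to score highest, so autoplay reinforces the pattern.
predicted_watch_time = {
    "official_cartoon_ep2": 4.0,
    "knockoff_cartoon_auto_generated": 6.5,
    "science_clip_for_kids": 2.0,
}
candidates = list(predicted_watch_time)
history = ["official_cartoon_ep1"]
queue = []
for _ in range(2):  # each pick becomes the next "up next"
    pick = watch_next(candidates, predicted_watch_time, history + queue)
    queue.append(pick)
print(queue)  # ['knockoff_cartoon_auto_generated', 'official_cartoon_ep2']
```

Under this objective, the most attention-holding item wins regardless of who uploaded it or why, which is the asymmetry the testimony above is describing.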
It will produce research to shed light on the cognitive, physical, and socioeconomic impacts of technology on kids. I look forward to working with everyone at this table on that legislation as well, so that we can design legislation and ultimately a program. I know that Google has endorsed the CAMRA Act. Can you talk to this issue? Yes, thank you, Senator. I can speak to the fact that we have endorsed the CAMRA Act and look forward to working with you on further regulation. Same thing for you, Mr. Harris? We've also endorsed it. Thank you. So I think we're late as a nation to this subject, but I don't think we have an option. We have to make sure there are enforceable protections for the children of our country. Thank you, Mr. Chairman. Thank you, Senator Markey. Senator Young. I thank our panel for being here. I thought I'd ask a question about concerns that many have, and I expect concerns will grow, about AI becoming a black box where it's unclear exactly how certain platforms make decisions. In recent years, deep learning has proved very powerful at solving problems and has been widely deployed for tasks like image captioning, voice recognition, and language translation. As the technology advances, there is great hope for AI to diagnose deadly diseases, calculate multimillion-dollar trading decisions, and implement successful autonomous innovations for transportation and other sectors. Nonetheless, the intellectual power of AI has received public scrutiny and has become unsettling for some futurists. Eventually society might cross a threshold in which using AI requires a leap of faith. In other words, AI might become, as I say, a black box, where it might be impossible to tell how an AI that has internalized massive amounts of data is making its decisions through its neural network and, by extension, impossible to tell how those decisions impact the psyche, the perceptions, the human understanding, and perhaps even the behavior of an individual. In early April, the European Union released final ethical guidelines for trustworthy AI. The guidelines aren't meant or intended to interfere with policies or regulations, but instead offer a loose framework for stakeholders to implement their recommendations. One of the key guidelines relates to transparency and the ability of AI systems to explain their capabilities, limitations, and decision making. However, if the improvement of AI requires more complexity, imposing transparency requirements will be equivalent to a prohibition on innovation. So I will open this question to the entire panel, but my hope is that Dr. Wolfram, I'm sorry, you can begin. Can you tell this committee the best ways for Congress to collaborate with the tech industry to ensure AI system accountability without hindering innovation? And specifically, should Congress implement industry requirements or guidelines for best practices? It's a complicated issue. Yes. I think that it varies from industry to industry. I think in the case of what we're talking about here, internet automated content selection, the right thing to do is to insert a kind of level of human control into what is being delivered, not in the sense of taking apart the details of an AI algorithm, but making the structure of the industry be such that there is some human choice injected into what's being delivered to people.
I think the bigger story is that we need to understand how we're going to make laws that can be specified in computational form and applied to AIs. We're used to writing laws in English, basically; we're used to being able to write down some words and then have people discuss whether they are following those words or not. When it comes to computational systems, that won't work. Things are happening too quickly; they are happening too often. You need something where you're specifying computationally: this is what you want to have happen, and then the system can perfectly well be set up to automatically follow those computational rules, or computational laws. The challenge is to create those computational laws, and that's something we're just not yet experienced with. We're starting to see computational contracts as a practical thing in the world of blockchain and so on, but we don't yet know how you would specify some of the things we want to specify as rules for how systems work. We don't yet know how to do that computationally. Are you familiar with the EU's approach to developing ethical guidelines for trustworthy AI? I'm not familiar with those specifics. Okay. Are any of the other panelists? Okay. Well, then, perhaps that's a model we could look at. Perhaps that would be ill-advised; stakeholders who may be watching or listening to these proceedings can tell me. Do others have thoughts? So, in my written comments I outlined a number of transparency mechanisms that could help address some of your concerns. One of the recommendations, specifically the last one, is that we suggested companies create an algorithmic impact assessment, and that framework, which we wrote for government use, can actually be applied in the private sector. We built the framework by learning from different assessments. In the U.S. we use environmental impact assessments, which allow for robust conversation about development projects and their impact on the environment; and in the EU, which is one of the reference points we used, they have a data protection impact assessment, and that's something done both in government and in the private sector. But the difference here, and why I think it's important for Congress to take action, is that what we're suggesting is something that's actually public, so we can have a discourse about whether this is a technological tool that has a net benefit for society, or something that's too risky and shouldn't be available. I'll be attentive to your proposal. Do you mind if we work with you, dialogue, if we have any questions about it? Yes, very much. Thank you. Others have any thoughts? It's okay if you don't. Okay. Sounds like we have a lot of work to do: industry working with other stakeholders to make sure that we don't act impulsively, but that we also don't neglect this area of public policy. Thank you. Thank you, Senator Young. Senator Cruz. Ms. Stanphill, a lot of Americans have concerns that big tech media companies, and Google in particular, are engaged in political censorship and bias. Google enjoys a special immunity from liability. The predicate was that Google and other big tech media companies would be neutral public fora. Does Google consider itself a neutral public forum? Yes, it does. Okay.
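One way to make Dr. Wolfram's idea of computational laws concrete: instead of an English sentence that people argue over, a rule is written as a function the pipeline can run against every recommendation slate it produces. The specific rule below, a cap on how much of a slate one source may supply, is only a made-up example of the form, not a rule proposed by any witness.

```python
def max_single_source_share(limit):
    """Computational rule: no single source may supply more than
    `limit` (a fraction) of any recommendation slate."""
    def rule(slate):
        if not slate:
            return True
        counts = {}
        for item in slate:
            counts[item["source"]] = counts.get(item["source"], 0) + 1
        return max(counts.values()) / len(slate) <= limit
    return rule

rule = max_single_source_share(limit=0.5)
slate = [
    {"id": 1, "source": "channel_x"},
    {"id": 2, "source": "channel_x"},
    {"id": 3, "source": "channel_x"},
    {"id": 4, "source": "channel_y"},
]
print(rule(slate))  # False: channel_x supplies 75% of the slate, over the cap
```

As the testimony suggests, the hard part is not running such a check but agreeing on which checks to write, and expressing richer goals like balance or veracity in this executable form.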
Are you familiar with the report that was released yesterday from Veritas, which included a whistleblower from within Google, videos from a senior executive at Google, and documents, purportedly internal PowerPoint documents from Google? Yes, I heard about that report in industry news. Have you seen the report? No, I have not. So you didn't review the report to prepare for this hearing? It's been a busy day, and I have a day job, which is digital well-being at Google, so I'm trying to make sure I keep the trains on the tracks. I'm sorry this hearing is impinging on your day job. It's a great opportunity; I really appreciate it. One of the things in that report, and I recommend that people concerned about political bias at Google watch the entire report and judge for themselves, is a video from a woman, a secret video that was recorded. Jen Gennai is the head of responsible innovation for Google. Are you familiar with Ms. Gennai? I work in user experience, and I believe that AI group is one we worked with on the AI principles, but it's a big company, and I don't work directly with Jen. Do you know her, or no? I do not know Jen. Okay. As I understand it, she is shown in the video saying, and this is a quote: Elizabeth Warren is saying that we should break up Google. And, like, I love her, but she's very misguided. Like, that will not make it better; it will make it worse, because all these smaller companies who don't have the same resources that we do will be charged with preventing the next Trump situation. It's like, a small company cannot do that. Do you think it's Google's job to, quote, prevent the next Trump situation? Thank you, Senator. I don't agree with that. No, sir. So a different individual, a whistleblower identified simply as an insider at Google with knowledge of the algorithm, is quoted in the same report as saying Google, quote, is bent on never letting somebody like Donald Trump come to power again. Do you think it's Google's job to make sure, quote, somebody like Donald Trump never comes to power again? No, sir, I don't think that is Google's job, and we build for everyone, including every single religious belief, every single demographic, every single region, and certainly every political affiliation. Well, I have to say that certainly does not appear to be the case. Of the senior executives at Google, do you know of a single one who voted for Donald Trump? Thank you, Senator. I'm a user experience director, and I work on Google digital well-being, and I can tell you we have diverse views, but I can't say. Do you know of anyone who voted for Trump? I definitely know of people who voted for Trump. Of the senior executives? I don't talk politics with my workmates. Is that a no? Sorry, is that a no to what? Do you know of any senior executives, even a single senior executive, who voted for Trump? I don't think this is in my area. I definitely don't know. I can tell you the public records show that in 2016, Google employees gave the Hillary Clinton campaign $1.315 million. That's a lot of money. Care to venture how much they gave to the Trump campaign? I would have no idea, sir. Well, the nice thing is it's a round number: zero dollars and zero cents. Not a penny, according to the public reports. Let's talk about one of the PowerPoints that was leaked. The Veritas report has Google internally saying: I propose we make machine learning intentionally human-centered and intervene for fairness. Is this document accurate? Thank you, sir. I don't know about this document.
So I don't know. I'm going to ask you to respond to the committee in writing afterwards as to whether this PowerPoint and the other documents included in the Veritas report are accurate. And I recognize that your lawyers may want to write an explanation; you're welcome to write all the explanation that you want, but I also want a simple, clear answer: is this an accurate document that was generated by Google? Do you agree with the sentiment expressed in this document? No, sir, I do not. Let me read you another, also in this report. It indicates that Google, according to this whistleblower, deliberately shifts recommendations: if someone is searching for conservative commentators, instead of recommending other conservative commentators, it recommends organizations like CNN or MSNBC or left-leaning political outlets. Is that occurring? Thank you, sir. I can't comment on search algorithms or recommendations, given my purview as the digital well-being lead. I can take that back to my team, though. So is it part of digital well-being for search recommendations to reflect where the user wants to go, rather than deliberately shifting where they want to go? Thank you, sir. As user experience professionals, we focus on delivering on user goals, so we try to get out of the way and get them on with the task at hand. So, a final question. One of these documents that was leaked explains what Google is doing, and it has a series of steps: training data are collected and classified; algorithms are programmed; media are filtered, ranked, aggregated, and generated; and it ends with, people, parentheses, like us, are programmed. Does Google view its job as programming people with search results? Thank you, Senator. I can't speak for the whole entire company, but I can tell you that we make sure we put our users first in our design. Well, I think these documents raise very serious questions about political bias at the company. Thank you, Senator Cruz. Senator, anything to wrap up with? Just a quick statement and then a question. I don't want the working of the refs to be left unresponded to, and I won't go into great detail except to say there are members of Congress who use the working of the refs to terrify Google and Facebook and Twitter executives so that they don't take action in taking down extreme content, false content, polarizing content, contra their own rules of engagement. And so I don't want the fact that the Democratic side of the aisle is trying to engage in good faith on this public policy matter, and not work the refs, to allow the message to be sent to the leadership of these companies that they have to respond to this bad-faith accusation every time we have any conversation about what to do in tech policy. My final question for you, and this will be the last time I leap to your defense, Ms. Stanphill: did you say privacy and data are not core to digital well-being? Thank you, sir. I might have misstated how that's being phrased, so what I meant... What do you mean to say? I mean to say that there is a team that focuses, day in and day out, on privacy, security, and control as they relate to user data. Right. Which is outside of my area. So you're talking sort of bureaucratically, and I don't mean that as a pejorative; you're talking about the way the company is organized. I'm saying, aren't privacy and data core to digital well-being? I see. Sorry, I didn't understand that point, Senator.
In retrospect, what I believe is that it is inherent in our digital well-being principles that we focus on the user, and that requires that we focus on privacy, security, and control of their data. Thank you. Thank you, Senator Schatz. And to be fair, I think both sides work the refs, but let me just ask a follow-on question. I appreciate Senator Blackburn's line of questioning from earlier, which may highlight some of the limits on transparency, as we sort of started with in our opening statements today: trying to look at ways that, in this new world, we can provide a level of transparency. You said it's going to be very difficult in terms of explainability of AI, but just to understand a little better how to provide users the information they need to make educated decisions about how they interact with platform services: might it make sense to let users effectively flip a switch to see the difference between a filtered, algorithm-based presentation and an unfiltered presentation? There are already, for example, search services that aggregate user searches and feed them en masse to engines like Bing, so you're effectively seeing the results of a generic search, independent of specific information about you. That works okay for some things; there are things for which it doesn't work well. I think this idea of, you know, you flip a switch is probably not going to have great results, because I think there will be, unfortunately, great motivation, in the case where the switch is flipped to not give user information, to give bad results. I'm not sure how you'd motivate giving good results in that case. Also, when you think about that switch, you can think about a whole array of other kinds of switches, and pretty soon it gets confusing for users to decide which switches they flip for what. Do they give location information but not this information? Do they give that information, not that information? My own feeling is that the most promising direction is to let some third party be inserted who will develop a brand. There might be 20 of these third parties; it might be like newspapers, where people can pick whether they want news from this place, that place, another place. To insert third parties and have more of a market situation, where you are relying on the trust you have in that third party to determine what you're seeing, rather than saying the user will have precise, detailed control. As much as I would like to see more users engaged in computational thinking and understanding what's happening inside computational systems, I don't think this is a case where that's going to work in practice. All right. Anybody else? So I think the issue with the flip-the-switch hypothetical is that users need to be aware of the tradeoffs, and currently so many users are used to the conveniences of existing platforms. There is a privacy-preserving platform called DuckDuckGo, which doesn't take your information and gives you search results; but if you're used to seeing the most immediate result at the top, DuckDuckGo, even though it's privacy-preserving, may not be the choice that all users would make, because they're not hyper-aware of the tradeoffs of giving that information to the provider.
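A minimal sketch of the flip-a-switch idea the chairman raises above: the same posts shown either in reverse-chronological order or ranked by a predicted-engagement score. The field names and scores are invented for illustration; real feeds blend many more signals than this.

```python
def render_feed(posts, mode="algorithmic"):
    """Two presentations of the same content; 'chronological'
    ignores the engagement model entirely."""
    if mode == "chronological":
        return sorted(posts, key=lambda p: p["timestamp"], reverse=True)
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

posts = [
    {"id": "calm_update",  "timestamp": 3, "predicted_engagement": 0.2},
    {"id": "outrage_bait", "timestamp": 1, "predicted_engagement": 0.9},
    {"id": "friend_photo", "timestamp": 2, "predicted_engagement": 0.4},
]
print([p["id"] for p in render_feed(posts, "chronological")])
# ['calm_update', 'friend_photo', 'outrage_bait']
print([p["id"] for p in render_feed(posts, "algorithmic")])
# ['outrage_bait', 'friend_photo', 'calm_update']
```

The tradeoff the witnesses go on to describe is visible even in this toy: the switch itself is easy to build, but most users would still pick the ranked view, and the ranked view is where the engagement pressure lives.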
So I think, while I understand the reason you're giving that metaphor, it's important for users to understand both the practices of a platform and the tradeoffs: if they want a more privacy-preserving service, what are they losing or gaining from that? Yeah, the issue is also that users, as has already been mentioned, will quote-unquote prefer the summarized feed that's algorithmically sorted for them, because it saves them time and energy. When you show people the reverse-chronological feed versus the algorithmic one, the algorithmic one saves them time, so even if there's a switch, most people will, quote-unquote, prefer that one. They'd have to be aware of the tradeoffs and have a notion of what fair really means there. What I'm most concerned about is that this is still fairness with respect to an increasingly fragmented truth that debases the information environment of shared truth that democracy depends on. But that's more... I'd like to comment on that issue. I think the challenge is, when you want to have a single shared truth, the question is, who gets to decide what that truth is? Is that decided within a single company, implemented using AI algorithms? Is that decided in some more market kind of way, by a collection of companies? I think it makes more sense, in kind of the American way of doing things, to imagine that it's decided by a whole selection of companies rather than being something that is burnt into a platform that has, for example, become universal through network effects and so on. All right. Well, thank you all very much. This was a very complicated subject, but one that I think your testimony and responses have helped shed some light on, and it certainly will shape our thinking in terms of how we proceed. There's definitely a lot of food for thought there, so thank you very much for your time and input. We'll leave the hearing record open for a couple of weeks and ask senators, if they have questions for the record, to submit those. And we would ask all of you, if you can, to get those responses back as quickly as possible, so that we can include them in the final hearing record. I think with that, we are adjourned.