them from Facebook and tell everyone affected. As for past activity, I don't have all the examples of apps we've banned here, but if you'd like, I can have my team follow up with you after this.

Have you ever required an audit to ensure the deletion of improperly transferred data, and if so, how many times?

Mr. Chairman, yes we have. I don't have the exact figure on how many times we have, but overall the way we've enforced our platform policies in the past is that we have looked at patterns of how apps have used our APIs and accessed information, as well as looked into reports that people have made to us about apps that might be doing sketchy things. Going forward, we're going to take a more proactive position on this and do much more regular spot checks and other reviews of apps, as well as increasing the number of audits that we do. Again, I can make sure our team follows up on anything about the specific past stats that would be interesting.

I was going to assume that, sitting here today, you have no idea, and if I'm wrong on that, you're telling me, I think, that you're able to supply those figures to us, at least as of this point?

Mr. Chairman, I will have my team follow up with you on what information we have.

Okay. But right now you have no certainty of how much of that is going on, right? Okay. Facebook collects massive amounts of data from consumers, including content, networks, contact lists, device information, location, and information from third parties. Yet your data policy is only a few pages long and provides consumers with only a few examples of what is collected and how it might be used. The examples given emphasize benign uses, such as connecting with friends, but your policy gives no indication of the more controversial uses of such data. My question:
Why doesn't Facebook disclose to its users all the ways their data might be used by Facebook and other third parties, and what is Facebook's responsibility to inform users about that information?

Mr. Chairman, I believe it's important to tell people exactly how the information that they share on Facebook is going to be used. That's why, every single time you go to share something on Facebook, whether it's a photo on Facebook or a message in Messenger, every single time there's a control right there about who you're going to be sharing it with, whether it's your friends or public or a specific group, and you can change and control that in line. To your broader point about the privacy policy, this gets into an issue that I think we and others in the tech industry have found challenging, which is that long privacy policies are very confusing. If you make it long and spell out all the detail, then you're probably going to reduce the percentage of people who read it and make it accessible to them. So one of the things that we've struggled with over time is to make something that is as simple as possible so people can understand it, as well as giving them controls in line, in the product, in the context of when they're actually trying to use them, taking into account that we don't expect that most people will want to go through and read a full legal document.

Senator Nelson. Thank you, Mr. Chairman. Yesterday, when we talked, I gave the relatively harmless example that I'm communicating with my friends on Facebook and indicate that I love a certain kind of chocolate. And all of a sudden I start receiving advertisements for chocolate. What if I don't want to receive those commercial advertisements? Your chief operating officer, Ms. Sandberg, suggested on the NBC Today show that Facebook users who do not want their personal information used for advertising might have to pay for that protection. Pay for it.
Are you actually considering having Facebook users pay for you not to use that information?

Senator, people have a control over how their information is used in ads in the product today. So if you want to have an experience where your ads aren't targeted using all the information that we have available, you can turn off third-party information. What we've found is that even though some people don't like ads, people really don't like ads that aren't relevant, and while there is some discomfort, for sure, with using information to make ads more relevant, the overwhelming feedback that we get from our community is that people would rather have us show relevant content there than not. So we offer this control that you're referencing. Some people use it. It's not the majority of people on Facebook, and I think that's a good level of control to offer. I think what Sheryl was saying was that, in order to not run ads at all, we would still need some business model.

And that is your business model. So I take it that, and I used the harmless example of chocolate, but if it got into a more personal thing, communicating with friends, and I want to cut it off, I'm going to have to pay you in order for you not to send me, using my personal information, something that I don't want? That, in essence, is what I understood her to say. Is that correct?

Yes, Senator. Although, to be clear, we don't offer an option today for people to pay to not show ads. We think an ad-supported service is most in line with connecting everyone in the world. We want to offer a free service that everyone can afford. That's the only way we can reach billions of people.

Therefore, you consider my personally identifiable data the company's data, not my data. Is that it?

No, Senator. Actually, the first line of our terms of service says that you control and own the information and content that you put on Facebook.
Well, the recent scandal is obviously frustrating, not only because it affected 87 million people, but because it seems to be part of a pattern of lax data practices by the company going back years. So, back in 2011, there was a settlement with the FTC, and now we discover yet another incident where the data failed to be protected. When you discovered that Cambridge Analytica had fraudulently obtained all of this information, why didn't you inform those 87 million?

When we learned in 2015 that Cambridge Analytica had bought data from an app developer on Facebook that people had shared it with, we did take action. We took down the app, and we demanded that both the app developer and Cambridge Analytica delete and stop using any data that they had. They told us that they did this. In retrospect, it was clearly a mistake to believe them, and we should have followed up and done a full audit then. That's not a mistake we'll make again.

Yes. You did that, and apologized for it, but you didn't notify them. And do you think that you have an ethical obligation to notify 87 million Facebook users?

Senator, when we heard back from Cambridge Analytica that they weren't using the data and had deleted it, we considered it a closed case. In retrospect, that was clearly a mistake. We shouldn't have taken their word for it. We've updated our policies to make sure we don't make that mistake again.

Did anybody notify the FTC?

No, for the same reason. We considered it a closed case.

Senator Thune. And Mr. Zuckerberg, you would do that differently today, presumably, in response to the question?

Yes.

This may be your first appearance before Congress, but it's not the first time that Facebook has faced tough questions about its privacy policies. Wired magazine recently noted that you have a 14-year history of apologizing for ill-advised decisions regarding user privacy, not unlike the one you made just now in your opening statement.
After more than a decade of promises to do better, how is today's apology different, and why should we trust Facebook to make the necessary changes to ensure user privacy and give people a clearer picture of your privacy policies?

Thank you, Mr. Chairman. So, we have made a lot of mistakes in running the company. I think it's pretty much impossible, I believe, to start a company in your dorm room and grow it to this scale without making some mistakes. Because our service is about helping people connect and share information, those mistakes have been different. We try not to make the same mistake multiple times, but in general the mistakes are around how people connect to each other, because of the nature of the service. Overall, I would say that we're going through a broader philosophical shift. For the first ten or twelve years of the company, I viewed our responsibility as primarily building tools: if we could put the tools in people's hands, then that would empower people to do good things. What I think we've learned, across a number of issues including foreign interference in elections, is that we need to take a more proactive role and a broader view of our responsibility. It's not enough to just build tools. We need to make sure they're used for good. That means we need to take a more active view in policing the ecosystem, watching and looking out and making sure that all the members of our community are using these tools in a way that's going to be good and healthy. So, at the end of the day, this is going to be something where people will measure us by our results. It's not that I expect anything I say here today to necessarily change people's view, but I'm committed to getting this right, and I believe that over the coming years, once we fully work all these solutions through, people will see real differences.

Okay. Well, I'm glad that you all have gotten that message.
As we discussed in my office yesterday, the line between legitimate political discourse and hate speech can sometimes be hard to identify, especially when you're relying on artificial intelligence and other technologies for the initial discovery. Can you discuss what steps Facebook currently takes when making these evaluations, the challenges you face, and any examples of where you may draw the line between what is and what is not hate speech?

Yes, Mr. Chairman. I'll speak to hate speech, and then I'll talk about enforcing our content policies more broadly. Actually, maybe, if you're okay with it, I'll go in the other order. So, from the beginning of the company in 2004, I started it in my dorm room. It was me and my roommate. We didn't have AI technology that could look at the content people were sharing, so we basically had to enforce our content policies reactively. People could share what they wanted, and then, if someone in the community found it to be offensive or against our policies, they'd flag it for us, and we'd look at it reactively. Increasingly, we're developing AI tools that can identify certain classes of bad activity proactively and flag it for our team at Facebook. By the end of the year, we're going to have more than 20,000 people working on security and content review, working across all these things. So when content gets flagged to us, we have those people look at it, and if it violates our policies, we take it down. Some problems lend themselves more easily to AI solutions than others. Hate speech is one of the hardest, because determining whether something is hate speech is very linguistically nuanced. You need to understand what is a slur and whether something is hateful, and not just in English; the majority of people on Facebook use it in languages that are different across the world. Contrast that, for example, with an area like finding terrorist propaganda, which we've already been successful at deploying AI tools on.
Today, as we sit here, 99 percent of the ISIS and al Qaeda content that we take down on Facebook, our AI systems flag before any human sees it. That's a success in terms of rolling out AI tools that can proactively police and enforce safety across the community. Hate speech, I'm optimistic that over a five- to ten-year period we'll have AI tools that can get into some of the nuances, the linguistic nuances, of different types of content to be more accurate in flagging things, but today we're just not there on that. A lot of this is still reactive: people flag it to us, and we have people look at it. We have policies to try to make it as not subjective as possible, but, until we get it more automated, there's a higher error rate than I'm happy with.

Thank you. Senator Feinstein.

Thank you, Mr. Chairman. Mr. Zuckerberg, what is Facebook doing to prevent foreign actors from interfering in U.S. elections?

Thank you, Senator. This is one of my top priorities in 2018, to get this right. One of my greatest regrets in running the company is that we were slow in identifying the Russian information operations in 2016. We expected them to do a number of more traditional cyberattacks, which we did identify and notify the campaigns that they were trying to hack into, but we were slow to identify that new type of information operation.

When did you identify the new operations?

It was right around the time of the 2016 election itself. And since then, 2018 is an incredibly important year for elections. Not just the U.S. midterms, but, around the world, there are elections in India, Brazil, Mexico, Pakistan, and Hungary where we want to make sure we do everything we can to protect their integrity. I have more confidence that we're going to get this right, because since the 2016 election there have already been several important elections around the world with a better record: the German election and the U.S. Senate special election in Alabama last year.

Explain what is better about the record.
So we've deployed new AI tools that do a better job of identifying fake accounts that may be trying to interfere in elections or spread misinformation, and across those elections we were able to proactively remove tens of thousands of fake accounts before they could contribute significant harm. The nature of these attacks, though, is that there are people in Russia whose job it is to try to exploit our systems and other internet systems as well. So this is an arms race. They're going to keep getting better, and we need to invest in getting better at this too. That's why one of the things I mentioned before is that, by the end of this year, we're going to have more than 20,000 people working on security and content review across the company.

Speak for a moment about automated bots that spread disinformation. What are you doing to punish those who exploit your platform in that regard?

Well, you're not allowed to have a fake account on Facebook. Your content has to be authentic. So we build technical tools to try to identify when people are creating fake accounts, especially large networks of fake accounts like the Russians have, in order to remove all of that content. After the 2016 election, our top priority was protecting the integrity of other elections around the world, but at the same time we had a parallel effort to trace back to Russia the IRA activity, the Internet Research Agency activity, the part of the Russian government that did this activity in 2016. And just last week, we were able to determine that a number of Russian media organizations that were sanctioned by the Russian regulator were operated and controlled by this Internet Research Agency.
So we took the step last week, which was a pretty big step for us, of taking down those sanctioned news organizations in Russia as part of an operation to remove 270 fake accounts and pages, part of their broader network in Russia, that was actually not targeting international interference as much as, let me correct that, it was primarily targeting spreading misinformation in Russia itself, as well as certain Russian-speaking neighboring countries.

How many accounts of this type have you taken down?

Across, in the IRA specifically, the ones we've pegged back to the IRA, we can identify the 470 in the American elections and the 270 that we specifically went after in Russia last week. There are many others that our systems catch which are more difficult to attribute specifically to Russian intelligence, but the number would be in the tens of thousands of fake accounts that we remove, and I'm happy to have my team follow up with you on more information if that's helpful.

Would you, please? I think this is very important. If you knew in 2015 that Cambridge Analytica was using the information obtained from the professor, why didn't Facebook ban Cambridge Analytica in 2015? Why did you wait?

Senator, that's a great question. Cambridge Analytica wasn't using our services in 2015, as far as we can tell. So this is clearly one of the questions that I asked our team as soon as I learned about this: why did we wait until we found out about the reports last month to ban them? It's because, as of the time we learned about their activity in 2015, they weren't an advertiser. They weren't running pages. So we actually had nothing to ban.

Thank you. Thank you, Senator Feinstein. Now, Senator Hatch.

Well, in my opinion, this is the most intense public scrutiny I've seen for a tech-related hearing since the Microsoft hearing that I chaired back in the late 1990s.
The recent stories about Cambridge Analytica and data mining on social media have raised serious concerns about consumer privacy, and naturally I know you understand that. At the same time, these stories touch on the very foundation of the internet economy and the way the websites that drive our internet economy make money. Some have professed themselves shocked, shocked that companies like Facebook and Google share user data with advertisers. Did any of these individuals ever stop to ask themselves why Facebook and Google don't charge for access? Nothing in life is free. Everything involves trade-offs. If you want something without having to pay money for it, you're going to have to pay for it in some other way, it seems to me. And that's what we're seeing here. These great websites that don't charge for access, they extract value in some other way, and there's nothing wrong with that, as long as they're up front about what they're doing. In my mind, the issue here is transparency. It's consumer choice. Do users understand what they're agreeing to when they access a website or agree to terms of service? Are websites up front about how they extract value from users, or do they hide the ball? Do consumers have the information they need to make an informed choice regarding whether or not to visit a particular website? To my mind, these are questions that we should ask or be focus