Transcripts For FOXNEWSW Shepard Smith Reporting 20180410


I don't have all the examples of apps that we have banned here. If you would like, I can have my team follow up with you after this.

Have you ever required an audit to ensure the deletion of improperly transferred data, and if so, how many times?

Mr. Chairman, yes, we have. I don't have the exact figure on how many times we have. But, overall, the way we have enforced our platform policies in the past is we have looked at patterns of how apps have used our APIs and accessed information, and looked into reports people have made about apps doing sketchy things. Going forward, we are going to take a more proactive position on this and do much more regular spot checks and other reviews of apps, as well as increasing the number of audits that we do. And, again, I can make sure that our team follows up with you on anything about the specific past stats that would be interesting.

I was going to assume that, sitting here today, you have no idea. And if I'm wrong on that, you are telling me, I think, that you are able to supply those figures to us, at least as of this point.

Mr. Chairman, I will have my team follow up with you on what information we have.

Okay. But right now you have no certainty of how much of that is going on, right? Okay. Facebook collects massive amounts of data from consumers, including content, networks, contact lists, device information, location, and information from third parties. Yet your data policy is only a few pages long and provides consumers with only a few examples of what is collected and how it might be used. The examples given emphasize benign uses, such as connecting with friends. But your policy does not give any indication of more controversial uses of such data. My question: why doesn't Facebook disclose to its users all the ways the data might be used by Facebook and other third parties, and what is Facebook's responsibility to inform users about that information?

Mr. 
Chairman, I believe it's important to tell people exactly how the information they share on Facebook is going to be used. That's why, every single time you go to share something on Facebook, whether a photo on Facebook or on Messenger or WhatsApp, every single time there is a control right there about who you are going to be sharing it with, whether it's your friends or public or a specific group. And you can change that and control that in line.

To your broader point about the privacy policy, this gets into an issue that I think we and others in the tech industry have found challenging, which is that long privacy policies are very confusing. And if you make it long and spell out all the detail, then you are probably going to reduce the percentage of people who read it and make it accessible to them. So one of the things that we have struggled with over time is to make something that is as simple as possible so people can understand it, as well as giving them control inline in the product, in the context of when they are actually trying to use it, taking into account that we don't expect that most people will want to go through and read a full legal document.

Senator Nelson.

Thank you, Mr. Chairman. Yesterday when we talked, I gave the relatively harmless example that I'm communicating with my friends on Facebook and indicate that I love a certain kind of chocolate. And all of a sudden I start receiving advertisements for chocolate. What if I don't want to receive those commercial advertisements? Your chief operating officer, Ms. Sandberg, suggested on the NBC Today show that users who don't want their personal information used for advertising might have to pay for that protection. Pay for it. Are you actually considering having Facebook users pay for you not to use that information?

Senator, people have control over how their information is used in ads in the product today. 
If you want to have an experience where your ads aren't targeted using all the information that we have available, you can turn off third-party information. What we found is that even though some people don't like ads, people really don't like ads that aren't relevant. While there is some discomfort, for sure, with using information in making ads more relevant, the overwhelming feedback that we get from our community is that people would rather have us show relevant content there than not. So we offer this control that you are referencing. Some people use it; it's not the majority of people on Facebook. And I think that's a good level of control to offer. In order to not run ads at all, we would need some sort of business model.

And that is your business model. So I take it, then. I used the harmless example of chocolate. But if it got into a more personal thing, communicating with friends, and I want to cut it off, I'm going to have to pay you in order for you not to use my personal information to send me something that I don't want? That, in essence, is what I understood Ms. Sandberg to say. Is that correct?

Yes, Senator. Although, to be clear, we don't offer an option today for people to pay to not show ads. We think offering an ad-supported service is the most aligned with our mission of trying to help connect everyone in the world, because we want to offer a free service that everyone can afford. That's the only way we can reach billions of people.

So, therefore, you consider my personally identifiable data the company's data, not my data. Is that it?

No, Senator. Actually, the first line of our terms of service says that you control and own the information and content that you put on Facebook.

Well, the recent scandal is obviously frustrating, not only because it affected 87 million people, but because it seems to be part of a pattern of lax data practices by the company going back years. 
So, back in 2011, there was a settlement with the FTC, and now we discover yet another instance where the data failed to be protected. When you discovered that Cambridge Analytica had fraudulently obtained all of this information, why didn't you inform those 87 million?

When we learned in 2015 that Cambridge Analytica had bought data from an app developer on Facebook that people had shared it with, we did take action. We took down the app, and we demanded that both the app developer and Cambridge Analytica delete and stop using any data that they had. They told us that they did this. In retrospect, it was clearly a mistake to believe them. We should have followed up and done a full audit then, and that is not a mistake that we will make again.

Yes, you did that, and you apologized for it, but you didn't notify them. Do you think that you have an ethical obligation to notify 87 million Facebook users?

Senator, when we heard back from Cambridge Analytica that they had deleted the data, we closed the case. In retrospect, that was a mistake. We shouldn't have taken their word for it. We have updated our policies to make sure we don't make that mistake again.

Did anybody notify the FTC?

No, Senator, for the same reason. We considered it a closed case.

Senator Thune.

Mr. Zuckerberg, would you do that differently today, presumably, in response to Senator Nelson's question?

Yes, having to do it over.

This may be your first appearance before Congress, but it's not the first time that Facebook has faced tough questions about its privacy policies. Wired magazine recently noted that you have a 14-year history of apologizing for ill-advised decisions regarding user privacy, not unlike the one that you made just now in your opening statement, after more than a decade of promises to do better. How is today's apology different, and why should we trust Facebook to make the necessary changes to ensure user privacy and give people a clearer picture of your privacy policies? 
Thank you, Mr. Chairman. So, we have made a lot of mistakes in running the company. I think it's pretty much impossible, I believe, to start a company in your dorm room and then grow it to the scale that we are at now without making some mistakes. Because our service is about helping people connect and share information, those mistakes have been different. We try not to make the same mistake multiple times. In general, the mistakes are around how people connect to each other, just because of the nature of the service.

Overall, I would say that we're going through a broader philosophical shift in how we approach our responsibility as a company. For the first 10 or 12 years of the company, I viewed our responsibility as primarily building tools; if we could put those tools in people's hands, then that would empower people to do good things. What I think we have learned now, across a number of issues, not just data privacy but also fake news and foreign interference in elections, is that we need to take a more proactive role and a broader view of our responsibility. It's not enough to just build tools. We need to make sure they are used for good. That means we need to now take a more active view in policing the ecosystem, and in watching and kind of looking out and making sure that all of the members in our community are using these tools in a way that's going to be good and healthy.

So, at the end of the day, this is going to be something where people will measure us by our results on this. It's not that I expect that anything I say here today will necessarily change people's view. But I'm committed to getting this right, and I believe that, over the coming years, once we work all these solutions through, people will see real differences.

I'm glad that y'all have gotten that message. As we discussed in this office yesterday, the line between legitimate political discourse and hate speech can sometimes be hard to identify, especially when you are relying on artificial intelligence for the initial discovery. 
Can you discuss what steps Facebook takes when making these evaluations, the challenges you face, and any examples of where you may draw the line between what is and what is not hate speech?

Yes, Mr. Chairman. I will speak to hate speech, and then I will talk about enforcing our content policies more broadly. So, actually, maybe, if you are okay with it, I will go in the other order.

From the beginning of the company in 2004, in my dorm room, we didn't have the technology to look at what people were sharing. We basically had to enforce our content policies reactively. People could share what they wanted, and then, if someone in the community found it to be offensive or against our policies, they would flag it for us, and we would look at it reactively. Now, increasingly, we are developing AI tools that can identify certain classes of bad activity proactively and flag it for our team at Facebook. By the end of this year, by the way, we will have more than 20,000 people working on security and content review, working across all these things. When content gets flagged to us, we have those people look at it, and if it violates our policies, then we take it down.

Some problems lend themselves more easily to AI solutions than others. Hate speech is one of the hardest, because determining if something is hate speech is very linguistically nuanced. You need to understand what is a slur and whether something is hateful, and people use language differently across the world. Contrast that, for example, with an area like finding terrorist propaganda, where we have already been very successful at deploying AI tools. Today, as we sit here, 99 percent of the ISIS and al Qaeda content that we take down on Facebook, our AI systems flag before any human sees it. That's the success of rolling out AI tools that can proactively police and enforce safety across the community. 
For hate speech, I am optimistic that, over a five-to-ten-year period, we will have AI tools that can get into some of the linguistic nuances of different types of content and be more accurate in flagging things for our systems. But today we are just not there on that. A lot of this is still reactive. People flag it to us. We have people look at it. We have policies to try to make it as not subjective as possible. But, until we get it more automated, there is a higher error rate than I'm happy with.

Senator Feinstein.

Thanks, Mr. Chairman. Mr. Zuckerberg, what is Facebook doing to prevent foreign actors from interfering in U.S. elections?

Thank you, Senator. This is one of my top priorities in 2018, to get this right. One of my greatest regrets in running the company is that we were slow in identifying the Russian information operations in 2016. We expected them to do a number of traditional cyber attacks, which we did identify, and we notified the campaigns that they were trying to hack into them. But we were slow to identify the new type of information operations.

When did you identify the new operations?

It was right around the time of the 2016 election itself. Since then, 2018 is an incredibly important year for elections. Not just the U.S. midterms, but, around the world, there are important elections in India, Brazil, Mexico, Pakistan, and Hungary, and we want to make sure that we do everything we can to protect the integrity of those elections. Since the 2016 election, there have been several important elections around the world where we have had a better record: the French presidential election, the German election, and the U.S. Senate Alabama special election last year.

Explain what is better about the record.

So, we have deployed new AI tools that do a better job of identifying fake accounts that may be trying to interfere in elections or spread misinformation. 
Between those three elections, we were able to proactively remove tens of thousands of accounts before they could contribute significant harm. And the nature of these attacks, though, is that, you know, there are people in Russia whose job it is to try to exploit our systems and other internet systems as well. So this is an arms race. They are going to keep on getting better at this, and we need to keep on investing in getting better at this, too. One of the things I mentioned before is that we will have more than 20,000 people, by the end of this year, working on security and content review across the company.

Speak for a moment about automated bots that spread disinformation. What are you doing to punish those that exploit your platform in that regard?

Well, you are not allowed to have a fake account on Facebook. Your content has to be authentic. So we build technical tools to try to identify when people are creating fake accounts, especially large networks of fake accounts, like the Russians have, in order to remove all of that content. After the 2016 election, our top priority was protecting the integrity of other elections around the world. But, at the same time, we had a parallel effort to trace the IRA activity back to Russia, the Internet Research Agency, the Russian organization that did this activity in 2016. And just last week, we were able to determine that a number of Russian media organizations that were sanctioned by the Russian regulator were operated and controlled by this Internet Research Agency. So we took a step last week, and it was a pretty big step for us, of taking down sanctioned news organizations in Russia as part of an operation to remove 270 fake accounts and pages, part of their broader network in Russia, that was actually not targeting international interference, I'm sorry, let me correct that, primarily targeting spreading misinformation in Russia itself, as well as in certain Russian-speaking neighboring countries. 
How many accounts of this type have you taken down?

For the IRA specifically, the ones that we have tied back to the IRA, we can identify the 470 in the American elections and the 270 that we specifically went after last week. There are many others that our systems catch that are more difficult to attribute to Russian intelligence, but the number would be in the tens of thousands of fake accounts that we remove. And I'm happy to have my team follow up with you on more information, if that would be helpful.

Would you please? I think this is very important. If you knew in 2015 that Cambridge Analytica was using Professor Kogan's information, why didn't Facebook ban Cambridge Analytica in 2015? Why did you wait?

Senator, that's a great question. Cambridge Analytica wasn't using our services in 2015, as far as we can tell. That is one of the questions I asked our team as soon as I learned about this: why did we wait until we found out about the reports last month to ban them? It's because, as of the time we learned about their activity in 2015, they weren't an advertiser and weren't running pages, so we actually had nothing to ban.

Thank you. Thank you, Mr. Chairman.

Thank you, Senator Feinstein. Now, Senator Hatch.

This is the most intense public scrutiny I have seen since the Microsoft hearing that I chaired back in the late 1990s. The recent stories about Cambridge Analytica and data mining on social media have raised serious concerns about consumer privacy, and, naturally, I know you understand that. At the same time, these stories touch on the very foundation of the internet economy and the way the websites that drive our internet economy make money. Some have professed themselves shocked, shocked that companies like Facebook and Google share user data with advertisers. Did any of these individuals ever stop to ask themselves why Facebook and Google don't charge for access? Nothing in life is free. Everything involves tradeoffs. 
If you want something without having to pay money for it, you are going to have to pay for it in some other way, it seems to me. And that's what we're seeing here. These great websites that don't charge for access extract value in some other way. And there is nothing wrong with that, as long as they are upfront about what they are doing. In my mind, the issue here is transparency. It's consumer choice. Do users understand what they are agreeing to when they access a website or agree to terms of service? Are websites upfront about how they extract value, or do they hide the ball? Do consumers have the information they need to make an informed choice about whether or not to visit a particular website? To my mind, these are the questions we should be focusing on.

Now, Mr. Zuckerberg, I remember well your first visit to Capitol Hill back in 2010. You spoke to the Senate Republican High-Tech Task Force
