
C-SPAN3: House Intelligence Hears From Twitter, Facebook, and Google, November 1, 2017

I heard Mr. Walker just say, as in my previous conversation with the other two companies, that you're investing significant corporate resources and putting weight behind the commitments you've made today. On the numbers in the five exhibits we showed, and certainly the two that Mr. Schiff showed, can you tell us what the difference is between an impression and an ad click, with some brief comment as to context? Those look stunningly impressive on their face. Against a broader backdrop, could you help us put that in context?

An impression, Congressman, is content that is in view for a user. It doesn't necessarily mean that a user stopped and viewed it. If you think about how you use your phone, when you open up an app and scroll through it, anything in there would be an impression. A click means engagement with the ad. With these ads in particular, the click may have been to like a piece of content, for example.

How much influence did these ads and this information, or misinformation, have? Are there metrics you use as part of your evaluation in your normal business model? If you're trying to help a company develop an ad program, you're going to show them that you think their ad will have this kind of impact. Are there tools you could use to measure influence on who to vote for in November 2016?

We have tools to help advertisers measure return on investment. There are tools to help them understand different campaigns. The campaigns we saw from the accounts we've subsequently linked to the Internet Research Agency were typically intended to drive followership of the pages, getting people to like the page. There the return on investment is clear from how many people liked the page.

Were they successful?

Congressman, I can't say what their expectations were. I think it's clear they were able to drive a significant following.

They were or were not?

They were. It's why this activity appears so pernicious. It was undertaken by people who understand social media. These people were not amateurs, and that underscores the threat we're facing and why we're so focused on addressing it going forward.

Based on what happened in 2016, no one looking at that could distinguish it from, say, a left-wing or right-wing group that could have been trying to pitch that same message. Would there have been a way a user could have distinguished that it was a foreign actor versus someone here in the United States who may have a horrible opinion but wanted to use that platform? Could they tell?

I think it would have been difficult to do so.

Is Facebook doing something, looking at the 2018 election, that would help users see who that is?

We're taking a number of steps going forward. We never want to see that content on the site in the first place, because it is so insidious, because it is an effort to...

Even if it was an American that violated your standards, we have a First Amendment issue there.

Right, it's an excellent question. We believe that when people show up to Facebook as their authentic selves, they have the opportunity, and should have the opportunity, to speak on important social issues like the ones that are discussed in these ads. The problem with these ads, and the reason they should not run on the site, is that people were not showing up as their authentic selves.

Do you think you'll have tools available before the 2018 election so people can look past the face value of an ad and see who is behind it?

I'd like to make two points. First, we have already incorporated the learnings from this behavior into our automated tools, so our tools are better able to detect and rid the site of these masquerading accounts. The second point goes to disclosure. With respect to political ads in particular, we want to give advertisers an opportunity to make clear who is behind the ad. That will be a strong signal, and we are going to require documentation to make sure that people who are running political ads in connection with U.S. federal elections are authorized to do so. Thank you.
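The distinction the witness draws at the top of this exchange, an impression being content that scrolls into view while a click is an explicit engagement such as liking the promoted page, can be pictured as two separate counters. The following is a minimal sketch of that distinction only; the event names and the AdMetrics class are hypothetical illustrations, not any platform's actual measurement API.

```python
# Minimal sketch of the impression/click distinction described above.
# The event names ("viewport_enter", "ad_click") and the AdMetrics class
# are hypothetical, not any platform's real measurement system.
from dataclasses import dataclass

@dataclass
class AdMetrics:
    impressions: int = 0   # ad content scrolled into view; no action by the user required
    clicks: int = 0        # explicit engagement, e.g. liking the promoted page

    def record(self, event: str) -> None:
        if event == "viewport_enter":
            # Counted the moment the ad is on screen, even if the user never stops scrolling.
            self.impressions += 1
        elif event == "ad_click":
            self.clicks += 1

    def click_through_rate(self) -> float:
        return self.clicks / self.impressions if self.impressions else 0.0

metrics = AdMetrics()
for event in ["viewport_enter", "viewport_enter", "viewport_enter", "ad_click"]:
    metrics.record(event)
print(metrics.impressions, metrics.clicks, metrics.click_through_rate())  # 3 1 0.333...
```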
The January 2017 intelligence assessment concluded that Russia's messaging strategy blends covert operations with overt efforts by Russian agencies, state-funded media, and paid social media users or trolls. We now have a much better sense of how that manifests itself. On Facebook, we learned of 470 fake accounts tied to the Kremlin-linked Internet Research Agency, or troll farm. From these accounts, more than 80,000 pieces of individual content were produced. Roughly 3,400 paid ads were purchased by the troll farm through August 2017, which over 11 million Americans saw during the campaign season. On Twitter, roughly 2,700 Twitter users were linked to the Russian troll farm. Automated accounts were also found; these bots tweeted one and a half million times, which accounted for 300 million views. With respect to YouTube and Google, there were 3 million views on YouTube and 5 billion by Russia.

The questions I'd like to ask you all: one is Russia-specific, and one is broader. Part of what made the Russian social media campaign successful, Mr. Stretch, as you point out, is that they understood that the algorithms you use accentuate fear-based content, allowing it to go viral and be amplified. This is an issue of concern in terms of the degree to which algorithms focused on the platform for advertising purposes may have the unintended consequence of polarizing our society. Things that capture our attention are the things the algorithms will amplify to a greater degree. What corporate obligations do you think your companies have vis-a-vis both of these issues: the fact that algorithms designed to attract our attention have the real consequence of pitting American against American in a way that the Russians so capably manipulated? And do you have the historic data that would let you analyze the Trump campaign's advertising and its organic campaign content against what was produced by the Russian social media farms, and understand whether there was any sophistication in that overlap?

We obviously take both of these issues very seriously. As to our focus: while we do look at content and have rules that govern content, we have the greatest success when we look at behavior. We are talking about things like removing malicious accounts. What we've seen, especially in this investigation, is that these malicious actors need ears and eyes; they need to be able to reach an audience. The way they get that audience is to use automated activity on the platform. That's where we're focused. Over the last year we've improved our ability to challenge accounts. We're challenging 4 million accounts every week to determine if they're real. We block 450,000 suspicious logins every day. As to the IRA, the Russian-based troll farm, and the people we've been able to identify to date, we have that information and can share it with your staff.

Mr. Stretch and Mr. Walker? Gentlemen, being respectful of the other members of the committee, I'd ask you to be responsive and brief.

I will be brief. Yes, we do have an obligation to prevent foreign interference in the election. We take that obligation seriously, and there are more details in my written testimony as to how we're attempting to discharge it.
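Facebook's witness describes the documentation requirement mentioned earlier: people running political ads in connection with U.S. federal elections must be authorized and must disclose who is behind the ad. That amounts to a simple gate before an ad is served. Below is a hypothetical sketch under that reading; the field and function names are invented for illustration and are not Facebook's actual system.

```python
# Hypothetical sketch of the disclosure gate described in the testimony:
# a political ad runs only if the advertiser has verified documentation and
# a disclosure string naming who is behind it. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class PoliticalAdRequest:
    advertiser: str
    documentation_verified: bool   # e.g. identity and mailing-address check completed
    paid_for_by: str               # disclosure shown alongside the ad

def may_run(ad: PoliticalAdRequest) -> bool:
    """Allow a U.S. federal election ad only when authorization and disclosure are present."""
    return ad.documentation_verified and bool(ad.paid_for_by.strip())

print(may_run(PoliticalAdRequest("Example Committee", True, "Paid for by Example Committee")))  # True
print(may_run(PoliticalAdRequest("Unverified Buyer", False, "")))                               # False
```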
With respect to the algorithm question, our goal is to provide the most relevant information to users, primarily driven by friends and family. We want you to come to Facebook and see information that's important to you and to your friends and family. Oftentimes, what's important to your friends and family are challenging, provocative social issues, and you will see that. Our responsibility is to make sure that when you see that content, it's authentic, so you can trust the dialogue that's occurring on the platform. We have not seen overlap between the targeting of the ads we've disclosed and that of any other advertiser operating on the site, including the Trump campaign.

The accuracy and integrity of our results goes to the question of interference with the election in the United States or anywhere else around the world. We have taken safeguards to protect our users. The use of our platforms for advertising was limited, about $4,700, and generally not micro-targeted or finely targeted. We'd be happy to answer any further questions for the committee.

Mr. LoBiondo, five minutes.

Social media platforms have the responsibility of striking a balance between removing false information and preserving freedom of speech. Can you give us some detail on how you go about determining what is false news?

We're taking a number of different steps to target fake news. We're working on our algorithms and providing training to the raters who provide quality feedback for us, to improve the ranking of authentic and genuine sites. We're also making broader use of fact-check labels, working with third parties for both Google Search and Google News. We've taken steps to disallow advertising on sites that misrepresent their nature or purpose, and to add to our policies around hate speech, incitement of violence, and the like.

I would group our efforts in response to false news into three buckets. Most false news is financially motivated, and we're making efforts to disrupt those financial incentives. We alert users who have attempted to share content that has been determined to be false news. And we work to help users approach the content they see with a more discerning eye.

I think the way this was characterized is correct. It's a balance between free speech and what's real and what's false, and there's a lot of activity on the platform. One thing we took off our platform was the tweet-to-vote content; the activity telling people not to believe it was something like eight to ten times what we saw in the actual tweets. We're working on the behavior; that's where we're focused right now. We've made great strides in focusing on things like that, as with terrorism and child sexual exploitation. We have the work we've done on ad transparency, which is going to help educate the consumer about who's paying for an ad, what else they're running, what they're targeting, and what they're after: who's paying for it and what they're expecting. We have councils around the world who are helping us think through the tools we're trying to employ to tackle these issues and how they will impact the debate over free speech on our platform. We're working hard on this, but it's a challenge.

All of you have said you've committed corporate resources to this. What assurances can you give us that foreign malicious activity in the 2018 elections and beyond is going to be mitigated?

I can assure you we are focused on it, and we are improving. We see real opportunities for improvement in three categories.
First, we have to be better technically. We have learned a lot from the 2016 election cycle and the political trolling behavior we've seen worldwide in the last year or so, and we've incorporated that learning into our automated systems and are seeing results. The second area where we have room to improve is industry cooperation. There's a very good model for this in terms of how we have shared expertise and threat information in other areas of abuse on the platform, and we're looking at standing that up in this area as well. Third and finally, we think a constructive dialogue with law enforcement authorities, where again we're sharing information, will put us in a much stronger position as we head into next year's elections.

Since my time has expired, if you could get those answers back to the committee so that we can refer to them. Thank you. I yield back.

I'd like to use my short time to explore Russia's use of Twitter. First, in a few short words, can you explain to us the difference between a bot and a troll?

A bot is an automated account, an account where a machine is largely responsible for the actions: setting it up, tweeting, retweeting.

It's fully automatic?

Yes, that's typically the behavior we see. Troll farms are a newer challenge for us, and a bigger challenge we're going to try to tackle in a few ways. We think of a troll as a real human behind the account, coordinated with others.

So with a troll, it's a real human, but not necessarily a real human whose identity we know. The Russians took advantage of this by tweeting messages they thought favorable to their cause?

That's correct.

Here's the key issue for me. A person doesn't need to disclose their identity on the platform?

That's correct.

With Facebook and YouTube, there isn't the same anonymity. A person in St. Petersburg, or elsewhere in Russia, or in Ukraine could share content. They could pretend to be someone else. The everyday user has no way of knowing who they are, right?

That's correct. We have a number of signals behind the account that we can share with law enforcement when necessary, and we do verify a number of accounts, both corporate accounts and individual accounts, to help folks understand who the real person is.

Let me give an example. The board behind me shows a few of the over 2,700 Twitter users that have been connected to the Kremlin. There is no way to know the content on Seattle Post was generated by a Russian entity?

The real Seattle Post would presumably be verified on the platform, but users wouldn't know just by looking at that user name.

Should political content created by bots, or by any other form of artificial intelligence, be labelled as such? And if that content is generated by a foreign person, should it be labelled as such?

To your first point, on automation: we don't try to label it, we try to remove it. When we see automated retweets, we remove those actors from the platform. Because of the information we have behind the scenes, we can often connect those accounts; we're not just removing the one, we're removing the collective.

What do you think your success rate is?

We're getting better. We think we've gotten twice as good in the last year.

Give me a sense of percentage. Put that in context for me.

Context of what?

I don't know what "we're getting better" is based on.

They're always trying to get better than we are at detecting them, and we get better, because those old techniques don't die, and there are the new techniques we learn from or get ahead of. We were challenging about 2 million accounts at this time last year. We block 450,000 suspicious logins a day, taking them off the platform before they even retweet.
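Twitter's witness describes challenging millions of accounts and blocking hundreds of thousands of suspicious logins a day, using behind-the-scenes signals (login location, devices, phone numbers, and automation cadence are all mentioned in this hearing) to stop bad actors before they can even retweet. The following is a minimal sketch of that kind of signal-based screening, assuming invented signal names, weights, and thresholds; it is not Twitter's real heuristic.

```python
# Rough sketch of the signal-based screening described in the testimony:
# score an incoming login on behavioural signals and challenge or block it
# before it can tweet or retweet. All signals, weights and thresholds here
# are invented for illustration.
from dataclasses import dataclass
from typing import Optional

# Placeholder "known bad-actor location" prefixes (documentation-only TEST-NET ranges).
SUSPECT_IP_PREFIXES = ("203.0.113.", "198.51.100.")

@dataclass
class LoginAttempt:
    ip: str
    phone_number: Optional[str]    # no verifiable phone on file is a weak negative signal
    accounts_on_device: int        # many accounts behind one device suggests a coordinated collective
    retweets_last_hour: int        # machine-speed retweeting suggests automation

def suspicion_score(attempt: LoginAttempt) -> int:
    score = 0
    if attempt.ip.startswith(SUSPECT_IP_PREFIXES):
        score += 3                 # logging in from a known bad-actor location
    if attempt.phone_number is None:
        score += 1
    if attempt.accounts_on_device > 5:
        score += 2
    if attempt.retweets_last_hour > 100:
        score += 3
    return score

def action_for(attempt: LoginAttempt) -> str:
    score = suspicion_score(attempt)
    if score >= 5:
        return "block"             # stopped before it can post at all
    if score >= 3:
        return "challenge"         # e.g. ask for phone or captcha verification
    return "allow"

print(action_for(LoginAttempt("203.0.113.7", None, 12, 250)))      # block
print(action_for(LoginAttempt("192.0.2.10", "+15550100", 1, 2)))   # allow
```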
Part of the power of Twitter was that it got real users to share Russia's propaganda with a wider audience. The board behind me shows an account purporting to be the Tennessee GOP. Its content was shared by Trump campaign officials, and Donald Trump Jr. followed the account until it was shut down in August. The Trump campaign, knowingly or unknowingly, helped legitimize and spread Russian disinformation. I'm a big fan and a big user, and I hope you remain committed to uncovering this kind of meddling in the core of our democratic process.

We're very committed.

When you say you don't necessarily label something that you come to find out is false, but that you try to remove it: I just have to be honest with you, I don't personally use Twitter, so no offense, but how hard would it be for them, after you take down Seattle Post, to just create a Seattle Post One? Rather than letting people know, by the way, this is Russian-produced propaganda, or a foreign-produced news story, take that for what it's worth, aren't you just chasing your own tail all day long?

It may appear to be a game of whack-a-mole, but we see where people log in from, the devices they're using, the phone numbers they're using, their IP addresses. We're able to see and stop someone who's a bad actor. We're able to use those signals to maintain a database of bad-actor locations and other signals, and to stop the account creation. We get better every day; that's why our technology and our results are getting better all the time at stopping these things. We're using the signals and behaviors behind the scenes to stop them before they create another account, like you said.

I appreciate that, and I appreciate what you said about blocking malicious activity and trying to figure out how to police authenticity. The problem I have is that I don't know how successful you're being so far. To this day, we see news stories come out that we learn in short order, as recently as much of the NFL kneeling controversy, were being pushed from abroad to try to pit us against each other, taking both sides and just throwing it out there.

My question to you is this, and I don't know if I have an opinion on it or not, but we've talked about it on both sides of the aisle: does this body have a role in assisting you with respect to foreign entities? Not Americans, and certainly not American journalists. I might say something that's completely the opposite of what somebody on the other side of the aisle says, on two different networks, both of us believing it's true. You can't police who's right or who's wrong; that would be a violation of my free speech. But with regard to foreign entities that infiltrate, does the United States Congress have a role in assisting you? If we do, what would that be, in your opinion, in alerting my constituents in Okeechobee, Florida, that this piece of news they're reading, not just a political ad but a news story on Facebook, is not true, and that they know that because there's a label or disclaimer showing that, by the way, what they're reading was produced in a foreign country? Do we have a role in that? If so, what is it? And how can we make sure we're not violating people's constitutional rights by getting involved in that?

So the challenge you identified is an acute one.
We don't want to put ourselves in the position of being the arbiter of truth; we don't think that's a tenable position for any company or industry to be in. We think it's inconsistent with the protection of free expression that's so foundational to this country. We are taking a number of measures to ensure, again, that authenticity and trust are present on the platform, including labelling stories that have been disputed as false. Where we see a role for government in assisting in this effort is in ensuring that we are all sharing information about the techniques and threat actors that we need to be alert to, monitoring for them on the platform, and disrupting them when they engage in the activity that the Chairman and the Ranking Member surfaced earlier. That's where we feel there is really an opportunity to come together, not just as an industry but as a country, to work on this problem together.

Anybody else?

I second that. I'd just add that any additional leads the government has that it can provide would be helpful.

I yield back.

Thank you, Mr. Chairman. Gentlemen, I think there's no doubt that Russia tried to weaponize your platforms and meddle in our elections. I think that it has risen to the level of a
