
CSPAN3 British Committee Hearing On Fake News Twitter Panel February 16, 2018

…any later than we are. If I could, while the members of the committee are returning, I could start with some of my questions. You'll be aware that the committee has made repeated requests to Twitter for information about the activity of accounts, fake accounts, particularly where they may be connected with Russian agencies. A similar request was made by the U.S. Senate Intelligence Committee. Thank you, chairman. Thank you for having us today. I would like to defer that question to Nick, my colleague from the UK, who does have an update to give you. As he said, and in my letters previously, we have been doing further investigation. I would like to read this, because I don't want to misread the numbers. Is it very short? Two paragraphs. Our broader investigation has identified a very small number of suspected Internet Research Agency-linked accounts: 49 accounts were active, which represents less than 0.005% of the total number of accounts that tweeted about the referendum. They collectively posted 942 tweets, representing less than 0.02% of the total tweets posted about the referendum. These tweets were cumulatively retweeted 461 times and liked 637 times. On average, this represents fewer than ten likes per account and fewer than 13 retweets per account during the campaign, with most receiving two or fewer likes and retweets. These are very low levels of engagement. What's the audience reach for those accounts? Less than two retweets: what's the audience for that? We have a set number of accounts. They have people who follow them. What's the audience? The engagement metrics are what we have been using in this investigation to understand how those accounts have been performing on the platform. As I have highlighted, these are very low levels of engagement, and that bears directly on the visibility of those accounts. Very low engagement would suggest very low views. The number of accounts is one piece of information, but the reach of that information is something else, and what was being shared is of interest too. It would be interesting to know about the sharing: is it content they created, or links to other sources of information? Would that be useful to know? So thank you for giving us that update. I think we've clearly got other things to follow up on, and we'd need more information about that. We'd be interested to know whether you've restricted your searches to certain already identified accounts or whether you've done a trawl across the whole platform for accounts registered in Russia that were active during the campaign. We know with Twitter there was evidence of large numbers of suspected bot accounts then being taken down after the referendum was over. That's why we're persistently asking for this information. And I can touch briefly on the point there. We were asked to look at the City University research. One of the challenges we have is that these accounts weren't identified by that research. Twitter is an open platform. Our API can be used by universities and academics around the world, and it is. Unfortunately, that doesn't give you the full picture. In some cases, people have identified accounts as suspected bots which have later been identified as real people. One of the things we do is work closely with academics, asking people to bear in mind when they make assertions about the level of activity on Twitter that there may be cases where those assertions are based on very active Twitter users who are real people and not bots.
One of the dangers of using activity as a metric to identify bots is that you may misidentify prolific tweeters who are human. So that's a benefit: researchers can use our platform. It's also a challenge for us, because the researchers can't see the defensive mechanisms and the user data that we can. There's been plenty of analysis done looking at characteristics of suspicious bot activity. Twitter also knows where accounts are being operated from. Therefore, you could easily detect the creation of accounts operated in a different country that suddenly start tweeting about something happening in another location. That sort of activity is easy to spot on the site if you're looking for it. I want to ask Carlos: what cooperation was given to the U.S. Senate investigation? Has the evidence of Russian-linked activity on Twitter just been extrapolated from the work that's been done looking at the Facebook pages, or is that separate intelligence you've supplied to the Senate? Thank you for that question. You know, we're constantly monitoring our platform for any activity that's happening. The Internet Research Agency in particular: we came across that information in a number of different ways. Starting in 2015, there was a New York Times article about some of the activity on some of those accounts, and at that time we started actioning those, in June of 2015. In the course of the follow-up to the election, we got information from an external contractor called Q Intel, which we share with other platforms, that provided a seed of information that they told us was related to this farm in St. Petersburg, the IRA. I believe there were about a hundred accounts turned over. Bit by bit, following more and more signals, we found accounts that were related to it. We've improved the information we've provided to the public and, as you mentioned, sir, to the U.S. Senate and to the House Intelligence Committee. That number is now 3,814 Internet Research Agency accounts that were active. Noting that, you know, we started suspending these accounts in 2015, all of them have been suspended. They're not functioning on our platform. And you heard from our peer companies earlier today. We are very good at understanding what's happening on our platform, but sometimes it is important to have that partnership with third parties, with contractors, with civil society, with academics, and with government and law enforcement in particular, to help us figure out what we don't know, what we can't see that's not on our platform. We're good at tracking the connections between things on Twitter, and sometimes we need some partnership on the rest of the picture. Does Twitter believe that there are likely to be other farms or agencies running fake accounts from countries like Russia? There's been a focus on one, but some people say that if the level of activity people believe has been the case was being carried out, it would probably be too much to be done by one agency, and there will probably be others as well. I think we have to be humble in how we approach this challenge. I would not say we have a full understanding of what's happening or what happened; we have continued to look. We're constantly looking for efforts to manipulate our platform. We're really good at stopping it, especially when it comes to the malicious automation side. But we will not say or stipulate that we will ever get to a full understanding.
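To illustrate the point made above about activity-based bot identification, here is a minimal sketch of the kind of naive scoring rule an external researcher could build from public Twitter data alone. The fields, thresholds, and weights are assumptions for illustration, not Twitter's actual detection logic, which (as the witnesses note) also relies on internal signals researchers cannot see.

```python
# A naive activity-based "bot score" computed only from public metrics.
# All thresholds below are hypothetical.

from dataclasses import dataclass


@dataclass
class AccountStats:
    screen_name: str
    tweets_per_day: float   # average posting rate
    pct_retweets: float     # share of posts that are retweets (0.0-1.0)
    account_age_days: int


def naive_bot_score(a: AccountStats) -> float:
    """Return a 0-1 score using only public activity metrics."""
    score = 0.0
    if a.tweets_per_day > 50:     # very prolific
        score += 0.4
    if a.pct_retweets > 0.9:      # almost never posts original content
        score += 0.3
    if a.account_age_days < 30:   # recently created
        score += 0.3
    return score


# The weakness raised in the hearing: a real person who tweets heavily about a
# referendum can clear these thresholds and be wrongly counted as a bot.
enthusiast = AccountStats("human_superfan", tweets_per_day=80.0,
                          pct_retweets=0.95, account_age_days=2000)
print(naive_bot_score(enthusiast))  # ~0.7 -> flagged despite being human
```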
Some of the evidence that the committee took in when we started the oral evidence hearings in Westminster related to the referendum in Catalonia, and research there suggesting there was not only Russian activity but also agencies based in Venezuela. Is that something Twitter has looked at? Nick Pickles, not only our UK lead but also one of the leaders in the company when it comes to information quality, could perhaps address that better than I can. This is one of the challenges where Twitter presents an opportunity: research is done and published. That particular research wasn't published in a journal. There's no underlying data. We've not received a formal communication of the findings. It's very difficult for us to validate those external research findings sometimes. What we have is the numbers at an aggregate level. One thing I would say, just to respond briefly to your previous point, chair, concerns the assertion that it's easy to identify very quickly where an account is operating from on the internet, where someone is based. I was logging into my email earlier on; as a standard corporate practice, we use a virtual private network to communicate securely with our company. That took two clicks. As far as Google is concerned, I'm not in D.C. right now, because my virtual private network is connecting somewhere else. So I would caution against the idea that companies have a simple view of how customers communicate with us: traffic may be routed through data centers, it may be routed through VPNs, it may be routed through users deliberately trying to hide their location. I want to caution against the idea that there is somehow absolute certainty there. All of this work is based on a variety of signals, and we make probabilistic decisions based on those, but they are very rarely absolute. If I could, just to build on Nick's point, which is an important one: the geographic basis of where tweets are coming from, where users are, is not always the strongest signal we use to action accounts for violating our terms of service, which we take super seriously. That means even if Nick is dialing in from a VPN or a Tor browser or other ways to obfuscate where he's coming from, if he breaks any of our rules, we're going to hold him accountable. The explanation you've given there, saying it's possible for people to hide where they are, I understand that. But I would imagine that if we were talking to your advertising people, they would say it's quite easy to buy advertising on Twitter that targets people based on where they live. That would be one of the rudimentary requirements of a brand seeking to advertise on a platform like Twitter. Only about 2% of our users share geolocation data in their tweets. That's not what I asked, though. That's one thing people often assume. Someone may identify their country in a biography. You may be able to infer it from the language they use. I think sometimes, and I'm not saying the research isn't important, I'm just saying that sometimes the conclusions reached don't match what we find as a company. And we see that quite regularly. For example, some of the research on bots will identify people. Some of the research will identify other manipulations of the platform that we were able to detect and prevent. But that's not so: if an advertiser came and said, I want to pay for promoting my messages on Twitter, I want to target people in the state of Virginia, would you say, we can't do that because that's not the way we're set up, or, yes, we can? So that's an excellent question, chairman. Thank you.
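As an illustration of the "variety of signals" point in the testimony above, the sketch below combines several weak location indicators into a probabilistic guess. The signal names and weights are purely hypothetical and are not Twitter's actual method; the point is only that a single signal such as IP geolocation can be flipped by a VPN while other signals still point elsewhere.

```python
# Combine weak, possibly conflicting location signals into a probabilistic guess.
# Signal names and weights are hypothetical.

def infer_country(signals):
    """Return (most likely country code, confidence 0-1) from weak signals."""
    weights = {
        "ip_geolocation": 0.3,    # easily shifted by a VPN, as noted in the testimony
        "profile_location": 0.2,  # self-declared, may be blank or false
        "tweet_language": 0.2,
        "declared_timezone": 0.3,
    }
    votes = {}
    for name, country in signals.items():
        if country:
            votes[country] = votes.get(country, 0.0) + weights.get(name, 0.1)
    if not votes:
        return None, 0.0
    best = max(votes, key=votes.get)
    return best, votes[best] / sum(votes.values())


# A VPN flips the IP-based signal while everything else still points elsewhere,
# so the result is probable rather than certain.
print(infer_country({
    "ip_geolocation": "US",      # VPN exit node
    "profile_location": "RU",
    "tweet_language": "RU",
    "declared_timezone": "RU",
}))  # approximately ('RU', 0.7) -- a probability, never an absolute answer
```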
We work with our advertising clients to try to get them the best value for their money. We don't have as much information about our users as some of our peer companies. We try to figure out what the analogues are to reach the audience they're trying to reach: followers of CNN or Fox News or the BBC. We do have a degree of geolocation within a country, or within a media market within a country, but we don't exaggerate our precision on that. We do provide extremely good value to our advertisers, but again, we are limited by some of those factors. But you would sell advertising based on location, even with those caveats. It is one of the approaches, but often we find that people who are interested in certain subjects or search for certain issues can sometimes be a better... I understand that. You could sell to an audience based on location. We work with our advertisers to try to get them as close as they can to what they're aiming for. But yes. By country, we do really well, like Nick said. About 2% of our users geolocate at any given moment, and at any given time there's a degree of uncertainty. I just want to be really clear. The question I'm asking is: would you sell advertising, would you sell an audience, based on a location in a country? Yes. Thank you, chair. Now, you mentioned the figure you gave before has gone up in terms of what are effectively fake accounts. We've had security briefings from people who have studied this subject since 2014. They're suggesting the figure could be in the tens of thousands in terms of fake accounts. In addition, the phenomenon we see which is probably more damaging is the means by which amplification is used for these fake accounts and disinformation, often through accounts which have a certain bona fide texture to them. If you drill right down, they have all the signatures of falsehood about them in that respect. Now, obviously you don't have the monopolistic position of Google. You don't have the money of a Facebook. And you seem to have this infestation of these types of accounts. Is this too much for you? Is this a little bit like the Wild West? No, I don't believe so, sir. We are a smaller company. We have 3,700 to 3,800 employees worldwide. Google has more employees in their Dublin office than we have in our entire workforce. But we have been leaders on many fronts in utilizing machine learning and AI to address some of the thorniest issues, precisely the ones you're talking about. I'll give you one example, if I may, which is terrorism: 75% of those accounts are removed before they have tweeted once. We have incredible engineers. We have incredible ability to address precisely the issue you're talking about, which is malicious automation. We are currently taking down 6.4 million accounts per week. Not taking down, but challenging them and saying, you're acting weird, can you verify you're a human being? We measure our progress in weeks. That's a 60% improvement from October. What about the amplification of those accounts? That's precisely what we're talking about, which is malicious automation. You see a lot of people acting in a coordinated way to push something, or try to push something, artificially. We have gotten really good at stopping that particular effort to manipulate our platform. We've protected our trends from that kind of automated interference since 2014. And one of the challenges that we see, and that Nick referred to, is that many of the folks who are investigating our platform are doing so through an API we provide freely.
There are a lot of things we do on a day-to-day basis to action those accounts, to take down that malicious automation. The way we challenge accounts puts folks in a timeout where they cannot be tweeting, but they're still visible to the API. There are also things like safe search, which is the standard default setting. Okay. You say you're exploring whether to allow users to flag tweets with false information. How soon will we see that? One more time, I'm sorry. I've had a document given to me about public actions that you're undertaking as social media companies. What it says is flagging tweets that contain false or misleading information. Where are we with that? Sorry, I just want to clarify: other users flagging tweets from other people, saying we believe this to be false or misleading? I'm just curious about the source of that. I'm not aware of that being discussed. Right, okay. So that's not something you would consider? It's broader than that. Firstly, it goes to the point of some of the wider questions the committee is asking about: what would you do with that information, and what's the response required? Secondly, there is the likelihood of users gaming it to try to abusively remove people they disagree with. They're already gaming you. This is one we're very conscious of. You've clearly not got the manpower with 3,700 people. That's clearly the case. I do appreciate that. Why don't you allow more of your community to flag up these tweets which contain misinformation? I just want to clarify this. We're currently removing 6.4 million accounts every week for breaking our rules, specifically around automation. That's a huge number of accounts. Now, are you asking us to remove content that's untrue? No, I'm saying this is an area you could potentially explore, and I understood you were exploring this particular area: to allow other users, when they see quite clearly a piece of misinformation, much of which is intended to damage political processes and stability within our own country, to flag that as a warning to other users when they see it. Is that not something you would consider? I think through the whole sweep of your pretty incredible hearing today, you're hearing from a lot of different voices who are trying to approach the issue in different ways. Where there's a piece of information that's against the law, we can action that quickly. Near the end of the 2016 election in the U.S., there were a series of tweets spreading the idea that you could text to vote or tweet to vote. It's a standard effort to mislead people, voter suppression, which is as old as electoral politics but has moved into the 21st century. That's against the law in the U.S. We were able to take that down extremely quickly, and the truth reached eight times more Twitter users than the initial falsehood. But I think when it comes to plainly false information, it's, you know, the conversation that happens on Twitter at any given moment is a wonderful democratic debate. We are very cognizant of the things we can control and the things we cannot. We're trying to address malicious automation, the kind of thing that can take a bad idea and spread it quickly. We're trying to do, I think, a lot of the things Monica mentioned in the last panel: elevate voices that are credible, and then give our users more of a sense of who's talking, a broader view of who is actually speaking. Okay.
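The exchange above describes a challenge-and-timeout flow for suspected automation ("you're acting weird, can you verify you're a human being?"). The sketch below shows one minimal way such a flow could be expressed; the thresholds and the verification step are assumptions, not Twitter's real system.

```python
# A minimal challenge-and-timeout flow for suspected automated posting.
# All thresholds are hypothetical.

challenged = set()   # accounts placed in a "timeout" pending human verification


def on_post(account_id, tweets_last_hour, identical_tweets_last_hour):
    """Decide whether posting behaviour looks like malicious automation."""
    if account_id in challenged:
        return "blocked: complete the verification challenge first"
    if tweets_last_hour > 100 or identical_tweets_last_hour > 20:
        challenged.add(account_id)   # cannot tweet while challenged, though the
        return "challenged"          # account may still be visible via the API
    return "allowed"


print(on_post("acct_1", tweets_last_hour=240, identical_tweets_last_hour=60))  # challenged
print(on_post("acct_1", tweets_last_hour=1, identical_tweets_last_hour=0))     # blocked
print(on_post("acct_2", tweets_last_hour=3, identical_tweets_last_hour=0))     # allowed
```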
In terms of outside help, if you like, considering your relatively small workforce and the need to better understand your users' information, what about what was mentioned by my colleague earlier in terms of academic oversight, in terms of allowing them to see the information? I don't mean one or two academics, but a much more open, much more transparent means by which they can see effectively what is being done and what can be done. I think that's absolutely right. Obviously there was that conference in California last week with academics discussing this. We're currently hiring for two roles at our headquarters whose job will be specifically to ensure those conversations are happening. One thing we're very fortunate about is that you've already seen a lot of information academics have produced based on Twitter data. The API is open; researchers already access it every day. There's a huge amount of research happening right now on Twitter data. So we're absolutely committed to deepening that relationship and learning more. But I think it's important to recognize that Twitter data is arguably the biggest source of academic data. Only last week we expanded the amount of information available, increasing the ability to search for any tweet ever posted, rather than a fixed time period. Based on your statement at the beginning, it seems these academics are probably more effective at finding problematic content than Twitter itself. We spoke about misinformation. If we're looking at how misinformation spreads and what kind of content spreads: Twitter is 280 characters, so often the content might be a link to a newspaper or a link to a video. Some of the information may be off platform. We can absolutely learn how those networks are working. It also informs the solution. I think one of the big challenges, and t
