Transcript: CSPAN3 British Committee Hearing on Fake News, February 8, 2018

To try to get them as close as they can to what they're aiming at. But yes. By country, we do really well, like Nick said. About 2% of our users geolocate at any moment. At any given time, there's a degree of uncertainty.

I just want to be really clear. The question I'm asking: would you sell advertising, would you sell an audience, based on a location in a country?

Yes.

Thank you, chair. Now, you mentioned the figure before has gone up in terms of what you said of fake accounts, effectively. We've had security briefings from people who have studied this subject since 2014. They're suggesting the figure could be in the tens of thousands in terms of fake accounts. In addition, the phenomenon we see which is probably more damaging is the means by which this is used to amplify these fake accounts and disinformation, which are often accounts which have a certain bona fide texture to them. If you drill right down, they have all the signatures of falsehood about them in that respect. Now, obviously you don't have the monopolistic position of Google. You don't have the money of a Facebook. And you seem to have this infestation of these types of accounts. Is this too much for you? Is this a little bit like the Wild West?

No, I don't believe so, sir. We are a smaller company. We have 3,700 to 3,800 employees worldwide. Google has more employees in their Dublin office than we have in our entire workforce. But we have been leaders on many fronts in utilizing machine learning and AI to address some of the thorniest issues, precisely the ones you're talking about. I'll give you one example, if I may, which is terrorism. 75% of the terrorist accounts we remove are taken down before they have tweeted once. We have incredible engineers. We have incredible ability to address precisely the issue you're talking about, which is malicious automation. We are now currently taking down 6.4 million accounts per week. Not taking down, but challenging them and saying, you're acting weird, can you verify you're a human being? We measure our progress in weeks. That's a 60% improvement from October.

What about the amplification of those accounts?

That's precisely what we're talking about, which is malicious automation. You see a lot of people acting in a coordinated way to push something, or try to push something, artificially. We have gotten really good at stopping that particular effort to manipulate our platform. We've protected our trends from that kind of automated interference since 2014. And one of the challenges that we see, and that Nick referred to, is that many of the folks who are investigating our platform are doing so through an API we provide freely. There are a lot of things we do on a day-to-day basis to action those accounts, to take down that malicious automation. The way we challenge accounts puts folks in a timeout where they cannot be tweeting, but they're still visible to the API. Things like safe search, which is the standard default setting.

Okay. You say you're exploring whether to allow users to flag tweets with false information. How soon will we see that?

One more time, I'm sorry.

I've had a document given to me about public actions being undertaken by the social media companies. What it says is flagging tweets that contain false or misleading information. Where are we with that?

Sorry, I just want to clarify. Users, other users, flagging tweets from other people, saying we believe this to be false or misleading? I'm just curious of the source of that. I'm not aware of that being discussed.

Right, okay. So that's not something you would consider.
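The challenge flow described in this exchange, detecting accounts that behave in an automated or coordinated way, blocking them from tweeting until a human responds, while leaving them visible to the API, can be sketched roughly as follows. This is a hypothetical Python illustration, not Twitter's actual system: the thresholds, the Account fields, and the send_human_verification_challenge helper are all invented.

from dataclasses import dataclass
from enum import Enum

class AccountState(Enum):
    ACTIVE = "active"          # normal use
    CHALLENGED = "challenged"  # in a "timeout": cannot tweet, but still visible to the API

@dataclass
class Account:
    handle: str
    state: AccountState = AccountState.ACTIVE
    tweets_last_hour: int = 0
    identical_tweets_in_cluster: int = 0  # accounts posting the same text in lockstep

# Invented thresholds, for illustration only.
MAX_TWEETS_PER_HOUR = 120
MAX_CLUSTER_SIZE = 50

def looks_like_malicious_automation(acct: Account) -> bool:
    # Heuristic: a very high tweet rate, or posting in lockstep with many other accounts.
    return (acct.tweets_last_hour > MAX_TWEETS_PER_HOUR
            or acct.identical_tweets_in_cluster > MAX_CLUSTER_SIZE)

def challenge_if_anomalous(acct: Account) -> None:
    # Challenge rather than remove: the account is frozen until a human responds,
    # but its profile and past tweets remain visible (e.g. to researchers via the API).
    if acct.state is AccountState.ACTIVE and looks_like_malicious_automation(acct):
        acct.state = AccountState.CHALLENGED
        send_human_verification_challenge(acct)

def send_human_verification_challenge(acct: Account) -> None:
    # Hypothetical stand-in for a CAPTCHA or SMS check.
    print(f"@{acct.handle}: you're acting a little strange -- please verify you're a human being.")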
It's broader than that. Firstly, it goes to the point of some of the wider questions the committee is asking about: what would you do with that information, and what's the response required? Secondly, the likelihood of users gaming it to try and abusively remove people they disagree with.

They're already gaming you.

This is one we're very conscious of.

You've clearly not got the manpower with 3,700 people. That's clearly the case. I do appreciate that. Why don't you allow more of your community to flag up these tweets which contain misinformation?

I just want to clarify this. We're currently removing 6.4 million accounts every week for breaking our rules, specifically around automation. That's a huge number of accounts. Now, are you asking us to remove content that's untrue?

No, I'm saying this is an area you could potentially explore, and I understood you were exploring this particular area: to allow other users, when they see quite clearly a piece of misinformation, much of which is designed to damage political processes within our own country and its stability, to flag that as a warning to other users. It's not something you would consider?

I think through the whole sweep of your pretty incredible hearing today, you're hearing from a lot of different voices who are trying to approach the issue from different ways. Where there's a piece of information that's against the law, we can action that quickly. During the 2016 election in the U.S., near the end, there were a series of tweets spreading the idea that you could text to vote or tweet to vote. It's a standard effort to mislead people, a form of voter suppression, which is as old as electoral politics but has moved into the 21st century. That's against the law in the U.S. We were able to take that down extremely quickly. And the truth reached eight times more Twitter users than the initial falsehood. But when it comes to plainly false information, the conversation that happens on Twitter at any given moment is a wonderful democratic debate. We are very cognizant of the things we can control and the things we cannot. We're trying to address malicious automation, the kind of thing that can take a bad idea and spread it quickly. We're trying to, as with a lot of the things Monika mentioned in the last panel, elevate voices that are credible. And then give our users more sense of understanding of who's talking, a broader view of who is actually speaking.

Okay. In terms of outside help, if you like, considering your relatively small workforce and the need to better understand your users' information, what about what was mentioned by my colleague earlier in terms of academic oversight, in terms of allowing them to see the information? I don't mean one or two academics, but a much more open, much more transparent means by which we can see effectively what is being done and what can be done, to the maximum.

I think that's absolutely true. We were obviously at that conference in California last week with academics discussing this problem. We're currently hiring for two roles at our headquarters whose job will be specifically to ensure those conversations are happening. One thing we're very fortunate about is you've already seen a lot of information academics have produced based on Twitter data. Our API is open. Researchers already access it every day. There's a huge amount of research happening right now on Twitter data. So we're absolutely committed to deepening that relationship and learning more.
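The open API access for researchers mentioned above can be illustrated with a short sketch. It queries the standard search endpoint that Twitter's REST API v1.1 exposed at the time of this hearing (the endpoint has since been retired); the bearer token is a placeholder you would obtain by registering an application with Twitter.

import requests

BEARER_TOKEN = "YOUR-APP-BEARER-TOKEN"  # placeholder credential
SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"

def search_recent_tweets(query: str, count: int = 100) -> list:
    # Fetch recent public tweets matching a query, the kind of data
    # researchers pulled freely from the platform every day.
    resp = requests.get(
        SEARCH_URL,
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={"q": query, "count": count, "result_type": "recent"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("statuses", [])

if __name__ == "__main__":
    for tweet in search_recent_tweets("fake news", count=10):
        print(tweet["user"]["screen_name"], tweet["text"][:80])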
But I think it's important to recognize Twitter data is arguably the biggest source of academic data. Only last week we expanded the amount of information available, increasing the ability to search for any tweet ever posted, rather than a fixed time period.

Based on your statement at the beginning, it seems these academics are probably more effective at finding problematic content than Twitter itself.

We spoke about misinformation. If we're looking at how misinformation spreads and what kind of content spreads: Twitter is 280 characters, so often the content might be a link to a newspaper, a link to a video. Some of the information may be off-platform. We can absolutely learn how those networks are working. It also informs the solution. One of the big challenges, and there's a lot of research done on this, is that some of the solutions proposed to educate users have actually had negative effects. They've created false trust where it wasn't warranted, or created more hyperpartisan feedback loops where people see their own content, worried it will be removed, so they share it faster. There's quite a lot of research going on there. We want to learn from that research to inform our approach, not just about the behavior, but to improve the quality of debate.

If I could add on top of that: last year Twitter offboarded certain advertisers and devoted the money they had spent on the platform globally towards exactly the kind of research Nick talks about. We are unique among the big platforms in the amount of information we're giving freely to the world. We know we can do better, and this is part of the answer.

Thank you. Julie Elliott.

Thank you, chair. You've said about these 6.4 million accounts you take down, how many accounts do you have at the moment?

330 million.

So it's a really tiny number, isn't it, in comparison to how many accounts you have.

Your colleagues on the Science and Technology Committee thought it was a rather large number. Just to clarify, those 6.4 million, we challenge them. It's all a question of certainty, right? The more certain we are that you're violating our terms of service, the more aggressive our actions are. Sometimes people in a moment of pique or excitement do things that are unusual. They tweet an unusual number of times. What we'll do is send them a note saying, you're acting a little strange, can you verify you're a human being? If you're really into Twitter, you will pass that test. But if you're a bot farm, it's too expensive a thing to do.

So you've got all of these millions and millions of accounts. The Twitter handles people have often bear no relation to who they are, who is following you, who is tweeting you. What are you doing to try and identify who people actually are?

There are two sides to that. One is verification, which is an important tool we've used for a long time. We're taking a hard look at that and trying to figure out ways we can give our users more context about who the credible voices are on Twitter. As part of our effort to get ready for the 2018 elections in the U.S., we are working with the major parties to verify a larger number of candidates, as a hedge against impersonation. Impersonation is against our rules. You can't say "I am an MP from Manchester" if you're not. So that's against our rules. However, we do honor and respect certain voices that have to speak anonymously. It's an important part of democratic debate. It's an important part of satire accounts.
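The earlier point about learning how these networks work, and the claim that a correction reached eight times more Twitter users than the initial falsehood, both come down to measuring the reach of a retweet cascade. A minimal sketch of that measurement follows; the record format (screen name, follower count) and the sample numbers are invented for illustration.

def cascade_reach(posts):
    # Upper-bound audience of a cascade: total followers of the distinct
    # accounts that tweeted or retweeted the message.
    seen = {}
    for screen_name, followers in posts:
        seen[screen_name] = followers  # count each account only once
    return sum(seen.values())

# Hypothetical cascades: (screen_name, follower_count) per tweet or retweet.
falsehood  = [("bot_a", 120), ("bot_b", 90), ("real_user", 4000)]
correction = [("newsroom", 25000), ("real_user", 4000), ("mp_office", 8000)]

ratio = cascade_reach(correction) / cascade_reach(falsehood)
print(f"correction reached {ratio:.1f}x the falsehood's audience")  # roughly 8.8x here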
It's important to note that in various places in the world, there are freedom fighters, there are Christians in China, there are people in the Middle East who are combatting ISIS, who are trying to promote counter-narratives against radicalization, and if they didn't do so anonymously, their lives would be in danger. So we try to respect that. It is a challenge, and it's a complicated one, but again, as I said, we're taking a hard look at how we give people more context about who's speaking while also giving voice to those who are putting their lives in danger to communicate.

So one of the things we are struggling with in the UK at the moment is the very abusive tweets coming to people in public life, particularly women in public life. We've had debates in Parliament on this issue in recent weeks and months. Every day as a woman MP, there are abusive tweets coming at me on one thing or another. If I speak on certain issues, you think, right, let's wait for the cyberattack that's coming. You never see who these people are. There might be a random name with numbers. There's nothing in their descriptor that says who they are. If you take it to the police, the police can't track down who they are. So it goes on. There's usually a huge level of not just attacking what they're saying but disinformation as well. These things keep getting retweeted and spiral and spiral. What, as an organization, are you doing to try and stop that kind of thing happening?

I can maybe pick up on the Committee on Standards in Public Life report. I met with the committee twice to discuss their work. I'm a former candidate myself, so this is something I have a very personal interest in, and I have many friends who have been in the electoral process. Firstly, when I joined the company, our safety approach was in a very different place. I joined in 2014, just after the abuse we saw at that time, and since then our approach has massively improved. We have developed a lot of technology. We've strengthened our policies. We've built tools on the platform to make reporting much easier. When I joined, I think it took ten clicks to report content. It's down to three or four. We've made it easier for users. We allow users to report other content, not just content directed at themselves. And to give you one idea of the impact we think we're having, and I can share the detailed breakdown, we now take action on roughly ten times as many accounts every day as we did last year. So in one year, we've been able to massively increase that number.

You may have also seen we made the decision in December to expand our policy on violent extremist groups, which led to Britain First's accounts being removed from our platform, because we felt that was more we could do. Something we're also doing, which is relatively new, is the "penalty box": when someone crosses the line, how can we change their behavior to stop them crossing the line again? By locking their account and saying, this specific tweet has broken our rules, you must delete it before you come back to the platform. You must also give a phone number. Then they can come back on Twitter. We're seeing, and I'm just double-checking the number, of the accounts that go through that process, about 65% only go through it once. So we think we're changing their behavior. It's not to say by any means this is a done issue. Safety is still the company's top priority. Jack Dorsey, our CEO, spoke about it being our top priority, but we think we're making progress and we're encouraged by the results.
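The "penalty box" flow described above, lock the account over a specific tweet, require the tweet to be deleted and a phone number supplied before access is restored, then watch whether the user reoffends, can be sketched as a small state machine. This is a hypothetical reconstruction, not Twitter's code; the class, its fields, and the bookkeeping behind the 65% figure are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class PenaltyBox:
    locked_for_tweet: dict = field(default_factory=dict)  # handle -> offending tweet id
    trips: dict = field(default_factory=dict)             # handle -> times penalized

    def lock(self, handle: str, tweet_id: str) -> None:
        # Lock the account over one specific rule-breaking tweet.
        self.locked_for_tweet[handle] = tweet_id
        self.trips[handle] = self.trips.get(handle, 0) + 1

    def unlock(self, handle: str, deleted_tweet: bool, gave_phone_number: bool) -> bool:
        # Restore access only once the tweet is deleted and a phone number is on file.
        if handle in self.locked_for_tweet and deleted_tweet and gave_phone_number:
            del self.locked_for_tweet[handle]
            return True
        return False

    def first_time_rate(self) -> float:
        # Share of penalized accounts that only went through the process once;
        # the figure cited in the testimony was about 65%.
        if not self.trips:
            return 0.0
        return sum(1 for n in self.trips.values() if n == 1) / len(self.trips)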
Okay. Thank you. I think Simon has a quick question.

On that point, I'm very encouraged by the shift that you've been able to achieve between 2014 and 2017. But the case I want to bring to your attention, because it demonstrates the problem: my former colleague Byron Davies was subject to a five-week campaign based on an accusation of a criminal act with which neither he nor his wife had ever been involved. Despite five weeks of efforts to persuade Facebook and Twitter to do something about it, he was told that nothing could be done. Are you saying on the record now that couldn't happen again?

No, I think it's important to say the story being circulated was published by a British regulated publication that's a member of IPSO. The idea this was an unfounded allegation that only lived on social media...

I'm not interested in what came up in print media. I'm asking you whether that clearly, provably untrue statement, which arguably could have cost him his seat... Are you offering us a guarantee here that that sort of thing couldn't happen again?

We are not going to remove content based on the fact it's untrue. The one strength Twitter has is it's a hive of journalists, of citizens, of activists correcting the record and correcting information.

If we could have this same conversation about the £350 million... I must be quick because I'm not scheduled. The fact is it qualifies to a great extent as abuse and/or intimidation. And it's untrue. But you're saying, well, no, there's nothing we're going to do about that.

So context is important. If an account was created for the sole purpose of abusing somebody, that would cross our rules. If it was using language based on hateful conduct, we'd remove it for that. The truthfulness of a piece of information is not a ground...

So the deliberate use of your platform to distribute absolutely false and defamatory information can continue in election time, according to what you've just said.

I want to cover more than that. I don't think technology companies should be deciding what is true and what is not true, which is what you're asking us to do. I think that's a very, very important principle we should recognize. During the Brexit referendum, we've heard there were similar claims made on both sides.

So you are expecting a different level of regulation than if that had been a public broadcast or a newspaper, where legal action could have been taken. You say you shouldn't be regulated in the same way as they are.

As I said, the information did originate in a currently regulated member of IPSO. So on the idea that this is a distinction: we are not going to remove information based on truth, because I don't think that's the role of technology companies. The idea we're not regulated, I think, is, as you've heard from previous witnesses, not an accurate one.

So, just to be clear for all the bad actors who are listening in: if you set up a false account under an anonymous identity, you can disseminate as many lies as you like and it's not a breach of community guidelines.

No, I didn't say that.

No, I'm just trying to understand it.

I said clearly that we take context into account. If you create an account deliberately to target an individual in a harassing, abusive way, we have rules against that.

What you're talking about there is harassment, abuse. That's clear. What we're talking about here is lies. Someone who's deciding to spread lies about someone else. They're not harassing them, not intimidating, not inciting violence, just spreading lies.
They're using the anonymity of Twitter to do that. There's basically nothing you will do about it.

The anonymity on our platform is not a shield against breaking our terms of service.

What I'm trying to get to is that this isn't a breach of the terms of service.

It's conflating a number of things. I think Monika earlier, and Juniper, both mentioned that fake news is an overbroad category, and that labeling something as not true doesn't necessarily mean the receiver of that information will discount it. What we're focused on doing is attacking the things we can control: if something's illegal, it's against our terms of service. If it's telling people to tweet to vote, that's against our rules. We'll take that down. Elevating credible voices is also incredibly important, and figuring out, the BBC, the Guardian, what other voices people trust.

Just to finish on this point, I know people want to come in. Telling lies on Twitter isn't a breach of community guidelines and wouldn't require action to be taken against the account. That's what you're saying, isn't it? If that's the only ground?

We do not have rules based on truth.

So, and obviously people can be anonymous to protect their identity, someone can use a false identity to spread lies about someone else. That could be reported to Twitter. These could be demonstrable lies. It's not a matter of opinion, but things that are demonstrably untrue. And that wouldn't in and of itself require you to take any action?

I don't want to sound like a broken record, but the context would be important.

I understand what you said. On this particular point: just lies, not inciting violence, not a campaign of harassment, but just telling lies, spreading lies, is not a
