Next, a look at how political campaigns can fight disinformation on social media. This is an hour and a half. All right, everyone, we'll get started. Thank you so much for joining us today. We do original policy work in collaboration with our faculty and students, and we spend a lot of time thinking about how we train lawyers and policymakers to better understand technology in ways that benefit our society. Today's symposium is Election Integrity in the Networked Information Era. Technology is reshaping many facets of our society, including changing the venues and tone of information sharing and of our public conversations. Understanding the effect these changes have on our elections is a priority of the highest order, since elections are foundational; in this area above all others, we need public trust. There is no question that public trust in democratic institutions is being challenged in seismic ways today. That is to say, we have a lot to talk about. You will hear from a wide variety of people thinking about these questions from a variety of disciplines. That match-up is purely intentional and reflects our approach here at Georgetown, where we strive to break out of traditional academic silos. We are approaching these questions through four different lenses in our program today: questions of information sharing and public discourse; the significant risk of voter suppression and its effect on turnout; election security; and lastly a specific conversation on policy solutions. We'll have student presentations and posters in the back of the room reflecting the work our students have been doing in this area. We are grateful this year to our editor-in-chief and the full journal staff for their work pulling this symposium together. So with that, I am going to pass it over to Josh to welcome you, and we'll start the program. Thank you. [ applause ] Good morning, everyone.
My name is Josh Banker. Welcome to today's symposium. Thank you all for coming, and thank you to our wonderful guests for showing up today. I want to extend a quick thanks to the Georgetown faculty; without your support, none of this would be possible. I also want to say how proud I am of our journal here at Georgetown Law. We are a student-run law review focused on the intersections of law and technology, on the hardest questions faced by legal scholars, technologists, and policymakers. We have covered online manipulation, the regulation of global health data, platform speech regulation, deep fakes and their impact on the news, robot corporate board members, the Committee on Foreign Investment in the United States, and the ways different technologies impact our daily lives. During our four years, we have established ourselves as a place of ideas. Today's event fits neatly into that tradition. We are incredibly excited for the day ahead, between the wonderful speakers that we have and the students who are here to present their posters. We have an event that's going to speak to the issues that we have seen throughout the past week. I look forward to hearing from everyone today. I know the journal is helping to bring these pieces forward to publication, and I encourage you all to look for that. Thank you all for coming. Without further ado, I would like to turn it over to Professor Carroll to start us off now. [ applause ] Thanks, Josh, and thank you all for being here today. We are grateful. I am glad to have this panel on networked information ecologies be the first of the day. I see it as establishing the terrain, the landscape, that underlies what we'll be talking about the rest of the day. I want to present a warning along those lines: the terrain is rough. Our panelist Whitney Phillips has a piece entitled "The Internet Is a Toxic Hellscape." The subhead on the essay was "but we can fix it."
I don't want to overpromise, but in addition to spending some time surveying the hellscape, we'll be offering some metaphors and possibilities, maybe ways of thinking about what a less hellish landscape might mean for us. You have four impressive people here in front of you. What I would like to do in introducing them, rather than just reciting their bios, is tell you why I am so grateful that each of them is able to join us today. I am going to ask them to say all the good things they have to say in about 12 minutes apiece, then I am going to ask a question or two, and then open it up to all of you for questions for about 20 minutes. Feel free to jot things down or think about them as we are speaking. First, to our left, we have Mike Ananny. He's been thinking about the networked public sphere. What I loved about Mike's work when I assigned it is the way he pushes us to think about what we want our relationships to look like, what we want our democracy to look like, what we want our public to look like, and what we want our press to look like, and to consider the preconditions for us to get there. Then we have Leticia Bode, a professor in the Communication, Culture and Technology program. Leticia has done really interesting work on how we get, and correct, information online. We have Whitney Phillips, an associate professor. Whitney has been thinking about information disorder, including polluted information, for many years, and has done vital work in getting journalists to see how they can contribute to pollution and giving them practical tools to address it. I find her use of ecological metaphors powerful in helping me think these questions through. Finally, we have Lam Vo, a senior reporter. Lam has made it her job to understand how the social web works and how to help readers and other journalists understand it better. She's a leader in using data in journalism, and she wields her power to make journalism more impactful, diverse, and responsive to its audience.
With that, I want to hand it over to Mike to get us started. [ applause ] Thank you very much. That was the kindest and most generous introduction I have ever had. I appreciate it. Thank you all for being here. It is a total treat. What I want to do today, as somebody who comes to these questions as a communication scholar, somebody who never practiced as a journalist but thinks about what the press is, is tell you the story of a study I did on the intersection between platforms and journalists, and offer three ways to think about this terrain that Erin describes, three ways of thinking about what platforms do to news. I am doing a little bit of a take on Ian Hacking's idea of "making up people" and his arguments. A lot of the ways we describe people, think about who they are, and put them into categories and boxes are acts of making them up, of constructing them, of choosing which parts of them we pay attention to. What I want to argue is that a lot of the language of contemporary platforms is making up users, determining what kind of political action they get to do. I want to unpack that a little bit. I come to this with two questions. First, what assumptions do platforms make about speech when they cast people as users? They're trying to convert people, in a way, to think about them as users of a system and not necessarily as political beings. And second, how could the assumptions that platforms are making become sites of governance? In this moment, when we are struggling with what it means to govern platforms, it matters that we understand the pressure points and opportunities for governance that may exist. Drawing on work in science and technology studies, I try to think about whether the concept of infrastructure can be something we deploy toward platform governance.
If we see platforms as infrastructure, what opportunities might that give us to regulate platforms in different ways and to trace the public dimensions, or tensions, of platforms? Infrastructure is the concept I want to deploy. What do I mean by infrastructure? The scholarship describes infrastructure in a few different ways. One: infrastructure is largely invisible. It works best when it is in the background and we don't notice it. Two: it is the condition for shared meaning, all the rules and assumptions and values and language that go into figuring out how we think and assume together, how we know how something is going to behave. When the system works, it stays invisible; we don't see how the infrastructure is configured or deployed. Three: infrastructures build on what scholars call an installed base. You should not be questioning things, you should not be revisiting assumptions; you may think about the act of maintaining infrastructure, but you don't question it. Finally, infrastructures entail boundary work. They are moments when professionals of different backgrounds, ideologies, and languages come together to sustain something. In a lot of ways, our platforms meet these criteria. We don't notice them until they break down, and we don't know what it means when they are working. I want to offer that as a way to think about this. To get more grounded and concrete, I want to offer you the story of a study that I did. It was focused on the fact-checking network that Facebook had with five U.S. news organizations. It has changed a lot, if you have been following this; I am not talking about the current configuration. This is a snapshot in time of a particular arrangement between a platform and news organizations. What I want to do is use this as infrastructure to think with, about what kinds of regulatory opportunities may exist. So, just to give you a little bit of background: this partnership was formed immediately after the 2016 U.S.
presidential election. You probably are aware that Facebook was under pressure to do something about this phenomenon of fake news. Whether it worked continues to be in question, but what I want to drill down and focus on is a particular workflow that the partnership established. That workflow centered on a dashboard. You want to study the thing that seems not interesting; dashboards are fascinating even though they look boring. Fact-checkers would go to that dashboard and pick stories off of it that they wanted to work on. They would do their fact-checking work and put it back into the system, and Facebook would suck it up, learn some patterns, do some magic that they would not talk about, and that would impact how speech circulated on their platform. This dashboard was a place where Facebook engineers and journalists, acting as de facto free-speech regulators, all had to figure out: how should we work together when we think about the circulation of speech on a platform? What I want to think about is how this is an infrastructure of free speech, how this arrangement between organizations works through this dashboard, whether it works or not. It's a dashboard that probably none of us has seen. It is largely invisible but incredibly powerful for shaping how speech circulates. What assumptions does this infrastructure make? Are those assumptions something that we can use as objects of governance? I want to offer three potential objects of focus for us, to think about what it would mean to regulate a partnership like that, one that was using an infrastructure like that, one that had that kind of impact. I will go through them, and in Q&A we can talk more about them. First: assumptions about people, and the informational idea of citizenship. For Facebook and the fact-checkers, the problem was information, things that were wrong. Facebook talked about community coming through truthful reporting.
It is very much an information-centered model of what good public life looks like. Fact-checkers disagreed about whether to sequester the information, whether it should be put in a separate area where people could still see it, whether you should juxtapose claims with counterclaims. There was almost no support for removing the information entirely. When I did interviews with fact-checkers, they said: yeah, that's too far. We'll make a strong determination of what is false or true, but it is not for us to decide that something should disappear. Even among people heavily invested in making strong claims, there was strong resistance, and an acknowledgment that people share things because they're passionate about them, because they're invested. The second thing I want to talk about is categories. Categories, and the stability of categories, are the foundation of this infrastructure; they made it work. First, Facebook essentially outsourced the construction of true and false to a set of professionals, these fact-checkers, so it could point and say: that's where the determination of truth or falsity lies. We don't want fact-checkers to second-guess themselves; we want them to be strong. Second, Facebook defines popularity. In the dashboard, one of the columns was popularity. This was a highly contentious column. Fact-checkers, first of all, said: I have no idea what it means. We asked Facebook what popularity means; we don't know the mechanism that determines it. We don't see conservative media or Infowars in here. Some stories were flagged as popular, fact-checkers surmised, maybe because false or fake news makes money. They wanted Facebook to define popular for them, to help them organize their work. One of them said: you don't want to fact-check something that has not gone viral, because you don't want to elevate its visibility. And Facebook's own words were changeable; they talked about false news and misinformation quite interchangeably.
Last, I want to talk about how this infrastructure thought about governing itself, how it thought about the circulation of speech. Probability seems to be the logic, the way of dealing with scale, that was consistently engaged. A Facebook manager said: when partners rate something as false, we rank those stories lower on News Feed, reducing their future views by 80 percent. This number, 80 percent, came up a lot. Fact-checkers, surprisingly, said: I don't know how that number is calculated. We have no proof of that. I can't fact-check that claim, a claim that their own partnership produced, and that's a problem. Facebook has said, here and in other places, that it doesn't want to be an arbiter of truth. The last point I want to make on this probability theme is that Facebook is not alone in this phenomenon. YouTube, Twitter, and Amazon are all engaging with it, making somewhat arbitrary claims about whether the network should work 80 percent of the time or 90 percent of the time, or in what context. There is a struggle for a way of dealing with scale that engages with probability. Okay, to finish, I want to say: why does this matter? I want to come back to Hacking for a minute. In Hacking's work there is a debate among computer scientists and philosophers in the '80s and '90s that can be revived: be careful of the idea of probability. So what I have tried to do, quickly, is to say that if we see platforms and journalists as creating speech infrastructures, we can start to see some dimensions coming out of them, and maybe those dimensions can be ways of thinking about regulation and opportunities going forward. Okay, I will stop there. Thank you. [ applause ] I apologize for spilling coffee on you. I am from the other campus; I have been here for eight years, and this is the first time I have been to the law school. Thanks for getting me downtown. I am going to say a lot of what will echo what Mike already said, but in a much less eloquent, big-picture kind of way, so you can look forward to that.
It does suggest that we are on the right panel, and maybe we need to be working together on some of these ideas, too. That's great for me. Specifically, I am going to talk about what platforms are doing right now, how people are thinking about what platforms are doing right now, and, based on my own research, an alternative, a supplement, that can help in combating misinformation on social media: user correction. So, the problem that we're talking about today is that the internet is terrible. The specific element of that I'm going to talk about is that there's a lot of misinformation on social media. I don't think I have to make this argument too strongly; I think people basically agree with me on that. Although, a caveat: some of the recent studies that have looked at how much misinformation sharing happens on social media suggest there's probably less than you think there is. Keep that in mind. The best guess based on empirical research is that about 8.5 percent of users have shared misinformation. Still, I think there is a lot to be done about this. A lot of the proposals that I've seen have been technology-based. These are technology platforms; we think the problem is technological, the scale is technological. We've had misinformation forever, but this is spreading farther and faster because of the nature of the technology it's spreading on. What I'm going to argue is that there are ethical problems and empirical problems with doing that. This is uncomfortable for me, because I almost never say the word "argue" when I'm presenting; I'm a very quantitative researcher. Mike has already talked about it, and others have too: maybe we don't want technology firms to decide what is true and what is false. This is the first big problem. I'm going to use an example, which is that platforms have been taking a lot of action on misinformation related to vaccines.
Here you see, on the far side, Pinterest; they were kind of the first movers on this issue. If you search for vaccines on Pinterest, you don't get pins from users. You only get pins from certified health organizations. So here you see the W.H.O. and the AAP, the American Academy of Pediatrics. Twitter and Facebook followed suit on this. If you search on Twitter, the one on the top, you get a recommendation to go to Health and Human Services. If you search on Facebook, you get a recommendation to go to cdc.gov. Interestingly, all the platforms are recommending different organizations, but that's a completely different thing to talk about. I think people think of this example as being the way that platforms can do this, right? It's easy enough to identify where there's lots of misinformation and then shut that down for users who are seeking out information. And this ties back to what Mike was saying: what do we want users to be doing? If they're seeking information, that's a particular behavior we want to encourage and reward with good information, not bad information. So the idea that we can box off all the crap people share about vaccines on social media and not show it to people, whether that's by downgrading it in an algorithm or not showing it in search results or whatever, seems really appealing. The problem with this is that vaccines are really, really the exception to the rule when it comes to types of misinformation. This is from an article that just came out yesterday, so check out my Twitter if you want to see it, that I published with my coauthor, Emily Vraga of the University of Minnesota, about trying to think through different types of misinformation. What we're saying is that there are different criteria we use to define what is true and what is false, to think about what misinformation is. And we kind of do this without thinking about it.
The first criterion is how much evidence is brought to bear on a subject. Vaccines are a really good example of this: there's lots of evidence. We have 100 years of evidence of the efficacy of vaccines and their safety and all those kinds of things. The second is the quality of the expertise brought to bear. These are medical doctors doing these studies; we have a lot of confidence in them. All of the organizations that I've just mentioned, the CDC, the HHS, all of those, actually poll really well with the public. The public trusts the CDC across party lines, even; 80 percent of people say they trust the CDC. The same is true for organizations like the W.H.O. So on vaccines, we have evidence and expertise that we mostly agree on as a society. That is the settled tiny triangle at the top; that's the vaccines example. All other misinformation falls somewhere lower than that. Most research, even health research, and I mostly study health misinformation rather than political misinformation, falls somewhere below that settled line. It's more emergent. We have a study that says this, but there's also a study that says that, but maybe this study is better, but this one has a smaller sample size. So it's really complicated to synthesize all the information that we have about a particular topic. On some issues, we can't even agree on who is an expert. A really good example of that is climate change. You will regularly see these kinds of manifestos that are signed by a thousand scientists or something like that. But when you look at it, it's mostly oil and gas scientists or something like that. No disrespect, I'm from Houston, I get it. But maybe that's not the kind of expert we think should be weighing in on the issue of climate change. And then on the controversial side of things is basically all of the stuff that Mike was talking about. All political misinformation is basically going to be controversial. The evidence is less clear as a general rule. The experts are less clear as a general rule.
So it gets really complicated in a hurry. The vast majority of the information that we're dealing with in this ecology is not straightforward, which means it's not straightforward to deal with. That leads me to the second problem: it's not just an ethical problem that maybe we don't want platforms making this decision; they probably can't do a very good job of it technologically anyway. As an example, instead of talking about this in the abstract, think about WhatsApp, which is already a complicated platform because of encryption, which is a whole separate thing to talk about. There are 60 billion texts posted on WhatsApp every day. 60 billion. I'm just going to sit on that number for a second. 60 billion texts per day. Based on the study I mentioned earlier, our best guess is that 8.5 percent of that might be misinformation. That's actually probably high, because the study I'm talking about said 8.5 percent of users have shared misinformation, not that 8.5 percent of the content they share is misinformation, but I'm being generous here, so let's say 8.5 percent of it is wrong. Now let's say that somehow, and this is very unlikely because classifying is hard, machine learning is still a work in progress, and, as we just spent a long time discussing, misinformation is hard to define in itself, but let's say platforms can identify misinformation with 99.9 percent accuracy. Again, very unlikely; you never see these numbers, especially for something as complicated as misinformation. That still means millions of pieces of misinformation, on the order of 5 million a day, are going to get through, even at that incredible success rate. So I think you might agree that this is not good enough by itself. If our concern is that misinformation exists on a platform, then this is not going to solve that problem. Now, maybe that's not our goal, and I think it was really good how Mike was pushing us to think about what our goals for these platforms are.
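The scale argument above is simple arithmetic and can be checked directly. A minimal back-of-the-envelope sketch, where every input is one of the talk's stated assumptions (60 billion messages a day, a generous 8.5 percent misinformation share, a hypothetical 99.9 percent classifier), not a measured value:

```python
# Back-of-the-envelope check of the scale argument.
# All inputs are assumptions quoted in the talk, not measurements.
messages_per_day = 60e9   # WhatsApp texts per day, as cited
misinfo_share = 0.085     # generous upper bound: 8.5% of content is misinformation
accuracy = 0.999          # hypothetical 99.9% classifier accuracy

misinfo_per_day = messages_per_day * misinfo_share   # total misinformation items per day
missed_per_day = misinfo_per_day * (1 - accuracy)    # items that slip through the filter

print(f"{missed_per_day:,.0f} pieces of misinformation get through per day")
# Even a 99.9%-accurate filter leaves misinformation on the order of
# millions of items per day at this volume.
```

Dropping the hypothetical accuracy to 99 percent raises the daily residue tenfold, to roughly 51 million items, which is the point of the argument: at this scale, detection alone cannot be the whole answer.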
But I think this is the way it's been framed for a lot of these platforms, and that's not going to do the trick. I'm offering a people-powered solution, which is also incomplete, I want to be 100 percent clear about that. But I think it can achieve some of the goals we have in reducing misinformation, and particularly in reducing misperceptions, which is belief in misinformation. If there's lots of misinformation out there and nobody believes it, I don't care that much about it. If there's misinformation out there and it makes people change their minds about things, that's a different thing entirely. My research is focused on reducing misperceptions on social media platforms. This is an example of the kind of experimental stimulus that we use to show people. We've done this on Twitter, we've done this on Facebook, we've done this on a video platform, and someone else has now replicated it on WhatsApp. So it seems to translate to different platforms pretty well, which is important. Basically, we show people misinformation. The top story that you see here, posted by Tyler Johnson, has a caption along the lines of: check out this story, Zika caused by GMO mosquitoes. This was fielded at the height of the Zika epidemic, when people were worried about it. This is pretty clearly misinformation. We know that GM mosquitoes did not create Zika; it existed before we had genetically modified mosquitoes, which are a thing, but they're a thing meant to reduce malaria. This was a common misperception, particularly in the global south, where Zika is most prominent. So we tested that as a rumor, and we had someone, Drew Miller, reply saying the science shows there is no relationship between GM mosquitoes and the spread of Zika, with a link to the CDC, which is a highly trusted health organization.
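The logic of this stimulus, that a correction's value lies with the onlookers rather than with the original poster, reduces to simple reach arithmetic. A minimal sketch: the follower averages and the 10 to 20 percent misperception reduction are figures the talk cites, while the feed-visibility rate is a purely hypothetical placeholder, since feeds do not surface every reply:

```python
# Sketch of observational-correction reach. Figures are averages quoted
# in the talk, except visibility_rate, which is purely hypothetical.
followers = 700                  # average Facebook follower count cited in the talk
visibility_rate = 0.10           # hypothetical: fraction of followers whose feed shows the thread
reduction_range = (0.10, 0.20)   # misperception reduction range reported in the studies

observers = followers * visibility_rate  # onlookers who see the correction happen
print(f"~{observers:.0f} observers see Drew Miller correct Tyler Johnson")
print(f"expected misperception reduction among them: "
      f"{reduction_range[0]:.0%} to {reduction_range[1]:.0%}")
```

The same arithmetic with Twitter's 300-follower average gives about 30 observers per correction; the design point is that one reply is witnessed by many, not that any single poster changes his mind.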
We've done this on different platforms, and we've done it with different issues: Zika, genetically modified foods and cancer, raw milk, people are weird about raw milk misinformation, and all sorts of other issues that I'm happy to talk more about. And it seems to pretty consistently reduce misperceptions, something between 10 and 20 percent. This isn't going to change the world, and it's not going to eliminate misperceptions, but it is an important way for people to engage in this sphere. We think it's extra important when you consider the networked nature of social media. The feedback we get on this is: we don't want to be Drew Miller correcting Tyler Johnson, because Tyler Johnson will get mad at you and it will be a whole thing, and you don't want to get in a fight on the internet. But when Drew Miller corrects Tyler Johnson, it's not really about correcting Tyler Johnson. He probably will get angry and get defensive and may not even change his mind about the original issue. The point is that the average Facebook user has 700 followers. So 700 people, and again, not all of them see this because of the algorithm, but 700 people have the ability to see this correction happening in real time. Same thing on Twitter: the average person on Twitter has 300 followers. So the ability for this to spread and be seen by lots of people is really powerful. We think it's scalable in that way, and it takes the onus off of the platforms in a way that I think is ethically beneficial, not leaving it to them to decide what is true and what is good evidence and all those sorts of things. I'll stop there. [ applause ] Hi. I'm so happy to be here. Thank you for inviting me, and thank you for listening and being willing to travel through the rain. Today I'm going to be focusing on the idea that light disinfects, an assertion often attributed to Supreme Court Justice Louis Brandeis in 1913.
Although he was speaking about financial crimes, his argument that "sunlight is said to be the best of disinfectants" has become an assumption when talking about harms like bigotry. "Light disinfects" is hands down the most common assertion in my work focused on the rise of white nationalism and white supremacy online, and on disinformation more broadly. But just because so many people say the same words doesn't mean they're making the same arguments. "Light disinfects" follows two tracks, with two separate sets of assumptions and stakes. How I came to this conversation is maybe something for Q&A, but I was prompted by a question I didn't know how to answer. I thought about it for many months, and this is the result of that thinking and fretting, a lot of fretting, everybody. The light of liberalism and the light of social justice are two different things. Without breaking those differences down, it is in turn very difficult to dive into focused analysis and critique of either. And we need to be able to critique the effects of light, or at the very least challenge our assumptions about it. Light can be good, and it can disinfect, in some cases, with some people. Light can also set worse things into place. Light can also be the stuff of nightmares. Exploring this tension, and the role digital tools play in deepening it, helps dispense with the idea that we can take our light to the bank. All we can take to the bank is refraction. So first: the light of liberalism is rooted in the Enlightenment and borrows many of the Enlightenment's visual and symbolic motifs, which themselves borrow from the visual and symbolic motifs of Christianity, particularly Catholicism. Light as spiritual corrective is a conversation all to itself. These images and motifs include mirrors showing things as they are, blazing suns, bright horizons, all symbolizing truth with a capital T.
This light tends to be aimed at the bad action itself, the hate, the abuse, and the people who perpetuate it. It disinfects, at least this is the assumption, by filtering those harms through the marketplace of ideas, in the process exposing hate and falsehood for what they are: a dark cloud of ignorance. Although the light of liberalism has decidedly pro-social aims, the responsibility for bearing all that light, and for responding responsibly to the lights of others, tends to fall to individual citizens. These individual truths will out, and in outing will disinfect, and in disinfecting will preserve our personal freedoms from external restriction, namely the need for censorship, which is another conversation unto itself. The second meaning and implicit argument of "light disinfects" aligns with social justice activism and tends to focus on those affected by hate. These images include a quote from Ida B. Wells, the journalist and anti-lynching activist, which is featured in the pages of the museum book for the Legacy Museum in Montgomery, Alabama, a museum that focuses on the history of lynching and the legacy of slavery in the United States. This light disinfects by inviting others to bear witness to the affected party's first-person, subjective experiences of pain. In seeing these effects, a collective process of truth and reconciliation can begin. It's worth noting that the light of social justice does not presume that the marketplace of ideas will filter the best ideas to the top. Instead, it implicitly concedes that the most resonant and popular ideas are often the ones most in need of challenging, for example the widespread prevalence and acceptance of lynching in the post-Reconstruction South. That was very popular.
Another divergence from the light of liberalism is that rather than foregrounding the individuals composing society, the light of social justice foregrounds the society comprising those individuals, a society whose unjust norms, structures, and hierarchies must change if there is any hope for individual citizens' actions to change anything. The light of social justice thus hails the collective we, not the atomistic me. It is worth pursuing for the good of the collective, rather than for negative freedom from external restriction. Both kinds of light pose challenges in offline contexts, and here is a very quick snapshot. The light of liberalism assumes that the marketplace of ideas is a rational process. It is not. It assumes people make decisions solely because of facts. We don't. It assumes, most basically, that it works, but history tells a more complicated story. While it might disinfect for some, overall it can amplify, and has amplified, the spread and proliferation of hate. Same with the light of social justice; there are also challenges there. The "we" who looks can be deeply fractured, and this fracture can produce deeply unpredictable effects. Not everyone responds to even the most righteous light equally. The emergency claim, the idea that this thing, whatever it is, is terrible and cannot be allowed to persist, can fail. And when the emergency claim fails, the harms it shines on can be normalized or, worse, the failure can harden the ideologies of those doing the harming. In other words, it can backfire. The idea of an emergency claim, and what happens when it fails, is a concept from visual culture scholar Ariella Azoulay. Light can also be dehumanizing, fetishizing, and normalizing of the hegemonic gaze, an injustice in its own right. The takeaway here is that the light of liberalism and the light of social justice are different and should be analyzed on their own terms.
What a mess digital tools make of both, raising all kinds of complications about how and when and where and if to shine even our most righteous light, and there's one basic reason why this is the case. Online, predicting an audience's response, even identifying where one audience ends and another begins, isn't just difficult. It can be downright impossible, by network design. Social sharing, spurred on by trending-topic algorithms, by streamlined retweeting and reposting functions, and by the various attention economy incentives, ensures that audiences online remain hopelessly collapsed. Within these networks overlaid on top of networks, a person might shine their light nobly, a steady beam cutting through the darkness. However pure that light might be, however focused, its reception is in fact prismatic, refracting wildly with each network twist and turn. Its colors change. Its wavelengths lengthen, and it can never be called back. Because it holds a mirror up to society to reveal its ugliest contours, the light of liberalism is particularly vulnerable to out-of-control refraction online. The funhouse mirror that is the marketplace of ideas only strengthens that light, bending it towards worse and weirder outcomes, most pressingly because there is no singular marketplace of ideas online. There are ideologically siloed marketplaces, particularly on the far right, thanks to the process of asymmetric polarization and to financial incentive structures rewarding increasingly radicalized content, which is another conversation all unto itself. The result of this ideological siloing is that a scathing critique spotlighted by the light of liberalism can filter into a reactionary marketplace and emerge transformed as a joke, or a justification, or an incentive to do something even worse next time. Digital spaces also pose complications for the light of social justice. As necessary as that light might be, online its potential benefits are matched by its potential fallout.
Most basically, context-collapsed audiences online allow bigots to weaponize light very easily. For example, the Legacy Museum has disabled all the comments on its YouTube channel. The goal of fostering a meaningful dialogue about the legacy of slavery is outweighed by the dangers of providing bigots a platform to air their two racist cents. Also vexing is the fact that the social sharing of harm, even when the purpose of that sharing is to trigger restorative justice, frequently dovetails into worse abuse and harassment. This is terrible and we cannot let this persist is online both a critical assertion to make and a target sign. Articulating the difference between the light of liberalism and the light of social justice, an effort that begins with the seemingly simple question, who and what am I shining my light on, is a basic way to identify where the potential hazards lie and what the ambivalent consequences of our light might be. Such a focus in turn cultivates more strategic, ecologically sensitive approaches to spotlights online, and here ecological refers to the reciprocal interconnection between our networks, our tools, and ourselves. Applied to how and where and when we shine our light, ecological thinking directs our attention to the out there, to unexpected places and unintended audiences. It affirms the value of wanting to rid the world of the scourge of bigotry, wanting to protect groups. At the same time it reminds us, gently, that intentions aren't outcomes, especially online, as attention zooms across networks with so much uncertainty about what happens next. We must be in the world we're in, not one built to fit our long-held assumptions. Living bravely in that world means applying strategic, case-by-case assessments of the costs and benefits of light, and also of darkness. It means acknowledging that our light can be a weapon for some, even as it's a necessary beacon for others, and never growing too comfortable parsing which is which.
It means, most important of all, respecting the power we hold in our hands and making peace with the fact that when it comes to light, there are no easy answers. Thank you. [ applause ] Hello. I'm the last one. I'm going to have fun. BuzzFeed, it's even more fun. My name is Lam Thuy Vo. I'm a senior reporter at BuzzFeed News, where I do a lot of weird things on the internet. I basically hoover up information on the social web to better understand how we interact with one another. Sometimes I badger people into giving me, or donating, their data so I can better understand how they experience the social web, and oftentimes I also look at the terrible systems that govern our lives every day, whether it's policies or algorithms. When it comes to my role in this talk, I'm very much someone who is going to talk about systems that govern how we consume information. I think there's been a lot of talk about fake news, bad information, misinformation in general, that puts the onus on us understanding whether something is good or bad, and for me it's a lot about the social dynamics that come into play in how we consume information online. So these are the three ideas that I want to set forward in my paper. First, understanding how algorithmically powered filter bubbles work. Second, information is always socially and emotionally contextualized. We don't just read news in a newspaper that sits in front of us. We read it constantly contextualized by the person who posts it and the commentary it solicits. Third, I would love for you to start thinking about information consumption as a performative act. We were able to have people on deck who had been sitting in 4chan channels for six, seven years before anyone else even understood that that was a terrible place to be, sorry, Whitney, you were part of that too. Then, after I introduce those three concepts, I would also like you to think about how all that has changed the political imagination.
What is the Overton window, to borrow a term: what is considered extreme and what is considered politically popular is very different depending on which information universe you inhabit, right? So let's start with something fun. This is Catherine Cooper. And you'll have to look at things, because BuzzFeed works with visuals. This is Catherine Cooper, a conservative mother, and Lindsey Cooper, who shared their Facebook news feeds with us because we wanted to better understand how two people who were politically different were experiencing information, and how algorithmic realities were maybe making their relationships worse than they should be. We got, I think it was 2,367 posts from each of their news feeds; you scroll and scroll and scroll, and at the bottom it says there are no more stories, and sometimes it also prompts you to get more friends so you have more stories. We were able to classify the content: posts from their friends, ads, but then also political content and news. And these were two very different ways of understanding politics for them. Here are two different memes: on the left side you have a liberal meme, right, and then on the right side you had another meme about Barack Obama, this was in 2017 when we did this experiment, riffing on the Three Stooges, right? I might be dating myself here. Basically that was an interesting way into the humor and performativity that surrounds politics these days. Politicians still believe that we take them at their face value, but everything has been remixed in this area of the internet. What we see and who we see is no longer just someone giving you information about reforms or about their platform. It is always contextualized, right?
What's also interesting is that, I think, Leticia talked about the number of followers we have, the number of people we're friends with, the number of people who make up the inventory of our content, right; the people we're friends with are the people who produce the content that could potentially be part of our information universes. So I would like to introduce to you the concept of the tyranny of the loudest. Algorithms take in the data that you give them, right? And so these are the two different news feeds ranked by the friends, not the news institutions, that were showing up the most on these two women's news feeds. On the left you see Lindsey Linder, who is, I guess, in her 20s; she works in criminal justice in Austin. You can see that she is friends with a lot of lawyers, and the person who showed up the most on her news feed, it's very small, you might not be able to see it, is an ACLU regional director, right? And then afterwards a friend from Seattle, then a lawyer, another lawyer, a friend from back home, another friend from law school. So you can start to get a feeling for what her information universe might be like. And guess what, the ACLU regional director is probably one of the loudest voices there, probably posts the most, probably posts more than the other friend who may no longer work in law. Not only is there more inventory from this one person potentially showing up; this person's content may also be soliciting a lot more extreme emotions, and we can talk about that in a second, that then buoy that information further.
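The analysis described above, ranking a donated feed by which friends appear most often and classifying posts by type, can be sketched in a few lines of Python. This is a hypothetical illustration, not BuzzFeed's actual pipeline; the post records, field names, and categories are invented for the example.

```python
from collections import Counter

# Hypothetical donated-feed data: each post records who authored it
# and a rough content category (friend post, ad, news, political).
posts = [
    {"author": "ACLU regional director", "category": "political"},
    {"author": "ACLU regional director", "category": "political"},
    {"author": "ACLU regional director", "category": "news"},
    {"author": "friend from Seattle", "category": "friend"},
    {"author": "lawyer from law school", "category": "friend"},
    {"author": "lifelong school friend", "category": "friend"},
]

def loudest_voices(posts, top_n=3):
    """Rank feed authors by how often they appear: the tyranny of the loudest."""
    counts = Counter(post["author"] for post in posts)
    return counts.most_common(top_n)

def category_breakdown(posts):
    """Classify the feed by content type, as in the two-news-feeds experiment."""
    return Counter(post["category"] for post in posts)

print(loudest_voices(posts))
print(category_breakdown(posts))
```

The same two counts, run over a real donated-feed export, would reproduce the friend ranking and the friends/ads/political/news breakdown described in the talk.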
For Catherine Cooper, a mom who has never left the small little enclave where she grew up, the person who shows up on her news feed the most is a lifelong school friend, followed by her mother's lifelong friend, then her daughter's best friend, not even her own best friend, right, and her best guy friend of 50 years. And so you start understanding that the people we're surrounded by make up our information universes, and if we are to believe the studies, a lot more people are getting their information from the social web, political information in particular as well as news. I work at a news organization, our industry is dying, please support us, but I think one of the things that's really important to understand in this context as well is that Facebook, Twitter, Instagram, a lot of these social networks are engineered for maximizing data collection and profit, right? One of the reasons why Facebook had one of the highest IPOs in history is that it has longitudinal behavioral data about you over time. What's also interesting is that it doesn't measure nuance. Think about the kinds of emotions that Facebook measures: hahas, wows, angries, sads, likes, loves maybe, maybe. But these are not going to measure someone going away after reading an article and thinking very deeply about something, right? We're not here to measure the quality of information consumption. We're here to measure emotional reaction to it. And if that's the data that feeds back into what then surfaces on your news feed, then what you have is only content, or probably mostly content, that solicits very strong conversation slash emotional reaction, right? So that is one of the things that I think is very important to understand, because that may take existing political differences and really push them further and further apart.
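The feedback loop described above, reaction data feeding back into what surfaces next, can be illustrated with a toy ranking function. The reaction weights and posts here are invented assumptions; real news feed ranking is proprietary and far more complex, but the dynamic is the same: content that provokes strong reactions outscores content that invites quiet reflection, which scores nothing at all.

```python
# Toy model of engagement-driven ranking. Weights are invented for
# illustration: stronger reactions are assumed to count for more.
REACTION_WEIGHTS = {"like": 1, "love": 2, "haha": 3, "wow": 3, "sad": 3, "angry": 4}

posts = [
    {"text": "Thoughtful long-form policy analysis", "reactions": {"like": 40}},
    {"text": "Outrage-bait political meme", "reactions": {"angry": 30, "haha": 25}},
    {"text": "Vacation photos", "reactions": {"like": 20, "love": 10}},
]

def engagement_score(post):
    """Sum weighted reactions; quiet, careful reading contributes zero."""
    return sum(REACTION_WEIGHTS[r] * n for r, n in post["reactions"].items())

# Surface posts by engagement: the outrage-bait meme rises to the top
# even though fewer distinct people may have engaged with it thoughtfully.
ranked = sorted(posts, key=engagement_score, reverse=True)
for post in ranked:
    print(engagement_score(post), post["text"])
```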
And suddenly we have fringes, suddenly we have people who start maybe following a Trump website or something else; there was an Atlantic article where someone just started a Facebook account, liked the Trump pages, and started to see what kind of information would come out of that kind of algorithmic experiment. And I think once we start understanding that algorithms distort, as Whitney was saying, introduce a funhouse mirror effect into what gets attention and what doesn't, we can also start understanding that information segregation is becoming even worse through algorithmic content surfacing. Does that make sense? All right. We move on. I teach a lot, so this comes together. Another thing I like to bring up: the tyranny of the loudest is really well illustrated in this animated GIF, thank you BuzzFeed. I think this was Donald Trump's very first press conference as president, and what you notice here is a lot of emotions. The emotions are also, again, segregated, right? On the right side, Fusion, which I guess you could classify as more left-leaning, you see a lot of angry faces floating across the screen. And then on the left side you see a lot of yes, good job, and a lot of hearts and likes and so on. And I think that really starkly illustrates just how differently we experience the exact same information. And this was the live stream of a press conference. There was no editorializing. There was nothing else around it, it was just him speaking into the camera, and suddenly we are faced with the question of whether we agree with the hundreds of other people who either agree or disagree with Donald Trump. We can't just take in the information anymore. Suddenly we are put into a position where we have to react rather than just take in the information and make up our own minds about it. All right. Give me two minutes.
Last but not least, one of the things I want people to understand is that taking in information has become very performative. One of the stories we did, about AOC for example, was to better understand how political adherence is performed online through fandoms and antifandoms. It's not very different from how we perform our fandoms around Captain America, for example; people will write and make fan fiction about political figures, because now more than ever our consumption of information and our sharing of information has become a way for us to demarcate ourselves politically, right? So this is fandom art around AOC. We scraped, or looked at, I think more than 40 gigabytes' worth of information for this and were able to look into the common themes in it, and then this is AOC on the right side and how she's being dehumanized in certain ways, looking like, I guess, the trash can monster. In many ways, one of the things I want to show you through that is that we now live in parallel universes. Sometimes I would invite you to look at the polar opposite of your political spectrum and start experiencing press conferences through the streams of those maybe very extreme Facebook groups, just to get a better understanding of how those folks' political imagination is completely separate from yours, and I think that's what happened in the 2016 election when people were asking how did this happen. We all live in different and segregated information universes, and now is the time to really find ways to combat that and figure out a way of balancing out people's media diets. Thank you. [ applause ] So thanks to all of you. You've given me so many good threads to pick up on. But for anyone who knows me, it will not surprise you that I want to start by talking about journalism and the press. You all either explicitly or implicitly talked about who some of the actors responsible for information pollution are.
I think we would agree there are many and that the issue is really systemic. Interestingly, in a poll that the Pew Research Center published last summer, respondents said that they felt political leaders and activist groups were most responsible for, I'll use air quotes here, creating made-up news and information. In that same poll, however, most of the respondents said they felt the news media was the one most responsible for cleaning that up. I want to ask you, how much can we responsibly lean on journalists right now, who, to my mind, are working harder and longer, under more difficult conditions, with more harassment than perhaps ever in American journalism? And I'm also wondering if the press is contending with institutional flaws, such as lack of diversity, and maybe flaws of process, such as both-sidesism as it's been called, that may make it poorly equipped to be the player to, I'll use the word, fix this kind of issue. I don't know if there's somebody who wants to take that up first. Whitney? Yeah, so one of the things that prompted my reflecting on, you know, light disinfects, was the fact that I was seeing different people saying it, but they meant different things, or they were referring to different things, and that difference hadn't really been articulated clearly. And what I have found most commonly within journalism, there are some exceptions to this, this is what prompted the thinking, but the most common kind of light that gets shined within establishment media, news media, is the light of liberalism. This assumption that you shine the light on the bigot, you shine the light on the terrible thing, and then that is going to be the way that you disinfect it.
And because it also tends to track, not always, but tends to track with whiteness, you have folks working in newsrooms for whom racism is an abstract idea, or something to write an article about, but not something that's lived, so it's easy to shine a particular light, to write articles about the nazi next door, because you just think, if I hold this mirror up to society, that's going to solve it. And the problem is a failure to consider how that light ends up refracting and creating a more unsafe world for the people who are the targets of these bigots. And so there's a lot of reflection, ironically, I guess, about how do I show how terrible this is, and very little reflection about how does this impact bodies that are different from mine. So I think that that conversation of why certain lights get shined more than others is fundamentally a conversation about diversity. And, you know, it isn't the case that all reporters of color are shining a different kind of light, but you do see differences. For example, and this was the case that prompted my thinking, during the summer of racist presidential tweets, when Trump told the squad to go back, you did see a difference in how different kinds of journalists responded. The white journalists tended to shine the camera on Trump's face over and over, the crowd, over and over, send them back, send them back. And then with reporters of color, not to the letter, but often there was a pattern: reporters who had more of an embodied investment in that racism, who had experienced people telling them to go back; those stories tended to be focused on the people who were the targets of that kind of racism. And that was what got me thinking about how, within journalism, light gets shined in different ways, but you have disproportionate kinds of light because you have disproportionate kinds of people with different assumptions about the value of the light they shine.
These conversations are fundamentally tethered to diversity; that's why there need to be more diverse folks in newsrooms, so you don't have a kind of monopoly on the kind of light, and the kind of challenges that light causes. Go for it. Okay. Yeah, thanks very much. I echo what Whitney said, and I come back to one example, it's from a while ago, but the L.A. Times had a moment where they decided to stop allowing letters to the editor that denied the existence of climate change. For a news organization, that was a pretty big deal, and it was just in the letters to the editor, it wasn't on their main news pages. And it sticks with me as one of these examples where, one, news and journalism is a very big space, I'm often hesitant to talk about what journalism is or what news is because it can be so varied, but that was a moment when a risk was taken, because it was even before, I think, there was a dominant view that climate change is a real thing. But the L.A. Times said no, we're going to take a stance. I think about that instance and I think about, well, what were the conditions, what led to them being able to do that. One is journalists being embedded in some of the broader social movements and patterns, knowing the culture you're in, so the L.A. Times can say it's not that big a risk, it's a little bit of a risk, but we're close enough to it. The other is that this was letters to the editor, so it was not the main news coverage; it was about their relationship to the audience, about educating the audience. But then I also come back to the notion of this marketplace model of journalism, which is dominant in this country for sure, and which I think leads to a lot of the things that Whitney was saying.
I think about how there could be better runways or better, safer contexts for journalists to take some of those risks, like saying climate change is a thing and we are not going to allow the denial of it in our pages. What kinds of insulation or safety nets could be provided to news organizations to let them do that, to let them actually make some choices that are very different from the choices of the penny press in the 1840s? In some ways we're still stuck in this economic model of saying don't offend an audience, because that audience is a potential revenue generator, a potential form of cultural legitimacy. I come back to funding, giving journalists a safety net to be able to make the choices that I think they actually want to make. Can I, as the only journalist of color here, and a practicing, full-time journalist, speak to that? One of the things that can be kind of frustrating is, yes, we do this kind of work. I'm going to speak specifically about BuzzFeed, but I've worked at the Wall Street Journal and NPR and various other institutions, and I can speak to the fact that distribution issues have really turned a lot of the work that people do to combat misinformation into something that doesn't reach its audiences. We have, for example, two experts in misinformation, Craig Silverman and Jane Lytvynenko. Jane combats hoaxes online. She reaches certain folks on Twitter, but that's not going to reach the majority of folks, right? That work exists, and I think a lot of institutions are doubling down on it right now; if you look at a lot of funders, they're giving a lot of money towards combating misinformation.
I think fundamentally, yes, the work is there, I think PolitiFact is showing this work has been there for a long time, but the plumbing of distribution hasn't done the job of getting that information to folks, or has made it really difficult. And to go back to the funhouse mirror effect, our attention is no longer singularly on a very evenly balanced home page. Our attention is this weird magnifying glass that goes from one topic to another very quickly. And if you look at anything that happened last year in particular, we've reached a really pivotal point of absolute compassion fatigue. You can give people all the information you want, and we are doing that. I would argue that we have the people who have been sitting in 4chan for six, seven years and have been doing these articles, who have been not just covering it as news that takes you from one place to another but also explaining exactly how these systems are broken, are not good for, I don't know, larger civic society. I think the biggest issues are that there is a need for, a, bringing that information to folks, and, b, figuring out how they can receive it in the right mindset. Because, again, just ask anyone how many tabs they have open; look at how people are experiencing anything from, let's say, the border crisis all the way to the ways in which the impeachment has been covered. It's just a barrage of information that, again, has too much on one thing all at the same time. And then moves on to the next thing. That would drive anyone insane. And I think that is the part where, and we talked about this offline, there is a lot of confusion about what we're supposed to pay attention to and what we're supposed to have compassion for.
And I think constantly being in this emotional glass cage, or this glass cage of emotion, to quote Anchorman, of what social media means to you, has become a really exhausting process for people and has actually kept a lot of people from being able to do the basic thing of asking, is this good information or bad? Because we're constantly pushed into a position to react. And when we have to do breaking news events, we're being sent and deployed into the field, we tweet first, right, then someone at headquarters takes those tweets and puts them together. And as someone who works at a news organization that is very much on the internet all the time, I think for us it's become a big problem of, okay, we have a very strong, very particular audience, but we can only reach so many people, right? It's kind of like a mosaic of information, a mosaic of news institutions; how do you make that a holistic process? Just briefly, I think a lot of good points have been raised. To go back to your original question: no, I don't think it's fair to ask journalism to be responsible for this. I do think that question, or the results of that question, are slightly misleading, because when you phrase it as made-up news, people think about news in a particular way. Politicians make news, right, they do things that are reported in the news, so that's why people say it's politicians' fault, and also we just like blaming politicians for things; and media gives us news, right, they disseminate news, so it's their problem to fix made-up news. That's a large part of it. Thank you. Even beyond that, I think we can't expect the public to know how to fix this. We study this stuff for a living and we still don't 100 percent know how to fix it. So I think that's a little unfortunate. I mean, I understand why you asked that question, but I'm not relying on the public to decide how to fix this problem.
I think Lam is absolutely right, journalists are doing exactly what they're supposed to be doing, producing good information, making it available to people. There are broader systemic things, and I think there's a really large public opinion element to this: how do we shift that blame or that responsibility so that it's not just thinking about what the news media can do to fix this, but who else is responsible? Is that regulation, is that, you know, the public? I think there are a lot of actors that need to be held accountable. Do you want to weigh in, Whitney? Then I'll go to the audience. Yeah, that point about intentionality matters; I think that is at the crux of some of the confusion, some of the problems that we have with the spread of mis- and disinformation. The difference is that misinformation is false information unintentionally spread, while disinformation is deliberately spread. You can't always parse that online. But we're still talking about why someone spread something. So there's this sense of pointing the blame somehow outwards, externally. And the sense that if our intentions are good, if we want to help, if we want to shine that mirror, if we want to write an article about the nazi next door because we just think nazis are bad, then we're off the hook, whether we're journalists or individual citizens. But in the way information travels online, algorithms don't care what your intentions are. They care that you engage with something. And so as long as we're framing the conversation as who is getting it wrong, we're less likely to start asking how our own actions feed into these polluted information flows without our realizing it. The reporters who really rely on the light of liberalism, their intentions by and large are good, they want to help, they want to do their best, they're using the tools they have at their disposal.
They are relying on the assumptions they've always made, assumptions that make sense to make, let the marketplace of ideas sort it all out; they're coming at it from a good perspective. But just because you are doesn't mean you aren't going to then inadvertently shift the Overton window, because now you've got all these articles about all these nazis that you've handed microphones to plead their case. So until we start shifting how we understand intention, blame, and responsibility, we're only ever going to be pointing our fingers at other people, when we are all part of this process, we are all part of and contribute to this ecology. And that means producing pollution even when we don't intend to, even when all we want to do is help. I just want to add that those discussions are happening in newsrooms. I help administer a Slack, for example, for journalists of color across the country, and there are a lot of us who do that work in there, as well as other folks who are editors, who have talked about that particular article. It's not as clear and simple as, oh, we didn't intend that and now we're going to walk away from it, that's definitely not it. And as the story has shifted towards it also being about media, right, 2016 fundamentally turned media into a beat unto itself. I think more editors are starting to think deeply about how they phrase things, what words they use, and what they choose to cover. So I would say it's probably a lot more nuanced than just liberal white journalists portraying nazis in a sympathizing way. There have been internal conversations, which I cannot divulge entirely, but that I've seen happen over and over again, about whether to publish the name of a shooter or not. That's, for example, a thing that's happened in the past few years. Think about the whistleblower whose name was not published by various institutions, and that may have been published before.
There have been a lot of conversations in newsrooms, and yeah, we don't always get it right, but it's a lot more complicated than what it sounds like. Oh, it's definitely complicated. And it's getting better. I mean, I think in some ways there has been some improvement since 2016, because more people are asking these kinds of questions. But I do think there is still often this assumption among journalists, and not just journalists but everyday people who also spread information, that somehow there's an outside where we can stand; that if we are calling attention to a hoax to condemn it, or to call attention to the fact that it's false, we're not somehow contributing to the spread of that hoax. And we are. We do. Any time we engage with anything; that's the ambivalence, that's the complication. And so in the work that I do with journalists, following Charlottesville in particular and even still, that's something that folks talk about: there's an increasing awareness of being in it, but not really knowing what to do when you're in it. And I have found that depending on what your life experiences are, you're maybe more or less likely to think about those nuances, as opposed to, I stand outside, it's the view from nowhere, I'm just telling the facts, and then people will realize that something is problematic. But you're absolutely right, there's tons and tons of nuance. But there's also, and everyday people do this too, the assumption that if I'm calling attention to how racist and terrible somebody is, that's going to convince someone that racism is bad, and maybe it does for your immediate circle, but it can still spread that information and entrench that idea in other audiences. I want to let the audience weigh in now. If you have a question, I just ask you to go up to a microphone; if you can't reach a microphone for some reason, just wave your hand and we'll get one to you.
If you introduce yourself and speak into the microphone, I would appreciate it. Thanks. My name is Walt Hauser. I'm a retired fed, a federal contractor. And I got involved in the internet back when it was a place where everyone knew your name. And now, to call it a hellish landscape might be sugarcoating it. I think two of our panelists have affiliations with Wired magazine, and I think Andy Greenberg's book Sandworm described very fittingly the specter of Fancy Bears rampaging through cyberspace, sowing chaos and confusion. And I'm at a loss to see how the United States is going to respond to that threat. Thank you. Anyone want to weigh in? Is your question along the lines of, it seems like a hopeless situation, where do we begin? I think Greenberg sets an example. He's gone very deep into the threat and has presented a very coherent timeline as to what's happened and where it might go. And I think that that's an excellent example for journalism in general. But it's very expensive. It's very time consuming. He spent years working on that project. Who can afford it? Okay. But your question is? Well, who can afford it? Okay. Any responses? It is expensive. We invest in things we care about. If we decide we care about this, we'll spend money on it. Yeah, and I also think that where the money gets spent matters too, that it's really important to have conversations specifically about the institution of journalism. But when I think about, if I had money that I could just throw at the problem, I don't know if I would start with journalism. And I don't know if I would start with technology. I think I would start with K-12. I mean, so figuring out, where do some of these assumptions come from, how is it that young people are raised to not just interact online but interact offline too, how do we teach a more holistic understanding of the world so that people can interact in a more holistic way?
It's not just that Facebook, I guess they're not on Facebook, but it's not just that digital media sets people up to start compartmentalizing and, you know, decontextualizing. That's a process that happens offline too. How do we intervene, how do we set young people up so they can better navigate these challenges they're going to be inheriting? So yes, where is the money going to come from, but that's where I would start if it were me. To be honest, I used to volunteer for an organization called the News Literacy Project. They go into schools and teach just the categorization of information: what is propaganda, what is marketing, what is news, what is a primary source. I think what is fundamentally lacking is people taking the time to take a step back and really let their critical thinking kick in. Right? I think we've been very conditioned to emotionally react to a lot of information nowadays. If that were something I could do, I would love to figure out, a, a way to put a button somewhere that delays your gratification, so you don't just click share and you don't just engage with the content but actually have to read it, and secondly, to really bring programs like that to schools. Like, I also volunteer in high schools every once in a while, and I see that a lot, there's a lot of confusion. There's a web savviness that comes with that, but there's confusion over content that kids are looking at in particular, and these are kids who half the time understand the infrastructure, but I think it is really the content, how to critically approach that. I'm David Edelstein, I'm a privacy expert. This was originally about Leticia's things, but I think it's a little broader. Dr. Bode, you said that what you were suggesting is sort of people-powered corrections. And you had someone post that false thing about Zika being spread by GMO mosquitoes, and then someone else came and posted a correction.
How is that true corrections over false corrections? If someone comes and posts something true, I mean, certainly someone is going to be able to produce a link arguing the opposite and authoritatively say, no, this expert disagrees. How are we empowering true things over false things? Similarly, Dr. Ananny, you said that the L.A. Times banned letters to the editor denying climate change, and you said they did this before the strong consensus on the issue; you didn't give exactly when. It seems to me like that also isn't necessarily privileging accuracy. It is, perhaps, I mean, that could just as well end up setting in stone something that doesn't bear out. How do you make sure that your methods are favoring true results over false ones? Do you want to go ahead? Okay. Thanks for the question. I appreciate it, and I'm hearing something quite deep in what you're asking that I think it will be good to pull out. And I found this when I was talking to the Facebook fact checkers and the people working there. It is a very professory answer to say, but underlying it, what do you mean by true? And I know that is a very abstract concept, but I think for the L.A. Times example, they were actually doing something really interesting that I don't think journalists often do in an explicit way. I think they often do it implicitly, but they're basically saying, there was scientific consensus about climate change.
There was not yet, they were saying, social consensus, or cultural consensus. So they were making a distinction about when one truth claim dominates in a particular type of culture versus when it dominates in another type of culture. So they were aligning themselves with a cultural location of truth making, and they were saying, we are going to align ourselves with the scientists, where there is consensus, and that's what it is going to be. So one is, they positioned themselves in relation to the cultures, and that was a stand, and it was an interesting one for journalists to take, because they did acknowledge that we're not standing from nowhere, that's standpoint epistemology, the idea that the truth claims you make come from where you are. And the second bit is that they were sort of taking this pragmatist view of truth, which is a very American pragmatist move or position to take: actually, we're not going to talk about whether this is true or not true, we're more interested in investing in the consequences of that claim being considered to be true broadly. And it's not denying the existence of it, but they're saying, that's what we're interested in contributing to: a world in which climate change is not being denied. And that's where we're going to put our stakes. So one was to say, let's look at a landscape of cultures and how they're making different claims, and they align with the scientists; the other is to say, we're interested in the consequence of the claim being considered true, and we're going to try to help it along by using our news organization to give it legitimacy. But it was really a moment of situating a truth claim in relation to a culture and a consequence, and that is what they were doing in that moment. And that's a risky, interesting way. That does seem risky. I mean, I certainly think climate change is happening, but if it weren't, then trying to push that it is would be actively harmful.
That is the messy, the mess that we're in. So they were taking a stance. I think that's a noble move, an interesting move, but that's what a journalist does, is take stances. And do you want to respond as well? Yes, it is something that plagued us for a while. I've been doing this research since 2014, and we were worried about this, and we tested it. It is not published yet, but I can tell you information processing works the same whether it is a false story being debunked with the truth or a true story being debunked with falseness, assuming that the true story and the false story that are originally posted are equally implausible. So that is something that is very concerning to us. But what mitigates my concern is empirical. We have this data out of the U.K. that looks at people self-reporting whether they shared information that they later found out was wrong, or that they knew at the time was wrong, things like that, and it also asked them if anybody called them out on it, essentially if people corrected them on social media. About 40 percent of people who say they shared false information say they were corrected, which is really heartening, and only 5 percent of people who say they never shared false information had someone try to, quote unquote, correct them. Empirically, it doesn't happen that much, but to the extent it does, it should have the same outcome, which is still something to worry about. So when we're talking about how to shift public opinion and how to shift behaviors, I think that that's an important part of the question: how to encourage people to do this, not from an evil perspective, for lack of a better way to put it. So I'd like to get one more question in. It looks like we have one more person waiting and about five minutes left. Thank you. Hi, everyone. My name is Brittany, I'm a second-year law student, and I've worked in tech for a number of years as well. So thank you all for coming.
These are fascinating, fascinating presentations, and I have many questions, but I'll keep it to one. Regarding the last presentation, and the graphs that you showed about the distribution of the number of posts for the liberal individual against the conservative individual, I couldn't help but notice that those were very differently shaped. So on Cooper's, you see there are a lot more people who have a much higher volume of posts compared to Linder's news feed, and I am just curious if that is also something that you guys are planning to interrogate: the balance, and who is the loudest, right, not only who is the loudest but also what are the other, like, offline conditions that are leading to that, the shape of that distribution curve for a conservative individual versus a liberal individual, and what are the broader things that may suggest about their information universe. It would be very nice to do that. I think one of the things that really doesn't come out is, these are two individuals, and trying to extrapolate that to multiple people is even worse. And I think one of the hardest things is, what we're trying to do here is an adversarial experiment where we tried to lead slightly, to where they might weigh certain things, and I think Facebook has an explainer of how the algorithm works, and it is still not enough to be able to draw strong conclusions. Maybe you can find correlations between why there is more of a balanced sort of distribution of people on this one versus that one, but I mean, I could probably do it. And I guess the other thing I'm thinking, just looking at that, is what does that suggest about folks who fall into that bucket versus folks who might fall into the liberal bucket, and their engagement with the platform, the value they extract from it, compared to Linder. Like, oh, I think there is a lot more there that is so interesting to dig into.
Yes, and the biggest thing I want to make sure that people get out of this is that it is very individualistic. Very much more of an emblematic example, rather than something that you can extrapolate from. Of course. And I don't know, if you want to donate data, I'm down for it. I don't think it would be that interesting. Just pictures of my dog. But thank you. That would be great, too. You never know. Dogs can be politicized. Yes, that's true. Thank you. Thank you so much. I'd like to thank all of our guests, and wrap up. Thank you. [ applause ]
