C-SPAN3: New York University Hosts Conference on Race and Technology, Part 5

July 14, 2024

...and the International Communication Association. All right, it is 5:00 on the dot, and I want to give our speakers every single second possible. It gives me great pleasure to introduce our final keynote. At the end of the day, it seems you've saved the best for last; we've had a day full of bests, and we have our final set of bests here with the keynote on Artificial Intelligence and the New Jim Code: professors Ruha Benjamin and Meredith Broussard. And with that, I will turn the floor over to both of you.

Thank you. Okay. Thank you, Charlton, and thank you, everyone, for coming today. This has been a very intellectually stimulating day. It is my great pleasure to be here today with Dr. Ruha Benjamin, who is an associate professor of African American Studies at Princeton University. She is the author of People's Science: Bodies and Rights on the Stem Cell Frontier and a new book called Race After Technology: Abolitionist Tools for the New Jim Code, which you should get because it is amazing; it is available for preorder now and is coming out in early June. She is also editor of a new book called Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life, which comes out this week. And so it is my great pleasure to be on stage with Ruha.

I get to introduce Professor Broussard, but I have to say, as I introduce her, that unfortunately her book is already dated. I would love to do a plug, but she's now an associate professor of journalism at NYU, so we need a Naomi Wolf moment and recall the book. Professor Broussard is at the Arthur L. Carter Journalism Institute at NYU and the author of Artificial Unintelligence: How Computers Misunderstand the World. Her research focuses on AI and investigative reporting, with a particular interest in using data analysis for social good. A former features editor at the Philadelphia Inquirer, she has also worked as a software developer at the MIT Media Lab. And with that, I'll get us started with the first question.
One of the things that's really striking about Artificial Unintelligence is this balance between tech enthusiasm and tech skepticism. There is a line that I love in the book that captures this balance, where you say: as a car, the Tesla is amazing; as an autonomous vehicle, I am skeptical. So tell us a bit about how you balance this, as someone who writes code and is engaged with the entire tech industry, and how that works together for you and your analysis.

I actually opened the book with a story related to this, a story about a time when I was little and I had this robot-building kit. That was my introduction to the fact that we can build things with technology, and that what we imagine the technology is going to do is not necessarily the same as what the technology actually does. So there is a big gap between our imagination and reality. I took this knowledge into my work as a developer, because often I would write code and I would imagine that the code could do something, but once I implemented it, it couldn't live up to my expectations. So this is something that we really need to confront as people who make technology: we need to not get out over our own skis. I explore this in the case of autonomous vehicles, which I write a lot about in the book because they are an important technological issue as well as a social justice issue, and the makers of autonomous vehicles are focused on the fantasy of autonomous cars and are not giving enough attention to the often disappointing realities of them. So it is absolutely possible to keep building technology in the face of constant disappointment. I mean, this is actually how you build code.
You have to be okay with constant failure, basically, because nothing ever works the first time that you build it, and you have to bang your head against the wall, and it feels amazing once you stop. But this gap between imagination and reality is something that we need to be more honest with ourselves about when we're thinking about technology and society.

So, Ruha, I want to ask about your work. Your work focuses on science, medicine and technology. How did you come to this interest, and how do you feel it intersects?

The short answer is that when someone tells me not to do something, it makes me just want to do it. When it comes to all of the things that sociologists study, I found when I was an undergrad that there was a black-box exceptionalism around science and technology that didn't necessarily pertain to other fields. Whereas if someone is studying economics or politics, people don't stop them and say, were you a politician before this? Were you an economist? The assumption is that you could have some kind of access without being trained in that arena, which we don't grant to science and technology. I was interested in that exceptionalism and breaking through it, sort of piercing that bubble. And within that, as I moved further into the social studies of science and technology, I found there were lots of people doing it, and oftentimes the way it was framed was looking at the social impacts of science and technology, how it was impacting different communities. I became interested instead in the social inputs, the assumptions, norms and biases, rather than taking those for granted and only studying how technology impacts society downstream. Looking upstream, rather than treating what we're given as inevitable.

This idea of inevitability I think is really interesting. One of the really exciting things about reading your new book was realizing that we were talking about so many of the same things. I wish we'd had this conversation about three years ago. I know.
I will say something about the books, which is that I do see both of our books in this field as provocations, as conversation starters, not trying to close things up or tie a nice bow around them. So in some ways I'm glad that it's sparking a conversation after the fact. Very much so. Very much so.

And this feeling of inevitability I think is really interesting, because my sense is that people feel disempowered in the face of technology. There's been this narrative that tech is something that we have to sit back and let happen to us, that we don't have agency in deciding what technologies get developed, or what the interventions or the invasions in our lives are. I think there are two sides to that techno-determinism. One is a kind of fatalism, that technology will devour us and destroy us; the other side is equally techno-deterministic, that technology will save us. Both are deterministic ways of thinking that rob us of our agency.

And so, just thinking about the crafting of your own book, one of the things that I love is the way that you bring together critical analysis with storytelling. There was a panel or two ago where a lot of the conversation was about the role of storytelling both in reinforcing certain kinds of hierarchies and inevitability, and also as a tool for subverting them. I want you to take us behind the scenes of how you balance story crafting and analysis.

I get to talk about literary craft? This is very exciting. So I come from a literary school called immersion journalism, which derives from the New Journalism of the 1960s and is heavily influenced by participant observation.
As an immersion journalist, you immerse yourself in a world in order to show your readers what that world is like, and so I do a kind of immersion journalism for technology. As a data and computational journalist, I write code to commit acts of investigative journalism, and often I'll build code in order to demonstrate something about how technology works or how particular technological interventions work. So in a couple of episodes in the book, I build campaign finance software to try and fix the campaign finance system, which, P.S., is really broken, and it didn't work. I also build artificial intelligence software to try and fix a particular problem in Philadelphia's public schools. The public schools did not have enough books for the students to learn the material that was on the state-mandated standardized tests, and nobody had ever asked the question before of, do the schools have enough books? And so I tried to answer that question, but I couldn't, because the software to answer that question didn't exist, and you can't actually do the calculation in your head because it's just too hideously complicated. The process of building that technology was really illuminating in understanding the role of data in that system, and also in understanding why it is so hard for kids to succeed inside resource-starved school districts. It's not really a data problem. It's a people problem. Building technology, and talking about how I built the technology, was a way of accessing larger social issues around what we are doing to kids in public schools.

One of the stories that you tell, which you hinted at a few minutes ago, took place in 2007: this harrowing story of you riding in a driverless car. One of the stories that we tell about tech is, just give it time, right? We'll fix the bugs. And so it's been 12-ish years. Where are we with that? Would you say that we're at a place where it's reliable and safe enough for wide use? Not even vaguely.
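The book-counting question Broussard describes, do the schools have enough books for the tested material, is at heart an inventory-versus-enrollment comparison across many schools and many required titles, which is why it overwhelms mental arithmetic at district scale. A minimal hypothetical sketch of that kind of calculation, with invented school names, titles, and numbers (her actual system and data are not shown in the transcript):

```python
# Hypothetical sketch: per-school shortfall of required textbooks.
# School names, titles, and counts below are invented for illustration.
from dataclasses import dataclass

@dataclass
class School:
    name: str
    enrollment: int                # students who need the tested curriculum
    books_on_hand: dict[str, int]  # title -> copies in inventory

# Titles tied (hypothetically) to the state-mandated tests.
REQUIRED_TITLES = ["Everyday Math Gr6", "Literacy Anthology Gr6"]

def book_shortfall(school: School) -> dict[str, int]:
    """Copies still needed per required title (0 means fully stocked)."""
    return {
        title: max(0, school.enrollment - school.books_on_hand.get(title, 0))
        for title in REQUIRED_TITLES
    }

schools = [
    School("Adams Elementary", 420,
           {"Everyday Math Gr6": 410, "Literacy Anthology Gr6": 300}),
    School("Baldwin Middle", 610,
           {"Everyday Math Gr6": 650}),  # no literacy anthologies on record
]

for s in schools:
    print(s.name, book_shortfall(s))
```

The hard part in practice is not this arithmetic but the data behind it: messy inventory records and curricula spread across hundreds of schools, which is part of why, as she says, it turns out to be a people problem rather than a data problem.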
So in 2007 I rode in a driverless car and it almost killed me, and that was in an empty parking lot with no traffic. The technology has come a long way since then, but it has not come as far as the marketers would like you to believe. The next time you hear people saying driverless cars are about five years away, think about how many times you've heard that, and how long people have been saying they're five years away, because they've been saying it since at least the early '90s. So they're actually not five years away. They do not work as well as you imagine, and in fact, one of the things I've become increasingly worried about with driverless cars comes from something you allude to in your book, which is the inability of these systems to see darker skin. Image recognition systems, facial recognition systems, object detection systems: first of all, they don't see the way a human being does, and the other thing is they don't detect people with darker skin as well as they detect people with lighter skin. And so the systems that are embedded in these two-ton killing machines are not detecting people with darker skin, and I would pose the question: who exactly do we think is going to be killed by self-driving cars out on the street?

And you go into this when you talk about the racist soap dispenser. Are you all familiar with the viral online video of the racist soap dispenser? There is a man with light skin who puts his hand under the soap dispenser and it works, and a man with darker skin puts his hand under it and it doesn't work. So can you tell us a little bit about how this artifact functions? Sure. One of the things that I try to do in that section is bring together something that seems kind of trivial. In fact, the people displaying this on YouTube are kind of giggling through the clip, if you recall.
So it seems kind of funny, it's entertaining, and it doesn't seem that consequential when you just read it as a glitch of one particular technology. But we also have to think about how we got to that point. What are the larger structural aspects of the research and development process that would make it such that that would happen? And those same contexts are also operating behind much more consequential kinds of technologies that are making life-and-death decisions about people, whether it's the risk scores that we heard about earlier in terms of whether to parole someone or not, or decisions in healthcare, in education, or whether someone should get a loan that they really need. The same type of questions, about how algorithms, or in this case automated systems, are developed and who they're not seeing, are questions that I think we can apply beyond any particular type of technology. At the same time, I think making distinctions is important, and I'm juggling the attempt to make important distinctions. So rather than just talk about racist robots, which is a great headline, and which is how I became interested in this, the headline about, oh, our automated systems are racist, we should think about how exactly they are racist. What do we mean by racism? We have a spectrum. On one end are technologies that are explicitly attempting to reinforce hierarchies. They're not hidden, right? So we have to make some distinction on that end of the spectrum, technologies that are designed to and are reinforcing long-standing racial, caste and gender hierarchies, and think about them in relation to, and this is what interests me most, technologies that are being developed to bypass bias, that are sold to us as fixes for human biases. We should think about those as a kind of technological benevolence, right? Tech fixes that say, we humans are messed up.
The judges are racist, the prosecutors, the police and the teachers, and here's an app for that, an algorithm for that. We need more attention around those things that are purportedly here to do good, to address inequity through technology. We have to keep one eye on the obviously racist robots, right? But I think we need more attention around the promised do-gooding of so much technology, and the way that we turn to it in lieu of more transformative processes that we should be investing our imagination and economic resources into.

And so this system, the way that racism is embedded in everyday technologies, is I think what you're referring to when you call it the New Jim Code, which, of course, is taken from Michelle Alexander's The New Jim Crow. So can you unpack for us what you mean by the New Jim Code and what's going on in these systems?

Absolutely. This is my attempt to remind us of the histories embedded in technology, by invoking a term that was first a folk term developed to name a broad range of practices around white supremacy, and that got resuscitated through the book The New Jim Crow to describe how mass incarceration works as a license to legally discriminate and enforce caste hierarchies, extending Jim Crow into the present. For me, the current milieu is not simply the next step.
It's not that we're over old-school segregation and mass incarceration, but thinking about how technoscientific fixes create new forms of containment is important to understanding the impact of these technologies. So it's a combination of coded inequity that is often more subtle than past regimes of racial hierarchy; but the key to the New Jim Code, really the distinction, is that it's purportedly more objective and neutral than those prior regimes. That's where the power and the problematic of this regime lie: we put our guard down because we are promised that it's more neutral and objective, and precisely when we put our guard down is when our antennas should go up, in some ways, to thinking critically about this. This came up a number of times in the previous panels, and so I think there is a heightened consciousness around really being aware of anything that is promised to be more objective and neutral. What we see here is a whole set of practices under that umbrella.

And the notion of objectivity and neutrality, and the idea that a machine could be free of bias, is something I really grappled with a lot as I was writing my book, because as a computer scientist, when you're doing computer science, ultimately you're doing math. Computers literally compute, which we kind of forget about. Really, it's just a machine, and it's doing math; all of the fancy stuff we're doing with computers comes down to math. And when you're doing a math equation, it's just an equation. But one of the things that I think is so interesting that has happened among the math and computer science community is that they got carried away with the idea that math was superior. Yeah.
So this is the basis for an idea that I call technochauvinism: the idea that technical solutions are superior to human solutions. It derives from the idea that math is superior to the humanities and social sciences or any other form of intellectual inquiry. And then when we look at the people who for centuries have said, oh, math is so superior, and we as mathematicians don't have to worry about pesky social issues, and our work is so highfalutin and important, those people are a homogeneous-looking group, and they policed the boundaries of their profession for centuries so that women and people of color, for example, were excluded. So it's not necessarily that computer scientists are out there saying, I want to make racist technology, and I don't know of computer scientists who are doing that, thank God, but it's the unconscious biases of a homogeneous group of people that end up manifesting this way, in the idea that, oh, it's just math and it's just computing, and it is going to be superior on social issues. And it's not true. One of the things that I get from wrestling with technochauvinism is that it isn't solving all of the problems out there; even when this kind of conversation about equity and injustice is brought up, the field is looking for a mathematical way to define fairness and equity, and I presume, and I've seen, that computer scientists and data scientists are forced to grapple with questions of equity and
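The "mathematical way to define fairness" that comes up at the end of this exchange can be made concrete. Below is a minimal, hypothetical sketch of one such formal definition, demographic parity, computed on invented decision data; none of the numbers or names come from the talk itself:

```python
# Hypothetical sketch: demographic parity, one formal fairness criterion.
# The decision data and group labels below are invented for illustration.

def selection_rate(decisions: list[int]) -> float:
    """Fraction of favorable (1) decisions in a group."""
    return sum(decisions) / len(decisions)

# 1 = favorable decision (e.g. loan approved), split by a protected attribute.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # selection rate 5/8 = 0.625
group_b = [1, 0, 0, 1, 0, 0, 0, 0]   # selection rate 2/8 = 0.250

# Demographic parity asks the gap between group selection rates to be small.
parity_gap = abs(selection_rate(group_a) - selection_rate(group_b))
print(f"demographic parity gap: {parity_gap:.3f}")  # prints 0.375
```

Demographic parity is only one of several competing formal criteria (equalized odds and calibration are others), and it is known that these criteria generally cannot all be satisfied simultaneously, which illustrates the speakers' point that a purely mathematical definition cannot by itself settle questions of equity.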
