Race has affected development. It's 5:00 on the dot and I want to give our last speakers every single minute possible. Here with a keynote on artificial intelligence and the New Jim Code, Professors Ruha Benjamin and Meredith Broussard. And with that I'm going to turn the floor over to both of you. Thank you. Thank you. Thank you, Charlton, and thank you, everybody, for coming today. This has been a really stimulating, intellectually stimulating day. It's my great pleasure to be here today with Dr. Ruha Benjamin, an associate professor of African American Studies. She is the author of People's Science: Bodies and Rights on the Stem Cell Frontier and a new book called Race After Technology: Abolitionist Tools for the New Jim Code. It's available for preorder now and coming out in early June. She is also editor of a new book called Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life, which comes out this week. It's my great pleasure to be on stage with Ruha. And I get to introduce Professor Broussard, but I have to say as I introduce her that unfortunately her book is already dated. I would love to do a plug, but unfortunately she is now an associate professor of journalism at NYU. We may need to do a Naomi Wolf and recall the book. Professor Broussard is a data journalist at the Arthur L. Carter Journalism Institute and the author of Artificial Unintelligence: How Computers Misunderstand the World. Her research focuses on AI and investigative reporting, with a particular interest in using data analysis for social good. A former features editor at the Philadelphia Inquirer, she also worked as a software developer at AT&T Bell Labs and the MIT Media Lab. With that, I'll get started with the first question. One of the things that is really striking about Artificial Unintelligence is the balance between tech enthusiasm and tech skepticism. There is a line I love, among many in the book, where you strike this balance.
And so you say, as a car, the Tesla is amazing; as an autonomous vehicle, I am skeptical. So tell us a bit about how you balance this, as someone who writes code but is also critically engaged with the entire tech industry. How does that work together for you in your analysis? Well, I actually open the book with a story related to this, a story about a time when I was little and I tried to build a robot. I had this robot-building kit, and it had a little motor, and I plugged it in and it didn't work. So this was my introduction to the fact that we can build things with technology, and what we imagine the technology is going to do is not necessarily the same as what the technology actually does. There is a really big gap between our imagination and reality. And I took this knowledge into my work as a developer, because often I would write code and imagine that the code would do something, but once I implemented it, it didn't actually live up to my expectations. So this is something we really need to confront as people who make technology: we need to not get out over our own skis. I think in the case of autonomous vehicles, which I write a lot about in the book because I think this is a really important technological issue as well as a social justice issue, the makers of autonomous vehicles are really invested in the fantasy of autonomous cars, and they're not giving enough attention to the often disappointing realities of them. And so it is absolutely possible to keep building technology in the face of constant disappointment. I mean, this is actually how you build code. You have to be okay with constant failure, basically, because nothing ever works the first time you build it, and you have to bang your head against the wall, and it feels amazing once you stop.
But this gap between imagination and reality is something that we really need to be more honest with ourselves about when we're thinking about technology and society. So, Ruha, I want to ask you about your work. Your sociological work focuses on science, medicine and technology. How did you come to this interest, and how do these fields intersect? So the short answer, the kind of personal answer, is that when someone tells me not to do something it makes me just want to do it. And so when it comes to all of the things that sociologists study, I found as an undergrad that there was a kind of black box, an exceptionalism, around science and technology that didn't necessarily pertain to other fields. Whereas for someone studying economics or politics, people don't stop them and say, well, were you a politician before this? Or were you an economist? The assumption is that you can have some kind of access to those arenas without being trained in them, which we don't grant to science and technology. So I was interested in that exceptionalism and breaking through it, sort of piercing that bubble. And then, as I moved further into the social studies of science and technology, I found there were lots of people doing it, but oftentimes the way that research arena was framed was looking at the social impacts of science and technology. So the science and technology was a given, and then we wanted to study how it was impacting different communities. And I became really interested in the social inputs of science and technology. That is, the way that our social order, our assumptions, norms and biases, are embedded in the things we take for granted. Widening the narrative: not starting with the science and technology as a given, as inevitable, and then studying how it impacts society, but looking upstream at how society actually shapes the technology that we're taught is inevitable. This idea of inevitability, I think, is really interesting.
And one of the really exciting things about reading your new book was realizing we were talking about so many of the same things. I wish we had been in conversation, I wish we had this conversation like three years ago. I know. But I will say something about the book, which is that I do see both of our books as provocations, as conversation starters, not trying to close things up or tie a nice bow around them. In some ways I'm glad it's sparking a conversation after the fact. Very much so. Very much so. And this feeling of inevitability I think is really interesting, because my sense is that people feel disempowered in the face of technology. There's been this narrative that tech is something that we have to sit back and let happen to us. That we don't have agency in deciding what technologies get developed, or what the interventions or the invasions in our lives are. And I think there are two sides to the technodeterminism. One is a kind of fatalistic determinism: technology is going to destroy and devour us. But the other side is also technodeterministic, which is that it's going to save us. Whether we think of it as saving or slaying, both are deterministic ways of thinking that rob us of our agency. Thinking about the crafting of your own book, one of the things I love is the way you bring together critical analysis with storytelling. There was a panel or two ago where a lot of the conversation was about the role of storytelling in both reinforcing certain kinds of hierarchies and inevitabilities, but also as a tool for subverting them. I wanted you to take us behind the scenes in terms of how you think about storytelling in the crafting of this analysis. I get to talk about literary craft. This is very exciting. I come from a literary school called immersion journalism, which derives from the New Journalism of the 1960s and is heavily influenced by participant observation.
So as an immersion journalist you immerse yourself in a world in order to show your readers what that world is like. I do a kind of immersion journalism for technology, where, as a data and computational journalist, I write code in order to commit acts of investigative journalism, and I often build code in order to demonstrate something about how technology works, or how particular technological interventions work. So in a couple of episodes in the book I build campaign finance software to try and fix the campaign finance system, which, P.S., is really broken, and it didn't work. I also built artificial intelligence software to try and fix a particular problem in Philadelphia's public schools. Public schools in Philadelphia did not have enough books for the students to learn the material that was on the state-mandated standardized tests. And nobody had ever asked the question before of, do the schools have enough books? So I tried to answer that question, but I couldn't, because the software to answer that question didn't exist. And you can't actually do the calculation in your head because it's just too hideously complicated. And so the process of building that technology was really illuminating in understanding the role of data in that system, and also in understanding why it's so hard for kids to succeed inside resource-starved school districts. And so it's not really a data problem. It's a people problem. So building the technology, and talking about how I built the technology, was a way of accessing larger social issues around what we are doing to kids in public schools. And one of the stories you tell, which you kind of hinted at a few minutes ago, took place in 2007. This kind of harrowing story of you riding in a driverless car. One of the stories we tell about tech is, just give it time, right? We'll fix the bugs. And so it's been twelve-ish years. Where are we with that? Would you say we're at a place where it's reliable and safe enough for wide use? Not even vaguely.
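The "do the schools have enough books" calculation described here can be sketched in miniature. This is a hypothetical illustration only, not Broussard's actual software or data: the school names and counts below are invented, and the real analysis also had to match each school's inventory against the specific titles required by the state tests.

```python
# Hypothetical sketch: does each school have enough copies of one required book?
# All names and counts are invented for illustration.
schools = [
    # (school name, students enrolled, copies of the required book on hand)
    ("School A", 600, 450),
    ("School B", 350, 400),
    ("School C", 800, 520),
]

def shortfall(enrolled: int, copies: int) -> int:
    """Copies still needed so every student has one (0 if there are enough)."""
    return max(0, enrolled - copies)

for name, enrolled, copies in schools:
    missing = shortfall(enrolled, copies)
    status = "enough books" if missing == 0 else f"short {missing} copies"
    print(f"{name}: {status}")
```

Multiplied across dozens of required titles, grade levels, and hundreds of schools, with inventory data scattered and incomplete, this is the arithmetic that becomes too hideously complicated to do in your head, which is why purpose-built software was needed at all.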
So in 2007 I first rode in a driverless car, and it almost killed me. And that was in an empty parking lot with no traffic. The technology has come a long way since then, but it has not come as far as the marketers would like you to believe. Next time you hear somebody say that driverless cars are five years away, think about how many times you've heard that, and how long people have been saying that they're five years away. Because they've been saying it since at least the early 90s. And so they are not actually five years away. They do not work as well as you imagine. And, in fact, one of the things that I've been increasingly worried about with driverless cars comes from something you allude to in your book, which is the inability of systems to see darker skin. So image recognition systems, facial recognition systems, object detection systems: first of all, they don't see the way that a human being does. The other thing is they don't detect people with darker skin as well as they detect people with lighter skin. And so the systems that are embedded in these two-ton killing machines are not detecting people with darker skin. And I would pose the question: who exactly do we think is going to be killed by self-driving cars on the street? And so you go into this a little bit when you talk about the racist soap dispenser. Are you all familiar with the viral online video of the racist soap dispenser? There is a man with light skin who gets soap, and a man with darker skin who puts his hand under the soap dispenser and it doesn't work. Can you tell us about how this artifact functions in your work? Sure. One of the things I try to do in that section is bring together something that seems kind of trivial. In fact, the people displaying this on YouTube are giggling through the clip, if you recall. It seems funny, entertaining, not that consequential when you just read it as a glitch of one particular technology.
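The disparity described here is typically measured by auditing a detector's error rates separately for each group. The sketch below shows the arithmetic of such an audit; the counts are invented for illustration and are not measurements of any real vehicle or vision system.

```python
# Hypothetical audit: compare a pedestrian detector's miss rate by group.
# The counts are invented for illustration, not drawn from a real system.
results = {
    # group: (pedestrians actually present, pedestrians the system detected)
    "lighter_skin": (1000, 950),
    "darker_skin": (1000, 880),
}

def miss_rate(present: int, detected: int) -> float:
    """Fraction of real pedestrians the detector failed to flag."""
    return (present - detected) / present

for group, (present, detected) in results.items():
    print(f"{group}: miss rate = {miss_rate(present, detected):.1%}")
```

In a safety-critical system, a gap between these per-group miss rates means the risk of going undetected, and therefore of being struck, is not distributed evenly. That audit logic is what underlies the question of who is going to be killed by self-driving cars.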
But we also have to think about how we got to that point: what are the larger structural aspects of the research and development process that would make it such that that would happen? And those same dynamics are also operating behind much more consequential kinds of technology that are making life and death decisions about people. Whether it's the risk scores we heard about earlier, in terms of whether to parole someone or not, or decisions in health care, in education, or whether someone should get a loan that they really need. So the same types of questions about how algorithms, or in this case automated systems, are developed, and who they are seeing or not, are a set of questions that I think we can apply beyond any particular type of technology. At the same time, I do think making distinctions is important, so I'm juggling the attempt to make important distinctions. Rather than just talk about racist robots, which is a great headline, and which is how I sort of became interested in this, the headlines about, oh, the automated systems are racist, I want to think about how they are racist. What do we mean by racism? We have a spectrum of technologies. At one end are technologies explicitly attempting to reinforce hierarchies; they are not hiding it. So we have to make a distinction on that end of the spectrum: technologies that are designed to reinforce, and are reinforcing, longstanding racial caste, class and gender hierarchies. And we have to think about them in relation to, and this is what interests me most, technologies that are being developed to bypass bias, sold to us as actual fixes for human biases. And so thinking about those as a kind of technological benevolence. Tech fixes that are saying, you know, we humans are messed up, the judges, the prosecutors, the police, the teachers are racist; here is an app for that, an algorithm for that. We need more attention on those things that are purportedly here to do good, to help us address inequity through a particular technology.
We have to keep an eye on the obviously racist robots, right? But I think we need more attention on the promised do-gooding of so much technology, and the way that we turn to it in lieu of more transformative processes that I think we should be investing both imagination and other economic resources into. And so this system, the way that racism is embedded in everyday technologies, is, I think, what you are referring to when you call it the New Jim Code, which of course is taken from Michelle Alexander's notion of the New Jim Crow. Yes. Can you unpack for us what you mean by the New Jim Code and what's going on in these systems? Absolutely. This is my attempt to remind us about the histories that are embedded in technology. So I'm invoking this term that was first a kind of folk term, developed to name a broader and broader set of practices around white supremacy, that then got resuscitated through the book The New Jim Crow to describe how mass incarceration operates as a license to legally discriminate and reinforce caste hierarchies, extending slavery and Jim Crow into the present. For me, the current milieu is not simply the next step. It's not that we're over old-school segregation; we're not over mass incarceration. But it's thinking about how technoscientific fixes now create new forms of containment, and how this history is important to understanding the impact of the technologies. So it's a combination of coded inequities, often more subtle than past regimes of racial hierarchy. But the key to the New Jim Code, the real distinction, is that it's purportedly more objective and neutral than the prior regimes. And that's where the power and the problematic of this regime take place: we put our guard down because we're promised that it's more neutral and objective. And precisely when we put our guard down is when our antennae should go up; that, in some ways, is when to start thinking critically about this. And this came up a number of times in the previous panels.
I think there is a heightened consciousness around really being aware of anything that is promised to be more objective and neutral. And so what we see here is a whole set of practices under that umbrella. And the notion of objectivity and neutrality, and the idea that a machine could be free of bias, is something that I really grappled with a lot as I was writing my book. Because as a computer scientist, yeah, when you do computer science you are ultimately doing math. Because computers literally compute, which we kind of forget about. Really it's just a machine doing math, and ultimately all the fancy stuff we do with computers comes down to math. And so, yeah, when you do a math equation, it's just an equation. But one of the things that I think is so interesting, that has happened among the math and computer science community, is they got really carried away with the idea that math was superior. Yeah. So this is the basis for an idea that I call technochauvinism. Not quite technological determinism, but technochauvinism, which is the idea that technology is superior, that technical solutions are superior to human solutions. And it derives from the idea that math is superior to, say, the humanities or social sciences or any other form of intellectual inquiry. And then when we look at the people who for centuries have said, math is so superior, we as mathematicians don't have to worry about pesky social issues because our work is so highfalutin and important, they are so homogeneous, and they have policed the boundaries of the profession for centuries so that women and people of color, for example, are excluded. So it's, I mean, it's not necessarily that computer scientists are out there saying, I want to go make racist technology. Yeah. I don't actually know any computer scientists doing that, thank god. But it's the unconscious biases of a very homogeneous group of people that manifest that way.
And they think it's just math, just computing, and it's going to be superior to all the pesky social issues. And it's not true. One of the things that I get from wrestling with technochauvinism, as you describe it, but it's not limited to this, is that when the conversation about equity and justice is brought up, the default intervention is to look for a mathematical way to define fairness and equity. Can you say a little