The fire is being fought by means of cooling with water and foam. "We hope that more support will arrive soon, as well as the chemical materials needed to help us." On Sunday afternoon, firefighters put out the fire in one of the supertankers, but the blaze continued in a second tanker. Late Sunday night there was a huge explosion that could be seen 50 miles away in Havana, the capital. Ed Augustin, Al Jazeera, Matanzas.

You're watching Al Jazeera. Let's take you through some of the headlines now.

The Palestinian Islamic Jihad group tells Al Jazeera it will renew its offensive against Israel if it doesn't release detainees. A PIJ spokesman says Israel agreed to release two of the group's top commanders under a ceasefire deal brokered by Egypt. Israel, though, denies agreeing to such terms. Youmna ElSayed has the latest from Gaza.

People are trying to gradually go on with their lives here in Gaza, as the families mourn their loved ones, their beloved ones whom they have lost in the past three days. Also, I want to add that a ten-year-old child has just died of wounds sustained in the airstrikes of the past three days, bringing the total killed to 45, including 16 children. Until now, we have funerals in different areas. Islamic Jihad is holding a large march for the people who have been killed across the Strip in the past three days of airstrikes.

Israeli forces have demolished Palestinian homes during a raid in the northern occupied West Bank. Bulldozers accompanied by Israeli forces destroyed two houses in the village of Rummanah, near Jenin.

Russia says it is ready to facilitate a visit by monitors from the International Atomic Energy Agency to the Zaporizhzhia power plant. Both Ukraine and Russia have blamed each other for shelling the facility and causing damage.

A ship carrying 12,000 tonnes of corn from Ukraine has docked in Turkey. The ship left Chornomorsk three days ago.

On Counting the Cost: the widening mortgage boycott in China. As Colombia and Venezuela mend ties, businesses eye a revival. Plus, Russia wants to pull out of the International Space Station. Counting the Cost, on Al Jazeera.

The history of humanity has been marked by its technological advances: the discovery of fire two million years ago, the invention of the wheel in 3500 BC, all the way to the Industrial Revolution in the 18th century. Throughout the ages we have sought to make our lives easier, though many argue some of those advancements have proven destructive. In modern times, our ambition for a better life has taken us to the age of information technology, programming and artificial intelligence. AI gives machines the ability to do more. They can think for themselves, know our preferences and behaviours, communicate with us, suggest alternatives, and even do things only humans once did. AI has slowly become an essential part of our lives. Its use in social media has brought us closer to our families and friends, and it has proven valuable at home and at work. But some say there is another, more sinister side to artificial intelligence.

Ethiopian-American computer scientist Timnit Gebru has been one of the most critical voices against the unethical use of AI. She has been vocal about issues around bias, inclusion, diversity, fairness and responsibility within the digital space. Google asked her to co-lead its unit focused on ethical artificial intelligence.
But six weeks later, the tech giant fired her after she criticized the company's lucrative AI work. Gebru, considered one of the 100 most influential people of 2022 by Time magazine, now runs an independent research institute focused on the harms of AI on marginalized groups. So who is behind artificial intelligence technology, and whose interests does it serve? And with such a big influence on our lives, how democratic is its use? Computer scientist Timnit Gebru talks to Al Jazeera.

Timnit Gebru, thank you for talking to Al Jazeera. So, to start with, let's start right at the beginning and just set the scene a little bit for people who might not think about AI in everyday life. As the technology stands right now, how much are we using AI in day-to-day life? How embedded is it right now for most people?

I don't blame the public for being confused about what it is. I think that many researchers like myself, who have gotten our PhDs in AI, who have been studying AI, are also confused, because the conception of AI that we now see in pop culture is, in my view, what really, really shapes public opinion about what it is. And so it kind of makes me realize that pop culture has this huge power to shape people's thinking, right? So I think when people think of AI, they are thinking about Terminator kind of things: these robots that are kind of human-like and are going to destroy the world, or are, you know, either good or bad, something like that. But that's really not what is being branded as, quote unquote, AI right now. Anything you can think of that has any sort of data processed and makes any sort of predictions about people, what Shoshana Zuboff calls surveillance capitalism, is based on what is currently being branded as AI. Any sort of chatbot that you use, for instance, whether it is Alexa or Siri (I guess these are voice assistants), or the chatbots that a lot of companies use because they want to hire fewer call centre operators, or things like that; there could be some sort of machine learning behind it. There is a lot of surveillance in day-to-day life, whether it is face recognition or other kinds of tracking that go on, and that has some sort of AI in it. There are recommendation engines that we might not even know exist when we're watching, you know, videos on TikTok or something like that, or the targeted advertisements that we get, or music selections that try to predict what kind of music we want to listen to next, based on what we listened to before. So it's a very broad kind of branding. And it wasn't always the case, but I think that, you know, there's always the language du jour that people use in order to kind of sell more products or hype up their products, in my opinion. So that is currently, in my view, what is being branded as artificial intelligence.

That's really interesting, because I guess when you think about using, like, even face recognition, or getting a playlist recommended to you, as you say, I mean, I don't think about that being AI; I'm just, like, opening my phone. Is that something, you know, people are thinking about as they use it, or is this just, I guess, going under the radar as just the future, or what it means to use technology?

It's very interesting, because in my opinion there is this deliberate rebranding of artificial intelligence that's going on, so that people are confused by the capabilities of the systems that are being billed as, quote unquote, artificial intelligence, right?
So, for instance, we even see these papers that say that computers can now identify skin cancer with superhuman performance, that they're better than humans at doing this, which is really not true, right? So scientists themselves are engaging in this kind of hype, and corporations themselves are engaging in this kind of hype. And what that does is, instead of people thinking about a product that is created by human beings, whether they're in corporations or government agencies or military agencies like defense contractors creating autonomous weapons and drones, right, instead of thinking about people creating artifacts that we are then using, we think about, quote unquote, AI as this, you know, being that has its own agency. So what we do then is, we don't ascribe the issues that we see to the people or corporations that are creating harmful products. We start derailing the conversation, talking about whether you can create a moral being, or whether you can impart your values into AI, or whatever, because now we are kind of ascribing this responsibility away from the creators of these artifacts, these machines, right, to some sort of, you know, being that we are telling people has its own agency.

OK. So what got you into your line of work, the ethics of artificial intelligence? Because it hasn't always been an easy path, it seems.

Initially, I was just interested in the technical details. Face recognition is something that is done under the computer vision umbrella, the kind of thing that tries to make sense of images. That seemed really cool, that you could infer certain things based on videos and images, and that was what I was interested in. However, there was a confluence of a number of things. So, first of all, when I went to graduate school at Stanford, I saw this stark lack of any Black person, from anywhere in the world, in graduate school, and especially with respect to artificial intelligence, developing or researching these systems. When I was at Stanford, by then, I heard that they had literally only graduated one Black person with a PhD in computer science ever, since the inception of the computer science department. And you can imagine the type of influence that this school has had on the world, right? You can imagine the kinds of companies that came out of there, including Google. So that was just such a shock to me. I saw not only the lack of Black people in the field, but also the lack of understanding of kind of what we go through, and what systems of discrimination we go through, in the US and globally, really.

Around the same time, I also started reading about systems that were being sold to the public and being used in very, very kind of scary ways. So, for instance, there was a ProPublica article that showed that there was a company that purported to determine the likelihood of people committing a crime again, and, unsurprisingly, of course, it was heavily biased against Black people. At the same time, you know, I see the kinds of drones and systems purporting to determine whether somebody is a terrorist or not, et cetera. And my own life experience told me, you know, who would be likely to be harmed by those systems, and who would be most likely to be developing those kinds of systems, right? So that was the time when I started pivoting from purely studying how to develop these systems, and doing research on the technical aspects of the field,
to being very worried about the way in which these systems are being developed and who they are negatively impacting. Learning about the existence of an algorithm, a model that purports to predict someone's likelihood of committing a crime again, was such a huge shock for me, and by then it had existed for a long time. And in addition to that, you know, this is a system judges used for sentencing, for setting bail, along with other inputs. And there are other systems, other predictive policing systems. One example of a predictive policing system was something called PredPol, which the LA police, the LAPD, were using, and, thanks to a lot of activism from groups like Stop LAPD Spying, this software stopped being used by the LAPD. Actually, people in my field, statistician Kristian Lum and computer scientist William Isaac, did a study that reverse-engineered PredPol and showed that, unsurprisingly, it pinpoints neighbourhoods with Black and brown people and says that these neighbourhoods are hot spots, right? Drug use is one example: if you look at the National Survey on Drug Use and Health, drug use is pretty evenly distributed in, for instance, Oakland, right? But police software like PredPol would instead pinpoint Black and brown neighbourhoods, saying that these are hot spots. And why is that? Well, in light of the history and the current reality of the US, we're not surprised by that, because these systems are fed training data that are labelled, and the training data does not depend on who commits a crime. It depends on who was arrested for committing a crime. And obviously that's going to be biased.

I want to come back to the issues around the data that you put into AI, and what the results are that you get, in a minute. But let's go back to when you were hired at Google. What was it that you were hired to do?

I was hired to do the kind of work that I'm talking about, with respect to analyzing the negative societal impacts of AI and working on all aspects of mitigating that, whether technical or non-technical, or documentation. So I was a research scientist with, you know, the freedom to set my own research agenda, and I was also co-lead of the Ethical AI team with my former co-lead, Margaret Mitchell. And so our job there was also to create that agenda for our small research team, which was, again, focused on minimizing the negative societal impacts of AI.

And as you say, there's a lack of diversity in the industry; you've known that since you got into this. So what was the reality, then, of going into this mega, huge company as a woman of colour, trying to do that job?

It was incredibly difficult from day one. I faced a lot of discrimination, whether sexism or racism, from day one, and trying to raise the issues was exhausting. You know, my colleague Meg Mitchell and I were just so exhausted. And research, basically, you know, working on our papers and discussing ideas, felt like such a luxury, because we were just always so exhausted by all the other issues that we were dealing with.

You eventually put out a paper which led to your being dismissed, though Google says you resigned; but putting that to the side, this paper looks at the biases being built into AI machines, basically reflecting the mistakes that humanity has made. Is perpetuating history a foregone conclusion when we talk about AI, or is there another path?
I always believe that we have to believe that there is another path. And this comes back to the way in which we discuss AI as just being its own thing, rather than an artifact, a tool, that is created by human beings, whether in corporations or educational facilities or other institutions, right? So we have to know that we control what we build, and who we build it for, and what it's used for. So there is definitely another path. But for that other path to exist, we have to uncover the issues with the current path that we're on, and remedy them, and also invest, in terms of research, in those other paths. So, for instance, the paper that I put out, called "On the Dangers of Stochastic Parrots", talks about this huge race that is going on right now in developing what are called large language models. And these models are trained on massive amounts of data scraped from the internet, right? And so you and I are not getting paid for the content that we put on the internet that is being scraped to train these models.

Just to make it really simple, I mean, something that I hadn't even considered: what you're talking about is, you know, giving AI all of the information of the internet, and of course it's going to, you know, spew out some of the worst parts of the internet, which are, you know, often predominant. But if we give it a smaller dataset, or if we curate the data, then we're going to get something that might be more helpful for people. Is that kind of putting it too simply?

No, actually, that is one of, you know, we discussed so many different kinds of issues in our paper, and one of the issues we discussed is exactly what you mentioned, in terms of curating data, and using, you know, large uncurated data from the internet with the assumption that size gives you diversity, right? And what we say is that size does not give you diversity, and we detail so many ways in which that's not the case. And one of the suggestions that we make is to make sure that we curate our data and we understand our data. And if we believe that the dataset that we're using to train these models is too large, too daunting, too overwhelming for us to understand it, document it and curate it, then that means we shouldn't be using that data, right? And so this is kind of one of the things that we're talking about.

Another thing that I thought was really fascinating, that I guess we don't consider in our daily lives, is that, at a macro level, the funding for a lot of the technological advances that trickle down to us begins either with the military or with these massive tech giants that, you know... can they have our best interests at heart?

This is precisely what I talk about, too, with the founding of our new institute, the Distributed AI Research Institute, right? A lot of, when you look at history, whether it is things like machine translation or self-driving cars, right; self-driving cars are a good example. They were very much funded by DARPA, a defense funding agency, right? So it's not because they're interested in, say, accessibility for disabled people, right? They're interested primarily in autonomous warfare. So how can we assume that something that starts with that goal and that funding in sight will end up being something different? And so I often give this example, you know, when people talk about AI for social good, right? They talk about kind of reorienting some of these te