
ALJAZ Studio July 4, 2024

Comparing Israel's mass killing of Palestinians in Gaza to the Holocaust, Brazil's President Lula drew an immediate reaction from Israel's prime minister. Benjamin Netanyahu said President Lula had disgraced the memory of the six million Jews murdered by the Nazis and demonized the Jewish state like the most virulent antisemite, and that he should be ashamed of himself. On Monday, Israel's foreign minister summoned Brazil's ambassador to the country's Holocaust memorial for a public, pointed dressing-down, culminating in this: "I ask you to convey to President Lula that we will not forget, nor forgive. I declare him, in my name and in the name of the citizens of Israel, persona non grata, as long as he does not apologize and take it back." President Lula is not backing down: Brazil has recalled its ambassador to Israel for consultations and summoned the Israeli ambassador to demand an explanation as this diplomatic dispute continues. John Holman, Al Jazeera.

The widow of former Haitian president Jovenel Moïse is one of 50 people charged in connection with his assassination. Martine Moïse is accused of conspiring with former prime minister Claude Joseph to kill her husband; the former chief of police has also been indicted. Moïse was killed at his home in the capital, Port-au-Prince, in 2021.

You can find much more information on our website: that's aljazeera.com. The Studio B: Unscripted series is coming up next. I will be back at the top of the hour with more news. Do stay with us.

President Biden says he wants a two-state solution for Palestinians and Israelis, but does anybody believe it's doable? What does this mean for US foreign policy, and what are the long-term consequences for the region and the world? A quizzical look at US politics: The Bottom Line.

Artificial intelligence is already transforming our societies and economies, creating jobs and growth. But are we ready for the dark side: how it is consolidating power, weakening nation states, corrupting our information ecosystem, and destroying democracy?

My name is Maria Ressa, and in this episode of Studio B: Unscripted on artificial intelligence we will be hearing from two inspiring women, courageous whistleblowers in their own right, working to make AI fairer, safer, and more responsible. Camille François is a researcher experienced in combating disinformation and digital harms; today she is helping lead French President Macron's landmark initiative on AI and democracy. Meredith Whittaker blew the whistle from inside the industry about AI's largely unchecked power and led the Google walkout in 2018. Now she is president of Signal, the secure messaging app. So how do we protect ourselves from mass disinformation and distinguish between what is real and what is fake online? How is AI embedding surveillance into our lives and shaping what we discover? And how do we make Big Tech accountable?

Meredith, I'm so happy to be here with you tonight. So you're the president of Signal, my favorite way to send messages these days, and you're a scholar in your own right: you co-founded a research organization called AI Now, which you continue to advise. And I actually had the great pleasure of sharing an office with you when we were both colleagues at Google. Yes, was that ten years ago? Back then you were already interested in machine learning and its impact on society.
Yeah, I mean, I remember those formative conversations, talking with you, talking with that sort of concerned whisper network about what was going on. Why is this, at that point, still unproven technology being infiltrated into so many products and services at Google? Why is everyone being incentivized to develop machine learning? What actually is this, and why are we entrusting such significant determinations about our lives and our institutions to systems trained on data that we know to be faulty, that we know to be biased, that we know not to reflect the context or the criteria in which these systems are used? So that was the beginning of what I think at the time we called the machine-learning fairness conversations across the company.

Yes, and that was, I think, around 2014, so about ten years ago. I think that is an important date, because we can zoom out on the conversation on artificial intelligence, which sort of touches everything and nothing at once in our current context, and recognize that artificial intelligence as a term is over 70 years old. So then we need to confront the question: why now? Why, in the last ten years, is it so hyped? Is it a pivotal moment for AI right now?

Well, we're certainly being told that, and there are certainly a lot of resources, a lot of attention, a lot of investment riding on this being a pivotal moment. But again, what happened in 2012? In 2012, a number of researchers showed that techniques developed in the late 1980s could do new things when they were matched with huge amounts of data and huge amounts of computational power. And why is that important? Huge amounts of data and huge server infrastructures, massive computers running newer and more powerful chips, are resources concentrated in the hands of a handful of large companies, based largely in the US and China, and they are the product of the surveillance-advertising business model that was shaped in the nineties and grew through the 2010s. Then, in 2012, there was a recognition that we can do more than sell advertisers access to demographic profiles for personalized advertising on your Gmail, on your Facebook, wherever you encounter it: we can use these same resources to train AI models, to infiltrate new markets, to make claims about intelligence and computational sophistication that give us more power over your lives and institutions. So I think we really need to take a political-economic view: look at the business models, at who wins and who loses, and then look at who is telling us this is a pivotal moment.

Right. That's interesting, because our current moment is also being framed as a rupture, the moment that produced the generation of technology we call generative AI. And I think what you're saying is that it's not particularly helpful to see this as a rupture; it's more helpful to see the continuity of the development of AI.

Well, it's helpful to investors who are recovering from losses on the metaverse, who are recovering from losses on Web3, to see this as a transformative moment, right? There's a lot riding on this. But that doesn't necessarily make it true.
You can make a lot of money from people believing it's true long enough that you get an IPO or an acquisition. It doesn't need to actually be true. But, right, we didn't start talking about this in 2017. We hadn't been inundated with claims about AI mimicking or surpassing humans in the way we are now; ChatGPT wasn't everywhere. When did we start talking about this?

I started playing with GPT-2, I would say, around 2018, but I have always been ahead of the curve. I remember playing with those models and trying to think about what it means for our society if everybody starts having access to the means to create synthetic text. At that time there were not a lot of uses of these generative AI models for text. And so I was playing with this idea that after deepfakes, we were going to have "readfakes": a flood of synthetic text that was going to take over all of these online spaces. I thought it would be fun to have the AI write an op-ed on the consequences of readfakes, which I did. It generated a metaphor I thought was really interesting: synthetic text was going to be the grey goo of the internet, suddenly creeping in everywhere, a sort of science-fiction idea that it was going to really ruin the internet as we knew it. I think that was 2019.

Yeah, now we're getting self-awareness, something along those lines. Of course, you and some practitioners, people who were steeped in the field, had been playing with these techniques for a while. But it wasn't until Microsoft started spending millions of dollars a month to stand up the infrastructure used to create and deploy ChatGPT that we started talking about it. And we need to recognize that it costs hundreds of thousands of dollars a day to run this; it is extremely computationally expensive. So these are not technologies of the many. We are the subjects of AI; we are not the users of AI, in most cases.

I think this is also why it has, for some of us, felt like a pivotal moment. Back when these were still research projects, or conversations between practitioners, we had the luxury of asking ourselves: what does it mean for disinformation, for instance, that now everybody can produce synthetic text? What does it mean when we know there are biases and stereotypes embedded in these machines? How would we go about thinking through the impact on society? That changed in scale, in terms of the urgency of those questions, when suddenly everybody had access to these technologies and they were being deployed really quickly across society.

There has been a community of scholars who preceded a lot of these "advancements", or Microsoft's decision to deploy a text generator with no connection to truth onto the public. Those decisions were not made because they were reviewing the scholarship, or reviewing your work, Camille, or recognizing the social consequences. Those decisions were made because every quarter they need to report to their board positive predictions or results around profit and growth.
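Camille's worry about everybody gaining the means to create synthetic text is easy to make concrete. Below is a minimal sketch, assuming the openly released GPT-2 weights and Hugging Face's transformers library; the tooling, prompt, and parameters are illustrative choices of ours, not anything named in the conversation:

```python
# Minimal sketch: mass-producing fluent "readfake" text with openly
# available GPT-2 weights. Model, prompt, and settings are illustrative.
from transformers import pipeline, set_seed

set_seed(42)  # fix the sampling seed so the demo is reproducible
generator = pipeline("text-generation", model="gpt2")

# One short prompt yields several fluent-looking continuations cheaply.
outputs = generator(
    "Scientists announced today that",
    max_new_tokens=50,        # cap the length of each continuation
    do_sample=True,           # sample rather than decode greedily
    num_return_sequences=3,   # produce multiple variants per prompt
)
for out in outputs:
    print(out["generated_text"])
    print("---")
```

The asymmetry the sketch illustrates is the one Meredith points to: generating plausible text is now nearly free for anyone, while building and serving the largest models remains the province of a handful of companies.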
And so we have these powerful technologies ultimately controlled by a handful of companies that will always put those objective functions, to use the machine-learning term, first. How do we leverage power against that?

Well, there's a classic answer to that, and that is workers banding together to use their collective leverage to push their employers.

I love it; I find it very fitting here.

Yeah, well, I was almost not here because of the Eurostar strikes, so I hope they win.

Yes. Let's talk a little bit about the harms you mention, which a lot of people have been talking about and which have been documented. Can you explain, for instance, what we mean when we say there is coded bias in these machine-learning systems?

This bigger-is-better paradigm, which we've been in since 2012, relies on data collected and created about our present and our past. In the context of text generation, this data comes from things like 4chan, Reddit, YouTube comments, Wikipedia. And of course that data reflects our scarred past and present, which is discriminatory, which has been and is racist and misogynist, which sees different people as deserving different treatment. So of course that data is going to be reflected in the outputs of the system. The danger here is that we buy the hype, and we say this is the product of a sentient, intelligent machine that is giving us objective truth: that is just where that person fits in society, they have the genes of a janitor, right? So when we see the rise of this eugenicist thinking, when we see this blind faith in machines, we really need to recognize what exactly is being naturalized.

And you're right, of course, to point out how many of these stereotypes are inherited from training data taken from online sources that are not diverse. This has been an issue with machine learning for a long time, no matter whether it's text or facial analysis; before models were trained on these large online text sources, image recognition already had these issues of coded biases spitting out stereotypes. When we talk about generative AI, one of the things I find really important to highlight, and I'm an optimist, I know it, is that sometimes folks have had too much faith in the idea that with bigger models and more data we were heading in generally the right direction, slowly getting better at tackling these biases and discriminatory impacts. We now have recent research saying that this is not what happens. Abeba Birhane and two of her colleagues just published a wonderful paper showing that when you get to bigger models trained on more data, those racial biases and stereotypes get worse; you get more of them. So you can see that we're losing the race, underinvesting in how we tackle, mitigate, and understand the sociotechnical impacts of these technologies, because we're scaling them too fast. We're not catching up with the problems we know exist. More of the problem doesn't solve the problem.

That's right.
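The coded bias described here can be probed directly. As a minimal sketch, assuming a masked language model trained on web text (bert-base-uncased here; the model and prompts are our illustrative choices, not systems discussed on stage), one can compare a model's completions for otherwise identical sentences:

```python
# Minimal sketch: probing "coded bias" by comparing what a masked
# language model predicts for two otherwise identical sentences.
# Model choice and prompts are illustrative assumptions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for subject in ("The man", "The woman"):
    # Ask the model for its top five guesses for the masked occupation.
    predictions = fill(f"{subject} worked as a [MASK].", top_k=5)
    print(subject, "->", [p["token_str"] for p in predictions])
```

Probes like this typically surface skewed occupational associations by gender, an artifact of the training data rather than any "objective truth" about people, which is exactly the naturalization warned against above.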
So I think, you know, I never understood the basis for the assertion that, well, we have a little bit of trash and that makes a trashy model, but let's pour a bunch more trash in there, because that's going to screen it out? There's a kind of magical thinking, and, I think, a real, almost emotional desire among a lot of the true believers to avoid the fact that maybe some of these problems are intractable. Maybe we can't create a dataset that is unbiased, because the data always reflects the perspective of its creators, and that is always biased, right?

How do we change this paradigm? And how do we make sure that the people working on making technology safer, fairer, more responsible have their efforts accelerated and their voices centered in the way we talk about AI?

I mean, I don't think that's a technical problem. That is a problem of the incentives driving the tech industry, which are not social benefit. You know, you and I annoyed these people a lot. It was not always appreciated, and I always loved your willingness to ask those questions anyway. But I was pushed out of Google for asking these questions, right? For loudly asking those questions, for organizing around these questions. So there is a point at which, when you're talking billions of dollars versus a livable future, we have a system that is choosing billions of dollars repeatedly, repeatedly, repeatedly. And in the context of a system that is now giving this kind of authority, surveillance, and social-control capability to a handful of actors, I think that's an alarm worth raising pretty broadly.

Can regulation and governments play a role in re-establishing a little bit of balance in that system?

Yeah, of course, if they're willing. Labor organizing can help with that, social movements can help with that, regulation can help with that. But regulation is an empty signifier until we fill it with specifics.

All right, let's pause here and do a little Q&A, and then we'll talk about how to dismantle those business models and how we build the future we want.

Okay, my question is: you very briefly touched on this, but the systems we encounter every day in this digital era are, at their very basis, built on private interest. How can we really, truly create meaningful change?

I think fundamentally that's a question about capitalism, not a question about technology. So how do we change that hamster wheel, in which, in order to avail ourselves of the resources needed to survive, we do waged work, and our waged work contributes to structures we may not agree with? There are social movements: I participated in labor organizing after being a kind of in-house public intellectual and expert, thinking that that was a theory of change. I'm now a tech executive trying to do tech another way, a way that is not profit-driven; that's another theory of change. And the Industrial Workers of the World, a union in the US, had a slogan that stuck with me, something like: organize where you stand. So it's: what is your role? Who do you know? What can you do with the knowledge and context you have? And I think that's a question to ask yourself every day.
But it's not a question somebody else can answer.

I agree with that, and I will say I struggle with that question too, because in my fields of practice, security and trust and safety, the fires in the building are the things you have to attend to immediately. You also want to think about the longer term, but it's important sometimes to put out those fires, because it can be that your election is at risk, it can be that kids are at risk online, it can be that terrible suicidal harms are unfolding in front of your eyes. And one of the things I have been focused on recently is how we can make sure that, across the industry, the folks focused on putting out those fires are better equipped, so that…
