
ALJAZ Talk July 4, 2024

…had entered the water and would be in need of assistance that first evening and into the early hours of the next morning. It was a total of 17…

A memorial service has been held for the assassinated Ecuadorian presidential candidate Fernando Villavicencio. The anti-corruption campaigner was shot dead as he left a campaign event on Wednesday. The government has declared a state of emergency and insists the election will go ahead as planned in two weeks' time.

Hundreds of supporters of the coup leaders in Niger have gathered outside a French military base in the capital. They are angry because the West African bloc ECOWAS has placed a regional force on standby.

A potential environmental disaster has been averted off the coast of Yemen. The United Nations says it has successfully removed more than a million barrels of oil from a rusting supertanker. The vessel has been stranded off Yemen's Red Sea coast since 2015. The operation avoided a potential disaster that would have cost 20 billion dollars to clean up.

Ukraine's President Volodymyr Zelenskyy has fired officials responsible for military recruitment, dismissing regional office heads and accusing them of accepting bribes. He says 112 criminal cases have been opened.

A judge in New York has sent the founder of the failed FTX cryptocurrency exchange to jail, two months before his trial was due to begin. The judge revoked Sam Bankman-Fried's bail after prosecutors said he tried to influence witnesses. The 31-year-old has pleaded not guilty to charges that he defrauded investors when his businesses went bankrupt last year. Bankman-Fried was arrested in the Bahamas in December and has been under house arrest in California.

Those are the headlines. More news coming up here on Al Jazeera.

He is recognised as one of the fathers of artificial intelligence. Yoshua Bengio's work in the 1990s and 2000s contributed to the foundations of chatbots like OpenAI's ChatGPT and Google's Bard. Born in France in 1964, Bengio grew up in Canada, where he began programming at 11, inspired by science-fiction literature and TV shows. Today, Bengio is one of the biggest voices warning the world about the necessity of controls and regulations on the technology. More than fearing machines turning evil, Bengio is concerned about how humans might use artificial intelligence to harm others. In March, he and other prominent AI scientists signed a statement urging tech companies to pause AI development until the industry can agree on regulations. Alongside leading AI researchers, including Stuart Russell, he testified before a US congressional hearing, warning that the frantic pace of development, in the wrong hands, could be used to create biological weapons. So is AI an existential threat to humanity, or is it a tool that will transform our lives for the better? Computer scientist Yoshua Bengio talks to Al Jazeera.

Why does the whole world seem to be talking about AI right now? What has changed?

I had been reading about the potential dangers of losing control of AI for the last decade, but I didn't pay much attention because I thought it was so far into the future, decades or even centuries away, because the systems we have been building in academia are really, really stupid.

I discovered the idea of neural networks,
in other words, research in artificial intelligence that is inspired by the brain, when I was looking for a subject for my graduate studies in 1985, and it was a passion right away. I thought this was really something I wanted to do. I was excited by the idea that there could be a few simple principles, like the laws of physics, that could help us understand intelligence and build intelligent machines. So I was interested in how the brain works as well as in AI. And in those days, the idea that intelligence could be understood by a few mathematical principles was very marginal. But the last few decades have given us a lot of evidence that it is probably true, which means it is easier than people thought in those days to build intelligent machines and to understand the brain.

And what has happened in the last few years, because of the billions of dollars that industry is putting into deep learning, is that we have collectively discovered that the larger we make those systems, the better they are. It doesn't stop getting better. And more recently, with generative AI, both for images and for text, we have passed a kind of threshold. It is not just that it gets better; now it is about as good as us at painting pictures and understanding language, to an extent that fools at least most of us most of the time. That is extraordinary. You know, in 1950 Alan Turing, whom you could call the father of computer science, designed this test: he was thinking that a machine would have human-level intelligence if we could interact with it and not be able to say whether it is a machine or a human being. Actually, we know that the current systems are still lacking some things, at least I and many others think so, but we might not be very far from that level of capability.

And the artificial intelligences that are being developed now, what is different about them?

If you look at systems like ChatGPT, scientifically speaking there does not seem to be a lot that is really new. It is really the scale at which these systems are built: how much data, trillions of words, a fraction of all the text on the internet that would take tens of thousands of lifetimes of human reading to get through, and the corresponding scale of the models, with huge numbers of knobs that can be set. That is huge. It is still smaller than the brain, but these knobs are much more precise than our neurons and synapses. So yes, it is scale; it is engineering at a scale we could not reach before. And the fact that it works so well suggests that there may be just a few elements, especially related to reasoning, that are still missing. It is hard to say when we will figure them out, but it could come quickly. I don't think society is ready for that.
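To put that scale claim in perspective, here is a rough back-of-envelope check of the "tens of thousands of lifetimes of reading" figure. The corpus size, reading speed, daily reading time and reading lifetime below are illustrative assumptions, not numbers given in the interview.

```python
# Rough order-of-magnitude check (illustrative assumptions only) of how many
# human reading lifetimes it would take to read a modern training corpus.
training_words = 1e13      # assumed corpus size: roughly ten trillion words
words_per_minute = 250     # assumed adult reading speed
hours_per_day = 1          # assumed time spent reading each day
reading_years = 70         # assumed reading lifetime

words_per_lifetime = words_per_minute * 60 * hours_per_day * 365 * reading_years
lifetimes = training_words / words_per_lifetime

print(f"words read in one lifetime: {words_per_lifetime:.2e}")
print(f"lifetimes needed for the corpus: {lifetimes:,.0f}")
```

With these assumptions the estimate comes out around 26,000 lifetimes; more generous reading habits or a smaller corpus bring it down to a few thousand, but either way the order of magnitude supports the point being made.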
In 2018, you jointly won the Turing Award. What was it for?

So the Turing prize was given to Geoff Hinton, Yann LeCun and myself, because of our contributions to the field of deep learning. We figured out how to train neural nets that would be able to represent richer things. I worked a lot on language, Yann worked a lot on images, and Geoff had some of the early ideas about how to train many layers of neural nets; I eventually found that there was a simpler way, using traditional methods with some little tricks. My group also found the method of introducing attention, which has turned out to be essential and has really transformed natural language systems, just like the ones we see today.

So would there be a ChatGPT without that work?

ChatGPT would not exist without the work we have done. And of course, you know, science is the contribution of many, many people. It is engineering, it is scale, but it is also algorithms that did not exist 20 years ago.
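For readers unfamiliar with the attention mechanism mentioned above, the sketch below is a minimal, NumPy-only illustration of content-based (additive) attention in an encoder-decoder setting, the broad idea behind that line of work. The dimensions, random weights and toy inputs are assumptions made for illustration; this is not the published architecture.

```python
# Minimal illustration of additive (content-based) attention: the decoder
# scores each encoder state, turns the scores into weights, and takes a
# weighted average as its "context" for the next prediction step.
import numpy as np

rng = np.random.default_rng(0)

T, d_enc, d_dec, d_att = 5, 8, 8, 16          # toy sizes: 5 source positions
encoder_states = rng.normal(size=(T, d_enc))  # one vector per source position
decoder_state = rng.normal(size=(d_dec,))     # current decoder ("query") state

# These would be learned parameters in a real model; random here.
W_enc = rng.normal(size=(d_att, d_enc))
W_dec = rng.normal(size=(d_att, d_dec))
v = rng.normal(size=(d_att,))

# 1. Score each encoder state against the decoder state with a tiny MLP.
scores = np.tanh(encoder_states @ W_enc.T + decoder_state @ W_dec.T) @ v

# 2. Softmax turns the scores into attention weights that sum to one.
weights = np.exp(scores - scores.max())
weights /= weights.sum()

# 3. The context vector is the weighted average of the encoder states:
#    the decoder "attends" mostly to the positions with the largest weights.
context = weights @ encoder_states

print("attention weights:", np.round(weights, 3))
print("context vector shape:", context.shape)
```

The design point is the weighted average in the last step: instead of squeezing an entire input sequence into one fixed vector, the model learns at every step which input positions to focus on, which is a large part of why it transformed natural language systems, as Bengio notes.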
So in May 2023, you and about 350 other people signed a statement warning about the dangers of AI. Why did you do that?

During the winter, playing with ChatGPT, I started worrying that maybe we're not that far away, and about what the consequences would be. I started reading more of the research being done to study whether we can build machines that we will not lose control of, machines that we can control fully, and the horrible realisation is that we don't know how to do that yet, after a decade of research trying to figure it out. And I don't think we have another decade before we reach a point where our machines are smarter than us. Maybe it is going to take more time, but at the rate at which we are progressing, I don't think so. So it is time to raise the alarm. You know, we should have done it before, but it is ChatGPT that made it obvious for everyone, including for scientists like me, and even for the people who have been in the middle of building these systems, that we are at a point where it is urgent for society to think through and be wise about how we prevent catastrophes. How is this going to be used, and for what purpose? How do we make sure it doesn't blow up in our face?

What kind of catastrophes are you thinking about?

I mean, there is already harm being done by AI, because these systems don't do what we intend. This is called the alignment problem. People have built systems that were intended to do well at, say, face recognition, but they didn't intend for them to behave badly on people of colour, and yet this is what is happening now. This misalignment between what the humans want and what the machine does can actually get worse as the machines gain more capabilities.

Let me give you an example from science fiction, from Space Odyssey, because HAL 9000 is smarter than the astronauts, at least on some levels. It has a mission, some military mission, and having such a goal means that it cannot allow itself to be turned off; it wants to preserve itself in order to achieve its mission. So when the astronauts think that maybe something is going wrong with the AI and think of turning it off, HAL kills one of them and wants to kill the others. As soon as the machine has self-preservation goals, and if those goals are sufficiently strong compared to its other goals, then there is a chance that what the machine wants and what we want are not going to be aligned. And so long as such a machine isn't smart enough, like every living being that wants to survive but is not smarter than us, we can cope. But if these machines are smarter than us, then we are potentially in danger. So this is the alignment problem.

We have it all around us: companies making contracts with each other, us building machines that don't do exactly what we want. But when the system becomes really powerful, this can turn really bad. Before we get there, though, there are other things that may be at least as worrisome, because they might happen before we even get to that point, which is humans intentionally using very powerful AI systems for nefarious reasons. The example that was raised at the US Senate, and that I have also been talking about, is bioweapons. Right now it takes a lot of expertise to design a new dangerous virus, so very few people in the world can do it, and the bad guys typically don't have that expertise. But you could interrogate ChatGPT, or maybe the next version, and get enough information that you would not otherwise have to design such dangerous pathogens. At least, this is the path we are on; it might be just a couple of years until this is possible. And the bad news is that we don't really know how to design these systems to make sure they are not going to output things that could be dangerous, things that could help bad actors: it could help them with bioweapons, it could help them with chemical weapons, it could help them with cyber attacks. We can think of many kinds of scenarios. And of course, one thing that scares some people even sooner is the next US election, or other elections around the world. It doesn't seem far-fetched that you take something like these large language models, the state of the art, and you just tune them a little bit; it doesn't take millions of dollars of expensive machines and compute. You can tune them a little bit for a task, to be a tool that is going to push the needle and change people's opinions just enough to win an election.

Is democracy safe, then, with AI?

Well, no. And it is not because of the AIs, it is because of humans, because once you give very powerful tools to humans, they will use them in all the possible bad ways. So long as the tools are not so powerful, there are other humans who can counter that: we have the police, we have the military, we have, you know, all kinds of ways to protect ourselves. But if those tools become so powerful that one could kill millions of people with them, like nuclear weapons... So what do we do with nuclear weapons? Do we make nuclear weapons available to everyone? No, we don't, obviously. And now we are at the point where we can see, in the coming years, that we will have something like nuclear weapons, except that it doesn't require nuclear material. It just requires software and hardware, and the software can be downloaded and the hardware can be bought.

When Robert Oppenheimer created the first atomic bomb, he famously said that he had become death, the destroyer of worlds. As a creator of AI, do you know how he felt?

I wasn't there, obviously, but I can imagine. He and the others, because it was a group, a community of people working together, really thought that they had to do it, because they were afraid that the Germans would build the bomb, and that would be worse.
So it was not an easy moral situation that they were in. One thing that is interesting, and maybe there are some parallels, is that a lot of scientists, AI scientists, are now themselves realising that we have something more dangerous than we expected in our hands, and that we have contributed to that. What it means for me, and I think that is what happened with Oppenheimer, is that we also have a responsibility to speak up and say: this is dangerous, we need to be very careful. We can't just, you know, carry on. There are useful applications, but this is something that requires wisdom, that requires a democratic discussion, so that we take the right decisions.

You have spent much of your career developing AI. Do you have any regrets about the direction it is going, or your role in it?

Well, regrets... Maybe if I had known how things would unfold, I would have started thinking about safety earlier. But I didn't anticipate the speed at which things have moved. So maybe regret is not the right word; it is more that I feel compelled to do something about it. I feel that people who understand how these things work have an extra duty to think about how we can preserve humanity and democracy. So that is what I want to do.

Let's talk about that, then. How do we prevent it? How do we put the right brakes on and get the right regulations in place?

First, we need to hurry, because, as I said, there is a chance that these systems could be misused as soon as the coming year or two, and then, as they get stronger, the danger grows. So what can we do? The most immediate danger, I think, is not loss of control; the most immediate dangers are malicious uses. So we want to reduce the number of people who have access to dangerous technology. Frankly, I don't think that the strategy of making your trained models and code available to everyone is wise; at least, there is a threshold where we should say this is too dangerous. And I made, like, little calculations that say something like: if we reduce the number of people who have access to these systems by, say, a thousand times, we win almost as much in terms of the probability of something bad happening. So the first thing we need to do is control access. We need to have licensing, so that the people who are licensed to operate these systems also have duties: documenting what they are doing, using tests to evaluate the potential harms as well as what they are doing to mitigate those harms, and potentially not being allowed to deploy something that could be dangerous.

So it's basically: don't release the code?

Well, that is the first thing, yes. But we also need to license the people and organisations that are going to be developing these things.

Can the big tech companies that are developing AI be the only ones overseeing it, or do we need governments' input?

No, it is clear that it needs government, and I think what is going on right now in the US, with the commitments that some companies have taken, is a good step. But it is only good to the extent that we don't have legislation yet. We need legislation for a number of reasons. We need legislation because it has to apply to every company. We need legislation to clarify where you put the bar, because right now it is just some nice words: "We'll be careful." No, we need some concrete benchmarks that say: this you can do, this you can put in open source, because open source is a good thing.
But this is too dangerous, this system could be exploited by a terrorist, and so, no, this is not acceptable. This has to be quantified, and it is going to take some effort. Those companies are already building those kinds of tests, so it is not as if we are starting from zero, but we need to make it the same for all the companies. And then, we can't stop at the US. It has to be something that we do for the whole world, especially, right now, the countries where there is capacity for building large AI systems, notably China and some of the European countries: the UK, France, Germany. But ultimately every country, because if there are regulations in a few countries that say you are not allowed to do that, you can just, you know, move the software and hardware and rebuild it in a different country. And then computer viruses, or biological viruses, don't see borders; they just go through. So ultimately we need international treaties, with the international community coming together and setting up minimal guardrails for safety for everyone.

And the other thing we need is research, because we don't even know for sure how to build powerful AI systems that are not going to become autonomous by themselves, or that cannot be exploited for nefarious purposes. We don't know how to do that. There are different proposals, different companies do different things, but really there are a lot of unknowns. And so what I asked the Senate is that we need to invest globally, whether through companies, governments or some mix, at least as much on protecting the public, democracy and humanity, in other words on safety, as we are investing in improving...
