
BBC News HARDtalk, July 13, 2024

I'll be back at one o'clock. Now on BBC News, it's HARDtalk.

Welcome to HARDtalk, I'm Stephen Sackur. What is the most serious existential threat facing humanity? Well, many of us might point to nuclear war or climate change, but some of the greatest minds in the tech sector are looking in a very different direction. Artificial intelligence, warned the physicist Stephen Hawking, could spell the end of the human race. My guest is Stuart Russell, a globally renowned computer scientist and adviser to the UK government and the UN. Right now, AI is being developed as a tool to enhance human capability. Is it fanciful to imagine the machines taking over?

Stuart Russell, welcome to HARDtalk. You have spent a career at the forefront of ideas on artificial intelligence. Can you give me a working definition?

Very straightforwardly, it means making machines intelligent. And what that means, traditionally, is making machines that act so as to achieve their objectives. That's more or less the same definition we apply to human intelligence, and it's more or less the same definition that could be applied to any machine: a car acts to achieve the objective of travelling down the road.

So is there something more about the intelligence that these machines you work with and on are providing? A car is designed to travel down the road, but that objective is our objective; the car doesn't know that's what it's for.

But these machines, in some sense, do. They have an explicit internal model of what the objective is, and they can figure out how to achieve it. So they might have the goal of winning a game of chess, and they can figure out how to do that and beat humans at it.

What about learning, though? Where does that fit?

These days, following a recommendation by Alan Turing, we build many of the AI systems not by programming them explicitly but by training them. We have general-purpose learning algorithms that take examples from the real world, and from those examples they extract general patterns, and those patterns allow them to make predictions about new cases. Speech recognition, for example, is not written by hand to recognise sounds. Instead we take hours and hours and hours of speech with a transcription, we feed that to a machine learning algorithm, and it is then able to recognise new instances of speech and transcribe them.

Learning, of course, is very different from thinking. Is the word "thinking" ever applicable?

It depends which philosopher you ask. Some ask: is the word "swimming" applicable to submarines? In English, no, we don't say that, but in other languages (in Russian, I think) they do use the word for swimming for submarines, and we use the word "fly" for aeroplanes, which was borrowed from birds. So are our machines actually thinking? I would argue yes, in a meaningful sense. If you look at what a chess programme is doing, it is imagining possible moves it could make and possible moves that the opponent could apply; it is plotting out these futures and choosing the future that turns out best.
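What Russell describes here, general-purpose algorithms that extract patterns from labelled examples and then predict new cases, can be sketched in a few lines of Python. This is a minimal illustration only: the one-nearest-neighbour rule and the toy "acoustic" feature vectors are invented for the example, not how production speech recognisers actually work.

```python
# Minimal sketch of learning from labelled examples: a 1-nearest-neighbour
# classifier. Real speech recognisers use far richer models, but the shape
# is the same: training pairs in, predictions on new cases out.
import math

def train(examples):
    """'Training' here is simply storing (feature_vector, label) pairs."""
    return list(examples)

def predict(model, x):
    """Label a new case by the closest stored example (Euclidean distance)."""
    nearest = min(model, key=lambda ex: math.dist(ex[0], x))
    return nearest[1]

# Toy data: made-up two-dimensional acoustic-like features with labels.
model = train([((0.1, 0.9), "yes"), ((0.2, 0.8), "yes"),
               ((0.9, 0.1), "no"),  ((0.8, 0.2), "no")])
print(predict(model, (0.15, 0.85)))  # -> "yes": a pattern generalised to a new case
```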
One more word before we get into the heart of your work, and that is autonomy, because it seems that autonomy is very important. Once machines are able to learn, and we can obviously argue until the cows come home about the thinking, but once they have that separateness from us as their masters, they have clearly crossed a very important line. Is autonomy at the heart of today's work on artificial intelligence?

Only in a very restricted sense, in that we can allow freedom to the machine to choose how to achieve the objective, but it doesn't choose the objective. So the standard model for how we build AI systems is that the machine is something that finds optimal solutions for problems that we specify. We say, "I want to win this chess game," and it figures out how to do it. So there is autonomy in the decision-making process but not in the objective. And in fact, we don't really even know how a system might come up with its own objectives from scratch.

Elon Musk, I don't know if you know him, he's out on the West Coast alongside you, said this: "I'm very close to the cutting edge of AI and it scares the hell out of me. It's capable of vastly more than almost anyone knows, and the rate of improvement is exponential. Digital superintelligence is a species-level risk." Do you agree?

I would have to say yes. I think he is a brilliant man. He uses colourful metaphors, he talks about summoning the demon and other things, and he gets some flak for that, but the point he is making is that intelligence is what gives us power in the world over other species. If we make something else that is more intelligent than us, and therefore more powerful than us, how do we make sure that it never has any power over us, that it remains under our control for ever, for eternity?

It's a power dynamic. It's about control. In a sense, most literally, it's about the off button.

Yes, that's one way of capturing the essence of the problem: can you switch it off? And interestingly, Alan Turing, who was the founder of computer science back in the late 1930s and wrote one of the seminal papers on AI in 1950, was quite matter-of-fact about this. He said that eventually we should have to expect the machines to take control. He was completely resigned, and he actually tried the idea out: perhaps we might be able to switch the power off at a strategic moment, but even so, as a species, we should be humbled. And unfortunately, you can't just switch the power off, any more than you can just play better moves against a superhuman chess programme. It is superhuman; you can't play better moves against it. And you can't outthink a superhuman machine with which you are in conflict.
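The standard model Russell outlines, a fixed human-specified objective with machine autonomy only over the means, is easy to caricature in code. The action set and scoring function below are invented for illustration; the point is that the machine optimises whatever objective it is handed, without questioning it.

```python
# Sketch of the standard model: the objective is specified by the human;
# the machine's autonomy lies only in searching for the plan that maximises it.
from itertools import product

ACTIONS = ["left", "right", "wait"]           # hypothetical action set

def objective(plan):
    """Human-specified score; the machine never questions this definition."""
    return plan.count("right") - plan.count("wait")

def best_plan(horizon):
    """Exhaustively search all action sequences and return the optimiser."""
    return max(product(ACTIONS, repeat=horizon), key=objective)

print(best_plan(3))  # -> ('right', 'right', 'right'): optimal for the stated
                     # goal, whether or not that goal was what we really wanted
```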
So if that is the track, once we go down it there does appear to be an air of inevitability about the surpassing of human intelligence by the machine. Which raises the question, and I guess it goes to the very heart of where we are at this stage of humanity's development: should we consider not going down this path of artificial intelligence at all? Is it even a possibility that we can say: we understand the potential, but we also see the profound level of potential risk and threat, and therefore we won't do it?

So that's a great question. There is a novel by Samuel Butler written in 1863 that answers that question. He describes a civilisation that developed its machine technology and then reached that fork in the road and took the other fork. They decided to abandon machine technology and they consigned the machines to the museum. They said, we can't take this risk, because we don't want to lose control. I'm not sure we really have that choice, because the upside of AI is so enormous. If we achieve human-level AI or superhuman AI, and we remain in control of it, then this could be a golden age for humanity, and the value is in the thousands of trillions of pounds.

It's very hard to stop the momentum, right, with hundreds of billions of dollars being invested every year. Except at the beginning of this interview you conceded that the risk is potentially existential. Therefore not only is the reward enormous, and you have just monetised it, but the risk is beyond any monetary value, isn't it? It's the future of our species.

And so this is the question: do we follow Alan Turing and say the loss of control is inevitable? I don't think we should do that. What I think we need to do is understand where the source of the risk is. This is not as if a superior alien civilisation just lands on Earth, and they are more intelligent and capable than we are, and we're toast. This is something we are designing, so we should ask: where does the problem come from? What is it about the way we design AI systems that leaves us liable to end up in a conflict in which we lose control?

Do you think we understand where that point is and how it works? Because I'm thinking that right now, and let's get to the nitty-gritty of what is happening in AI, we have AI being developed to the tune of tens or hundreds of billions of dollars across the world, both by corporate actors, the big tech companies at the forefront that we can all name, and by states as well, whether it be the US in terms of its Defence Department, or the Chinese and Russian and other governments doing it at a state level. Do you think those various actors understand precisely the dilemma that you have just laid out?

I can say for sure that the Chinese government does, because their official policy position acknowledges that AI is an existential risk to mankind if it is not developed correctly. So, to come back to the earlier question, where is the point where it goes wrong? What is the nature of the catastrophe? I believe it comes from the way we have been thinking about AI.
So we might call it the standard model for AI, which is that we build machines that take an objective that we specify, that come up with a plan to achieve it, and then off we go. The problem is something that we've known for thousands of years: we don't know how to specify these objectives correctly. So King Midas said, I want everything that I touch to turn to gold. And we know what happened to him, right? His water and his wine and his food and his family all turned to gold, and he dies in misery and starvation. As an engineering discipline, I would say it is bad engineering if the only way to get this to work correctly is to specify objectives perfectly, because that will never happen. Right? That's like saying you can only fly this aeroplane if you have seven arms and five brains, and if you don't, it's going to crash. That is a badly designed aeroplane for a human being to fly. And so the answer comes from this point: you can be pretty sure that any objective a human states is not the true underlying objective that they really have for the future. Therefore the machine needs to know that it doesn't know what the true objective is. What the human asks for is perhaps indicative evidence of what they might like, but it certainly is not complete. So if I ask you for a cup of coffee, that doesn't mean that's my only objective in life and everything else doesn't matter, as if I'm allowed to murder the people at Starbucks and crash the car on the way back with the coffee and so on. Of course that's not what I mean by the objective. There's all kinds of unstated stuff, some of which I couldn't even state if you tried to get me to state it.
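Russell's proposed fix, a machine that treats the stated request as evidence about an uncertain true objective rather than as the objective itself, can be caricatured as a small expected-value calculation. The probabilities and payoffs below are invented; this is the flavour of the argument, not an implementation of his assistance-game framework.

```python
# Toy model: the machine is unsure whether the stated goal ("fetch coffee at
# any cost") is really the whole objective. Acting under that uncertainty,
# checking with the human first can beat blind optimisation of the request.
P_LITERAL = 0.5          # assumed probability the request is the whole objective

PAYOFFS = {
    # action: (value if request was literal, value if it wasn't)
    "optimise_request_ruthlessly": (10, -100),   # the Midas outcome if we're wrong
    "ask_human_first":             (8,   8),     # small cost of checking
}

def expected_value(action):
    literal, not_literal = PAYOFFS[action]
    return P_LITERAL * literal + (1 - P_LITERAL) * not_literal

best = max(PAYOFFS, key=expected_value)
print(best)  # -> "ask_human_first": uncertainty about the objective makes
             # deference, not ruthless optimisation, the rational choice
```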
So far we have been fairly abstract, fascinating but fairly abstract. Let's just bring you down, if we can, briefly to the here and now. We know that, as you said, billions are being invested, and in particular spheres there is a concentration: defence spending, for example, and the search for what one might call autonomous weapons. We have seen the rise of the drone, and now, thanks to the Turkish government, we know there are swarms of drones which, according to Turkey, which has developed this capability, are capable of using facial recognition technology, deciding for themselves whether a person represents a legitimate target, opening fire or smashing into them with explosives if deemed to be the right target and, if not, coming home again, having finished the mission and decided there was no attack to be launched.

Right. That kind of weaponisation of AI sounds to many people extraordinarily scary. Are you scared?

Yes. In the near term that is a very, very serious risk. Two years ago we made a small film called Slaughterbots, which described something very like this Turkish weapon. And many governments derided this film as science fiction: this couldn't be possible for decades, and it's not even worth discussing a treaty to ban this kind of weapon; it's far off in the future. But already at that time the Turkish government and the arms manufacturer were in the process of designing and manufacturing pretty much precisely the thing that we showed in the movie. So we have been working very hard to bring a treaty about, but the way things work in diplomacy, if the United States, Russia and the United Kingdom are opposed to a treaty, it's very hard to move things forward.

So when you look across the piece, would you say it is in the militarisation of AI that you find most cause for concern?

We have to think about different timescales simultaneously. In the short term, to me, this is the most serious risk. And I think another big issue that's happening is the manipulation of opinions through social media, and part of that comes down to the way the automated algorithms work. It's the same issue: the objective of these automated algorithms is to maximise click-through, and the way they operate is in fact to manipulate people so that they become more predictable in their opinions and in the kinds of things that they will consume. That has had a dramatically negative effect on, in some sense, the entire world, and it was collateral damage of an algorithm operating to optimise the wrong objective. Click-through is a poorly designed objective because it doesn't take into account all of these externalities, these changes to people's opinions, which have such a negative effect globally.
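The click-through example is the same misspecification problem in miniature. A hypothetical sketch: score two invented recommendation policies first by raw clicks (the deployed objective) and then by clicks minus a penalty for the opinion shift they cause, the externality Russell says the real objective ignored. All numbers are made up to make the arithmetic visible.

```python
# Two hypothetical recommendation policies scored two ways: by raw clicks
# (the deployed objective) and by clicks minus an externality term for how
# much the policy shifts users' opinions (the objective we arguably wanted).
policies = {
    # name: (expected clicks, opinion shift caused)
    "show_diverse_content":  (100, 0.1),
    "radicalise_for_clicks": (130, 5.0),
}

def click_objective(p):
    clicks, _ = policies[p]
    return clicks

def corrected_objective(p, penalty=10.0):
    clicks, shift = policies[p]
    return clicks - penalty * shift          # charge for the externality

print(max(policies, key=click_objective))      # -> "radicalise_for_clicks"
print(max(policies, key=corrected_objective))  # -> "show_diverse_content"
```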
We see, as I've already said, states pouring money into AI, and we've referred to the weapons business, but there is also surveillance in China, where we know now that facial recognition technology is being married with mass surveillance to, in a sense, create a surveillance state, where your behaviours are tracked and then given credit scores and become part of a means of defining your place in society: a really overarching surveillance notion. And then we've got Russia which, and I'm going to quote Vladimir Putin here, appears to see an AI arms race, saying that artificial intelligence is the future not only for Russia but for all humankind, and whoever becomes the leader in this sphere will become the ruler of the world. Is that a mentality that you think is going to hasten us to this very dark place you talked about?

I think it's quite possible, yes. If countries feel that they are in this live-or-die competition with other nations to be the first to create superhuman AI, then they are going to cut corners. They are not going to wait for these more robust solutions that allow humans to retain control. They are going to go straight for pushing forward on the standard model in the hope that they get there first. In terms of the standard of living of your people, there is really no need to be saying we want to rule, we want leadership, we want ownership of this technology. And the large corporations have actually agreed, through an international organisation called the Partnership on AI, that whoever achieves superhuman AI will share it with everybody.

Do you really believe that? This gets down to the idea of whether AI can be, in any sense, regulated globally. We made an early sort of comparison with the potential threat of nuclear weapons. For years we have had the IAEA, the international watchdog in Vienna, which is supposed to monitor the development of nuclear facilities around the world. In the field of AI, is it possible to imagine that level of global cooperation and transparency, both at corporate and state level?

It's very interesting to look back at the history of nuclear technology. It's a potentially civilisation-ending technology, but it was also thought to be something that could create a golden age of unlimited energy and also peace. It's that same risk-reward dynamic. Even though there was a movement among scientists at the end of World War II to share the technology and put it in trust for humanity, instead it became used as a tool of domination, first by the US and then by the USSR obtaining the technology, and it became the nuclear arms race. So part of the problem is that they tried to put it in trust for humanity after the fact, when it was already owned by the United States. What we are trying to do now is to solve this problem before the fact, before the technology is created, and to do two things. One is to change the way we design AI so that we remain in control of AI systems...
