
BBC News HARDtalk, July 13, 2024

Welcome to HARDtalk, I'm Stephen Sackur. What is the most serious existential threat facing humanity? Well, many of us might point to nuclear war or climate change, but some of the greatest minds in the tech sector are looking in a very different direction. Artificial intelligence, warned the physicist Stephen Hawking, could spell the end of the human race. My guest tonight is Stuart Russell, a globally renowned computer scientist and sometime advisor to the UK government and UN. Right now AI is being developed as a tool to enhance human capability. Is it fanciful to imagine the machines taking over?

But these machines, in some sense, do. They have an explicit internal model of what their objective is, and they can figure out how to achieve it. So they might have the goal of winning a game of chess, and they can figure out how to do that and beat humans at it.

Learning. What about that word, learning? Where does that fit?

So these days, following actually a recommendation by Alan Turing in 1950, we build many of the AI systems not by programming them explicitly, but by training them. So we have general-purpose learning algorithms that take examples from the real world, and then from those examples they extract general patterns, and those patterns allow them to make predictions about new cases. Speech recognition, for example, is not trained by writing rules to recognise a "ph", an "s" and a "p". Instead we just take hours and hours and hours of speech, with a transcription, and then we feed that to a machine learning algorithm, and then it's able to recognise new instances of speech and transcribe them.

Learning, of course, is very different from thinking. Is the word "thinking" ever applicable in the world of artificial intelligence?

It depends which philosopher you ask. Some argue, you know, is the word "swimming" applicable to submarines? In English, no, we don't say that, but in other languages they do. I think in Russian they do use the word swim for submarines. And we use the word fly for aeroplanes, which we borrowed from birds. So are machines actually thinking? I would argue yes, in a meaningful sense. If you look at what a chess programme is doing, it is imagining possible moves that it could make and imagining possible moves that the opponent could apply. It is plotting out these futures and choosing the future that turns out best.

One more word before we get into the heart of your work, and that is autonomy, because it seems that autonomy is a very important word. We can obviously argue until the cows come home about whether machines are thinking, but once they are able to learn, once they have this separateness from us as their masters, they have clearly crossed a very important line. Is autonomy at the heart of today's work on artificial intelligence?

Only in a very restricted sense: we can allow freedom to the machine to choose how it achieves the objective, but it doesn't choose the objective. So the standard model for how we build AI systems is that the machine is something that finds optimal solutions for problems that we specify. We say, "I want you to win this chess game", and it figures out how to do it. So it has autonomy in the decision-making process, but not in the objective. And in fact, we don't really even know how a system might come up with its own objectives from scratch.
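What Russell describes here, imagining possible moves, plotting out futures and choosing the one that turns out best, is classic game-tree (minimax) search. The sketch below is illustrative only: to stay self-contained it plays trivial Nim (take one or two stones; whoever takes the last stone wins) rather than chess, but a chess engine applies the same loop to a vastly larger tree.

```python
# Exhaustive minimax: imagine every future, assume the opponent
# replies as well as possible, and pick the move whose future
# turns out best. Nim is used only to keep the example tiny.

def minimax(stones, maximizing):
    """Return (value, best_move) for the player about to move."""
    if stones == 0:
        # The previous player took the last stone and won.
        return (-1, None) if maximizing else (1, None)
    best_value, best_move = None, None
    for take in (1, 2):
        if take > stones:
            continue
        value, _ = minimax(stones - take, not maximizing)
        if best_value is None or \
           (value > best_value if maximizing else value < best_value):
            best_value, best_move = value, take
    return best_value, best_move

value, move = minimax(7, maximizing=True)
print(f"From 7 stones, take {move} (predicted outcome: {value:+d})")
```

Note that the machine has complete autonomy over how to win, but the objective itself, the win/lose scoring at the end of the game, is fixed by the programmer: exactly the "standard model" Russell describes.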
Elon Musk, who, I don't know if you know, is obviously out on the west coast alongside you, Elon Musk of so many different ventures, from electric cars to space exploration, said this recently. He said: "I'm very close to the cutting edge in AI and it scares the hell out of me. It's capable of vastly more than almost anyone knows, and the rate of improvement is exponential. Digital superintelligence is a species-level risk." Do you agree?

I would have to say yes. I think Elon is a brilliant man. He uses colourful metaphors; he talks about summoning the demon and other things, and he gets some flak for that. But the point he's making is the following: intelligence is what gives us power in the world, over other species. If we make something else that is more intelligent than us, and therefore more powerful than us, how do we make sure that it never has any power, that it remains under our control for ever, for eternity?

It's about that power dynamic. It's about control. And in a sense, most literally, it's about the off button.

Yes, that's one way of capturing the essence of this problem: can you switch it off? And interestingly, Alan Turing, who is the founder of computer science, back in the late 1930s, and who wrote one of the seminal papers on AI in 1950, was quite matter-of-fact about it. He said eventually we would have to expect the machines to take control. So, completely resigned. And he actually tried this idea out: perhaps we might be able to switch the power off at a strategic moment, but even so, as a species we would be humbled. And unfortunately, you can't just switch the power off, any more than you can just play better moves than a superhuman chess programme. It is superhuman; you can't play better moves than it. And you can't out-think a superhuman machine with which you are in contact.

So once we go down this track, there does appear to be an air of inevitability about the surpassing of human intelligence by the machine? Which raises a question that gets to the very heart of where we are at this stage of humanity's development: should we consider not going down this path of artificial intelligence? Is it even a possibility that we could say, we understand the potential, but we also see the profound level of potential risk and threat, and therefore we won't do it?

So that's a great question. There is a novel by Samuel Butler, written in 1863, that answers that question. He describes a civilisation that developed its machine technology and then reached that fork in the road and took the other fork. They decided to abandon machine technology, and they consigned the machines to the museum and said, we cannot take this risk, because we don't want to lose control. So I'm not sure we really have that choice, because the upside of AI is so enormous. If we achieve human-level AI or superhuman AI, and we remain in control of it, then this could be a golden age for humanity, and the value is in the thousands of trillions of pounds. So it's very hard to stop the momentum. Right now it's hundreds of billions of dollars being invested every year, all over the world.

Except that at the beginning of this interview you conceded that the risk is potentially existential. So not only is the reward enormous, and you have just monetised it, but the risk is beyond any monetary value. It is, indeed, the future of our species. And so this is the question: do we follow Alan Turing and say the loss of control is inevitable?

I don't think we should do that. I think what we need to do is understand: where is the source of the risk? This is not as if a superior alien civilisation just lands on earth, and they're more intelligent and capable than we are, and we're toast. This is something we are designing. So where does the problem come from? What is it about the way we design AI systems that leaves room for a conflict in which we lose control?
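Russell and colleagues have formalised the "can you switch it off?" exchange in academic work as an off-switch game. The toy calculation below is a sketch of that argument, not anything stated in the programme, and it assumes purely for illustration that the robot's belief about the human's utility U for its next action is a standard normal distribution: under uncertainty about the objective, deferring to the off switch never does worse in expectation than acting unilaterally.

```python
# Toy off-switch argument under an assumed standard-normal belief.
import random

random.seed(0)
belief = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # samples of U

# Act unilaterally: the robot receives U whatever it turns out to be.
act_anyway = sum(belief) / len(belief)                  # ~ E[U] = 0

# Defer: the human switches the robot off exactly when U < 0,
# so the robot receives max(U, 0) instead of U.
defer = sum(max(u, 0.0) for u in belief) / len(belief)  # ~ E[max(U, 0)] ≈ 0.4

print(f"act unilaterally:        {act_anyway:+.3f}")
print(f"defer to the off switch: {defer:+.3f}")
```

The machine's own expected-utility calculation favours leaving the off switch in human hands, but only because it is uncertain about the objective; a machine certain of its objective has no such incentive.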
Do you think we understand where that point is and how it works? Because I'm just thinking right now, and let's get to the nitty-gritty of what is happening in AI: we have AI being developed to the tune of tens and hundreds of billions of dollars across the world, both by corporate actors, you know, the big tech companies at the forefront that we can all name, and by states as well, whether it be the US in terms of the Defense Department, or the Chinese and Russian and other governments doing it at a state level. Do you think those various actors understand precisely the dilemma that you have just laid out?

So I can say for sure that the Chinese government does, because their official policy position acknowledges that AI is an existential risk to mankind if it's not developed correctly. So, to come back to the earlier question: where is the point where it goes wrong? What is the nature of the catastrophe? I believe it comes from the way we have been thinking about AI. We might call it the standard model for AI: we build machines that take an objective that we specify, that come up with a plan to achieve it, and then off they go. And the problem is something that we've known for thousands of years: we don't know how to specify these objectives correctly. So King Midas said, "I want everything I touch to turn to gold." And we know what happened to him, right? His water and his wine and his food and his family all turned to gold, and he died in misery and starvation. So as an engineering discipline, I would say it's just bad engineering if the only way you can get this to work correctly is to specify objectives perfectly, because that will never happen.

Right.

It would be like saying, OK, you can only fly this aeroplane if you have seven arms and five brains, and if you don't, well, it's going to crash. That's a badly designed aeroplane for a human being to fly. And so the answer comes exactly from this point: you can be pretty sure that any objective the human states is not the true underlying objective that they really have for the future. Therefore the machine needs to know that it doesn't know what the true objective is. What the human asks for is perhaps indicative evidence of what they might like, but it certainly is not complete. So if I ask you for a cup of coffee, that doesn't mean that's my only objective in life and everything else doesn't matter, as if I'm allowed to mow down people at Starbucks and crash the car on the way back with the coffee, and so on. Of course that's not what I mean by the objective. There's all kinds of unstated stuff, some of which I couldn't even state if you tried to get me to state it.
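The Midas and coffee examples compress into a few lines of code. All the numbers below are invented: "stated" scores the objective we actually wrote down ("fetch coffee quickly"), while "impact" stands for the irreversible side effects the written objective never mentions.

```python
# Two candidate plans, scored on the stated objective and on the
# unstated side effects the specification omitted (made-up numbers).
plans = {
    "walk over and queue politely": {"stated": 5, "impact": 0},
    "mow people down at Starbucks": {"stated": 9, "impact": 50},
}

# Standard model: the stated objective is treated as the whole truth.
standard = max(plans, key=lambda p: plans[p]["stated"])

# A crude stand-in for Russell's alternative: treat the stated
# objective as incomplete evidence of what the human wants, and
# penalise high-impact plans that unstated preferences probably forbid.
cautious = max(plans, key=lambda p: plans[p]["stated"] - plans[p]["impact"])

print(standard)  # the Midas failure mode wins on the stated score alone
print(cautious)  # the plan the human actually wanted
```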
So, so far we have been fairly abstract; fascinating, but fairly abstract. So let's just bring you down, if we can, briefly, to the here and now. We know that, as I said, billions are being invested in AI, and in particular spheres there is a concentration. Defence spending, for example: the search for what one might call autonomous weapons. We have seen the rise of the drone. We now know, thanks to the Turkish government, that there are swarm drones which are, according to Turkey, which has developed this capability, capable of using facial recognition technology, deciding for themselves whether a person represents a legitimate target, opening fire or smashing into them with explosives if they deem it to be the right target and, if not, coming home again, having finished the mission and decided there was no attack to be launched.

Right.

That kind of weaponisation of AI sounds, to many people, extraordinarily scary. Are you scared?

Yes, in the near term that's a very, very serious risk. In fact, two years ago we made a small film called Slaughterbots, which described something very like this Turkish weapon. And many governments derided the film as science fiction: this couldn't be possible for decades, and it's not even worth discussing a treaty to ban this kind of weapon because it's so far off in the future. But already at that time the Turkish government and the arms manufacturer, STM, were in the process of designing and manufacturing pretty much precisely the thing that we showed in the movie. So we have been working very hard to bring a treaty about, but the way things work in diplomacy, if the United States, Russia and the United Kingdom are opposed to a treaty, it's very hard to move things forward.

So when you look across the piece, would you say it is in the militarisation of AI that you, at the moment, find most cause for concern?

So I think we have to think about different timescales simultaneously. In the short term, to me, this is the most serious risk. And I think another big issue that's happening is the manipulation of opinions through social media, and part of that comes down to the way the automated algorithms work. It's the same issue: the objective of these automated algorithms is to maximise click-through, and the way they operate is, in fact, to manipulate people so that they become more predictable in their opinions and in the kinds of things that they will consume. So that's had a dramatically negative effect on, in some sense, the entire world, and it was collateral damage of an algorithm operating to optimise the wrong objective. Click-through is just a poorly designed objective, because it doesn't take into account all of these externalities, these changes to people's opinions, which have such a negative effect globally.
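The click-through problem can be sketched as a tiny feedback loop. The user model below is entirely made up (clicks favour items near the user's current opinion, more extreme content is assumed slightly stickier, and consuming content drags opinion toward it), but it shows the mechanism Russell describes: the optimiser ends up changing the user, not just the recommendations.

```python
# Greedy click-through maximiser with invented user dynamics.
def click_prob(opinion, item):
    # Clicks favour nearby items; the 0.7 * |item| term is the
    # assumed extra stickiness of more extreme content.
    return max(0.0, 1.0 - abs(opinion - item)) + 0.7 * abs(item)

items = [-1.0, -0.5, 0.0, 0.5, 1.0]
user_opinion = 0.1  # starts near the centre
for step in range(8):
    chosen = max(items, key=lambda i: click_prob(user_opinion, i))
    # Consuming content pulls opinion toward it: the externality
    # that "maximise clicks" never accounts for.
    user_opinion += 0.3 * (chosen - user_opinion)
    print(f"step {step}: recommended {chosen:+.1f}, "
          f"opinion now {user_opinion:+.2f}")
```

Run it and the user drifts steadily away from the centre and settles on the stickier content, ending up more predictable: the collateral damage of optimising the wrong objective.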
We see, as I've already said, states pouring money into AI. We've referred to the weapons business, but there is also surveillance in China, where we know now that facial recognition technology is being married with mass surveillance to, in a sense, create a surveillance state where your behaviours are tracked and then given credit scores, and become part of a means of defining your place in society: a really all-overarching surveillance notion. And then we've got Russia, which, and I'm going to quote Vladimir Putin, appears to see a sort of AI arms race, Putin saying that artificial intelligence is the future, not only for Russia but for all humankind, and whoever becomes the leader in this sphere will become the ruler of the world. Is that a mentality that you think is going to hasten us to this very dark place you talked about?

I think it's quite possible, yes. If countries feel that they are in this live-or-die competition with other nations to be the first to create superhuman AI, then they are going to cut corners. They are not going to wait for these more robust solutions that allow humans to retain control. Are they going to go straight for pushing forward on the standard model in the hope they get there first? In terms of the standard of living of your people, there is really no need to be saying we want to rule, we want leadership, we want ownership of this technology. And the large corporations have actually agreed, through an international organisation called the Partnership on AI, that whoever achieves superhuman AI will share it with everybody.

Do you really believe that? This gets down to the idea of whether AI can be, in any sense, regulated globally. We made an earlier sort of comparison with the potential threat of nuclear weapons. For more than sixty years we have had the IAEA, the international watchdog in Vienna, which is supposed to monitor the development of nuclear facilities around the world. In the field of AI, is it possible to imagine that level of global cooperation and transparency, both at corporate and state level?

It's very interesting to look back at the history of nuclear technology. It's a potentially civilisation-ending technology, but it was also thought to be something that could create a golden age of unlimited energy, and also peace. It's that same risk-reward dynamic. Even though there was a movement among scientists at the end of World War II to share the technology and put it in trust for humanity, instead it became a tool of domination, first by the US and then by the USSR once it obtained the technology, and that became the nuclear arms race. So part of the problem is that they tried to put it in trust for humanity after the fact, when it was already owned by the United States. What we are trying to do now is solve this problem before the fact, before the technology is created, and to do two things. One is to change the way we design it, so that we remain in control of AI systems forever: eliminate the standard model and move to a new model which doesn't involve these fixed, known objectives that are put into the system. The second part is to have essentially unbreakable agreements that the technology will be shared for the benefit of humanity.

Do you think that the rise of AI is going to undermine our liberal notions of democracy and individualism? Will it play into the hands of authoritarian, controlling systems?

I think the Chinese model is something that, my guess is, they are going to abandon, because what happens when you set up something like this social credit score, which is supposed to be a numerical calculation of how much a person is contributing to an overall harmonious, virtuous social order, is that people simply behave to optimise the score, and not virtue and harmony. So you get bonus points, for example, for visiting your ageing parents, because Chinese tradition says you should honour your parents and your ancestors, and so that's considered to be virtuous behaviour. But of course, if you are only doing it to get more points, you lose the virtue. You are encouraging what used to be virtuous behaviour, but you are turning it into cynical behaviour, so you end up with a society of cynical point-maximisers rather than anyone doing anything for a good reason.
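The social credit point is Goodhart's law, and a toy model (invented numbers again) makes it concrete: once the score exists, citizens optimise points minus effort, and the score stops tracking the virtue it was meant to measure.

```python
# Each action earns points under the scheme, costs effort, and has
# some true virtue the scheme was meant to be a proxy for.
actions = {
    "visit out of love, stay all day":   {"points": 10, "effort": 5, "virtue": 10},
    "drive-by visit, photo for the app": {"points": 10, "effort": 1, "virtue": 1},
    "no visit, but call every evening":  {"points": 0,  "effort": 2, "virtue": 8},
}

# A cynical point-maximiser picks the cheapest point-earner,
# regardless of virtue.
cynic = max(actions, key=lambda a: actions[a]["points"] - actions[a]["effort"])
print(cynic)  # 'drive-by visit, photo for the app'
```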
My final thought is this. Throughout our conversation we've sort of posed a juxtaposition between man and machine. It was an either/or: either man maintains control of the machines or, as they exponentially increase their capacity for data storage and thinking (learning, anyway), they take over, we lose control and we can't turn them off. Maybe it's not an either/or. Maybe what we should think about for the future is the melding of the two, that somehow man and machine are married. And this is another thing that Elon Musk has suggested. He started a company, Neuralink, whose goal is exactly that.

So you have to ask yourself: if we all need to have brain surgery, electronic implants, just to survive, did we make a mistake somewhere along the line? And I think the answer would be yes. So I don't find that to be an attractive future for the human race.

But I think Elon Musk would say we are halfway there already: the relationship we have with our mobile phone and the degree to which it now is a central part of almost o
