
BBCNEWS HARDtalk July 3, 2024

It's a great pleasure to have you here. Now, you, in your career, are wrestling with the complex relationship between us humans and increasingly intelligent machines. It seems, if I've got it right, that you're not so much worried about the machines; you're worried about us, our wisdom. Is that right?

It's a great way of putting it. I mean, I think the great challenge we have is one of governance. Containment means that we should always be in control of the technologies that we create. And we need to make sure that they are accountable to us as a species, that they work for us over many, many decades and centuries, and that they always do way, way, way more good than harm. With every new type of technology we have, there are new risks, risks that, at the time we experience them, feel really scary. They're new. We don't understand them. They could be completely novel in ways that could be very harmful. But that's no reason to feel powerless. These technologies are things that we create and, therefore, we have every ability to control them.

All right. And to be clear, as you've just described it, I guess you would include innovations from, I don't know, the discovery of fire to the wheel to the sail to, in our more recent industrial age, the steam engine, maybe even all the way up to the internet, these game-changing technological advances. You seem to be suggesting that the inventors, the innovators, people like you are today, can't foresee where they'll go.

We can't always foresee where they'll go, but, designed with intention, we can shape them and provide guardrails around them, which ultimately affect their trajectory in very fundamental ways. Take aircraft, for example. I mean, flight, when it first appeared, seemed incredibly scary. How could I possibly get into a tin tube at 40,000ft and go 1,000mph? But, over many decades, we've iterated on our process of safety and oversight. We've had regulators that force competitors to share insights between them.
We have the black box recorder, the famous black box recorder that tracks all the telemetry on board the plane. It records everything that happens in the cockpit and shares those insights across the world. Now, that's a great achievement for our species. I mean, it means that we can get all the benefits of flight with actually very, very few of the risks overall. And that's the approach that we need to take with AI.

All right. So let's get down to AI. Now, I began by positing a sort of widely held human fear that artificial intelligence leads inevitably to a sort of superintelligent machine that ultimately has sort of full autonomy and somehow goes rogue on humanity. You have deep concerns, but it seems that is not your deepest, prime concern.

Unfortunately, the big straw man that has been created over the last few years in AI is that of Terminator or Skynet, this idea that you would have a recursively self-improving artificial intelligence, one that can update and improve its own code independently of human oversight, and then...

Right. Sort of breaks the umbilical cord with the human creator.

Right. That's exactly right. And that would be the last thing we want. We want to make sure that we always remain in control and we get to shape what it can and can't do. And I think that the odds of this recursively self-improving AI causing an intelligence explosion, as it's often referred to, I think they're relatively low, and we're quite far away from that moment. And part of the problem at the moment is that we're so fixated on that possibility, mostly because sci-fi gives us an easy way in to talking about it when we don't have very long, that we're actually missing a lot of the more practical, near-term issues which we should focus on.

And that is what you call the coming wave. And you say the wave is all about understanding that where we are with artificial intelligence today is not going to be where we are in, say, five years' time.
Because I think the phrase you use is, artificial capable intelligence is transformative. So I need you to explain to me how and why it's transformative.

That's exactly right. So think of AI development over the last ten years in two ways. The first is that we have been training AIs to classify, that is, understand our perceptual input. Right? So, they have got good at identifying objects in images, so good, in fact, that they can be used to control, you know, self-driving cars. Right? They have got really good at transcribing text. When you dictate to your phone, you know, you record that speech and you translate it into written text. They can do language translation. These are all recognition. They're understanding and classifying content. Now they've got so good at that that they can now generate new examples of those images, of that audio, of the music, and that's the new generative AI boom that we're currently in. The last year has been incredible with these large language models that can produce new text that is almost at human level in terms of accuracy.

They are creating these new chatbots and other sorts of apps that we talk about. They are creating text, they're creating music, they're creating even visual art. And that gives us a sense that the machine somehow is developing its own consciousness, which of course it's not. It's simply got more and more and more data to use and sort of mould in a way that fits the instructions it's given. So, how far can this go?

That's exactly right. And so I think we often have a tendency to anthropomorphise and project more into these systems than they actually have. And that's understandable. Three or four years ago, people would often say, well, AI will never be creative, that will always be the preserve of humans. A few years ago, people would say, well, AI will never have empathy. And now you can see that these conversational AIs and chatbots are actually really good at that.
The next wave of features that they're likely to develop over the next five years, as these models get bigger and bigger, are capabilities around planning. You know, you referred earlier to artificial capable intelligence. In the past, we've just defined intelligence on the basis of what an AI can say. Now, we're defining intelligence on the basis of what an AI can do. These AIs will co-ordinate multiple steps in complicated scenarios. They will call APIs, they'll use websites, they will use back-end databases. They'll make phone calls to real humans, they'll make phone calls to other AIs, and they'll use that to make really complicated plans over extended periods of time.

You paint this sort of near- to medium-term future of the expansion of AI into every aspect of our lives, and you say that writing the book about it, you call it The Coming Wave, was a gut-wrenching experience. Why was it gut-wrenching? Are you... are you frightened?

I'm not frightened, but I think new technologies always bring very new challenges. And in this case, there is the potential that very powerful systems will spread far and wide. And I think that has the potential to pose really significant, catastrophic threats to the future of the nation state. So, for example, all technologies in the history of our species, all the general-purpose technologies that you just referred to, to the extent that they have been incredibly useful, they've always got cheaper, easier to use, and therefore they have spread all around the world. That has been the absolute engine of progress for centuries, and it's been a wonderful thing. It's delivered enormous benefits in every possible respect, in every domain. If that trajectory continues, when we're actually talking about the creation of intelligence itself or, in synthetic biology, life itself, in some respects, then these units of power are going to get smaller and smaller and smaller and spread far and wide.
Everybody in 30 to 50 years may potentially get access to state-like powers, the ability to co-ordinate huge actions over extended time periods. And that really is a fundamentally different quality to the local effects of technologies in the past. You know, aeroplanes, trains, cars are really important technologies, but they have localised effects when they go wrong. These kinds of AIs have the potential to have systemic impact if they go wrong in the future.

I mean, there's so much that's profound and deep in what you've just said, I'm almost struggling to know where to start with it. But one... a couple of phrases just come to my mind. You talked about, uh, the way to create intelligence and synthetic life. This is a sort of godlike power that we humans are now looking at, contemplating. But with the best will in the world, probably none of us believe that we deserve godlike powers. We are too flawed. That's surely where the worry comes.

We need these powers more than ever. And that's the paradox. I mean, this is the ultimate prediction engine. We'll use these AIs to make more efficient foods, for example, that are drought-resistant, that are resistant to pests. We'll use these AIs to reduce the cost of health care dramatically. Right? We'll use these AIs to help us with transportation and education. Everyone is going to get access to a personal intelligence in their pocket, which is going to make them much, much smarter and more efficient at their job. That, I think, is going to unleash a productivity boom that is like a Cambrian explosion. I mean...

Well, hang on. A productivity boom for those like you who absolutely have the skills to be at the forefront of this transformation. But most of us humble humans do jobs which will disappear. It's quite possible, in this world of advanced AI, you won't need journalists.
You won't necessarily need half the doctors we've currently got, or the lawyers, or a whole bunch of other professions which thought they were safe from mechanisation but are certainly not safe from artificial intelligence. What on earth are human beings going to do when so much of what we need to do is done by machines?

These are tools that make us radically smarter and more efficient. So if you are a doctor, you spend a vast portion of your day inputting data, writing notes, doing very laborious, painful work. These are tools that should save you an enormous amount of time so that you can focus on the key things that you need to.

I take your point, but you'll need less doctors.

It's possible that you will need less doctors. We may need less of every possible role in the future. Yes. To that, I would say, bear in mind that work is not the ultimate goal of our society. The goal of our society is to create peace and prosperity, wealth and happiness, and to reduce suffering. The long march of civilisation is not a march towards creating work. It is a march towards reducing work and increasing abundance. And I believe that over a 30-to-50-year period, we are on a path to producing radically more with radically less. That has been the story of history.

But is it not possible that... I do not mean to interrupt you when you're painting this incredibly positive picture, but is it not possible that you will challenge the mental health of human beings? So much of our self-worth comes from our sense of utility, our usefulness. A lot of that comes from work. In this world you are portraying, 30 years away, where work and productivity are fundamentally different, machine-led, we humans may feel ourselves becoming progressively more useless.

The question is, who is the "we"? So you and I get an enormous amount of health and wellbeing and identity out of our work, and we are very lucky. We're the privileged few.
Many, many people don't find that flow and peace and energy in their everyday work. And so I think it's important for us to remember that, over the multi-decade period, we have to be striving towards a world where we've solved the issues of creation and redistribution. The real challenge you're describing is the one that you alluded to at the top of the show. It's a governance question. How do we capture and redistribute the value that is being created in a fair way, one that brings all of society with us? And that's a better challenge to have than not having enough.

Right. So let's now get to the truly malign possibilities that come with this expansion of transformative AI. Because you've just explained that what AI allows is for the sort of, the cost of being powerful to become ever less. It empowers people in ways we haven't imagined before, and it empowers non-state actors and states to do bad things in new ways. How do you ensure that doesn't happen?

That's the great challenge. I mean, these models represent the best and the worst of all of us, and the challenge is to try to mitigate the downsides. Many people will use these tools to spread more efficient forms of misinformation. It's already happening. To sow dissent, to increase anger and polarisation. And also to enforce a new level of authoritarianism through surveillance, through the elimination of privacy. And that's why it's critical for us in the West, in Europe and in the US and all over the world, to defend our freedoms, because these are clearly potential tools which introduce new forms of surveillance. They might reduce the barrier to entry to surveillance. And so the challenge for us is figuring out how we don't rabbit-hole down that path. If we give in on our own set of values and accept that we have to then lunge towards more authoritarian surveillance, that would be a complete collapse of the values that we actually stand for.

I talked about gut-wrenching emotions as you wrote this. You're clearly worried.
However you dress up the positivity of so much that AI offers, you signed a joint statement that came from the Center for AI Safety earlier this year. It was very simple. It just called upon all of you in this business, including governments, including private-sector people like yourself, to mitigate the risk of extinction, that was the phrase it used, that could come from AI. That, you said in the statement, should be a global priority. I see no sign that it is becoming that global priority.

Just as these AIs reduce the barrier to entry to be able to educate somebody that doesn't have access to education, or provide the highest-quality health care to someone who can only have a telephone-call interaction with an AI health clinician, they also enable people who don't have the expertise in biology, for example, to develop biological or chemical weapons. They reduce the barrier to entry, to access knowledge and take actions. And that is fundamentally the great challenge before us. I think it's an intellectually honest position to not just praise the potential upsides, but look straight in the face of the potential downsides. And wisdom in the 21st century has got to be about holding these two competing directions in tension and being very open and clear about them so that we can mitigate and address the risks today.

But let's start our look at where we might expect the mitigation to come from, the responsible, accountable governance of the AI world, by addressing the private sector, by addressing people like you who've made, let us be honest, hundreds of millions of dollars, in some people's cases billions of dollars, by being sort of market movers, pioneers in artificial intelligence. You have an extraordinary financial stake in constantly pushing the boundaries, don't you?

I do. And I'm building a company. It is a public benefit corporation, which is a new type of company, a hybrid for-profit and non-profit mission, entrenched in our legal charter.
So it doesn't solve all of the issues of for-profit missions, but it's a first step in the right direction. And we create an AI called Pi, which stands for personal intelligence. It is one of the safest AIs in the world today. None of the existing jailbreaks and prompt hacks that try to destabilise these AIs and get them to produce toxic or biased content work on Pi. Pi is very careful, it's very safe, it's very respectful. So I believe...

So, you are so concerned about being responsible, being transparent and sort of being audited about what you are doing in this sphere. Have you moved right away, then...? And you're based in Silicon Valley. The old Silicon Valley mantra was move fast, break things. And if you talk about a whole generation of tech pioneers who are now obviously multibillionaires, from the Gateses to the Larry Pages, to the Zuckerbergs and the Musks, these were people who sort of developed their ideas to the max and perhaps only later began to wrestle with some of the downsides. Are you saying you've fundamentally changed that model?

I think so. I think there is a new generation of AI leaders coming up who have been putting these issues on the table since the very beginning. I co-founded DeepMind in 2010, and right from the outset we've been putting the question of ethics and safety and AI on the table, at the forefront. I mean, you mentioned audits. For example, just two months ago, I signed up to the voluntary commitments at the White House and met with President Biden and laid out a suite of proposals that would proactively subject us at Inflection AI, and the other big providers of large language models, to audits by independent third parties, and to share those best practices not just with each other, but with the world.

Right. And when you, in your book, when you address how to mitigate the dangers of AI, you talk about a whole bunch of things which, to me, seem very idealistic. You say you're already being accountable.
You're accepting audit from the outside of everything you do, but you also say the international community is going to have to work at this. There's going to have to be amazing co-operation and collaboration. There's no sign of that happening. The United States is looking at a voluntary code; you've been to the White House to discuss that. The EU is looking at legislation on AI. But then look at China. China is also moving as fast as it possibly can in this field. They are not interested in signing up to this sort of idealistic international co-operation that you write about.

I've got no shame in being an idealist. I think that you have to be an idealist to try and move civilisation forward. We have to raise the bar on ourselves and on our peers. Now, I can't control what happen
