Transcripts For BBCNEWS HARDtalk 20171208 : vimarsana.com

BBCNEWS HARDtalk December 8, 2017

I'm Stephen Sackur. For now, the BBC employs human beings like me to question the way our world works. But for how much longer? As research and development effort into artificial intelligence intensifies, is there any sphere of human activity that won't be revolutionised by AI and robotics? My guest today is Alan Winfield, a world-renowned professor of robot ethics. From driving to education, to work and warfare, are we unleashing machines which could turn the dark visions of science fiction into science fact?

Alan Winfield, welcome to HARDtalk.

Hi, delighted to be here, Stephen.

You do have a fascinating title, professor of robot ethics. I'm tempted to ask you first: what's most important to you, the engineering, the robotics, or the ethics, being an ethicist?

Well, both are equally important. I am fundamentally an engineer, so I bring an engineering perspective to robot ethics. But I would say more than half of my work now is actually thinking about ethics. And, you know, I'm kind of a professional worrier now.

Would you say the balance has shifted? Because over the course of your career, you started out very much in computers and engineering, and increasingly, as you have dug deep into the subject, in a sense the more philosophical side of it has been writ large for you?

Absolutely right, yes. Actually, it was really getting involved in public engagement, robotics public engagement, 15 years ago that, if you like, alerted me and sensitised me to ethical questions around robotics and AI.

Let's take this phrase "artificial intelligence". It raises an immediate question in my mind: how do we define intelligence? I wondered if you could do that for me?

It's really difficult. In fact, one of the fundamental, if you like, philosophical problems with AI is that we don't have a satisfactory definition for natural intelligence. So here is a simple definition: it's doing the right thing at the right time. But that's not very helpful from a scientific point of view.
I mean, one thing that we can say about intelligence is that it's not one thing that we all have more or less of.

What about thinking? Are we really, in the course of this conversation, talking about the degree to which human beings can make machines that think?

I think "thinking" is a dangerous word. It's an anthropomorphisation. And in fact, more than that, it's a humanisation of the term intelligence. A lot of the intelligence that you and I have is actually nothing to do with conscious, reflective thought. So one of the curious things about AI is that what we thought would be very difficult 60 years ago, like playing board games, chess, Go, has turned out to be, not easy, but relatively easy. Whereas what we thought would be very easy 60 years ago, like making a cup of tea in somebody else's kitchen, has turned out to be enormously difficult.

It's interesting that you alight upon board games. In the news over the past few days, we have seen something really quite interesting. Google's DeepMind department has this machine, computer, call it what you will, AlphaGo Zero, I think they call it, which has achieved astounding results playing this game. I'm not familiar with it, but a game known as Go. I think it's primarily played in China. Extraordinarily complex. It has more computations, more moves in it, more, sort of, complexity than chess. And this machine is now capable of beating, it seems, any human grandmaster. And the real thing about it is that it is a machine that appears to learn unsupervised.

That's right.

I must admit I'm somewhat baffled by this, because I've just asked you about thinking, and you say no, don't use that word, but it seems to me this is a machine that thinks.

Well, it's a machine that does, if you like, an artificial analogue of thinking. It certainly doesn't do it in the way you and I do. The technology is based on what are called artificial neural networks.
They are, if you like, an abstract model of biological networks, neural networks, brains in other words, which actually we don't understand very well. But we can still make very simple abstract models, and that's what the technology is. But the way to think about the way it learns, and it is a remarkable breakthrough. I don't want to over-hype it, because it only plays Go, it can't make a cup of tea for you. But the very interesting thing is that the earlier generations effectively had to be trained on data that was gleaned from human experts, and many, many games of Go.

It had to be loaded with external information.

That's right. And that was what we call supervised learning, whereas the new version, and again, if I understand correctly, I only scanned the paper this morning, the Nature paper, is doing unsupervised learning. Technically, we call it reinforcement learning. It has, if you like, the rules of the game, and its world is the board, the Go board and the pieces, and then it just plays essentially against itself, millions and millions of times.

It's a bit like you, or a human infant, learning how to, I don't know, play with building blocks, Lego, entirely on his or her own, just by learning over and over again.

Of course, humans don't actually learn like that. Mostly, we learn with supervision, with, you know, parents, teachers, brothers and sisters, family and so on.

But it's interesting, you are prepared to use a word like "learning". Thinking, you don't like; learning, you're prepared to apply to a machine? And what I want to get to, before we go into the specifics of driverless cars and autonomous fighting machines and all of that, while I still want to stay with the big-picture stuff, is the human brain. You've already mentioned the human brain; it is the most complex mechanism we know of on this planet.

We know of, yes.

In the universe, in fact.
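The self-play learning described here can be made concrete with a toy sketch. This is not DeepMind's actual method (AlphaGo Zero combines deep neural networks with tree search); it is plain tabular Q-learning on the much smaller game of Nim, chosen so the whole idea fits in a few lines. The game, hyperparameters, and function names are all illustrative choices, not anything from the interview.

```python
import random

# Toy self-play reinforcement learning (tabular Q-learning), illustrating
# in miniature the idea behind AlphaGo Zero's training. The game is Nim:
# a pile of 21 sticks, players alternate taking 1-3 sticks, and whoever
# takes the last stick wins. The agent is given only the rules and plays
# against itself; no human game data is used.

ALPHA, EPSILON, EPISODES = 0.5, 0.2, 20_000
START_PILE = 21

# Q[(pile, action)]: learned value of taking `action` sticks from `pile`,
# from the point of view of the player about to move.
Q = {(p, a): 0.0 for p in range(1, START_PILE + 1) for a in (1, 2, 3) if a <= p}

def moves(pile):
    return [a for a in (1, 2, 3) if a <= pile]

def choose(pile, greedy=False):
    """Epsilon-greedy during training; purely greedy for the final policy."""
    if not greedy and random.random() < EPSILON:
        return random.choice(moves(pile))
    return max(moves(pile), key=lambda a: Q[(pile, a)])

random.seed(0)
for _ in range(EPISODES):
    pile = START_PILE
    while pile > 0:
        a = choose(pile)
        nxt = pile - a
        if nxt == 0:
            target = 1.0            # taking the last stick wins
        else:
            # The opponent moves next, so our value is the negation of
            # their best value (a negamax-style backup).
            target = -max(Q[(nxt, b)] for b in moves(nxt))
        Q[(pile, a)] += ALPHA * (target - Q[(pile, a)])
        pile = nxt

# After training, the greedy policy reproduces the textbook Nim strategy:
# from a pile of n sticks, take n % 4, leaving the opponent a multiple of four.
print({n: choose(n, greedy=True) for n in (5, 6, 7, 9)})
```

Given only the rules and thousands of games against itself, the table converges on the known optimal strategy, which mirrors, at vastly smaller scale, how self-play reinforcement learning can discover strong play with no human examples.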
Is it possible, talking about the way in which Google DeepMind and others are developing artificial intelligence, that we can ultimately look to create machines that are as complex, with the billions and trillions of moving parts, if I can put it that way, that the human brain possesses?

I would say in principle, yes. But not for a very long time. I think the problem of making an AI, or a robot, if you like (a robot is just an AI in a physical body), that is comparable in intelligence to a human being, you know, an average human being, an averagely intelligent human being, is extraordinarily difficult. And part of the reason that it's so difficult is that we don't actually have the design, if you like, the architecture of human minds.

But in principle you think we can get it. Because what I am driving at is this principled, philosophical question of what the brain is. To you, professor, is the brain, in the end, chemistry? Is it material? Is it a lump of matter?

Yes, yes.

It doesn't have any sort of spiritual or any other intangible thing; it is chemistry?

In my view, but I am a materialist, yes, the brain is thinking meat.

Yes, but that's a bit of a cop-out, because you have added "thinking" meat.

It's meat, and the way that meat is arranged means that it can think.

So you can create something artificial which, if it were as complex and as well arranged as human capacity can make it, one day could also think?

I believe, in principle, yes. But the key thing is architecture. In a sense, the way to think about the current work on artificial intelligence is that we have these artificial neural networks, which are almost like the building blocks. So it's a bit like having marble. But just having a lot of wonderful Italian marble doesn't mean you can make a cathedral. You need to have the design, you need to have the architecture, and the know-how to build that cathedral, and we don't have anything like that.
Just one more general point, and then I want to get down to the specifics. Nick Bostrom, you know him, I know you do, because he works in the same field as you, says that you have to think of AI as a fundamental game-changer for humanity. It could be the last invention that human intelligence ever needs to make, he says, because it is the beginning of a completely new era, the machine intelligence era. In a sense, he says, we are a bit like children playing with something that we have picked up, and it happens to be an unexploded bomb, and we don't even know the consequences that could come with it. Do you share that vision?

I partially share it. Where I disagree with Nick is that I don't think we are under threat from a sort of runaway superintelligence, which is the thesis of his book. However, I do think that we need to be ever so careful. In a way, I alluded to this earlier: we don't actually understand what natural intelligence is, and in fact we have no general scientific theory of intelligence. So trying to build artificial general intelligence is a little bit like trying to do particle physics at CERN without any underlying scientific theory. So it seems to me that we need both some serious theory, which we don't have yet (we have some, but it's not unified; there isn't a single theory, if you like, like the standard model in physics), and we also need to do responsible research and innovation. In other words, we need to innovate ethically, to make sure that any, as it were, unintended consequences are foreseen and we head them off.

In a more practical sense, unintended consequences may well come up. Let's start with something I think most of us are aware of now, regarded as one of the most challenging and perhaps exciting specific AI achievements: the driverless car. It seems to me all sorts of issues are raised by a world in which cars are driverless. Ethical and moral issues, as well as practical ones.
You work with people in this field. Are you excited by driverless cars?

I am, yes, and I think driverless cars have tremendous potential for two things.

Do you see them as robots?

I do, yes. A driverless car is a robot. And typically, of course, once a robot becomes part of normal life, we stop calling it a robot, like a vacuum cleaner. So I think there are two tremendous advances from driverless cars we can look forward to. One is significantly reducing the number of people who are killed in road traffic accidents, if we can achieve that, so I'm going to be cautious when I speak more on this. And the other is giving mobility to people, you know, elderly people, disabled people, who currently don't have that.

Both of those are very practical. But Science magazine last year studied a group of almost 2,000 people and asked them what they wanted to see in terms of the morality, almost, of using driverless cars. How the programming of the car would be developed to ensure that, for example, in a hypothetical situation, if a car was on a road and about to crash, and if it veered off the road to avoid the crash it would hit a group of schoolchildren being led by a teacher down the road, the public wanted to know that the car would, in the end, accept its own destruction, and that of its driver, its human passenger rather, as opposed to saving itself and ploughing into the children on the side of the road. How do you, as a robot ethicist, cope with this sort of challenge?

Well, the first thing I would say is let's not get it out of proportion. You have got to ask yourself, as a human driver (probably, like me, you've got many years of experience of driving), have you ever encountered that situation?

Well, no, in my case. But, nonetheless, I want to know, if I ever step into a driverless car, that somebody has thought about this.

And you're right. I think the ethicists and the lawyers are not clear. We need to have a common vision about this.
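The point made in this exchange, that a car does not "decide" ethically but applies rules its programmers (and, Winfield argues, ideally society) chose in advance, can be sketched as code. Everything here is hypothetical: the outcome labels, the scores, and the function are invented for illustration and are not any real vehicle's logic.

```python
# Hypothetical sketch: a driverless car consults a ranking of outcomes
# that people agreed on ahead of time; the "ethics" live in the table,
# not in the machine. All names and values below are invented.

# Society-agreed ranking: higher score = worse harm (illustrative values).
HARM_RANKING = {
    "hit_pedestrians": 3,     # e.g. the schoolchildren in the scenario
    "hit_oncoming_car": 2,
    "sacrifice_occupant": 1,  # e.g. veer off the road
}

def least_harm(options):
    """Return the available manoeuvre with the lowest programmed harm score."""
    return min(options, key=HARM_RANKING.get)

# The interview's hypothetical: the rules, not the car, dictate that the
# occupant's risk is accepted rather than ploughing into the children.
print(least_harm(["hit_pedestrians", "sacrifice_occupant"]))
# prints: sacrifice_occupant
```

The substantive question raised in the conversation is precisely who gets to write a table like `HARM_RANKING`; the argument made here is that such rules should be publicly agreed and auditable rather than a private engineering choice.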
I think it's really important that, if we have driverless cars that make those kinds of ethical decisions, that essentially decide whether to potentially harm the occupant or...

You are doing what you told me off for doing; you are anthropomorphising. The car would not be making an ethical decision. The car would be reflecting the values of the person who programmed it.

Exactly. But I would say that those rules need to be decided by, if you like, the whole of society, because the fact is, whatever those rules are, there will be occasions when the rules result in consequences that we don't like. And therefore, I think the whole of society needs to own the responsibility for those cases.

So this is you making a call, whether it be driverless cars or any of the other examples we are currently thinking about with AI, for technological development in lockstep with a new approach to monitoring, regulation, a sort of universal standardisation.

And the conversation, a big conversation in society, so that we, if you like, own the ethics that we decide should be invented.

But that's not going to happen, is it? Because at the moment, much of the development here, you work in Bristol, in a robot lab, but a lot of the cutting-edge work in this field is done in the private sector. We have already mentioned Google; there are many companies doing it; some of it is done by secretive defence establishments around the world. There is no standardisation, there is no cooperation, and in fact it is a deeply competitive world.

Well, there jolly well needs to be. I mean, my view is very simple: the autopilot of a driverless car should be subject to the same levels of compliance with safety standards as, for instance, the autopilot of an aircraft. We all accept, you and I would not get into an aircraft if we thought that the autopilot had not met those very high standards.
And I think it's inconceivable that we could allow driverless cars on our roads that have not passed those kinds of safety certification processes.

Let's leave driverless cars and go to areas which are perhaps more problematic for human beings, because here we get into the idea of the machine, the future intelligent machine, taking jobs and roles that have traditionally always been done by human beings because they involve things like empathy, and care, and compassion. Now, I am thinking about roles such as social carers, educators, teachers, and even, frankly, a sexual partner, because we all now read about the sex bots. So, in these roles, do you feel comfortable with the notion that machines will take over from human beings?

No, and in fact I don't think they will.

They already are; Japan has carers that are machines.

Yes, but we need to make a distinction here: a carer robot may well be able to care for you, in other words, for your physical needs. It cannot care about you. Only humans can care about other humans, or any other animal. Objects, you know, robots, cannot care about people, or things for that matter. And the same is true for teachers. Teachers typically care about their classes.

So, do you think some people are getting way overheated about this? One of Britain's most well-known teachers, Anthony Seldon, who ran Wellington College for a while, now says that in his vision of a future education system many kids will be taught one-on-one, in a spectacular new way, by machines. He said it is like giving every kid access to the best private schools.

Well, I think that ultimately there might well be, and we are talking about some time into the future, some combination of machine teaching and human teaching.
You cannot take the human out. And a really important thing to remember here is the peculiarly human characteristics of empathy, sympathy, theory of mind, the ability to anticipate, to read each other. These are uniquely human characteristics, and so are intuition, creativity, innovation. These are things that we have no idea how to build artificially. Jobs that involve those things are safe.

Interesting, because a lot of people nowadays are looking at doomsday scenarios in any development of robotics and AI, and frankly at most jobs one can think of. And I was sort of being a little bit flippant at the beginning about being a presenter who might be replaced by a robot, but you are suggesting to me that the idea that so many different jobs, not just blue-collar but white-collar as well, are going to end up being done by machines. Again, you're saying we are overstating it?

I think we are overstating it, yes, significantly. I am not saying it won't happen eventually, but I think that we will have much more time than people suppose to find, if you like, a harmonious accommodation between human and machine, one that actually allows us to exploit the qualities of humans and, if you like, the skills, to do things that humans want to do.

If you don't mind me saying, you seem both extraordinarily sanguine, comfortable and optimistic about the way in which AI is developing under the control of human beings. And your faith in humanity's ability to cooperate on this and establish standards seems to me to fly in the face of the facts, because one area I want to end with, in a way, is weaponisation: the notion that AI and robotics are going to revolutionise warfare and war-fighting. Now, you are one of a thousand senior scientists who signed an appeal, I think, for a ban on AI weaponry in 2015, but that's not going to happen, is it?

Well, the ban...

Did you see what Vladimir Putin said?
Artificial intelligence is the future, for Russia, for all of humankind, and, this is the key bit, whoever becomes the leader in this sphere will become the ruler of the world.

Well, yes, I am an optimist, but I am also very worried about exactly this thing. And of course, we have already seen the political weaponisation of AI.

It's pretty clear, isn't it, that the evidence is mounting that AI was used in the recent elections?

You are talking about the hacking, and that came, we believe, from Russia?

Indeed.

And that is a political weaponisation. So we do need to be worried about these things, we do need to have ethical standards, and we need to have worldwide agreement. And I am optimistic about a ban on lethal autonomous weapons systems. The campaign, you know, is getting traction; there have been all sorts of discussions, and I know some of the people involved very well.

Yes, but we know the limitations of the United Nations, we know the limitations of politics, frankly, and we know that human nature usually leads to us striving to compete and to win, whether it be in polit
