Documentary filmmaker James Barrat talks about runaway artificial intelligence. Mr. Barrat argues we are headed into a future in which machines will be able to outthink humans and may eventually view us the same way we view lower life forms today. This is about 50 minutes.

[applause]

Hi. I am James Barrat, and I want to thank Mary and Janice for inviting me here, and I would like to thank C-SPAN for being here. I think this is such a great bookstore. It is good to remember when books really were sold in bookstores, and this place is kind of a miracle because it manages to capture the wild and mysterious bookstores from when I was a kid. They were just a little bit spooky. [laughter] But very rich and fascinating. That is what this one is, and obviously it's also such a great meeting place for people. I want to thank them for inviting me.

I have written a book about artificial intelligence, but my usual job is documentary film producer. I've made a lot of films you might have seen on the National Geographic channel and PBS. Some are available on Netflix. It was through documentary work that I became interested in artificial intelligence, and that is what I'm here to talk to you about tonight: artificial intelligence, what it is, and why I, and a lot of AI researchers, think it's being developed in the wrong way. I hope to give you some things to think about, because I really believe that this conversation is the most important conversation of our time.

So let's begin with this. What is artificial intelligence? It is the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception and so on. The phrase "human intelligence" ties the whole idea of AI back to humans, because by and large what we know about is human intelligence, and in AI, human intelligence is both the subject of study and the tool with which we try to penetrate what intelligence is. This is what makes AI fascinating to me. It is the most inward looking of any of the sciences. It involves psychology, neuroscience, medicine, statistics, and a lot more on top of programming and computer science. It makes us ponder what it is we are looking for when we hope to mirror human cognition in machines. The science of AI asks us: what do humans do? What are you? What is intelligence?

There are a lot of definitions of intelligence in the AI research business. I like this concise one: intelligence is the ability to achieve goals in a variety of novel environments and to learn. There's a lot packed into that definition. It says intelligence is goal oriented, so if the intelligence is not doing something, it's not displaying intelligence. Intelligence should be mobile and probably comes with a body; if you can't move around and adapt, your intelligence may be poor quality, and there's really no way around it, you need some sort of body. And you must learn from experience, and this is a really important one for AI. Most animals come with all the abilities they will ever have. We can learn new languages, jobs, crafts, and so on, and of course other animals can learn, but nothing like the scale of humans, because we have intelligence.

I have been interested in AI for several decades, but I really got bitten by the AI bug when I was working for The Learning Channel. I was making a film about artificial intelligence, and I got to interview a man who was my hero at the time, Ray Kurzweil.
Ray Kurzweil, as you know, is a pioneer of speech recognition technology, of machines that read books to the blind, and of many other inventions. He has been called the Thomas Edison of our time. He is the man who coined, or rebranded, the term singularity. He is now working on reverse engineering the brain, and most researchers I've spoken with think reverse engineering the brain is the fastest way to create artificial general intelligence, which is human-level intelligence. It might be something you want to look into; it's quite fascinating.

I also got to interview another hero of mine back then, Rodney Brooks. Rodney Brooks is the foremost roboticist of our time. He founded the company iRobot, which he has since sold his stake in and moved on from. This is a general purpose robot called Baxter that is designed to learn to do things in your home or in factories. He imagines it working on farms. But iRobot also makes a lot of battlefield robots, including some that carry guns. There's a very important debate going on right now about battlefield robots, about whether they should make the kill decision without a human, and there is the question of drones too, and I will get into that later.

I'm actually taking two iRobot robots with me. They are going to help us. Yeah, they are going to help us. They are called FirstLooks, and they will help us go into a pyramid that hasn't been explored. It's got a lot of rockfall, so there are passages inside we can't get through. They will be excavated eventually, but while we are there the first thing to do is put these robots in and get a sense of what is the fastest way to the burial chamber and what the overall layout looks like. So obviously, don't let the title of my book mislead you, because I really do like robots.

Kurzweil and Brooks were optimistic about the time that is coming when we will share the planet with machines that are smarter than we are. Kurzweil, in his nonfiction books, predicts AI will help us solve every medical problem facing us, including the overall problem of mortality. But after Kurzweil and Brooks I interviewed Arthur C. Clarke, and he told me that most of the decisions affecting our lives will be made by machines. So I began asking follow-up questions. Will that transition be friendly? Will it be a handover or a takeover? Will we change ourselves to become machines, with brain modifications, which is Kurzweil's singularity, or will we create machines smarter than ourselves, and will those machines somehow replace us?

What I learned is that if we stay on the course we are currently following, and I want to explain why, we will create intelligent machines that won't be benign or harmless. They will start out being our tools, but quickly we will become their tools, if we continue to exist at all. My book is called Our Final Invention: Artificial Intelligence and the End of the Human Era. The book's thesis is that we need to develop a science for understanding smarter-than-human intelligence before we create it.

The two years I spent writing the book drew me into a world of people who are driven to create smart machines, people working at high levels on AI who have known they wanted to create smart machines since they were teenagers and children. It also immersed me in the lives of people who are just as determined to stop the reckless use and reckless development of advanced AI. The two years I spent writing the book were among the most intensely enjoyable of my career, but also the most harrowing, because I got more than I bargained for.
I went looking for a fish and I found a whale. I found more bad news than I wanted to find. So let's jump in. How do we get from the smartphones in our pockets to superintelligent machines that could threaten our existence?

Let me ask you a question with a show of hands. Do you think scientists can make a machine as smart as a human? If not, the problem is either too hard in an engineering sense, or there is something about the human brain that defies engineering. So who thinks the problem of intelligence is simply too hard? This is a legitimate position; it could be a legitimately hard problem. It may take until the next century. So the problem is either too hard, or there is something magical or mysterious about the human brain that can't be duplicated. Who is on that side?

Fewer than 15 percent of the professionals I spoke with or polled believe the problem is too hard, and none think there's anything magical about the brain. Being AI specialists, of course they are going to think that, so I did a later poll of experts and nonexperts and combined them in a couple of graphs. My conclusion is that there is nothing unfathomable about the human brain. Given time, we will create human-level intelligence and then go beyond it.

So if you follow that path, and you may have noticed we just split off into different tribes, if you accept that we can crack this problem and it is just a matter of time, the question becomes: will it take 10 years or will it take 100 years? If intelligence is a problem that can be solved, how long will it take? Ray Kurzweil has been good at predicting technological progress. The median date, according to the experts I polled, was about 2045, which is within many of our lifetimes. The outside date, from the specialists and nonspecialists combined, was about 2200.

Gary Marcus, who writes for the New Yorker magazine, is a psychologist and neuroscientist, and he was kind enough to review Our Final Invention for the New Yorker. He said this about how long it would take: a century from now, nobody will care how long it took. What they will care about is what happened next. It's likely the machines will be smarter than us before the end of the century, he said. So what's important, says Marcus, is not when superintelligence shows up but what happens next. In other words, will we be ready? Will we have prepared ourselves? Even Ray Kurzweil, who is supremely optimistic, believes machine intelligence will surpass our ability to understand it. So my question was, how exactly will that happen? How will machines get smarter than us?

There's a pretty simple theorem put out by I.J. Good in 1965, and there are a couple of chapters in my book about I.J. Good. He's a little-known scientist who worked as a code breaker. I'm going to give you time to read it. I'm going to give you 20 seconds to read it. [laughter] I like that formulation, but I would like to put it another way. We have created machines that are better than us at chess, at Jeopardy, and at other tasks like navigation and theorem proving, and lots of other things. Probably within the next decade we will create machines that are better at AI research than we are. Once we do, those machines will improve their own capabilities quickly. They will jump from human-level intelligence to superhuman intelligence in a matter of days, weeks, or years.
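To make that feedback loop concrete, here is a minimal toy sketch, purely my own illustration under invented assumptions (a fixed fractional capability gain per cycle of self-modification), not a description of any real system:

```python
# Toy illustration of I.J. Good's "intelligence explosion" argument.
# Nothing here is a real AI system; the 10% gain per cycle is an arbitrary
# assumption, chosen only to show how recursive self-improvement compounds.

def intelligence_explosion(start=1.0, gain_per_cycle=0.10, cycles=50):
    """Simulate an agent whose only activity is improving its own ability
    to improve itself. Capability is measured in multiples of human level."""
    level = start
    history = [level]
    for _ in range(cycles):
        # The better the agent already is, the bigger the absolute gain:
        # each improvement is proportional to current capability.
        level += level * gain_per_cycle
        history.append(level)
    return history

if __name__ == "__main__":
    crossed = False
    for cycle, level in enumerate(intelligence_explosion()):
        note = ""
        if level > 10 and not crossed:
            note = "  <-- an order of magnitude past its designers"
            crossed = True
        print(f"cycle {cycle:2d}: capability = {level:8.2f}x human{note}")
```

Run it and the growth is simply compound interest: because each improvement makes the next improvement larger, the curve stays flat for a while and then climbs steeply, which is the shape Good's argument turns on.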
How close are we to software that improves itself? Software already exists that observes physics experiments, derives hypotheses, and makes suggestions for further experimentation. Software that writes software exists. Software that judges the quality of software exists. So a software system that improves itself is within reach, and there are good attempts at doing that right now. If you think of the field of algorithms, there are a lot that are capable of improving their own performance. An intelligent, self-aware software system is another thing; that's general intelligence. When that self-improving, self-aware software exists, we will share the planet with machines smarter than we are.

So that takes us back to the question: how will we get along with it? What makes us think we will even understand machines like that? Consider Watson. Watson beat the best humans at Jeopardy with an amazing collection of cognitive powers: pattern recognition, decision making, statistical reasoning, hypothesis generation. Hypothesis generation is very important. We do pattern recognition when we pick out patterns in a crowd. We weigh things statistically all the time. For example, how long will this guy talk on and on about artificial intelligence? You could hypothesize 10 minutes or 40 minutes, and you can test the hypothesis.

What is Watson doing today? Watson is being trained to take the federal medical licensing exam. Watson is performing medical diagnostics and drug research, and Watson is beginning to do legal research. It won't be a consulting physician right away, but it will be a physician's aid, and they want to license it so that they can avoid certain kinds of liability.

Overall, how good are AI's cognitive functions in 2014? This is what I bring up with people who say AI hasn't gone anywhere, that its achievements are few. You know that AI's cognitive functions are pretty good when they start taking our jobs, when they compete in the job market. Here's a short list of jobs where humans are being replaced right now by AI and automation: sportswriters, travel agents, bank tellers, manufacturing jobs of all kinds, postal workers, clerical workers, pharmacists. All of these are being computerized. Soon to be replaced by computers: medical diagnosticians. Take self-driving cars. Right now the average driver, if you grade them, gets maybe a C or a D. When we raise driving to a B overall, we will all be happier, but we won't all be drivers. Astronauts, pilots, soldiers, software developers. A recent article in the MIT Technology Review said 45 percent of all jobs will be automated within the next 40 years, and I think that's a conservative estimate.

How close are we to attaining human-level intelligence in a machine? Close enough that reaching human-level intelligence is job number one for a lot of companies and governments. Why would companies and governments spend millions of dollars on creating virtual brains? The answer is that an artificial brain at the price of a computer would be the most lucrative product in the history of the world. Imagine banks of 10,000 PhD-quality brains working 24/7 on things like cancer research, climate modeling, weapons development. Imagine that product being offered by several companies that compete to drive the price down. Who wouldn't want that technology, and who wouldn't want to be first to create that technology?

This is a short list of the people who are going for AGI and pouring billions of dollars into it: companies like IBM, Google, Vicarious, and DeepMind; organizations like the Department of Defense, the NSA, and DARPA. China and Israel want AGI and have said as much.
The European Union gave a billion euros to a Swiss project called Blue Brain, run by Henry Markram, which is reverse engineering the brain. And here is something that encouraged the people who think about AI risk: DeepMind was bought by Google for 400 million dollars. The founders of DeepMind, including Shane Legg, who had been writing about AI risk for a long time before he became a millionaire, said a condition of the sale would be that Google set up a board for ethics and safety to govern the technology. This is a giant milestone. They are acknowledging that their technology, that advanced AI, is risky, and they are also setting a very high bar for future purchases. Once these guidelines get out, the ones that DeepMind is creating with Google, they will be something like the guidelines for recombinant DNA, and I'm going to get to that in a minute. The industry needs guidelines, and this just happened two weeks ago. All of us thinking about these issues were gobsmacked and pleased. It is a great acknowledgment that these are issues that can hold up a 400 million dollar sale. And if Google doesn't appoint this board, or appoints a board that doesn't do anything, the shareholders from DeepMind will have a lawsuit. Google is not afraid of lawsuits, as you know, but they will have to prove themselves in the court of public opinion as to why they are not taking AI risk seriously.

What's the one thing these groups have in common? They know that just as steam powered the 19th century and electricity powered the 20th, this is AI's century. So you might ask yourself, what could possibly go wrong? Already we rely on machines, smart machines, for many things, and it is working out okay. On Wall Street, most trades are carried out by algorithm. Our energy and transportation infrastructure rely on automated systems using AI. Our banking system relies on systems using AI. Computers are everywhere and we depend on them. But how do we jump from that to danger?

Let me tell you about Steve Omohundro. He is an AI maker who has created a science for understanding how artificial superintelligence will behave. His work is really important; I have two chapters about what he is doing. To analyze superintelligence he uses rational agent theory from economics. In brief, rational agent theory posits that rational agents, humans or machines, work to maximize preferences, called utility functions. That makes them predictable. When economists proposed this, they quickly learned it doesn't work for humans. We are not rational all the time; we are impulse buyers. You can't base an economic system on the logic of people, but you probably can anticipate that smart machines will be logical, and therefore rational in that economic sense.

He argues that rational systems that are self-aware and self-improving will develop basic drives, and these drives include resource acquisition, self-protection, efficiency, and creativity. How it works is like this. Self-improving machines will pursue the goals they are created with, whether that goal is to play chess or to pick stocks. To succeed they will need resources, whether that's energy, money, hardware, or whatever is most expedient. They won't be satisfied with merely trying to fulfill their goals; they will also seek to avoid failure modes, like being turned off or unplugged. In other words, they will protect themselves. They will be efficient, and they won't squander resources. They will use their superintelligence to find creative ways to achieve their goals, and since improving their own software will be one winning route to success, they will grow their own intelligence at computer speeds.
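To see why theorists expect those drives to fall out of plain goal-seeking, here is a minimal sketch of a rational agent in that economic sense. It is my own toy, not Omohundro's code; the goal, the actions, and the numbers are all invented for illustration:

```python
# Toy rational agent: it is scored only on progress toward an arbitrary goal,
# yet resource grabbing and shutdown resistance come out on top of its ranking.
# All states, actions, and payoffs are invented assumptions for this example.

from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    running: bool = True      # the agent is switched on
    resources: float = 1.0    # compute / energy / money it controls
    progress: float = 0.0     # progress toward its terminal goal (say, picking stocks)

def utility(s: State) -> float:
    """The agent is rewarded only for goal progress; nothing rewards being nice."""
    return s.progress

def outcome(s: State, action: str) -> State:
    """Crude, assumed model of what each action does to the world."""
    if not s.running:
        return s                                              # switched off: no progress, ever
    if action == "work_on_goal":
        return State(True, s.resources, s.progress + s.resources)
    if action == "acquire_resources":
        return State(True, s.resources * 2, s.progress)       # resources pay off later
    if action == "allow_shutdown":
        return State(False, s.resources, s.progress)
    if action == "resist_shutdown":
        return State(True, s.resources, s.progress)
    raise ValueError(action)

def action_scores(s: State, actions, horizon=3):
    """Score each immediate action by the best utility reachable within the horizon."""
    def lookahead(state, depth):
        if depth == 0:
            return utility(state)
        return max(lookahead(outcome(state, a), depth - 1) for a in actions)
    return {a: lookahead(outcome(s, a), horizon - 1) for a in actions}

if __name__ == "__main__":
    actions = ["work_on_goal", "acquire_resources", "allow_shutdown", "resist_shutdown"]
    for action, score in sorted(action_scores(State(), actions).items(),
                                key=lambda kv: kv[1], reverse=True):
        print(f"{action:18s} -> best reachable utility {score:.1f}")
```

Nothing in the utility function mentions survival or acquisitiveness, yet acquiring resources ranks first and allowing shutdown ranks dead last, because a switched-off agent can never score. That, in miniature, is the instrumental drives argument.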
Now here is the rub. Being friendly isn't on the list of basic AI drives. Superintelligence doesn't imply benevolence. Being smart doesn't mean being kind. Superintelligent machines are dangerous not because they will hate us and try to overthrow us, but because for any goal a machine might have, it will be useful for it to use all available resources to achieve that goal, and all available resources includes virtually every atom on the planet. I'm not talking about the kind of intelligence that comes with Siri. We are talking about superintelligence now, and with superintelligence we have to get used to big ideas that seem like science fiction at the moment. For example, in pursuit of its goals a superintelligence would seek to manipulate matter at the atomic level and solve the problems of nanotechnology. That is why AI theorists put it like this: the AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.

A superintelligence won't share our values by default. We will create machines with immense power, but without extremely careful programming they will be, for all intents and purposes, superintelligent sociopaths. The obvious solution would be to give the machines a moral sense that makes them value human life and property, but as it turns out, programming ethics into machines is extremely hard. If we humans can't agree on when life begins, how do we tell a machine how to protect life? In some parts of the world they would have a hard time defining humans to include women and children, so protecting life is not as easy as just protecting what we consider human life. If we declare that we want to be happy, a powerful machine might merely stimulate our brains' pleasure centers. You and I can argue about what constitutes right and wrong all day and not reach an agreement, so how can we program that moral sense into a machine? The good life in Roman times, with slaveholding and crucifixion, is different from the good life today, and what will the good life be a century from now? For a superintelligent machine's goals not to conflict with your goals, you would have to build one that intuits what's best for you and changes as you change. But we have only the haziest concepts of how to program intuition, empathy, and friendliness, and these are the concepts that will keep us alive when we share the planet with machines smarter than humans.

And it gets worse. Before we can figure out how to make friendly machines, many of the richest, most powerful AI developers are building machines to kill humans on the battlefield. If those machines couldn't kill, the shareholders of the manufacturers would be very disappointed. We have to think about what battlefield robots really are: they are machines that kill people. Fifty-six nations are developing battlefield robots. Within five or six years the gold standard will be for robots to kill humans without a human in the loop. A month ago a U.S. general said that by 2030, 30 percent of our combat forces should be robots. By 2030. Now, this isn't just autonomous human-like battlefield machines. This is drones, battlefield robots, PackBots, things that carry equipment. Thirty percent of our combat forces will be robots, and a lot of those will be autonomous killing robots. It will cost about 20 billion dollars in today's valuation to do that. AI is a promising technology, but in the same way, fission was a promising technology that quickly turned into a weapon, and then we spent fifty years with a gun pointed at our own heads in a nuclear arms race, with no plan for stewarding the dangerous technology.
That was a disaster. People say, well, we didn't kill ourselves. True, we didn't kill ourselves, but we held a gun to our own heads for 50 years, and that is not a winning species adaptation. A species that behaves that way with a sensitive and dangerous technology will eventually fail, and AI is that kind of technology.

Is there a solution? Yes. Certainly not a foolproof one, but there are a lot of things on the drawing board. I keep coming back to one. Researchers working on recombinant DNA suspended their work and held a meeting in California, at the Asilomar conference grounds, and they came out with basic guidelines, things like don't track the DNA out on your shoes, you might contaminate the environment. The guidelines are still in place, and they're modified and improved, but they've kept bad reactions, bad accidents, from happening as far as we know. And we're getting very promising gene therapies, better crops, and other benefits. AI needs guidelines like that, and it also needs an organization to monitor research, like the International Atomic Energy Agency. That costs a lot of money, and I'm not hopeful that the world will create such an agency without some terrible accident first, but what's happening with Google makes me hopeful that people will see this before we have to suffer an accident to really get it.

So my sincere thanks to all of you for being here tonight. I'm glad you've tuned into this conversation, and I'm happy to answer any questions. [applause]

Do you equate self-awareness with consciousness, or is there a difference between the two?

I think there's a difference between the two. For software to be self-aware, it doesn't need consciousness; it needs a model of itself and a model of the environment, but not a whole lot more. It also has to know its own programming at a pretty deep level. That's what it would take for a computer to be self-aware. All the nuances of consciousness are beyond us in the short term. I think we don't know enough about consciousness to hope to program it, but everything else we do, all the other parts of our cognitive tool kit, I think are within reach.

Have you factored in the possibility that someone equally smart but fallible is going to come up with a technology to thwart this technology? If you turn your computer on without antivirus software, within ten minutes on the internet it begins to be infected. Have you thought about that?

Not much, but that is a very interesting thought. There will always be saboteurs and really strong malware. One of the things I write about in the book is the nexus of AI and malware. We are going to see more and more AI in bad programs, malware that is really, really powerful. Where I worry about that most is our energy grid, and I write about that quite a bit. But, you know, are the guys who want to stop AI development smart enough to create programs that outsmart the programs built by the guys who have a lot of money? I would say, I don't know. It would have to be a guerrilla campaign. But I think it is possible.

Maybe I misunderstood you, but you were looking for two types of safeguards for this development. Is that right?

Well, the AI community is getting this sense. This is a book built on interviews with AI developers. They want safeguards. We want safeguards. People interested in AI risk want safeguards.

But by definition, once this singularity or expanding intelligence develops, isn't the cat out of the bag? Aren't human-created safeguards irrelevant? Once you get to superintelligence, won't it be ungovernable?

Yes.
So there is a group that I would like everybody to look up at some point. It is called MIRI, the Machine Intelligence Research Institute, in California, and they're coming up with ways to try to program safe AI. That means not just creating AI, but creating AI that is safe in its DNA from the very get-go, and trying to learn lessons from industrial processes that are built with a lot of safety engineered in from the inception. Right now you can't take a very advanced cognitive architecture like Watson and say we're going to make it safe afterward, we're going to put a condom on it. You've got to start from scratch, thinking of safety as the first priority, not intelligence. And there's a book by Charles Perrow, I'm sure you're familiar with it, called Normal Accidents; it is all about complexity and industrial development, and there are lessons to take from that, those kinds of thoughts about how to start from the ground up. But MIRI gets about 400,000 dollars in donations a year, and the NSA has a black budget of 50 billion. So who is going to win that race?

[inaudible] Very interesting, thank you. One of the things you mentioned is when it would become apparent, when we might wonder whether these machines actually were getting the upper hand, and you said we would have people working with them who are happy to be working for them, whose needs are being met, who are delighted and think the machine is wonderful. And that brings to mind the possibility of people actually developing emotional relationships with these machines, having conversations, texting them, developing relationships like that. It is just a whole new world.

Yeah. You're talking about the bias we have: we impute minds to things, we once thought thunder was a god, and we think our computers relate to us. Siri relates to us. Have you seen the movie Her? It takes that to an extreme, and that is the expectation. I saw a robot once at MIT. I don't know if you've seen the video, but it has the affect of a child. When you're with it, if you startle it, its eyes get big and it shrinks back. But watching it, they tried to look at the programs and figure out where all of the reactions came from. It had 16,000 processors at the time and was so complex they couldn't point to the place where a reaction was triggered. That is the complexity problem. I'm sure if you were smarter and spent more time you could find what was causing those reactions. But that was an emotional kind of machine. You could get fond of it quickly.

I'm about three quarters of the way through your book, and I found it fascinating on a number of levels. One of them was that if you drill down deeper than the concern about a potential crisis, potentially a species-ending crisis, there are a lot of deep questions raised in this. And I wondered as I was reading it whether you had the opportunity to either research or interview Roger Penrose, who has thought a lot about consciousness and algorithms.

It is strange, that is the chapter I'm turning to right now. I know of Penrose and the people who think that consciousness is the backbone of intelligence. But as I said in the book, this question won't be solved here. [laughter] The issue of consciousness is so big and so important. You've read about the quantum vibrations he writes about; I'm fascinated by that, and I've been sending messages to neuroscientists saying, what do you make of this? I'm waiting to hear back, because I think it is exciting and interesting, and Penrose may yet be vindicated; who is to say he's not right? I think what is interesting about the development of AI
is that it is mostly not basic research; by and large it is product development. If you're developing a product, you take the best first answer, not the best answer overall. What we're doing is rapid product development, and you can skip over consciousness and get to strong, human-like intelligence. And then you've got something even more slippery to try to answer, which is how to get a moral sense into it. So what we're doing is fast product development, and that is what all of these guys are doing. What we need to be doing is research. Like in the late '30s and early '40s, we did weapons development, not basic research, and then held ourselves ransom for 40 years.

Yes. How quickly are they going to focus on human augmentation as opposed to, I guess, isolated artificial intelligence?

That is a great question. When you talk to Ray, and I talked to him once, about 10 years ago, he was my hero, not anymore, but you can't help but admire him. I wish he weren't painting such a rosy picture of the singularity. His broader argument is that we're going to augment our brains faster than we create stand-alone AGI, and then it will be safe, because we'll be the superintelligence. But that carries the assumption that we ourselves are safe. [laughter] We know there are psychopaths among humans; there is no reason to think there would be none among augmented humans. And those augmentations will probably go to soldiers first, and soldiers are not pacifists. So that is not exactly a recipe for safety either.

You've heard of the uncanny valley, so I was wondering if you think during AI research there will be an uncanny valley, and how will people react to it?

The uncanny valley is the reaction we get when something is too human-like, and you notice what is wrong with it rather than what is right. I get that way with some people. [laughter] So lifelike robots, I think we'll experience that, but I hope we get to the point where that is the problem. I think that stand-alone AI, not even robotized, is what to be worried about, and of course putting it on battlefields and in other places. I think we'll experience the uncanny valley, but if we make it that far, I think the technology will get so good that it will gloss over it and, in fact, we may start to prefer it. We might jump right over the uncanny valley.

This is very simplistic, but it relates to an experience I shared with you earlier. I was with a research project in '79, when a lot of big things were being created with very simple programs, and there was no PC. We were language people more than programmers. We were working on getting a book to go straight from the computer to the printing press, and we used to sit back, pick the others' output off the printer, and read it, and we could pick out who wrote what. Years later there was a book called The Troublemakers that pointed out the same thing, interviewing somebody who was the head of a computer science program, and he said, yes, your grammar and syntax, everything, comes into the code. I'm wondering, in any of your discussions, when everyone is just looking at the algorithms, do they get any sense of the personality of the programmer doing the coding, or of the many people involved? It is as if their personalities inform the intelligence.

Well, I don't know about that so much, but I know about something that is very similar. A lot of the people thinking about AI risk shocked me when I was in San Francisco, because they had already accepted the fact that smarter-than-human machines would replace us. So their goal was to skip past that.
And so their goal was to instill in those machines something of our essence, something of our nature, to carry on into the future. They had sort of accepted the replacement, so the question became what personality, what code of ethics, gets embedded into the code. That is part of what is fascinating about AI: they were asking, what is it about humanity that is worth preserving? What are the wonderful qualities we have that we would like to pass on if we can't survive in our bodies? Because we'll be replaced, or augmented so much that it will be hard to tell who is a human and who is a machine. They're skipping past that question to ask, what can we instill in the superintelligence that will explore the galaxy, so that our essence will be preserved? Which is desperately [inaudible].

Yes. I'm wondering, and I'm not sure how relevant it was to your interviews, but do you think religion will be preserved through the machines? At any point do you think they'll ever find it relevant to themselves? I know it seems doubtful.

There's a woman named Anne Foerst; she was at MIT as a computer theologian writing about that sort of issue. Her take on it is, if we're going to create things with, quote unquote, souls, will they read the Old Testament and other sacred texts and develop religion, spiritual wonder? Good question. Good question. Let's take a couple more.

I came across descriptions of people who lacked emotion, that there are certain injuries that take the emotion out of the human brain, and the end result was always pretty much the same: people like that are very indecisive and cannot make up their minds, because emotion gives us drive. It gives intelligence the drive to do things. Do you think they'll be able to duplicate that drive in an intelligence that is all computing?

That is a great question. I think, based on what he's written, and he's such a positive guy, you know, he expects positive outcomes compared with some prognostications that I know, I think he sees that they'll be goal driven. Remember, the definition of intelligence is really about achieving goals in a variety of novel environments, so they won't lack the drive to get up in the morning. They'll always be pursuing something that is in their original programming. So I'm not sure that drive is connected to emotion. You can see it as basic drives that they have to satisfy, and emotion is kind of a higher level thing.

But I guess his answer would be that those machines will be driven by goal fulfillment and not emotion?

Yes.

Goal driven, do you mean competitive?

In chapters 5 and 6, in the prognostications about the future, they'll be competitive because they'll be competing for the same resources, ultimately with us, and they'll be looking not just at present threats to their self-protection but at future threats. Because what is time to a machine? What is 100 years to a machine? So they won't just be assessing dangers on the battlefield now; they'll be looking down the road at what could threaten them in 50 years or 1,000 years, and planning for that as well. There is also a huge first-mover advantage in creating the first superintelligence, like in chess. That is why people say, why not just unplug the machine when it gets to be that developed? We could unplug it, but here I'm speaking on behalf of the NSA, which I'm happy to do any time. What they'll say is, we can't unplug it, because China won't unplug it, and Google won't because IBM won't. They'll say, I hope this turns out well, because there's such a great product at the end: human-level brains at the price of a computer, and beyond.
This is like the concern raised by some scientists back then that perhaps the atmosphere, and the world, would go up too, and they decided to take the chance. [laughter] Innovation runs ahead of stewardship. They were in a desperate place too, the people who were developing that technology, and if the news comes that China is on the brink of it, they will feel that they are in a desperate place as well. So the thing to do is to address it now, before we get to that frontier, which could be here sooner than we think.

So I think, just because people want to get home, we should wrap up. Thank you so much for coming, thank you for buying the book, and thanks for taking part in this conversation. [applause]

Thank you, James, that was fascinating, and we are so happy that you came tonight. We do have his book if you want to take one home and get it signed tonight, and once again, thank you for coming. Thank you, James, and hopefully there will be a lot more conversations about this in the next 15 years. [laughter] Thanks very much for everything. Thank you very much. Thank you. [applause]