Good evening everyone, welcome, and thank you for supporting your local independently owned bookstore. C-SPAN is filming, so please make sure all phones are on silent. I do want to let you know about a couple of events we have coming up this month. Tomorrow night Haley will be reading from her new novel. On Tuesday, Stephen Kinzer is going to present his history of the CIA. We also have offsite events, and tickets are still available for Monday's conversation and for Jonathan's talk on Wednesday. Tonight we welcome Gary Marcus, author of Rebooting AI, which argues that a computer beating a human at Jeopardy does not signal that we are on the doorstep of superintelligent machines. Taking inspiration from the human mind, the book explains what we need to do to advance AI to the next level, and suggests that if we are wise along the way, we won't need to worry about machine overlords. Finally, a book that tells us what AI is, what it is not, and what AI could be, if only we are ambitious and creative enough; one reviewer calls it a lucid and deeply informed account. Gary Marcus is the founder and CEO of Robust.AI and was founder and CEO of Geometric Intelligence. He has published in journals including Science and Nature, and he is the author of Guitar Zero and now Rebooting AI. Thank you.

>> Thank you very much. [applause] This is not what we wanted to see; let's see if it will work. This is not good. Okay, maybe it will be all right. We're having some technical difficulties here.

I am here to talk about this new book, Rebooting AI. Some of you might have seen an op-ed I had in The New York Times called "How to Build Artificial Intelligence We Can Trust." I think we should all be worried about that question, because people are building a lot of artificial intelligence that I don't think we can yet trust. The way we put it in the piece: artificial intelligence has a trust problem. We are relying more and more on AI, and it has to earn our confidence. We also suggest there is a hype problem. A lot of AI is overhyped these days, often by people who are very prominent in the field. Andrew Ng, a leader in deep learning, the major approach to AI these days, says that if a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future. That is a profound claim: anything you can do in a second, we can get AI to do. If it were true, the world would be on the verge of changing altogether. It may be true someday, 20 years, 50 years, a hundred years from now, but I will try to persuade you it is just not remotely true now.

The trust problem is this: we have things like driverless cars that people think they can trust, that they should not actually trust, and sometimes they die in the process. This is a picture from a few weeks ago in which a Tesla crashed into a stopped emergency vehicle. This has actually happened five times in the last year: a Tesla on Autopilot crashing into a vehicle on the side of the road. Here's another example. I work in the robot industry, and I hope this never happens to my robot. This robot is a security robot that basically committed suicide by walking into a little puddle. [laughter] You've got people saying machines can do anything a person can do in a second. A person can look at the puddle in a second and say, "maybe I shouldn't go in there," and the robot can't.
We have other kinds of problems too, like the bias people have been talking about lately. You can do a Google image search for the word "professor," and you get back something like this, where almost all of the professors are white males, even though statistics say that in the United States only 40 percent of professors are white males, and the number would be even lower than that around the world. We have systems that are taking in a lot of data, but they don't know whether the data is any good, and they just reflect it back out. That perpetuates cultural stereotypes.

The underlying problem with artificial intelligence right now is that the techniques people are using are simply too brittle. Everybody is excited about something called deep learning, and it really is pretty good for a few things, actually for many things. For object recognition, you can get deep learning to recognize that this is a bottle and maybe this is a microphone. Or you can get it to recognize my face and maybe distinguish it from Uncle Ted's face; I would hope it could do that. Deep learning can help some with radiology. But it turns out that all of the things it is good at fall into one category of human thought, of human intelligence. The category they fall into is perceptual classification: you see a bunch of examples of something, and you have to identify further examples of things that look the same, sound the same, and so forth. That doesn't mean this one technique is useful for everything. I wrote a critique of deep learning a year and a half ago, which you can find online for free, called "Deep Learning: A Critical Appraisal." I said deep learning is greedy, brittle, opaque and shallow. That's not dismissing deep learning; the things people are excited about are real, but it doesn't mean it's perfect, and I'm going to give you some examples.

First I'm going to give you a real counterpart to Andrew Ng's claim, because if you were running a business and wanted to use AI, for example, you need to know what AI can actually do for you and what it cannot do for you. And if you are thinking about AI ethics and wondering what machines might be able to do soon or not do soon, I think it's important to realize the limits on current systems. Here's my counterpart: if a typical person can do a mental task with less than one second of thought, and we can gather an enormous amount of data that is directly relevant, we have a fighting chance to get AI to work on it, so long as the test data, the things we make the system work on, are not too terribly different from the things we trained the system on, and the problem does not change very much over time.

That is a recipe for games, and games are exactly what AI is good at right now. AlphaGo is the best Go player in the world, better than any human, and it fits exactly what these systems are good at. The game hasn't changed in 2,500 years, we have a perfectly good set of rules, and we can gather as much data as we want, almost for free: you have the computer play itself, or different versions of itself, which is what DeepMind did in order to make the best Go player, and it can keep playing itself and keep gathering more data. Now compare that to, let's say, a robot that does elder care. You don't want a robot that does elder care to collect an infinite amount of data through trial and error, working some of the time and not working at others.
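To make that contrast concrete, here is a toy sketch in Python of the self-play recipe just described. This is purely illustrative, random-move tic-tac-toe rather than anything like DeepMind's actual pipeline; the point is only that in a fixed-rule game, labeled training data is effectively free and unlimited.

```python
# Toy sketch of self-play data gathering: a fixed-rule game can label
# its own data, as many games as you have compute for.
import random

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game():
    """Play one random-move game; return every position plus the final result."""
    board, states, player = ["."] * 9, [], "X"
    while winner(board) is None and "." in board:
        move = random.choice([i for i, s in enumerate(board) if s == "."])
        board[move] = player
        states.append("".join(board))
        player = "O" if player == "X" else "X"
    return states, winner(board) or "draw"

dataset = []
for _ in range(10_000):
    states, result = self_play_game()
    # Label every position with the eventual outcome: free supervision.
    dataset.extend((s, result) for s in states)

print(f"{len(dataset)} labeled positions gathered at zero real-world cost")
```

Every one of those ten thousand games costs nothing: no lawsuits, no dropped grandpas. That is exactly why games are the easy case.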
If your elder care robot works 95 percent of the time putting grandpa into bed and drops grandpa the other 5 percent of the time, you are looking at lawsuits and bankruptcy. It does not fly to use that kind of trial-and-error AI to drive an elder care robot.

The way deep learning works, when it works, is with something called a neural network, which was depicted at the top of my slide. It is fundamentally taking big data and making statistical approximations. It takes labeled data: you label a bunch of pictures of Tiger Woods, a bunch of pictures of golf balls, a bunch of pictures of Angelina Jolie, and then you show it a new picture of Tiger Woods that is not too different from the old pictures, and it correctly identifies it as Tiger Woods and not Angelina Jolie. This is the sweet spot of deep learning. People got really excited about it when it first started getting popular; magazines had articles saying deep learning would soon give us super-smart robots. We've already seen an example of a robot that's not all that smart, and I'll say more later. The promise has been around for several years but has not been delivered on. There are limits even to perception, and then I'm going to talk about something else, which is reading.

So on the right were some training examples: you teach the system that these things are elephants. If you showed it another elephant that looked a lot like those on the right, the system would have no problem at all, and you would say, wow, it knows what an elephant is. But suppose you showed it the picture on the left. The deep learning system responds "person." It mistakes the silhouette of an elephant for a person, and it is not able to do what you would be able to do, which is first of all to recognize it as a silhouette, and second of all to notice that the trunk is really salient, so it's probably an elephant. This is what you might call extrapolation, and deep learning can't do it. Yet we are trusting deep learning more every day. It is getting used in systems that make judgments about whether people should stay in jail or whether they should get particular jobs, and it is really quite limited.

Here's another example, making the same point about unusual cases. If you show it this picture of a school bus on its side in a snowbank, it says with great confidence, well, that's a snowplow. What that tells you is that the system cares about things like the texture of the road and the snow; it has no idea of the difference between a snowplow and a school bus, or what they are for. It is fundamentally mindless statistical summation of correlations; it doesn't know what's going on. This thing on the right was made by people at MIT. If you are a deep learning system, you say it's an espresso, because there's foam there. It's not super visible because of the lighting here, but the system picks up on the texture of the foam and says espresso, because that's the salient thing about espresso. It doesn't understand that it's a baseball. Another example: you show a deep learning system a banana, and you put a sticker in front of the banana, a kind of psychedelic toaster. Because there's more color variation and so forth in the sticker, the deep learning system goes from calling the top one a banana to calling the bottom one a toaster. It doesn't have a way of doing what you would do, which is to say something like "it's a banana with a sticker in front of it." That's too complicated. It says which category something belongs to; that's all deep learning does, identify the category. If you are not worried that this stuff is starting to control our world, you're not paying attention. Does this get the next slide out of this? Maybe not.
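For the programmers in the audience, here is roughly what that "everything is a category" setup looks like in practice: a minimal sketch using a standard pretrained classifier from torchvision. The image file name is hypothetical; any photo will do.

```python
# Minimal sketch of perceptual classification: a pretrained network maps
# an image onto one of a fixed set of labels, and that is all it can do.
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()      # resize/crop/normalize as the model expects

img = Image.open("banana.jpg")         # hypothetical input image
batch = preprocess(img).unsqueeze(0)   # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top = probs.argmax(dim=1).item()
print(weights.meta["categories"][top], float(probs[0, top]))
```

Whatever you show it, a banana, a banana with a psychedelic sticker in front of it, a school bus on its side in a snowbank, the only thing it can ever say is one of its 1,000 fixed category labels, with a confidence score attached.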
I'm going to have to go without the slides, I think, because of the technical difficulties. I will continue, though; there's just nothing I can do. One second here to look at my notes.

Okay, so next I was going to show you a picture of a parking sign with stickers on it. It would be better if I could show you the actual picture, but presenting slides over the web is not going to work. So: a parking sign with stickers on it, if you can imagine that. The deep learning system calls it "a refrigerator filled with a lot of food and drinks." It's completely off. It responds to something in the color and the texture, but it doesn't really understand what's going on. Then I was going to show you a picture of a dog that's doing a bench press with a barbell. Yes, something has gone wrong. Thank you for that. We'll just shut it down. ["Can I help you?"] No, I'd need a laptop, and I don't think they'd be able to do it fast enough, so I'll go on. You have a picture of a dog with a barbell, and it's lifting the barbell. A deep learning system can tell you there's a barbell there and a dog, but it can't tell you, "hey, that's really weird, how did the dog get so ripped that it can lift a barbell?" The deep learning system has no concept of the things it is looking at.

Current AI is even more out of its depth when it comes to reading, so I'm going to read you part of a short story that Laura Ingalls Wilder wrote. It's about Almanzo, a nine-year-old boy who finds a wallet full of money that was dropped on the street. Almanzo's father guesses that the wallet might belong to a Mr. Thompson, and Almanzo finds Mr. Thompson in a store. Here's what Wilder wrote: Almanzo turned to Mr. Thompson and asked, "Did you lose a pocketbook?" Mr. Thompson jumped. He slapped his hand to his pocket and shouted, "Yes, I have! Fifteen hundred dollars in it, too. What do you know about it?" "Is this it?" Almanzo asked. "Yes, yes, that's it," he said. He opened it and hurriedly counted the money. He counted all the bills over twice, and then he breathed a sigh of relief and said, "Well, that boy didn't steal any of it."

When you listen to that story, you form a mental image of it. It might be a vivid image or not, but you come to know a lot of things, like that the boy hasn't stolen any of the money, or where the money might be. You realize why Mr. Thompson reached into his pocket looking for the wallet: because you know wallets occupy physical space, and if your wallet is in your pocket you will recognize it, and if you don't feel anything it won't be there, and so forth. You know all of these things; you can make a lot of inferences about how everyday objects work and how people work, so you can answer questions about what's going on. There's no AI system yet that can actually do that.

The closest thing we have is a system called GPT-2. It was released by OpenAI, which some of you might have heard about, first of all because it's famous, because Elon Musk co-founded it, and because it made a promise to give away all of its AI for free; that's part of what makes this story so interesting. They gave away all their AI for free until they made this thing called GPT-2, and they said GPT-2 was so dangerous that they couldn't give it away: an AI system supposedly so good at human language that they did not want the world to have it. But people figured out how it worked, made copies of it, and put them to use on the internet. So my co-author and I fed the Almanzo story into it. Remember, Almanzo has found the wallet, he has given it to the guy, the guy has counted the money, and he is super happy. What you do is feed in the story, and the system continues it.
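For anyone who wants to reproduce the experiment, here is a minimal sketch assuming the publicly available GPT-2 weights, accessed through Hugging Face's transformers library. The prompt here is abridged; we fed in the fuller passage.

```python
# Minimal sketch: feed a story into GPT-2 and let it continue the text.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

story = (
    "Almanzo turned to Mr. Thompson and asked, 'Did you lose a pocketbook?' "
    "Mr. Thompson jumped. He slapped his hand to his pocket and shouted, "
    "'Yes, I have! Fifteen hundred dollars in it, too. What do you know about it?'"
)

result = generator(story, max_new_tokens=60, do_sample=True)
print(result[0]["generated_text"])
# The continuation will be fluent, grammatical English; whether it makes
# any sense is another matter, because the model tracks word statistics,
# not wallets.
```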
Here is how it continued it: "It took a lot of time, maybe hours, for him to get the money from the safe place where he hid it." This just makes no sense. It's perfectly grammatical, but if he just found the wallet, what was it doing in a safe place where he hid it? The system is predicting words that are correlated in some vast database. It's completely different from the kind of understanding little children have.

The second half of the talk, which I will do without visuals, is called "looking for clues." The first clue, I think, as we develop AI further, is to realize that perception, which is the part deep learning does well, is only part of what intelligence is. Some of you, especially here in Cambridge, may know the idea of multiple intelligences: verbal intelligence, musical intelligence, and so forth. As a cognitive psychologist, I would also say there are things like common sense, planning, and attention; there are many different components. What we have right now is a form of intelligence that is just one of those. It's good at doing things that fit with that, good at perception, good at certain kinds of game playing, but that doesn't mean it can do everything else. The way I think about it is that deep learning is a great hammer, and we have a lot of people looking around saying, "because I have a hammer, everything must be a nail." Some things actually do work with that, but there has been much less progress on language. There has been exponential progress in how well computers play games, and essentially zero progress in getting them to understand conversation. That is because intelligence itself has many different components; there is no silver bullet that will solve it.

The second thing I wanted to say is that there is no substitute for common sense. We really need to build common sense into our machines. I wanted to show a picture of a robot up a tree with a chainsaw, cutting on the wrong side of the branch, if you can picture that; it's about to fall down. This would be very bad, and you would not want to solve it with the popular technique called reinforcement learning, which requires many, many trials. You would not want a fleet of a thousand robots with a hundred thousand chainsaws making a hundred thousand mistakes. That would be really bad.

Then I was going to show a really cool picture of something called a yarn feeder, which is like a little bowl with some yarn inside and a string that comes out of a hole. As soon as I describe it to you, you have enough common sense about how physics works, and about what I might be wanting to do, to understand it. Then I was going to show a really ugly one, and you could recognize that one too, even though it looks totally different, because you get the basic concept. That's what common sense is about.

Then I was going to show you a picture of a Roomba, the vacuum cleaner robot, and then a dog doing its business, as you might say. The Roomba doesn't know the difference between the two. What I was going to show you is called the "poopocalypse," which is something that has happened not once but many times: Roombas that don't know the difference between what they need to clean up and dog waste, and that spread the dog waste all over people's houses. Sometimes it's been described as a Jackson Pollock; it is an artificial intelligence common sense disaster.

All right. Then, what I really wish I could show you most is my daughter climbing through chairs, sort of like the ones you have now. My daughter at the time was four years old. You are sitting in chairs where there's a space between the bottom of the chair and the back of the chair. When she was four years old, she was just small enough to fit through.
Now, she didn't do this with what would be called reinforcement learning, which would mean trying hundreds of thousands of times. I was never able to climb through the chair; I was a little bit too big, even when I was in good shape and exercising. And she had never, for those who know it, watched the television show The Dukes of Hazzard, where they climb through a window to get inside a car; she had never seen that. So she just invented a goal for herself. And I think this is how human children learn things: "Can I do this? I wonder if I could do that? Can I walk on the small ridge on the side of the road?" I have two children, five and six and a half, and all day long they just make up games: "What if it were like this? Could I do that?" She tried this, and she learned it essentially in one minute. She squeezed through the chair, solving a little problem. This is very different from collecting a lot of data with a lot of labels, the way deep learning does it right now. I would suggest we need to take clues from kids about how to do these things.

The next thing I was going to do is quote Elizabeth Spelke, who teaches at Harvard down the street. She has made the argument that if you are born knowing there are objects and sets and places, things like that, then you can learn about particular things. If all you have is pixels and videos, you can't do that; you need a starting point. This is what people call the opposite of the blank slate hypothesis. And I like to show a video of a baby ibex climbing down a mountain. Nobody ever wants to think humans have anything innate other than temperament; people don't want to think humans are built with notions of space and time and causality, as Spelke argues and as I am suggesting AI should have. But nobody has any problem thinking animals might be built that way. I show the baby ibex climbing down the side of a mountain a few hours after it is born. Anybody who sees this video has to realize something is built into the brain of that baby. There has to be, for example, an understanding of three-dimensional geometry from the minute the ibex comes out of the womb. Similarly, it must know something about physics and its own body. That doesn't mean it can't calibrate, figure out just how strong its legs are and so forth, but as soon as it's born, it knows these things.

The next video I was going to show, and you can look it up online as "robots fail," shows a bunch of robots doing things like opening doors and falling over, or trying to get into a car and falling over. It is sad I cannot show you this right now, but you get the point: current robots are quite ineffective in the real world. The video shows the DARPA robotics challenge, a competition where everybody knew exactly what the events were going to be. The robots were just going to have to open doors and turn dials and things like that. The teams had all done these tasks in computer simulation, but when they got to the real world, the robots failed left and right. The robots couldn't deal with things like friction and wind and so forth.

So, to sum up: I know a lot of people are worried about AI right now, worried about robots taking over our jobs and killing us all and so forth. There's a line in the book which is something like: worrying about all that stuff now would be like people in the fourteenth century worrying about highway fatalities, when they would have been better off worrying about hygiene. What we should really be worried about right now is not some vast future scenario in which AI is much smarter than people and can do whatever it wants; I can talk about that more in the questions.
What we should really be worried about is the limits of current AI, and the fact that we are using current AI a lot in things like decisions about jobs, decisions about jail sentences, and so forth.

So anyway, on the topic of a robot apocalypse, I have some suggestions for what you could do. The first one is: just close the door. Robots right now can't actually open doors; there is a competition now to teach them how to do that. If that doesn't work, lock the door; there's not even a competition yet to have robots unlock doors. It will be another seven or ten years before people start working on doors where you have to jiggle the handle and pull on the knob and stuff like that. So just lock the door. Or put up one of the stickers I showed you, and you will completely confuse the robot. Or, if that doesn't work, talk in a funny accent in a noisy room. Robots don't get any of that stuff.

The second thing I wanted to say is that deep learning is a better ladder than we have built before; it lets us climb to certain heights. But just because something is a better ladder doesn't mean the better ladder is going to get you to the moon. It can be a really helpful tool, but we have to discern, as listeners and readers and so forth, the difference between a little bit of AI and some magical form of AI that simply hasn't been invented yet.

So, to close, and I would love questions and will take as many as you have: if we want to build machines as smart as people, I think what we need to do is start by studying small people, human children, and how they are flexible enough to understand the world in a way AI is not yet able to do. Thank you very much. [applause] Yes, questions.

>> I am a recently retired orthopedic surgeon, and I got out just in time, because they're now coming out with robotic surgery, as in knee replacements. Have you gotten any information about where that is headed and how good it is, et cetera?

>> The dream is a robot that can completely do the surgery itself. Right now, most of that stuff is an extension of the surgeon, like any other tool. In order for robots to be full service, they need to really understand the biology of what they're working on. They need to understand the relations between the different body parts they're working with. Our ability to do that right now is limited, for the reasons I've been talking about. There will be advances in that field in the next years, but I would not expect, by the time we send people to Mars, whenever that is, if it's any time soon, that we would have the sort of robot surgeon you have in science fiction. We are nowhere near that point. It will happen someday; there's no principled reason why we can't build such things and have machines with a much better understanding of biology and medical training. We just don't have the tools right now to allow them to absorb the medical training. It reminds me of a famous experiment in cognitive development where a chimpanzee was trained and raised in a human environment. The question was, would it learn language? And the answer is no. If you sent a current robot to medical school, it would not learn diddly-squat, and it wouldn't help it become a robotic surgeon.

>> Thank you.

>> You're welcome.

>> Do the current limitations of AI also apply to [inaudible]?

>> I'm sorry I didn't mention self-driving cars very much. They are a very interesting test case. It seems logically possible to build them, but empirically it has proven hard to get them past the outlier cases. That follows directly from what I was saying before.
If your training data, what you train the model on, are too different from what the system sees in the world, it doesn't work very well. So the case of the tow trucks and the fire trucks that Teslas keep running into is probably in part because the models are trained on ordinary data, where the cars are moving fast on the highway. The system sees something it hasn't seen before and doesn't really understand how to respond. So I don't know whether driverless cars are ultimately going to prove to be closer to something like chess or Go, where the problem is bounded enough that we can get the systems to work, or whether it's going to be more like language, with scenes completely outside the training range. People have been working on them for 30 or 40 years. Waymo, the part of Alphabet, which used to be Google, has been working on it for a decade, with relatively slow progress. It looks a lot like whack-a-mole: people solve one problem, and it causes another problem. The first fatality from a driverless car was a Tesla that ran underneath a semi trailer that took a left turn onto a highway. First, it had the problem that the situation was outside the training set; it was an unusual case. Second of all, I've been told, though I can't prove this, that what happened is the Tesla thought the tractor-trailer was a billboard, and the system had been programmed to ignore billboards, because if it hadn't been, it was going to slow down so often that it would get rear-ended all the time. So one problem was solved, the slowing-down-for-billboards problem, and another problem popped up. Whack-a-mole. So what's happened so far is that driverless cars have felt a lot like whack-a-mole: people make a little bit of progress, they solve one particular problem, but they haven't solved the general problem. To my mind, we don't have general enough techniques to even try to solve the general problem. So people are saying, "I'll just use more data, more data, more data," and we get a little bit better, but we need to get a lot better. Right now, the Waymo cars need human intervention about every 12,000 miles, last I checked. That sounds impressive until you note that humans on average have a fatality only every 134 million miles; by that crude measure the gap is roughly a factor of eleven thousand. If you want to get to human level, there's a lot more work to do, and it's just not clear that grinding out the same techniques will get us there. This is again the metaphor: having a better ladder is not necessarily going to get you to the moon.

>> My question is about machine learning. I'm an astronomer, and we are starting to use it in science, where the worry is that if the program is just doing pattern recognition, you don't learn anything. Do you think we are making progress on having machine learning kinds of programs be able to tell us how they are making decisions, in enough detail that it's useful?

>> There's a lot of interest in that. Right now, and it may change, there is a tension between techniques that are relatively efficient and techniques that produce what we call interpretable results, as I guess you know. Right now the best techniques for a lot of perceptual problems, say I want to identify whether this looks like another asteroid I've seen before, are deep learning techniques, and deep learning is about as far from interpretable as you can possibly imagine. People are making progress on interpretability, making it a little bit better, but there's a trade-off right now: you get better results, and you give up interpretation. A lot of people are worried about this problem. I've not seen a great solution to it so far.
I don't think it's unsolvable in principle, but right now it's kind of a moment where the ratio between how well the systems work and how little we understand them is pretty extreme. And going back to driverless cars: we are going to have cases where someone dies, and somebody is going to have to tell the parent of a child, or someone like that, "the reason your child died seems to be that parameter number 317 was a negative number when it should have been a positive number." That will be completely meaningless and unsatisfying, but that's where we are right now. Other questions?

>> What are your thoughts on applications to healthcare diagnostics, and the racial bias that people are concerned about, and also the fact that we can't afford to have any misdiagnosis?

>> So those were three different questions. The first, and I may forget one you mentioned, is: can you use this stuff for medical diagnosis? The answer is yes, but: how important is a misdiagnosis? The more important it is, the less we can rely on these techniques. It is also the case, of course, that human doctors are not completely reliable. The advantage machines have right now for some things, radiology in particular, is that they are pretty good at pattern recognition. They can be as good as radiologists, at least in careful laboratory conditions. Nobody really has, as far as I know, or last I checked, a working real-world system that does radiology in general; it's more like demonstrations that a system can recognize these particular patterns. But in principle deep learning has an advantage over people there. Its disadvantage is that it can't read the medical charts. There's a lot of what's called unstructured text in medical charts, doctors' notes and such, written in English rather than being a big picture or a chart, and machines can't read that stuff at all. Well, they can a little bit: they can recognize keywords and things like that. A really good radiologist, as it was explained to me, works sort of like Sherlock Holmes: "I realize there is an asymmetry here, and it relates to the symptom the patient had years ago," and they put together the pieces of an interpretation of what's going on. Current systems just don't do that. Again, I'm not saying it's impossible, but it's not going to roll out next week. The first cases where AI really has an impact on medicine are probably going to be radiology you can do on a cell phone where you don't have a radiologist available, in developing countries with a shortage of doctors. The systems may not be perfect, but you can try to reduce the false alarms to some degree, and you can get decent results where you couldn't get any results before at all. We will start to see that. Pathology is going to take longer, because we don't have the data; pathologists have not been digital, and they are only starting to be, whereas radiologists have been digital for a while. And then there are the things where, if you watch the television show House, someone is trying to put together a complex diagnosis of a rare disease or something like that. Systems are not going to be able to do that for a long time. IBM made an attempt at that with Watson, and it just wasn't very good; it would miss things like heart disease that a first-year medical student would catch. And there it goes back to the difference between having a lot of data and having understanding: if you just do correlations and you don't understand the underlying biology and medicine, you can't really do that much. We just don't have the tools yet to do really high-quality medical diagnosis. That's a ways off.
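The common thread in several of these answers, the fire trucks, the radiology demos that don't generalize, is the gap between training data and the world. Here is a toy sketch of that phenomenon on synthetic data, with scikit-learn assumed; it is not a model of any production system.

```python
# Toy sketch of distribution shift: a model that looks accurate on data
# resembling its training set degrades sharply when the inputs drift.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    """Two Gaussian classes; `shift` moves the whole test distribution."""
    x0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

X_train, y_train = sample(1000)            # "ordinary highway driving"
clf = LogisticRegression().fit(X_train, y_train)

X_iid, y_iid = sample(1000)                # test data like the training data
X_odd, y_odd = sample(1000, shift=3.0)     # "stopped fire truck": shifted inputs

print("in-distribution accuracy:    ", clf.score(X_iid, y_iid))
print("out-of-distribution accuracy:", clf.score(X_odd, y_odd))
```

The same classifier that looks nearly perfect on familiar data falls toward chance on the shifted data, which is the comfort-zone problem in miniature.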
>> Thanks for coming. I'm working as a data analyst, kind of on the team organization side, and what I'm working on is scoping which small industry tasks automation using machine learning would be helpful for, versus harder tasks, like forecasting, that are, like you're saying, not solvable right now with current methods. I'm always interested in ways to explain or get that idea across. What do you think the fundamental difference is?

>> Some problems are closed worlds: the possibilities are limited. The more limited the world is, the better the current techniques can handle it. Some problems are open-ended: they can involve arbitrary knowledge or unusual cases. Driving is interesting because in some ways it's closed, and yet outside of ordinary driving circumstances it's open-ended, because there could be a police officer with a hand-lettered sign saying the bridge is out. There are so many possibilities that in that way it's open-ended. What you end up finding is that the driverless cars work well where there's a lot of conventional day-to-day driving and very poorly when they're forced to go outside of, to put it informally, their comfort zone. These systems have a comfort zone, to be a little bit glib about it, and when they go outside of it, they don't work that well.

>> You made a critical point about how you think humans have an advantage from evolution. Is the problem maybe that we are using it the wrong way, or that there isn't enough data?

>> I don't see it that way. I see it as: what evolution did is build a genome that builds a rough draft of the brain. If you look at developmental biology, it is clear that the brain is not a blank slate. It is very carefully structured. We don't understand all of the structure, but there are a number of experiments that illustrate it in all kinds of ways; you can do experiments where animals don't have any exposure to the environment and they still understand things, and so forth. What evolution has done is shape a rough draft of the brain, not a final brain, and that rough draft is actually built to learn specific things about the world. You can think about ducklings looking for something to imprint on the minute they are born. Our brains are built to learn about people and objects and so forth. What evolution has done is give us a really good toolkit for assimilating the data we get. You could ask: with more and more data and more time, could I get the same thing? And maybe. But then we would be replicating billions of years of evolution; that's a lot of trial and error that evolution did, and we could try to replicate it with enough GPU time and enough graduate students and so forth. There is another approach to engineering, which is called biomimicry, where you look at nature and how it solves problems and try to take clues from the ways in which nature solves them. That's fundamentally what I'm suggesting, by another name: we should look at how biology, in the form of human brains or other animals' brains, the baby ibex's brain, manages to solve problems. Current deep learning is good at representing correlations between individual examples and labels and things like that, but it is poor at representing other kinds of knowledge, and what we would argue for is a synthesis.
So: learning that is responsive to data, the way deep learning is, but that also represents and works with abstract knowledge, so that you can teach it something by saying, I don't know, "an apple is a kind of fruit," and have it understand that. We don't really have systems where we have any way of teaching them something explicit like "a wallet occupies physical space, and a wallet that is inside a pocket is going to feel different from a wallet that is not inside a pocket." We don't have a way of even telling a machine that right now. Yes?

>> I'm thinking that if we take this approach, don't we end up hand-encoding all of this knowledge? We encode this knowledge today, and then tomorrow we need to encode the knowledge about space, and so on. Aren't we playing a different game?

>> I think we need to do all of those. There is a lot of knowledge that needs to be encoded. I kind of borrow the idea that there are frameworks that enable certain things: if you have a framework, you can represent things that happen in time, and if you don't, knowledge stays messy, and correlation alone isn't going to give you that. One way you could estimate the scale is to think about the number of words in the English language that a typical speaker knows, which is something like 50,000 for someone with a big vocabulary. Make it ten pieces of common sense per word, and you are talking about something like half a million pieces of knowledge, millions at most, but not trillions. It would be a lot of work to encode them, and maybe we could get at it in different ways, but it's not an unbounded task. It's something people don't want to do right now, because it is fun to play with deep learning and get good approximations, even though it isn't perfect yet, so there isn't much appetite for it. But it could be that there's just no way to get there otherwise, and that is my view. There is a tradition of saying that the way you get into the game is to have something that allows you to bootstrap the rest. I think it is a scoping problem: we need to pick the right domains to bootstrap the rest, and there is strong evidence from children that this is true. They have some basic knowledge, and then they develop more knowledge. So I think we could at least start with the kinds of things that the developmental psychologists have worked on and see where we are.

Other questions? If not, then thank you all very much. [applause] I think we have a book signing right there.