[silence] Tomorrow night the author will be reading from his new novel. On Tuesday, the author of Poisoner in Chief will be joining us. We will also have a talk on Wednesday.

Tonight we welcome the author of Rebooting AI, which argues that a computer beating a human in Jeopardy doesn't signal that we are on the doorstep of fully autonomous cars or superintelligent machines. Taking inspiration from the human mind, the book explains what we need to do to advance artificial intelligence to the next level, and suggests that if we are wise along the way, we won't need to worry about a future of machine overlords. Finally, a book that tells us what AI is, what it is not, and what it could become if we are ambitious and creative enough. It's also been called an informed account. He is the founder and CEO of Robust.AI and was the founder and CEO of Geometric Intelligence. He's published in journals including Science and Nature. He's the author of Guitar Zero and Rebooting AI. Thank you.

Thank you very much. [applause] Uh-oh, this is not good. Um, okay, maybe it will be all right. We had some technical difficulties here. I'm here to talk about this new book, Rebooting AI. Some of you might have seen an op-ed I had this weekend in the New York Times called "How to Build Artificial Intelligence We Can Trust." I think we should be worried about that question, because we are building a lot of artificial intelligence that I don't think we can trust. Artificial intelligence has a trust problem. We are relying on AI more and more, but it hasn't yet earned our confidence.

I also want to suggest there's a hype problem. Andrew Ng, one of the leaders of something called deep learning, which is a major approach to AI these days, says that if a typical person can do a mental task with less than one second of thought, we can probably automate it using AI, either now or in the future. That's a profound claim. If it were true, then the world would be on the verge of changing altogether. It may be true someday, 20, 50, or 100 years from now, but it is not true now.

We have driverless cars that we think we can trust but that we can't actually trust. This is a picture from a few years ago of a Tesla that crashed into a stopped emergency vehicle. That has happened five times in the last year: a Tesla on Autopilot has crashed into a vehicle stopped by the side of the road. It's a systematic problem.

Here's another example. I'm working in the robotics industry; I hope this doesn't happen to my robots. A security robot basically committed suicide by walking into a little puddle. You've got Andrew Ng saying machines can do anything a person can do in a second. A person, in a second, can look at the puddle and say, maybe I shouldn't go in there. The robots can't.

We have other kinds of problems too, like bias. A lot of people have been talking about that lately. You can do a Google image search for the word "professor" and you get back something like this, where almost all the professors are white males, even though the statistics in the United States are that only 40% of professors are white males, and if you look around the world, the number would be lower than that. You have systems that are taking in a lot of data, but they don't know whether it is any good, and they are reflecting it back out and perpetuating cultural stereotypes.

The underlying problem with artificial intelligence is that the techniques people are using are simply too brittle. Everybody is excited about something called deep learning. It is good for many things.
You can get deep learning to recognize that this is a bottle, or maybe that this is a microphone. You can get it to recognize my face, and maybe distinguish it from my Uncle Ted's face, for example. Deep learning can help some with radiology. But it turns out that all of the things it's good at fall into one category of human thought, or human intelligence. The category they fall into is what we call perceptual classification: recognizing things that look the same or sound the same. That doesn't mean the one technique is useful for everything. I wrote a critique of deep learning; you can find it online for free. The summary of it says deep learning is greedy, brittle, opaque and shallow. There are downsides to it, even though everybody is excited about it. That doesn't mean it is perfect. I will give you some examples.

First, I will give you a counterpart to Andrew Ng's claim. If you were running a business and wanted to use AI, you would need to know what it can do for you and what it cannot. If you are thinking about AI ethics and wondering what machines might or might not be able to do soon, it is important to realize there are limits on the current systems. Here is my version: if a typical person can do a mental task with less than one second of thought, and we can gather a lot of data that's directly relevant, we have a fighting chance to get AI to work on it, so long as the test data, the things we ask the system to work on, are not too terribly different from the things we trained the system on, and the problem we are trying to solve doesn't change much over time.

This is a recipe for games. It says that what AI is good at is, fundamentally, things like games. So AlphaGo is the best Go player in the world, better than any human. It fits what the systems are good at. The game hasn't changed in 2,500 years, it has a perfectly fixed set of rules, and you can gather as much data as you like for free, or almost for free. You can have the computer play itself, or different versions of itself, which is what DeepMind did in order to make the world's best Go player, and it can keep playing itself and keep gathering more data.

Compare that, let's say, to a robot that does elder care. You don't want a robot that does elder care to collect an infinite amount of data through trial and error, working some of the time and not others. If your elder care robot works 95% of the time putting grandpa into bed and drops grandpa 5% of the time, you are looking at lawsuits and bankruptcy, right? That's not going to fly for the AI that would drive an elder care robot.

When it works, the way deep learning works is something called a neural network, which I depicted at the top. It takes big data and makes statistical approximations. You take labeled data, label a bunch of pictures of Tiger Woods, golf balls, and Angelina Jolie, then show it a new picture of Tiger Woods that isn't too different from the old pictures, and it correctly identifies that this is Tiger Woods and not Angelina Jolie. This is the sweet spot of deep learning. People got very excited when it first started getting popular. Wired magazine had an article saying deep learning will soon give us super-smart robots. We have already seen an example of a robot that's not really all that smart, and I will show you some more later. This promise has been around for several years, but it has not been delivered on.
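To make that sweet spot concrete, here is a minimal sketch of labeled-category perceptual classification with an off-the-shelf pretrained network. The model choice (torchvision's ResNet-50) and the file name are assumptions of this sketch, not anything from the talk:

```python
# A minimal sketch of perceptual classification, the one thing deep
# learning is genuinely good at: mapping an input to one of the
# categories it was trained on. Model choice (pretrained ResNet-50)
# and the file name are illustrative assumptions.
import torch
from PIL import Image
from torchvision import models, transforms

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)[0]

# All the system can do is say which known category the image most
# resembles; "a banana with a sticker in front of it" is not an option.
top = probs.topk(3)
for p, i in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][i]}: {p:.2f}")
```

When the test image resembles the training distribution, this works remarkably well; the elephant silhouette and the overturned school bus discussed next are exactly the cases where it doesn't.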
There are a lot of things that deep learning does poorly, even in perception. Then I will talk about something else, which is reading. On the right are some training examples. You would teach the system: these things are elephants. If you showed it another elephant that looked a lot like the ones on the right, the system would have no problem at all, and you would say, wow, it knows what an elephant is. But suppose you show it the picture on the left. The way the deep learning system responds is to say "person." It mistakes the silhouette of an elephant for a person. It's not able to do what you would be able to do, which is, first of all, to recognize it as a silhouette, and second of all, to say the trunk is really salient, so it is probably an elephant. This is what you might call extrapolation, or generalization, and deep learning can't really do it. Yet we're trusting deep learning more and more every day. It is getting used in systems that make judgments about whether people should stay in jail, or whether they should get particular jobs, and so forth. And it's really quite limited.

Here's another example, making the same point about unusual cases. If you show it this picture of a school bus on its side in a snowbank, it says, with great confidence, well, that's a snowplow. The system cares about things like the texture of the road and the snow; it has no idea of the difference between a snowplow and a school bus, or what they are for. It is fundamentally mindless statistical summation and correlation; it doesn't know what's going on.

This one on the right was made by some people at MIT. If you are a deep learning system, you say it's an espresso, because there's foam there. It's not super visible because of the lighting, but the system picks up on the texture of the foam, and espresso has foam. It doesn't understand that it is a baseball.

Another example: you show a deep learning system a banana, and you put this sticker in front of the banana, a kind of psychedelic toaster, and because there's more color variation and so forth in the sticker, the deep learning system goes from calling the top one a banana to calling the bottom one a toaster. It doesn't have a way of doing what you would do, which is to say something like, well, it's a banana with a sticker in front of it. That's too complicated. All it can do is say which category something belongs to. That's essentially all deep learning does: identify categories. If you are not worried that this is starting to control our society, you're not paying attention.

Let's get the next slide. Maybe not. Going to have to go without the slides, I think, because of technical difficulties. I will continue, though. There's just nothing we can do. All right. One second here; let me look at my notes. Okay. I was next going to show you a picture of a parking sign with stickers on it. It would be better if I could show you the actual picture, but presenting slides over the web is not going to work, so: a parking sign with stickers on it, if you can imagine that. The deep learning system calls it a refrigerator filled with a lot of food and drinks. It is completely off. It has noticed something about the colors and textures, but it doesn't really understand what's going on. Then I was going to show you a picture of a dog that's doing a bench press with a barbell. Yes, something has gone wrong. [laughter] Thank you for that. All right. We'll just shut it down. [inaudible]. Yeah? I would need a Mac laptop. I just couldn't do it fast.

You need a Mac laptop?
I've got one. I don't think they will be able to add it. Okay, just go on. So, you have a picture of a dog with a barbell, and it's lifting the barbell. A deep learning system can tell you that there's a barbell there and a dog, but it can't tell you, hey, that's really weird: how did the dog get so ripped that it could lift the barbell? A deep learning system has no concept of the things it's looking at.

Current AI is even more out of its depth when it comes to reading. I will read you a short little story that Laura Ingalls Wilder wrote. A nine-year-old boy finds a wallet full of money that's been dropped on the street. The father guesses that the wallet might belong to somebody named Mr. Thompson, and the boy finds Mr. Thompson. Wilder wrote: he turns to Mr. Thompson. "Did you lose a pocketbook?" Mr. Thompson jumps, slaps his hand on his pocket. "Yes, I have. $1,500 in it." "Is this it?" "Yes, that is it." He opens it and counts the money. He breathes a sigh of relief and says, "Well, that darn boy didn't steal any of it."

When you listen to that story, you form a mental image of it. It might be vivid or not so vivid, but you infer a lot of things, like that the boy hasn't stolen any of the money, or where the money might be. You understand why he's reached into his pocket looking for the wallet, because you know that wallets occupy physical space, that if your wallet is in your pocket you will recognize it, and that if you don't feel anything there, then it isn't there, and so forth. You know all of these things. You can make a lot of inferences about things like how everyday objects work and how people work. You can answer questions about what's going on. There's no AI system yet that can actually do that.

The closest thing we have is a system called GPT-2, released by OpenAI. OpenAI is famous because Elon Musk co-founded it, and it had the premise that they would give away all their AI for free. That's what makes this story interesting. They gave away their AI for free until they made this thing called GPT-2, and then they said it was so dangerous that they couldn't give it away. It was an AI system supposedly so good at human language that they didn't want the world to have it. But people figured out how it worked and made copies of it, and now you can use it on the internet. So my collaborator Ernie Davis, my co-author, and I fed this story into it. Remember, the boy has found the wallet and given it to the guy; the guy has counted the money; he is now super happy. What you do is feed in the story, and the system continues it. What it said was: "It took a lot of time, maybe hours, for him to get the money from the safe place where he hid it." It makes no sense. It is perfectly grammatical, but if he found his wallet, what is this safe place? The words "safe place" and "wallet" are correlated in some vast database. It is completely different from the understanding that children have.

The second half of the talk, which I will do without visuals, is called looking for clues. We need to realize that perception, which is what deep learning does well, is just part of what intelligence is. Some of you, especially here in Cambridge, might know the theory of multiple intelligences, for example: there's verbal intelligence, musical intelligence, and so forth. As a cognitive psychologist, I would also say there are things like common sense, planning, and attention. There are many different components.
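As an aside, here is a minimal sketch of that GPT-2 continuation experiment as one could run it today with the Hugging Face transformers library; the tooling and the paraphrased prompt are assumptions of this sketch, not details from the talk:

```python
# A sketch of the GPT-2 story-continuation experiment. GPT-2 predicts
# plausible next words from correlations in its training text; it has
# no model of wallets, pockets, or people, which is why continuations
# are often grammatical but senseless. Prompt paraphrases the talk.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuation reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Mr. Thompson opened the pocketbook and counted the money. "
    "He breathed a sigh of relief and said, 'Well, that darn boy "
    "didn't steal any of it.'"
)

result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```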
What we have right now is a form of intelligence that covers just one of those components, and it's good at doing the things that fit that component. It is good at perception. It is good at certain kinds of game playing. That doesn't mean it can do everything else. The way I think about this is that deep learning is a great hammer, and we have a lot of people looking around saying, because I have a hammer, everything must be a nail. Some things actually work with that, like Go and chess and so forth, but there's been much less progress on language. There's been exponential progress in how well computers play games, but zero progress in getting them to understand conversations. That's because intelligence itself has many different components; no silver bullet is going to solve all of them.

The second thing I wanted to say is that there's no substitute for common sense. We really need to build common sense into our machines. The picture I wanted to show you right now is of a robot in a tree with a chainsaw, and it's cutting on the wrong side of the branch, if you can picture that, so it's about to fall down. Now, that would be very bad. We wouldn't want to solve this with a popular technique called reinforcement learning, where you have many, many trials. You wouldn't want a fleet of 100,000 robots with 100,000 chainsaws making 100,000 mistakes. That would be bad, as they said in Ghostbusters.

Then I was going to show you a cool picture of something called a yarn feeder, which is a little bowl with some yarn and a string that comes out of a hole. As soon as I describe it to you, you have enough common sense about how the physics works and what I might want to do with the yarn feeder. I was going to show you a picture of an ugly one, and you could recognize that one even though it looks totally different, because you get the basic concept. That's what common sense is about.

I was going to show you a picture of a Roomba; you all know the vacuum-cleaner robot. I was going to show you Nutella, and a dog doing its business, you might say, and point out that the Roomba doesn't know the difference between the two. I was going to show you something that's happened not once but many times: Roombas that don't know the difference between Nutella, which they should clean up, and dog waste, and so spread the dog waste through people's houses. It's an artificial intelligence common sense disaster. [laughter]

Then, what I wish I could show you most of all: my daughter climbing through chairs. When she was four years old, she was small enough to fit through the space between the bottom of the chair and the back of the chair. She didn't do it by imitation. I was never able to climb through the chair; I'm a little bit too big, even if I'm in good shape and exercising a lot. She had never watched The Dukes of Hazzard and seen someone climb through a window to get inside a car. She had never seen anything like that. She invented it herself, for a goal. This is the essence of how human children learn things. They set goals: can I do this? Can I walk on this small ridge on the side of the road? I have two children, five and six and a half. All day long they just make up games: what if it were like this, or can I do that? She tried this, and she learned it essentially in one minute. She squeezed through the chair, got a little stuck, and did a little problem solving. This is very different from collecting a lot of data with a lot of labels, the way deep learning works right now. I would suggest that if AI wants to move forward, we need to take some clues from kids and how they do these things.
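For contrast with that kind of one-shot, goal-driven learning, here is a toy sketch of the trial-and-error loop behind the reinforcement learning mentioned above: tabular Q-learning on an invented ten-cell corridor task. Every name and number in it is illustrative; the point is how many sampled trials even a trivial problem takes, which is exactly why you would not want chainsaw safety learned this way.

```python
# Toy tabular Q-learning on a made-up 10-cell corridor (illustrative
# only). The agent learns by sheer trial and error, sampling thousands
# of state transitions for a task a child would solve in one attempt.
import random

n_states, n_actions = 10, 2          # cells 0..9; actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.5, 0.95, 0.1   # learning rate, discount, exploration

def greedy(s):
    best = max(Q[s])  # break ties randomly so the agent doesn't get stuck
    return random.choice([a for a in range(n_actions) if Q[s][a] == best])

def step(s, a):
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    reward = 1.0 if s2 == n_states - 1 else 0.0   # reward only at the far end
    return s2, reward, s2 == n_states - 1

transitions = 0
for episode in range(500):
    s, done = 0, False
    while not done:
        a = random.randrange(n_actions) if random.random() < eps else greedy(s)
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        transitions += 1
print("state transitions sampled:", transitions)   # typically a few thousand
```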
The next thing I was going to do was quote Elizabeth Spelke, who teaches at Harvard, down the street. She's made the argument that if you are born knowing there are objects and sets and places and things like that, then you can learn about particular objects and sets and places; but if you just know about pixels and videos, you can't really do that. You need a starting point. This is what people call the nativist hypothesis. I like to show a video here. People don't want to think that humans are built with notions of space and time and causality, as has been argued, and as I'm suggesting AI should be, but nobody has any problem thinking animals are built that way. So I show this video, and you realize that there has to be something built into this animal's brain: an understanding of three-dimensional geometry from the minute it is born. It must know something about physics and about its own body. That doesn't mean it can't calibrate, and figure out how strong its legs are and so forth, but as soon as it is born, it knows those things.

The next video I was going to show you, and you will have to look it up online, is of robots failing. It shows a bunch of robots doing things like opening doors and falling over, or trying to get into a car and falling over. I'm sad I can't show it to you right now, but you get the point: current robots are really quite ineffective in the real world. The things in that video had all been simulated beforehand. It was a competition that was run, and everybody knew exactly what the events were going to be. The robots were just going to have to open doors, turn dials, and things like that, and the teams had done it all in computer simulation. When it got to the real world, the robots failed left and right. They couldn't deal with things like friction and wind and so forth.

To sum up: I know a lot of people are worried about AI right now, worried about robots taking over our jobs and killing us all and so forth. There's a line in the book which is something like: worrying about all of that now would be like worrying, in the 14th century, about highway traffic fatalities, when people would have been better off worrying about hygiene. What we should be worried about right now is not some vast future scenario in which AI is much smarter than people. We need to worry about the limits of current AI, and the fact that we are using it a lot in decisions about jobs and jail sentences and so forth.

On the topic of robot attack, I suggest a few things you can do. The first one is: just close the door. Robots right now can't open doors; there's a competition to teach them how to do that. If that doesn't work, lock the door. There is not even a competition to have robots lock doors. It will be another seven or ten years before people start working on doors that are a little bit jammed, where you have to pull on the knob, and stuff like that. Or put up one of the stickers I showed you, and you will confuse the robot. Or talk in a foreign accent in a noisy room; the robots don't get any of that stuff.

Deep learning is a better ladder than we've built before; it lets us climb to certain heights. But just because something is a better ladder doesn't mean it will get you to the moon. We have a helpful tool here, but we have to discern, as listeners and readers and so forth, the difference between the limited bit of AI that can do a little, and some magical form of AI that hasn't been invented yet.
To close, and then I would love questions, and we will take as many as you have: if we want to build machines that are as smart as people, what I think we need to do is start by studying small people, human children, and how they are flexible enough to understand the world in a way that AI isn't yet able to do. Thank you very much. [applause]

Yes, question?

I'm a retired orthopedic surgeon, and I got out just in time, because they're coming up now with robotic surgery, prominently in knee replacements. Have you got any information about where that is headed, how good it is, and so on?

The dream is that the robot can do the surgery itself. Most of it is an extension of the surgeon right now, like any other tool. In order to get robots to really be kind of full service, they need to understand the underlying biology of what they are working on. They need to understand the relations between the different body parts they are working with. Our ability to do that right now is limited, for the kinds of reasons we've been talking about. There will be advances in that field in the next few years, but I wouldn't expect that when we send people to Mars, whenever that is, if it's any time soon, we would have the sort of robot surgeon you see in science fiction. We're nowhere near that point. It will happen some day; there's no principled reason why we can't build such things and have machines with a better understanding of biology. But we don't have the tools right now to allow them to absorb the medical training. It reminds me of a famous experiment in cognitive development, where a chimpanzee was trained and raised in a human environment, and the question was: would it learn language? The answer was no. If you sent a current robot to medical school, it wouldn't learn anything, so that wouldn't help it become a robotic surgeon.

Thank you.

You're welcome. Other questions?

Do the current limitations of AI also apply to self-driving cars?

Yes. I am sorry I didn't mention self-driving cars that much. It seems logically possible to build them, but empirically, the problem you get is what we call outlier cases. When the training data, the things you teach the system on, are too different from what the system sees in the real world, it doesn't work really well. The case of the tow trucks and the fire trucks that the Teslas keep running into is probably in part because they are mostly trained on ordinary data, where all the cars are moving fast on the highway, and then the system sees something it hasn't seen before and doesn't really understand how to respond. So I don't know whether driverless cars are ultimately going to prove to be closer to something like chess or Go, where the problem is bounded enough that we can get the systems to work with current technology, or whether it is going to be more like language, which to me seems completely outside the range. But people have been working on it for 30, 40 years. A company that's part of Alphabet, which used to be Google, has been working on it for a decade. There's progress, but it is relatively slow. People solve one problem, and it causes another problem. The first fatality from a driverless car was a Tesla that ran underneath a semitrailer that took a left turn onto a highway. So first of all, you had the problem that it was outside the training set; it was an unusual case.
And second of all, I have been told, and I don't have proof of this, but I've been told that what happened is that the Tesla thought the tractor-trailer was a billboard, and the system had been programmed to ignore billboards, because if it didn't, it was going to slow down so often that it would get rear-ended all the time. So one problem was solved, the slowing-down-for-billboards problem, and another problem popped up. What's happened so far is that driverless cars have felt a lot like that: people make a little bit of progress, they solve one particular problem, but they don't solve the general problem. To my mind, we don't have enough techniques to solve the general problem. People are saying, I will just throw more and more data at it, and it gets a little bit better, but we need it to get a lot better. Right now the cars need human intervention about every 12,000 miles, last I checked. That sounds impressive, until you consider that humans have a fatality every 134 million miles, on average. If you want to get to a human level, you have a lot more work to do, and it is not clear that grinding out the same techniques will get us there. This is the metaphor again: having a better ladder is not necessarily going to get you to the moon.

Yes?

My question is about machine learning and using it in science. I'm an astronomer, and we've started to use it. The problem is, if you're just doing pattern recognition, you really don't learn anything. Do you think we're making progress on having machine learning kinds of programs be able to tell us how they're making decisions, in enough detail that it's useful?

There's a lot of interest in that. It may change, but right now there's a tension between techniques that are relatively efficient and techniques that produce what we call interpretable results, as I guess you know. Right now, for a lot of perceptual problems, say, does this look like an asteroid I have seen before, deep learning is the best technique, and it is as far from interpretable as you can possibly imagine. People are making progress at making it a little bit better, but there's a trade-off right now: you get better results, and you give up interpretation. There are a lot of people worried about this problem. I have not seen any great solution to it so far. I don't think it's unsolvable in principle, but right now the ratio between how well the systems work and how little we understand them is pretty extreme. Going back to the driverless cars: we will have cases where somebody is going to die, and somebody will have to tell the parent of a child, or someone, that the reason your child died seems to be that parameter number 317 was a negative number when it should have been a positive one. It will be completely meaningless and unsatisfying, but that's sort of where we are right now.

Other questions?

What are your thoughts on the application to healthcare diagnostics, the racial bias that people are concerned about, and also just the fact that we can't afford to have any misdiagnoses?

Well, I guess those are three different questions, and I may forget one; you might have to remind me. The first is, can you use this stuff for medical diagnosis? The answer is yes, but it relates to the last question, which is, how important is a misdiagnosis? The more important it is, the less we can rely on these techniques. It's also the case, of course, that human doctors aren't completely reliable.
The advantage that machines have right now for some things, radiology in particular, is that they are pretty good at pattern recognition. They can be as good as radiologists, at least in careful laboratory conditions. Nobody really has, as far as I know, or last I checked, a working real-world system that does radiology in general; they are more like demonstrations that the system can recognize this particular pattern. But in principle, deep learning has an advantage over people there. It has a disadvantage, though, in that it can't read the medical charts. There's a lot of unstructured text in medical charts, doctors' notes and things like that, written in English rather than being a picture, and machines can't read that stuff at all, or they can only recognize keywords. A really good radiologist is kind of like a detective. I had a radiologist explain it to me: it's sort of like Sherlock Holmes, there's this asymmetry that shouldn't be there, but it is, from an accident that happened 20 years ago. So they have to use interpretation, a story of what's going on. Current technology doesn't do that, and it's not going to roll out next week.

The first cases of AI having an impact on medicine are probably going to be radiology that you can do on a cell phone, where you don't have a radiologist available. In developing countries where there aren't enough doctors, the systems may not be perfect, but you can try to reduce the false alarms to some degree, and you can get decent results where you maybe couldn't get any results at all. We will start to see that. Pathology is going to take longer, because we don't have the data; pathologists haven't been digital. They are starting to do it, but radiologists have been digital for a while. Then there are the cases like the ones on the television show House, if you've ever watched it, where you're trying to put together some complex diagnosis of a rare disease or something like that, and systems aren't going to be able to do that for a long time. IBM made an attempt at it with Watson, and it just wasn't very good. It would miss things like heart disease when the diagnosis was obvious to a first-year medical student. And there it goes back to the difference between having a lot of data and having understanding. If you're just doing correlations but don't understand the underlying biology or the underlying medicine, then you can't do that much. We don't have the tools yet to do high-quality medical diagnosis. That's a ways off.

Thanks for coming. I'm working as a data analyst, data scientist, and part of what I'm working on is scoping small, discrete tasks that automation and machine learning would be helpful for, and also identifying wider problems that aren't solvable right now with current methods. I'm always interested in ways to explain, or get across, the distinction you drew. How would you put it?

I think the fundamental difference is that some problems are a closed world. They are limited; the possibilities are limited. The more limited the world is, the more current techniques can handle it. Some problems are open-ended: they can involve arbitrary knowledge or unusual cases. Driving is interesting because in some ways it's closed, in that you only drive on the roads, if we're talking about ordinary driving in ordinary circumstances, but it is open-ended because there could be a police officer with a hand-lettered sign saying, you know, this bridge is out. There are so many possibilities like that that it is open-ended.
What you end up finding is that the driverless cars work well on the stuff that is sort of closed, where there's a lot of conventional data, and they work poorly when they are forced to go outside their comfort zone. These systems have a comfort zone, and when they go outside of it, they don't work that well.

Yes?

You made a point about deep learning that [inaudible]. Do you think that humans have an inherent advantage, which is a million years of evolution, a billion years of evolution? Maybe we're just not using enough data?

Well, I don't see it that way. The way I see it, what the billion years of evolution did is build a genome, a rough draft of the brain. If you look at developmental biology, it is clear that the brain is not a blank slate. It is very carefully structured. We don't understand all of the structure, but there are any number of experiments that illustrate this in all kinds of ways, deprivation experiments, for example, where animals don't have any experience with the environment, but they still know things. You can think about ducklings looking for something to imprint on the minute they are born. Our brains are built to learn about people, objects and so forth. What evolution has done is give us a really good toolkit for assimilating the data we get. You could say, with more and more data, more and more time, could I get the same thing? Maybe, but we're not very good at replicating a billion years of evolution. That's a lot of trial and error that evolution did, and we could try to replicate it with enough CPU or GPU time, enough graduate students and so forth. But there's another approach to engineering, where you look to nature and how it solves problems, and try to take clues from the ways in which nature solves them. That's fundamentally what I'm suggesting: look at biology, in terms of animal brains or human brains. Not that we need to build more people; I have two small people and they're great. We want to build AI systems that take the best of what machines do well, which is to compute really fast, and combine it with the best of what people do well, which is to be flexible in their thought and to be able to read and so forth, so that we can solve problems that no human being can solve right now, such as integrating the medical literature. There are 7,000 or something like that papers published every day; no doctor could read them all. It is impossible for humans, and right now machines can't read at all. If we could build machines that could read, and scale them the way we scale computers, then we could revolutionize medicine. But I think that to do that, we need to build in basic things like time and space and so forth, so the machines can make sense of what they read.

Other questions? Yes?

How are you thinking about fixing the problem, building these new modules? What form will they take? Are they going to have the same structure that deep learning currently uses, or something completely different?

Well, the first thing I would say is that we don't have the answers, but we try to pinpoint the problems. We try to identify, in different domains like space and time and causality, where the current systems work and where they don't. The second thing I will say is that the most fundamental thing is that we need ways of representing knowledge in our learning systems. There's a history of things called expert systems, which are very good at representing knowledge.
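As a concrete illustration, here is a toy sketch of expert-system-style knowledge: explicit if-then rules applied by forward chaining. The facts and rule names are invented for this sketch, echoing the wallet story from earlier, and are not from any system discussed in the talk.

```python
# A toy forward-chaining rule engine. The knowledge is explicit and
# readable, unlike deep learning's pixel-to-label correlations, but
# every rule here had to be written by hand. All facts and rule names
# are invented for illustration.
facts = {"wallet_in_pocket"}

rules = [
    ({"wallet_in_pocket"}, "pocket_feels_full"),
    ({"pocket_feels_full"}, "owner_knows_wallet_is_there"),
]

# Keep applying rules until no new facts can be derived.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```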
So, in an expert system: if this thing is true, then do this other thing; and, you know, it is likely that such-and-such is happening if these two other things are happening. The knowledge looks a little bit like sentences in the English language. Then we have deep learning, which is very good at representing correlations between individual pixels and labels and things like that, but very poor at representing that kind of knowledge. What we argue is that we need a synthesis of the two: learning techniques that are responsive to data, in ways that the traditional techniques aren't particularly responsive to data, but that represent, or work with, abstract knowledge. So that you could, for example, teach the system something by saying, I don't know, an apple is a kind of fruit, and have it understand that. We have systems that can do a little bit of this, but we don't really have systems where we have any way of teaching them something explicit, like that wallets occupy physical space, and that a wallet inside a pocket is going to feel different from a wallet that is not inside a pocket. We just don't have a way of even telling a machine that right now.

Yes?

I'm thinking, if we don't take the deep learning approach, and don't learn all this from data, and we try to [inaudible] some of this knowledge, then aren't we playing a different game? Like, we're trying to encode some knowledge about time, and tomorrow we're like, we need to encode space, and then tomorrow, causality.

I think we need to do all three of those. There's a lot of knowledge that needs to be encoded. It doesn't all have to be hand-coded; we could build learning systems that learn it themselves. There are some core domains that enable other kinds of things: if you have a framework for representing time, then you can represent things that happen in time. If you don't know that time exists, then seeing a lot of correlation between pixels isn't really going to give you that. One way you could estimate the size of the task is to think about the number of words in the English language that a typical native speaker knows, which is something like 50,000 for someone with a big vocabulary. Say there are ten pieces, or 100 pieces, of common sense that go with each of those words. Then you are talking about something like half a million to five million pieces of knowledge: millions, but not trillions. It would be a lot of work to encode them all. It's been tried, and maybe we could try it in a different way, but it is not an unbounded task. It's so much fun to play around with deep learning and get good approximations, rather than perfect answers, that nobody has the appetite to do it right now. But it could be that there's no other way to get there. I mean, that's my view; that's Spelke's view. There's a long tradition of nativists saying that the way you get into the game is to have something that allows you to bootstrap the rest. I think it is a scoping problem: we need to pick the right core domains so we can bootstrap the rest. If you look at the cognitive development literature, that's true for babies: they have some knowledge, and then they develop more knowledge. I think we could at least start with the kinds of things that developmental psychologists have identified, work on those, and see where we are.

Other questions? If not, thank you all very much. [applause] I think we will have a book signing, if anyone... Thank you all for coming out. We have copies of the book at the front counter.

This week you are watching Book TV, so you can see what programs are available every weekend.
Watch top nonfiction authors and books, along with coverage of events, fairs and festivals, and interviews on policy, technology and more, plus our signature programs In Depth and After Words. Enjoy Book TV this week and every weekend on C-SPAN2.

Weeknights this week we are featuring Book TV programs showcasing what's available every weekend on C-SPAN2. This Thursday the theme is history: a Harvard professor looks at how Cold War propaganda was disseminated in the United States and in the Soviet Union. We also talk about the women's suffrage movement, and about Nazi censorship in the '30s. Enjoy Book TV this week and every weekend on C-SPAN2.

Book TV has live weekend coverage of the Southern Festival of Books from Nashville, Tennessee, starting Saturday at 11:00 a.m. Eastern, featuring the books No Surrender; A Good Provider Is One Who Leaves; On the Plain of Snakes; and Saeed Jones talks about his memoir How We Fight for Our Lives. Our live coverage from the Southern Festival of Books continues on Sunday at 1:00 p.m. Eastern. At 2:00 Eastern, an author discusses her book Learning from the Germans. Former U.S. Ambassador to the United Nations Samantha Power talks about her book The Education of an Idealist. And the book Religion of Fear. Live coverage 11:00 a.m. Eastern Saturday and 1:00 p.m. Sunday, on Book TV on C-SPAN2.

We will hear from the founding director of the MIT Center for Collective Intelligence; his work on AI will be discussed. Then later, Kelli Harding explores the link between mental and physical health.

We are back live at the National Book Festival. We're pleased to have join us now on our Book TV set Professor Thomas Malone of MIT. Here's his most recent book; it is called Superminds: The Surprising Power of People and Computers Thinking Together. Before we get into the topic of the book, professor, what is it that you do at MIT?

I'm a professor in the Sloan School of Management and director of the MIT Center for Collective Intelligence. Pleasure to be here.

What kind of management training do you give at MIT?

I teach two main courses. One is an MBA course on strategic organizational design, about how to organize companies in different situations, including innovative new things like Wikipedia, for instance. And the other is a leadership work w