Transcripts For CSPAN2 Gary Marcus Rebooting AI 20240713

Good evening, everyone. Welcome, and thank you for supporting your local independently owned bookstore. C-SPAN is filming, so please make sure all phones are on silent. I also want to let you know about a couple of events we have coming up this month: tomorrow night Haley is reading from her new novel; on Tuesday, Stephen Kinzer is going to present his history of the CIA; we also have offsite events, and tickets are still available for Monday's conversation and for Jonathan's talk on Wednesday. Tonight we welcome Gary Marcus, author of Rebooting AI, which argues that a computer beating a human at Jeopardy does not signal that we are on the doorstep of superintelligent machines. Taking inspiration from the human mind, the book explains what we need to do to advance artificial intelligence to the next level, and suggests that if we are wise along the way, we won't need to worry about a machine overlord. Finally, a book that tells us what AI is, what it is not, and what AI could be, if only we are ambitious and creative enough. It has been called a lucid and well-informed account. Gary Marcus is the founder and CEO of Robust AI, was the founder and CEO of Geometric Intelligence, and has published in journals including Science and Nature; he is the author of Guitar Zero and Rebooting AI. Thank you.

Thank you very much. [applause] That is not what we wanted to see; let's see if it will go. This is not good. Okay, maybe it will be all right. We have some technical difficulties here. I am here to talk about this new book, Rebooting AI. Some of you might have seen an op-ed I had in the New York Times called "How to Build Artificial Intelligence We Can Trust." I think we should all be worried about that question, because people are building a lot of artificial intelligence that I don't think we can yet trust. The way we put it in the piece is that artificial intelligence has a trust problem: we are relying more and more on AI, and it has to earn our confidence. We also suggest there is a hype problem. A lot of AI is overhyped these days, often by people who are very prominent in the field. So Andrew Ng, a leader in deep learning, which is the major approach to AI these days, says that if a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future. That is a profound claim: anything you can do in a second, we can get AI to do. If it were true, the world would be on the verge of changing altogether. It may be true someday, twenty years, fifty years, a hundred years from now, but I will try to persuade you it is not remotely true now.

The trust problem is this. We have things like driverless cars that people think they can trust but that they should not actually trust, and sometimes they die in the process. This is a picture from a few weeks ago, in which a Tesla crashed into a stopped emergency vehicle; this has actually happened five times in the last year, a Tesla on Autopilot crashing into a vehicle on the side of the road. Here is another example. I work in the robot industry, and I hope this never happens to my robot: this robot is a security robot that basically committed suicide by walking into a little puddle. [laughter] You have machines that are supposedly able to do anything a person can do in a second, and a person can look at the puddle and say, "maybe I shouldn't go in there," and the robot can't.
We have other kinds of problems, too, like bias, which people have been talking about lately. You can do a Google image search for the word "professor" and you get back something like this, where almost all of the professors are white males, even though statistics say that in the United States only about 40 percent of professors are white males, and it would be even lower than that around the world. You have systems that are taking in a lot of data, but they don't know whether the data is any good, and they are just reflecting it back out; that perpetuates cultural stereotypes.

The underlying problem with artificial intelligence right now is that the techniques people are using are simply too brittle. Everybody is excited about something called deep learning, and it really is pretty good for a few things, actually for many things. Object recognition: you can get deep learning to recognize that this is a bottle and maybe this is a microphone. Or you can get it to recognize my face and maybe distinguish it from Uncle Ted's face, for example; I would hope it could do that. Deep learning can help some with radiology. But it turns out that all of the things it is good at fall into one category of human thought, of human intelligence. The category they fall into is perceptual classification: you see a bunch of examples of something, and you have to identify further examples of things that look the same, sound the same, and so forth. That does not mean the one technique is useful for everything. I wrote a critique of deep learning a year and a half ago, which you can find online for free, called "Deep Learning: A Critical Appraisal." Deep learning is greedy, brittle, opaque, and shallow. That is not to dismiss deep learning; everybody is excited about it, but that doesn't mean it is perfect, and I am going to give you some examples.

First I am going to give you my real counterpart to Andrew's claim, because if you were running a business and wanted to use AI, for example, you would need to know what AI can actually do for you and what it cannot do for you. If you are thinking about AI ethics and wondering what machines might be able to do soon or not be able to do soon, I think it is important to realize there are limits on the current systems. Here is my counterpart: if a typical person can do a mental task with less than one second of thought, and we can gather an enormous amount of data that is directly relevant, we have a fighting chance to get our AI to work, so long as the test data, the things we make the system work on, are not too terribly different from the things we trained the system on, and the problem we are trying to solve does not change very much over time. This is, fundamentally, a recipe for games, and that is what AI is good at right now: things like games. AlphaGo is the best Go player in the world, better than any human, and it fits exactly what I just said these systems are good at. The system hasn't changed; the game hasn't changed in 2,500 years; we have a perfectly good set of rules; and we have as much data as we could want, almost for free, because you can have the computer play itself, or different versions of itself, which is what DeepMind did in order to make the best Go player, and it can keep playing itself and keep gathering more data. Now compare that to, let's say, a robot that does elder care. You do not want a robot that does elder care to collect an infinite amount of data through trial and error, working some of the time and not working other times.
If your elder care robot works 95 percent of the time putting grandpa into bed and drops grandpa 5 percent of the time, you are looking at lawsuits and bankruptcy. That kind of trial-and-error learning does not fly for driving or for an elder care robot.

When it works, the way deep learning works is with something called a neural network, which is depicted at the top of the slide. Fundamentally, it takes big data and makes statistical approximations. It takes labeled data: you label a bunch of pictures of Tiger Woods, a bunch of pictures of golf balls, a bunch of pictures of Angelina Jolie, and then you show it a new picture of Tiger Woods that is not too different from the old pictures, and it correctly identifies that this is Tiger Woods and not Angelina Jolie. That is the sweet spot of deep learning. People got really excited about it when it first started getting popular; magazines had articles saying deep learning would soon give us super-smart robots. We have already seen an example of a robot that is not all that smart, and I will say more later. The promise has been around for several years but has not been delivered on.

There are lots of problems even in perception, and then I am going to talk about something else, which is reading. On the right of the slide are some training examples: you teach the system that these things are elephants. If you showed it another elephant that looked a lot like those on the right, the system would have no problem at all, and you would say, wow, it knows what an elephant is. But suppose you showed it the picture on the left. Shown the picture on the left, a deep learning system responds "person." It mistakes the silhouette of an elephant for a person. It is not able to do what you would be able to do, which is first of all to recognize that it is a silhouette, and second of all that the trunk is really salient, so it is probably an elephant. This is what you might call extrapolation, or generalization, and deep learning cannot do it. Yet we are trusting deep learning more and more every day; it is getting used in systems that make judgments about whether people should stay in jail or whether they should get particular jobs, and so forth. It is really quite limited.

Here is another example, making the same point about unusual cases. If you show it this picture of a school bus on its side in a snowbank, it says with great confidence, well, that's a snowplow. What that tells you is that the system cares about things like the texture of the road and the snow; it has no idea of the difference between a snowplow and a school bus, or what they are for. It is fundamentally a mindless statistical summation of correlations; it does not know what is going on. This thing on the right was made by people at MIT. If you are a deep learning system, you say it is an espresso, because there is foam there; it is not super visible because of the lighting here, but the system picks up on the texture of the foam and says espresso, because that is the salient thing about espresso. It does not understand that it is a baseball. Another example: you show a deep learning system a banana, and you put a sticker in front of the banana, a kind of psychedelic toaster. Because there is more color variation and so forth in the sticker, the deep learning system goes from calling the top one a banana to calling the bottom one a toaster. It does not have a way of doing what you would do, which is to say something like "it's a banana with a sticker in front of it"; that is too complicated. It says which category something belongs to; that is all deep learning does, identify categories. If you are not worried that this is starting to control our lives, you are not paying attention. Does this get to the next slide? Maybe not.
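As an aside for readers following along, the "one category out, nothing more" behavior described above is easy to see with any off-the-shelf classifier. The sketch below is mine, not the talk's; it assumes PyTorch and torchvision are installed, and "banana.jpg" stands in for whatever photo you try.

```python
# Minimal sketch (not from the talk): an off-the-shelf ImageNet classifier
# maps a picture to exactly one of its fixed category labels, nothing more.
# Assumes torch/torchvision are installed; "banana.jpg" is a hypothetical file.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()            # resize, crop, normalize

image = Image.open("banana.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)       # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)      # one score per ImageNet class

top_prob, top_class = probs.max(dim=1)
label = weights.meta["categories"][int(top_class)]
print(label, float(top_prob))                # a single label -- no notion of
                                             # stickers, silhouettes, or context
```

Whatever is in the frame, the output is one label from a fixed list; there is no vocabulary for relations like "a banana with a sticker in front of it."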
I am going to have to go without the slides, I think, because of the technical difficulties. I will continue, though; there is just nothing I can do. All right, one second here to look at my notes. Okay, so next I was going to show you a picture of a parking sign with stickers on it. It would be better if I could show you the actual picture, but presenting slides over the web is not going to work. So, a parking sign with stickers on it, if you can imagine that. The deep learning system calls it a refrigerator filled with a lot of food and drinks. It is completely off; it responds to something about the color and texture, but it does not really understand what is going on. Then I was going to show you a picture of a dog doing a bench press with a barbell. Yes, something has gone wrong; thank you for that, we will just shut it down. No, I need a different laptop, and I don't think we can fix it quickly, so I will go on. You have a picture of a dog with a barbell, and it is lifting the barbell. A deep learning system can tell you there is a barbell there and a dog, but it cannot tell you, hey, that is really weird: how did the dog get so ripped that it can lift the barbell? The deep learning system has no concept of the things it is looking at.

Current AI is even more out of its depth when it comes to reading, so I am going to read you a short story that Laura Ingalls Wilder wrote. It is about Almanzo, a nine-year-old boy who finds a wallet full of money that was dropped on the street. Almanzo's father guesses that the wallet might belong to a stranger in town, a Mr. Thompson, so Almanzo finds Mr. Thompson. Here is the passage Wilder wrote: Almanzo turned to Mr. Thompson and asked, "Did you lose a pocketbook?" Mr. Thompson jumped, slapped his hand to his pocket, and shouted, "Yes, I have! Fifteen hundred dollars in it, too. What do you know about it?" "Is this it?" Almanzo asked. "Yes, that's it," he said. He opened it and hurriedly counted the money; he counted all of the bills over twice, and then he breathed a sigh of relief and said, "That boy didn't steal any of it."

When you listen to that story, you form a mental image of it. It might be a vivid image or not, but you know, and can infer, a lot of things: that the boy has not stolen any of the money, where the money might be. You understand why Mr. Thompson reached into his pocket looking for the wallet: because he knows wallets occupy physical space, and if the wallet is in his pocket he will feel it, and if he feels nothing it is not there, and so forth. You know all of these things; you can make a lot of inferences about how everyday objects work and how people work, so you can answer questions about what is going on. There is no AI system yet that can actually do that.

The closest thing we have is a system called GPT-2, released by OpenAI. Some of you might have heard about it, first of all because OpenAI is famous: Elon Musk co-founded it, and it promised it was going to give away all of its AI for free. That is part of what made the story so interesting: they gave away all their AI for free until they made this thing called GPT-2, and then they said GPT-2 was so dangerous that they could not give it away. It was an AI system supposedly so good at human language that they did not want the world to have it. But people figured out how it worked, made copies of it, and put them on the internet. And so my coauthor and I fed the Almanzo story into it. Remember: Almanzo has found the wallet, he has given it to the guy, the guy has counted the money, and he is super happy. What you do is feed in the story, and it continues it.
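The talk does not say what tooling was used to run GPT-2, so the sketch below is only an approximation of the experiment, using the publicly released GPT-2 weights through the Hugging Face transformers library; the prompt is paraphrased from the Almanzo passage, and the sampling settings are arbitrary.

```python
# Rough sketch (the tooling is an assumption; the talk does not specify it):
# feed GPT-2 the end of the Almanzo story and let it continue the text.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = (
    'Almanzo asked, "Did you lose a pocketbook?" Mr. Thompson slapped his '
    'hand to his pocket and shouted, "Yes, I have! Fifteen hundred dollars '
    'in it, too." He counted the bills over twice and breathed a sigh of '
    'relief: "That boy didn\'t steal any of it."'
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,                 # continue the story for a few sentences
    do_sample=True,                    # sample rather than greedy-decode
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
continuation = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:])
print(continuation)   # fluent and grammatical, with no guarantee of coherence
```

Run a few times, the continuations vary, which is the point Marcus goes on to make: the output tracks word statistics, not what has actually happened in the story.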
Here is how it continued the story: "It took a lot of time, maybe hours, for him to get the money from the safe place where he hid it." This just makes no sense. It is perfectly grammatical, but if he has just found his wallet, what is it doing in a safe place where he hid it? The words are correlated in some vast database, but this is completely different from the kind of understanding little children have.

The second half of the talk, which I will do without visuals, is called "Looking for Clues." The first clue, as we develop AI further, is to realize that perception, which is what deep learning does, is only part of what intelligence is. Some of you, especially here in Cambridge, may know the idea of multiple intelligences: there is verbal intelligence, musical intelligence, and so forth. As a cognitive psychologist I would also say there are things like common sense, planning, and attention; there are many different components. What we have right now is a form of intelligence that covers just one of those. It is good at things that fit that mold: it is good at perception and good at certain kinds of game playing, but that does not mean it can do everything else. The way I think about it is that deep learning is a great hammer, and we have a lot of people looking around saying, "because I have a hammer, everything must be a nail." Some things actually do work with that, but there has been much less progress on language. There has been exponential progress in how well computers play games, but essentially zero progress in getting them to understand conversation. That is because intelligence itself has many different components; there is no silver bullet that solves it all.

The second thing I wanted to say is that there is no substitute for common sense; we really need to build common sense into our machines. I wanted to show a picture of a robot up a tree with a chainsaw, cutting on the wrong side of the branch, if you can picture that; it is about to fall down. That would be very bad, and we would not want to solve it with a popular technique called reinforcement learning, which requires many, many trials. You would not want a fleet of a thousand robots with a hundred thousand chainsaws making a hundred thousand mistakes; that would be really bad. Then I was going to show a really cool picture of something called a yarn feeder, which is like a little bowl with some yarn in it and a string that comes out of a hole. As soon as I describe it, you have enough common sense about how physics works, and about what I might want to do with it, to understand it. Then I was going to show a picture of a really ugly one and point out that you could recognize that one too, even though it looks totally different, because you get the basic concept; that is what common sense is about. Then I was going to show you a picture of a Roomba, the vacuum-cleaner robot, and then a picture of a dog doing its business, as you might say. The Roomba does not know the difference between the two. What I was going to show you has been called the "poopocalypse," something that has happened not once but many times: Roombas that do not know the difference between what they need to clean up and dog waste, and spread the dog waste all over people's houses. It has been described as the Jackson Pollock of artificial-intelligence common-sense disasters.
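To make the point about reinforcement learning needing many, many trials concrete, here is a toy sketch, again mine rather than the book's: tabular Q-learning on a trivial ten-cell corridor, counting how many times the simulated agent "falls" while it learns. In simulation those failures are free; a robot in a tree with a chainsaw does not get that luxury.

```python
# Toy illustration (not from the book): even on a trivial corridor task,
# trial-and-error learning racks up many costly mistakes before it works.
import random

N_STATES = 10            # cells 0..9; cell 9 is the goal, cell 0 is a fall
ACTIONS = (-1, +1)       # step left or right
EPISODES = 5000
alpha, gamma, epsilon = 0.1, 0.95, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
falls = 0

for _ in range(EPISODES):
    state = 5                                   # start in the middle
    while 0 < state < N_STATES - 1:
        if random.random() < epsilon:           # explore occasionally
            action = random.choice(ACTIONS)
        else:                                   # otherwise exploit the estimate
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = state + action
        if nxt == 0:
            reward, done = -10.0, True          # "fell": the costly mistake
            falls += 1
        elif nxt == N_STATES - 1:
            reward, done = 10.0, True           # reached the goal
        else:
            reward, done = -0.1, False          # small cost per step
        best_next = 0.0 if done else max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

print(f"falls during training: {falls} out of {EPISODES} episodes")
```

The exact numbers do not matter; what matters is that the failures are part of how the method learns, which is fine for Go in simulation and unacceptable for chainsaws and elder care.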
Then, what I really wish I could show you most of all is a video of my daughter climbing through chairs, sort of like the ones you are sitting in now: chairs where there is a space between the bottom of the chair and the back of the chair. My daughter at the time was four years old, just small enough to fit through. She did not do this with what would be called reinforcement learning, which would mean trying hundreds of thousands of times. I was never able to climb through the chair myself; I was a little bit too big, even when I was in good shape and exercising. And she had never, for those of you who know it, watched the television show The Dukes of Hazzard, where they climb through a window to get into a car; she had never seen that. She just invented a goal for herself. I think this is how human children learn things: can I do this? I wonder if I could do that. Can I walk on the small ridge on the side of the road? I have two children, five and six and a half, and all day long they just make up games like that: what if it were like this, could I do that? She tried this, and she learned it essentially in one minute. She squeezed through the little section of the chair; it was pure problem solving. That is very different from collecting a lot of data with a lot of labels, the way deep learning works right now. I would suggest we need to take clues from kids about how they do these things.

The next thing I was going to do is quote Elizabeth Spelke, who teaches at Harvard down the street. She has made the argument that if you are born knowing there are objects and sets and places, things like that, then you can learn about particular things; but if it is just about pixels and videos, you cannot do that; you need a starting point. This is the opposite of what people call the blank-slate hypothesis. I also like to show a video of a baby ibex climbing down a mountain. Nobody ever wants to think humans have anything innate other than their temperament; they don't think humans
