My producer and co-host is here to bring in all your feedback throughout the program. Teaching robots morality. It feels like we're in a science-fiction movie. It's awesome. We're paid to geek out today.

This is fascinating and terrifying, so I imagine a future metropolis programmed by the Matrix, run by Skynet and patrolled by RoboCop. All of us will be like Stepford wives, because they'll have facial recognition technology that can tell if we're lying. We'll all be boring people, which means Siri will be the most interesting person.

We asked our community: in the future of artificial intelligence, could you imagine dating your Siri?

We wouldn't last. Our communication styles are incompatible. She never listens to me.

Taylor on Facebook says: if it's a robot, i.e. a computer, then reason will be all it has at first. All you have to do is make it so that the suffering of other beings is a concern to it, and you have the potential for morality in the robot. But the consequences? Taylor is thinking. It's smart.

Piloting planes, diagnosing illnesses and making ethical decisions on the battlefield: those are duties that require trained and talented individuals. What if I told you that trained robots are up for the same kinds of challenges? According to researchers, artificial intelligence is advancing so rapidly that in some cases you can't tell whether you're dealing with a human being or a computer. Recently, programmers rocked the artificial intelligence world by creating a supercomputer that passed the Turing test. The test measures the intelligence of a machine by whether it can fool people into believing it's a human, and in this case it fooled a third of the scientists into believing they were talking to a 13-year-old boy, leading some to believe we're on a path where machines eventually achieve the same level of critical thinking, introspection and even ethical reasoning as human beings. After you hear our guests today, you will not think this is impossible. Of course, what are the long-term consequences and implications here?

To help us sort it out, from London is George, who is an engineer and expert on artificial intelligence. Also with us is Sebastian, senior editor for the blog ExtremeTech. The phrase artificial intelligence gets thrown around a lot, with everything from Google to the iPhone's Siri. What does artificial intelligence mean in 2014?

It means different things than it used to mean several decades ago. Several decades ago, when it started in the late '50s and '60s, it meant developing machines, computers, robots that had the potential to think and self-reflect, that would have everything we recognize as human intelligence. Nowadays, because we understand more of the complexity around the human brain and human consciousness, the definition has become, if you'd like, more humble. It means making machines very intelligent, but not necessarily self-aware.

What about this computer we just referenced, that fooled some scientists into thinking it was human? Have we broken through a barrier in artificial intelligence?

I don't believe so, no. Certainly not. This is what is called a Turing test. It was proposed several decades ago by one of the greatest pioneers, if you'd like, of computer artificial intelligence, Alan Turing, and no one really believes this is solving artificial intelligence. Far from it. It's seen more as a PR exercise, but it has achieved a very important thing: we are having a discussion about artificial intelligence, which is serious business, here between you and Sebastian.
I think that's very good news indeed.

We're talking about... go ahead.

As George mentioned, that's one very small act of intelligence, if you can even call it intelligence. We used to have very lofty ideas of intelligence, as George says, but now the field works on very, very specific parts, and just tricking a few people into thinking they're talking to a boy instead of a computer is a very, very small challenge. It doesn't actually help you have rational thoughts. It doesn't help you drive a car. It doesn't help you do these useful things. It's also worth noting that in this case the judges knew they were talking to a computer; they knew they were judging a computer. So there's a lot of bias there. It's not a huge breakthrough.

If it only tricks a third of the people, aren't there still significant implications here for people with nefarious intentions? If a computer can trick 30% of folks into thinking it is real, that could be problematic.

I would say yes, it's possible. I would also say that the test wasn't done very well. If the test took someone off the street, sat them down, said "talk to this person," and they actually believed it was a person, I would say there's some applicability here. The fact that they knew they were talking to a computer makes it feel iffy. There are not a lot of research groups trying to pass the test. It's an oddity that has hung on because it's attached to Alan Turing's name, so it has that significance. I don't know of any large AI groups seriously trying to pass it. They're working on other things.

Sebastian, the government is investing in moral robots. They have invested $7.5 million. Should robots be trusted to make moral decisions? Lynn said, "Hell, I don't trust human beings to make moral decisions." He also asked, will they interpret commands like a lawyer would, and whose morals will they follow? Religious morals? Good question. Should we have moral robots of warfare? You heard my geek references: Skynet, the Matrix. Is it possible to make these moral robots? Your thoughts?

First of all, it's necessary to think about this possibility. I think it was in 1942 that Isaac Asimov, the science-fiction writer, defined the three laws of robotics, and that was like the first effort. It's a realization that if you have intelligent creatures, machine creatures, moving around in our environment, interacting with us, driving cars or performing operations, then those machines will have to make decisions. Sometimes those decisions have to have a moral underpinning. They have to be moral decisions. Just imagine a car driving down the road that, to avoid an accident, is given basically two choices: either to kill person A or kill person B. How will that machine make this decision? This is a moral decision, and whichever way we go down this road, it has a cost. These are issues we definitely need to address, and they do not concern purely the military establishment, if you like, or the battlefield. They have to do with what will happen in our everyday lives as well.

Sebastian, George raises an interesting issue here. Without getting too technical, how do you code for moral consequences? How do you code for ethics?

So this is... I mean, as the Twitter people mentioned, the problem here is we still find it very hard to quantify what human morality is, or human ethics, so the starting point for the U.S. military research is actually having to work out: how does a human make a moral decision?
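Sebastian goes on to describe this as reducing a moral choice to a huge number of questions asked over whatever data the machine can gather. Purely as a hypothetical illustration of that idea, not anything drawn from the military research the guests mention or from any real system, a crude weighted-scoring version might look like this:

```python
# Hypothetical sketch: a "moral" choice reduced to weighted questions about
# each possible action. The features, weights, and scoring rule are invented
# for illustration only -- they do not come from any real system.

def moral_score(outcome):
    """Score one predicted outcome; lower is treated as 'less bad'."""
    score = 0.0
    score += 100.0 * outcome.get("expected_fatalities", 0)
    score += 10.0 * outcome.get("expected_injuries", 0)
    score += 5.0 if outcome.get("breaks_law", False) else 0.0
    score += 1.0 * outcome.get("property_damage", 0)  # arbitrary units
    return score

def choose_action(outcomes):
    """Pick the action whose predicted outcome scores lowest."""
    return min(outcomes, key=lambda name: moral_score(outcomes[name]))

# Toy version of the "kill person A or person B" dilemma from the discussion.
outcomes = {
    "swerve_left":  {"expected_fatalities": 1, "breaks_law": False},
    "swerve_right": {"expected_fatalities": 0, "expected_injuries": 2,
                     "breaks_law": True},
}
print(choose_action(outcomes))  # -> "swerve_right" under these made-up weights
```

The hard part, as the conversation turns to next, is that the weights themselves are the morality: someone has to decide how injuries trade off against fatalities, and that choice is a human one.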
If you are driving your own car and you have a choice of running over one of two people, which one do you choose to run over? That decision has plagued people for a long time. Once you work out what human morality is, in theory you can program it into a computer. It would probably take the form of a huge number of questions, millions of questions: you would take as much data as you can get. What does the person look like? Every kind of thought that goes through your head to make a decision, and it would try to work out the answer. At the end of the day, the robot is making that decision, which is very hard for us to get our heads around.

The hard part for me to get my head around is that it's not just logic, right? Feelings come into play, which are very different from inputs that require simply logic.

So this is the thing. It depends on whether you believe that humans are purely the result of a bunch of chemical reactions in your head that make decisions, or whether there is some kind of other force that is helping you make those decisions. There's a big argument that all the decisions you make are just based on, you know, chemical things in your head making you answer in certain ways. In theory, we should be able to make a robot that makes exactly the same decisions. There's been research into making robots that have the same hormones in the brain, the same endocrine systems, to make a robot behave like a human. At some point you would think that they could closely mirror humans.

You know, that's... yeah. Our community is skeptical. One comment says air-to-ground combat has increased casualties; forget the money, what's the human cost of killing machines? Christa says fictional creations seem to channel a lot of anxiety. Sebastian, you tweeted in, "I welcome our new robot overlords." You'd be a traitor. You'll stick around for the next segment.

The pictures and text that you share online with friends may seem like a private affair, but when it comes to the NSA, they're fair game to collect and store using facial recognition software in the name of national security. Up next, how the same technology is also being used by local law enforcement in catching criminals, and how stores might be next in line, making your shopping experience ever more personal. Is your privacy, though, worth it? Later, you may be able to tell what anyone around you is feeling with the click of a button. We have all the details. See you in two minutes.

Khaled Hosseini, author of the best-selling novel The Kite Runner, talks about his hopes for his homeland: "No change will come to Afghanistan unless it's initiated by the Afghans themselves," and the inspiration for his latest novel: "The idea for the book came from painful acts of sacrifice." Every Saturday, join us for exclusive, revealing and surprising talks with the most interesting people of our time. Talk to Al Jazeera, only on Al Jazeera America.

Welcome back. We're discussing the latest mind-blowing advances in artificial intelligence. Facial recognition software has been around for a while, but with recent advances we're seeing effective results of its use in law enforcement and other sectors. This month a Chicago man became the first person to be convicted of a crime as a result of the technology. Here to talk about the various ways facial recognition software is permeating our lives, on Skype from San Francisco, is Jennifer Lynch. She works on transparency and privacy issues in new technology. And from Ames, Iowa, Brian Meinke from Iowa State University. Thank you for joining us.
Brian, we just mentioned the first man to be convicted using facial recognition software, convicted of robbery. Are these recognition programs foolproof?

They're not foolproof, but they work well. Facebook recently announced they have an application with a 98% accuracy rate. They have a lot of data, and the more data about you, particularly facial information, the more accurate the program is.

Talk about how the applications may work in the retail sphere. If I walk into a store, how am I recognized by my face, and how does that alter my shopping experience?

Right now we're on the cusp of seeing some applications being looked at by retailers. Essentially, what they're experimenting with now is generally referred to as anonymous video analytics. They try to segment people into categories. This all stems from a desire to get you to look at things, for example digital signs: if you put an image on a sign that is more likely to appeal to someone, they'll look at it. They figure out who you are, but only in a general sense. What we will see next are things more related to, for example, reward-card types of systems, where you kind of walk in and get recognized.

Throughout the show I'm the harbinger of negative consequences and potential bad news. The community asks: will AI be covered under constitutional provisions, and will we re-evaluate our meaning of life? Those are the obvious questions. Miguel says if people want smarter technology, like personal assistants and other things, they have to give up privacy, unfortunately. Jennifer, Chicago currently has 23,000 surveillance cameras, and the police pay for the technology. Where is the balance here between our privacy and security in this brave new world?

Well, I think that's something we really have to talk about as a society. Now is the time to put privacy protections in place. Right now we're not at the point where facial recognition can automatically identify any face in a crowd, but we will be getting there soon, and as the government builds out databases of millions of images of people, it's something we really need to be worried about for the future.

Jennifer, the NSA is reportedly accessing images available on social media to use for law enforcement. They're publicly available images. Do you have concerns about the NSA doing this?

Well, this is something that came out in a New York Times article a couple of weeks ago about how the NSA is collecting millions of images every day and employing facial recognition technology to learn who the people in those images are. What the NSA is doing is combining that facial recognition data with other biographic information, and with information from social media that explains who people are associated with, and using that information to identify people and create a bigger picture of who they are.

Some more bad news here. We asked the community, what are the drawbacks to using computer software to convict people of crimes? Scott says it's dangerous in and of itself to rely on these technologies, despite improving algorithms; juries should not predicate their decisions on them, and he'd argue it's a dangerous precedent to do so. Sebastian, our resident geek here, what is your feedback to Scott's comment? Are you scared of the precedent this technology will be setting?

This comes down to whether computers are better than humans at this. I mean, as has been mentioned, Facebook has an algorithm that is better than humans at recognizing whether two faces match.
If you have seen him before and the computer has seen him before, and you see that person again in a crowd or a random shop, the computer is better than a human at recognizing that person. So, again, this is an inherent distrust of computers and robots, but, you know, surely computers aren't biased. They don't have prejudices. They don't have all sorts of stuff like that. You could also say having a computer make those decisions is possibly quite a good idea. It's not like the computer is going to make the decisions on its own. A human will check it, over time.

Whether we're talking about something like a jury or something being used in a court system, eyewitness testimony is not that reliable. You have to think that a computer software program would be more reliable than eyewitness testimony.

One would hope so. Jennifer, where is the line, though? Where is the line where you say, that's far enough, that's as much as law enforcement can use this kind of technology, and it has to stop here? Where is "here"?

I wanted to get back to an earlier point about bias and technology, and this idea that computers are not biased. Of course, the only way that computers get information is if a human enters that information. A lot of times the information that's input into a database, for criminal databases, is based on biased policing, and so, you know, there's that saying: garbage in, garbage out. If you're entering images into a database based on racial profiling, then that's your pool of people who you're trying to identify. One thing that we learned from documents we received from the FBI about the FBI's massive Next Generation Identification facial recognition database is that the system isn't all that accurate, in actual fact. The FBI only guarantees accuracy 85% of the time, and that 85% is dependent on the actual candidate being somewhere in the search results that are provided, and that's the top 50 search results. So that means 49 people may be misidentified and become suspects for a crime.

Mills on Twitter shares her concerns and says, just don't program it to carry human bias, and don't target based on zip codes; that would be a good start. Chief Justice says, don't forget the system can be hacked. A lot of implications, really. Indeed. Thanks to our guests Jennifer Lynch and Brian Meinke. Still ahead, can computers read our minds? How facial recognition software may even know what you're feeling by understanding your expressions. Here's the catch: it's on the brink of becoming easy for anyone to use with the click of a button. You don't want to miss this.

Consider This: the news of the day, plus so much more. We begin with the growing controversy. Answers to the questions no one else will ask. Real perspective. Consider This, on Al Jazeera America. Hundreds of days in detention. Al Jazeera rejects all the charges and demands immediate release. Thousands calling for their freedom. It's a clear violation of their human rights. We have strongly urged the government to release those journalists. Journalism is not a crime. It's insane. The Borderland marathon, only at Al Jazeera America.

What if your devices could read your emotions and respond to them? They're trying to do just that. It will enable a revolution in device and application personalization. Welcome back. Researchers at Ohio State University say they have found a way for computers to decipher 21 distinct facial expressions.
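To make Jennifer's point about the FBI's candidate lists concrete: a face-recognition search of this kind typically returns a ranked list of the closest matches rather than a single answer, so even when the true match sits somewhere in the top 50, the other 49 faces returned belong to people who may have nothing to do with the crime. A purely hypothetical sketch of that top-k ranking logic (the embeddings, names and similarity measure here are invented for illustration and are not drawn from the FBI system):

```python
# Hypothetical sketch of a top-k face search: rank a gallery of enrolled
# face embeddings by similarity to a probe image and return the 50 closest.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search(probe, gallery, k=50):
    """Return the names of the k gallery entries most similar to the probe."""
    ranked = sorted(gallery.items(),
                    key=lambda item: cosine_similarity(probe, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Toy gallery of 1,000 made-up embeddings; the probe matches "person_42".
gallery = {"person_%d" % i: [math.sin(i), math.cos(i)] for i in range(1000)}
probe = [math.sin(42), math.cos(42)]
print(search(probe, gallery)[:5])  # "person_42" ranks first; 49 others follow

# Even when the true match appears in the list, the remaining entries are
# other people who now surface as candidates -- the concern raised above.
```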