The Stream, Al Jazeera, February 20, 2015, 4:30am-5:01am EST
they find the support available isn't really engaging enough. >> reporter: changing perceptions has taken time, but what this museum proves is that creativity is a very powerful channel for dealing with mental illness. sonya gallegos, al jazeera, london. can we teach robots morality? the u.s. is betting millions on the prospect. later, from catching criminals to reading emotions, mind-blowing advances in facial recognition software are bringing computers frighteningly close to mind-reading.
my producer and co-host is here to bring in all your feedback throughout the program. teaching robots morality. it feels like we're in a science-fiction movie. >> it's awesome. we're paid to geek out today. this is fascinating and terrifying, so i imagine a future metropolis programmed by the matrix, run by skynet and patrolled by robocop. all of us will be like stepford wives, because they'll have facial recognition technology that can tell if we're lying. we'll all be boring people, which means siri will be the most interesting person. we asked our community, in a future of artificial intelligence, could you imagine dating your siri? we wouldn't last. our communication styles are incompatible. she never listens to me. taylor on facebook says, if it's a robot, i.e., a computer, then reason will be all it has at first.
all you have to do is make it so the suffering of beings is a concern to it, and you have the potential for morality in the robot. but what about the consequences? >> taylor is thinking. >> it's smart. >> piloting planes, diagnosing illnesses and making ethical decisions on the battlefield, those are duties that require trained and talented individuals. what if i told you that robots are up for the same kinds of challenges? >> according to researchers, artificial intelligence is advancing so rapidly that in some cases you can't tell whether you're dealing with a human being or a computer. recently, programmers rocked the artificial intelligence world by creating a supercomputer that passed the turing test. the test measures the intelligence of a machine by whether it can fool people into believing it's human, and in this case it fooled a third of the scientists into believing they were talking to a 13-year-old boy, leading some to believe we're on a path where machines eventually achieve the same level of critical thinking, introspection and even ethical reasoning as human beings.
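The pass criterion described above is easy to make concrete. Here is a minimal Python sketch, assuming ten hypothetical judge verdicts rather than the actual 2014 event's data:

```python
# minimal sketch of the turing-test pass criterion described above.
# the verdicts are invented for illustration; the real 2014 event
# used human judges holding live five-minute text chats.

def fooled_fraction(verdicts):
    """verdicts: list of booleans, True where a judge believed
    the machine was human."""
    return sum(verdicts) / len(verdicts)

# ten hypothetical judges; four believed they were chatting with a person
verdicts = [True, False, True, False, False,
            True, False, True, False, False]

fraction = fooled_fraction(verdicts)
print(f"{fraction:.0%} of judges fooled")

# the benchmark discussed in the segment: fooling more than 30% of judges
if fraction > 0.30:
    print("machine passes this version of the test")
```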
after you hear our guests today, you will not think this is impossible. of course, what are the long-term consequences and implications here? to help us sort it out, from london is george, an engineer and expert in artificial intelligence, and sebastian, the senior editor for the blog extremetech. the phrase artificial intelligence gets thrown around a lot, with everything from google to the iphone's siri. what does artificial intelligence mean in 2014? >> it means different things than it used to mean several decades ago. several decades ago, when the field started in the late '50s and '60s, it meant developing machines, computers, robots that had the potential to think and self-reflect, that had everything we
recognize as human intelligence. nowadays, because we understand more of the complexity around the human brain and human consciousness, the definition has become, if you like, more humble. it means making machines very intelligent, but not necessarily self-aware. >> what about this computer we just referenced that fooled some scientists into thinking it was human? have we broken through a barrier in artificial intelligence? >> i don't believe so, no. certainly not. this is what is called a turing test. it was proposed several decades ago by one of the greatest pioneers, if you like, of computer artificial intelligence, alan turing, and no one really believes this is solving artificial intelligence. far from it. it's seen more as a p.r. exercise that has achieved one very important thing.
we are having a discussion about artificial intelligence, which is serious business, here between you and sebastian. i think that's very good news indeed. >> we're talking about -- >> go ahead. >> as george mentioned, passing the test -- that's one very small act of intelligence, if you can even call it intelligence. you know, we used to have very lofty ideas of intelligence, as george says, but now the focus is on very, very specific tasks, and just tricking a few people into thinking they're talking to a boy instead of a computer is a very, very small challenge. it doesn't actually help you have rational thoughts. it doesn't help you drive a car. it doesn't help you do these useful things. it's also worth noting that in this case the judges knew they were talking to a computer, and they knew they were judging a computer. so there's a lot of bias there. it's not a huge breakthrough. >> if it only tricks a third of
the people, aren't there significant implications here for people with nefarious intentions? if a computer can trick 30% of folks into thinking it is real, that could be problematic. >> i would say yes, it's possible, but the test wasn't done very well. if it took someone off the street, sat them down and said, talk to this person, and they actually believed it was a person, i would say there's some applicability here. the fact that they knew they were talking to a computer makes it feel iffy. there are not a lot of research groups trying to pass the test. it's an old-world oddity that hung on because it's attached to alan turing's name, so it has that significance. i don't know of any large ai groups seriously trying to pass it. they're working on other things. >> sebastian, the government is
investing in moral robots; it has invested $7.5 million. should robots be trusted to make moral decisions? lynn said, hell, i don't trust human beings to make moral decisions. he also asked, will they interpret commands like a lawyer would, and whose morals will they follow? religious morals? good question. should we have moral robots for warfare? you heard my geek references: skynet, the matrix. is it possible to make these moral robots? your thoughts. >> first of all, it's necessary to think about this possibility. i think it was in 1942 that isaac asimov, the science-fiction writer, defined the three laws of robotics, and that was like the first effort.
it's a realization that if you have intelligent creatures, machine creatures, moving around in our environment, interacting with us, driving cars or performing operations, then those machines will have to make decisions, and sometimes those decisions have to have a moral underpinning. they have to be moral decisions. just imagine a car driving down the road that, to avoid an accident, is given basically two choices: either to kill person a or to kill person b. how will that machine make this decision? this is a moral decision, and perhaps not a predictable one. if we go down this road, it has costs. these are issues we definitely need to address, and they do not concern purely the military establishment, if you like, or the battlefield. they have to do with what will happen in our everyday lives as well. >> sebastian, george raises an interesting issue here. without getting too technical, how do
you code for moral consequences? how do you code for ethics? >> so this is -- i mean, as the twitter people mentioned, the problem here is we still find it very hard to quantify what human morality or human ethics is, so the starting point for the u.s. military research is actually working out how a human makes a moral decision. if you are driving your own car and you have a choice of running over one of two people, which one do you choose to run over? this decision has plagued people for a long time. once you work out what human morality is, in theory you can program it into a computer. it would probably take the form of a huge number of questions, millions of questions: you would take as much data as you can -- what does the person look like? every kind of thought that goes through your head to make a decision -- and it would try to work out the answer. at the end of the day, the robot is making that decision, which is very hard for us to get our heads around.
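Sebastian's "millions of questions" idea can be sketched as a weighted scoring function over each option's features. This is a toy sketch; every feature, weight and number below is invented for illustration, not the actual military research:

```python
# toy sketch of the approach sebastian describes: reduce a moral choice
# to many weighted questions about each option, then pick the option
# with the lowest predicted harm. features and weights are invented.

HARM_WEIGHTS = {
    "people_at_risk": 10.0,   # how many people the option endangers
    "survival_chance": -6.0,  # higher survival chance lowers the harm
    "property_damage": 0.5,
}

def harm_score(option):
    # "answer the questions" for one option and total them up
    return sum(HARM_WEIGHTS[k] * option[k] for k in HARM_WEIGHTS)

def choose(options):
    # the machine's moral decision: the least-harm option wins
    return min(options, key=harm_score)

swerve_left  = {"people_at_risk": 1, "survival_chance": 0.2, "property_damage": 3}
swerve_right = {"people_at_risk": 1, "survival_chance": 0.7, "property_damage": 8}

# picks swerve_right: the endangered person's survival chance is higher
print(choose([swerve_left, swerve_right]))
```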
>> the hard part for me to get my head around is that it's not just logic, right? feelings come into play, which are very different from actual inputs that require simple logic. >> so this is the thing. it depends on whether you believe that humans are purely the result of a bunch of chemical reactions in your head that make decisions, or whether there is some kind of other force that is helping you make those decisions. there's a big argument that all the decisions you make are just based on, you know, chemical things in your head making you answer in certain ways. in theory, we should be able to make a robot that makes exactly the same decisions. there's been research into making robots that have the same hormones in the brain, the same endocrine systems, to make a robot behave like a human.
at some point you would think that they could closely mirror humans. you know, that's -- yeah. >> our community is skeptical. one viewer says air-to-ground combat has increased casualties. forget the money: what's the human cost of killing machines? christa says fictional creations seem to channel a lot of anxiety. sebastian, you tweeted, i welcome our new robot overlords. you'd be a trainer. >> you'll stick around for the next segment. the pictures and text that you share online with friends may seem like a private affair, but when it comes to the nsa, they're fair game to collect and store using facial recognition software in the name of national security. up next, how the same technology is also being used by local law enforcement in catching criminals, and how stores might be next in line, changing your shopping experience forever. is your privacy, though, worth it? later, you may be able to tell what anyone around you is feeling with the click of a button. we have all the details. see you in two minutes. >> writer taiye selasi shares her impactful point of view >> certain people have to explain their presence... >> when you're part of many worlds, where is home? >> in ghana, i was not going to be able to become the person i wanted to be. >> every monday, join us for exclusive... revealing... and surprising talks with the most interesting people of our time... talk to al jazeera, part of our special black history month coverage on al jazeera america
welcome back. we're discussing the latest mind-blowing advances in artificial intelligence. facial recognition software has been around for a while, but with recent advances we're seeing effective results from its use in law enforcement and other sectors. this month a chicago man became the first person to be convicted of a crime as a result of the technology. here to talk about the various ways facial recognition software is permeating our lives, on skype from san francisco, is jennifer lynch. she works on transparency and privacy issues in new technology. and from ames, iowa, brian meinke from iowa state university. thank you for joining us. brian, we just mentioned the first man to be convicted using facial recognition software, convicted of robbery. are these recognition programs
fool-proof? >> they're not fool-proof, but they work well. facebook recently announced an app with a 98% accuracy rate. they have a lot of data, and the more data about you, particularly facial information, the more accurate the program is. >> talk about how the applications may work in the retail sphere. if i walk into a store, how am i recognized by my face, and how does that alter my shopping experience? >> right now we're on the cusp of seeing some apps looked at by retailers. essentially, what they're experimenting with now is generally referred to as anonymous video analytics. they try to segment people into categories, and this all stems from a desire to get you to look at things, for example, visual signs. if you put an image on a sign that's more likely to appeal to someone, they'll look at it. they figure out who you are, but they do so only in a general sense. what we will see next are things more related to, for example, reward-card types of systems, where you kind of walk in and get recognized.
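Here is a hypothetical sketch of the anonymous video analytics Brian describes: the system estimates a coarse demographic segment from a face, picks sign content for that segment, and stores no identity. The segments and ads are invented:

```python
# hypothetical sketch of anonymous video analytics: only a coarse
# demographic segment is used to pick sign content; the face itself
# is discarded and no identity is stored. segments/ads are invented.

ADS_BY_SEGMENT = {
    ("18-34", "any"): "sneaker promo",
    ("35-54", "any"): "suv lease offer",
    ("55+",   "any"): "travel package",
}

def pick_ad(estimated_age_band, estimated_gender="any"):
    # fall back to a generic ad if the segment is unknown
    return ADS_BY_SEGMENT.get((estimated_age_band, estimated_gender),
                              "default store ad")

# a face detector (not shown) estimates a shopper is in the 18-34 band
print(pick_ad("18-34"))  # -> "sneaker promo"
```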
>> throughout the show, i'm the harbinger of negative consequences and potential bad news. the community asks, will ai be covered under constitutional provisions, and will we re-evaluate our meaning of life, with obvious questions. miguel says if people want smarter technology, like personal assistants and other stuff, they have to give up privacy, unfortunately. jen, chicago currently has 23,000 surveillance cameras, and the police pay for the technology. where is the balance here between our privacy and security in this brave new world? >> well, i think that's something we really have to talk about as a society. now is the time to put privacy
protections in place. right now we're not at the point where facial recognition can automatically identify any face in a crowd, but we will be getting there soon, and as the government builds out databases of millions of images of people, it's something we really need to be worried about for the future. >> jennifer, the nsa is reportedly accessing images available on social media to use for law enforcement. they're publicly available images. do you have concerns about the nsa doing this? >> well, this is something that came out in a "new york times" article a couple of weeks ago about how the nsa is collecting millions of images every day and employing facial recognition technology to learn who people are in the images. i think what the nsa is doing is combining that facial recognition data with other biographic information and information from social media that explains who people are associated with, and using
that information to identify people and create a bigger picture of who they are. >> some more bad news here. we asked the community, what are the drawbacks of using computer software to convict people of crimes? scott says it's dangerous in and of itself to rely upon technologies, despite improving algorithms; juries should not predicate their decisions on them, and i'd argue it's a dangerous precedent to do so. sebastian, our resident geek here, what is your feedback to scott's comment? are you scared of the precedent this technology will be setting? >> this is predicated on the idea that computers are better than humans. i mean, as has been mentioned, facebook has an algorithm that is better than humans at recognizing whether two faces match. if you have seen someone before and the computer has seen him before, and you see that person again in a crowd or a random shop, the computer is better than a human at recognizing that person. so, again, this is an inherent distrust of computers and robots, but, you know, also, surely computers aren't biased. they don't have prejudices. they don't have all sorts of stuff like that. you could also say having a computer making those decisions is possibly quite a good idea. it's not like the computer is going to make the decisions on its own. a human will check it over time.
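The re-recognition Sebastian describes is typically done by comparing numeric face "embeddings" and declaring a match when they are close. A toy sketch with made-up embeddings and threshold, not Facebook's actual system:

```python
# sketch of face re-recognition: two images are reduced to numeric
# embeddings (by a model not shown here), and a match is declared when
# the embeddings are close enough. numbers and threshold are invented.

import math

def distance(a, b):
    # plain euclidean distance between two embedding vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(embedding_a, embedding_b, threshold=0.6):
    return distance(embedding_a, embedding_b) < threshold

enrolled      = [0.12, 0.80, 0.33, 0.54]  # face seen before
seen_in_crowd = [0.15, 0.78, 0.30, 0.57]  # new sighting of the same face

print(same_person(enrolled, seen_in_crowd))  # True: close in embedding space
```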
>> whether we're talking about something like a jury or the court system, eyewitness testimony is not that reliable. you would have to think that a computer software program would be more reliable than eyewitness testimony. >> one would hope so. >> jennifer, where is the line, though? where is the line where you say, that's far enough? that's as much as law enforcement can use this kind of technology, and it has to stop here? where is "here"? >> i wanted to get back to an earlier point about bias and
technology, and this idea that computers are not biased. of course, the only way that computers get information is if a human enters that information. a lot of times, the information that's input into criminal databases is based on biased policing, and so, you know, there's that saying: garbage in, garbage out. if you're entering images into a database that are based on racial profiling, then that's your pool of people who you're trying to identify. one thing we learned from documents we received from the fbi about its massive next generation identification facial recognition database is that the system isn't all that accurate, in actual fact. the fbi only guarantees accuracy 85% of the time, and that 85% is dependent on the actual candidate being in the search results that are provided, and that's the top 50 search results. so that means up to 49 people may be misidentified and become suspects for a crime.
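Jennifer's point is easier to see in code: such a search returns a ranked candidate list, not a single answer, so even a "correct" search surfaces dozens of innocent look-alikes. A toy sketch with an invented gallery:

```python
# toy sketch of the "top 50 candidate list" jennifer describes: the
# system returns the 50 nearest database faces. even when the real
# person is in the list, the other 49 names returned are innocent.
# the gallery and its distances are invented.

def top_candidates(distances, k=50):
    """distances: dict of name -> distance between that database face
    and the probe image (smaller means more similar)."""
    return sorted(distances, key=distances.get)[:k]

# 200 database faces; person_37 happens to resemble the probe most
gallery = {f"person_{i}": abs(i - 37) / 100 for i in range(200)}

print(top_candidates(gallery, k=5))
# ['person_37', 'person_36', 'person_38', 'person_35', 'person_39']
# person_37 may be the true match; the rest are just look-alikes
```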
>> mills on twitter shares her concerns and says, just don't program them to carry human bias, and don't target based on zip codes. that would be a good start. chief justice says, don't forget the system can be hacked. a lot of implications, really. >> indeed. thanks to our guests, jennifer lynch and brian. >> monday, studying deadly viruses. >> these facilities are incredibly safe, incredibly secure. >> go inside the study of infectious diseases. >> ventilated footy pajamas. >> protecting those working to protect us. >> we always have to stay one step ahead of them because they're out there. >> techknow's team of experts show you how the miracles of science... >> this is my selfie, what can you tell me about my future? >> can affect and surprise us. >> don't try this at home. >> "techknow", where technology meets humanity. monday, 5:30 eastern. only on al jazeera america.
welcome back. researchers have taught computers to decipher 21 distinct facial expressions, including complex emotions like happily disgusted, and software like what you just saw is about to make this kind of technology available to the wider public. how will this change the way we interact, and how will it impact our freedom of expression? joining us on skype out of new york is a cognitive scientist and a lead researcher on the ohio state study i mentioned. just reading about it makes me feel like someone is invading the space in my head. i'm not sure if i want perfect strangers to know my mood. what's the upside of this application? >> there are many upsides. understanding how facial expressions emerge and work is essential for understanding psychological disorders that people might have, and there are dozens, many dozens, of psychological disorders that have been defined in the literature, and it is very difficult right now to understand the differences between them.
we thought we only had six basic emotions that we could all express in the same way: happiness, surprise, anger, disgust, fear and sadness. what happens is it's extremely difficult to differentiate between dozens and dozens of different emotional states. with these 21 expressions, the hope is we can differentiate between many disorders, so we'll be able to diagnose them to begin with, which is a very difficult task right now for psychiatrists and the medical establishment. it is also going to allow us to better understand what the differences are between the disorders and, hopefully down the line, find ways to help people become more adapted to our society and to interact with the rest of us.
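The compound expressions the study describes can be sketched as combinations of the facial muscle movements ("action units") behind the basic emotions. The action-unit sets below are simplified stand-ins, not the definitions from the Ohio State paper:

```python
# toy sketch of scoring compound expressions: each basic emotion is a
# set of facial action units, and a compound combines the movements of
# its parts. the action-unit sets here are simplified stand-ins.

BASIC = {
    "happy":   {6, 12},      # cheek raiser, lip-corner puller
    "disgust": {9, 10, 17},  # nose wrinkler, upper-lip raiser, chin raiser
}

COMPOUND = {
    "happily disgusted": BASIC["happy"] | BASIC["disgust"],
}

def score(detected_units, template):
    # jaccard overlap: movements shared / movements in either set
    return len(detected_units & template) / len(detected_units | template)

detected = {6, 9, 10, 12}  # movements a detector (not shown) found on a face

for name, template in {**BASIC, **COMPOUND}.items():
    print(name, round(score(detected, template), 2))
# "happily disgusted" scores highest (0.8) because it explains
# most of the detected movements
```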
>> we all have friends with a high degree of emotional intelligence. you think you look completely normal, and they're like, what happened? are the computers able to pick up the same kinds of nuances that an emotionally intelligent human being would pick up on? >> the algorithms we have right now are about as good as the average human, maybe a little better. they're not extremely, extremely good yet; there are small changes, like the ones you were mentioning, that they still miss. but we're getting there, and the hope is, yes, in a few years we can detect those small differences. >> all right. amy on twitter, a loyal streamer, shares her concerns: i know this is going to happen. i just can't stand the creeping lack of privacy. if i'm on social media, it's voluntary, but all of this isn't. we asked our community about the latest technology.
a lie-detector app can spot fake emotions. do we have a right to lie? >> we should ask how accurate it is. >> on facebook, james says this kind of sucks; everybody puts on the quote-unquote friendly act when they see somebody they know. i'd like an app to recall what their name is. white lies -- so much of my social interaction is fueled by white lies. when i say i'll see you for lunch, sometimes you never actually want to see that person for lunch. do we have a right to our white lies? does this technology deprive us of that, and how does it impact social interactions? >> the important thing to note here is that computers are very, very good at this. the way it works is it looks at your face, looks for big expressions -- it will probably go in very, very slow motion -- and then analyzes those movements. so all these compound emotions, like happily
disgusted. we already know these emotions have a human element: when we see someone who is happily disgusted, we know what that looks like. when you try to tell a white lie, the computer probably has as good a chance as a human of telling that you're lying. we all know that when someone's eyes don't crinkle up, they're not really smiling. there are telltale signs like that, and there's a chance a computer can read them better than a human can. that's the risk. like a human -- you might have seen stage magicians that read your mind by watching the movements of your lips and that kind of thing. as computers get good enough, you may be able to point your smartphone at someone and read their mind by looking for those little movements in their lips, or whatever. that's probably, you know, a year or two away. there are pretty cool applications there, for sure. >> on that note, we're going to end it.
only a year or two away. that's not that comforting, to be honest. >> i will wear makeup for the rest of my life. >> thanks to all of our guests. until next time, we will see you online. >> tuesday on "the stream". >> selling cocaine was my purpose. >> they had been trafficking on behalf of the united states government. >> renowned filmmaker marc levin discusses his new movie "freeway: crack in the system". "the stream", tuesday, 12:30 eastern. only on al jazeera america. >> the new al jazeera america primetime. get the real news you've been looking for. at 7:00, a thorough wrap-up of the day's events. then at 8:00, john seigenthaler digs deeper into
the stories of the day. and at 9:00, get a global perspective on the news. weeknights, on al jazeera america. ♪ >> announcer: this is al jazeera. ♪ hello and welcome to the news hour. i'm in doha with the top stories on al jazeera. after months of fighting, the u.n. says warring political parties in yemen have agreed to form a council to govern the country until a final settlement is reached. led away like a common criminal, a mayor in venezuela is arrested, accused of plotting a