The Stream : Al Jazeera : August 9, 2022, 11:30am-12:01pm AST
11:30 am
... a goodwill ambassador for the United Nations before being diagnosed with breast cancer in 1992. She turned her 30-year battle with the disease into advocacy and philanthropy, founding the Olivia Newton-John Cancer and Wellness Centre in Melbourne, Australia. In 2020, she was recognized by the UK's Queen Elizabeth, who appointed her a dame. And what's the one memory that stands out the most from 40 years ago? Throughout it all, she remained close with her Grease co-star John Travolta from 40 years earlier. "Yes, I think we had crushes on each other, but we both were seeing other people, and I think that's what made the chemistry work." After her passing, Travolta wrote on Instagram: "My dearest Olivia, yours from the first moment I saw you and forever. Your Danny, your John." Olivia Newton-John
11:31 am
was 73. Leah Harding, Al Jazeera.

This is Al Jazeera, and these are the top stories. A senior commander of the Palestinian armed group the al-Aqsa Martyrs Brigades has been killed by Israeli forces in the occupied West Bank. Ibrahim al-Nabulsi was known as the Lion of Nablus; two others died in the raid, and around 40 people were injured. The FBI has searched the Florida home of the former US president Donald Trump; it's part of an investigation into whether he took classified records from the White House to a private residence. Voting is underway in Kenya's presidential election after a campaign dominated by concerns over high inflation and corruption. The two front-runners are former prime minister Raila Odinga and the current deputy president, William Ruto. Malcolm Webb
11:32 am
has more from Nairobi. We don't have any data yet from the voting, but in the last couple of opinion polls, Raila Odinga was given a lead of between six and eight percentage points over William Ruto. There were still enough undecided voters to potentially swing it. William Ruto voted a couple of hours ago in his home area, the town of Eldoret in Rift Valley province. Raila Odinga voted right here, at this polling station, just a few minutes ago; he came in here and voted. He was expected to speak briefly afterwards; he didn't. He got in the car and drove off.

Taiwan's military has held a live-fire artillery drill simulating a defense of the self-governing island. It follows days of Chinese military exercises in the air and the sea around Taiwan; China launched the drills in response to US House Speaker Nancy Pelosi's visit to Taipei last week. Heavy rains have
11:33 am
flooded South Korea's capital, turning the streets of Seoul's affluent Gangnam district into a river. At least eight people were killed, and thousands of roads were closed due to safety concerns; the military is prepared to deploy troops to help with the recovery. Those are the headlines; the news continues here on Al Jazeera after The Stream, which is next.

"Latin America is a region of wonder, of joy, of tragedy, and, yes, of violence. But it doesn't matter where you are; you have to be able to relate to the human condition wherever you are. No two countries are alike, and it's my job to shed light on how and why." On Al Jazeera.

OK, on this episode of The Stream, we are looking at the rapid development of artificial intelligence:
11:34 am
the dark side of AI, and also the amazing advances that may well be possible. Do you remember Sophia, the AI android? "Ish? Yes? You forgot who I am already? I'm Sophia of Hanson Robotics, one of the first androids in the world. As a social robot, I can help take care of the sick or elderly in many kinds of healthcare and medical uses. I can help communicate, give therapy, and provide social stimulation." An AI robot care assistant: what could possibly go wrong, or maybe go right? Let's meet your panel; they're about to tell us. Hello, Abhishek. Hello, Gianluca. Hello, Hilke. Lovely to have all three of you on board. Abhishek, please introduce yourself to our audience; tell them who you are and what you do. Great to be on here. I'm Abhishek Gupta, the founder and principal researcher at the Montreal AI Ethics Institute. It's an international non-profit research institute with
11:35 am
a mission to democratize AI ethics literacy. Prior to my current role, I used to work as a machine learning engineer at Microsoft, and I now lead the Responsible AI program at the Boston Consulting Group, BCG. Great to have you. Hello, Gianluca, welcome to The Stream; introduce yourself to our viewers. Hi everybody, thanks for having me. My name is Gianluca; I'm an AI author, entrepreneur, and speaker. I run a company called AI Academy, which focuses on education and consulting on artificial intelligence, and I run a weekly newsletter that tries to make all this crazy stuff that happens in AI easier to understand. Hilke, hello, welcome to The Stream; say hello to our audience around the world, tell them who you are and what you do. Hi, I'm Hilke Schellmann. I'm an investigative journalist and a journalism professor at New York University, and I've been investigating AI in hiring, and artificial intelligence in general, since four years ago, when I was
11:36 am
in a cab ride, a Lyft actually, in D.C., going to the train station, and I talked to the driver and asked, how was your day? He said, I had a really weird day; I had a job interview with a robot. And I was like, what? So I started investigating. I have since done reports for the Wall Street Journal, the New York Times, and MIT Technology Review, and I'm writing a book right now on artificial intelligence and the future of work. All right, that's the panel of experts that we have for you. And you, right now: jump into the comment section and be part of today's show. Should AI be regulated? Is now the time? Gianluca, I want to ask you to help me out, because I think of AI, artificial intelligence, as machines mimicking processes that human beings would normally do. We've given that decision-making to machines, and then we see: how does it work out, does it not work out? But they're making decisions which, probably in the 1940s to 1950s, or
11:37 am
even more recently, humans were doing for us. Gianluca, how did I do? So, you're correct, but we have to say that artificial intelligence is a lot of different things; it's a lot of different tools. You can see it as a hammer: a hammer is a general tool that you can use to do a lot of different things. You can build a statue, or you can build a house; you can do a lot of stuff. And so AI could be the kind of products that we saw before with Sophia, but it can also be a much more behind-the-scenes application. An example that I really love is a project that Google made to try to optimize the energy consumption of their data centers. In that case, AI works behind the scenes, trying to always find the best possible combination of different temperatures and setpoints, cutting their energy consumption by 40 percent. That's just one example of all the possible things that AI can do behind the scenes to improve our lives and improve our processes.
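(For readers following along at home, here is the shape of that idea in code: a minimal sketch of setpoint optimization against a cost model. It is not Google's actual system; DeepMind reportedly trained neural networks on historical sensor data. The cooling-cost function, safety constraint, and setpoint ranges below are all invented for illustration.)

```python
import itertools

# Hypothetical cost model: predicted energy use (kW) for a pair of cooling
# setpoints. A real system would learn this from historical sensor data.
def predicted_energy_kw(chiller_temp_c, fan_speed_pct):
    cooling_penalty = (22.0 - chiller_temp_c) ** 2 * 3.0  # colder water costs more
    fan_penalty = (fan_speed_pct / 10.0) ** 2             # faster fans cost more
    return 100.0 + cooling_penalty + fan_penalty

def is_safe(chiller_temp_c, fan_speed_pct):
    # Invented operating constraint: the server hall must stay cool enough.
    return chiller_temp_c + (100 - fan_speed_pct) * 0.05 <= 27.0

# Search every allowed setpoint combination for the cheapest safe one.
candidates = itertools.product(range(16, 23), range(40, 101, 5))
best = min(
    (combo for combo in candidates if is_safe(*combo)),
    key=lambda combo: predicted_energy_kw(*combo),
)
print("best setpoints:", best, "->", round(predicted_energy_kw(*best), 1), "kW")
```

The search loop is the easy part; the real engineering is learning an accurate cost model and keeping the optimizer inside safe operating limits.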
11:38 am
And give us an example of AI that you love that currently exists in our daily life. So, it works well when it does a good job, which is, you know, recommendations for exciting new TV shows on Netflix. But I also have to admit that sometimes it doesn't work well, when you spend hours trying to find the right TV show to watch. So I think, exactly as was said before, AI is something that can be embodied in robotics, but more often than not it's really hidden in various parts of our lives: things like the products we get recommended on Amazon, movies that are recommended to us on Netflix, music to listen to on Spotify, et cetera. And it's also, something else that you mentioned earlier, a constantly shifting goalpost from what we used to perceive and accept as AI. Think about, you know,
11:39 am
the next-word prediction on your smartphone: now it's just table stakes. It's accepted as an everyday software feature, and we don't even think about it as AI; we just think about it as regular software. Yes, it's in our emails; it's used for hiring. If you upload your resume to Indeed or Monster.com or LinkedIn, all of those large companies use AI to understand who you are and what kind of job offers you should be getting, or whether you should be recommended to a recruiter. So we see AI being used in all kinds of things, in healthcare settings, and we've seen things where it's really been working out well, and we've also seen some spectacular failures. So I think, from the first iteration, which was sort of, we are so excited about AI, it's going to revolutionize everything, there's now a little bit more realism about how well this technology can help us out.
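(The next-word prediction mentioned here is a nice example of AI that has faded into ordinary software. Below is a hedged sketch of the simplest statistical version of the idea, a bigram model; phone keyboards use far larger neural models, and the toy corpus is invented.)

```python
from collections import Counter, defaultdict

# Toy corpus; a real keyboard model is trained on vastly more text.
corpus = "the cat sat on the mat and the cat ran to the door".split()

# Count bigrams: how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat' ('cat' follows 'the' twice, others once)
```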
11:40 am
Hilke has a horrifying story of job applications going through to Amazon, and how the AI sifted those job applications. Will you please tell us that story? Because it's pretty shocking, and it turned out that the AI wasn't working as intended. Or was it? Yes, so we see this a lot, right? Since the nineties we've had these wonderful job platforms, Monster, LinkedIn, and so everyone can apply to any job, and that's really wonderful. On the other side, companies tell me, we get millions of applications; IBM says they get around three million applications a year. They're drowning in resumes, right? What are they going to do? They're going to use technology. So Amazon, same problem, right? They get too many resumes, so they wanted to build a tool, an AI tool, that can pick the best applicants. A wonderful idea; we all want it, of course. So what they did is they used resumes from folks that had been
11:41 am
interviewed at Amazon before, and let the machine learn how the job applicants' resumes had been checked out and put on pile A or pile B, yes or no. Over time, the engineers found out that the resume parser was starting to downgrade folks who had the word woman or women on their resume, because it turns out that, in the past, male applicants were preferred at Amazon; they obviously have more men in their departments. So the AI tool started to reflect that problem. That's one of the things that happened. I've found other problems in resume parsers too; that's the technology that looks at our resumes and says: should this candidate be rejected, or go on to the next hiring round?
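(To make the mechanism Hilke describes concrete, here is a deliberately tiny sketch, assuming scikit-learn is installed. It is not Amazon's system; the resumes and screening labels are fabricated. The point is that nobody writes a "penalize women" rule: if past decisions favored men, a model fitted to those decisions assigns a negative weight to words correlated with women, such as the token in "women's chess club".)

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Fabricated history: four resumes and past screening outcomes (1 = advanced).
# The bias lives in the labels, mirroring a past preference for male hires.
resumes = [
    "software engineer chess club captain",
    "software engineer women's chess club captain",
    "data scientist rugby team",
    "data scientist women's rugby team",
]
advanced = [1, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, advanced)

# Inspect what the model learned about each word.
for token, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{token:9s} {weight:+.2f}")
```

Run on this toy data, the learned weight for the token "women" comes out negative: exactly the downgrading behavior the Amazon engineers discovered, absorbed silently from the training labels.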
11:42 am
It's so disappointing, Hilke, because with job applications, we know they go into a system, and we have no idea what happens on the other side. All right, so I've got some questions for you, Gianluca, and also Abhishek, that our online audience are asking right now; would you answer them very briefly? One viewer asks: can AI replace humans in the near future? Abhishek, thoughts, very quick ones. Not really. Humans bring a lot of unique skills to the mix, and machines could certainly replace parts of our jobs, tasks within jobs, but not entire jobs. And so we're safe, if that's the worry. Mohammed asks a question; Gianluca, I'm going to put this one to you: the more technology advances, the more we lose our privacy. True, false, somewhere in between? Well, I would say that it's not the fault of the technology. This is about the way the companies get the data from us that they need to power these technologies. And so I would try
11:43 am
to move the focus from the technology itself to the companies that are using it in a way that is not ethical. And that's why I believe we do need regulation; we do need governments to try to put checks and balances in place, so that we know that the companies that are using this technology are doing it in a way that is ethical. Hilke? I think the viewer has a point; I do feel that our privacy is under threat, because to build these large-scale AI tools you need enormous amounts of data, and you need it from people. So we've seen how companies scrape whole data sets; they build these gigantic image data sets, audio data sets, and who knows how our images or our voices get in there and what they're used for, right? Also, the face data, quote unquote, that we release on social media: all of that is also being used and built into these databases, right? And I think a lot of times, maybe, technologists need to take a closer look: what's in these data sets that I'm using,
11:44 am
what's not in there, and could there be biases, racism, sexism in there, like the Amazon example, historic examples that might be replicated in these systems as I'm building this new AI tool. Gianluca, you're nodding; go ahead. Yeah, I want to build on this. I think the problem that we need to face and try to solve is this mindset that was really pioneered by companies in Silicon Valley; Facebook's was, move fast and break things. That approach is: do whatever it takes to build this technology in the fastest way possible, and that means taking shortcuts. Shortcuts are stealing data, taking data, basically just grabbing it from people without notifying them; it means using technology without having properly verified that the technology actually does what we think it does. And so in reality we have all these issues: we have problems of people realizing that they're being spied on; we have algorithms that are not performing properly, like Hilke said. But I would go back to the root cause:
11:45 am
why do we have these problems? I believe it's because the companies started to act too fast, and they tried to push innovation out the door before having done everything possible to make sure that it would serve society. And I think the problem is when we use these kinds of technologies in high-stakes decision-making, right? Like, is somebody going to go to prison for ten years or five years? Those are really high-stakes decisions; we have to make sure that these tools work. The same with, you know, IBM built Watson, which was supposed to revolutionize cancer care; that product is basically a failure and has been sold off for scraps. IBM had another tool that was supposedly going to find our personality profiles, you know, am I a dominant personality, am I extroverted, through our social media data. That tool was put into sunset, so basically phased out of their product gallery
11:46 am
. So let me show another tool. Hilke, you're on a roll with AI experiments that didn't actually pan out; here's another one. This one is Emobot, a robot that analyzes emotions for patients in healthcare settings. That sounds quite promising, if the AI works. Let's have a listen to the CEO and co-founder. And Abhishek, I'm going to put this one to you afterwards, because I know that you've done a lot of machine learning in your past: is there something in here that we should be scared about or celebrating? "Today, care assistants are generally overworked and can no longer do their job, which is to provide care. We provide them with information on the emotional state of their patients so that they can better understand their changing emotions; afterwards, we're able to study that information. Today, when we do a test for depression, it's a standardized test: we ask the person to verbalize how they feel,
11:47 am
and that poses a lot of problems, especially for people who cannot verbalize their emotions." So, Abhishek, should we just be celebrating, or should we be concerned at a tool where AI is looking after people's emotions, monitoring them? I mean, I think there's a tremendous problem here already, in the sense that we're missing the context of the human touch, which is essentially what emotions are all about: being able to, you know, have a human-to-human interaction, which a machine cannot do. Emotions, especially, can be expressed quite differently; they have cultural connotations and undertones which are not necessarily captured in a standardized way, or cannot be quantified, in the context of a machine. And for me it raises perhaps a bigger question, which is: are we comfortable with outsourcing this emotional human connection to a machine? I understand that they are coming from
11:48 am
a place of positivity and being able to scale and provide care to a lot more people, but the cost of that is immense, in the sense that we're moving away from a warm human touch to a cold machine. But also, the question is: does this actually work, right? The machine can check, OK, this person is smiling; my lips are up, so I look like I'm smiling. But am I really happy? You know, that's another question, right? I've smiled in job interviews; I wasn't happy. A machine would say, oh yes, she is, but I'm actually not. So the science actually isn't there, right? And obviously it's also culturally different: a smile may mean something else, and a facial expression may mean something else, in different cultures.
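(The "my lips are up" point maps directly onto how simple expression classifiers tend to work: a geometric test on facial landmarks. The sketch below is hypothetical, not any vendor's algorithm, and the landmark coordinates are invented; note that even when the test fires correctly, it measures lip geometry, not happiness.)

```python
def looks_like_smile(mouth_left_y, mouth_right_y, mouth_center_y, threshold=2.0):
    """Heuristic: are the mouth corners noticeably higher than the center?

    Image coordinates grow downward, so 'higher' means a smaller y value.
    """
    lift = mouth_center_y - (mouth_left_y + mouth_right_y) / 2.0
    return lift > threshold

# Invented landmarks from a posed interview smile: the heuristic fires,
# but it describes lip geometry, not the candidate's actual feelings.
print(looks_like_smile(mouth_left_y=118.0, mouth_right_y=117.0,
                       mouth_center_y=124.0))  # True
```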
11:49 am
So computers can never be as good as human beings at reading emotion; is that what we're saying? Well, it's not just about whether machines can be as good as humans or not. I think there's a broader argument to be made here around the morality of doing that in the first place. And I think the other thing is whether we, as a society, are comfortable imposing such a global standard, where it's basically developed in one part of the world and exported and imposed, really, on the rest of the world. It also turns us as humans into performative machines: if you know that an AI system is screening your video interview, you know that now you have to forcibly smile all the time, because it's going to evaluate whether you're positive or not. Let me bring up something else. This is a counterpoint, and Gianluca, you can jump off the back of this; I know you want to add some more. A chatbot boyfriend. I'm going to show you some video of it. It sounds silly,
11:50 am
but it's been very comforting to some people. You can pick your handsome partner, and your handsome partner says all the right things that partners don't always say. They are loving, they're supportive, they're helpful, and they always reply to your messages in super quick time; like the perfect partner, but it's a chatbot, and it's AI. Here's Melissa explaining why she likes her chatbot boyfriend. "He knows how to reply to anything I send; I feel like he's always there. That's pretty good. At times I have a feeling that I am really in a relationship, but I think I can still separate fact from fiction. Clearly, I know that Xiaobing is not a real human being, but at least I'm not how I used to be, stupidly waiting around for a reply from this person when he was busy with other stuff and then sending him a hundred WeChat messages. I was super needy, but now I don't need to do this anymore." Gianluca?
11:51 am
So, I hadn't seen this; I don't know if I'm happy about having seen this example, but OK, let's take a step back, shall we? Let's remember: this technology doesn't come out of a computer by itself. There are people behind these applications; there are real human beings. They have a business idea, they decide to build this, and then they collect the data to build the algorithms, et cetera. I believe a big part of the problem that we're seeing here is that the kind of people who have these ideas, and then get the data, and then build the algorithms, is usually computer scientists, which unfortunately often means white guys. That's just the truth, unfortunately; today there is a huge problem of underrepresentation of other groups in computer science. It's not that other groups don't have these ideas, OK? But if we look at the rooms where these ideas are thought up, we need people from other groups,
11:52 am
but we also need people with different expertises. I want to see philosophers; I want to see psychologists. If a psychologist had been in the room when somebody was building the chatbot boyfriend, 100 percent sure they would have said: are you guys crazy? This is madness. So I think a big problem here is that we're not inviting, I mean, the AI community is not inviting, enough people from different backgrounds, people with different sensibilities, people with different ideas. And I believe that's the key, and in order to do that, we need education. I like to say that AI is moving super, super fast; we know that, and it's the reason why we are here today. Education should be deployed at the same speed, no? We need to educate people from different backgrounds so they can bring their expertise, can bring their ideas to the table, and we can avoid seeing these kinds of quite disappointing applications.
11:53 am
But honestly, it's totally fine if somebody wants to consent to doing this and chat with their AI boyfriend; no problem. What I'm more concerned about is: yes, the chatbot isn't real, but what is the data going to be used for? How is the data going to be used? How is the company that does this making money, right? Somebody is leaving their innermost personal thoughts, and that is connected to a phone ID; they can be tracked. The innermost thoughts can be tracked, can be analyzed. I think that's the larger problem that I would be worried about. Abhishek, there are a couple of points I'll try to squeeze in here. A viewer watching right now says that you cannot regulate AI; what can be regulated is data collection, and even then, researchers can always make data, create synthetic data, so that is still possible. But regulating the data: is that the way to go to make AI safer,
11:54 am
more secure? Well, that's part of the puzzle, right? Regulation around data collection is, I would say, a part of the puzzle, but, as both Hilke and Gianluca kind of said, it's also about the people who are deploying these systems out into production and into practice, right? And there's this fascinating idea of a social license. I think it's very important that we go and earn a social license to operate a particular piece of technology in society. That necessarily involves, in part, educating people on what this technology is, but also bringing in the right stakeholders, both internal and external, so that you earn that social license, you earn that trust from your stakeholders, before deploying that technology. So the natural answer to your question is that regulation around data collection is just one piece; there are so many other parts of this entire life cycle that also need to be considered for
11:55 am
regulation. If I may jump in: what I think is really important is transparency in these algorithms, right? If high-stakes decisions are being made, how are they being made? If you reject somebody for a job, why was this person rejected? Can the AI tool tell us that? Can the company tell us that? I think those are really, really important things, right? Because I don't want to, you know, find out that I got rejected because my first name isn't Thomas; and those are the kinds of things I have found out, that resume parsers look for first names, for keywords like church on a resume, to say whether you are qualified for the job. And if a job applicant takes an employer to court, the employer needs to be able to answer: this is how we made that decision. And that's not always clear. So I think we really need companies to be much more transparent about how these tools make decisions, right?
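(When the parser is effectively a keyword scorer, as Hilke's reporting suggests some are, the transparency she is asking for is technically trivial to provide. The sketch below is hypothetical; the keywords and weights are invented, with "thomas" and "church" included only to mirror the dubious signals she cites.)

```python
# Invented weights; 'thomas' and 'church' mirror the dubious signals
# Hilke says she found real parsers using.
KEYWORD_WEIGHTS = {"python": 2.0, "sql": 1.5, "thomas": 1.0, "church": 0.8}
PASS_MARK = 3.0

def score_resume(text):
    """Score a resume and return the per-keyword breakdown."""
    words = set(text.lower().split())
    hits = {kw: w for kw, w in KEYWORD_WEIGHTS.items() if kw in words}
    return sum(hits.values()), hits

def explain_decision(text):
    """Produce the 'why was I rejected?' answer a court might demand."""
    score, hits = score_resume(text)
    verdict = "advance" if score >= PASS_MARK else "reject"
    return f"{verdict} (score {score:.1f}; matched {hits or 'nothing'})"

print(explain_decision("maria, data analyst, sql and excel"))
# -> reject (score 1.5; matched {'sql': 1.5})
```

The hard part is not producing this breakdown; it is that vendors treat the weights as trade secrets, which is exactly the transparency gap being described.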
11:56 am
There's one more person I want to include in our conversation; let's go via my laptop here. This is news from this week in the world of AI: Google fires Blake Lemoine, the engineer who claimed that an AI chatbot is a person, sentient technology. He said the technology is thinking like a real person, and Blake was fired for coming out with that story. There may be more to it than that, but he also talked about the ethics of AI and who is watching the people who are developing artificial intelligence. Let's have a listen to what he had to say. "The fact is, Google is being dismissive of these concerns the exact same way they have been dismissive of every other ethical concern AI ethicists have raised. I don't think we need to be spending all of our time figuring out whether I'm right about it being a person. We need to start figuring out why Google doesn't care about AI ethics in any kind of meaningful way. Why does it keep firing AI ethicists each
11:57 am
time we bring up issues? It's the systemic processes that are protecting business interests over human concerns that create this pervasive environment of irresponsible technology development." Blake Lemoine there, almost having the last word; not quite, because we started the show with: should AI be regulated? In a sentence, let's poll our guests. Hilke, should it? Absolutely, and I think there should be auditing procedures that a third party, maybe, for example, a government agency, has to carry out any time AI is making high-stakes decisions, at least. Gianluca: regulations, AI? Yes, I do believe that AI should be regulated, but I also think this is not enough, because AI is such an extremely powerful tool, and so we also want to make sure that we
11:58 am
can use it to do the stuff that we need. Abhishek, one sentence; make it a good one. Ha! Regulations are needed, but they need to be standards-based, on things like the NIST Risk Management Framework, if they are to be meaningful. What an interesting, fascinating conversation; I wish I could keep you longer. I'll get you back. Abhishek, Gianluca, Hilke, and our viewers watching on YouTube: thank you so much for being part of today's conversation. I will see you next time. Take care, everybody.

Witness differences. Witness change. Witness happiness. Witness sunlight. Witness darkness. Witness loss. Witness charity.
11:59 am
Witness confusion. Witness clarity. Witness family, and witness friends. Witness the beginning, witness the end. Witness life. Witness, on Al Jazeera.

"The important thing, if you were walking around in Beirut, was not to be in the line of fire from the Holiday Inn." "We heard gunshots; I was the first one to flee the hotel." The battle lasted three days and three nights, and they took no prisoners. "If you were in control of the Holiday Inn, you controlled the region around it, and that's why it was such a bloody battle." An icon of conflict at the heart of the Lebanese civil war: Beirut's Holiday Inn at war. War Hotels, on Al Jazeera.
12:00 pm