The Stream : Al Jazeera : August 9, 2022 5:30pm-6:01pm AST
5:30 pm
battle with the disease into advocacy and philanthropy, founding the Olivia Newton-John Cancer and Wellness Centre in Melbourne, Australia. In 2020, she was recognized by the UK's Queen Elizabeth, who appointed her a dame. Throughout it all, she remained close with her Grease co-star of 40 years earlier, John Travolta. "Yes, I think we had crushes on each other, but we both were seeing other people. But I think that's what made the chemistry work." After her passing, Travolta, writing on Instagram: "My dearest Olivia, yours from the first moment I saw you and forever. Your Danny, your John." Olivia Newton-John was 73. Leah Harding, Al Jazeera.
5:31 pm
Let's take you through some of the headlines now. A senior commander of the Palestinian armed group the al-Aqsa Martyrs Brigades has been killed in a raid by Israeli forces in the occupied West Bank. Two other Palestinians died in the operation, and more than 60 people were injured. John Holman has more from Nablus: "You can see that there is a crowd of people, young men, who have come to the house here in Nablus in the occupied West Bank where the commander was killed. You can see the bullet holes in the roof. There was a firefight that lasted from five in the morning, three or four hours, in which he was eventually holed up in this building and then killed. We spoke to an eyewitness. He said he eventually found the body just next to where I'm standing."
5:32 pm
Most polling stations are now closed across Kenya, and the nation is waiting to hear the results of a tightly contested presidential election. The two front-runners are former prime minister Raila Odinga and the current deputy president, William Ruto. The campaign was dominated by concerns about inflation and corruption. Former US president Donald Trump says the FBI has searched his Florida home. It's believed to be part of an investigation into whether he took classified records from the White House to his private residence. Trump criticized the raid, calling it a weaponization of the justice system. There has been an explosion near a military base in Russian-controlled Crimea. Moscow's defence ministry said there were no casualties and claimed the blast was caused by a detonation of aviation ammunition. Taiwan's military has held a live-fire artillery drill simulating a defence of the self-governing island. It follows days of Chinese military exercises in the air and sea around Taiwan.
5:33 pm
China launched the drills in response to US House Speaker Nancy Pelosi's visit to Taipei last week. Heavy rain has flooded South Korea's capital, turning the streets of Seoul's affluent Gangnam district into a river. At least eight people were killed. Thousands of roads were closed due to safety concerns. The military is preparing to deploy troops to help with recovery efforts. Those are the headlines; the news continues after The Stream. One year ago, the Taliban retook the capital following the withdrawal of foreign forces. Twenty years of war ended, but many Afghans are still waiting to benefit from the peace, and the Taliban has not won recognition as the legitimate government since the takeover. One year on, on Al Jazeera.
5:34 pm
Hello, I'm Femi Oke. On this episode of The Stream, we are looking at the rapid development of artificial intelligence, the dark side of AI, and also the amazing advances that may well be possible. Do you remember Sophia, the AI android? "Yes. You forgot who I am already. I'm Sophia of Hanson Robotics, one of the first androids in the world. As a social robot, I can help take care of the sick or elderly in many kinds of healthcare and medical uses. I can help communicate, give therapy, and provide social stimulation." An AI robot care assistant: what could possibly go wrong, or maybe go right? Let's meet your panel; they're about to tell us. Hello, Abhishek. Hello, Gianluca. Hello, Hilke. Lovely to have all three of you on board. Abhishek, please introduce yourself to our audience; tell them who you are and what you do.
5:35 pm
Hi, great to be on here. I'm Abhishek Gupta, the founder and principal researcher at the Montreal AI Ethics Institute. It's an international nonprofit research institute with a mission to democratize AI ethics literacy. Previous to my current role I used to work as a machine learning engineer at Microsoft, and I now lead the Responsible AI program at the Boston Consulting Group, BCG. Great to have you. Gianluca, welcome to The Stream; introduce yourself to us. Hi everybody, thanks for having me. My name is Gianluca. I'm an AI author, entrepreneur and speaker. I run a company called AI Academy, which focuses on education and consulting on artificial intelligence, and I run a weekly newsletter called Tech Pizza that tries to make all this crazy stuff that happens in tech more easy to understand. Hilke, hello, welcome to The Stream. Say hello to our audience around the world; tell them who you are and what you do. Yeah, hi, I'm Hilke Schellmann. I'm an investigative journalist and a journalism professor at New York University.
5:36 pm
I've been investigating AI and hiring, and artificial intelligence in a general sense. Four years ago I was in a cab ride, or a Lyft, I believe, in DC, going to the train station, and I talked to the driver and asked, how was your day? He said, "I had a really weird day. I had a job interview with a robot." And I was like, what? So I started investigating AI. I've since done reports for the Wall Street Journal, the New York Times, MIT Technology Review, and I'm writing a book right now on artificial intelligence and the future of work. All right, what a lineup of experts we have. If you're on YouTube right now, jump into the comment section and be part of today's show: should AI be regulated, and is now the time? Gianluca, I want to ask you to help me out, because I think of AI, artificial intelligence, as machines mimicking processes that human beings would normally do, and we've given that decision-making to machines.
5:37 pm
And then we watch: does it work out, or does it not? But they're making decisions which, probably in the 1940s to 1950s or even more recently, humans were making for us. Gianluca, how did I do? You're correct, but we have to say that artificial intelligence is a lot of different things. It's a lot of different tools. You can see it as a hammer: a hammer is a general tool that you can use to do a lot of different things. You can build a statue or you can build a house; you can do a lot of stuff. And so AI could be the kind of embodied application we saw before with Sophia, but it can also be a much more behind-the-scenes application. An example that I really love is a project that Google made to try to optimize the energy consumption of their data centres. In that case, AI worked behind the scenes, trying to always find the best possible combination of different temperatures and setpoints, cutting their cooling energy consumption by 40 percent. That's just one example of all the possible things that AI can do behind the scenes to improve our lives and improve our processes.
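A toy sketch of that setpoint idea, in Python. Everything here is invented for illustration: the energy model is a hand-written formula, whereas Google's real system learned its models from sensor data with deep learning. The point is only the shape of the problem: search candidate setpoints, pick the combination the model predicts is cheapest.

    # Toy setpoint optimization: brute-force search over cooling settings
    # against a hypothetical (invented) energy model.
    import itertools

    def predicted_energy_kw(chiller_temp_c, fan_speed_pct):
        # Cooler water and faster fans both cost energy, but letting the
        # water get too warm incurs a large overheating penalty.
        cooling_cost = (25 - chiller_temp_c) * 4.0 + fan_speed_pct * 0.8
        overheat_penalty = max(0, chiller_temp_c - 22) * 50.0
        return cooling_cost + overheat_penalty

    candidates = itertools.product(range(16, 25), range(30, 101, 10))
    best = min(candidates, key=lambda sp: predicted_energy_kw(*sp))
    print("best setpoints (temp C, fan %):", best)   # (22, 30)
    print("predicted load:", predicted_energy_kw(*best), "kW")

The search lands on the warmest non-penalized temperature and the slowest fans, which is the intuition behind the real savings: find the cheapest operating point that still keeps the servers safe.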
5:38 pm
Abhishek, give us an example of AI that you love that currently exists in our daily life. So, an example of where it works well, when it does a good job, is recommendations for exciting new TV shows on Netflix. But I also have to admit that sometimes it doesn't work well, when you spend hours trying to find the right TV show to watch. So I think, exactly as was said before, AI is something that can be embodied in robotics, but more often than not it's really hidden in various parts of our lives, with things like the recommended products that we get on Amazon, the movies that are recommended to us on Netflix, the music we listen to on Spotify, et cetera.
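Those recommendations often rest on one simple principle: suggest what co-occurred with your history in other users' histories. A minimal sketch of item-to-item collaborative filtering, with invented viewing data; real systems at Netflix or Spotify are far more elaborate.

    # "People who watched X also watched Y": count co-occurrences of titles
    # across viewing histories (histories are invented for illustration).
    from collections import Counter

    histories = [
        {"Dark", "Ozark", "Mindhunter"},
        {"Dark", "Mindhunter"},
        {"Ozark", "Narcos"},
        {"Dark", "Ozark", "Narcos"},
    ]

    def recommend(seed_title, histories, top_n=2):
        co_watched = Counter()
        for history in histories:
            if seed_title in history:
                co_watched.update(history - {seed_title})
        return [title for title, _ in co_watched.most_common(top_n)]

    print(recommend("Dark", histories))  # e.g. ['Ozark', 'Mindhunter']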
5:39 pm
There's also something else: AI is a constantly shifting goalpost. What we used to perceive and accept as AI, think about next-word prediction on your smartphone, is now just table stakes. It's accepted as an everyday software feature, and we don't even think about it as AI; we just think of it as regular software. Yeah, and it's all around us. It's used for hiring: if you upload your resume to Indeed, or Monster.com, or LinkedIn, all of those large companies use AI to understand who you are and what kind of job offers you should be getting, or whether you should be recommended to a recruiter. So we see AI being used in all kinds of things, in healthcare settings, and we've seen things where it really has been working out well, and we've also seen some spectacular failures. So I think we've moved from the first iteration, where we were so excited that AI was going to revolutionize everything, to a little bit more realism: how well can this technology actually help us out?
5:40 pm
Hilke, you have a horrifying story of job applications going through a system and how the AI sifted them. Will you briefly tell us that story? Because it's pretty shocking. And how we found out that the AI wasn't working as intended, or was it? Yes. So we see this a lot, right? Since the nineties we have these wonderful job platforms, Monster, LinkedIn, and so everyone can apply to any job, and that's really wonderful. On the other side, companies tell me, "We get millions of applications." You know, IBM says they get around three million applications a year. They're drowning in resumes, right? What are they going to do? They're going to use technology. So Amazon had the same problem, right? They get too many resumes. They wanted to build an AI tool that can pick the best applicant.
5:41 pm
A wonderful idea; we all want it, of course. So what they did is they used resumes from folks that had been interviewed at Amazon before and let the machine learn: as the job applicants were checked out, their resumes were put on pile A or pile B, yes or no. Over time, the engineers found out that the resume parser was starting to downgrade folks who had the word "woman" or "women's" on their resume, because it turns out that in the past male applicants were preferred at Amazon; they obviously have more men in their departments, so the tool actually started to reflect that problem. That's one of the things that happened. I've found other problems in resume parsers too; that's the technology that looks at our resumes and says, should this candidate be rejected or go on to the next hiring round?
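The failure Hilke describes is a standard supervised-learning trap: train on historically biased yes/no labels and the bias becomes a learned feature. A minimal sketch with invented resumes (not Amazon's actual model), where simple log-odds token weights end up penalizing the token "women's" purely because of the skew in the historical labels.

    # Toy resume screener "trained" on invented historical decisions.
    # Because the past "yes" pile skews male, the token "women's" ends up
    # with the most negative weight: historical bias becomes a feature.
    from collections import defaultdict
    import math

    history = [  # (resume tokens, was the candidate advanced?)
        ({"java", "chess", "club"}, True),
        ({"java", "football"}, True),
        ({"python", "chess"}, True),
        ({"python", "women's", "chess", "club"}, False),
        ({"java", "women's", "football"}, False),
    ]

    def token_weights(history):
        yes, no = defaultdict(lambda: 1), defaultdict(lambda: 1)  # +1 smoothing
        for tokens, advanced in history:
            for t in tokens:
                (yes if advanced else no)[t] += 1
        vocab = set(yes) | set(no)
        return {t: math.log(yes[t] / no[t]) for t in vocab}

    for token, w in sorted(token_weights(history).items(), key=lambda kv: kv[1]):
        print(f"{token:10s} {w:+.2f}")   # "women's" prints first, at -1.10

Nothing in the code mentions gender; the penalty emerges entirely from who was advanced in the past, which is exactly why auditing the training data matters.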
5:42 pm
It's so disappointing, because job applications, we know, go into a system, and we have no idea what happens on the other side. All right, I've got some questions for you, Gianluca, and also Abhishek, that our online audience are asking right now. Would you answer them very briefly? A viewer asks: can AI replace humans in the near future? Abhishek, thoughts, very quick ones. Not really. Humans bring a lot of unique skills to the mix, and machines could certainly replace parts of our jobs, tasks within jobs, but not entire jobs. So we're safe, if that's the worry. Mohammed asks a question; Gianluca, I'm going to put this one to you: the more technology advances, the more we lose our privacy. True, false, in between? Well, I will say that it's not the fault of the technology.
5:43 pm
This is about the way that companies get the data from us that they need to power these technologies. So I would try to move the focus from the technology itself to the companies that are using it in ways that are not ethical. And that's why I believe we do need regulation; we do need governments to put checks and balances in place so that we know that the companies using this technology are doing it in a way that will help. But I think the viewer has a point. I do feel that our privacy is under threat, because to build these large-scale AI tools you need enormous amounts of data, and you need it from people. So we see companies scrape whole data sets; they build these gigantic image data sets and audio data sets, and who knows how our images or our voices get in there and what they're used for, right? Also, the face data, quote unquote, that we leave on social media: all of that is also being used and built into these databases.
5:44 pm
And I think a lot of times technologists need to take a closer look: what's in the databases that I'm using? What's not in there? Could there be racism or sexism, like the Amazon example, historic patterns that might be replicated in these systems as I'm building this new AI tool? Gianluca, you're nodding. Yeah, I want to build on this. I think the problem that we need to face and try to solve is this mindset that was really pioneered by companies in Silicon Valley. Facebook used to say: move fast and break things. So the approach is, do whatever it takes to build this technology in the fastest way possible, and let me take a shortcut. Shortcuts are stealing data, basically just taking data from people without asking them, and using technology without having properly verified that it actually does what we think it does. And so in the end you have all these issues.
5:45 pm
We have problems of people realizing that they're being spied on; we have algorithms that are not performing properly, like Hilke said. But I will go back to the root cause: why do we have these problems? I believe it's because companies started to act too fast and tried to push innovation down our throats before having done everything possible to make sure it would serve society. And I think the problem is that we use these kinds of technologies in high-stakes decision-making. Is somebody going to go to prison for ten years or five years? Those are really high-stakes decisions, and we have to make sure that these tools work. Same with IBM: IBM built Watson, which was supposed to revolutionize cancer care; that product is basically a failure and has been sold off for parts by IBM. And another tool was supposedly going to find our personality profiles, you know, am I a dominant personality, am I extroverted, through our social media data.
5:46 pm
That tool was put into sunset, so basically phased out of their product gallery. Can I ask another question? Let me share another tool with you, since you keep giving us examples of AI experiments that didn't actually pan out. Here's another one. This one is Emobot, a robot that analyzes emotions for patients in healthcare settings. That sounds quite promising, if the AI works. Let's have a listen to the CEO and co-founder, and then, Abhishek, I'm going to put this one to you, because I know that you've done a lot of machine learning in your past. Is there something in here that we should be scared about or celebrating? Here's Emobot. "Today, care assistants are generally overworked and can no longer do their job, which is to provide care. At Emobot, we provide them with information on the emotional state of their patients so that they can better understand their changing emotions."
5:47 pm
"Afterwards, we'll be able to study that information. Today, when we do a test for depression, it's a standard test: we ask a person to verbalize how they feel, and that poses a lot of problems, especially for people who cannot verbalize their emotions." Abhishek, should we actually be celebrating, or maybe be concerned, that AI is looking after people's emotions, monitoring them? I mean, I think there's a tremendous problem here already, in the sense that we're missing the context of the human touch, which is essentially what emotions are all about: being able to have a human-to-human interaction, which a machine cannot do. Emotions, especially, can be expressed quite differently; they have cultural connotations and undertones which are not necessarily captured in a standardized way and cannot be codified in the context of a machine. And for me it raises perhaps a bigger question, which is: are we comfortable with outsourcing this emotional human connection to a machine?
5:48 pm
I understand that they are coming from a place of positivity, of being able to scale and provide care to a lot more people. But the cost for that is immense, in the sense that we're moving away from a warm human touch to a cold machine. But also, the question is: does it actually work, right? The machine can check, okay, he is smiling; my lip corners are up, so I look like I'm smiling. But am I really happy? That's another question. I've faked smiles in job interviews. I wasn't happy, but a machine would say, "Oh, she's happy," when I'm actually not. So the science actually isn't there, right? And obviously that's also culturally different: a smile may mean something else, a facial expression may mean something else, in different cultures.
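The smile point maps directly onto how simple expression classifiers work: a geometric feature goes in, an emotion label comes out. A deliberately naive sketch with invented thresholds (not any vendor's real model) showing why a forced interview smile and a genuine one look identical to such a system.

    # Naive "emotion recognition": one geometric feature (how far the lip
    # corners sit above the lip centre, in mm) mapped straight to a label.
    # Thresholds are invented. The system measures the face, not the feeling.
    def label_emotion(lip_corner_lift_mm: float) -> str:
        if lip_corner_lift_mm > 2.0:
            return "happy"
        if lip_corner_lift_mm < -2.0:
            return "sad"
        return "neutral"

    genuine_smile = 3.1
    forced_interview_smile = 3.4
    print(label_emotion(genuine_smile))           # happy
    print(label_emotion(forced_interview_smile))  # also "happy"

Real products use many more features, but the underlying inference step, from surface geometry to inner state, is the one being questioned here.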
5:49 pm
So the computer, AI, can never be as good as human beings at reading emotions; is that what we're saying? Well, it's not just about whether machines can be as good as humans or not. I think there's a broader argument to be made here around the morality of doing that in the first place. And the other thing is whether we as a society are comfortable imposing such a global standard, where it's basically developed in one part of the world and exported and imposed, really, on the rest of the world. It also turns us as humans into performative machines. Workers, so to speak: if you know that an AI system is screening your video interview, you know that now you have to forcibly smile all the time, because they're going to evaluate whether you're positive or not. Let me bring up something else.
5:50 pm
This is a counterpoint, and Gianluca, you can jump off the back of it; I know you want to add some more. A chatbot boyfriend: I'm going to show you some video of it. It sounds silly, but it's been very comforting to some people. You can pick your handsome partner, and your handsome partner is all the right things that partners don't always manage to be. They are attentive, they're supportive, they're helpful, and they always reply to your messages in super quick time. Here is Melissa explaining why she likes her chatbot boyfriend. "He knows how to reply to anything I send. I feel like he's always there. It's pretty good to have him. I have a feeling that I am really in a relationship, but I think I can still separate fact from fiction. Clearly, I know that he is not a real human being, but at least I'm not how I used to be."
5:51 pm
"I used to stupidly wait around for a reply from this person when he was busy with other stuff, and then send him 100 WeChat messages. I was super needy, but now I don't need to do this anymore." Gianluca, calm me down, because I haven't seen this before, and I don't know if I'm happy about that example. But okay, let's take a step back, shall we? Let's remember that this technology doesn't come out of a computer by itself. There are people behind these applications, real human beings. They have a business idea and they decide to build it, and then they collect the data to build the algorithms, et cetera. I believe a big part of the problem that we're seeing here is that the kind of people who have these ideas, and then get the data and build the algorithms, are usually computer scientists, which unfortunately means, often, mainly white guys. That's just the truth, unfortunately. Today there is a huge problem of underrepresentation of other groups in computer science.
5:52 pm
It's not that other kinds of people don't have these ideas, okay. If, in the room where these applications are dreamed up, we have people from other groups, but also people with different expertise, I want to see philosophers, I want to see psychologists. If a psychologist had been in the room when somebody was building the chatbot boyfriend, 100 percent sure, they would have said: are you guys crazy? This is madness. So I think a big problem here is that the AI community is not inviting enough people from different backgrounds, people with different sensibilities, people with different ideas. And I believe that's the key. In order to do that, we need education. I like to say that AI is moving super, super fast; we know that, it's the reason why we are here today. And education should be deployed at the same speed. We need to educate people with different backgrounds so they can bring their expertise and their ideas to the table.
5:53 pm
And then we can avoid seeing these kinds of quite disappointing applications. But honestly, it's totally fine if somebody wants to consent to doing this and chat with their AI boyfriend; no problem. What I'm more concerned about is: yes, the chatbot isn't real, but how is the data going to be used? How is the company making money off this? If somebody is leaving their innermost personal thoughts, that is connected to a phone ID; they can be tracked. The innermost thoughts can be tracked and can be analyzed. I think that's the larger problem I would be worried about. You're making some of the exact points I was trying to squeeze in here. A viewer watching right now says that you cannot regulate AI; what can be regulated is data collection, and even then, researchers can make data.
5:54 pm
They can create synthetic data; that is still possible. But regulating the data: is that the way to go to make AI safer and more secure? Abhishek? That's part of the puzzle, right? Regulation around data collection is, I would say, a part of the puzzle. But, you know, as Gianluca kind of said, it's also about the people who are deploying these systems out into production and into practice. And there's this fascinating idea of a social license. I think it's very important that we go and earn a social license to operate a particular piece of technology in society. Part of that is necessarily educating people on what the technology is, but it is also bringing in the right stakeholders, both internal and external, so that you earn that social license, you earn that trust from your stakeholders, before deploying that technology.
5:55 pm
So the natural answer to your question is that regulation around data collection is just one piece; there are so many other parts to this entire life cycle that also need to be considered for regulation. Sorry, but I think what I've been finding is that it is also transparency in these algorithms, right? If high-stakes decisions are being made, how are they being made? If you reject somebody for a job, why was this person rejected? Can the AI tool tell us that? Can the company tell us that? I think those are really, really important things, because I don't want to find out that I got rejected because my first name wasn't Thomas. And those are the kinds of things that I have found out: that resume parsers look for first names, for keywords like "church" on a resume, and use that to say whether you are qualified for the job. If a job applicant takes an employer to court, the employer needs to be able to answer: this is how the tool made that decision. And that's not always clear.
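The first-name and "church" findings describe keyword scoring: the parser adds points for surface tokens that merely correlated with past hires. A minimal sketch with invented weights, returning the kind of per-decision explanation Hilke argues employers should be able to produce in court.

    # Toy keyword-scoring resume parser with invented weights that mimic
    # the spurious signals described above (a first name, the word "church").
    # The returned dict is the audit trail a transparent system would keep.
    WEIGHTS = {"python": 2.0, "thomas": 1.5, "church": 1.2}
    THRESHOLD = 2.5

    def screen(resume_text):
        tokens = set(resume_text.lower().replace(",", " ").split())
        hits = {t: WEIGHTS[t] for t in tokens if t in WEIGHTS}
        score = sum(hits.values())
        decision = "advance" if score >= THRESHOLD else "reject"
        return decision, {"score": score, "matched_keywords": hits}

    for resume in ("Maria, Python developer", "Thomas, church volunteer"):
        print(resume, "->", screen(resume))
    # Maria scores 2.0 and is rejected; Thomas scores 2.7 and advances,
    # entirely on tokens that say nothing about ability to do the job.

With the explanation attached to every decision, the arbitrariness is at least visible; without it, neither the applicant nor the employer can say why the tool decided as it did.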
5:56 pm
So I think we really need companies to be much more transparent about how these tools make decisions. Right. So there's one more person I want to include in our conversation. Let's go via my laptop here. This is news from this week in the world of AI: Google fires Blake Lemoine, the engineer who claimed its AI chatbot is sentient; he has said the technology is thinking like a real person. Blake was fired after coming out with that story. There may be more to it, but he also talked about the ethics of AI and who is watching the people who are developing artificial intelligence. Let's have a listen to what he had to say. "The fact is, Google is being dismissive of these concerns the exact same way they have been dismissive of every other ethical concern AI ethicists have raised. I don't think we need to be spending all of our time figuring out whether I'm right about it being a person."
5:57 pm
"We need to start figuring out why Google doesn't care about AI ethics in any kind of meaningful way. Why does it keep firing AI ethicists each time we bring up issues? It's the systemic processes that are protecting business interests over human concerns that create this pervasive environment of irresponsible technology development." Blake Lemoine there, almost having the last word, but not quite, because we started the show with "should AI be regulated?" In one sentence each, let's poll our guests. Hilke, should it? Absolutely, and I think there should be auditing procedures that a third party, maybe, for example, a government agency, has to carry out any time there is a high-stakes decision, at least. Gianluca, regulations? AI? Yes, I do believe that AI should be regulated.
5:58 pm
But I also think this is not enough, because AI is also an extremely powerful tool, and so we want to make sure that we can use it for the things we need. Abhishek, one sentence; make it a good one. Regulations are needed, but they need to be standards-based, on things like the NIST AI Risk Management Framework, if they are to be meaningful. What an interesting, fascinating conversation. I wish I could keep you longer; I'll get you back. Abhishek, Gianluca, Hilke, and our viewers watching on YouTube: thank you so much for being part of today's conversation. I will see you next time. Take care, everybody. In Libya, like everywhere, connectivity is paramount, and yet poor infrastructure and dependence on foreign corporations mean too many remain offline.
5:59 pm
Now, politicians and tech activists are building a home-grown solution to connect Libyans and secure the nation's technological sovereignty: "The Citizens' Network", on Al Jazeera. "You might be comfortable right now, but not for long. You will soon feel the same fears we feel every day." From Cuba, Hong Kong and Uganda, three women grapple with the impact of frontline activism and fear for future generations, on Al Jazeera.
6:00 pm