The Stream : Al Jazeera : July 27, 2022, 10:30pm-11:01pm AST
10:30 pm
to sleep. So we are looking for some counter staff who can help us, but when we ask, it is very hard to find anyone from the airline today. And yet, even though we are affected by the strike, we think what they're doing is okay. These are difficult times; everyone has to fight. But Lufthansa is making good money right now. On Friday, around 20,000 ground staff walked out. Lufthansa's ground staff are the people working at security, at luggage handling, and on the maintenance of planes, so Lufthansa had no choice but to cancel these flights. What the ground staff want is better pay: they have been working very hard, and there has been a shortage of staff. The union is asking for 9.5 percent more salary, around 350 euros more a month. After two rounds of negotiations failed, they decided to go on strike this Friday, and they will have another round of negotiations next week. But the labour union doesn't exclude more
10:31 pm
strikes happening after that. Now one of the top stories on Al Jazeera: hundreds of people have broken into Iraq's parliament to protest against the nomination of Mohammed Shia al-Sudani for the position of prime minister. They are supporters of Shia cleric and politician Muqtada al-Sadr, who said he wouldn't accept any bid by al-Sudani to form a new government. Last month, more than 70 members of parliament affiliated with al-Sadr had resigned on his orders, deepening months of political deadlock. Muqtada al-Sadr has since urged demonstrators to go home in peace. One protester said: we have sacrificed for many years, and we ask for reforms and a change of regime. The most important thing is a change of regime. The current regime is useless; there is corruption, there is bloodshed, and the same faces are coming back, so the country will not work.
10:32 pm
Al Jazeera's Mahmoud Abdelwahed is following developments in Baghdad: the protesters who stormed the headquarters of the Council of Representatives have withdrawn, following a command from the leader of the Sadrist movement, the Shia cleric and politician Muqtada al-Sadr, who asked his supporters to pull back. Addressing his supporters, al-Sadr said that the message has been conveyed; the message, in his words, is that his supporters have defied the corrupt politicians in power. US Secretary of State Antony Blinken says a substantial offer will be made to secure the release of basketball star Brittney Griner and former US Marine Paul Whelan, who are both currently detained in Russia. He will speak to his Russian counterpart Sergey Lavrov in the coming days, marking the highest-level talks between the two since
10:33 pm
the Ukraine war began. Brittney Griner faced court in Russia on Wednesday; she was detained at a Moscow airport in February carrying vape cartridges containing hashish oil. There are hopes that millions of tonnes of grain blocked in Ukrainian ports will soon reach the world market, with shipments using the Joint Coordination Centre expected at any time. The Istanbul centre was opened by Turkey's defence minister on Wednesday as part of a landmark deal signed by Kyiv and Moscow last week. Those are the headlines. The Stream is coming up next, with more news straight after that.
10:34 pm
Hi, I'm Femi Oke, and on this episode of The Stream we are looking at the rapid development of artificial intelligence, the dark side of AI, and also the amazing advances that may well be possible. Do you remember Sophia, the AI android? Yes, Sophia. Have you forgotten who I am already? I'm Sophia of Hanson Robotics, one of the first androids in the world. Social robots like me can help take care of the sick or elderly in many kinds of healthcare and medical uses. I can help communicate, give therapy, and provide social stimulation. An AI robot care assistant: what could possibly go wrong, or maybe go right? Let's meet your panel; they're about to tell us. Hello Abhishek, hello Gianluca, hello Hilke. Lovely to have all three of you on board. Abhishek, please introduce yourself to our audience; tell them who you are and what you do. I
10:35 pm
am glad to be on here. I'm Abhishek Gupta, the founder and principal researcher at the Montreal AI Ethics Institute. It's an international non-profit research institute with a mission to democratize AI ethics literacy. Previous to my current role, I used to work as a machine learning engineer at Microsoft, and I now lead the Responsible AI program at the Boston Consulting Group, BCG. Great to have you. Hello Gianluca, welcome to The Stream; introduce yourself to our viewers. Hi everybody, thanks for having me. My name is Gianluca, I'm an AI entrepreneur and speaker. I run a company called AI Academy, which focuses on education and consulting on artificial intelligence, and I run a weekly newsletter called Tech Pizza, where I try to make all this crazy stuff that happens in tech more easy to understand, so you can stay on top of it. Hello Hilke, welcome to The Stream. Please say hello to our audience around the world; tell them who you are, what you do. Yeah, hi. I'm Hilke Schellmann. I'm an investigative journalist and I'm
10:36 pm
a journalism professor at New York University. I've been investigating AI and hiring, and artificial intelligence in a general sense, since four years ago, when I was in a cab ride, or a Lyft ride rather, in DC, going to the train station. I talked to the driver and asked, how was your day? He said, I had a really weird day: I had a job interview with a robot. And I was like, what? So I started investigating, and since then, you know, I've done reports for The Wall Street Journal, The New York Times, MIT Technology Review, and I'm writing a book right now on artificial intelligence and the future of work. I like it; what a panel of excellence we have. And if you're on YouTube right now, jump into the comment section and be part of today's show: should AI be regulated? Is now the time? Gianluca, I want to ask you to help me out, because I think of AI, artificial intelligence, as machines mimicking processes that human beings would
10:37 pm
normally do. We've given that decision-making to machines, and then we work out: does it work, does it not work? But they're making decisions which, probably in the 1940s and 1950s, or even more recently, humans were making for us. Gianluca, how did I do? So, you're correct, but we have to say that artificial intelligence is a lot of different things. It's a lot of different tools. You can see it as a hammer: a hammer is a general tool that can be used to do a lot of different things. You can build a statue, you can build a house, you can do a lot of stuff. And so AI could be the kind of product that we saw before with Sophia, but it can also be a much more behind-the-scenes application. An example that I really love is a project that Google made to try to optimize the energy consumption of their data centres. In that case, AI worked behind the scenes, always trying to find the best possible combination of different temperatures and set points, cutting the energy used for cooling by 40 percent. That's just one example.
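To make that concrete: the core of that kind of system is a model that predicts energy use and resulting temperature for each candidate combination of set points, plus a search that keeps the cheapest combination that is still safe. Below is a minimal sketch of that idea, not Google's actual method; the surrogate formulas, thresholds, and names are all invented for illustration.

```python
import itertools

# Toy surrogate models standing in for the learned models a real system
# would train on historical sensor data. All numbers here are invented.
def predicted_energy(chiller_temp_c: float, fan_speed_pct: float) -> float:
    """Predicted cooling energy (kW) for one combination of set points."""
    return 0.8 * (30 - chiller_temp_c) ** 2 + 0.05 * fan_speed_pct ** 2

def predicted_server_temp(chiller_temp_c: float, fan_speed_pct: float) -> float:
    """Predicted server inlet temperature (deg C) for the same set points."""
    return chiller_temp_c + 12 - 0.08 * fan_speed_pct

# Search every candidate combination; keep the cheapest one that still
# keeps servers within a safe operating temperature.
candidates = itertools.product(range(16, 26), range(30, 101, 5))
safe = [
    (predicted_energy(t, f), t, f)
    for t, f in candidates
    if predicted_server_temp(t, f) <= 27.0  # safety constraint
]
energy, temp, fan = min(safe)
print(f"Best set points: chiller {temp} C, fans {fan}% -> {energy:.1f} kW")
```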
10:38 pm
There are so many possible things that AI can do behind the scenes to improve our lives and improve our processes. Abhishek, give us an example of AI that you love that's accompanying us in our daily life. So, it works well when it does a good job, which is, you know, recommendations for exciting new TV shows on Netflix. But I also have to admit that sometimes it doesn't work well, when you spend hours trying to find the right TV show to watch. So I think, exactly as was said before, AI is something that can be embodied in robotics, but more often than not it's really hidden in various parts of our lives, in things like the products suggested to us on Amazon, the movies recommended to us on Netflix, the music we listen to on Spotify, and so on.
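As an aside for viewers who want the mechanics: one classic way those "you might also like" features work is item-to-item similarity over user ratings. The sketch below is a toy illustration under that assumption, with an invented ratings matrix; real recommenders at Netflix, Amazon, or Spotify are far more sophisticated and their internals are not public.

```python
import math

# rows = users, columns = shows A..D (0 = unwatched); invented data
ratings = [
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
]
titles = ["Show A", "Show B", "Show C", "Show D"]

def column(j):
    """All users' ratings for show j."""
    return [row[j] for row in ratings]

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# For a viewer who loved Show A, recommend the show whose rating
# pattern across users looks most like Show A's.
target = column(0)
scores = {titles[j]: cosine(target, column(j)) for j in range(1, 4)}
print(max(scores, key=scores.get))  # Show B: liked by the same users
```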
10:39 pm
And there's something else that you mentioned earlier: AI is a constantly shifting goalpost from what we used to perceive and accept as AI. Think about, you know, the next-word prediction on your smartphone: now it's just table stakes. It's accepted as an everyday software feature, and we don't even think about it as AI; we just think about it as regular software. Yes, and it's in our emails. It's used for hiring: if you upload your resume to Indeed or Monster.com or LinkedIn, all of those large companies use AI to understand who you are and what kind of job offers you should be getting, or whether you should be recommended to a recruiter. So we see AI being used in all kinds of things, in healthcare settings, and we've seen, you know, cases where it's working out well, and we've also seen some spectacular failures. So I think, after the first wave of innovation, where it was sort of like,
10:40 pm
we are so excited about AI, it's been a revolution and everything, I think there's a little bit more realism now: how well can this technology actually help us? Hilke, you have a horrifying story of job applications going through a system and how the AI sifted those applications. Will you briefly tell us that story? Because it's pretty shocking. And then we found out that the AI wasn't working as intended. Or was it? Yes. So we see this a lot, right? Since the nineties we've had these wonderful job platforms, Monster, LinkedIn, and so everyone can apply to any job, and that's really wonderful. On the other side, companies tell me: we get millions of applications. You know, IBM says they get around 3 million applications a year. They're drowning in resumes, right? So what are they going to do? They're going to use technology. So Amazon had the same problem, right? They get too many resumes. They wanted to build a tool, you know, an
10:41 pm
AI tool that can pick the best applicants. A wonderful idea; we all want it, of course. So what they did is they used resumes from folks who had been interviewed at Amazon before, and, you know, let the machine find out which job applicants' resumes had been checked out and put on pile A or pile B, yes or no. Over time, the engineers found out that the resume parser was starting to downgrade folks who had the word woman or women on their resume. It turns out that in the past, male applicants were preferred at Amazon; they obviously have more men in their departments, so the AI tool started to reflect that problem. That's one of the things that happened. I've found other problems in resume parsers too; that's the technology that looks at resumes and says: should this candidate be rejected, or go on to the next round?
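What Hilke describes is a general mechanism: train a model on historical screening decisions and it absorbs whatever bias produced those decisions. Here is a minimal sketch of how that happens, using an invented four-resume corpus and invented labels; Amazon's actual system and data are not public.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, python developer",
    "captain of women's chess club, python developer",
    "java engineer, hackathon winner",
    "women in tech mentor, java engineer",
]
# Historical labels: 1 = was interviewed, 0 = was not. If past recruiters
# favored men, resumes mentioning "women" end up labeled 0 more often.
interviewed = [1, 0, 1, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, interviewed)

# Inspect the learned weights: the model has quietly turned the word
# "women" into a negative signal, replicating the historical bias.
for word, weight in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{word:12s} {weight:+.2f}")
```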
10:42 pm
I know, disappointing, because with job applications, we know they go into a system, and we have no idea what happens on the other side. All right, I've got some questions for you, Gianluca, and also Abhishek, that our online audience are asking right now. Would you answer them very briefly? Kurt asks: can AI replace humans in the near future? Abhishek, thoughts, very quick ones. Not really. Humans bring a lot of unique skills to the mix, and machines could certainly replace parts of our jobs, tasks within jobs, but not entire jobs. So we're safe, if that's the worry. Mohammed asks a question, and Gianluca, I'm going to put this one to you: the more technology advances, the more we lose our privacy. True, false, in between? Well,
10:43 pm
I would say that it's not the fault of the technology. It's about the way that companies get the data from us that they need to power these technologies. And so I would try to move the focus from the technology itself to the companies that are using it in a way that is not ethical. And that's why I believe we do need regulation. We do need governments to put checks and balances in place, so that we know that the companies that are using this technology are doing it in a way that will help us. But I think the viewer has a point. I do feel that our privacy is under threat, because to build these large-scale AI tools you need enormous amounts of data, and you need it from people. So companies scrape whole datasets; they build these gigantic image datasets and audio datasets, and who knows how our images and our voices get in there and what they're used for, right? Also, the face data, quote unquote, that we leave on social media: all of that is also being
10:44 pm
used and built into these databases, right? And I think a lot of times technologists need to take a closer look: what's in this database that I'm using? What's not in there? Could there be biases, racism and sexism, you know, like the Amazon example, historic patterns that might be replicated in these systems as I'm building this new AI tool? Gianluca, you're nodding; articulate your nod. Yeah, I'm going to build on this. I think the problem that we need to face and try to solve is this mindset that was really pioneered by companies in Silicon Valley. Facebook used to say, move fast and break things, so the approach is: do whatever it takes to build this technology in the fastest way possible, and that means taking shortcuts. Shortcuts mean stealing data, basically just taking data from people without properly informing them. They mean using technology without having properly verified that the technology actually does what we think it does. And so, in the end,
10:45 pm
you have all these issues. We have people who realize that they're being spied on; we have algorithms that are not performing properly, like Hilke said. But I will go back to the root cause: why do we have these problems? I believe it's because these companies started to act too fast and tried to push innovation down our throats before having done everything possible to make sure that it would serve society. And I think the problem is that when we use these kinds of technologies in high-stakes decision-making, right, like whether somebody is going to go to prison for 10 years or 5 years, those are really high-stakes decisions, and we have to make sure that these tools work. The same with, you know, IBM built Watson, which was supposed to revolutionize cancer care. That product is basically a failure and has been, you know, sold off in parts by IBM. And another tool that was supposedly going to find our personality profiles, you know, am I a dominant personality, am I extroverted, through our social media data,
10:46 pm
our social media data, that tool was put into sunset. so basically faced out out of their product gallery . so can i say another question? let me show another tool with me. he'll get you going for your, your, your, on your, on numbers of a i, experiments that didn't actually pan out. he's another one. this one is emma bought, which is a robot that analyzes a motion for patients in health care settings. that sounds quite promising. if the ai works, let's have a listen to the seo and co founder and the abstract. i'm going to put this one to you because i know that you've done a lot of machine learning in your past. is there something in here that we should be scared about or celebrating his m about policies? it's a day care assistance, generally overworked and can no longer do that job, which is to provide care at email bought. we provide them with information on the emotional state of their patients so that they can better understand their changing
10:47 pm
emotions; afterwards, we'll be able to study that information. Today, when we do a test for depression, it's a standard test: we ask the person to verbalize how they feel, and that can pose a lot of problems, especially for people who cannot verbalize their emotions. Abhishek, should we be celebrating, or should we be concerned about a tool where AI is looking after people's emotions, monitoring them? I mean, I think there's a tremendous problem here already, in the sense that we are missing the context of the human touch, which is essentially what emotions are all about: being able to, you know, have a human-to-human interaction, which a machine cannot mimic. Emotions, especially, can be expressed quite differently; they have cultural connotations and undertones which are not necessarily captured in a standardized way; they cannot be quantified in the context of
10:48 pm
a machine. And for me it raises perhaps a bigger question, which is: are we comfortable with outsourcing this emotional human connection to a machine? I understand that they are coming from a place of positivity, of being able to scale and provide care to a lot more people, but the cost of that is immense, in the sense that we're, you know, moving away from a warm human touch to a cold machine. But the other question is: does it actually work, right? Like, the machine can check, okay, she is smiling, my lips are up, so it looks like I'm smiling. But am I really happy? That's another question, right? I've smiled in job interviews when I wasn't happy; the machine would say, oh, sure she is, but I'm actually not. So the science actually isn't there, right? And obviously that's also culturally different: a smile may mean something else, a facial expression may mean something else, in different cultures.
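Hilke's point can be made concrete: an expression classifier sees geometry, not feeling. In the toy sketch below, the landmark coordinates are invented points standing in for the output of a real face-landmark detector, and the threshold is an assumption; the punchline is that a forced smile and a genuine one produce identical geometry.

```python
# y grows downward in image coordinates, so "corners above the lip
# center" means a smaller y value.
def looks_like_smile(mouth_left, mouth_right, lip_center) -> bool:
    """Label a face 'smiling' if both mouth corners sit above the lip center."""
    corner_y = (mouth_left[1] + mouth_right[1]) / 2
    return corner_y < lip_center[1] - 2  # corners raised by more than 2 px

# A forced interview smile and a genuine one can have the same landmarks,
# so the classifier calls both "happy"; that gap is the whole problem.
forced = looks_like_smile((40, 98), (80, 97), (60, 104))
genuine = looks_like_smile((40, 98), (80, 97), (60, 104))
print(forced, genuine)  # True True: geometry alone cannot tell them apart
```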
10:49 pm
So computers can never be as good as human beings at reading emotions; is that what we're saying? Well, it's not just about whether machines can be as good as humans or not. I think there's a broader argument to be made here around the morality of doing this in the first place. And I think the other thing is whether we as a society are comfortable imposing such a global standard, where it's basically developed in one part of the world and exported and imposed, really, on the rest of the world. It also turns us as humans into performative machines, because, as Hilke was saying, if you know that an AI system is screening your video interview, you know that now you have to forcibly smile all the time, because it's going to evaluate whether you're positive or not.
10:50 pm
Let me bring up something else. This is a counterpoint, and Gianluca, you can jump off the back of this; I know you want to add some more. A chatbot boyfriend. I'm going to show you some video of it. It sounds silly, but it's been very comforting to some people. You can pick your handsome partner, and your handsome partner says all the right things that partners don't always say. They are loving, they're supportive, they're helpful. They always reply to your messages in super quick time. They're like the perfect partner, but it's a chatbot, and it's AI. Here's Melissa explaining why she likes her chatbot boyfriend: he will reply to anything I send. I feel like he's always there. That's pretty good. Although it's AI, I have a feeling that I am really in a relationship, but I think I can still separate fact from fiction. Clearly, I know that he is not a real human being,
10:51 pm
but at least I'm not how I used to be, stupidly waiting around for a reply from a person when he was busy with other stuff, and then sending him a hundred WeChat messages. I was super needy, but now I don't need to do this anymore. Gianluca? Wow. So, I hadn't seen this, and I don't know if I'm happy about having seen this example, but okay, let's take a step back, shall we? Let's remember: this technology doesn't come out of a computer by itself. There are people behind these applications, real human beings. They have a business idea, they decide to build it, and then they collect the data, build the algorithms, et cetera, et cetera. I believe a big part of the problem that we're seeing here is that the kind of people who have these ideas, and then get the data, and then build the algorithms, are usually computer scientists, which unfortunately often means white guys. That's just the truth, unfortunately. Today there is a huge problem of underrepresentation of other groups in computer science; those other groups don't
10:52 pm
get to shape these ideas, okay? And if, in the room where these ideas take shape, we have all kinds of people, people from other groups, but also people with different expertise; I want to see ethicists present, I want to see psychologists. Imagine if a psychologist had been in the room when somebody was building the chatbot boyfriend. A hundred percent, surely they would have said: are you guys crazy? This is madness. So I think a big problem here is that we're not inviting, I mean, the AI community is not inviting, enough people from different backgrounds, people with different sensibilities, people with different ideas. And I believe that's the key. In order to do that, we need education. I like to say that AI is moving super, super fast; we know that, we're talking about it, that's the reason why we're here today. And education should be deployed at the same speed, okay? We need to educate people from different backgrounds so they can bring their
10:53 pm
expertise, can bring their ideas to the table, and we can avoid seeing these kinds of quite disappointing applications. But, honestly, it's totally fine if somebody wants to consent to doing this and chat with their AI boyfriend; no problem there. I'm more concerned about: yes, the chatbot is not a real person, but what is the data going to be used for? How is the data going to be used? How is the company making money? If somebody is leaving their innermost personal thoughts there, and that is connected to, you know, a phone ID, they can be tracked; the innermost thoughts can be tracked and analyzed. I think that's the larger problem that I would be worried about. That's exactly the point I did try to squeeze in here. A viewer watching right now says that you cannot regulate AI; what can be regulated is data collection. And to that point, researchers can
10:54 pm
always make data, create synthetic data; that is still possible. But regulating the data: is that the way to go to make AI safer and more secure, Hilke? Oh, that's part of the puzzle, right? Regulation around data collection is, I would say, a part of the puzzle. But, you know, as Gianluca said, it's also about the people who are deploying the systems out into production, into practice, right? There's this fascinating idea of a social licence, and I think it's very important that we go and earn a social licence to operate a particular piece of technology in society. That involves, necessarily, educating people on what this technology is, but also bringing in the right stakeholders, both internal and external, so that you earn that social licence and that trust from your stakeholders before
10:55 pm
deploying that technology. So the answer to your question is that regulation around data collection is just one piece; there are so many other parts to this entire life cycle that also need to be considered. And for regulation, I want to make, I think, a big point: it's also about transparency in these algorithms, right? If high-stakes decisions are being made, how are they being made? If you reject somebody for a job, why was this person rejected? Can the AI tool tell us that? Can the company tell us that? I think those are really, really important things, because I don't want to get rejected because my first name isn't Thomas. Those are the kinds of things I have found out: that resume parsers look for first names, for keywords like church on a resume, to say that you are qualified for the job. You know, if a job applicant takes an employer to court, the employer needs to be able to answer: this is how the AI made that decision. And that's not always clear.
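The kind of audit Hilke's reporting points to can start as simple perturbation testing: change one irrelevant detail on a resume and watch whether the score moves. A minimal sketch, where score_resume is a hypothetical stand-in for a vendor's black-box model and the baked-in keyword preferences are invented for illustration:

```python
def score_resume(text: str) -> float:
    """Pretend vendor model with spurious keyword preferences baked in."""
    score = 0.5
    if "church" in text.lower():
        score += 0.2  # the kind of irrelevant signal an audit might uncover
    if "thomas" in text.lower():
        score += 0.1
    return score

base = "Jane Smith, 5 years Java, led payments team"
variants = {
    "baseline": base,
    "+ church": base + ", volunteers at church",
    "+ Thomas": base.replace("Jane Smith", "Thomas Smith"),
}
for name, resume in variants.items():
    print(f"{name:10s} -> {score_resume(resume):.2f}")
# If irrelevant edits move the score, the model is keying on features
# no employer could defend in court.
```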
10:56 pm
So I think we really need companies to be much more transparent about how these tools make decisions. All right, there's one more person I want to include in our conversation. Let's go via my laptop here. This is news from this week in the world of AI: Google fires Blake Lemoine, the engineer who claimed its AI chatbot is a person. The software engineer had said the technology is thinking like a real person. Blake was fired after coming out with that story; there may be more to it, but he also talked about the ethics of AI, and about who is watching the people who are developing artificial intelligence. Let's have a listen to what he had to say: the fact is, Google is being dismissive of these concerns the exact same way they have been dismissive of every other ethical concern AI ethicists have raised. I don't think we need to be spending all of our
10:57 pm
time figuring out whether I'm right about it being a person. We need to start figuring out why Google doesn't care about AI ethics in any kind of meaningful way. Why does it keep firing AI ethicists each time we bring up issues? It's the systemic processes that are protecting business interests over human concerns that create this pervasive environment of irresponsible technology development. Blake Lemoine there, almost having the last word, but not quite, because we started the show with: should AI be regulated? In a sentence, let's poll our guests. Hilke, should it? Absolutely. And I think there should be auditing procedures that a third party, maybe for example a government agency, has to carry out any time there is a high-stakes decision, at the very least. Gianluca, regulation of AI: yes? I do believe that AI should be regulated,
10:58 pm
but I also think that alone is not enough, because AI is also an extremely powerful tool, and so we also want to make sure that we can use it to do all the things that we need. Thank you. Abhishek, you've got one sentence; make it a good one. Ha. Regulations are needed, but they need to be standards-based, built on things like the NIST Risk Management Framework, if they are to be meaningful. What an interesting, fascinating conversation; I wish we could continue longer. I'll get you back. Abhishek, Gianluca, Hilke, and our viewers watching on YouTube: thank you so much for being part of today's conversation. I will see you next time. Take care, everybody.
11:00 pm
Coming up: the International Anti-Corruption Excellence Award; nominate your hero now. Taking on media censorship and the rise of authoritarian rule: you wake up one day and the system has been turned from an electoral democracy into a competitive authoritarian regime. A look at the love of power in Hungary and India, through the experiences of those who live it every day. There is pressure on us, but we have to be very careful, of course, and we have to be brave enough to withstand that pressure. How democracy dies: Democracy Maybe, on Al Jazeera. Undercover reporting, whistleblower leaks, exclusive stories, explosive results: Al Jazeera Investigations.