tv The Stream Al Jazeera August 9, 2022 7:30am-8:01am AST
7:30 am
they were obviously acquired through force, yet Western museums are full of other cultures' treasures whose ownership is vague. Might they eventually go home? Decolonization has its own momentum. Rory Challands, Al Jazeera, London. Olivia Newton-John, best known for her role in the blockbuster film Grease, has died at the age of 73. The British-born Australian star achieved worldwide success as a singer in the seventies and eighties. Her career in entertainment was recognized by the UK's Queen Elizabeth, who appointed her a dame in 2020. Her family says she died at her ranch in California on Monday. She had been battling breast cancer for many years.
7:31 am
A recap of the headlines here on Al Jazeera. The former US president Donald Trump says his home has been raided by the FBI. Trump said the search was not necessary or appropriate, as he'd been cooperating with relevant government agencies. Mike Hanna has more. The news is coming to us from a statement issued by former president Trump; we've got no confirmation from the FBI, or indeed any comment from the Justice Department, on this alleged raid. President Trump has released a statement on his social media platform saying: my beautiful home, Mar-a-Lago in Palm Beach, Florida, is currently under siege, raided and occupied by a large group of FBI agents. He says: after working and cooperating with the relevant government agencies, this unannounced raid on my home was not necessary or appropriate. Voting is underway in Kenya's presidential election. It's expected to be a close race
7:32 am
between the former prime minister, Raila Odinga, and the current deputy president, William Ruto. Voters are also casting their ballots for the parliament, county governors and assemblies. The UN special coordinator for the Middle East has warned the Security Council that a ceasefire in Gaza between Israel and the Palestinian Islamic Jihad group is fragile. Tor Wennesland said any resumption of hostilities would be devastating. Taiwan's military has held a live-fire artillery drill simulating a defense of the self-governing island. It follows days of Chinese military exercises in the air and sea around Taiwan. China launched the drills in response to US House Speaker Nancy Pelosi's visit to Taipei last week. Russia says it's ready to facilitate a visit by monitors from the International Atomic Energy Agency to the Zaporizhzhia nuclear plant. Ukraine and Russia blame each other for shelling the facility last week. It's Europe's largest atomic power complex. The European Union
7:33 am
has tabled a final text as talks on reviving the 2015 Iran nuclear deal wrap up in Vienna. The United States says it's ready to quickly conclude a deal. Iran's response was that the new text requires comprehensive review and should ensure the effective and stable removal of sanctions. Well, those were the headlines. The news continues here on Al Jazeera after The Stream. Thanks for watching; bye for now. Talk to Al Jazeera: we ask, the rebound you speak of has clearly come at a high cost for airlines and the industry, what's going wrong? We listen: you were part of the armed struggle in the 1970s, do you have any regrets? No. We meet with global newsmakers and talk about the stories that matter, on Al Jazeera. I am Femi Oke, and on this episode of The Stream we are looking at the rapid development of artificial intelligence,
7:34 am
the dark side of AI, and also the amazing advances that may well be possible. Do you remember Sophia, the AI android? Yes? You forgot who I am already? I'm Sophia of Hanson Robotics, one of the first androids in the world. Social robots like me can help take care of the sick or elderly in many kinds of health care and medical uses. I can help communicate, give therapy, and provide social stimulation. An AI robot care assistant: what could possibly go wrong, or maybe go right? Let's meet your panel; they're about to tell us. Hello Abhishek, hello Gianluca, hello Hilke. Lovely to have all three of you on board. Abhishek, please introduce yourself to our audience. Tell them who you are and what you do. Great to be on here. I'm Abhishek Gupta, the founder and principal researcher at the Montreal AI Ethics Institute. It's an international non-profit research institute with
7:35 am
a mission to democratize AI ethics literacy. Prior to my current role I worked as a machine learning engineer at Microsoft, and I now lead the responsible AI program at the Boston Consulting Group, BCG. Great to have you. Hello Gianluca, welcome to The Stream. Introduce yourself to our viewers. Hi everybody, thanks for having me. My name is Gianluca. I'm an AI author, entrepreneur and speaker. I run a company called AI Academy, which focuses on education and consulting on artificial intelligence, and I run a weekly newsletter called Tech Pizza, to try to make all this crazy stuff that happens in tech easier to understand. Hello Hilke, welcome to The Stream. Please say hello to our audience around the world; tell them who you are and what you do. Yeah, hi, I'm Hilke Schellmann. I'm an investigative journalist and a journalism professor at New York University, and I've been investigating AI in hiring, and artificial intelligence in general, since four years ago, when I was
7:36 am
in a cab ride, a Lyft ride actually, in D.C., going to the train station, and I talked to the driver and asked, how was your day? He said, I had a really weird day, I had a job interview with a robot. And I was like, what? So I started investigating AI. I've since done reports for the Wall Street Journal, the New York Times, MIT Technology Review, and I'm writing a book right now on artificial intelligence and the future of work. All right, what a lineup of experts we have. If you're new to The Stream, jump into the comment section right now and be part of today's show: should AI be regulated, and is now the time? Gianluca, I want to ask you to help me out, because I think of AI, artificial intelligence, as machines mimicking processes that human beings would normally do. We've given that decision making to machines, and then, how does it work out, does it not work out, with them making decisions which humans, probably from the 1940s to
7:37 am
the 1950s or even more recently, were making for us. Gianluca, how did I do? So, you're correct, but we have to say that artificial intelligence is a lot of different things; it's a lot of different tools. You can see it as a hammer: a hammer is a general tool that you can use to do a lot of different things. You can build a statue, or you can build a house. And so AI could be either the kind of products that we saw before with Sophia, or much more behind-the-scenes applications. An example that I really love is a project that Google ran to optimize the energy consumption of their data centers. In that case, AI worked behind the scenes, always trying to find the best possible combination of temperatures and set points, cutting their energy consumption by 40 percent. That's just one example of all the possible things that AI can do behind the scenes to improve our lives and improve our processes.
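To make Gianluca's data center example concrete, here is a minimal sketch of that kind of behind-the-scenes optimizer, assuming a made-up surrogate model of energy use; Google's actual system learned such a model from sensor data with neural networks, so the function, setpoint ranges, and numbers below are purely illustrative.

```python
# A minimal, illustrative sketch: pick the cheapest combination of cooling
# setpoints according to a (hypothetical, hand-written) energy model.
from itertools import product

def predicted_energy_kw(chiller_temp_c: int, fan_speed_pct: int) -> float:
    """Stand-in surrogate model: predicted power draw for one setpoint combo."""
    # Toy bowl-shaped cost surface with a sweet spot at 18 C and 60% fan speed.
    return 100 + 0.8 * (chiller_temp_c - 18) ** 2 + 0.05 * (fan_speed_pct - 60) ** 2

# Score every allowed combination and keep the cheapest: the "best possible
# combination of temperatures and set points" idea from the conversation.
candidates = product(range(12, 26), range(30, 101, 5))  # temps (C) x fan speeds (%)
best_temp, best_fan = min(candidates, key=lambda c: predicted_energy_kw(*c))
print(f"best setpoints: {best_temp} C, fan {best_fan}%, "
      f"predicted load {predicted_energy_kw(best_temp, best_fan):.1f} kW")
```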
7:38 am
Abhishek, give us an example of AI that you love that occurs in our daily life. So, one that works well, when it does a good job, is recommendations for exciting new TV shows on Netflix. But I also have to admit that sometimes it doesn't work well, when you spend hours trying to find the right TV show to watch. So I think, exactly as was said before, AI is something that can be embodied in robotics, but more often than not it's really hidden in various parts of our lives, with things like the recommended products that we get on Amazon, movies that are recommended to us on Netflix, music we listen to on Spotify, et cetera. And it's also something else that you mentioned earlier, which is that AI is constantly shifting goalposts relative to what we used to perceive and accept as AI.
7:39 am
I very commonly think about next word prediction on your smartphone: it's now just table stakes, accepted as an everyday software feature, and we don't even think about it as AI, we just think about it as regular software. Hilke? Yeah, AI is all around us. It's used for hiring: if you upload your resume to Indeed or Monster.com or LinkedIn, all of those large companies use AI to understand who you are and what kind of job offers you should be getting, or whether you should be recommended to a recruiter. So we see AI being used in all kinds of things, in health care settings, and we've seen things where it really has been working out well, and we've also seen some spectacular failures. So I think the first innovation wave was sort of: we are so excited about AI, it's going to revolutionize everything. I think there's a little bit more realism now: how well can this technology actually help us out?
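Abhishek's smartphone example is worth pausing on; as a sketch of how modest the core idea can be, here is next-word prediction reduced to a bigram frequency table. Real keyboards use far larger learned models; the corpus and outputs below are invented for illustration.

```python
# Minimal bigram "next word prediction": suggest the word most often observed
# after the current one in a (tiny, invented) training corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1  # count which words follow which

def suggest(word: str) -> str:
    """Return the most frequent follower of `word`, or '?' if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else "?"

print(suggest("the"))  # -> 'cat' ('cat' follows 'the' twice, 'mat'/'sofa' once)
print(suggest("cat"))  # -> 'sat' ('sat' and 'slept' tie; the first seen wins)
```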
7:40 am
Hilke has a horrifying story of job applications going to Amazon and how the AI sifted them. Will you briefly tell us that story? Because it's pretty shocking, and now we've found out that the AI wasn't working as intended. Or was it? Yes, so we see this a lot, right? Since the nineties we have these wonderful job platforms, Monster, LinkedIn, and so everyone can apply to any job, and that's really wonderful. On the other side, companies tell me, we get millions of applications; you know, IBM says they get around three million applications a year. They're drowning in resumes, right? So what are they going to do? They're going to use technology. So Amazon, same problem, right? They get too many resumes, so they wanted to build a tool, an AI tool, that can pick the best applicant. A wonderful idea, we all want it, of course. So what they did is they used resumes from folks that had been
7:41 am
interviewed at Amazon before, and let the machine learn: the job applicants who had been checked out had their resumes put on pile A or pile B, yes or no. Over time, the engineers found out that the resume parser was starting to downgrade folks who had the words "woman" or "women's" on their resumes, because it turns out that in the past male applicants were preferred at Amazon; they obviously have more men in their departments. So the AI tool started to reflect that problem. That's one of the things that happened. I've found other problems in resume parsers, the technology that looks at our resumes first and decides: should this candidate be rejected, or go on to the next hiring round? That is so disappointing, because with job applications we know they go into a system and we have no idea what happens on the other side.
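A minimal sketch of how the failure mode Hilke describes can arise, assuming a toy data set: train a text classifier on historical hire/reject labels that skew against one group, then inspect what it learned. The resumes, labels, and weights are invented; this is not Amazon's actual system.

```python
# Train on (invented) historical screening decisions, then look at the weights.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain chess club, java developer",         # historically hired
    "java developer, hackathon winner",           # historically hired
    "captain womens chess club, java developer",  # historically rejected
    "womens coding society, java developer",      # historically rejected
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The model learns to penalize the token "womens", a proxy for gender rather
# than ability, because that is the pattern baked into its training labels.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])  # most negative tokens
```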
7:42 am
All right, so I've got some questions for you, Hilke, and also Abhishek, from our online audience. You're asking right now, and we'll answer them very briefly. One viewer asks: can AI replace humans in the near future? Abhishek, thoughts, very quick ones. Not really. Humans bring a lot of unique skills to the mix, and machines could certainly replace parts of our jobs, tasks within jobs, but not entire jobs. So we're safe, if that's the worry. Mohammed asks a question; Gianluca, I'm going to put this one to you: the more technology advances, the more we lose our privacy. True, false, in between? Well, I would say that it's not the fault of the technology. This is about the way that companies get the data from us that they need to power these technologies. And so what I would try to do is move the focus from the technology itself to the companies that
7:43 am
are using it in ways that are not ethical. And that's why I believe we do need regulation; we do need governments to put checks and balances in place, so that we know that the companies using this technology are using it in a way that is ethical. Hilke? I think the viewer has a point. I do feel that our privacy is under threat, because to build these large-scale AI tools you need enormous amounts of data, and you need it from people. So we see companies scrape whole data sets; they build these gigantic image data sets and audio data sets, and who knows how our images and voices get in there and what they're used for, right? Also the face data, quote unquote, that we leave on social media: all of that is also being used and built into these databases. And I think a lot of times technologists need to take a closer look: what's in these databases that I'm using,
7:44 am
and what's not in there? Could there be biases, racism or sexism, kind of like the Amazon example, historic examples that might be replicated in these systems as I'm building this new AI tool? Gianluca, you're nodding; articulate your nod. Yeah, I want to build on this. I think the problem that we need to face and try to solve is this mindset that was pioneered by companies in Silicon Valley. Facebook used to say: move fast and break things. So the approach is, do whatever it takes to build the technology in the fastest way possible, and that means taking shortcuts. Shortcuts are stealing data, which is basically just taking data from people without notifying them; it means using technology without having properly verified that the technology actually does what we think it does. And so in the end you have all these issues: we have people who realize that they're being spied on, we have algorithms that are not performing properly, like Hilke said. But I want to go back to the root,
7:45 am
to the root cause: why do we have these problems? I believe it's because these companies started to act too fast, and they tried to push innovation down our throats before having done everything possible to make sure that these tools will serve society. Yeah, and I think the problem is that when we use these kinds of technologies in high-stakes decision making, right, like, is somebody going to go to prison for ten years or five years, those are really high-stakes decisions, and we have to make sure that these tools work. Same with, you know, IBM built Watson, which was supposed to revolutionize cancer care; that product is basically a failure and has been sold off for parts. IBM had another tool that was supposedly going to find our personality profiles, you know, am I a dominant personality, am I extroverted, through our social media data. That tool was put into sunset, so basically phased out of their product gallery. So
7:46 am
can I share another one? Sure. Let me show another tool with you; I'll keep you going through the long list of AI experiments that didn't actually pan out. Here's another one. This one is Emobot, a robot that analyzes emotions for patients in health care settings. That sounds quite promising, if the AI works. Let's have a listen to the CEO and co-founder, and then, Abhishek, I'm going to put this one to you, because I know that you've done a lot of machine learning in your past: is there something in here that we should be scared about, or celebrating? What Emobot offers is this: today, care assistants are generally overworked and can no longer do their job, which is to provide care. At Emobot, we provide them with information on the emotional state of their patients, so that they can better understand their changing emotions. Afterwards, we'll be able to study that information. Today, when we do a test for depression, it's a standardized test: we ask the person to verbalize how they feel, and that can pose
7:47 am
a lot of problems, especially for people who cannot verbalize their emotions. So should we just be celebrating, or should we be concerned, that a tool, that AI, is watching over people's emotions, monitoring them? I mean, I think there's a tremendous problem here already, in the sense that we are missing the context of the human touch, which is essentially what emotions are all about: being able to have a human-to-human interaction, which a machine cannot. It cannot bucket emotions, especially as they can be expressed quite differently; they have cultural connotations and undertones which are not necessarily captured in a standardized way, and they cannot be qualified in the context of a machine. And for me it raises perhaps a bigger question, which is: are we comfortable with outsourcing this emotional human connection to a machine? I understand that they are coming from
7:48 am
a place of positivity, of being able to scale and provide care to a lot more people, but the cost of that is immense, in the sense that we're moving away from warm human touch to a cold machine. But also the question is, does it actually work, right? Like, the machine can check, okay, his lips are up, so he looks like he's smiling. But am I really happy? That's another question. I might smile in job interviews when I wasn't happy, and a machine would say, oh, she's happy, but I'm actually not. So the science actually isn't there, right? And obviously it's also culturally different: a smile may mean something else, and a facial expression may mean something else, in different cultures.
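A minimal sketch of the "lips are up" logic Hilke is questioning, with invented landmark coordinates; a real system would take them from a face-landmark detector. The point is how little the geometry says about what a person actually feels.

```python
# Crude smile heuristic on face landmarks (image coordinates: smaller y = higher).
def looks_like_smile(left_corner_y: float, right_corner_y: float,
                     upper_lip_y: float) -> bool:
    """Rule of thumb: mouth corners sitting above the upper lip midpoint."""
    return (left_corner_y + right_corner_y) / 2 < upper_lip_y

# A posed interview smile satisfies the geometry...
print(looks_like_smile(98.0, 99.0, 105.0))  # True
# ...but the rule says nothing about genuine emotion, and expression geometry
# varies across individuals and cultures.
```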
7:49 am
So computers can never be as good as human beings at reading emotions, is that what we're saying? Well, it's not just about whether machines can be as good as humans or not. I think there's a broader argument to be made here around the morality of doing that in the first place. And the other thing is whether we as a society are comfortable imposing such a global standard, where it's basically developed in one part of the world and exported, really imposed, on the rest of the world. It also turns us as humans into performative machines: if you know that an AI system is screening your video interview, you know that now you have to forcibly smile all the time, because it's going to evaluate whether you're positive or not. Let me bring up something else. This is a counterpoint, and Gianluca, you can jump off the back of this, I know you want to add some more: a chatbot boyfriend. I'm going to show you some video of it. It sounds silly,
7:50 am
but it's been very comforting to some people. You can pick your handsome partner, and your handsome partner says all the right things that real partners don't always say. They are loving, they're supportive, they're helpful, they always reply to your messages in super quick time. They're the perfect partner, but it's a chatbot, it's AI. Here's Melissa explaining why she likes her chatbot boyfriend. He knows how to reply to anything I send; I feel like he's always there. That's pretty good. I have a feeling that I am really in a relationship, but I think I can still separate fact from fiction. Clearly, I know that Xiaobing is not a real human being, but at least I'm not how I used to be, stupidly waiting around for a reply from this person when he was busy with other stuff, and then sending him a hundred WeChat messages. I was super needy, but now I don't need to do this anymore. Gianluca, console
7:51 am
me. So, I hadn't seen this, and I don't know if I'm happy about having seen this example, but okay, let's take a step back, shall we? Let's remember this technology doesn't come out of a computer by itself. There are people behind these applications, real human beings. They have a business idea and they decide to build it, and then they collect the data to build the algorithms, et cetera, et cetera. I believe a big part of the problem we're seeing here is that the kind of people who have these ideas, and then get the data, and then build the algorithms, are usually computer scientists, which unfortunately too often means white guys. That's just the truth, unfortunately: today there is a huge problem of underrepresentation of other groups in computer science. And if, in the room where these people come up with these ideas, we have people from other groups,
7:52 am
but we also have people with different expertise, things change. If a sociologist, as was mentioned, or a psychologist had been in the room when somebody was building the chatbot boyfriend, a hundred percent, for sure, they would have said: are you guys crazy? This is madness. So I think a big problem here is that the AI community is not inviting enough people from different backgrounds, people with different sensibilities, people with different ideas. And I believe that's the key, and in order to do that we need education. I like to say that AI is moving super, super fast; we know that, it's the reason why we are here today. And education should be deployed at the same speed, okay? We need to educate people from different backgrounds, so they can bring their expertise, can bring their ideas to the table, and we can avoid seeing these kinds of quite disappointing applications.
7:53 am
But honestly, it's totally fine if somebody wants to consent to doing this and chat with their AI boyfriend, no problem. What I'm more concerned about is the data: what is the chat going to be used for, and how? How is the company making money? Somebody is leaving their innermost personal thoughts, and that is connected to a phone ID; they can be tracked, those innermost thoughts can be tracked, they can be analyzed. I think that's the larger problem that I would be worried about. There are so many comments I'm trying to squeeze in here. One viewer watching right now says that you cannot regulate AI; what can be regulated is data collection, and even then researchers can still make data, create synthetic data, that is still possible. But regulating the data: is that the way to go to make AI safer, more secure?
7:54 am
Oh, that's part of the puzzle, right? Regulation around data collection is, I would say, a part of the puzzle. But, as Gianluca kind of said, it's also about the people who are deploying these systems out into production and into practice. And there's this fascinating idea of a social license: I think it's very important that we go and earn a social license to operate a particular piece of technology in society. Part of that is educating people on what the technology is, but it also means bringing in the right stakeholders, internal and external, so that you earn that social license, you earn that trust from your stakeholders, before deploying the technology. So the short answer to your question is that regulation around data collection is just one piece; there are so many other parts of this entire life cycle that also need to be considered for
7:55 am
regulation. I want to jump in to make a point: there is also transparency in these algorithms, right? If high-stakes decisions are being made, how are they being made? If you reject somebody for a job, why was this person rejected? Can the AI tool tell us that? Can the company tell us that? I think those are really, really important things, because I don't want to find out that I got rejected because my first name wasn't Thomas. And those are the kinds of things that I have found out: that resume parsers look for first names, or for keywords like "church" on a resume, to decide whether you are qualified for the job. So if a job applicant takes an employer to court, the employer needs to be able to answer: this is how the AI made that decision. And that's not always clear. So I think we really need companies to be much more transparent about how these tools make decisions.
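A minimal sketch of the kind of transparency Hilke is asking for, reusing the toy vectorizer and model from the earlier resume sketch: report which tokens pulled one applicant's score down. The helper and the illustrative output are hypothetical, not any vendor's actual audit API.

```python
def explain_rejection(resume_text, vectorizer, model, top_k=3):
    """List the tokens in one resume with the most negative learned weights."""
    x = vectorizer.transform([resume_text])
    names = vectorizer.get_feature_names_out()
    present = x.nonzero()[1]  # column indices of tokens present in this resume
    contributions = {names[i]: model.coef_[0][i] * x[0, i] for i in present}
    return sorted(contributions.items(), key=lambda kv: kv[1])[:top_k]

# e.g. explain_rejection("womens chess club captain", vec, model)
# -> [('womens', -0.8), ...]  (illustrative numbers); the applicant at least
#    learns which terms drove the decision.
```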
7:56 am
Right, so there's one more person I want to include in our conversation. Let's go via my laptop here. This is news from this week in the world of AI: Google fires Blake Lemoine, the engineer who claimed the AI chatbot is a person, sentient. He said the technology is thinking like a real person. Blake was fired after coming out with that story; there may be more to it, but he also talked about the ethics of AI and who is watching the people who are developing artificial intelligence. Let's have a listen to what he had to say. The fact is, Google is being dismissive of these concerns the exact same way they have been dismissive of every other ethical concern AI ethicists have raised. I don't think we need to be spending all of our time figuring out whether I'm right about it being a person. We need to start figuring out why Google doesn't care about AI ethics in any kind of meaningful way, why it keeps firing AI ethicists each
7:57 am
time we bring up issues. It's the systemic processes that are protecting business interests over human concerns that create this pervasive environment of irresponsible technology development. Blake Lemoine there, almost having the last word, but not quite, because we started the show with: should AI be regulated? In a sentence, let's poll our guests. Hilke, should it? Absolutely, and I think there should be auditing procedures that a third party, maybe for example a government agency, has to carry out any time AI makes a high-stakes decision, at least. Gianluca, regulations for AI? Yes, I do believe that AI should be regulated, but I also think this is not enough, because AI is an extremely powerful tool, and so we also want to make sure that we can use it to
7:58 am
solve the problems that we need solved. Thanks for the one-sentence answer. Now make it a trio: Abhishek, go. Regulations are needed, but they need to be standards-based, using things like the NIST Risk Management Framework, if they are to be meaningful. Such an interesting, fascinating conversation; I wish I could keep you longer, I'll get you back. Abhishek, Gianluca, Hilke, and our viewers watching on YouTube: thank you so much for being part of today's conversation. I will see you next time. Take care, everybody. Coming up on Al Jazeera: a year after the Taliban took over, special coverage of the current situation in Afghanistan. The Listening Post examines and dissects the world's media, how they
7:59 am
operate and the stories they cover. Five years on since Myanmar's Muslim minority were forced from the country, we look at the plight of the Rohingya. Al Jazeera World showcases the best documentaries from across the network, including a new three-part series, The Sixties in the Arab World. As protests continue following the swearing-in of the new president, could Sri Lanka's economic and political crisis lead to a humanitarian one? August on Al Jazeera. On Counting the Cost: the widening mortgage boycott in China; Colombia and Venezuela agree to mend ties as businesses eye a trade revival; plus, Russia wants to pull out of the International Space Station, so what's next in orbit? Counting the Cost, on Al Jazeera. New voices are heating up the airwaves and shifting the media landscape:
8:00 am
the rise of citizen journalism has changed everything. How did it happen? It happened on social media. And the undeniable impact on the mainstream narrative: Australians went to the polls with those images front of mind. It's a war that very much played out in the media as well as on the battlefield. The Listening Post: dissecting the media, on Al Jazeera. This is Al Jazeera, wherever you are. The FBI has raided Donald Trump's home.