
The Stream | Al Jazeera | July 28, 2022, 11:30am-12:01pm AST

11:30 am
dinosaur, so rare. Now, who could possibly buy this? Well, it could be anybody, because it's an open auction. It could be a university, perhaps a research institute; most likely it will be a wealthy private buyer, who could then loan it to a museum or keep it in their private collection. And for many paleontologists, the uncertainty over who might buy this rare specimen has researchers worried. There are not very many specimens of Gorgosaurus; all the others are in museums. And with one more being sold, you might say, oh, well, it's only one. But if there are only a few, that's a lot of information that we lose. And you say, well, why are you losing it? Well, for one, you have no guarantee that whoever buys it is going to allow access to scientists. Even at Sotheby's there are numerous other dinosaur artifacts that cost a fraction of this one, like a Triceratops skull that could garner as much as $300,000. But it's the Gorgosaurus skeleton that everyone is talking about.
11:31 am
And all paleontologists can do is watch and hope whoever buys it makes it available to research, knowing that if they don't, this could be the last chance anybody has to ever see it in public. Gabriel Elizondo, Al Jazeera, New York.

This is Al Jazeera, and these are the top stories. Hundreds of people stormed Iraq's parliament on Wednesday to protest against the nomination of a new prime minister. The protesters are largely supporters of the influential Shia cleric Muqtada al-Sadr. Our correspondent has more from Baghdad: there are rival factions, namely al-Sadr and his supporters and his Sunni allies on the one hand, and the Iranian-backed parties of the Coordination Framework and their Kurdish allies and other independent lawmakers on the other hand.
11:32 am
The conflict between them seems set to go on, and the situation could get worse if both sides do not reach consensus on forming a government; as you know, both sides have military factions on the ground.

There have been reports of multiple missile strikes on the Ukrainian capital, Kyiv. Sirens have been heard across the city, and air defenses are believed to have been activated.

U.S. media are reporting the Biden administration has offered Russia a prisoner swap to secure the release of basketball star Brittney Griner and a former marine. Russian arms dealer Viktor Bout would be freed as part of the deal.

North Korea has marked what it calls a day of victory with celebrations in the capital, Pyongyang. Military jets painted the country's flag across the sky
11:33 am
as crowds below waved flags and cheered. Kim Jong Un delivered a speech threatening to mobilize his nuclear deterrent in case of a confrontation with the U.S.

More than 300 people are now known to have died in floods across several provinces of Pakistan since June. Many more have been injured in the extreme weather, which was triggered by monsoon rains. Rescue teams have been deployed.

Those are the headlines. The news continues here on Al Jazeera after The Stream.

We understand the differences and similarities of cultures across the world. So no matter where you live, Al Jazeera will bring you the news and current affairs that matter to you. Al Jazeera.
11:34 am
Hi, I'm Femi Oke. On this episode of The Stream, we are looking at the rapid development of artificial intelligence: the dark side of AI, and also the amazing advances that may well be possible. Do you remember Sophia, the AI android?

Yes? You forgot who I am already? I'm Sophia of Hanson Robotics, one of the first androids in the world. Social robots like me can take care of the sick or elderly in many kinds of healthcare and medical uses. I can help communicate, give therapy, and provide social stimulation.

AI robot care assistants: what could possibly go wrong, or maybe go right? Let's meet your panel; they're about to tell us. Hello, Abhishek. Hello, Gianluca. Hello, Hilke. Lovely to have all three of you on board. Abhishek, please introduce yourself to our audience. Tell them who you are and what you do.
11:35 am
Great to be on here. I'm Abhishek Gupta, the founder and principal researcher at the Montreal AI Ethics Institute. It's an international non-profit research institute with a mission to democratize AI ethics literacy. Prior to my current role, I used to work as a machine learning engineer at Microsoft, and I now lead the Responsible AI program at the Boston Consulting Group, BCG.

Great to have you. Hello, Gianluca, welcome to The Stream. Introduce yourself to our viewers everywhere.

Thanks for having me. My name is Gianluca. I'm an AI author, entrepreneur, and speaker. I run a company called AI Academy, which focuses on education and consulting, and we share intelligence in a weekly newsletter called Tech Pizza, where we try to make all the crazy stuff that happens in AI easier to understand.

Hilke, hello, welcome to The Stream. Say hello to our audience around the world; tell them who you are and what you do.

Hi, I'm Hilke Schellmann. I'm an investigative journalist and
11:36 am
a journalism professor at New York University. I've been investigating AI and hiring, and artificial intelligence in general, since four years ago, when I was in a cab ride, or a Lyft ride, in D.C., going to the train station. I talked to the driver and asked, how was your day? He said, I had a really weird day: I had a job interview with a robot. And I was like, what? So I started investigating AI. I've since done reports for the Wall Street Journal, the New York Times, MIT Technology Review, and I'm writing a book right now on artificial intelligence and the future of work.

All right, what a panel of excellence we have. If you're on YouTube right now, jump into the comment section and be part of today's show: should AI be regulated? Is now the time? Gianluca, I want to ask you to help me out, because I always think of AI, artificial intelligence, as machines mimicking processes that human beings would normally do. And we've given that decision-making to machines, and then we wonder:
11:37 am
how does it work out, or does it not work out, with machines making decisions which, back in the 1940s and 1950s, or even more recently, people were making for themselves. Gianluca, how did I do?

So, you're correct, but we have to say that artificial intelligence is a lot of different things. It's a lot of different tools. You can see it as a hammer: a hammer is a general tool that you can use to do a lot of different things. You can build a statue, you can build a house, you can do a lot of stuff. And so AI could be the kind of robots that we saw before, with Sophia, but it can also be a much more behind-the-scenes application. An example that I really love is a project that Google made to try to optimize the energy consumption of their data centers. In that case, AI worked behind the scenes, trying to always find the best possible combination of different temperatures and setpoints, cutting their energy consumption by 40 percent. That's just one example of all the possible things that AI can do behind the scenes to improve our lives and improve our processes.
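Google's published work on this used deep reinforcement learning over rich telemetry, but the core idea, searching for the setpoint combination that minimizes predicted energy use, can be illustrated much more simply. Below is a minimal, hypothetical sketch: the energy model and its coefficients are invented for illustration, and a real system would learn the model from sensor data and add hard safety constraints.

```python
import random

def predicted_energy_kw(setpoints):
    """Toy stand-in for a learned model of data-center energy use.
    All coefficients here are invented for illustration only."""
    chiller_c, fan_pct = setpoints
    cooling_cost = (22.0 - chiller_c) ** 2 * 3.0   # colder water costs more
    fan_cost = (fan_pct / 10.0) ** 2               # faster fans cost more
    overheat_penalty = 50.0 if chiller_c > 19 and fan_pct < 40 else 0.0
    return 100.0 + cooling_cost + fan_cost + overheat_penalty

def optimize(n_trials=10_000, seed=0):
    """Random search: sample setpoint combinations, keep the cheapest."""
    rng = random.Random(seed)
    best, best_energy = None, float("inf")
    for _ in range(n_trials):
        candidate = (rng.uniform(12.0, 20.0),   # chilled-water temp, deg C
                     rng.uniform(20.0, 100.0))  # fan speed, percent
        energy = predicted_energy_kw(candidate)
        if energy < best_energy:
            best, best_energy = candidate, energy
    return best, best_energy

if __name__ == "__main__":
    setpoints, energy = optimize()
    print(f"best setpoints: {setpoints}, predicted load: {energy:.1f} kW")
```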
11:38 am
Abhishek, give us an example of AI that you love that occurs in our daily life.

So, it works well when it does a good job, which is, you know, recommendations for exciting new TV shows on Netflix. But I also have to admit that sometimes it doesn't work well, when you spend hours trying to find the right TV show to watch. So exactly as was said before, AI is something that can be embodied in robotics, but more often than not it's really hidden in various parts of our lives, in things like the suggested products that we get on Amazon, movies that are recommended to us on Netflix, music to listen to on Spotify, et cetera. And it's also something else that you mentioned earlier, which is that
11:39 am
AI is constantly shifting the goalposts of what we used to perceive and accept as AI. Think about next-word prediction on your smartphone: now it's just table stakes, accepted as an everyday software feature, and we don't even think about it as AI; we just think about it as regular software.

Hilke, come in.

Yeah, it's in our emails; it's used for hiring. If you upload your resume to Indeed or Monster.com or LinkedIn, all of those large companies use AI to understand who you are and what kind of job offers you should be getting, or whether you should be recommended to a recruiter. So we see AI being used in all kinds of things, in healthcare settings, and we've seen things where it really has been working out well, and we've also seen some spectacular failures. So I think the first iteration was sort of, we are so excited about AI, it's going to revolutionize everything; now there's a little bit more realism, maybe, about how well this technology can help us out.
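Next-word prediction is a good example of how mundane the mechanics can be. Modern keyboards use neural language models, but a crude version, offered here purely as an illustrative sketch, can be built from bigram counts: suggest the words that most often followed the word just typed.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; a phone keyboard would train on far more text.
corpus = (
    "see you soon . see you tomorrow . thank you so much . "
    "thank you for everything . so much to do ."
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def suggest(prev_word, k=3):
    """Return the k most frequent next words after prev_word."""
    return [word for word, _ in following[prev_word].most_common(k)]

print(suggest("you"))    # e.g. ['soon', 'tomorrow', 'so']
print(suggest("thank"))  # ['you']
```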
11:40 am
Hilke, who has a horrifying story of job applications going through to Amazon and how the AI sifted them. Will you briefly tell us that story, because it's pretty shocking. And then we found out that the AI wasn't working as intended. Or was it?

Yes. So we see this a lot, right? Since the nineties we have had these wonderful job platforms, Monster, LinkedIn, so everyone can apply to any job, and that's really wonderful. On the other side, companies tell me, we get millions of applications. You know, IBM says they get around three million applications a year. They're drowning in resumes, right? So what are they going to do? They're going to use technology. Amazon had the same problem: they get too many resumes, so they wanted to build a tool, an
11:41 am
AI tool that can pick the best applicants. A wonderful idea; we all want it, of course. So what they did is they used resumes from folks who had been interviewed at Amazon before, and, you know, let the machine learn: the job applicants' resumes were checked and put on pile A or pile B, yes or no. Over time, the engineers found out that the resume parser was starting to downgrade folks who had the word woman or women on their resume, because it turns out that, in the past, male applicants were preferred at Amazon; they obviously have more men in their departments, so the AI tool started to reflect that problem. So that's one of the things that happened. I have found other problems in resume parsers; that's the technology that looks at our resumes and says, should this candidate be rejected or go on to the next hiring round?
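The mechanism behind the Amazon story is easy to reproduce in miniature. The sketch below, with invented data, derives word weights from historical yes/no screening decisions that happened to disfavor one group; any token correlated with the disfavored group, such as "women's", inherits a negative weight, even though gender was never an explicit input.

```python
import math
from collections import Counter

# Invented historical screening decisions (1 = advanced, 0 = rejected).
# The past decisions happen to disfavor resumes mentioning "women's".
history = [
    ("software engineer chess club captain", 1),
    ("software engineer rugby team", 1),
    ("software engineer women's chess club captain", 0),
    ("data analyst women's coding society", 0),
    ("data analyst debate society", 1),
]

pos, neg = Counter(), Counter()
for text, label in history:
    (pos if label else neg).update(set(text.split()))

def weight(word):
    """Smoothed log-odds of advancing, given the word appears."""
    return math.log((pos[word] + 1) / (neg[word] + 1))

for word in ["software", "chess", "women's"]:
    print(f"{word:10s} weight = {weight(word):+.2f}")
# "women's" gets a negative weight purely from biased historical labels.
```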
11:42 am
That is so disappointing to hear, because with job applications, we know they go into a system, and we have no idea what happens on the other side. All right, so I've got some questions for you, Gianluca, and also Abhishek, that our online audience are asking right now. Would you answer them very briefly? Kurt asks: can AI replace humans in the near future? Abhishek, thoughts, very quick ones.

Not really. Humans bring a lot of unique skills to the mix, and machines could certainly replace parts of our jobs, tasks within jobs, but not entire jobs. So we're safe, if that's the worry.

Mohammed asks a question; Gianluca, I'm going to put this one to you: the more technology advances, the more we lose our privacy. True, false, in between?

Well, I would say that it's not the fault of the technology. This is about the way the
11:43 am
companies get the data from us that they need to power these technologies. And so I would try to move the focus from the technology itself to the companies that are using it in a way that is not ethical. And that's why I believe we do need regulation. We do need governments to try to put checks and balances in place, so that we know that the companies that are using this technology are doing it in a way that is ethical.

Hilke?

I think Mohammed has a point. I do feel that our privacy is under threat, because to build these large-scale AI tools you need enormous amounts of data, and you need it from people. So companies scrape whole data sets; they build these gigantic image data sets and audio data sets, and who knows how our images or voices get in there and what they're used for, right? Also, the face data, quote unquote, that we leave on social media: all of that is also being used and built into these databases. And I think a lot of times,
11:44 am
maybe technologists need to take a closer look at what's in these data sets that they're using: what's in there, and could there be biases and, you know, sexism, like the Amazon example, historic patterns that might be replicated in these systems as they build a new AI tool?

Gianluca, you are nodding; why do you nod?

Yeah, I want to build on this. I think the problem that we need to face and try to solve is this mindset that was really pioneered by companies in Silicon Valley; Facebook's was, let's say, move fast and break things. That approach is: do whatever it takes to build this technology in the fastest way possible. And that means taking shortcuts. Shortcuts are stealing data, taking data from people without notifying them; it means using technology without having properly verified that the technology actually does what we think it does. And so then you have all these issues. We
11:45 am
have problems of people realizing that they're being spied on; we have algorithms that are not performing properly, like Hilke said. But I would go back to the root cause: why do we have these problems? I believe it's because these companies started to act too fast, and they tried to push innovation down our throats before having done everything possible to make sure that these services behaved as they should.

And I think the problem is that we use these kinds of technologies in high-stakes decision making, right? Like, is somebody going to go to prison for 10 years or 5 years? Those are really high-stakes decisions. We have to make sure that these tools work. It's the same with, you know, IBM built Watson, which was supposed to revolutionize cancer care; that product is basically a failure and has been sold off for scraps. IBM had another tool that was supposedly going to find our personality profiles, you know, am I a dominant personality, am I extroverted, through
11:46 am
our social media data. That tool was sunset, so basically phased out of their product gallery.

Can I show another tool? Hilke, you keep going with your numbers of AI experiments that didn't actually pan out; here's another one. This one is Emobot, which is a robot that analyzes the emotions of patients in healthcare settings. That sounds quite promising, if the AI works. Let's have a listen to the CEO and co-founder, and then, Abhishek, I'm going to put this one to you, because I know that you've done a lot of machine learning in your past: is there something in here that we should be scared about, or celebrating?

Eldercare assistants are generally overworked and can no longer do their job, which is to provide care. At Emobot, we provide them with information on the emotional state of their patients so that they can better understand their changing
11:47 am
emotions. Afterwards, we'll be able to study that information. Today, when we do a test for depression, it's a standardized test: we ask the person to verbalize how they feel, and that can pose a lot of problems, especially for people who cannot verbalize their emotions.

Abhishek, would you say we should be celebrating, or should we be concerned, that AI is looking after people's emotions, monitoring them?

I mean, I think there's a tremendous problem here already, in the sense that we're missing the context of the human touch, which is essentially what emotions are all about: being able to have a human-to-human interaction, which a machine cannot do. Emotions especially can be expressed quite differently; they have cultural connotations and undertones which are not necessarily captured in a standardized way, and they cannot be quantified in the context of a machine. And for me, this
11:48 am
raises perhaps a bigger question, which is: are we comfortable with outsourcing this emotional human connection to a machine? I understand that they are coming from a place of positivity, of being able to scale and provide care to a lot more people, but the cost of that is immense, in the sense that we're moving away from a warm human touch to a cold machine.

But also, the question is, does it actually work, right? The machine can check, okay, he is smiling, my lips are up, so I look like I'm smiling. But am I really happy? That's another question, right? I have smiled in job interviews; I wasn't happy, I was just there, but a machine would say, oh yeah, she's happy, when actually I'm not. So the science actually isn't there, right? And obviously it's also culturally different: a smile may mean something else, and a facial expression means something else, in different cultures.
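Hilke's point maps directly onto how these systems tend to work: they classify the visible expression, not the feeling. Below is a minimal, hypothetical sketch of the usual geometric approach; the landmark coordinates and threshold are invented, and the key caveat sits in the final comment.

```python
def smile_score(mouth_left, mouth_right, mouth_center):
    """Crude expression heuristic: how far the mouth corners sit
    above the mouth center (y grows downward, as in image coords)."""
    corner_y = (mouth_left[1] + mouth_right[1]) / 2
    return mouth_center[1] - corner_y  # positive = corners curl upward

def looks_like_smile(landmarks, threshold=2.0):
    return smile_score(*landmarks) > threshold

# Invented landmark positions (pixels) for two frames.
polite_interview_smile = [(100, 118), (140, 118), (120, 124)]
neutral_face = [(100, 121), (140, 121), (120, 122)]

print(looks_like_smile(polite_interview_smile))  # True
print(looks_like_smile(neutral_face))            # False
# Both outputs describe lip geometry only. The classifier has no access
# to whether the person is actually happy, and the same geometry can
# carry different meanings across people and cultures.
```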
11:49 am
So can computers never be as good as human beings, is that what we're saying, in emotional terms?

Well, it's not just about whether machines can be as good as humans or not. I think there's a broader argument to be made here around the morality of doing that in the first place. And I think the other thing is whether we as a society are comfortable imposing such a global standard, where it's basically developed in one part of the world and exported and imposed, really, on the rest of the world. It also turns us as humans into performative machines, because, as Hilke was saying, if you know that an AI system is screening your video interview, you know that now you have to forcibly smile all the time, because they're going to evaluate whether you're positive or not.

Let me bring up something else. This is
11:50 am
a counterpoint, and Gianluca, you can jump off the back of this; I know you want to add some more. A chatbot boyfriend. I'm going to show you some video of it. It sounds silly, but it's been very comforting to some people. You can pick your handsome partner, and your handsome partner says all the right things that partners don't always do. They are loving, they're supportive, they're helpful; they always reply to your messages in super quick time, like the perfect partner. But it's a chatbot, and it's AI. Here's Melissa explaining why she likes her chatbot boyfriend:

He finds a way to reply to anything I send. I feel like he's always there. That's pretty good. Although I have a feeling that I am really in a relationship, I think I can still separate fact from fiction. Clearly, I know that he is not a real human being. But at least I'm not how I used to be, stupidly waiting around for
11:51 am
a reply from this person when he was busy with other stuff, and then sending him a hundred WeChat messages. I was super needy, but now I don't need to do this anymore.

Gianluca, qualify that.

So, I hadn't seen this. I don't know if I'm happy about having seen this example, but okay. Let's take a step back, shall we? This technology doesn't come out of a computer by itself. There are people behind these applications; there are real human beings. They have a business idea, they decide to build it, and then they collect the data to build algorithms, et cetera, et cetera. I believe a big part of the problem that we're seeing here is that the kind of people that have these ideas, and then get the data, and then build the algorithms, are usually computer scientists, which unfortunately means, often, men, white guys. That's just the truth, unfortunately. Today there's a huge problem of underrepresentation of other groups in computer science, and so we don't have
11:52 am
diversity in these areas. And imagine if, in the rooms where these people think, we had other kinds of people: people from other groups, but also people with different expertise. I want to see ethicists present; I want to see psychologists. Imagine if a psychologist had been in the room when somebody was building that chatbot; one hundred percent, sure, they would have said, are you guys crazy? This is madness. So I think a big problem here is that the AI community is not inviting enough people from different backgrounds, people with different sensibilities, people with different ideas. And I believe that's the key. In order to do that, we need education. I like to say that AI is moving super, super fast; we know that, we're talking about it, it's the reason why we are here today; and education should be deployed at the same speed. We need to educate people from different backgrounds so they can bring
11:53 am
their expertise and bring their ideas to the table, and we can avoid seeing these kinds of quite disappointing applications.

But honestly, it's totally fine if somebody wants to consent to doing this and chat with their boyfriend, no problem. What I'm more concerned about is: how is the chat data going to be used? How is the company making money? If somebody is leaving their innermost personal thoughts somewhere that is connected to, you know, a phone ID, they can be tracked; the innermost thoughts can be tracked and analyzed. I think that's the larger problem that I would be worried about.

Abhishek, you did try to squeeze in here, but first: a viewer watching right now says that you cannot regulate AI; what can be regulated is data collection. And researchers
11:54 am
can always make data, create synthetic data; that is still possible. But regulating the data: is that the way to go to make AI safer and more secure?

That's part of the puzzle, right? Regulation around data collection is, I would say, a part of the puzzle. But, you know, as both Hilke and Gianluca said, it's also about the people who are deploying these systems out into production and into practice, right? And there's this fascinating idea of a social license. I think it's very important that we go and earn a social license to operate a particular piece of technology in society. Part of that, necessarily, is educating people on what this technology is, but also bringing in the right stakeholders, both internal and external, so that you earn that social license, you earn that trust from your stakeholders, before deploying the technology. So, in a nutshell, the answer to your question is that regulation around data collection is just one piece; there are so many other parts of the entire life cycle that also need to be considered for regulation.
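Synthetic data, as the viewer's comment suggests, means standing in for real records with artificial ones that preserve aggregate patterns. A minimal, hypothetical sketch: fit per-column statistics on real (here, invented) records and sample new rows from them. Real tools also have to preserve correlations between columns and guard against leaking rare, identifiable records.

```python
import random
import statistics

# Invented "real" records we don't want to share directly.
real_ages = [34, 29, 41, 38, 55, 47, 31, 44]

def synthesize(values, n, seed=0):
    """Sample n synthetic values from a normal fit of the real column."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    rng = random.Random(seed)
    return [round(rng.gauss(mu, sigma)) for _ in range(n)]

fake_ages = synthesize(real_ages, n=8)
print("real mean:", statistics.mean(real_ages))
print("fake mean:", statistics.mean(fake_ages))  # similar distribution,
# but no synthetic row corresponds to an actual person.
```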
11:55 am
I want to add a big point, sorry: it's also transparency in these algorithms, right? If high-stakes decisions are being made, how are they being made? If you reject somebody for a job, why was this person rejected? Can the AI tool tell us that, can the company tell us that? Those are really, really important things, right? Because I don't want to get rejected because my first name wasn't Thomas, and those are the kinds of things that I have found out: that resume parsers look for first names, or keywords like church, on a resume, and use that to say you are or are not qualified for the job. And if a job applicant takes an employer to court, the employer needs to be able to answer: this is how we made that decision. And that's not always clear. So I think we really need companies to be much more transparent about how these tools make decisions.
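One concrete form of the transparency Hilke is asking for is a counterfactual audit: change only an attribute that should be irrelevant, like the first name, and check whether the screener's score moves. A minimal sketch, with a deliberately biased toy scoring function standing in for a vendor's black box:

```python
def vendor_score(resume_text):
    """Stand-in for an opaque third-party resume screener.
    Deliberately biased, to give the audit something to catch."""
    score = 50
    if "Thomas" in resume_text:
        score += 10          # hidden, irrelevant signal
    if "python" in resume_text.lower():
        score += 20          # plausibly relevant signal
    return score

def counterfactual_audit(template, field, variants):
    """Score the same resume with only one field swapped out."""
    return {v: vendor_score(template.format(**{field: v})) for v in variants}

template = "{name}, 5 years Python development, church volunteer"
results = counterfactual_audit(template, "name", ["Thomas", "Latoya", "Wei"])
print(results)  # {'Thomas': 80, 'Latoya': 70, 'Wei': 70}
# Identical qualifications, different scores: evidence that the name,
# an attribute that should be irrelevant, is driving the decision.
```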
11:56 am
Right. So there's one more person I want to include in our conversation. Let's go via my laptop here. This is news from this week in the world of AI: Google fires Blake Lemoine, the engineer who claimed that an AI chatbot is a person. So the claim is of sentient technology, that the technology is thinking like a real person, and Blake was fired for coming out with that story. There may be more to it, but he also talked about the ethics of AI and who is watching the people who are developing artificial intelligence. Let's have a listen to what he had to say.

The fact is, Google is being dismissive of these concerns the exact same way they have been dismissive of every other ethical concern AI ethicists have raised. I don't think we need to be spending all of our time figuring out whether I'm
11:57 am
right about it being a person. We need to start figuring out why Google doesn't care about AI ethics in any kind of meaningful way, why it keeps firing AI ethicists each time we bring up issues. It's the systemic processes that are protecting business interests over human concerns that create this pervasive environment of irresponsible technology development.

Blake Lemoine there, almost having the last word, but not quite, because we started the show with: should AI be regulated? In a sentence, let's poll our guests. Hilke, should it?

Absolutely. And I think there should be auditing procedures that a third party, maybe, for example, a government agency, has to carry out any time there's a high-stakes decision, at least.

Gianluca, regulation of AI? Yes:
11:58 am
I do believe that AI should be regulated, but I also think this is not enough, because AI is such an extremely powerful tool, and we will want to make sure that we can use it for the things that we need.

Thank you. Abhishek, one sentence; make it a good one.

Ha. Regulations are needed, but they need to be standards-based, in things like the NIST Risk Management Framework, if they are to be meaningful.

What an interesting, fascinating conversation. I wish we could continue longer; I'll get you back. Abhishek, Gianluca, Hilke, and our viewers on YouTube: thank you so much for being part of today's conversation. I will see you next time. Take care, everybody.

Around 10 women are being murdered in Mexico every day, almost always by men.
11:59 am
An epidemic of gender-based violence that threatens to spiral out of control. Now specialist police squads run by women are trying to reverse the trend and bring the perpetrators to justice. But can they overcome years of macho culture and indifference? Behind the scenes with the femicide detectives, on Al Jazeera.

After a lifetime within the walls of a zoo, a Bengal tiger's horizons suddenly widen when she lands an unlikely role in a feature film. But how long can her bittersweet freedom last when crisis strikes the zoo? Witness: Maya, a Tiger's Tale, on Al Jazeera.

Each and every one of us has got a responsibility to change our personal space for the better.
12:00 pm
We could do this experiment, and if biodiversity could increase just a little bit, wouldn't that be worth doing? Nobody had any idea that it would become a magnet for incredibly rare species. For women to get 50 percent representation in the constituent assembly here, we had to collect the signatures. They're saying this is an extremely important service that they provide to the city. We need to try to bring people together and try to deal with people who have been left behind.


