tv Shift Deutsche Welle March 28, 2021 11:15am-11:31am CEST
11:15 am
And I'll be back with more news headlines at the top of the hour. Until then, there's always our website, dw.com, and you can follow us on social media as well; we're at @dwnews. I'm Rebecca Ritters, thanks for watching. (unintelligible) Where I come from, women are bound by this notion that something as simple as learning how to ride a bicycle isn't for them. Since I was a little girl I wanted to have a bicycle of my own, and it took me years. Finally they gave up and bought me a sewing machine instead, because sewing, I suppose, was more appropriate for girls than riding a bicycle. Now I want to meet those women back home who are bound by tradition and social rules and not informed of their basic rights.
11:16 am
My name is (unintelligible), and I work at (unintelligible). A black and a white hand, each holding the same device. For facial and image recognition software, these photos may be interpreted quite differently. Just how racist is AI? That's our topic today on Shift. One of the key elements of artificial intelligence is that it teaches itself. But to learn, AI needs to be fed with massive amounts of data, and that data is selected by humans. That shapes the way AI operates and the conclusions it draws.
11:17 am
Those conclusions depend on the data sets it has to work with. Here's an example of how things can go wrong. These two nearly identical images were presented to Google's Cloud Vision image recognition software. The shocking result: the temperature gun in the white hand was identified as a "monocular", a bit strange perhaps, but an understandable mistake, you might say. The same object held in the dark-skinned hand, however, was identified as a "handgun". Google has since apologized and removed the label "gun" from its system. But is that really enough to counter the racist assumptions made by its algorithm?
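A check like the one described above can be reproduced with a few lines of code. Below is a minimal sketch, assuming the google-cloud-vision Python package (2.x) and a Google Cloud project with the Vision API and credentials already set up; the file names are placeholders for illustration, not DW's actual test images.

```python
# Minimal sketch: request labels for two nearly identical photos (the same
# hand-held device photographed in a light-skinned and a dark-skinned hand)
# and compare what the model calls the object. Assumes google-cloud-vision 2.x
# and that GOOGLE_APPLICATION_CREDENTIALS points to valid credentials.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def top_labels(path, n=5):
    """Return the top n label descriptions and confidence scores for an image."""
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [(label.description, round(label.score, 2))
            for label in list(response.label_annotations)[:n]]

# Placeholder file names for the two test photos.
for path in ["device_light_skin.jpg", "device_dark_skin.jpg"]:
    print(path, top_labels(path))
```

Comparing the returned labels for the two files is essentially the experiment described in the report, just run by hand.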
11:18 am
Black people, and people of color in particular, often suffer from discrimination by AI systems, as the new American documentary Coded Bias reveals. "During my first semester at MIT, I got computer vision software that was supposed to track my face. It didn't work until I put on this white mask. I'm thinking: all right, what's going on here? Is it the lighting conditions? Is it the angle at which I'm looking at the camera? Or is there something more going on?" Joy Buolamwini was one of the first computer scientists to reveal that AI systems are racist; the documentary tells her story. The film explains why these systems are racist: many of the algorithms used in AI software are developed and tested by teams that consist mainly of white men. Their view of the world is reproduced in the software, and they're less likely to notice what is discriminatory and biased. The data the systems are fed is made up mostly of white people; as a result, the technology ends up less accurate when it comes to people of color. That means the more diverse the data fed into these AI systems, the better they can identify a variety of people.
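The point about unbalanced training data can be made concrete with a toy experiment: train a simple classifier on data in which one group is heavily under-represented, then compare its accuracy on each group separately. The sketch below uses synthetic data and scikit-learn; it is only an illustration of the mechanism, not the methodology used by the researchers in the film.

```python
# Toy illustration of the "skewed training data" problem: a classifier trained
# on data where group B is under-represented tends to be less accurate for
# group B. All data here is synthetic and invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift, label_noise=0.1):
    """Two-feature samples for one group; the groups differ slightly."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + shift > 2 * shift).astype(int)
    flip = rng.random(n) < label_noise          # a little label noise
    y[flip] = 1 - y[flip]
    return X, y

# Training set: 95 percent group A, 5 percent group B (the imbalance is the point).
Xa, ya = make_group(1900, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced test sets reveal the accuracy gap between the two groups.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    Xt, yt = make_group(1000, shift)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))
```

The model does fine on the well-represented group and noticeably worse on the other, which is the pattern the film describes for commercial face analysis systems.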
11:19 am
But the bias goes even deeper; it is there even before the data sets are created. It starts with the hardware, which was originally designed for use in cameras and movies, and so it was designed with white people in mind. The light sensors, the way the devices are calibrated, the infrared technology developed in recent decades: they were all designed with white skin in mind, and with how light reflects off white skin. They ignored the fact that people have many different skin colors, which also reflect light differently. And that's why tweaking the algorithms isn't enough; a more fundamental change is needed. The people using these technologies are key here, in other words, all of us. At the end of the day, what's being programmed is not the only factor. What also matters is how we interact with our technology and what data we feed into it.
11:20 am
That's what we're missing: a fundamental debate about social inclusion and exclusion, about discrimination, and about how bias permeates our society. Joy Buolamwini is continuing her fight, making her findings public and even appearing before the US Congress. She's also the founder of the Algorithmic Justice League, which is working to create less biased, more equitable artificial intelligence. When it comes to equality, we still have a long way to go, especially when algorithms are involved. Algorithms are also used in online search engines; the most obvious example is Google. Its image search helps us navigate the world: if you're looking for information on something, or want to see what someone looks like, you'll probably start by googling them.
11:21 am
But the responses can be skewed. An experiment conducted by DW shows that Google's algorithms often lead to distorted results. Searching online for women of specific nationalities is a harsh lesson in stereotypes: searching for Brazilian women yields young women in suggestive poses, while searching for German women brings up mainly politicians and athletes. DW's analysis team used Google's very own Cloud Vision SafeSearch to analyze thousands of images. This image-processing software was designed to spot sexualized content; Google uses it to flag suggestive images. For women from the Dominican Republic and Brazil, up to 40 percent of the images were flagged as sexually suggestive. For American women it's just 5 percent, and for German women 4 percent. This has a major impact on how women are seen and treated, and women from certain countries are especially vulnerable. Search for Ukrainian women, for example, and 61 of the top 100 results are links to dating sites or websites with sexualized content.
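An analysis along those lines can be sketched with the same Cloud Vision API, this time using its SafeSearch annotations. The snippet below is a rough, hedged sketch: the folder layout and the "LIKELY or worse" threshold are assumptions made for illustration, not DW's actual pipeline, and it again presumes google-cloud-vision 2.x with credentials configured.

```python
# Rough sketch: run a folder of downloaded search-result images through Cloud
# Vision SafeSearch and count the share rated LIKELY or VERY_LIKELY to be
# "racy". Folder names and the threshold are illustrative assumptions only.
from pathlib import Path
from google.cloud import vision

client = vision.ImageAnnotatorClient()
FLAGGED = {vision.Likelihood.LIKELY, vision.Likelihood.VERY_LIKELY}

def racy_share(image_dir):
    """Fraction of .jpg files in image_dir that SafeSearch flags as racy."""
    paths = list(Path(image_dir).glob("*.jpg"))
    if not paths:
        return 0.0
    flagged = 0
    for path in paths:
        image = vision.Image(content=path.read_bytes())
        annotation = client.safe_search_detection(image=image).safe_search_annotation
        if annotation.racy in FLAGGED:
            flagged += 1
    return flagged / len(paths)

# Hypothetical folders holding the top image results for each search query.
for query in ["brazilian women", "german women"]:
    folder = "results/" + query.replace(" ", "_")
    print(query, round(100 * racy_share(folder), 1), "percent flagged")
```

Comparing the flagged share across queries is the kind of aggregate figure the DW team reported.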
11:22 am
100 results are links to dating sites websites with sexualized content. algorithms are key to our data driven and automated world the images they deliver can be shockingly biased racist and sexist but for data scientist catherine it's not really the algorithms that are to blame algorithm learns because we train it using machine learning so either whether it's deep learning with a neural network so to speak or it's shallow machine learning where feeding it data and were asking it to form an opinion and of course when that opinion is based on data that has unfair treatment of groups then the algorithm or very much learned this in china is being used to identify specific ethnic groups take the telecom giant while away which took part in testing and we go. software that could identify
11:23 am
That's according to IPVM, a US-based company specializing in video surveillance. The software's accuracy rate is estimated at about 70 percent, and it is reportedly being used across China. So is ethnic discrimination being automated? This camera system tested by Huawei is designed to use AI to identify people of specific ethnicities. It was trained on data sets that teach it to identify age, gender and ethnicity, and that contained people from China's Uyghur minority as well as non-Uyghurs, so that it learns to use facial features to determine ethnicity. What's even more disturbing is how China plans to use it: when the AI system flags someone as a member of the Uyghur minority, it can trigger a "Uyghur alarm" and notify the police.
11:24 am
In northwestern China, security forces have implemented a massive surveillance network designed to clamp down on the Uyghur minority; hundreds of thousands of Uyghurs have been detained in so-called re-education camps. Officially, Chinese companies have denied that the software they are testing is being used for commercial purposes. But the internet giant Alibaba has already admitted that its cloud division has developed discriminatory AI software. Of even greater concern is the fact that it gave customers step-by-step instructions for how to use the software to target Uyghur Muslims. Paired with mobile phones and surveillance cameras, the software enables users in China to send images of people to its cloud service, where they can be flagged as "suspicious" Uyghurs. In other words, users are supposed to keep feeding the facial recognition system with data that enhances racial targeting. That's truly scary. Tools like that show that AI can be used to target ethnic groups, says Karen Yeung, an expert in artificial intelligence and human rights.
11:25 am
"Very powerful systems are increasingly being used to sort, score and organize people at the population-wide level. And even though we don't see that kind of drastic action being taken in Western industrialized societies, there are an increasing number of social systems scoring people, particularly in the criminal justice system, to make predictions based on algorithmic assessments of the kinds of risks they pose to public safety." In the US, African Americans and Asian Americans were up to 100 times more likely to be misidentified by facial recognition systems than white men. That's according to a study commissioned by the US government. False matches can lead to all sorts of problems, like a mistaken arrest. But these systems are growing increasingly popular, and not just in the US.
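The disparity that study describes is, at its core, a gap in error rates between demographic groups. As a rough sketch of how such a gap is measured, the snippet below computes a false match rate per group from a list of face-verification outcomes; the records and group names are invented placeholders, not data from the study.

```python
# Sketch: false match rate (FMR) per demographic group. A "false match" is a
# pair of images of two *different* people that the system declares a match.
# The records below are invented placeholders for illustration only.
from collections import defaultdict

# Each record: (group, pair_is_same_person, system_declared_match)
records = [
    ("group_a", False, False), ("group_a", False, True), ("group_a", True, True),
    ("group_b", False, False), ("group_b", False, False), ("group_b", True, True),
    # ... a real evaluation uses thousands of labelled comparison pairs per group
]

def false_match_rates(rows):
    """FMR per group: share of different-person pairs wrongly declared a match."""
    trials = defaultdict(int)
    errors = defaultdict(int)
    for group, same_person, said_match in rows:
        if not same_person:            # only different-person pairs can false-match
            trials[group] += 1
            if said_match:
                errors[group] += 1
    return {group: errors[group] / trials[group] for group in trials}

print(false_match_rates(records))
```

Comparing these per-group rates, rather than a single overall accuracy number, is what reveals the kind of disparity the study reported.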
11:26 am
That could have a serious impact on society, says Carissa Véliz from the Institute for Ethics in AI at Oxford University: "AI can be extremely disruptive, in a negative way, to society. It can be racist; it can be extremely unfair." To take one example: in 2016, Microsoft launched its chatbot Tay, designed to reply to Twitter users. After less than a single day of exposure, it had to be taken offline; Twitter users had taught it to parrot sexist, racist, antisemitic and offensive comments. Why are such biased and poorly designed AI systems being unleashed on society? Two Berlin-based artist-researchers are exploring that question. The data sets being fed into these AI systems are always old, so the systems end up conservative and outdated, especially when they're used to predict and to recommend.
11:27 am
They can work against anyone they deem to be different, whether by gender, friends or income, and you may never even realize it. It's truly a massive problem. An easy way to visualize how it works is to imagine you're baking a cake: if you use the wrong ingredients, the cake just won't taste good. Similarly, if the data used to train an AI is not representative of the diversity found in society, then it won't work properly.
11:28 am
Except that here we're not talking about a cake, but about far more dramatic consequences. And right now, this poses a greater threat to certain people, above all people of color. It's not the technology itself that is to blame, and I don't think we should treat it that way. In my opinion, we need to raise awareness of how AI systems can have a racist and discriminatory impact. So before we adopt them more widely, they need to be rigorously tested according to ethical criteria. What about you? Have you ever been negatively affected by AI systems, or do you believe that you're not really affected by them? Let us know on YouTube, Facebook or at dw.com. Take care, and see you next time!
11:29 am
They come from Nigeria to save the world: the superheroes of Comic Republic. (unintelligible) Super villains everywhere had better watch out, on The 77 Percent. A muscle car, but with an engine instead of (unintelligible), on DW. I'm David, and this is
11:30 am
climate change, happiness (unintelligible), books for you, to get smarter for free. So if I were a superhero, this is what I would look like, and my name would be... We clearly have something special for you on the show, but I'm not going to give away too much yet. I'm (unintelligible), and you're welcome to The 77 Percent. As usual, the show is packed with amazing stories from all over the African continent, and we have a special guest joining us.