TV: Shift, Deutsche Welle, June 11, 2022, 11:15pm-11:31pm CEST
11:15 pm
join the bloc by the end of next week. Brussels boss Ursula von der Leyen has made a surprise visit to Kyiv for talks with President Volodymyr Zelenskyy. That's all for now. Coming up next, our technology show Shift looks at why artificial intelligence may not be neutral. Don't forget, you can get all the latest news and information around the clock on our website, dw.com. Thanks for watching. What people have to say matters to us.
11:16 pm
That's why we listen to their stories. Reporter, every weekend on DW. 175 years ago, the young start-up entrepreneur Carl Zeiss had a specific goal: to build the best optical instruments in the world. Indeed, his wish has become a reality. 175 Years of Zeiss starts June 19th on DW. Artificial intelligence has a massive impact on our lives. For instance, AI decides what we get shown when we're online. But AI also reproduces racist and sexist stereotypes, and it often fails to detect hate speech. Prejudiced AI, and what we can do about it: that's our topic on Shift today.
11:17 pm
AI is great at sorting through vast amounts of data and can do so much faster and more accurately than humans. It works using algorithms. But contrary to popular belief, these aren't neutral. In the past, scientists have found mainstream facial recognition systems to be better at recognizing white faces than the faces of people of color. Similarly, digital voice assistants like Siri or Alexa are best at understanding English, and best of all at understanding the way white American computer programmers talk. AI biases show up even in medicine. In the US, hospitals widely use algorithms to allocate health care to patients. These are supposed to be bias-free, but researchers have found that decision-making algorithms often reproduce racial or gender stereotypes. One study investigated an algorithm used in the US
11:18 pm
which assigned risk scores to patients on the basis of total health care costs accrued annually. It found that Black patients were generally assigned lower risk scores than equally sick white patients, and thus were less likely to be referred to programs that provide more personalized care. In other words, people's ethnicity was impacting the care they received. The fact that AI used in medicine is leading to Black people receiving less medical care than white people in the US is truly shocking. It shows how heavily we already rely on AI, and that we aren't careful enough about the data we use to train these machine learning algorithms. But who decides what data gets used? And how does the AI learn? Clearly, there's still room for improvement.
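To make the mechanism concrete, here is a minimal sketch in Python, with invented numbers rather than the actual algorithm from the study: when past spending is used as a stand-in for medical need, a patient who historically received less care looks "lower risk" than an equally sick patient who received more.

```python
# Hypothetical illustration only: a toy "risk" score derived from past
# spending, the proxy the study found to be problematic. Thresholds and
# figures are invented for this sketch.

def risk_score_from_cost(annual_cost_usd: float) -> float:
    """Toy proxy model: the score simply rises with past spending."""
    return min(annual_cost_usd / 20_000, 1.0)

REFERRAL_THRESHOLD = 0.6  # cutoff for referral to a high-touch care program

patients = [
    # (name, true illness burden 0..1, health care costs billed last year)
    ("patient_a", 0.8, 18_000),  # same illness, good access to care
    ("patient_b", 0.8, 7_000),   # same illness, historically less care
]

for name, burden, cost in patients:
    score = risk_score_from_cost(cost)
    referred = score >= REFERRAL_THRESHOLD
    print(f"{name}: illness burden {burden}, cost-based score {score:.2f}, "
          f"referred: {referred}")
```

Both toy patients are equally sick, but only the one with the higher past costs crosses the referral threshold; unequal access to care in the past is recycled into unequal care in the future.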
11:19 pm
AI systems are trained using datasets. The algorithms learn to solve problems independently, through repetition. In deep learning with neural networks, AI can also learn to solve new, more complex tasks. What an AI learns depends on the dataset, and this is often linked to who the programmers are. Take Google's image recognition software. In 2015, it made headlines when it labeled photos of Black people as gorillas. The shocking mistake was due to the fact that, for the category "human", the software had been trained using millions of photos of white people but hardly any of people of color. So the algorithm had only learned to recognize white people as humans. Datasets are usually assembled by scientists, and in the case of Google's image recognition software, these were predominantly white men. They probably didn't intentionally leave out photos of people of color. They simply didn't consider the impact the lack of diversity in their dataset would have. The AI thus reproduced an old form of discrimination in a new guise.
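A rough sketch of that mechanism, using synthetic points instead of real photos and a deliberately simple classifier rather than Google's actual system: when one group barely appears in the training data for the category "human", the model's notion of "human" is dragged towards the over-represented group, and it fails on the other.

```python
# Hypothetical illustration with synthetic data: the category "human" is
# trained on 990 examples from group A and only 10 from group B.
import numpy as np

rng = np.random.default_rng(0)

def cluster(n, centre, scale=0.5):
    """Toy stand-in for image features: points scattered around a centre."""
    return rng.normal(loc=centre, scale=scale, size=(n, 2))

humans = np.vstack([cluster(990, [0.0, 0.0]),   # group A, well represented
                    cluster(10, [4.0, 4.0])])   # group B, barely represented
non_humans = cluster(1000, [6.0, 6.0])          # everything else

# A deliberately simple "model": nearest class centroid.
centroid_human = humans.mean(axis=0)   # pulled almost entirely towards group A
centroid_other = non_humans.mean(axis=0)

def recognised_as_human(x) -> bool:
    return np.linalg.norm(x - centroid_human) < np.linalg.norm(x - centroid_other)

for group, centre in [("group A", [0.0, 0.0]), ("group B", [4.0, 4.0])]:
    test = cluster(200, centre)
    rate = np.mean([recognised_as_human(x) for x in test])
    print(f"{group}: recognised as human in {rate:.0%} of test images")
```

In this toy setup, group A is recognised almost every time while group B is almost never recognised, even though both belong in the "human" category; retraining with a balanced dataset removes the blind spot.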
11:20 pm
This shows how important diversity is. If it is lacking in a dataset, an AI trained using that data will develop blind spots. When it comes to data and AI, the saying goes: garbage in, garbage out. Another complicating factor is that algorithms are mainly developed by tech companies and classified as trade secrets. Many companies have said that they themselves don't understand exactly how their algorithms work, including Meta, Facebook's parent company. From an economic standpoint, the secrecy is understandable, says German tech writer Thomas Hunger. So is the fact that the most data-rich companies are also the best at using AI and are the most valuable companies in the world. It's all closely connected, because currently data is the most important resource for creating value.
11:21 pm
Right now, tech companies are actually profiting from biased AI. But much is being done to change this, and researchers are trying to figure out how to program fairer algorithms, including a team at the IT University of Copenhagen in Denmark. Teaching an algorithm to spot discrimination is difficult, because we often do not agree about what is discriminatory. To create a more nuanced understanding of misogyny, for instance, the IT University of Copenhagen assembled a diverse team to identify sexism in online content. The eight team members were men and women of different ages, nationalities, and professions. They looked at thousands of Facebook, Reddit, and Twitter posts from 2020. The different team members had diverging world views. This helped them label the data used to train the AI more accurately, because they were able to spot more nuances of misogyny.
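As a hypothetical sketch of why the make-up of such a panel matters, consider how eight annotators' votes might be combined. A plain majority vote erases the one annotator who recognises a slur the others miss; keeping the split of the votes preserves that signal in the training data. The posts, votes and aggregation rule below are invented for illustration.

```python
# Hypothetical aggregation of labels from a panel of eight annotators.
# 1 = "sexist / abusive", 0 = "not"; the votes below are invented.
from collections import Counter

annotations = {
    "post_1": [1, 1, 1, 1, 1, 0, 1, 1],  # broad agreement
    "post_2": [0, 0, 0, 0, 0, 0, 0, 1],  # one annotator recognises a slur
}

for post_id, votes in annotations.items():
    counts = Counter(votes)
    majority_label = counts.most_common(1)[0][0]
    soft_label = sum(votes) / len(votes)          # share voting "abusive"
    disagreement = 1 - counts.most_common(1)[0][1] / len(votes)
    print(f"{post_id}: majority={majority_label}, soft label={soft_label:.2f}, "
          f"disagreement={disagreement:.2f}")
```

For post_2 the majority label says "not abusive", so training only on majority votes would teach the model to ignore exactly the kind of slur that only one annotator could recognise; the soft label and the disagreement score keep that minority knowledge visible.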
11:22 pm
But making datasets more diverse wouldn't solve all of these problems. Another issue is that internationally active companies often do not have international teams. Staff do not speak the languages of the countries their products are used in, let alone understand the culture. Examples like Myanmar, where Facebook was weaponized in the mass attacks on the Rohingya, show how dangerous this can be. What we see on social media is largely determined by algorithms. They decide which posts appear where in our feed and which ads we are shown. Algorithms also decide what we aren't shown, and they are supposed to filter out violent content or hate speech. But such filters largely only work for English content. One of the people who has spoken out about this is Frances Haugen, a former Facebook employee turned whistleblower. "When I joined Facebook, I became aware of how severe the problem was in places outside the United States, how Facebook's under-investment in solutions that can work in a
11:23 pm
linguistically diverse way, in a world where not everyone speaks English. Because they don't invest in those things, they were really setting the stage for what I saw was going to be a very, very scary next decade or two decades." One recent example is Ethiopia, where civil war has raged since 2020 and where content posted on Facebook has incited further acts of violence. Why wasn't this content flagged? One reason is that very few Facebook content moderators speak the local languages. "I knew that Myanmar had happened, that there had been a mass killing event as a result of hate that was fanned by social media. But I had no idea that we were about to see chapter after chapter unfold." Facebook is used worldwide, but the platform's safeguards mainly protect those in English-speaking countries. Often the most vulnerable are the least protected. Personally, I would much prefer it if Mark Zuckerberg invested millions in addressing these
11:24 pm
problems rather than in his new metaverse project. Ayo Tometi, formerly known as Opal Tometi, is a co-founder of the Black Lives Matter movement. She says Facebook has social obligations that it's failing to meet: the platform needs to lift up marginalized communities, and women as well, instead of relying on algorithms that sometimes diminish people or push them to the side. This is precisely what the research team at the IT University of Copenhagen wants to address. Their goal is to teach AI to recognize hate speech and sexism in different cultural contexts. When it comes to creating the datasets used to train
11:25 pm
AI, diversity matters. As the research group at the IT University of Copenhagen found, missing viewpoints lead to mistakes. "With hate speech annotation, one of the really tricky things is that sometimes just one minority, one group of people, one small subset of people know the answer. With our group, often we find that everyone would be in agreement about something, but one person would disagree. There was at least one person on the team who was Muslim, and she recognized a couple of words that are used to refer to Muslims in a very derogatory way, which the rest of the group didn't know." Groups being underrepresented in data leads to biases in AI. What's more, artificial intelligence is constantly learning. If there isn't much content expressing a given viewpoint online, AI can falsely assume that viewpoint is a minority opinion. "If we, for example, imagine there's a lot of misogyny online, then women maybe don't want to interact in those spaces. Everyone has
11:26 pm
better things to do. And so then the AI will start thinking that views calling for fair treatment of the sexes are actually extremist views, because we don't see that view expressed." The IT University research group is developing AI that can spot discriminatory content and hate speech online and that doesn't consider only one world view. The challenge is that online content is often ambiguous. As a result, creating accurate tags for training the AI is difficult. "One example here could be 'I hate all Danish people.' We have a list of all identities, so we have Danish people and English people and whatever, and then we have to label it: is this hateful or not towards an identity? What direction is it coming from? Is it something more general, 'all those people are doing things to people like me'?"
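What such a tag might look like in practice can be sketched as a small data record. The schema below is invented for illustration and is not the research group's actual annotation format, but it captures the fields they describe: whether a post is hateful, which identity group it targets, and whether it is aimed at a specific person or stated in general.

```python
# Hypothetical annotation record of the kind described above; the field
# names and the identity list are invented for illustration.
from dataclasses import dataclass
from typing import Optional

IDENTITY_GROUPS = ["danish people", "english people", "muslims", "women"]

@dataclass
class HateSpeechLabel:
    text: str
    hateful: bool                 # is the post hateful at all?
    target_group: Optional[str]   # which identity, if any, is targeted
    directed: bool                # aimed at an individual vs. a general statement

# The ambiguous example from the programme: annotators may well disagree
# on every one of these fields, which is what makes the labels noisy.
example = HateSpeechLabel(
    text="i hate all danish people",
    hateful=True,
    target_group="danish people",
    directed=False,
)
print(example)
```

Every point of disagreement over a field like "directed" ends up either as noise or as a judgment call in the training data.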
11:27 pm
The algorithm will be complete in a few months, and it may prove useful for tech companies looking to improve their content moderation. We urgently need solutions for dealing with online hate speech. At the moment, we're still relying heavily on human moderators, which leads to problems, say the researchers in Denmark. "Moderation is tough because there's a lot of abusive, violent, unpleasant content online, and it can come out at any time of day. And the impact is when people see it, right? It's not when they interact with it. If people have seen it and then reported it, it's maybe already a little bit too late." A number of content moderators have suffered from post-traumatic stress disorder, which goes to show just how tough the job is.
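One way an algorithm like this could take pressure off human moderators is to screen posts before they are published, rather than waiting for user reports. The sketch below is a hypothetical pipeline, with a toy keyword check standing in for a trained classifier; it is not how any particular platform actually works.

```python
# Hypothetical pre-publication triage: score each post, publish the
# harmless ones, and queue the risky ones for a human moderator.
from queue import PriorityQueue

def toxicity_score(text: str) -> float:
    """Placeholder for a trained hate-speech model; here a toy keyword check."""
    flagged_words = {"hate", "kill"}
    hits = sum(word in text.lower() for word in flagged_words)
    return min(hits / 2, 1.0)

REVIEW_THRESHOLD = 0.5
review_queue = PriorityQueue()  # riskiest posts get reviewed first

posts = [
    "lovely weather in copenhagen today",
    "i hate all danish people",
]

for post in posts:
    score = toxicity_score(post)
    if score >= REVIEW_THRESHOLD:
        review_queue.put((-score, post))  # negative score = higher priority
        print(f"held for human review (score {score:.1f}): {post!r}")
    else:
        print(f"published (score {score:.1f}): {post!r}")
```

Because the check happens before anyone sees the post, the harm the researchers describe, damage done the moment content is viewed, is at least partly avoided, and human moderators only see the pre-filtered queue.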
11:28 pm
But as we've seen, we can't leave this task to AI alone, especially when it comes to complex topics. Take, for instance, the Delphi project by the US research institute AI2. Researchers had no problem getting clear answers from the chatbot to questions like whether the Earth is flat: we know it's round, and so does the chatbot. But the bot initially described abortion as murder; the researchers then tweaked it so that it said abortion is acceptable. This goes to show that AI cannot answer questions regarding moral issues that we as a society have different views on. Or, to put it another way: how can we expect AI to know things we don't know? And should we even be turning to AI for answers? What's your take on AI? Do we rely on it too much, or are issues like AI bias just minor problems that need fixing? Let us know in the comments on YouTube and at dw.com. That's all from me for today. Goodbye.
11:29 pm
Enter the Conflict Zone with Tim Sebastian. More than 100 days of war in Ukraine, and the battles are intensifying. My guest this week, from Brussels, is a top advisor to Ukraine's armed forces. How long can Kyiv rely on the Western nations? Who can it really trust? Conflict Zone, on DW. A city struck down by war. A French human rights lawyer wants to prosecute war crimes at the epicenter of the worst
11:30 pm
humanitarian crisis in the world. He gathers evidence against them. In 30 minutes on DW. More than 100 days of war in Ukraine, and the battles are intensifying. Ukraine's losses have been severe. Since invading, Russia has occupied a fifth of the country and is pounding the eastern Donbas region, which it seems determined to occupy. How long can Kyiv hold out?