
Shift | Deutsche Welle | June 12, 2022, 11:15am-11:31am CEST

11:15 am
Biden announced that Washington would defend Taiwan should China attack, provoking an angry rebuke from Beijing. The White House has since walked back those statements, maintaining its deliberate policy of ambiguity. But for the small island living in the shadow of its increasingly assertive neighbor, that ambiguity provides little comfort. This is DW News, live from Berlin. Coming up next, our technology show Shift looks at why artificial intelligence may not be neutral. More news in 45 minutes. DW's TV highlights, delivered to your inbox every week. Sign up now. It is a secret war, an unseen and seemingly endless one: the conflict between Iran on the one hand and Israel and the United
11:16 am
States on the other. For more than 40 years, the adversaries have been irreconcilable; there has never been any real dialogue. How did this confrontation begin? How great is the danger that it will spread? The Long War: Israel, Iran, USA starts June 15th on DW. Artificial intelligence influences massive parts of our lives. For instance, AI decides what we get shown when we're online. But AI also reproduces racist and sexist stereotypes and often fails to detect hate speech. Prejudiced AI, and what we can do about it: that's our topic on Shift today.
11:17 am
AI is great at sorting through vast amounts of data and can do so much faster and more accurately than humans. It works using algorithms, but contrary to popular belief, these aren't neutral. In the past, scientists have found mainstream facial recognition systems to be better at recognizing white faces than the faces of people of color. Similarly, digital voice assistants like Siri or Alexa are best at understanding English, and best of all at understanding white American speakers. Computer programmers talk of AI bias. It shows up even in medicine: in the US, hospitals widely use algorithms to allocate health care to patients. These are supposed to be bias-free, but researchers have found that decision-making algorithms often reproduce racial or gender stereotypes. One study investigated an algorithm used in the US,
11:18 am
which assigned risk scores to patients on the basis of the total health care costs they accrued annually. It found that Black patients were generally assigned lower risk scores than equally sick white patients, and thus were less likely to be referred to programs that provide more personalized care. In other words, people's ethnicity was impacting the care they received. The fact that AI used in medicine is leading to Black people receiving less medical care than white people in the US is truly shocking. It shows that we already rely on AI, and that we aren't careful enough about the data we use to train these machine learning algorithms. But who decides what data gets used, and how does the AI learn? Clearly, there's still room for improvement. AI systems are trained using datasets. The algorithms learn to solve problems independently, through repetition.
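The mechanism behind that healthcare study can be sketched in a few lines of code. What follows is a toy illustration with invented numbers and variable names, not the actual hospital model: if the label a "risk" model is trained on is historical spending rather than illness itself, any group that has historically had less access to care, and therefore lower costs at the same level of sickness, comes out with lower risk scores.

# Illustrative sketch of proxy-label bias: a "risk" model trained on historical
# cost instead of actual illness. All numbers here are invented for the example.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000

illness = rng.normal(size=n)                 # true health need (what we actually care about)
group = rng.integers(0, 2, size=n)           # 1 = group with historically less access to care
chronic_conditions = illness + rng.normal(scale=0.5, size=n)  # observable clinical feature
access_proxy = 1 - group + rng.normal(scale=0.1, size=n)      # feature correlated with group, e.g. insurance type

# The training label is money actually spent, which is lower for group 1
# at the same level of illness.
cost = 1000 + 500 * illness - 300 * group + rng.normal(scale=50, size=n)

X = np.column_stack([chronic_conditions, access_proxy])
risk_model = LinearRegression().fit(X, cost)
risk_score = risk_model.predict(X)

# Among patients who are equally sick, group 1 receives lower "risk" scores,
# so fewer of them would be referred to extra care programs.
equally_sick = np.abs(illness) < 0.1
for g in (0, 1):
    mask = equally_sick & (group == g)
    print(f"group {g}: mean predicted risk score = {risk_score[mask].mean():.0f}")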
11:19 am
In deep learning with neural networks, AI can also learn to solve new, more complex tasks. What an AI learns depends on the dataset, and this is often linked to the programmers. Take Google's image recognition software. In 2015, it made headlines when it labeled photos of Black people as gorillas. The shocking mistake was due to the fact that, for the category "human", the software had been trained using millions of photos of white people but hardly any of people of color. So the algorithm had only learned to recognize white people as humans. Datasets are usually assembled by scientists; in the case of Google's image recognition software, these were predominantly white men. They probably didn't intentionally leave out photos of people of color. They simply didn't consider the impact the lack of diversity in their dataset would have.
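To make the blind-spot problem concrete, here is a toy sketch with made-up data, nothing to do with Google's real system: a classifier trained on a dataset in which one group is barely represented looks accurate overall, but drops to near chance on the group it hardly ever saw.

# Toy illustration of under-representation: accuracy is measured per group,
# not just overall. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, shift):
    # Two classes whose single feature sits in a group-specific value range.
    y = rng.integers(0, 2, size=n)
    X = rng.normal(loc=shift + 2 * y, scale=1.0).reshape(-1, 1)
    return X, y

# Training set: 9,500 examples from the majority group, only 500 from the other.
X_maj, y_maj = make_group(9_500, shift=0.0)
X_min, y_min = make_group(500, shift=5.0)
clf = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                               np.concatenate([y_maj, y_min]))

# Balanced test sets reveal the blind spot.
for name, shift in [("majority group", 0.0), ("under-represented group", 5.0)]:
    X_test, y_test = make_group(2_000, shift)
    print(name, "accuracy:", round(clf.score(X_test, y_test), 2))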
11:20 am
The AI thus reproduced a new form of old discrimination. This shows how important diversity is: if it is lacking in a dataset, an AI trained using that data will develop blind spots. When it comes to data and AI, the saying goes: garbage in, garbage out. Another complicating factor is that algorithms are mainly developed by tech companies and classified as trade secrets. Many companies have said that they themselves don't understand exactly how their algorithms work, including Meta, Facebook's parent company. From an economic standpoint, the secrecy is understandable, says German tech writer Thomas Lamba: the fact that the most data-rich companies are also the best at using AI and are the most valuable companies in the world is closely connected. Currently, data is the most important resource for creating value.
11:21 am
So right now, tech companies are actually profiting from biased AI. But much is being done to change this, and researchers are trying to figure out how to program fairer algorithms, including a team at the IT University of Copenhagen in Denmark. Teaching an algorithm to spot discrimination is difficult, because we often do not agree about what is discriminatory. To create a more nuanced understanding of misogyny, for instance, the IT University of Copenhagen assembled a diverse team to identify sexism in online content. The eight team members were men and women of different ages, nationalities and professions. They looked at thousands of Facebook, Reddit and Twitter posts from 2020. The different team members had diverging worldviews. This helped them label the data used to train AI more accurately, because they were able to spot more nuances of misogyny. But making datasets more diverse wouldn't solve all of these problems. Another issue is that internationally active companies often do
11:22 am
not have international teams. Staff do not speak the languages of the countries their products are used in, let alone understand the culture. Examples like Myanmar, where Facebook was weaponized in the mass attacks on the Rohingya, show how dangerous this can be. What we see on social media is largely determined by algorithms. They decide which posts appear where on our feed and which ads we are shown. Algorithms also decide what we aren't shown, and they are supposed to filter out violent content or hate speech. But such filters largely only work for English content. One of the people who has spoken out about this is Frances Haugen, a former Facebook employee turned whistleblower: "When I joined Facebook, I became aware of how severe the problem was in places outside the United States, of Facebook's underinvestment in solutions that can work in a
11:23 am
linguistically diverse way, in a world where not everyone speaks English. Because they don't invest in those things, they were really setting the stage for what I saw was going to be a very, very scary next decade or two decades." One recent example is Ethiopia, where a civil war has raged since 2020 and where content posted on Facebook has incited further acts of violence. Why wasn't this content flagged? One reason is that very few Facebook content moderators speak the local languages. "I knew that Myanmar had happened, that there had been a mass killing event as a result of what was fanned by social media. But I had no idea that we were about to see chapter after chapter unfold." Facebook is used worldwide, but the platform's safeguards mainly protect those in English-speaking countries. Often the most vulnerable are the least protected. Personally, I would much prefer it if Mark Zuckerberg invested millions in addressing these problems rather than in his new meta-
11:24 am
verse project. Ayo Tometi, formerly known as Opal Tometi, is a co-founder of the Black Lives Matter movement. She says Facebook has social obligations it's failing to meet: its algorithms sometimes diminish people and push them to the side. This is precisely what the research team at the IT University of Copenhagen is trying to address. Their goal is to teach AI to recognize hate speech and sexism in different cultural contexts. When it comes to creating the datasets used to train AI, diversity matters. As the research group at the
11:25 am
IT University of Copenhagen found, missing viewpoints lead to mistakes, for example with hate speech annotation. "One of the really tricky things is that sometimes just one minority, one group of people, one small subset of people, knows the answer. In our group, we often find that everyone is in agreement about something but that one person disagrees. There was at least one person on the team who was Muslim, and she recognized a couple of words that are used to refer to Muslims in a very derogatory way that the rest of the group didn't."
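The researcher's point that sometimes only one annotator knows the answer has a direct consequence for how labels are combined. The sketch below uses hypothetical labels, not the Copenhagen team's data: a plain majority vote erases the single flag from the annotator who recognizes a derogatory term, while keeping the per-annotator labels, or using an "any flag" rule, preserves that signal.

# Hypothetical annotations: eight annotators rate three posts
# (1 = contains hate speech / misogyny, 0 = does not). Post "p3" contains a slur
# that only one annotator, from the targeted community, recognizes.
from collections import Counter

annotations = {
    "p1": [1, 1, 1, 1, 1, 0, 1, 1],
    "p2": [0, 0, 0, 0, 0, 0, 0, 0],
    "p3": [0, 0, 0, 0, 0, 0, 0, 1],   # only the eighth annotator flags the slur
}

def majority_vote(labels):
    return Counter(labels).most_common(1)[0][0]

def any_flag(labels):
    return int(any(labels))

for post, labels in annotations.items():
    print(post, "majority vote:", majority_vote(labels), "| any flag:", any_flag(labels))

The majority vote labels p3 as harmless, so the minority annotator's knowledge is lost from the training data; the "any flag" rule keeps it, at the price of more false positives, which is exactly the kind of trade-off a diverse team has to argue out.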
11:26 am
Groups being underrepresented in data leads to biases in AI. What's more, artificial intelligence is constantly learning. If there isn't much content expressing a given viewpoint online, AI can falsely assume that viewpoint is a minority opinion. "So imagine that there is a lot of misogyny online, which is not too hard to imagine. Then women maybe don't want to interact in those spaces; everyone has better things to do. And so then AI will start thinking that views calling for fair treatment of the sexes are actually extremist views, because we don't see that view expressed." The IT University research group is developing AI that can spot discriminatory content and hate speech online and that doesn't consider only one worldview. The challenge is that online content is often ambiguous; as a result, creating accurate tags for training the AI is difficult. "One example here could be: 'I hate all Danish people.' We have a list of identities, which includes Danish people and English people and so on. Then we have to label it: is this hateful or not towards an identity? What direction are we going in here? Is it more general, towards all those people, or towards people like me?"
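One way of handling that ambiguity, sketched below with field names that are my own assumptions rather than the Copenhagen group's actual scheme, is to record more than a single yes/no flag per post: also the identity being targeted and whether the statement is aimed at a group in general or at a specific person.

# Hypothetical annotation record: richer labels make the "what direction are we
# going in?" question explicit instead of collapsing it into one yes/no flag.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HateSpeechLabel:
    text: str
    is_hateful: bool                        # hateful at all?
    target_identity: Optional[str] = None   # e.g. "Danish people"; None if untargeted
    directed_at_individual: bool = False    # aimed at one person ("people like me")?

examples = [
    HateSpeechLabel("I hate all Danish people", is_hateful=True,
                    target_identity="Danish people", directed_at_individual=False),
    HateSpeechLabel("I hate Mondays", is_hateful=False),
]

for example in examples:
    print(example)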
11:27 am
The algorithm will be complete in a few months, and it may prove useful for tech companies looking to improve their content moderation. We urgently need solutions for dealing with online hate speech. At the moment, we're still relying heavily on human moderators, which leads to problems, say the researchers in Denmark. "Moderation is tough because there's a lot of abusive, violent, unpleasant content online, and it can come out at any time of day. And the impact is when people see it, right? It's not when they interact with it. If people have seen it and then reported it, it's maybe already a little bit too late." A number of content moderators have suffered from post-traumatic stress disorder, which goes to show just how tough the job is. But as we've seen, we can't leave this task to AI either, especially when it comes to complex topics. Take, for instance,
11:28 am
the Delphi project by the US research institute AI2. The chatbot had no problem giving clear answers to questions like whether the Earth is flat: we know it's round, and so did the chatbot. But the bot initially described abortion as murder; the researchers then tweaked it so that it said abortion is acceptable. This goes to show that AI cannot answer questions regarding moral issues that we as a society have different views on. Or, to put it another way: how can we expect AI to know things we don't know ourselves? And should we even be turning to AI for answers? What's your take on AI? Do we rely on it too much, or is biased AI just one of the problems that needs fixing? Let us know in the comments on YouTube and at dw.com. That's all from me for today. Goodbye.
11:29 am
This week's show is about breaking stereotypes. In Kenya, we meet a young woman who is defying the expectations of her surroundings. Don't ever let anyone talk you down. The 77 Percent, next on DW. Electric mobility without waiting to recharge: by implementing battery exchange stations,
11:30 am
Chinese carmaker Nio wants to conquer the European market. With fully automated battery swaps and software updates, you can be ready to hit the road again in just five minutes. REV, in 60 minutes on DW. Every journey begins with the first step, and every language with the first word. Nico is in Germany to learn German. Why not learn with him? Simple, online, on your mobile, and free: DW's e-learning course Nicos Weg. German made easy. Hello, I'm back and I missed you. My name is Wendy Camara from The 77 Percent, and as always, this show is for you, Africa's young majority. It is so good to have you.
