
Shift  Deutsche Welle  June 12, 2022 4:15pm-4:31pm CEST

4:15 pm
For the small island living in the shadow of its increasingly assertive neighbor, that ambiguity provides little comfort. And before we go, a quick look at our top story. Ukraine says its forces are still holding out in part of the eastern city of Severodonetsk, where hundreds of civilians have found shelter. The city and surrounding area are witnessing consistent, intense street fighting and heavy Russian shelling. You're up to date on DW News. Stay tuned for Shift, living in the digital age. Thanks for watching. Every day counts, for us and for our planet. Global Ideas is on its way to bring you more conservation.
4:16 pm
How do we make cities greener? How can we protect animals and their habitats? What to do with all our waste? We can make a difference by choosing reforestation over deforestation, recycling over disposables, smart solutions over staying set in our ways. Earth is truly unique, and we know that that uniqueness is what allows us to live and survive. Global Ideas, the environmental series, in Global 3000 on DW and online. Artificial intelligence impacts almost all our lives. For instance, AI decides what we get shown when we're online. But AI also reproduces racist mistakes and stereotypes, and often fails to detect hate speech. Prejudiced AI, and what we can do about it: that's the topic on Shift today.
4:17 pm
AI is great at sorting through vast amounts of data, and can do so much faster and more accurately than humans. It works using algorithms. But contrary to popular belief, these aren't neutral. In the US, Black scientists found mainstream facial recognition systems to be better at recognizing white faces than people of color. Similarly, digital voice assistants like Siri or Alexa are best at understanding English, and best of all at understanding white American speakers. Computer programmers talk of AI bias. It shows up even in medicine. In the US, hospitals widely use algorithms to allocate health care to patients. These are supposed to be bias-free, but researchers have found that decision-making algorithms often reproduce racial or gender stereotypes. One study investigated an algorithm used in the US which assigned risk scores to patients on the basis of total health care costs
4:18 pm
accrued annually. It found that Black patients were generally assigned lower risk scores than equally sick white patients, and thus were less likely to be referred to programs that provide more personalized care. In other words, people's ethnicity was impacting the care they received. The fact that AI used in medicine is leading to Black people receiving less medical care than white people in the US is truly shocking. It shows that we all rely on AI, and that we aren't careful enough about which data we use to train these machine learning algorithms. But who decides what gets used? And how does AI learn? Clearly, there's still room for improvement.
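The study's actual model isn't shown in the program, but a minimal sketch, under assumptions of my own, illustrates why using past healthcare costs as a proxy for healthcare need can disadvantage a group that historically incurred lower costs at the same level of sickness:

```python
# Illustrative sketch only, not the algorithm from the study: what can happen
# when past healthcare COST is used as a proxy for healthcare NEED.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups of patients with identical distributions of actual sickness.
sickness = rng.normal(loc=5.0, scale=1.0, size=n)   # true health need
group_b = rng.random(n) < 0.5                        # hypothetical group label

# Assumption for this sketch: group B historically incurs lower costs for the
# same level of sickness, e.g. because of unequal access to care.
cost = sickness * np.where(group_b, 0.7, 1.0) + rng.normal(0.0, 0.5, n)

# "Risk score" = expected cost; refer the top 20% of scores to extra care.
threshold = np.quantile(cost, 0.80)
referred = cost >= threshold

print("referral rate, group A:", round(referred[~group_b].mean(), 3))
print("referral rate, group B:", round(referred[group_b].mean(), 3))
# Although both groups are equally sick, group B is referred far less often:
# the biased proxy label, not the patients' health, drove the decision.
```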
4:19 pm
AI systems are trained using data sets. The algorithms learn to solve problems independently, through repetition. In deep learning with neural networks, AI can also learn to solve new, more complex tasks. What an AI learns depends on the dataset, and this is also linked to the programmers. Take Google's image recognition software: in 2015, it made headlines when it labeled photos of Black people as gorillas. The shocking mistake was due to the fact that, for the category human, the software had been trained using millions of photos of white people but hardly any of people of color. So the algorithm had only learned to recognize white people as humans. Datasets are usually assembled by scientists. In the case of Google's image recognition software, these were predominantly white men. They probably didn't intentionally leave out photos of people of color. They simply didn't consider the impact the lack of diversity in their dataset would have.
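The program doesn't show how such gaps get caught, but one common precaution is to audit how groups are represented in the training data before training. A minimal sketch, with hypothetical metadata fields, might look like this:

```python
# Minimal sketch with assumed metadata fields: audit group representation in a
# training set before training, so blind spots are visible up front.
from collections import Counter

# Hypothetical per-image metadata; a real dataset would load this from
# annotation files.
training_metadata = [
    {"label": "human", "skin_tone": "light"},
    {"label": "human", "skin_tone": "light"},
    {"label": "human", "skin_tone": "dark"},
    # ... millions more entries ...
]

counts = Counter(m["skin_tone"] for m in training_metadata if m["label"] == "human")
total = sum(counts.values())

for group, count in counts.most_common():
    print(f"{group}: {count} images ({count / total:.1%})")

# If one group makes up only a tiny fraction of the "human" category, the model
# is likely to develop exactly the blind spot described above.
```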
4:20 pm
The AI thus produced a new form of old discrimination. This shows how important diversity is. If it is lacking in a dataset, an AI trained using that data will develop blind spots. When it comes to data and AI, the saying goes: garbage in, garbage out. Another complicating factor is that algorithms are mainly developed by tech companies and classified as trade secrets. Many companies have said that they themselves don't understand exactly how their algorithms work, including Meta, Facebook's parent company. From an economic standpoint, the secrecy is understandable, says German tech writer Thomas Ramge. It is no accident that the companies that are best at using AI are also the most valuable companies in the world; the two are closely connected. Data is currently the most important resource for creating value, and right now, tech companies are actually profiting from biased AI.
4:21 pm
But much is being done to change this, and researchers are trying to figure out how to program fairer algorithms, including a team at the IT University of Copenhagen in Denmark. Teaching an algorithm to spot discrimination is difficult, because we often do not agree about what is discriminatory. To create a more nuanced understanding of misogyny, for instance, the IT University of Copenhagen assembled a diverse team to identify sexism in online content. The eight team members were men and women of different ages, nationalities and professions. They looked at thousands of Facebook, Reddit and Twitter posts from 2020. The different team members had diverging worldviews. This helped them label the data used to train the AI more accurately, because they were able to spot more nuances of misogyny. But making datasets more diverse won't solve all of these problems.
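The program doesn't explain how the team's individual judgments are combined; as a rough sketch under my own assumptions, one common approach is to keep every annotator's label and the level of agreement, rather than collapsing everything into a single majority vote:

```python
# Rough sketch, not the Copenhagen team's actual pipeline: aggregate labels from
# several annotators while preserving disagreement, so a judgment made by only
# a few annotators is not silently thrown away.
from dataclasses import dataclass

@dataclass
class AnnotatedPost:
    text: str
    labels: list[bool]  # one "is this sexist?" judgment per annotator

    @property
    def majority_sexist(self) -> bool:
        return sum(self.labels) * 2 >= len(self.labels)

    @property
    def agreement(self) -> float:
        votes = sum(self.labels)
        return max(votes, len(self.labels) - votes) / len(self.labels)

post = AnnotatedPost(
    text="example social media post",
    labels=[False, False, False, False, False, True, True, True],  # 8 annotators
)

print(post.majority_sexist)  # False under a plain majority vote
print(post.agreement)        # 0.625 -> low agreement, flag for discussion
# Posts with low agreement can be sent back for discussion instead of being
# trained on as if the majority label were certain.
```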
4:22 pm
Another issue is that internationally active companies often do not have international teams. Staff do not speak the languages of the countries their products are used in, let alone understand the culture. Examples like Myanmar, where Facebook was weaponized in the mass attacks on the Rohingya, show how dangerous this can be. What we see on social media is largely determined by algorithms. They decide which posts appear where in our feed and which ads we are shown. Algorithms also decide what we aren't shown: they are supposed to filter out violent content or hate speech. But such filters largely only work for English content. One of the people who has spoken out about this is Frances Haugen, a former Facebook employee turned whistleblower. "When I joined Facebook, I became aware of how severe the problem was in places outside the United States, how Facebook's under-investment in solutions that work in a linguistically diverse way, in
4:23 pm
a world where not everyone speaks English, meant that because they don't invest in those things, they were really setting the stage for what I saw was going to be a very, very scary next decade or two decades." One recent example is Ethiopia, where civil war has raged since 2020 and where content posted on Facebook has incited further acts of violence. Why wasn't this content flagged? One reason is that very few Facebook content moderators speak local languages. "I knew that Myanmar had happened, that there had been a mass killing event as a result that was fanned by social media. But I had no idea that we were about to see chapter after chapter unfold." Facebook is used worldwide, but the platform's safeguards mainly protect those in English-speaking countries. Often the most vulnerable are the least protected. Personally, I would much prefer it if Mark Zuckerberg invested millions in addressing these problems rather than in his new metaverse project.
4:24 pm
Ayo Tometi, formerly known as Opal Tometi, is a co-founder of the Black Lives Matter movement. She says Facebook has social obligations that it is failing to meet, and that marginalized people are being pushed to the side and not counted. This is precisely what the research team at the IT University of Copenhagen is trying to address. Their goal is to teach AI to recognize hate speech and sexism in different cultural contexts. When it comes to creating data sets used to train AI, diversity matters.
4:25 pm
As the research group at the IT University of Copenhagen found, missing viewpoints lead to mistakes. "With hate speech annotation, one of the really tricky things is that sometimes just one minority group of people, one small subset of people, knows the answer. With our group, we often find that everyone would be in agreement about something, but that one person would disagree." There was at least one person on the team who was Muslim, and she recognized a couple of words that are used to refer to Muslims in a very derogatory way which the rest of the group had overlooked. Groups being underrepresented in data leads to biases in AI. What's more, artificial intelligence is constantly learning. If there isn't much content expressing a given viewpoint online, AI can falsely assume that viewpoint is a minority opinion. "And so, for example, imagine that there's a lot of misogyny online. Then women maybe don't want to interact in those spaces too much; you know,
4:26 pm
everyone has better things to do. And so then the AI will start thinking that views calling for fair treatment of the sexes are actually an extremist view, because we don't see that view expressed." The IT University research group is developing AI that can spot discriminatory content and hate speech online, and that doesn't consider only one worldview. The challenge is that online content is often ambiguous, and as a result, creating accurate tags for training the AI is difficult. "One example here could be 'I hate all Danish people'. We have a list of identities, which includes Danish people and English people and so on. Then we have to label it: is this hateful or not towards an identity? So what direction are we hating in? Is it something more general, about all those people doing things, or is it aimed at people like me?"
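The researchers' actual annotation scheme isn't shown in the program; the sketch below, with hypothetical field names, only illustrates the idea of recording not just whether a post is hateful, but which identity it targets and how directed it is:

```python
# Illustrative sketch; the field names are hypothetical, not the researchers'
# real schema. The point: annotate the target and direction of hate, not just
# its presence.
from dataclasses import dataclass
from typing import Optional

IDENTITIES = {"danish people", "english people", "muslims", "women"}  # example list

@dataclass
class HateSpeechAnnotation:
    text: str
    is_hateful: bool
    target_identity: Optional[str]  # which identity, if any, is attacked
    directed: bool                  # aimed at a specific group vs. a general remark

example = HateSpeechAnnotation(
    text="I hate all Danish people",
    is_hateful=True,
    target_identity="danish people",
    directed=True,
)

# Basic consistency check against the identity list.
assert example.target_identity is None or example.target_identity in IDENTITIES

# A model trained on annotations like these can learn the direction of the hate,
# which is the distinction the researcher describes above.
```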
4:27 pm
The algorithm will be complete in a few months, and it may prove useful for tech companies wanting to improve their content moderation. We urgently need solutions for dealing with online hate speech. At the moment, we're still relying heavily on human moderators, which leads to problems, say the researchers in Denmark. "Moderation is tough because there's a lot of abusive, violent, unpleasant content online, and it can come up at any time of day. And the impact is when people see it, right? Not when they interact with it. If people have seen it and then reported it, it's maybe already a little bit too late." A number of content moderators have suffered from post-traumatic stress disorder, which goes to show just how tough the job is. But as we've seen, we can't leave this task to AI either, especially when it comes to complex topics. For instance,
4:28 pm
the Delphi project by the US research institute AI2 had no problem giving clear answers to questions like whether the earth is flat: we know it's round, and so did the chatbot. But the bot initially described abortion as murder; the researchers then tweaked it so that it said abortion is acceptable. This goes to show that AI cannot answer questions about moral issues that we as a society have different views on. Or, to put it another way: how can we expect AI to know things we don't know ourselves? And should we even be going to AI for answers? What's your take on AI? Do we rely on it too much, or are issues like AI bias just minor problems that need fixing? Let us know in the comments, on YouTube and at dw.com. That's all from me for today. Goodbye.
4:29 pm
Noise activists in action. They're always on the lookout for ideas to fight noise pollution, and for how everyday life can be quieter. Next on DW. Welcome to the dark side, where everyone has their own truth. "When you have that sort of inability to agree on basic facts, I think that you face a future as a country that is very divided."
4:30 pm
The struggle for truth, in 45 minutes on DW. How about taking a few risks? You could even take a chance on love. Don't expect a happy ending. Literature list: 100 German stories. Noise is everywhere, and noise is making us sick. "Even when it's done, all I can hear are the trains. We can soundproof as much as we like." Many people can't get away from the clamor.
