
Shift | Deutsche Welle | June 12, 2022, 8:15pm-8:31pm CEST

8:15 pm
Shift, living in the digital age. "And in the end they say to me: you are not allowed to stay here, and we will send you back." Are you familiar with this?
8:16 pm
What's your story? Women, especially victims of violence: take part and send us your story. "We are always trying to understand this new culture, so that you are not a visitor, not a guest; you want to become a citizen." InfoMigrants, your platform for reliable information. Artificial intelligence has become a massive part of our lives. For instance, AI decides what we get shown when we're online. But AI also reproduces racist and sexist stereotypes, and often fails to detect hate speech. Prejudiced AI, and what we can do about it: that's our topic on Shift today.
8:17 pm
AI is great at sorting through vast amounts of data, and it can do so much faster and more accurately than humans. It works using algorithms. But contrary to popular belief, these aren't neutral. In the past, scientists have found mainstream facial recognition systems to be better at recognizing white faces than those of people of color. Similarly, digital voice assistants like Siri or Alexa are best at understanding English, and best of all at understanding white Americans. Computer scientists call this AI bias, and it shows up even in medicine. In the US, hospitals widely use algorithms to allocate health care to patients. These are supposed to be bias-free, but researchers have found that decision-making algorithms often reproduce racial or gender stereotypes. One study investigated an algorithm used in the US
8:18 pm
which assigned risk scores to patients on the basis of the total health care costs they accrued annually. It found that Black patients were generally assigned lower risk scores than equally sick white patients, and thus were less likely to be referred to programs that provide more personalized care. In other words, people's ethnicity was impacting the care they received. The fact that AI used in medicine is leading to Black people receiving less medical care than white people in the US is truly shocking. It shows that we rely on AI too readily, and that we aren't careful enough about which data we use to train these machine learning algorithms. But who decides what gets used? And how does AI learn? Clearly, there's still room for improvement.
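To make the proxy problem concrete, here is a minimal Python sketch with entirely made-up numbers; the patient data, threshold and function names are illustrative assumptions, not taken from the study described above. It shows how scoring "health need" by past spending can screen out an equally sick patient who simply accrued lower costs.

```python
# Minimal sketch (hypothetical numbers): why "annual cost" is a biased proxy for "health need".
# Two patients are equally sick, but one has historically accrued lower costs because of
# unequal access to care, so a cost-based score ranks them as lower risk.

from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    chronic_conditions: int      # rough stand-in for true health need
    annual_cost_usd: float       # what the algorithm actually sees

def risk_score_from_cost(patient: Patient, cost_cap: float = 20_000) -> float:
    """Proxy model: higher past spending -> higher predicted risk (0..1)."""
    return min(patient.annual_cost_usd / cost_cap, 1.0)

def needs_care_program(patient: Patient, threshold: float = 0.5) -> bool:
    """Patients above the threshold get referred to the personalized-care program."""
    return risk_score_from_cost(patient) >= threshold

# Equally sick patients (same number of chronic conditions), different spending histories.
a = Patient("patient_a", chronic_conditions=4, annual_cost_usd=14_000)
b = Patient("patient_b", chronic_conditions=4, annual_cost_usd=7_000)   # less access -> lower cost

for p in (a, b):
    print(p.name, round(risk_score_from_cost(p), 2), needs_care_program(p))
# patient_a 0.7  True   <- referred
# patient_b 0.35 False  <- equally sick, but screened out by the cost proxy
```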
8:19 pm
AI systems are trained using data sets. The algorithms learn to solve problems independently, through repetition. In deep learning with neural networks, AI can also learn to solve new, more complex tasks. What an AI learns depends on the data set, and this is often linked to the programmers. Take Google's image recognition software: in 2015, it made headlines when it labeled photos of Black people as gorillas. The shocking mistake was due to the fact that, for the category "human", the software had been trained using millions of photos of white people but hardly any of people of color. So the algorithm had only learned to recognize white people as humans. Data sets are usually assembled by scientists; in the case of Google's image recognition software, these were predominantly white men. They probably didn't intentionally leave out photos of people of color. They simply didn't consider the impact a lack of diversity in their data set would have.
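As a rough illustration of that failure mode, here is a small Python sketch using synthetic labels; the group names and the 98-to-2 split are invented for the example. A degenerate "learner" that only memorizes the dominant group looks accurate on its own skewed training data, and the blind spot only becomes visible on a balanced test set.

```python
# Minimal sketch with synthetic data: how a skewed data set produces blind spots.
# A degenerate "learner" that just memorizes the dominant group scores 98% on its
# own training data, yet fails completely on the under-represented group.

from collections import Counter

# Hypothetical training labels for the category "human": 98% from group A.
training_labels = ["group_a"] * 980 + ["group_b"] * 20

def train_majority_classifier(labels: list[str]) -> str:
    """Memorize whichever group dominates the training data."""
    return Counter(labels).most_common(1)[0][0]

def accuracy(prediction: str, labels: list[str]) -> float:
    """Fraction of examples matching the single memorized prediction."""
    return sum(prediction == y for y in labels) / len(labels)

model = train_majority_classifier(training_labels)
balanced_test = ["group_a"] * 50 + ["group_b"] * 50

print(model)                             # group_a
print(accuracy(model, training_labels))  # 0.98 -> looks fine in training
print(accuracy(model, balanced_test))    # 0.5  -> every group_b example is missed
```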
8:20 pm
The AI thus reproduced old discrimination in a new form. This shows how important diversity is: if it is lacking in a data set, an AI trained using that data will develop blind spots. When it comes to data and AI, the saying goes: garbage in, garbage out. Another complicating factor is that algorithms are mainly developed at tech companies and classified as trade secrets. Many companies have said that they themselves don't understand exactly how their algorithms work, including Meta, Facebook's parent company. From an economic standpoint, the secrecy is understandable, says German tech writer Thomas Ramge. Added to that is the fact that the most data-rich companies are also the best at using AI, and they are the most valuable companies in the world. It's all closely connected, and currently data is the most important resource for creating value.
8:21 pm
And right now, tech companies are actually profiting from biased AI. But much is being done to change this, and researchers are trying to figure out how to program fairer algorithms, including a team at the IT University of Copenhagen in Denmark. Teaching an algorithm to spot discrimination is difficult, because we often do not agree about what is discriminatory. To create a more nuanced understanding of misogyny, for instance, the IT University of Copenhagen assembled a diverse team to identify sexism in online content. The eight team members were men and women of different ages, nationalities, and professions. They looked at thousands of Facebook, Reddit, and Twitter posts from 2020. The different team members had diverging worldviews. This helped them label the data used to train the AI more accurately, because they were able to spot more nuances of misogyny. But making data sets more diverse wouldn't solve all of these problems.
8:22 pm
Another issue is that internationally active companies often do not have international teams. Staff do not speak the languages of the countries their products are used in, let alone understand the culture. Examples like Myanmar, where Facebook was weaponized in the mass attacks on the Rohingya, show how dangerous this can be. What we see on social media is largely determined by algorithms. They decide which posts appear where in our feed and which ads we are shown. Algorithms also decide what we aren't shown, and they are supposed to filter out violent content and hate speech. But such filters largely only work for English content. One of the people who has spoken out about this is Frances Haugen, a former Facebook employee turned whistleblower.
8:23 pm
"When I joined Facebook, I became aware of how severe the problem was in places outside the United States: how Facebook's underinvestment in solutions that work in a linguistically diverse way, in a world where not everyone speaks English... Because they don't invest in those things, they were really setting the stage for what I saw was going to be a very, very scary next decade or two decades." One recent example is Ethiopia, where civil war has raged since 2020, and where content posted on Facebook has incited further acts of violence. Why wasn't this content flagged? One reason is that very few Facebook content moderators speak the local languages. "I knew that Myanmar had happened, that there had been a mass killing event as a result, that was fanned by social media. But I had no idea that we were about to see chapter after chapter unfold." Facebook is used worldwide, but the platform's safeguards mainly protect those in English-speaking countries. Often the most vulnerable are the least protected.
8:24 pm
Personally, I would much prefer it if Mark Zuckerberg invested millions in addressing these problems rather than in his new metaverse project. Ayo Tometi, formerly known as Opal Tometi, is a co-founder of the Black Lives Matter movement. She says Facebook has social obligations that it is failing to meet: marginalized people cannot enjoy their human rights when their lives and voices keep getting pushed to the side. This is precisely what the research team at the IT University of Copenhagen is trying to change. Their goal is to teach AI to recognize hate speech and sexism in different cultural contexts.
8:25 pm
When it comes to creating the data sets used to train AI, diversity matters. As the research group at the IT University of Copenhagen found, missing viewpoints lead to mistakes in hate speech annotation. "One of the really tricky things is that sometimes just one minority, one group of people, one small subset of people knows the answer. With our group, we often find that everyone is in agreement about something, but one person disagrees. We had at least one person on the team who was Muslim, and she recognized a couple of words that are used to refer to Muslims in a very derogatory way, which the rest of the group had overlooked."
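A tiny Python sketch, with made-up annotations, shows why that single dissenting label matters when data is prepared for training; the annotator names and aggregation rules here are assumptions for illustration, not the Copenhagen team's actual method. Plain majority voting erases the one annotator who recognizes the slur, while recording the disagreement keeps that signal visible.

```python
# Minimal sketch (made-up labels): how aggregating annotations decides what the AI learns.
# With plain majority voting, the one annotator who recognizes a derogatory term is outvoted.

from collections import Counter

# Hypothetical annotations for one post: 1 = hateful, 0 = not hateful.
annotations = {
    "annotator_1": 0,
    "annotator_2": 0,
    "annotator_3": 0,
    "annotator_4": 1,   # the only team member who knows the slur being used
}

def majority_vote(labels: dict[str, int]) -> int:
    """Collapse all annotators into a single training label by majority."""
    return Counter(labels.values()).most_common(1)[0][0]

def disagreement_rate(labels: dict[str, int]) -> float:
    """Share of annotators who dissent from the majority label."""
    majority = majority_vote(labels)
    return sum(v != majority for v in labels.values()) / len(labels)

print(majority_vote(annotations))       # 0    -> the hateful post would be labeled harmless
print(disagreement_rate(annotations))   # 0.25 -> the minority signal is still visible here
```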
8:26 pm
Groups being underrepresented in data leads to biases in AI. What's more, artificial intelligence is constantly learning. If there isn't much content expressing a given viewpoint online, AI can falsely assume that viewpoint is a minority opinion. "And so imagine, say, that there is a lot of misogyny online. Then women maybe don't want to interact in those spaces too much; everyone has better things to do. And so then the AI will start thinking that views calling for fair treatment of the sexes are actually extremist views, because we don't see that view." The IT University research group is developing AI that can spot discriminatory content and hate speech online, and that doesn't consider only one worldview. The challenge is that online content is often ambiguous. As a result, creating accurate tags for training the AI is difficult. "One example could be: I hate all Danish people. We have a list of identities, say Danish people and English people and whatever, and then we have to label: is this hateful or not towards an identity? And in what direction?
8:27 pm
Is it more general hate toward all Danish people, or toward Danish people like me?" The algorithm will be completed in a few months, and it may prove useful for tech companies looking to improve their content moderation. We urgently need solutions for dealing with online hate speech. At the moment, we're still relying heavily on human moderators, which leads to problems, say the researchers in Denmark. "Moderation is tough because there's a lot of abusive, violent, unpleasant content online, and it can come out at any time of day. And the impact is when people see it, right? It's not when they interact with it. If people have seen it and then reported it, it's maybe already a little bit too late." A number of content moderators have suffered from post-traumatic stress disorder, which goes to show just how tough the job is. But as we've seen,
8:28 pm
we can't leave this task to AI, especially when it comes to complex topics. Take, for instance, the Delphi project by the US research institute AI2. The chatbot had no problem giving clear answers to questions like whether the Earth is flat: we know it's round, and so does the bot. But the bot initially described abortion as murder; the researchers then tweaked it so that it said abortion is acceptable. This goes to show that AI cannot answer questions about moral issues on which we as a society hold different views. Or to put it another way: how can we expect AI to know things we don't know? And should we even be turning to AI for answers? What's your take on AI? Do we rely on it too much, or are issues like AI bias just minor problems that need fixing? Let us know in the comments on YouTube and at dw.com. That's all from me for today. Goodbye.
8:29 pm
Electric mobility without recharging worries: by implementing battery exchange stations, Chinese carmaker NIO wants to conquer the European market. With fully automated battery swaps and software updates, you can be ready to hit the road again in just five minutes. REV, on DW. A case that outraged Myanmar's population and pushed investigative journalists to follow the trail: "It is obvious that the child was sold for 800 US dollars."
8:30 pm
Child slavery in Myanmar: the brutal business with adoptions. In 45 minutes on DW. Sometimes books are more exciting than real life. Raring to read? But what if there's no escape? DW Literature presents 100 German Must-Reads. "The issue with existing batteries is that they're really horrible."
