TV: Shift, Deutsche Welle, June 11, 2022, 7:15pm-7:31pm CEST
7:15 pm
Ursula von der Leyen made a surprise visit to Kyiv for talks with Ukrainian President Volodymyr Zelensky. That's all for now. Coming up next: our technology show, Shift. And don't forget, you can always get the latest news and information around the clock on our website, dw.com. Thanks, bye.

Enjoying the view? Then take a look at this: DW highlights, every week in your inbox. Subscribe now.

It is a secret war, and a seemingly endless one: the conflict between Iran on the one hand and Israel and the United States on the other.
7:16 pm
For more than 40 years, the adversaries have been irreconcilable. There has never been any real dialogue. How did this confrontation begin? How great is the danger that it will spread? The Long War: Israel, Iran, USA starts June 15th on DW.

Artificial intelligence impacts all of our lives. For instance, AI decides what we get shown when we're online. But AI also reproduces racist and sexist stereotypes, and often fails to detect hate speech. Prejudiced AI, and what we can do about it: that's our topic on Shift today.
7:17 pm
AI is great at sorting through vast amounts of data, and it can do so much faster, and often more accurately, than humans. It works using algorithms. But contrary to popular belief, these aren't neutral. In the past, scientists found mainstream facial recognition systems to be better at recognizing white faces than those of people of color. Similarly, digital voice assistants like Siri or Alexa are best at understanding English, and best of all at understanding the way white American computer programmers talk. AI bias shows up even in medicine. In the US, hospitals widely use algorithms to allocate health care to patients. These are supposed to be bias-free, but researchers have found that such decision-making algorithms often reproduce racial or gender stereotypes.
7:18 pm
One study investigated an algorithm used in the US which assigned risk scores to patients on the basis of the total health care costs they accrued annually. It found that black patients were generally assigned lower risk scores than equally sick white patients, and were thus less likely to be referred to programs that provide personalized care. In other words, people's ethnicity was impacting the care they received.
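To see how such a score can go wrong, here is a minimal sketch of the proxy problem the study describes: ranking patients by past costs instead of actual illness. The patients, numbers, and referral threshold below are invented for illustration; they are not taken from the study.

```python
# Hypothetical numbers illustrating the proxy problem: ranking patients
# by past healthcare costs instead of how sick they actually are.
patients = [
    # (patient_id, illness_severity on a 0-10 scale, annual costs in USD)
    ("patient_A", 8, 3_000),   # very sick, but had poor access to care
    ("patient_B", 8, 9_000),   # equally sick, accrued more billed care
]

for patient_id, severity, costs in patients:
    # A cost-based "risk score": higher past spending means a higher score,
    # even though both patients are equally sick.
    risk_score = costs / 1_000
    needs_referral = risk_score >= 5.0   # hypothetical referral threshold
    print(patient_id, f"severity={severity}", f"score={risk_score:.1f}",
          "referred" if needs_referral else "not referred")
```

Run it, and the equally sick patient with lower past costs falls below the threshold: the score quietly encodes unequal access to care.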
7:19 pm
The fact that AI used in medicine is leading to black people receiving less medical care than white people in the US is truly shocking. It shows that we already rely heavily on AI, and that we aren't careful enough about which data we use to train these machine-learning algorithms. But who decides what data gets used? And how does AI learn? Clearly, there's still room for improvement. AI systems are trained using datasets; through repetition, the algorithms learn to solve problems independently. In deep learning with neural networks, AI can also learn to solve new, more complex tasks. What an AI learns depends on the dataset, and this is often linked to the programmers. Take Google's image recognition software. In 2015, it made headlines when it labeled photos of black people as gorillas. The shocking mistake was due to the fact that, for the category "human", the software had been trained using millions of photos of white people but hardly any of people of color. So the algorithm had only learned to recognize white people as humans. Datasets are usually assembled by scientists; in the case of Google's image recognition software, these were predominantly white men. They probably didn't intentionally leave out photos of people of color. They simply didn't consider the impact the lack of diversity in their dataset would have. The AI thus produced a new form of old discrimination.
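As a toy sketch of how such blind spots arise, the snippet below trains a classifier on synthetic data in which one group is almost entirely missing from the training set; scikit-learn is assumed to be available, and all the data is invented.

```python
# Toy demonstration: a classifier trained on a skewed dataset develops
# blind spots for the group it rarely saw. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two classes per group; this group's features sit around `shift`.
    X = np.vstack([rng.normal(shift, 1.0, size=(n, 2)),
                   rng.normal(shift + 2.0, 1.0, size=(n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

X_a, y_a = make_group(500, shift=0.0)   # well-represented group A
X_b, y_b = make_group(500, shift=5.0)   # group B looks different

# Training set: lots of group A, almost no group B ("garbage in").
X_train = np.vstack([X_a, X_b[:5]])
y_train = np.concatenate([y_a, y_b[:5]])
model = LogisticRegression().fit(X_train, y_train)

# The model looks fine on group A but fails on group B ("garbage out").
print("accuracy, group A:", accuracy_score(y_a, model.predict(X_a)))
print("accuracy, group B:", accuracy_score(y_b, model.predict(X_b)))
```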
7:20 pm
This shows how important diversity is. If it is lacking in a dataset, an AI trained on that data will develop blind spots. When it comes to data and AI, the saying goes: garbage in, garbage out. Another complicating factor is that algorithms are mainly developed by tech companies and classified as trade secrets. Many companies have said that they themselves don't understand exactly how their algorithms work, including Meta, Facebook's parent company. From an economic standpoint, the secrecy is understandable, says one German tech writer. The fact that the most data-rich companies are also the best at using AI, and are the most valuable companies in the world, is closely connected: currently, data is the most important resource for creating value. So right now, tech companies are actually profiting from bias in AI.
7:21 pm
Much is being done to change this, and researchers are trying to figure out how to program fairer algorithms, including a team at the IT University of Copenhagen in Denmark. Teaching an algorithm to spot discrimination is difficult, because we often do not agree about what is discriminatory. To create a more nuanced understanding of misogyny, for instance, the IT University of Copenhagen assembled a diverse team to identify sexism in online content. The eight team members were men and women of different ages, nationalities, and professions. They looked at thousands of Facebook, Reddit, and Twitter posts from 2020. The different team members had diverging worldviews. This helped them label the data used to train the AI more accurately, because together they were able to spot more nuances of misogyny.
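A minimal sketch of why multiple annotators help: labels from several people are combined, and posts where they disagree are flagged for closer review rather than averaged away. The posts and labels below are invented, not from the Copenhagen dataset.

```python
# Sketch: combining sexism labels from several annotators, flagging
# disagreement instead of silently majority-voting it away.
from collections import Counter

# Invented example annotations; each post was labeled by four people.
annotations = {
    "post_1": ["sexist", "sexist", "sexist", "sexist"],
    "post_2": ["not_sexist", "not_sexist", "not_sexist", "not_sexist"],
    "post_3": ["sexist", "not_sexist", "not_sexist", "not_sexist"],
}

for post_id, labels in annotations.items():
    majority, count = Counter(labels).most_common(1)[0]
    agreement = count / len(labels)
    # Low agreement often marks exactly the nuanced cases that a less
    # diverse team would have missed entirely, so they get a second look.
    status = "needs review" if agreement < 1.0 else "ok"
    print(post_id, majority, f"agreement={agreement:.2f}", status)
```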
7:22 pm
But making datasets more diverse wouldn't solve all of these problems. Another issue is that internationally active companies often do not have international teams. Staff do not speak the languages of the countries their products are used in, let alone understand the culture. Examples like Myanmar, where Facebook was weaponized in the mass attacks on the Rohingya, show how dangerous this can be. What we see on social media is largely determined by algorithms. They decide which posts appear where in our feed, and which ads we are shown. Algorithms also decide what we aren't shown: they are supposed to filter out violent content and hate speech. But such filters largely only work for English content. One of the people who has spoken out about this is Frances Haugen, a former Facebook employee turned whistleblower: When I joined Facebook, I became aware of how severe the problem was in places outside the United States, of Facebook's under-investment in solutions that would work in
7:23 pm
a linguistically diverse world, where not everyone speaks English. Because they don't invest in those things, they were really setting the stage for what I saw was going to be a very, very scary next decade or two. One recent example is Ethiopia, where civil war has raged since 2020 and where content posted on Facebook has incited further acts of violence. Why wasn't this content flagged? One reason is that very few Facebook content moderators speak the local languages. Haugen again: I knew that Myanmar had happened, that there had been a mass killing event that was fanned by social media. But I had no idea that we were about to see chapter after chapter unfold. Facebook is used worldwide, but the platform's safeguards mainly protect those in English-speaking countries. Often the most vulnerable are the least protected.
7:24 pm
Personally, I would much prefer it if Mark Zuckerberg invested millions in addressing these problems rather than in his new metaverse project. Ayo Tometi, formerly known as Opal Tometi, is a co-founder of the Black Lives Matter movement. She says Facebook has social obligations that it is failing to meet, and that marginalized people are being pushed to the side. This is precisely what the research team at the IT University of Copenhagen wants to change. Their goal is to teach AI to recognize hate speech and sexism in different cultural contexts. When it comes to creating the datasets used to train AI, diversity matters.
7:25 pm
As the research group at the IT University of Copenhagen found, missing viewpoints lead to mistakes. With hate speech annotation, one of the really tricky things is that sometimes just one minority, one group of people, one small subset of people, knows the answer. With our group, we often found that everyone would be in agreement about something, but one person would disagree. We had at least one person on the team who was Muslim, and she recognized a couple of words that are used to refer to Muslims in a very derogatory way, which the rest of the group didn't. Groups being under-represented in data leads to biases in AI. What's more, artificial intelligence is constantly learning. If there isn't much content expressing a given viewpoint online, AI can falsely assume that viewpoint is an extreme minority opinion. So if we imagine there is a lot of misogyny online, which is not too hard to imagine, then women may not want to interact in those spaces. Everyone has
7:26 pm
better things to do. And then the AI will start thinking that views advocating fair treatment of the sexes are actually extremist views, because we don't see the opposing voices. The IT University research group is developing an AI that can spot discriminatory content and hate speech online, and that doesn't consider only one worldview. The challenge is that online content is often ambiguous; as a result, creating accurate tags for training the AI is difficult. One example could be the post "I hate all Danish people". Say we have a list of identities: Danish people, English people, and so on. Then we have to label: is this hateful or not towards an identity? And which direction is it heading in: is it a general statement against all those people, or does it target people like me specifically?
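Here is a minimal sketch of the kind of annotation schema that ambiguity calls for, separating whether a post is hateful from which identity it targets and how directly. The field names are invented for illustration; they are not the Copenhagen group's actual schema.

```python
# Invented annotation schema: hatefulness, target identity, and direction
# are labeled separately, because one yes/no label can't capture all three.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HateLabel:
    text: str
    is_hateful: bool                # does the post express hate at all?
    target_identity: Optional[str]  # e.g. "Danish people"; None if untargeted
    targets_individual: bool        # aimed at one person vs. a whole group

example = HateLabel(
    text="I hate all Danish people",
    is_hateful=True,
    target_identity="Danish people",
    targets_individual=False,
)
print(example)
```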
7:27 pm
The algorithm will be complete in a few months, and it may prove useful for tech companies looking to improve their content moderation. We urgently need solutions for dealing with online hate speech. At the moment, we're still relying heavily on human moderators, which leads to problems, say the researchers in Denmark. Moderation is tough because there's a lot of abusive, violent, unpleasant content online, and it can come in at any time of day. And the impact is when people see it, not when a moderator interacts with it: if people have already seen it and reported it, it's maybe a little bit too late. A number of content moderators have suffered from post-traumatic stress disorder, which goes to show just how tough the job is.
7:28 pm
But as we've seen, we can't leave this task entirely to AI, especially when it comes to complex topics. Take, for instance, the Delphi project by the US research institute AI2. The chatbot had no problem giving clear answers to questions like whether the Earth is flat: we know it's round, and so did the chatbot. But the bot initially described abortion as murder; the researchers then tweaked it so that it said abortion is acceptable. This goes to show that AI cannot answer questions about moral issues that we as a society have different views on. Or, to put it another way: how can we expect AI to know things we don't know ourselves? And should we even be going to AI for answers? What's your take on AI? Could we end up relying on it too much, or are issues like AI bias just minor problems that need fixing? Let us know in the comments on YouTube and at dw.com. That's all from me for today. Goodbye.
7:29 pm
This week's show is about breaking stereotypes and identity. In Kenya, we meet a young woman with albinism, and our guests join us in the studio. Don't ever let anyone talk you down. The 77 Percent, next on DW.

What's making the headlines, and what's behind them? DW News Africa, the show that tackles the issues shaping the continent. Life is slowly getting back to normal on the streets. To give you in-depth reports and insights,
7:30 pm
our correspondents are on the ground, reporting from across the continent on the trends that matter to you. In 60 minutes on DW.

These places in Europe are smashing all the records. Step into a bold adventure: it's the treasure map for modern globetrotters. Discover some of Europe's record-breaking sites, on Google Maps, on YouTube, and now also in book form.

Hello, I'm back, and I missed you. My name is Wendy Camara from The 77 Percent, and as always, this show is for you, Africa's young majority. It is so good to have you with us.