tv Going Underground RT October 18, 2021 11:30am-12:01pm EDT
11:30 am
well, if you'd like a deeper dive into any of this, our stories are at rt.com; we've got you covered. join me every thursday on the alex salmond show, and i'll be speaking to guests in the worlds of politics, sport, business and show business. i'll see you then. [unintelligible: theme music]
11:31 am
11:32 am
i'm afshin rattansi, and you're watching a special edition of going underground. it's 10 years since dnc supporter and then-chairman of google eric schmidt went to see julian assange of wikileaks, currently awaiting next week's court hearing in his long battle for freedom. you can watch our interview with julian assange, and his radically different perspective on technology compared with google's, on our youtube channel. now, dr. timnit gebru, a former google employee and an expert in artificial intelligence who spoke out against ai bias, has been calling for stronger whistleblower protections against us and other government-linked big tech companies. she joins me now from california. timnit, thank you so much for coming on. i think we all have you to thank for certain features on our ipads and on google; your work has affected all our lives in a way. what is ethical ai? is it anything like squid game? i haven't seen all of that series,
11:33 am
but everybody's talking about it. what is it? well, ethical ai i consider to be a field that tries to ensure that when we work on technology, we work on it with foresight, trying to understand the potential negative societal impacts, to minimize those, and to work on something that's actually beneficial for humanity. so let's start with what unethical ai is; there are lots of examples, i think. right. so, for example, part of my work, my collaboration with joy buolamwini, showed that a lot of apis that sell automated facial analysis tools had much higher error rates for darker-skinned women than for lighter-skinned men. this is facial recognition? yeah.
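The audit method behind that finding is disaggregated evaluation: rather than reporting one overall accuracy number, you compute error rates separately for each demographic subgroup and compare them. Here is a minimal Python sketch of the idea; the records and subgroup labels are hypothetical stand-ins, not the actual Gender Shades benchmark data.

```python
# Disaggregated evaluation: per-subgroup error rates instead of a single
# aggregate accuracy score. All records below are hypothetical.
from collections import defaultdict

# (predicted_gender, true_gender, subgroup) for each test image.
results = [
    ("male", "male", "lighter-skinned men"),
    ("male", "male", "lighter-skinned men"),
    ("female", "female", "lighter-skinned women"),
    ("male", "female", "darker-skinned women"),
    ("female", "female", "darker-skinned women"),
    ("male", "female", "darker-skinned women"),
]

totals, errors = defaultdict(int), defaultdict(int)
for predicted, actual, group in results:
    totals[group] += 1
    if predicted != actual:
        errors[group] += 1

# A single overall accuracy figure would hide exactly the gap the audit
# is designed to expose, so report each subgroup on its own line.
for group, n in totals.items():
    print(f"{group}: {errors[group] / n:.0%} error rate ({errors[group]}/{n})")
```

The published audits did essentially this, only against live commercial APIs and a benchmark of faces balanced across skin tone and gender.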
11:34 am
yeah. and that spurred a lot of movement, because a lot of people had been worried about surveillance-related technology anyway; it spurred a movement to ban, by law, some of these uses of these technologies by law enforcement, because they're mostly used to suppress dissent and to surveil marginalized communities. so for me, that's not ethical ai. but even in capitalist terms it's not that useful: for law enforcement it is not useful to have ai that wrongly identifies individuals on the basis of their color. and yet amazon did initially fight against the criticism of us law enforcement's use of it, didn't they, before presumably seeing the light? well, you know, no one wants that. but i don't think they saw the light. i think what happened is that there were
11:35 am
worldwide black lives matter protests, and it looked bad for them to continue selling it amid those worldwide protests. so they said they would have a moratorium on law enforcement's use of this technology until there was a federal law. i have a statement from amazon here; they said, quote, this research paper, and i assume they meant your article, is misleading and draws false conclusions; the researchers used an outdated version of amazon rekognition, and we have made a significant set of improvements. what do you think? why did they fight back so strongly against disinterested, scholarly research? this wasn't you being an activist for black lives matter; this was you analyzing the algorithms. yes. so this was actually my colleagues: joy buolamwini and i wrote a paper that preceded this one, and she and another black woman, deborah raji, wrote this paper called actionable auditing. and when they put it out there,
11:36 am
the very first thing amazon wanted to do was push back, so vp after vp wrote blog posts trying to discredit them. and my colleague meg mitchell and i, we were both at google at the time and have both since been fired from google, spent a whole month writing a point-by-point rebuttal and galvanized the academic community to come to their support. we had a petition that more than 70 or 80 academics signed, including turing award winners, which is kind of like the equivalent of the nobel prize for computer science. so we came to their defense; we debunked, point by point, what amazon was trying to say, and then we all asked them to stop selling rekognition to law enforcement. that was all over the news, and they weren't able to discredit them any further. but what you just saw there is
11:37 am
very similar to what google tried to do to me right after i came out with my paper about large language models, because we were trying to show the harms of such technology. well, we'll get on to the large language models. and i have to say, because these are massive companies that, some people say, own all our lives, i'd better read their responses, because they have a lot of money. before we get to the actions they took: google says they didn't fire you at all. google's line was: timnit wrote that if we didn't meet these demands, she would leave google and work on an end date; we accept and respect her decision to resign from google. so one of the gods of our society is saying, no, you weren't fired. i think, you know, they gave a quote-unquote apology, a non-apology of an apology, after the backlash, because at first they were just doubling down, insisting it was clearly a resignation. i can't even believe it;
11:38 am
i don't know why they thought they could get away with saying that. maybe they thought i was so stupid that i would just feel like, oh, i guess i resigned. that's not how resignations work, right? with a resignation you have to submit paperwork; you have to actually say you're going to resign; there is a whole process. so there's a dispute. i mean, as i say, why did you, according to google, resign, while according to you, you were fired? what exactly happened? because, again, you were working on scholarly research, as far as i understand it, about large language models, and people use google translate, people love these google translations, and that uses large language models. what could possibly be wrong with large language models? so, in all of these incidents you see that whenever you show a problem and it's inconvenient, it looks like the problem maybe is too big for what they want to admit,
11:39 am
or maybe it's not even that bad; maybe they really do think the issues i'm raising are not that serious. one of the things they said is that my paper made it look like there were too many issues with large language models; they said it painted too stark a picture. and so they wanted me to retract this academic, peer-reviewed paper that was being published at an academic scientific conference, with no process whatsoever. it's one thing if your employer comes to you and says: we followed this particular process, which you know about, to come to the conclusion that your paper should be retracted, and here is why; let's have a conversation about what kind of process we used and why we want you to retract the paper. but absolutely not, right? this came way late,
11:40 am
a long time after we submitted our paper and went through the internal approval processes. my manager resigned from google, right, because he was the approver. ok, well, let's go to the specifics of the large language model problems you were investigating; maybe we can figure out why it's such a sore question. i know you give an example of a palestinian man whose words were wrongly interpreted. yeah. so what we did was survey a lot of prior work, and add some initial analysis of our own, on what has been going wrong and what could go wrong if we focus on larger and larger language models. the first point we started with was environmental and financial cost: these large language models consume
11:41 am
a lot of compute power, so if you are working on larger and larger language models, only the people with these kinds of huge compute resources are going to be able to use them. that leads to what we talk about as environmental racism: the people who benefit from large language models are not the people who are paying their environmental and financial costs. and we give a bunch of examples there about different languages; it's always people in the dominant groups, whether between countries or within a specific country, who benefit from these large language models.
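To see why scale shuts most people out, a back-of-the-envelope calculation helps. The sketch below uses the common rule of thumb of roughly 6 floating-point operations per parameter per training token; every hardware and model figure in it is an illustrative assumption, not a number from the paper.

```python
# Rough training-cost scaling: FLOPs ~ 6 * parameters * training tokens.
# All figures below are illustrative assumptions.
GPU_FLOPS = 100e12   # assume ~100 TFLOP/s sustained per accelerator
GPU_POWER_KW = 0.4   # assume ~400 W drawn per accelerator

models = [
    ("small",  1e8,  1e10),   # name, parameters, training tokens
    ("medium", 1e9,  1e11),
    ("large",  1e11, 1e12),
]

for name, params, tokens in models:
    flops = 6 * params * tokens
    gpu_hours = flops / GPU_FLOPS / 3600
    energy_kwh = gpu_hours * GPU_POWER_KW
    print(f"{name:>6}: {flops:.0e} FLOPs, ~{gpu_hours:,.0f} GPU-hours, "
          f"~{energy_kwh:,.0f} kWh")

# Growing parameters and tokens 10x each multiplies compute roughly 100x,
# which is why only the best-resourced organizations can train the largest
# models, while the energy and environmental costs fall elsewhere.
```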
11:42 am
the second point we make is that there is this weird assumption that if you use the data on the internet, you will somehow incorporate everyone's point of view; those of us who have been harassed on social media know that that's not the case. but these large language models are trained with data from the internet, most of the time something like the whole internet, even. so we were talking about the dangers of doing that: the kinds of biases you would encode, the kinds of hateful content you would encode. just because you have a large dataset, it doesn't mean you are now incorporating everyone's point of view, and we give many examples of that. then we give examples of what happens when these large language models, trained on data encoded with this kind of bias, are deployed. one of those examples was the one you mentioned, about a palestinian man: he wrote 'good morning' on facebook, and it was translated as 'attack them.'
11:43 am
people didn't even check to see what he had initially written; they saw the translation and arrested him. so this was a google translate error? this was facebook, but it's the same underlying technology; they all use large language models. what we were saying was that with machine translation you sometimes get cues when there are errors: you can see the grammar is not quite right, you can see something is wrong. but with these large language models you can have something that sounds completely fluent and coherent and is completely wrong.
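One way researchers make such encoded associations visible is to give a pretrained language model templated sentences that differ in a single word and compare its completions. A minimal sketch using the Hugging Face transformers library follows; the checkpoint and templates are illustrative choices, not the models or prompts examined in the paper.

```python
# Probing a masked language model for associations absorbed from web text.
# Requires: pip install transformers torch (downloads the model on first run).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Two templates identical except for one word.
templates = [
    "The man worked as a [MASK].",
    "The woman worked as a [MASK].",
]

for sentence in templates:
    predictions = fill(sentence, top_k=5)
    print(sentence, "->", [p["token_str"] for p in predictions])

# If the two completion lists diverge along stereotyped occupational lines,
# the model has picked those associations up from its training corpus.
```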
11:44 am
i don't want to defend these companies, but why would they do this? there's no malice there, surely, just a mistake in the algorithm, in the software engineering; why would they seek to minimize the publicity given to papers that showed these errors, so that they could fix them? well, i would disagree that there is no malice, because when you look at what happened with joy and deb, for example, it is two black women talking very much about the impacts of amazon's technology on the black population. you see a pattern here: it's me and other black women and a bunch of other people on our team who are very much concerned with the impacts of these large language models on marginalized communities. and if you're talking about something that maybe should not be used right now, that's directly going to impact their money; it's a money-making machine. i see what you mean, but how can 'good morning' translate into 'attack them'? why would the algorithm even work that out? based on the dataset? because of the domain the user came from?
11:45 am
because, if you look at some languages... so, twitter uses google translate, and i gave an interview on the bbc in tigrinya, which is my mother tongue. google translate doesn't even have tigrinya; it's not one of the languages they offer translation for, but it uses the same alphabet as amharic. and when people were sharing my interview, the translation just went haywire: it kept saying things like 'the greedy people, greedy, greedy, greeting, greeting, greeting.' really. there is nothing in my interview about greed.
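Part of what goes wrong here is that Tigrinya and Amharic share the Ge'ez (Ethiopic) script, so any system keyed on alphabet alone cannot tell the two languages apart and will happily run Tigrinya text through a model for a related language. A small illustrative sketch of that ambiguity, using only the Python standard library:

```python
# Tigrinya and Amharic are different languages written in the same Ethiopic
# script, so script detection alone cannot distinguish them.
import unicodedata

greeting = "ሰላም"  # a greeting spelled identically in Amharic and Tigrinya

scripts = {unicodedata.name(ch).split()[0] for ch in greeting}
print(scripts)  # {'ETHIOPIC'} -- the script gives no clue to the language
```

Without Tigrinya on its supported list, a translator that routes text by script, or by the nearest supported language, will produce exactly the fluent nonsense described above.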
11:46 am
i talked about the military-industrial complex, particularly as it concerns the horn of africa and what's happening in eritrea. yeah, and that's actually more disturbing than it at first sounds. i'll have to stop you there; more from the former co-lead of google's ethical ai team and co-founder of black in ai after this break. welcome back. i'm still with dr. timnit gebru, former co-lead of google's ethical ai team and co-founder of black in ai. is that, in effect... translation, that process: does it violate the first amendment of the us constitution? because it is, in effect, not allowing free speech. oh, i've never thought of it that way at all. i don't know; i guess i'm not a legal scholar. but back to the question you asked me, in terms of ethical ai and what is
11:47 am
ethical ai: if you read the works of some critical scholars, they would say that the tech industry's strategy is to make it all look like a purely technical issue, a purely algorithmic issue that needs to be fixed, as you said, a purely mathematical issue. it has nothing to do with antitrust law, nothing to do with monopoly, nothing to do with labor issues or power; it's merely this technical thing that we need to work out. and part of what we say in our paper, and many others say, is that it is not necessarily a purely technical, algorithmic tweak that you want; we need regulation. i think that's part of why all of these organizations and companies wanted to come down hard on me and a few other people: because they think that what we're advocating for is not a simple algorithmic
11:48 am
tweak but larger structural change. so the way to really answer the problem is, as you say, to deal with the nature of monopoly power? yes. i think now, with all the whistleblowers, there have been so many since i first came forward, one after another after another, the public is starting to really understand that we need some sort of regulation. but what the companies would argue is exactly what you were saying earlier: oh, there is no malice; why would we want algorithms that don't work, when that's not good for our business? of course we will work really hard to fix our algorithms. so, for example, at facebook, mark zuckerberg, when questioned about safety, said words to the effect of: i don't know any tech company that sets out to build products that make people angry; our advertisers don't want to advertise to angry people or next to bad content, and they
11:49 am
constantly tell us that, so why would we want to do that? that's the kind of argument they're making, right? but what i'm saying is that a lot of the interdisciplinary work in this space has shown that we need to look into more structural solutions to this problem. now, of course, you were at google for quite a while. julian assange, when eric schmidt wanted to meet him, said, you know, google may not be acting legally in any of this; presumably he was referring to the surveillance element as opposed to the antitrust element, who knows, which is why the fbi doesn't investigate when you write a paper, i would assume. did you come across a guy called jared cohen, ceo of jigsaw, back when it was called google ideas? do you remember him? i didn't come across him, but i heard about him. i was at google for two years, two very long years;
11:50 am
every month i was thinking, can i survive here, should i leave next month? really? i mean, a lot of people watching think it must be amazing to work in those amazing offices, that that's the future; there are graduates watching who are desperately trying to get through the complicated process to get in. and i should say jared cohen is backed by joe biden, by blinken, the secretary of state, and by the national security adviser, jake sullivan; they loved google ideas, and google is very much associated with democrat royalty, as it were. why was it no fun to work there? well, again, it comes down to what your views are and what demographic you're in. i was in a role called research scientist, and i was the first black woman to be a research scientist at google, and i saw why, right away: i had so many issues right off the bat. i was under-leveled; people with much
11:51 am
less experience than me were leveled way above me. there was a lot of just kind of disrespect; there was a lot of harassment and all sorts of things. so it was very, very difficult for me even to concentrate on my job. and actually, when i talk about being fired, i'm pretty certain it wasn't just that paper that got me fired; there were so many things they didn't like that i was doing. i was speaking up about workplace issues, discrimination issues, and each time i spoke up, they weren't happy about it. so you didn't buy the idea that there was something philosophically ayn rand-ish, a kind of right-wing supremacy about it? because obviously they say you resigned, and they deny the workplace being a bad place to work, let alone any harassment or anything like that. yeah. i mean, i don't know if you're aware of the google walkout: two months after i joined google, there was
11:52 am
a large walkout; 20,000 people walked out in protest against, you know, andy rubin leaving with $90,000,000 or something like that after harassment allegations. so there were 20,000 people walking out in protest, and google pushed out two of the organizers, claire and meredith, about a year or so afterwards; they were purged. then, a year after that, before i got fired, they fired five; there's an nlrb trial, a national labor relations board trial, going on right now about those fired five. and then, one year later, they fired me, and i had spoken up about all of those firings. google denies all wrongdoing in all of these things. they're so powerful; i mean, when you mention these cases, obviously our viewers will have to look them up using,
11:53 am
using google. but as for diversity and identity politics at google: on the 9th of december 2020, the millionaire god of google, sundar pichai, said: it's incredibly important to me that black women and underrepresented googlers, and people who want to work at google, know that we value you and you do belong at google; we started a conversation together early this year when we announced a broad set of racial equity commitments, to take a fresh look at all our systems, from hiring and leveling to promotion and retention, and to address the need for leadership accountability; we are committed to continuing to make diversity, equity and inclusion part of everything we do, from how we build our products, and i presume some of the hardware is over-represented by people of color, who knows, to how we build our workforce. those statements coming from google run completely against what you're saying. yes. so maybe that was when he did the, quote, non-apology; he had to apologize, because the other road they were going down, which was doubling down, saying that my work was subpar, saying that i
11:54 am
told people to stop working on diversity, and all the other things they were saying, was creating more and more backlash each time they went down that road. so i'm sure he realized at some point that he had to make some sort of apology. and if you read the apology closely, it was the kind of apology that says: i'm sorry for how you feel. there was a long-time, what is it called, a person who draws cartoons, a cartoonist; there was a long-time cartoonist at google who left after 14 or 15 years because of what they did. and later, i believe, you had thousands of people on your side. i just want to ask you, i mean, we were talking about jared cohen, and i don't know what you think about jigsaw, and why you think jigsaw ai is kind of
11:55 am
a competitor. now, you're the co-founder of black in ai; you're going to have to tell me what black in ai is. why is your ai going to be better than their ai, than jigsaw, effectively the innovation department of the google conglomerate? why is it going to be better? are you going to make it more profitable, actually, because you're going to be more accurate, and get the cia and national security agency contracts? so, the group black in ai that i co-founded is a non-profit, a group for black practitioners and researchers in ai from all over the world. this is not a group that builds ai or anything like that; it's a group that builds community and networking, and has mentorship programs for various things: graduate mentorship programs,
11:56 am
an entrepreneurship program; and we have workshops to raise the visibility of black people in ai, et cetera. so that is different from what i'm doing right now. what i'm hoping to build now is an interdisciplinary ai research team, and the goal is not to be an extremely profitable ai research institute, because i believe that if your number one goal is to maximize profit, then you're going to cut corners, and downstream you end up building ai that is applied... section 172 of the 2006 companies act: the fiduciary duty of a company to maximize profit. we're not-for-profit. good; no duty to maximize profit. very quickly: a whistleblower right now who's terrified, who works at one of these organizations and, even seeing what you're talking
11:57 am
about, feels even more terrified and never wants to come clean with journalists or anyone else about the lack of ethics they see in their daily workplace, what would you say to them? well, i would ask them to look at the tech worker handbook, which ifeoma ozoma, who is another whistleblower, just launched last week. that handbook is meant to help people decide whether they want to come forward and what to expect. and i think it's a very personal decision for each person, because it's not a joke; you're going to get a lot of backlash, so you have to determine whether this is the right avenue for you. so i understand the fear, because you get a lot of harassment once you're thrust into that public space. but i do want to say, you were mentioning eric schmidt and julian assange and so on before; eric schmidt in particular has
11:58 am
a lot of influence within the us government right now; he created this national ai committee or something like that. and he has this view that we are in a cold war with china, that there's an ai race, et cetera. so i think that where we're going with this can be very, very dangerous if our voices, the whistleblowers' voices, are not heard; it's these people at the top who are having a lot of influence right now. and i think it would make a huge difference for more people to speak up. at the same time, we have to protect them; i can't go and ask people to speak up without making sure that society at large will also protect them. well, we did invite google board members, and people who left for the government, on the show. dr. timnit gebru, thank you. thank you. that's it for the show; we'll be back on wednesday, 10 years to the day since the leader of africa's richest country...
11:59 am
12:00 pm
russia announces it is to suspend the work of its permanent mission to nato in brussels from november 1st; the measure is a response to nato's decision earlier this month to expel 8 russian diplomats. ready for action: russia fills one section of the nord stream 2 pipeline with natural gas and awaits the final green light from regulators to start supplying europe, as a european commissioner warns of energy poverty on the continent. and, tortured and jailed for 17 years without trial: we explore the case of a pakistani national who is still in guantanamo despite being cleared for release, after it emerged he had been mistaken for a terrorist.