The Stream, Al Jazeera, April 26, 2023, 7:30am-8:01am AST

7:30 am
In this interview, Ari says he sees MotoGP overseas in his future: he really wants to be a champion, and he'd be one of the first South African champions as well. His dad runs his own race academy, having bought ten bikes to help other black youngsters gain access to the sport. He's a typical proud dad, but also confident in his son's ability to make it on the world stage. Sometimes you think he's very good at this, and then when you hear other people tell you the same thing, especially people that are very good at this and have been doing it for years, then you know. That's when you're like, no, honestly, my son just makes my eyes, you know, swell up with pride. I can't put it into words; he makes me so proud. With limited finances, sponsorship will be key for Ari, and he'll almost certainly need to move to Europe.
7:31 am
Now that he's old enough to compete abroad, he's applied to the Red Bull Rookies series for up-and-coming riders, which would be a big step towards achieving his MotoGP dream. David Stokes, Al Jazeera.

This is Al Jazeera, and these are the top stories. Fighting in Sudan has killed at least 459 people. Another ceasefire between the army and the paramilitary Rapid Support Forces has been repeatedly violated. The UN's refugee agency is warning that 270,000 people could end up fleeing to Chad and South Sudan. The UN Secretary-General has called on Sudan's warring generals to silence the guns and immediately stop fighting. Antonio Guterres says the conflict could set development back decades: the violence is not only putting that country's future at risk, it's lighting
7:32 am
a fuse that could detonate across borders, causing immense human suffering for years and setting development back for decades. Excellencies, the fighting must stop immediately. We need renewed efforts for peace. I call on the parties to the conflict, General Abdel Fattah al-Burhan and Mohamed Hamdan Dagalo "Hemedti", the Sudanese armed forces and the Rapid Support Forces, to silence the guns.

At least 57 people have died after two boats carrying migrants sank in the Mediterranean. Their bodies were found off the Libyan coast. Aid workers say they expect more bodies to wash ashore in the coming days. The United Nations says violence in Haiti's capital, Port-au-Prince, has reached levels similar to those of countries at war. There has been further evidence of the scale of the problem, with more than a dozen suspected gang members killed in an apparent vigilante attack. Indigenous
7:33 am
groups from across Brazil have held a rally in the capital, Brasília, during the annual Free Land Camp. They say current policies threaten their traditional lifestyle, and they are demanding the government do more to protect ancestral land. And Japanese company ispace says its unmanned lunar craft likely crashed during an attempt to land on the Moon; communications with the craft were lost just at the moment of the planned landing. Those are the headlines; the news continues here on Al Jazeera after The Stream.

Talk to Al Jazeera. Who is really fighting this Russian war? Is it Wagner, or is it the Russian military? We listen. We meet with global newsmakers and talk about the stories that matter, on Al Jazeera.

Hello, and
7:34 am
welcome to The Stream. I'm Ahmed Shihab-Eldin. Today: could a US Supreme Court case break the internet? We'll look at cases that are challenging what's known as Section 230, the shield law that protects social media and tech companies from being held accountable for harmful content posted online. The court's eventual rulings could have a major impact on the future of content moderation and internet free speech. Of course, we are always interested to hear what you have to say, so jump into our live YouTube chat, share your questions and comments, and be part of The Stream.

With us to discuss this is Megan Iorio, senior counsel with the Electronic Privacy Information Center, also known as EPIC; in San Francisco, Julie Owono, executive director of Internet Sans Frontières, or Internet Without Borders; and in Oakland, California, Mukund
7:35 am
Rathi of the Electronic Frontier Foundation; he's an attorney and legal fellow who focuses on free speech litigation. So many things to discuss here, a complicated topic for people who are not caught up. Welcome to you all. I'm going to start with the basics. Julie, what is Section 230?

Thank you; it's a great pleasure to be here. Well, it's important to remember that regulation of content moderation by social media, because that's what this is about, is restricted in the United States by the First Amendment on the one hand and Section 230 on the other, the First Amendment limiting, or preventing, government from making laws that abridge freedom of speech. But what does Section 230 say? On the one hand, Section 230(c)(1) says that platforms are not publishers; therefore, they're not liable for the speech on their platforms. The second aspect is Section 230(c)(2),
7:36 am
which is also referred to as the takedown clause, and states that platforms can remove any material they deem objectionable, for any reason. So we do see that platforms have some freedom to decide what they want.

I guess when we talk about Section 230, I'm really interested in how it's kind of shaped the modern internet, if you will. Megan, it was supposed to incentivize tech companies, if I'm not mistaken, to moderate their platforms. Is that not what it's doing?

That's exactly what it was meant to do, but almost immediately, courts misread Section 230 to immunize companies for any decisions they made about publishing third-party content. Since so much of what internet companies do can be categorized as publishing third-party content, Section 230 has eliminated a lot of potential liability for tech companies. And as a result, it's encouraged some pretty bad behaviors that disproportionately impact marginalized
7:37 am
groups. So for instance, Section 230 has allowed tech companies to ignore and even encourage harassment and abuse on their services, because companies claim that the harm comes entirely from third-party content. So, like, say...

No, no, I was just going to say, sorry. It's almost a bit confusing; I want to try to clarify this a bit. Maybe, Mukund, you can help us out, and then we'll dig a little bit deeper. Does Section 230 give freedom to platforms to kind of decide what content would be deemed, you know, acceptable for their spaces, their platforms, and what content they would moderate, without government intervention?

To be clear, Section 230 was not meant to only incentivize platforms to take down content. It was meant to protect their content moderation decisions in general. And some of the court cases that led to Section 230 involved people who were upset about what other internet users said about them on
7:38 am
blogs and forums; they sued those platforms. And Congress was worried that if people were able to sue because certain content was not taken down, not just over what content was taken down, platforms would not be able to evolve, and we would not have the flourishing internet that we have today. And there's no denying that there are a lot of problems with content on these platforms, but it's not just what's left up; a lot of very good content gets caught up too, including, when we come to the Gonzalez case, which involves content related to terrorism. When platforms start going after bad content, especially the big platforms that are using algorithms that are frankly not very well tuned, because it's hard to do for millions of pieces of content every day, you start taking down content not just by terrorists but about terrorists, not just terrorist propaganda but content from human rights organizations explaining what terrible terrorist acts have happened and what is being done about them. Those things get caught up in it, and that is not very okay for a lot of people.
7:39 am
So you're saying it's necessary to protect platforms. You also brought up the Gonzalez versus Google case, and for those who don't know, it concerns a very specific type of online interaction, with huge implications, which we're going to talk about. It's basically a lawsuit that stems from an Islamic State shooting in Paris that killed student Nohemi Gonzalez in 2015, and her family is arguing that YouTube directly recommended videos by terrorists and therefore violated laws against aiding and abetting terrorists. Now, I want to get into all of that. Before we do, though, this video comment was shared with us by John Bergmayer, who's legal director for Public Knowledge, and it raises some interesting questions that lead us directly into the Gonzalez versus Google case. Take a listen.

Things have changed a lot since 1996, when Congress passed Section 230. Section 230 is responsible for a lot of the growth of the internet and quite a bit of what people really like about the internet. And it is in part responsible for what
7:40 am
a lot of people don't like about the internet. And it's completely reasonable to think that the law should be changed to meet today's environment. However, the Supreme Court is really not the correct avenue for that. Effectively, the plaintiffs in this case are asking the court to rewrite the law and overturn decades of precedent in a way that I think would effectively overturn the law and throw out the baby with the bathwater.

Julie, John says the Supreme Court is not the correct avenue for that. What do you make of that?

That's an interesting take. It does highlight the fact that this case raises issues that go way beyond recommender systems, that go way beyond just Google, and that really interrogate: can we hold companies liable for recommending terrorist content? An extremely fascinating question. Right now, from our position, it would be difficult. First of all, it would require a very strict agreement on what terrorism means, because today we're talking about ISIS and the likes, but we also know that there are
7:41 am
very different understandings of what terrorist organizations are, and this is very visible in another issue related to content moderation, because platforms do have lists of organizations that are forbidden and that are considered terrorist. That's one thing. The other thing is, if we were to change the current regime in the manner that is asked of the Supreme Court, it would basically require those platforms to not only review the content that is available, that is already available, but also to review the recommendations themselves. And this would give yet another power to platforms: not only to decide what the content is, but also to decide what you will be individually able to see. Whereas I think the way it should be is that the end user can... I think...

Go ahead. I think Julie's pointing out something really important, which is that the way we draw lines in the free speech context has massive
7:42 am
ramifications, not just for the bad people that we're trying to target and the platforms who are making concerning decisions here about their content, and they deserve criticism for that. But drawing lines on speech is something that this country, under the First Amendment, as Julie mentioned earlier, has always been very wary of. This is a new context when it comes to free speech, but it's not a new issue. This has always been a question of: we have bad speech, we have people using their speech to do really terrible things, to recruit people to terrible causes. But the solution is not to make that speech illegal. The solution is for users to have more freedom, to create their own platforms where they can exercise their First Amendment right to say: this kind of content we want, this kind of content we don't want. Section 230 is really just enshrining in law fundamental free speech principles that the First Amendment already protects, and the Supreme Court should not mess with those principles. And I don't know if this is what John in his comment was getting at, but Congress should also not be amending and weakening Section 230.
7:43 am
So I'm going to... go ahead. Yeah. Megan, I mean, I know your organization is critical of Section 230. What do you make of Mukund's argument there?

Well, we're more for returning Section 230 to its original, very narrow purpose. But, you know, one of the things that really struck me about the oral argument in Gonzalez was how the justices seemed to think that their job was to engage in this policymaking, making these very nuanced decisions about what kind of liability there should be for algorithms and such. First of all, that has very little to do with Section 230; that has to do with the underlying liability statute. I entirely agree with everything that the others have been saying about liability for terrorist or terror-related speech, but it doesn't mean that we need an overhaul of Section 230; we need to limit those laws, or knock down the laws that interfere with speech. And what the court should be doing instead is
7:44 am
what the courts are meant to do, which is interpret statutes, and so much of last week had to do with that.

Okay, so interesting, thank you. Just go ahead, go ahead.

To give an example on this front: a few years ago, Congress passed a law called FOSTA that weakened Section 230 for content that Congress thought would be related to sex trafficking. That seems like a reasonable goal, right? We do want to stop sex trafficking; it's terrible, has many victims, and ruins people's lives. So we want to limit traffickers' ability to use social media. But what ended up happening, because of this big, broad law that Congress passed weakening platform protections for content related to sexuality, is that resources for sex workers and resources for victims of sex traffickers got taken down. Sex workers who voluntarily wanted to use these platforms as
7:45 am
a way to connect with their customers, because it's safer than standing on a street corner, got kicked off of many platforms, with platforms adopting broad policies limiting all sorts of sexual content, nothing to do with trafficking. So the point is, again, that the reason Congress passed Section 230 is that it realized there is going to be bad content and there's going to be good content, and part of free speech is platforms and users getting to decide for themselves: how are we going to draw a line between what we're going to leave up and what we're going to take down? That's not for the government, that's not for plaintiffs bringing lawsuits, and that's not for the court to decide. It's for us to decide.

Well, that makes sense to me. I want to take a moment here to bring in some YouTube chat, basically what's happening in our YouTube chat live. We have one viewer saying private companies limit free speech all the time; this is about the government limiting free speech. Then we have another saying Section 230 has allowed companies to ignore abuse and harassment on
7:46 am
their services. I see you nodding, Megan, from the periphery here in my eye. And because of that, I want to take a moment and listen to Matt Schruers. Matt is the president of the Computer and Communications Industry Association, and he's talking about what might happen, from the perspective of tech companies, if Section 230 were to change.

If we ratchet up liability rules, you get two potential results. One is that services over-moderate, over-filter, over-sanitize their content to ensure that there's no potential risk of liability. And the other is that they throw up their hands and abdicate responsibility. We want companies to moderate, to take action against content that violates their terms, that makes their communities less safe. But they're not going to get every single call right. And if courts penalize
7:47 am
companies that miss needles in haystacks, that sends a signal: don't look at all. And that turns the internet into a cesspool of dangerous content.

I mean, Mukund, isn't it already broken? I mean, it's confusing to me. The internet certainly seems...

There are a lot of problems with the internet, and there is no getting around that. But the problem is not Section 230. Look, I am no fan, I am no friend of the big tech companies. They have a lot of problems and they're doing a lot of things wrong, a lot of which we should criticize them for. To be clear, I'm not saying that there is no legal path, or nothing the government can do.

I'd appreciate the nuance, and I know that it's not, you know, black and white. So yeah, give an example.

If we're concerned about the power of these big tech companies, there are other laws to do something about that. When companies do something bad, what normally happens is consumers go to another company; that's how a free market is supposed to work. But that's not how it's working online, and
7:48 am
that's because these companies have become so big and powerful. That's what antitrust laws are for. Part of the reason they're so big and powerful is they have all of our data, so they're able to use their algorithms to lure us in, and so on. That's what consumer privacy laws are for. So there are other laws.

Yeah, ways to hold them accountable without regulating speech. Yeah. And Julie, I know you want to jump in. I mean, obviously we can all agree something needs to be done about this, you know, specifically about harmful content online. But Julie, I would imagine you don't think big tech should be getting a pass, as they have?

No, they shouldn't. And I think one thing that this case is highlighting perfectly is the need for transparency on how algorithms work. Because right now we're making assumptions, a lot of assumptions, based on research done mostly by external researchers. So this case could create a strong incentive for more transparency and oversight of algorithmic moderation and recommender systems. Now, when it comes to how to deal with that,
7:49 am
how to deal with big tech, well, certainly the way is not going to be government telling us what should be online and what shouldn't. That's extremely dangerous; we all know it historically. Yet that doesn't mean companies get away with it. But at the same time, we have to recognize our limitations. Here we are talking about Section 230 for Web 2.0 platforms, so the Facebooks and Instagrams and Google's YouTube and the likes, but we are on the brink of change. We're already talking about immersive technologies, and I'm thinking of course here about the metaverse. We're also talking about decentralized platforms. How do you do moderation? How do you moderate recommender systems, if there are any, on decentralized platforms? These are extremely important questions that, if dealt with from a narrow angle, we won't be able to answer.

So, Megan, I know you want to jump in, and before you do, I want to share with our audience something on Twitter. I think this is, yeah, this is your organization tweeting this out: I was ripping my hair out,
7:50 am
especially at the beginning of the oral arguments. This is what people from your organization said about what the justices got, and didn't get, about algorithmic harms. Now, I think, you know, what that underlines for me is that I don't presume to understand algorithms, and I'm not trying to poke fun at the Supreme Court's knowledge, per se. But I do have to say, you know, algorithms are not so simple, and isn't it the case that the tech company is kind of also telling us, through the algorithm, what we want to see? It's not like we're only looking for things, right? Is that a fair way of putting it?

Right. It's not that you're finding the content; the content is finding you. The tech companies' targeting algorithms don't just show people what they want to see based on their own inputs. Tech companies collect massive amounts of data on users, like, you know, how they interact with the company's website and their location,
7:51 am
and then they make inferences about their users, and then they recommend content based on those inferences. So what you see is what the tech company thinks is most likely to keep you on the service, so it can serve you more ads and generate more revenue, right? And the effect of the algorithm can be, you know... they think a stream of okay videos will keep you engaged, but a stream of extreme content might keep some people engaged too.

To be fair, yeah. Mukund, go ahead.

I agree with Megan on that. I don't like the framing that the tech companies are just trying to send us content that we want. I think that's giving them a little too much credit; the truth is a little more profit-oriented, for one. I mean, they are, as Megan said, trying to sell us ads; they're trying to keep us on their platforms. We're a captive audience. And so they're not doing this out of the benevolence of their hearts,
7:52 am
but we do need to make sure there is speech protection for the many other, smaller platforms that are trying to serve the public, that are, you know, sometimes groups of people who get together because they want to create a website or a forum or a blog. Section 230 protects those people also, and they need to have breathing room, because, and Google's lawyer acknowledged this at the Supreme Court, fair enough: if the Supreme Court weakens Section 230, Google might stay alive. It has lawyers, it has great law firms and people working for it, it can fight lawsuits. But it's the smaller platforms that are going to get hit. And so again, I agree with Megan's concern about the way these companies are collecting the data and the way they're designing these algorithms. But again, the solution is not to attack the algorithmic recommendation; that is a publishing act, that's an act of speech. We should go after the way they are collecting this data, because privacy is also important, right?

It is, and one that we should talk about more. And we're going to try to get all of this in, in the last few minutes of this important conversation. So if you'll just bear with me,
7:53 am
I want to share with you a clip from February 21st. This is when the Supreme Court held oral arguments in the Gonzalez case. Take a listen.

I could imagine a world where you're right that none of this stuff gets protection. And, you know, every other industry has to internalize the costs of its conduct; why is it that the tech industry gets a pass? A little bit unclear. On the other hand, I mean, we're a court. We really don't know about these things. You know, these are not, like, the nine greatest experts on the internet.

There is no word called recommendation on YouTube's website. It is videos that are posted by third parties; that is solely information provided by another. You could say any posting is a recommendation; any time anyone publishes something, you could say it's a recommendation.

But the videos just don't appear out of thin air;
7:54 am
they appear pursuant to the algorithms that your clients have. And those algorithms must be targeted to something, and that targeting, I think, is fairly called a recommendation, and that is Google's.

Watching that, or hearing that, I should say: Megan, I'm wondering, I mean, so many questions come to mind, right? Like, what role do algorithms play in promoting extremist content specifically, or causing real-world harm? But then, if we go back to the profit question, I mean, many people seem to think that algorithms are somehow inherently more neutral than human beings, right? That human beings might have bias, but algorithms are designed not to. Is that true? I mean, I take your point, Mukund, and this is just my personal opinion, just having covered tech companies: these apps aren't designed for free expression or all the lofty wording that, you know, the companies put out there. I think they're designed for profit. And why is that such an important part of this debate, Megan? And when we look to the next steps,
7:55 am
what are the solutions, in your mind, Megan?

So, that's exactly correct: the algorithms are designed for a particular purpose, which is to make a profit for a company. And they're designed by people, and people have biases; they have objectives, and these all end up baked into the algorithms. So yeah, it's a basic misconception that algorithms are neutral; there's no such thing as a neutral algorithm. And the thing is that...

I'm sorry, please. No, no, I'm so sorry, it's just, I want to try to get in, actually, because I interrupted, if I may, the privacy law aspect of this. To do that, I want to share with you a video comment that was sent to us by Caitlin Vogus, and then we'll come back to Megan. Take a listen.

Section 230 also encourages services to voluntarily moderate content like hate speech and disinformation, by ensuring that those who moderate are not at greater risk of liability than those who don't. But Section 230's
7:56 am
liability shield is not absolute, and it doesn't apply to claims that have nothing to do with third-party speech, like those based on competition or privacy law. Comprehensive federal privacy legislation could address some of the harms that are caused by providers' collection of users' personal data and the algorithmic recommendation of content, like that at issue in Gonzalez.

So, how could antitrust and privacy law actually address the content moderation problem? What do you make of what Caitlin said there?

I think Caitlin is very on point here, and this is really important, because when we talk about things like bias and profit, those can be bad incentives, but they do not eliminate protection for speech. The New York Times is biased; it's pretty clear from its editorials. It also makes a profit; it's a company. It also very clearly has free speech rights about what it says and how it says it, how it appeals to readers, and so on. And Caitlin's right that, rather than going after... and people have been concerned about the dominance of
7:57 am
traditional media for a very long time, but the government has never been allowed to go after their speech, right? So if we are worried that Google has this huge platform where everyone's going, and it's hard for people to leave because of its algorithms, let's look at whether it's violated antitrust laws. The reason its algorithms are so effective at recommending bad content and luring people in, and frankly it's hard to leave, social media is addictive, is that they have designed these algorithms so effectively based on all of the data they've collected on us, right? So then let's do something about the incentive, about the objective of earning money.

Okay, well, for the last word I'm going to come to you, Julie. Go ahead.

Can we briefly discuss this? Yes, this has been done in the European Union, where the Digital Services Act right now prevents platforms from targeting minors, for instance. But beyond that, when we talk about Section 230, we should think about the broader context as well. Any decision that will be made will have incredible importance and international impact, given the interconnectedness of the
7:58 am
space that we're talking about. We could end up in a situation where what is allowed in the United States becomes totally forbidden in other countries. How do we reconcile these as an internet community? How do platforms reconcile it? That's their problem. But really, the question is: how do we continue to make sure that the internet, the social media space, allows for speech, including speech that we disagree with, and including speech that helps us understand the world we live in today?

All right, well, you know, I have so many more questions for you three, but I want to thank you, first and foremost, for joining us on this show. That's all the time we have for today. Thanks for watching, thanks for joining us, and see you next time.
7:59 am
The climate has changed every year for millions of years. Decades of talk but little action; it's all about distraction, to create confusion, to create smoke and mirrors. The shocking truth about how the climate debate has been systematically subverted: the oil industry was a main bankroller of opposition to climate action, the campaign against the climate. On Al Jazeera.

Saigon: the press retreated to the Caravelle, a media hub and vital vantage point during the first truly televised war. From the
8:00 am
roof, we could see the evacuation at the American embassy, where the most iconic images of the conflict in Vietnam were transmitted to the world. This was the front-row seat to the final stages of the war. Saigon's Caravelle: War Hotels, on Al Jazeera.

The Sorbs are believed to have migrated in the 6th century from the Carpathian Mountains in Central Europe to these forests in present-day Germany. Tens of thousands still live in this region, not far from the Polish and Czech borders. Every year at Easter, Martina Hoffmann tries to preserve the Sorbian tradition of egg decorating, like her ancestors. She gives the eggs different symbols, a special pattern, just for good fortune, happiness and prosperity. The Sorbs have managed to preserve their culture for over a thousand years, mainly because they lived quite isolated along these waterways. They survived wars, communism, Nazism. But now technology is threatening their native language.
