
The Stream | Al Jazeera | April 18, 2023, 5:30pm-6:01pm AST

5:30 pm
all these unknown habitats, as we all know, it is very difficult to obtain information from more than 40 or 50, or even 60 meters deep. with these tools, this technological equipment, we will be able to access this information. the discovery is the first of its kind inside the galapagos marine reserve. it is believed this reef has been here for many centuries, supporting potentially unique marine life deep under the ocean surface, and far away from our damaging actions. stephanie dekker, al jazeera.

i'll take you through the headlines now. the warring sides in sudan have agreed on a 24-hour ceasefire to take effect at 1600 GMT. the UN says more than 185 people have been killed since the fighting between the army and the rapid support forces began on saturday. hiba morgan reports from khartoum,
5:31 pm
when the first ceasefire was announced on sunday, and then the second on monday, both did not last. people were not able to leave their homes. and even now, ahead of the ceasefire that was agreed upon by both sides following international pressure, there are intense airstrikes, so intense that at the building where we are, we are forced to stay in the basement for safety reasons. so it is not clear if both sides will actually adhere to the ceasefire that their top commands have agreed to.

police in tunisia have arrested three prominent officials of the main opposition party, hours after detaining its leader and raiding its headquarters. ennahda leader rached ghannouchi is the main opponent of president kais saied. u.s. journalist evan gershkovich has lost an appeal against his pre-trial detention at a hearing in moscow. the wall street journal reporter was arrested last month by russia's security services on espionage charges. the u.s. says he is being
5:32 pm
wrongfully detained. the kremlin has released video showing president vladimir putin visiting two army posts in russian-controlled areas of ukraine. the trip is the second visit by putin to occupied areas of ukraine since march. putin spoke to military leaders about the combat situation in the southeast. three people have been killed and more are missing after a landslide in northern pakistan, near the afghan border; a highway near the town of torkham was hit before dawn on tuesday. at least six palestinians have been wounded in an israeli raid on the refugee camp in the occupied west bank city of jenin. the palestinian health ministry says the individuals are in stable condition and being treated in nearby hospitals. those are the headlines. visit aljazeera.com for the latest. next up is the stream.
5:33 pm
frank assessments. "justice means to give them the basic human rights, not only in the camp, but also inside myanmar." informed opinions. "the administration is very concerned about this development, especially what it means for china's power on the world stage." critical debate. in-depth analysis of the day's headlines. inside story, on al jazeera.

hello and welcome to the stream. what you're watching is josh rushing, generated by AI. on today's show, we look into the dangers of AI-powered disinformation and how it could be used to manipulate the public. fears of a new wave of disinformation loom as sophisticated AI programs become increasingly
5:34 pm
adept at making images and text. how could they be used to distort our sense of reality, and what is being done to protect the public from misinformation and malicious actors? and what you're watching now is the real josh rushing. or is it? well, that's the point of today's show. so let's talk to some experts about that. with us to talk about the future of AI and disinformation: from cambridge in the UK, henry ajder, an expert broadcaster and synthetic media researcher; in new york, sam gregory, executive director of witness, an organization that looks at the intersection of video and human rights; and in austin, texas, audra nadeem, the chief operating officer at an artificial intelligence company. hey, one more guest at the table: that's you. so if you're watching this on youtube
5:35 pm
right now, we've got a live producer right there in the box waiting to get your questions to me so i can get them to the experts. so why don't we do this together, right? ok, sam, can you start us off? i want to talk about the letter that came out today, that's news, but let's save that for just a second, because what gets me is, i mean, weeks ago chatgpt was like the thing, and then a couple of weeks later we're already at GPT-4, and things seem to be changing really fast. and i know the deal with AI is, the theory is that you've crossed a tipping point before you realize it. so can you just kind of catch up people who maybe aren't obsessed with this like me? where are we right now in AI development, and why is this important?

so we've had three to five years of really foundational research that started to create the ability to do things like create deepfakes, like a face of someone who never did something; started to create the ability to have
5:36 pm
interactions with chatbots that gave you apparently human answers; started to make it easier to falsify or clone someone's voice. and what's happened in the last year is we've seen this very rapid progress of these tools getting into the public eye and becoming accessible. since the summer of last year, we've gone from tools like midjourney, which came out in july 2022 and last week created an image of the pope in a puffer jacket, or an apparent arrest of president trump. and in parallel, we've seen the advent of tools like chatgpt that allow you to communicate with what appears to be someone giving you an answer, and maybe doesn't appear like a machine, but is actually a machine based on a large language model. right, right. i want to bring in a video comment. this is from a professor at princeton university, arvind narayanan. check this out. there is an open letter making the rounds today calling for
5:37 pm
a moratorium on new AI. unfortunately, this is a distraction from the real issues. the idea that AI is going to break free and kill us? that's pure speculation. the real harms that are actually happening are totally different. companies train these models on the backs of people's data but give them nothing in exchange. we could solve that today; we don't need six months. tax the AI companies and compensate the artists. unfortunately, we don't seem to have the political will to consider anything like that.

ok, so i want to show the audience my laptop. this headline says getty images is suing the maker of stable diffusion for a staggering $1.8 trillion; that's $150,000 per image for all the images they took. so this is what he was talking about in the video comment, this kind of internet scraping. henry, you can touch on that, but he began the comment by mentioning the letter today that elon musk was also a part of. can you bring us up to date on that letter, please? and then maybe you could touch on this idea that he brought up. yes,
5:38 pm
certainly. so this letter was published today. it was an open letter featuring, i believe, over a thousand industry experts, researchers and academics, including, as you mentioned, elon musk, but also people like jaan tallinn, the co-founder of skype, essentially asking for a moratorium, for the brakes to be put on AI development, particularly in the generative space, with a focus on models like GPT-4, but also moving on to other areas, you know, as you mentioned, midjourney, text-to-image, and so on. and so this has really come off the back of the massive arms race we're effectively seeing, where, as i mentioned, within the space of less than a year midjourney is on version 5 and all of the big tech companies are piling in to try and build better and more accessible models. and perhaps the concern is that there isn't enough proactive safety and ethics consideration going on here, and the fear is it will be reactive, but too little too late. with regards
5:39 pm
to the comment from your commenter around training, he is indeed right that these models are voracious: they consume immense amounts of data to get to the state that they're in. we're talking, you know, hundreds of millions of photos, and indeed most of the web being scraped in the context of text. and many of the companies that license stock imagery, and indeed artists, are saying: look, this is a violation of our copyright. it is a controversial issue, with some people saying, well, they're not actually copying, they're sort of taking inspiration, and other people saying, look, we need to update copyright to make it fit for purpose in this new age of synthetic content, and using data in this way, without permission, without royalties, is something we can't accept moving forward.

you're at an AI company. are you in an arms race? and do you think there needs to be someone outside of the companies developing the
5:40 pm
stuff, stepping in with some kind of regulatory measures? definitely. so we work on text-to-video, which is kind of the next big wave of what you're going to see being generated by AI, or generative AI, as we call it. i think it's going to have to be a public-private partnership. regulators and companies racing ahead on their own has never been the solution, and neither is just imposing regulation top-down, because i don't think DC, or any of the politicians, really grasps what the technology is. and on the other hand, a lot of the people working on it might not fully understand the social and political aspects of what goes into it. so something like a public-private partnership, with some sort of framework, almost like a constitutional-law kind of thing, in place; and these can vary country by country, or by geographic boundaries, in how we come up with solutions. at least for me, i think that's going to be the best way to go about it. i want to
5:41 pm
bring in another video comment. this is jesse lehrich, co-founder of accountable tech. "the rapid escalation of the AI arms race really underscores how far behind the US is when it comes to regulating tech. we've done nothing to grapple with the catastrophic societal harms social media giants have unleashed over the past 15 years. we don't even have a federal privacy law in place. and now these same companies are trying to capture the next market, rushing out half-baked generative AI tools that pose entirely novel threats to the information ecosystem. we're not prepared to grapple with them, and we do need accountability; that's the job of government. but in the short run, we all need to relearn our critical thinking skills and approach content with healthy skepticism, especially if it seems to confirm our existing biases." so, sam, i'm checking in on our youtube audience right now,
5:42 pm
and it seems like the tone is fairly scary and dystopian. is there real reason to be fearful here, or is this being blown out of proportion? i think there is real reason to be fearful. i've spent the last five years working globally on this issue, talking to people who already face these problems, right? attacks on them with fake images; attacks using nonconsensual sexual images of women, which is the biggest way people use these synthetic media deepfake models already. and for all of those five years before this letter came out, folks were saying: we need action, we need to think about how this is going to be controlled, we need to think about the safeguards that are going to be built into these systems. so i think it's right for us to be fearful. at the same time, we also have to recognize that when we create a hype cycle around this, it benefits certain people. it benefits people who get away with saying you can't trust what you see online. it benefits big companies. when we say let's put a moratorium on development, it benefits the incumbents. like jesse said,
5:43 pm
so i think we have to be fearful, but also really think about what are the harms that we already know are going to be generated here. let's not think about hypotheticals. the letter, for example, from the future of life institute was very focused on big-picture future hypothetical harms, when we already know the harms from what's happening: the way social media is playing out, the way mis- and disinformation are playing out. so it's time to respond to those needs right now, rather than play to hypothetical fears further down the line.

henry pointed out rather astutely, though: who is it that's going to step in? because i look at congress in the US and what their average age is; this is technology that i don't think they fully get. so who can step in as a regulator? who should step in and regulate this?

yeah, i think the tiktok hearing the other day was another example, like the zuckerberg hearing a few years back, that highlighted some of the ignorance, the lack of knowledge, around emerging technologies. and regulation is
5:44 pm
a really tricky one. i mean, we have players around the world who are considering, and are actively working to implement, legislation. notably, in the EU, the AI act would classify many of these generative technologies as what they call high risk, which would put further measures in place for creators of these tools in terms of how they source data and how they disclose the outputs of their tools to audiences, to people online. but then we also have a country like china, which this year introduced the deep synthesis law, which takes it a step further and says: we are going to actively moderate the content in terms of whether it's fake or real, and we get to say if you are, essentially, committing a crime in publishing that content; which is perhaps a little too far, considering their track record on censorship. but i think
5:45 pm
your commenter, again, is right. currently, governments; you know, sam and i have been warning for years about this problem, trying to get legislation going and to get key stakeholders engaged. and as is often the case, it takes a real shock, it takes something like this to wake people up. and i think the US has a fair bit of work to do, as does the UK, to kind of get in line here.

yep. and henry, is it country by country? can you trust a country-by-country solution for this? because the internet seems to know no boundaries that way.

well, i'm not entirely sure what the alternative would be. i mean, the UN is the only body that i'm aware of that could potentially try to get all of its member states involved in some kind of draft legislation to cover this kind of action. but again, look at the dynamic between china and the US around chip manufacturing, and the arms race not just between companies but also between countries. i think it's very unlikely that you're going to get
5:46 pm
that kind of international consensus built that way, unfortunately. which leads to the difficult challenge of countries trying to balance innovation with safety, which is a tough balance to strike.

let me jump in there. i think there's also a different framework that's required in how we think about it. so there are dangers: for example, people faking phone calls where they can, you know, copy the voice of somebody that you know. we end up educating people on having safe words, or having these secret phrases within their family, and on not sharing information online or with a stranger on the phone, even if they do sound like somebody you know. there's a whole conversation that needs to happen in terms of security. also, there is revenge porn, and a lot of this is used to abuse women; more education needs to go into how the police deal with it today and how easily and quickly complaints can be made. so there are
5:47 pm
immediate things that need to be done, and then there are more policy-wide things. but at the same time, i think a huge responsibility also lies with the people building these technologies: how you philosophically deal with it, and, for example, the creators whose data you're using. how do you incentivize those creators? how do you pay those creators? how do you create more opportunities for them to build things that they can monetize? and i think a lot of those conversations are happening, but because of all the fearmongering around it, and i agree with sam, there's a lot of money to be made in fearmongering, right, "it will get into the hands of the wrong people", we're talking about that and these images, and the more philosophical questions don't get enough room at the table. and i think we need frameworks for those far more than for "the machines are going to take over the world", because we're nowhere near that. i want to stop the conversation here for a second, guys,
5:48 pm
because i want you to expand on something you said at the top to make sure our viewers hear it. explain this idea that you need to have a safe word with your family. what, why, what are you talking about? i want people to get this.

yes. for example, there are tools available now where my voice can be used, you know, for a phone call, and somebody can call my dad and be like: "hey dad, i'm in this emergency, can you give me your bank account details, or can you give me your credit card information so i can pay for x, y and z?" that's like the simplest scam ever. and what we need to have, as a family, is this conversation: okay, if this happens, what's the first question that you're going to ask? that question remains within the family; you don't write it down anywhere, you don't put it on your phone, you don't put it on a google doc. and that way, if you do receive one of these phone calls... again, a lot of it goes back to very simple things, like how not to get conned one on one. but i
5:49 pm
think the need to have these conversations with our kids, with our families, is there, and we don't have enough of that conversation. sam, you wanted to jump in?

yeah. you know, one of the things i think is really important is that we don't place too much pressure on individuals to handle this. i just came back from a meeting we ran last week with 30 activists and journalists from across africa, talking about this because of the urgency of responding to these types of AI-generated images. and, you know, when we looked at images, we weren't necessarily able to discern that they were AI-generated. what that tells us is that instead of trying to put the pressure on each of us, for example, to look at these pictures we see online and try to guess whether they're manipulated, we really need to think about the pipeline. and this is where placing the responsibility on tech companies and distributors matters. if a tech company is building one of these tools, are they placing the signals within it that enable you to see that it was made with a synthetic tool? are they providing watermarks that show that it was generated with
5:50 pm
a particular set of training data? are those durable across time, so that they're available to someone watching it? because otherwise we end up in this world where we start getting skeptical about every image: we look really closely, we try to apply all our media literacy, and it undermines truthful images, right? and that's the biggest worry, that we create a world where we can't trust anything, because we create this assumption. and if we place that pressure on individuals, particularly individuals around the world who don't have access to sophisticated detection tools, who exist in environments where they're not being supported by the platforms very well, then we're doomed to be in a very tough situation. so let's not place the pressure exclusively on us as the viewers; we really need to push the responsibility up the pipeline to the technology developers and the distributors.

i want to share something that speaks to what you're talking about here. this is a story about the great cascadia earthquake of 2001. it has pictures of scared families while it's happening; the city is destroyed. i mean, look at some of these,
5:51 pm
right? this earthquake never happened. and if you really look at this photo, which is a great photo of someone on a battered beach holding a canadian flag, and you zoom in on it, the hand is backwards on the right-hand side, right? but this looks so real. and what i'm really concerned about, henry, is: here in the states, we saw how misinformation and disinformation affected an election in 2016, and i think it affected brexit there as well, with cambridge analytica. what does cambridge analytica look like in AI for future elections?

yeah, it's a good question, and i think i just need, very briefly, to echo sam's comment. you know, i've had a lot of journalists reaching out to me over the last few days saying, can you tell us how to spot a deepfake as an individual? and i kind of have to caveat everything i say by noting: look, these images, like the ones you just showed, are getting better and better. if you look at the images midjourney was generating back in july of last year, you
5:52 pm
know, the new ones coming out at the moment are, you know, leagues ahead in terms of realism. and so if we put the burden on the individual to look for telltale signs, like the hands, which are improving all the time, it's going to give people false confidence that they can detect something, when actually it might be trained out of the models by the new versions. in terms of the election context, it's a really interesting one. and, you know, again, sam and i have been hearing every single election in the US, midterm or presidential, that this is going to be the one where deepfakes cause chaos, right? a video is going to leak, or an audio clip is going to leak the night before the election, and it's going to swing it, or it's going to cause, you know, chaos.

i'm wondering, as you're talking... we're showing the trump-getting-arrested photos, just so you know. go ahead.

yes, yes, like that image, which luckily didn't fool many people, in contrast to the image of pope francis, which we may get on to. but yeah, i think this election will be different, not necessarily because i think that kind
5:53 pm
of worst-case scenario, an indistinguishable fake that even the media and experts such as myself and others can't detect, will happen, but because we're going to see just a mass proliferation of this content, as we're seeing right now. you know, there are videos online of all of the presidents with their voices cloned, playing mario kart, and it's kind of really convincing on one level, right? and there are a lot of images, as you showed, of the presidents in these kind of kooky scenarios, a lot of memes, a lot of artistic content, and then, as you said, some, you know, low-scale disinformation. luckily, most of it is detectable right now, but the direction of travel and the speed of advances mean that that's not something we can be as sure about as we have been in previous elections: that it won't have a serious impact, really confuse people, and potentially play into the kind of fractured information ecosystem that we are currently experiencing. as you
5:54 pm
mentioned, in the US, and here in the UK as well. sam, if you want to jump in?

yeah. it feels like there's a big shift that's really important to name, and that's around the ability to do this at volume, right? this has been quite niche before. and then, potentially, also to personalize it, as we've already seen; you know, the sense in which you can ask chatgpt to do something "in the style of", right? so when people have done research on this, they look at this ability to create volume, and the ability to personalize, which of course can be used to target people. and now we're seeing this commercialized and available to people. and so all of those make this a more vulnerable election, to maybe organized actors, and to lots of people who just want to have fun with the process, like trolls. so we have to be careful of all of those. i also want to name that one of the biggest threats we've seen, in elections but elsewhere too, is nonconsensual sexual images. and we don't always know they've been used, and they're used to target women politicians, women journalists, women activists. and so that's definitely going to be happening under the surface, even if we don't know it's happening visibly. and so we should name that one as
5:55 pm
a threat that doesn't get the headlines the way disinformation does, but it's equally harmful to civic space.

yeah, youtube is chiming in here. i'm looking at someone named anti cage taro insights. one of my favorite parts of the show is when i use people's youtube handles; they have kind of crazy names. but she has real concerns and says: "my fear of AI is folks say it is neutral, but it's very racist because of contracts with police and the militaries", i don't know if that's true, "and that racism is going to be recreated in art and writing. i'm an artist. there's a lot of text that can be created in record time, but also those extra-white AI faces." so i guess i'd ask audra to talk for a moment about that, the way that race, gender and identity might play into what we're getting from AI here.

yeah, definitely. so i think the easiest answer is: that's not true. there
5:56 pm
are biases that are built into, like, chatgpt. there was that example where it would say all the nice things about biden but not about trump, and people at the back end are fixing these every single day. again, it's humans all around: you are going to see somebody's biases in the data. but as more and more new versions of these models come out, there are different, almost like, stages being built where you can dictate what kind of voice, what kind of opinion, you want it to have. so, for example, if you want it to be shakespearean, or if you want it to be more liberal, whatever you want it to be, you can change it and get opinions more in that direction. so i think as more and more of these models get trained, we're going to see more optionality for people to generate the things that they want. and again, can it be used to create harmful content? a hundred percent. but all the biases that we see are kind of added, built in, by the
5:57 pm
humans behind the model. so the models themselves are not biased; they just reflect humanity.

and henry, i'm going to ask you a soft question now and give you 60 seconds to answer, because the show is going to end whether you do or not. my son said, you know, they said that chess was uniquely human and computers would never be able to beat humans at chess, and of course they did, and made it look easy. what does this tell us, what does AI tell us, about being uniquely human? is there something we learn about humanity from this?

well, thank you for the generous 60 seconds. yeah, that's a really tough question. i think, look, we've seen time and time again AI able to replicate certain aspects of human performance and capabilities in quite narrow domains. and this, i think, is one of the first times that's really flipped the narrative on its head: you know, we thought low-skilled, repetitive, narrow jobs were going to be replaced by AI, and instead we're seeing, you know, creative industries getting really shaken up by this. at the same time,
5:58 pm
i do think that AI is not able to be fundamentally creative in the way we talk about it with humans, necessarily. i don't think there is an intentionality; there's not a consciousness which is acting on the world in that way. so, yeah, some aspects can maybe be replicated, but not that. henry, audra, sam, i want to thank you for being on the show today, and all the humans that joined us out there watching. we'll see you next time on the stream.

and this is an enormous emergency for literally billions of the world's population.
5:59 pm
earthrise explores how different faiths across the globe are rallying communities. "we are actually supposed to be caretakers of the earth." on a mission to rebuild our broken relationship with the planet: "if we can mobilize that huge proportion of the world's population, we've got a really great lever for change." on al jazeera.

al jazeera: explore an abundance of world-class programming, designed to inform, motivate and inspire. you are now with al jazeera. from breaking down the headlines to exposing the powers attempting to silence reporting. what did you do?
6:00 pm
what did you investigate? why didn't you ask these questions? "there are many journalists that self-censor; it will have a chilling effect on subsequent stories." the listening post doesn't cover the news; it covers the way the news is covered, and who suppresses, moderates and, in some cases, amplifies the content you see on your feed. the listening post, on al jazeera.

hello, it's great to have you with us. this is the news hour, live from doha. coming up in the program: hopes for a respite after days of violence across sudan; a 24-hour ceasefire is due to start in the next hour. "i would personally prefer that we die from hunger rather than die from bullets."


