TV: The Stream, Al Jazeera, March 29, 2023, 10:30pm-11:00pm AST
10:30 pm
Thirty-seven metres long, almost the equivalent of four double-decker buses, three times the size of a T. rex: it would have dwarfed elephants and humans. Paleontologists are still learning about how they were able to evolve to become so vast. "It's really hard for us to fathom how they could get to these enormous sizes. Various aspects of their skeleton meant they could walk around at these large sizes: huge pillar-like legs, really wide hips that helped stabilize the body. And also they would have had a gigantic heart and a huge gut to help power all of the energy that you need to move that body around." As well as inspiring wonder, this exhibit hopes to also remind people we have our own titans to protect, like the African savannah elephant, which is currently endangered. By preserving their habitats and preventing illegal poaching, we can stop them from becoming extinct. Charlie
10:31 pm
Angela, Al Jazeera, London. And now one of the top stories on Al Jazeera: Mexico's president has promised to find those behind Monday's fire at a migrant detention centre, which killed 38 people, as anger grows over a video circulating on social media which appears to show security footage from inside the centre in Ciudad Juárez during the blaze. Three uniformed people can be seen walking past without opening the door of a locked cell holding several men. "I say to all of those people who died: the guards could have opened the gates, because there were only a few metres between the gate that separated them from the migration offices. They didn't open the gate, leaving them locked in. The fire advanced and they didn't leave. They didn't help them because they didn't feel like it. The guards treat you badly." Tension is rising between Israel and its
10:32 pm
closest ally, the United States, over Prime Minister Benjamin Netanyahu's controversial plans to overhaul the judiciary, and the biggest protests in the country's history. Netanyahu delayed the plan, and a second day of negotiations has been held with the opposition. President Joe Biden has urged him to walk away from the controversial plan. That prompted a defiant response from Netanyahu, who said Israel won't bow to international pressure, though he says he thinks a compromise with the opposition could be possible. "Israel and the United States have their occasional differences, but I want to assure you that the alliance between the world's greatest democracy and a strong, proud and independent democracy, Israel, in the heart of the Middle East, is unshakable. Nothing can change that." The United Nations has adopted a landmark resolution asking the world's top court to define the obligations of countries to combat climate change. It was passed by more than
10:33 pm
130 countries. The call for the International Court of Justice to provide a legal opinion on the climate crisis was inspired by law students from Vanuatu. Those are the headlines; do stay with us. The Stream is up next, looking at the future of AI-powered disinformation, with more news for you straight after that. Thanks for watching, bye for now. With neither side willing to negotiate, is the Ukraine war becoming a forever war? Is America's global leadership increasingly fragile? What will US politics look like as we head to the presidential election of 2024? The quizzical look at US politics: The Bottom Line. Hello, and welcome to The Stream. What you're watching is Josh Rushing, generated by
10:34 pm
AI. On today's show, we look into the dangers of AI-powered disinformation and how it could be used to manipulate the public. Fears of a new wave of disinformation loom as sophisticated AI programs become increasingly adept at making images and text. How could AI be used to distort our sense of reality, and what's being done to protect the public from misinformation and malicious actors? And what you're watching now is the real Josh Rushing. Or is it? Well, that's the point of today's show. So let's talk to some experts about that. With us to talk about the future of AI and disinformation: from Cambridge in the UK is Henry Ajder, an AI expert, broadcaster, and synthetic media researcher; in New York, Sam Gregory, executive director of WITNESS, an organization that looks at the intersection of
10:35 pm
video and human rights; and in Austin, Texas is Nadeem, the chief operating officer at Opus AI, an artificial intelligence company. Hey, one more guest at the table: that's you. So if you're watching this on YouTube, look, we've got a live producer right there in the box, waiting to get your questions to me so I can get them to the experts. So why don't we do this together, right? OK. Sam, can you start us off? I want to talk about the letter that came out today, that's news, but let's save that for just a second, because what gets me is it seems like, I mean, weeks ago ChatGPT was like the thing, and then it's like a couple of weeks later we're already on to GPT-4, and things seem to be changing really fast. And I know the deal with AI is, like, the fear is that you've crossed a tipping point before you realize it. So can you just kind of catch up, maybe, people who aren't obsessed with this like me: where are we right now in the development? Why is this important?
10:36 pm
So we've had 35 years of really foundational research that started to create the ability to do things like creating fakes, like the face of someone who never did something; started to create the ability to have interactions with chatbots that gave you apparently human answers; started to make it easier to falsify or clone someone's voice. And what's happened in the last year is we've seen this very rapid progress of these tools getting into the public eye and becoming accessible. So since the summer of last year, we've gone from tools like Midjourney, a tool that came out in July 2022 and last week created an image of the Pope in a puffer jacket, or an apparent arrest of President Trump. And then in parallel, we see the advent of tools like ChatGPT that apparently allow you to communicate with what appears to be someone who is giving you an answer, and maybe doesn't appear like a machine, but it's actually a machine based on a large language model. Right, right. I want to bring in a video comment. This is from
10:37 pm
a professor at Princeton University, Arvind Narayanan. Check this out: "There is an open letter making the rounds today calling for a moratorium on new AI. Unfortunately, this is a distraction from the real issues. The idea that AI is gonna break free and kill us, that's pure speculation. The real harms actually happening are totally different. AI companies train these models on the back of people's data but give them nothing in exchange. We know how to solve that today; we don't need six months. Tax the AI companies and compensate the artists. Unfortunately, we don't seem to have the political will to consider anything like that." OK, so I want to show the audience my laptop. This headline says Getty Images is suing Stable Diffusion for a staggering $1.8 trillion; that's $150,000 per image for all the images they took. So this is what he was talking about in the video comment, this kind of internet scraping. Henry,
10:38 pm
you can touch on that. But he began the comment by mentioning the letter today that Elon Musk was also a part of. Can you bring us up to date on that letter, please? And then maybe you could touch on this idea that he brought in. Yes, certainly. So this letter was published today. It was an open letter featuring, I believe, over a thousand industry experts, researchers and academics, including, as you mentioned, Elon Musk, but also people like Jaan Tallinn, the co-founder of Skype, essentially asking for a moratorium, for the brakes to be put on AI development, particularly in the generative space, with a focus on models like GPT-4, but also moving on to other areas, you know, as you mentioned, Midjourney, text-to-image, and so on. And so this has really come off the back of this massive arms race we're effectively seeing, where, as Sam mentioned, within the space of less than a year Midjourney is on version 5, and all of the big tech companies are piling in to try and build better and more accessible models. And perhaps, you know,
10:39 pm
the concern is that there aren't enough proactive safety and ethics considerations going on here, and the fear is it will be reactive, but too little too late. With regards to the point from your video comment there around training: he is indeed right that these models are voracious. They consume immense amounts of data to get to the state that they're in; we're talking, you know, hundreds of millions of photos, and indeed most of the web being scraped in the context of text. And indeed, many of the kind of companies that own licensed stock imagery, and indeed artists, are saying: well look, this is a violation of our copyright. It is a controversial issue, with some people saying, well look, they're not actually copying, they're sort of taking inspiration. But obviously some people are saying: well look, you know, we need to update copyright to make it fit for purpose in this new age of synthetic content, and using data in this way without permission,
10:40 pm
without royalties, is something which we can't accept moving forwards. You're at an AI company. Are you in an arms race, and do you think there needs to be someone outside of the companies developing this stuff stepping in with some kind of regulatory measures? Definitely. So we work on text-to-video, which is kind of the next big wave of what you're going to see be generated with generative AI. I think it's going to have to be a public-private partnership. Taxing companies, or anyone, is never the solution; neither is putting regulation in place hastily, because I don't think DC, or any of the politicians, really grasp what the technology is. And on the other hand, a lot of the people working on it may not fully understand the social and political aspects of the things going into it. So a public-private partnership that puts some sort of almost constitutional-law kind of things in
10:41 pm
place. And these can be concrete by country or by geographic boundaries. Coming up with solutions that way is going to be, at least for me, I think, the best way to go about it. I want to bring in another video comment. This is Jesse Lehrich; he's the co-founder of Accountable Tech: The rapid escalation in the AI arms race really underscores how far behind the US has fallen when it comes to regulating tech. We've done nothing to grapple with the catastrophic societal harms social media giants have unleashed over the past 15 years. We don't even have a federal privacy law in place. And now the same companies are trying to capture the next market, rushing out half-baked generative AI tools that pose entirely novel threats to the information ecosystem. We're not prepared to grapple with them, and we do need accountability; that's the job of government. But in the short run,
10:42 pm
we all need to relearn our critical thinking skills and approach content with healthy skepticism, especially if it seems to confirm your existing biases. So, Sam, I'm checking in on our YouTube audience right now, and it seems like the tone is fairly scary and dystopian. Is there real reason to be fearful here, or is this being blown out of proportion? I think there is real reason to be fearful. I've spent the last five years working globally on this issue, talking to people who already face similar problems, right? Attacks on them with fake images; attacks using nonconsensual sexual images of women, which is the biggest way people use these synthetic media deepfake models already. And in all of those five years before this letter came out, folks were saying: we need action, we need to think about how this is going to be controlled, we need to think about the safeguards that are going to be built into the systems. So I think it's right for us to be fearful. Now, at the same time, we also have to recognize that when we create a hype cycle around this,
10:43 pm
it benefits certain people. It benefits people who get away with saying you can't trust what you see online. It benefits big companies: when we say let's put a moratorium on development, it benefits the incumbents, like Jesse said. So I think we have to be fearful, but also really thinking about what are the harms that we already know are going to be generated here. Let's not think about hypotheticals. A lot of that, for example, that Future of Life letter, was very focused on big-picture future hypothetical harms, when we know the harms already from what's happening already: the way in which social media is playing out, the way in which mis- and disinformation is playing out. So it's time to respond to those needs right now, rather than play to sort of hypothetical fears further down the line. Henry pointed out rather astutely, though: who is it that's going to step in? Because, like, I look at Congress in the US and what their average age is, and if this is technology that they don't, I don't think they fully get. So who can step in as regulators? Who should step in and regulate this? Yeah, I think the TikTok
10:44 pm
hearing the other day was another example, like the Zuckerberg hearing a few years back now, which sort of highlighted some of an ignorance, a lack of knowledge, around emerging technologies. And regulation is a really tricky one. I mean, we have players around the world who are considering, and are actively working to implement, legislation. Notably, we have in the EU the EU AI Act, which would classify many of these generative technologies as what they call high risk, which would then put further measures in place for creators of these tools in terms of how they source their data and how they disclose the outputs of their tools to audiences and to people online. But then we also have countries, sorry, like China, which has introduced this year the deep synthesis law, which takes it, you know, a step further and says: we're going to actively moderate the content in terms of whether it's fake or real
10:45 pm
. And we get to say if you are, you know, essentially being a criminal in publishing fake content, which is, you know, perhaps a little bit too far considering their track record on censorship. But to come to your comment again: he's right. Currently, governments, you know, Sam and I have been warning for years about this problem, trying to get, you know, legislation going and to get key stakeholders engaged. And, as is often the case, it takes a real shock, it takes the height of the hype cycle, to wake people up. And I think the US, you know, has a fair set of work to do, as does the UK, to kind of get in line here. Yeah, but Henry, is it country by country? Can you trust a country-by-country solution for this? Because the internet seems to know no boundaries that way. Well, I'm not entirely sure what the alternative would be. I mean, the UN is the only body I'm aware of that could potentially try and get all member states involved in some kind of draft legislation
10:46 pm
to cover this kind of action. But again, look at the dynamic between China and the US around chip manufacturing and the AI arms race, right? Not just between companies but also between countries. I think it's very unlikely that you're going to get that kind of international consensus built that way, unfortunately. Which leads to a difficult challenge of countries trying to balance innovation, right, with safety, which is a tough one to strike. Jump in there. So I think there's also a different framework that's required on how we think about it. There are immediate dangers. So, for example, people get phone calls where a cloned voice can sound like somebody that you know. And educating people on having safe words, or having these secret phrases within their family, and not sharing information on the phone with a stranger even if they do sound like somebody that you know: there's an immediate conversation that needs to happen in terms of personal
10:47 pm
security. Also, there is revenge porn, and a lot of this is used to abuse women; more education needs to go into how the police deal with that, and how easily and quickly complaints can be made. So there are immediate things that need to be done, and then there are more policy-wide things. But at the same time, I think a huge responsibility also lies with the people building these technologies and how you philosophically deal with it. So, for example, the creators whose data you're using: how do you incentivize those creators? How do you create more opportunities to build things that they can monetize? And I think a lot of those conversations aren't happening, because of all the fear mongering around it; because, I agree with Sam, there's a lot of money to be made in fear mongering, and it gets into the hands of the wrong people. These immediate and more philosophical
10:48 pm
questions don't get enough room at the table. And I think we need the frameworks for those way more than for the machines taking over the world, because we're nowhere near that. So I want to stop the conversation here for a second, guys, because I want you to expand on something you said at the top, to make sure our viewers hear it. Explain this idea that you need to have a safe word with your family. What, why, what are you talking about? I want people to get this. Yes. For example, there's technology available now where my voice can be used, you know, for a phone call, and somebody can call my dad and be like: hey Dad, I'm in this, you know, emergency, can you give me your bank account details, or can you give me your credit card information so I can pay for X, Y? And that's, like, the simplest scam ever. And what we need to have within our families is this conversation of, like: OK, if this happens, what's the first question that you're going to ask? That question remains within the family; you don't write it anywhere. Don't, like,
10:49 pm
you know, put it on your phone, don't put it on a Google Doc. And that way, if you do receive one of these phone calls... again, a lot of it goes back to very simple, like, how-not-to-get-conned 101. But I think the need to have these conversations with our kids, with our families, is there, and we don't have enough of that conversation. Sam, you want to jump in? Yeah. You know, one of the things that I think is really important is that we don't place too much pressure on individuals to handle this. I just came back; last week we ran a meeting with 30 activists and journalists from across Africa talking about this, their agency in response to these types of AI-generated images. And, you know, when we looked at images, we weren't able to necessarily discern that they were generated. And what that tells us is that instead of trying to put the pressure on each of us, for example, to look at these pictures we see online and try and guess whether they're manipulated, we really need to think of this pipeline. And this is where placing the responsibility on tech companies and distributors matters. Like: is
10:50 pm
a tech company that's building one of these tools placing the signals within it that enable you to see that it was made with a synthetic tool? Are they providing watermarks that show that it was generated with a particular set of training data? Are those durable across time, so that they're available to someone watching it? Because otherwise we end up in this world where, like, we start getting skeptical about every image: we look really closely, we try and apply all our media literacy, and it undermines truthful images, right? And that's the biggest worry, that we create a world where we can't trust anything, because we create this assumption. And if we place that pressure on individuals, particularly individuals around the world who don't have access to sophisticated detection tools, who exist in environments where they're not being supported by the platforms very well, then we're doomed to be in a very tough situation. So let's not place the pressure exclusively on us as the viewers; we really need to push the responsibility up the pipeline to the technology developers and the distributors. I want to share
10:51 pm
something that speaks to what you're talking about here. This is a story about the "Great Cascadia earthquake of 2001". It has pictures of scared families while it's happening; the city is destroyed. I mean, look at some of these, right? This earthquake never happened. And if you really look at this photo, which is a great photo of someone on, like, a battered beach holding a Canadian flag, and you zoom in on it, the hand is backward on the right-hand side, right? But this looks so real. And what I'm really concerned about, Henry, is: here in the States, we saw how misinformation and disinformation affected our election in 2016. I think it affected Brexit there as well, with Cambridge Analytica. What does Cambridge Analytica look like in AI for future elections? Yeah, it's a good question, and I think, just very briefly, I need to echo Sam's comment. You know, I've had a lot of journalists reaching out to me over the last few days saying,
10:52 pm
you know, can you tell us how to spot a deepfake as an individual? And I kind of have to caveat everything I say by saying: look, these images, like the ones you just showed, are getting better and better. If you look at the images that Midjourney was generating back in July of last year, you know, the new ones coming out at the moment are leaps ahead in terms of realism. And so if we put the burden on the individual to look for telltale signs, like the hands, which are improving all the time, it's going to give people false confidence that they can detect something, when actually it might be trained out of the models by the new versions. Now, in terms of the election context, it's a really interesting one. And, you know, again, Sam and I have been hearing every single election in the US, midterm or presidential: this is going to be the one where deepfakes cause chaos, right? A video is going to leak, or an audio clip is going to leak the night before the election, and it's going to swing it, or it's going to cause, you know, chaos. I'm wondering, as you're talking: we're showing the Trump-getting-arrested photos, just so you know.
10:53 pm
Go ahead. Yes, yes, like that AI imagery, which luckily didn't fool many people, in contrast to the image of Pope Francis, which we may get on to. But yeah, I think this election will be different. Not necessarily because I think we'll see that kind of worst-case scenario, an indistinguishable fake that even the media and experts, such as, you know, myself and others, can't detect. I think it's going to be different because we're going to see just a mass proliferation of this content, as we're seeing right now. You know, there are videos online of all of the presidents with their voices cloned, playing video games, and it's kind of really convincing on one level, right? And there are a lot of images, as you showed, of the presidents in these kinds of tricky scenarios; a lot of memes, a lot of artistic content; and then, as you said, some, you know, low-scale disinformation. Luckily, most of it is detectable right now, but the direction of travel and the speed of advances mean that that's not
10:54 pm
something that we can be as sure about as we have been in previous elections: that it won't have a serious impact, or really confuse, and potentially play into the kind of fractured information ecosystem that we are currently experiencing, as you mentioned, in the US, and here in the UK. Sam, if you want to jump in. Yeah, it feels like there's a big shift that's really important to name: that's around the ability to do this at volume, right? This has been quite niche before. And then potentially also the sort of personalization; we've already seen, you know, the sense in which you can ask ChatGPT to do something in the style of someone, right? So when people have done research on this, they look at this ability to create volume, the ability to personalize, which of course can be used to target people, and then we're seeing this commercialized and available to people. And so all of those make this a more vulnerable election to maybe organized actors; most people just want to have fun with the process, like, for laughs. So we have to be careful of all of those. I also want to name one of the biggest threats that we've seen,
10:55 pm
in elections but elsewhere too: nonconsensual sexual images. And we don't always know they've been used, and they're used to target women politicians, women journalists, women activists. So that's definitely going to be happening under the surface, even if we don't know it's happening visibly. And so we should name that as a threat that doesn't get the headlines like disinformation does, but it's equally harmful to the civic space. Yeah, YouTube is chiming in here. I'm looking at someone named Anti Cage Taro. It's one of my favorite parts of the show when I use people's YouTube handles and have to read these kind of crazy names. But she has real concerns, and says: my fear of AI is, folks say it is neutral, but it's very racist because of contracts with police and the military. I don't know if that's true. And that racism is going to be recreated in art and writing. I'm an artist; there's a lot of art that can be created in record time, but also those extra-white aesthetics that, I guess, are sort of baked in. Can you talk for a moment about that: the way that race,
10:56 pm
gender, identity might play into what we're getting from AI here? Yeah, definitely. So I think the easiest answer is: that's not true. There are biases that are built into, like, ChatGPT; there was that example where it would say all the nice things about Biden but not Trump, and people at the back end are fixing these every single day. Again, it's human data that it's trained on, so you are going to see some biases in the data. But as more and more new versions of these models come out, there are different, almost like, stages being built where you can dictate what kind of voice, what kind of opinion you want it to have. So, for example, if you want it to be Shakespearean, or if you want it to be more liberal, whatever you want it to be, you can change it and get opinions more like that. So I think as more and more of these models get trained, we're going to see more optionality for people to generate
10:57 pm
things that they want. And again, can it be used to create harmful content? A hundred percent. But all the biases that we see are kind of added or built in by the humans that train these models. So the models themselves are not biased; they reflect humanity. Humanity. Henry, I'm going to ask you a tough question now, and give you 60 seconds to answer it, because the show is going to end when you do, or not. My son said, you know, they said that chess was uniquely human and computers would never be able to beat humans at chess, and of course they did, and made it look easy. What does this tell us? What does AI tell us about being uniquely human? Is there something we learn about humanity from this? Well, thank you for the generous 60 seconds. Yeah, that's a really tough question. I think, look, we've seen time and time again AI able to replicate certain aspects of human performance capabilities in
10:58 pm
quite narrow domains, and this I think is one of the first that really flips the narrative on its head: that, you know, low-skilled, repetitive, narrow jobs were going to be replaced by AI, and yet we're seeing, you know, creative industries getting really shaken up by this. At the same time, I do think that AI is not able to be fundamentally creative in the way we talk about it with humans. I don't think there is an intentionality there; there's not a consciousness which is acting on the world in that way. So, yeah, some aspects can maybe be replicated. We've got to end it there. Henry, I want to thank you, Nadeem and Sam for being on the show today, and all the humans that joined us out there watching. We'll see you next time on The Stream.
10:59 pm
Holding the powerful to account as we examine the US's role in the world, on Al Jazeera. It's a billion-dollar money laundering operation; the Gold Mafia is bigger than the country, with financial institutions, regulators and governments complicit. In a four-part series, Al Jazeera's investigative unit goes undercover in southern Africa. Gold Mafia, part two, on Al Jazeera. When the news breaks: some buildings that had already been damaged have been further pushed over to one side; others were close to collapse. When people need to be heard and the story told: I couldn't tell them that I was
11:00 pm
a musician when I was supposed to be burned. With exclusive interviews and in-depth reports; each centimeter of this stuff represents a year of life; Al Jazeera has teams on the ground to bring you more award-winning documentaries and live news. Landmark cases that sent shock waves around the world: it's enormous, it's phenomenal, and it paved the way to potentially penalize climate inaction; a real wake-up call for the government; this is really something that can make a turning point. earthrise meets the citizens using the law to hold governments and corporations to account: if they don't want to do it by asking, then let's go to court. The Case for the Climate, on Al Jazeera.
