
tv   The Stream  Al Jazeera  March 30, 2023 11:30am-12:00pm AST

11:30 am
apartment blocks with limited access to power points. Engineers are working on ever-faster charging stations so more people switch to electric. "There is definitely potential. As the development of electric cars is rapidly progressing, so is the charging infrastructure and services." And it's not just cars. This event has now been rebranded from a motor show to a mobility show, because the same advances in electric vehicle technology are now transforming every moving thing we make, from the robots around us to the machines flying above us. Also being showcased: autonomous flying taxis that could soon be filling our skies, with South Korea planning the first commercial services within a couple of years. Rob McBride, Al Jazeera, at the Seoul Mobility Show.
11:31 am
This is Al Jazeera. These are our top stories. China's prime minister is assuring investors his country's economy will bounce back after being battered by pandemic restrictions. He spoke at the annual Boao Forum for Asia, saying China is delivering reforms to revive growth. Our correspondent has more from Hong Kong. "Economists say this forum is an opportunity for China to try and seek cooperation on the financial front, particularly with its Asian neighbors, at a time when it's trying to promote both itself and the Asian region as an economic safe haven after consecutive banking crises in the US and Europe have roiled global stock markets. Political analysts also say this is an opportunity for Beijing to try and present itself as a credible place for these types of cooperation deals to be brokered, at a time when it's been trying to promote itself as a global peacemaker with regards to the ongoing war in Ukraine." A ferry carrying around 250 people has caught fire in the Philippines, killing at least 28. Several passengers are missing, and search
11:32 am
and rescue efforts are underway. The dead include at least three children. Mexican authorities have identified eight suspects in a homicide investigation into a fire at a migrant detention center that killed 39 people. The suspects include federal agents, a state immigration official, and members of a private security company. Australia's government has introduced a bill that paves the way for a referendum on the recognition of Indigenous people in the constitution; it's being described as a new chapter in repairing relations. The European Court of Human Rights has begun hearing groundbreaking cases against France and Switzerland, which are accused of failing to take action on climate change; thousands of retired Swiss women say it has damaged their health. Israel's Prime Minister Benjamin Netanyahu says a compromise on his controversial judicial changes could be possible. He has paused the legislation for talks with opposition parties. The proposals have
11:33 am
sparked the biggest protests in Israel's history. Pope Francis has spent Wednesday night in hospital with a respiratory infection. The head of the Roman Catholic Church is expected to spend several days receiving treatment. Those are the headlines. I'll be back with more news here on Al Jazeera after The Stream. In South Korea, a new generation is taking the stage, shaking up social media and fashion — on Al Jazeera. Hello, and welcome to The Stream. What you're watching is Josh Rushing, generated by AI. On today's show, we look into the dangers of AI-powered disinformation,
11:34 am
and how it could be used to manipulate the public. Fears of a new wave of disinformation loom as sophisticated AI programs become increasingly adept at making images and text. How could AI be used to distort our sense of reality, and what's being done to protect the public from misinformation and malicious actors? And what you're watching now is the real Josh Rushing — or is it? Well, that's the point of today's show, so let's talk to some experts about that. With us to talk about the future of AI and disinformation: from Cambridge in the UK, Henry Ajder, an expert broadcaster and synthetic media researcher; in New York, Sam Gregory, executive director of WITNESS, an organization that looks at the intersection of video and human rights; and in Austin, Texas, Audra Nadeem, the chief operating officer at an artificial intelligence company. Hey,
11:35 am
one more guest at the table: that's you. So if you're watching this on YouTube, we've got a live producer right there in the box waiting to get your questions to me so I can get them to the experts. So why don't we do this together? OK, Sam, can you start us off? I want to talk about the letter that came out today — that's news — but let's save that for just a second, because what gets me is that it seems like just weeks ago ChatGPT was the thing, and a couple of weeks later we're already on to GPT-4, and things seem to be changing really fast. And I know the deal with AI is the fear that you've crossed a tipping point before you realize it. So can you catch up people who aren't obsessed with this like me: where are we right now in AI development, and why is this important? "So we've had four or five years of really foundational research that started to create the
11:36 am
ability to do things like create deepfakes — the face of someone doing something they never did — started to create the ability to have interactions with chatbots that gave you apparently human answers, started to make it easier to falsify or clone someone's voice. And what's happened in the last year is this very rapid progress of these tools getting into the public eye and becoming accessible. Since the summer of last year, we've gone from tools like Midjourney, which came out in July 2022 and last week created an image of the Pope in a puffer jacket and an apparent arrest of President Trump. And in parallel, we've seen the advent of tools like ChatGPT that allow you to communicate with what appears to be someone giving you an answer — it may not appear like a machine, but it's actually a machine based on a large language model." Right, right. I want to bring in a video comment. This is from a professor at Princeton University, Arvind Narayanan. Check this out. "There
11:37 am
is an open letter making the rounds today calling for a moratorium on new AI. Unfortunately, this is a distraction from the real issues. The idea that AI is going to break free and kill us — that's pure speculation. The real harms that are actually happening are totally different. Companies train these models on the back of people's data but give them nothing in exchange. We know how to solve that today; we don't need six months: tax AI companies and compensate the artists. Unfortunately, we don't seem to have the political will to consider anything like that." OK, so I want to show the audience my laptop. This headline says Getty Images is suing Stable Diffusion's maker for a staggering $1.8 trillion — that's $150,000 per image for all the images they took. So this is what he was talking about in the video comment, this kind of internet scraping. Henry, you can touch on that, but he began the comment by mentioning the letter today, the one that Elon Musk was also
11:38 am
a part of. Can you bring us up to date on that letter, please? And then maybe you could touch on the idea that he brought in. "Yes, certainly. So this letter was published today. It was an open letter featuring, I believe, over a thousand industry experts, researchers and academics, including, as you mentioned, Elon Musk, but also people like Jaan Tallinn, the co-founder of Skype, essentially asking for a moratorium — for the brakes to be put on AI development, particularly in the generative space, with a focus on models like GPT-4, but also moving on to other areas, as you mentioned: Midjourney, text-to-image and so on. And this has really come off the back of this massive arms race we're effectively seeing, where, as I mentioned, within the space of less than a year Midjourney is on version 5, and all of the big tech companies are piling in to try and build better and more accessible models. And perhaps the concern is that there aren't enough proactive safety and ethics considerations
11:39 am
going on here, and the fear is it'll be reactive, but too little too late. With regards to the comment from your contributor around training: he is indeed right that these models are trained on the data they consume — immense amounts of data to get to the state that they're in. We're talking hundreds of millions of photos, and indeed most of the web being scraped in the context of text. And many of the companies that license stock imagery, and indeed artists, are saying: look, this is a violation of our copyright. It's a controversial issue, with some people saying they're not actually copying, they're sort of taking inspiration — but obviously others are saying: look, we need to update copyright to make it fit for purpose in this new age of synthetic content, and using data in this way, without permission, without royalties, is something which we can't accept moving forward." Audra,
11:40 am
you're at an AI company — are you in an arms race? And do you think there needs to be someone outside of the companies developing this stuff stepping in with some kind of regulatory measures? "Definitely. So we work on text-to-audio, which is kind of the next big wave of what you're going to see being generated — generative AI, as we call it. I think it's going to have to be a public-private partnership. Letting the racing companies regulate themselves has never been the solution; neither is putting regulation in place hastily, because I don't think DC or any of the politicians really grasp what the technology is. And on the other hand, a lot of people working on it might not fully understand the social and political aspects of things going into it. So something like a public-private partnership that puts some sort of framework, almost like constitutional law, in place — and these can vary country by country
11:41 am
or by geographic boundaries — on how we come up with solutions is going to be the best way to go about it, at least for me." I want to bring in another video comment. This is Jesse Lehrich; he's the co-founder of Accountable Tech. "The rapid escalation of the AI arms race really underscores how far behind the US has fallen when it comes to regulating tech. We've done nothing to grapple with the catastrophic societal harms social media giants have unleashed over the past 15 years; we don't even have a federal privacy law in place. And now these companies are trying to capture the next market, rushing out half-baked generative AI tools that pose entirely novel threats to the information ecosystem. We're not prepared to grapple with them, and we do need accountability — that's the job of government. But in the short run, we all need to relearn our critical-thinking skills and approach content with
11:42 am
a healthy skepticism, especially if it seems to confirm your existing biases." So, Sam, I'm checking in on our YouTube audience right now, and it seems like the tone is fairly scared and dystopian. Is there real reason to be fearful here, or is this being blown out of proportion? "I think there's real reason to be fearful. I've spent the last five years working globally on this issue, talking to people who already face similar problems, right? Attacks on them with fake images, attacks using nonconsensual sexual images of women — which is the biggest way people use these synthetic media models already. And in all of those five years before this letter came out, folks were saying: we need action, we need to think about how this is going to be controlled, we need to think about the safeguards that are going to be built into the systems. So I think it's right for us to be fearful. Now, at the same time, we also have to recognize that when we create a hype cycle around this, it benefits certain people. It benefits people who get away with saying you can't
11:43 am
trust what you see online. It benefits big companies. When we say let's put a moratorium on development, it benefits the incumbents, like Jesse said. So I think we have to be fearful, but also really think about the harms that we already know are going to be generated here. Let's not think about hypotheticals — a lot of that, for example, that Future of Life letter, was very focused on big-picture, future hypothetical harms, when we know the harms already from what's happening: the way in which social media is playing out, the way in which mis- and disinformation is playing out. So it's time to respond to those needs right now, rather than play to hypothetical fears further down the line." Henry pointed out rather astutely, though: who is it that's going to step in? Because I look at Congress in the US and what their average age is — this is technology that I don't think they fully get. So who can step in to regulate this, and who should? "Yeah, I think the TikTok hearing the other day was another example — like the Zuckerberg hearing a few years back,
11:44 am
which sort of highlighted some of the ignorance, the lack of knowledge, around emerging technologies. And regulation is a really tricky one. I mean, we have places around the world that are considering and actively working to implement legislation. Notably, in the EU, the EU AI Act would classify many of these generative technologies as what they call 'high risk', which would then put further measures in place for creators of these tools in terms of how they source their data and how they disclose the outputs of their tools to audiences and to people online. But then we also have a country — sorry — like China, which has introduced this year its deep synthesis law, which takes it a step further and says: we're going to actively moderate the content in terms of whether it's fake or real, and we get to say if you're being, essentially,
11:45 am
a criminal in publishing that content — which is perhaps a little bit too far, considering their track record on censorship. But I think your commenter, again, is right that currently governments — and, you know, Sam and I have been warning for years about this problem, trying to get legislation going and to get key stakeholders engaged — and, as is often the case, it takes a real shock, it takes acts of this kind, to wake people up. And I think the US has a fair bit of work to do, as does the UK, to kind of get in line here." Yeah. But Henry, can you trust a country-by-country solution for this? Because the internet seems to know no boundaries that way. "Well, I'm not entirely sure what the alternative would be. I mean, the UN is the only body that I'm aware of that could potentially try and get all of its member states involved in some kind of draft legislation to cover this kind of action. But again, look at the dynamic between China and the US around chip manufacturing and AI,
11:46 am
right? Not just between companies but also between countries. I think it's very unlikely that you're going to get that kind of international consensus built that way, unfortunately. Which leads to a difficult challenge of countries trying to balance innovation with safety — a tough balance to strike." Jump in there. "So I think there's also a different framework that's required in how we think about it, right? So there are dangers. For example, people get phone calls where a voice can be faked to sound like somebody that you know. Educating people on having safe words, or having these secret phrases within their family; not sharing information online or with a stranger on the phone, even if they do sound like somebody that you know — there's a conversation that needs to happen in terms of security. Also, if there is revenge porn —
11:47 am
and a lot of this is used to abuse women — more education that goes into how the police deal with it today, how easily and quickly complaints can be made. So there are immediate things that need to be done, and then there are more policy-wide things. But at the same time, I think a huge responsibility also lies with the people building these technologies: how you philosophically deal with it. So, for example, the creators whose data you're using — how do you incentivize those creators? How do you pay those creators? How do you create more opportunities for them to build things that they can monetize? And I think a lot of these conversations are happening, but because of all the fearmongering around it — and I agree with Sam, there's a lot of money to be made in fearmongering — we're talking about it getting into the hands of the wrong people, and these images, and the more philosophical questions don't get enough of the table. And I think we need the frameworks for those way
11:48 am
more than for 'the machines are going to take over the world', because we're nowhere near that." I want to stop the conversation here for a second, because I want you to expand on something you said at the top, to make sure our viewers hear it. Explain this idea that you need to have a safe word with your family — like, why? What are you talking about? I want people to get this. "Yes. For example, there are capabilities available now where my voice can be used for a phone call, and somebody can call my dad and be like: hey Dad, I'm in this emergency, can you give me your bank account details, or can you give me your credit card information so I can pay for X, Y and Z? That's, like, the simplest scam ever. And what we need to have within our families is this conversation: OK, if this happens, what's the first question that you're going to ask? That question remains within us all. You don't write it anywhere — don't put it on your phone, don't put it on a Google Doc. And that way,
11:49 am
if you do receive one of these phone calls — again, a lot of it goes back to very simple 'how not to get conned 101'. But I think the need to have these conversations with our kids, with our families, is there, and we don't have enough of that conversation." Sam, you're going to jump in? "Yeah. One of the things that I think is really important is that we don't place too much pressure on individuals to handle this. I just came back from a trip last week — we ran a meeting with 30 activists and journalists from across Africa talking about this, about the urgency in responding to these types of AI-generated images. And when we looked at images, we weren't able to necessarily discern that they were AI-generated. What that tells us is that instead of trying to put the pressure on each of us to, for example, look at these pictures we see online and try and guess whether they're manipulated, we really need to think of this as a pipeline. And this is where placing the responsibility on tech companies and distributors matters. Is a tech company that is building one of these tools placing the signals
11:50 am
within it that enable you to see that it was made with a synthetic tool? Are they providing watermarks that show it was generated with a particular set of training data? Are those durable across time, so that they're available to someone watching it? Because otherwise we end up in this world where we start getting skeptical about every image — we look really closely, we try and apply all our media literacy — and it undermines truthful images, right? And that's the biggest worry: that we create a world where we can't trust anything, because we create this assumption. And if we place that pressure on individuals, particularly individuals around the world who don't have access to sophisticated detection tools, who exist in environments where they're not being supported by the platforms very well, then we're doomed to be in a very tough situation. So let's not place the pressure exclusively on us as the viewers; we really need to push the responsibility up the pipeline to the technology developers and the distributors." I want to share something that you're talking about here. This is a story about the great Cascadia earthquake of 2001. It has pictures of scared
11:51 am
families while it's happening; the city is destroyed. I mean, look at some of these, right? This earthquake never happened. And if you really look at this photo — which is a great photo of someone on a battered beach holding a Canadian flag — and you zoom in on it, the hand is backward on the right-hand side, right? But this looks so real. And what I'm really concerned about, Henry, is that here in the States we saw how misinformation and disinformation affected the election in 2016; I think it affected Brexit there as well, with Cambridge Analytica. What does Cambridge Analytica look like in AI for future elections? "Yeah, it's a good question, and I'd just very briefly like to echo Sam's comment. You know, I get a lot of journalists reaching out to me, particularly over the last few days, saying: can you tell us how to spot a deepfake as an individual? And I kind of have to caveat everything I say by saying:
11:52 am
look, these images, like the ones you just showed, are getting better and better. If you look at the images Midjourney was generating back in July of last year, the new ones coming out at the moment are leaps ahead in terms of realism. And so if we put the burden on the individual to look for telltale signs — like the hands, which are improving all the time — it's going to give people false confidence that they can detect something, when actually it might be trained out of the models by the new versions. In terms of the election context, it's a really interesting one. And again, Sam and I have been hearing every single election in the US — midterm, presidential — that this is going to be the one where deepfakes cause chaos, right? A video is going to leak, or an audio clip is going to leak the night before the election, and it's going to swing it, or it's going to cause chaos." I'm wondering — as you're talking, we're showing the Trump-getting-arrested photos, just so you know. Go ahead. "Yes, yes — that image, which luckily didn't fool many people, in contrast to the image of Pope Francis,
11:53 am
which we may get on to. But yeah, I think this election will be different — not necessarily because of that kind of worst-case-scenario indistinguishable fake that even the media and experts, such as myself and others, can't detect. I think it's going to be different because we're going to see just a mass proliferation of this content, as we're seeing right now. You know, there are videos online of all of the presidents with their voices cloned, playing video games together, and it's kind of really convincing on one level, right? And there are a lot of images, like the ones you showed, of the presidents in these kinds of kooky scenarios — a lot of memes, a lot of artistic content — and then, as you said, some low-scale disinformation. Luckily, most of it is detectable right now. But the direction of travel and the speed of advances mean that that's not something we can be as sure about as we have been in previous elections — that it won't have
11:54 am
a serious impact, really confuse people, and potentially play into the kind of fractured information ecosystem that we are currently experiencing, as you mentioned, in the US and here in the UK." Sam, if you want to jump in. "Yeah, it feels like there's a big shift that's really important to name, and that's around the ability to do this at volume, right? This has been quite niche before, and now it's potentially personalized: we've already seen the sense in which you can ask ChatGPT to do something in the style of someone, right? So when people have done research on this, they look at this ability to create volume, the ability to personalize — which of course can be used to target people — and then we're seeing this commercialized and made available to people. And all of those make this a more vulnerable election, to both malign organized actors and lots of people who just want to have fun with the process, like trolls. So we have to be careful of all of those. I also want to name that one of the biggest threats that we've seen — in elections, but elsewhere too — is nonconsensual sexual images. And we don't always know they've been used, and they're used to target women politicians, women journalists,
11:55 am
women activists. And so that's definitely going to be happening under the surface, even if we don't see it happening visibly. And so we should name that as a threat that doesn't get the headlines like disinformation does, but it's equally harmful to the civic space." Yeah. YouTube is chiming in here. I'm looking at someone named Anti Cage Taro — this is one of my favorite parts of the show, when I use people's YouTube handles, and they're kind of crazy names. But she has real concerns, and says: my fear of AI is that folks say it is neutral, but it's very racist because of contracts with police and the military — I don't know if that's true — and that racism is going to be recreated in art and writing. I'm an artist; there's a lot of text that can be created in record time. I guess I'd ask you, Audra, to talk for a moment about that: the way that race and gender identity might play into what we're getting from AI here. "Yeah,
11:56 am
definitely. So I think the easiest answer is: that's not true. There are biases that are built into, like, ChatGPT — we all know that example where it would say all the nice things about Biden but not about Trump — and people at the back end are fixing these every single day. Again, it's humans all around it; you are going to see somebody's biases in the data. But as more and more new versions of these models come out, there are different, almost like stages, being built where you can dictate what kind of voice, what kind of opinion, you want it to have. So, for example, if you want it to be Shakespearean, or if you want it to be more liberal — whatever you want it to be, you can change it and get opinions more in that direction. So as more and more of these models get trained, we're going to see more optionality for people to generate the things that they want. And again,
11:57 am
can it be used to create harmful content? A hundred percent. But all the biases that we see are kind of added to — built in — by the humans using the model. So the models themselves are not biased yet; they reflect humanity." Henry, I'm going to ask you a tough question now and give you 60 seconds to answer, because the show is going to end whether you do or not. My son said, you know, they said that chess was uniquely human and computers would never be able to beat humans at chess — and of course they did, and made it look easy. What does this tell us — what does AI tell us — about being uniquely human? Is there something we learn about humanity from this? "Well, thank you for the generous 60 seconds. Yeah, that's a really tough question. I think, look, we've seen time and time again AI able to replicate certain aspects of human performance and capabilities in quite narrow domains, and this I think is one of the first that really flipped the narrative on its head — that, you know, low-skilled, repetitive,
11:58 am
narrow jobs were going to be replaced by AI. We're seeing creative industries getting really shaken up by this. At the same time, I do think that AI is not able to be fundamentally creative in the way we talk about it with humans. I don't think there is an intentionality; there's not a consciousness which is acting on the world in that way. So some aspects can maybe be replicated, and some can't." Henry, I want to thank you, Audra and Sam, for being on the show today, and all the humans that joined us out there watching. We'll see you next time on The Stream. Around 10 women are being murdered in Mexico every
11:59 am
day, almost always by men — an epidemic of gender-based violence that threatens to spiral out of control. Now specialist police squads run by women are trying to reverse the trend and bring the perpetrators to justice. But can they overcome years of macho culture and indifference? Behind the scenes with the femicide detectives, on Al Jazeera. UpFront takes on the big issues. "This isn't a one-off; he's talking about a systemic issue here." "Black lives don't really matter in the police world." Unflinching questions: is war with Rwanda imminent? Rigorous debate: "people are dying because of lack of medical treatment." Challenging conventional wisdom: "the fact that people are starting to get angry about this is in itself a sign of progress." Join me, Marc Lamont Hill, for UpFront on Al Jazeera.
12:00 pm
The levee breach on the Pajaro River is widening — that's ominous for the town of Pajaro downstream. As more storms bear down on the farming community this week, all of its 1,700 residents were told to evacuate. The county of Monterey has performed more than 170 high-water rescues as a result of this flood. The storms are the result of atmospheric rivers: long currents of moisture in the air that cause rain and snow to fall. California has experienced no fewer than 10 such once-rare phenomena since January — an impact of climate change, and the probable trend into the future.
