tv The Stream Al Jazeera September 6, 2023 10:30pm-11:01pm AST
10:30 pm
while our own solar system is 4.6 billion years old, some of the oldest particles that have been discovered date back seven billion years. The particles, as far as we think, come from the asteroids. The asteroids are typically found in orbit around the Sun, between the orbits of Mars and Jupiter. The research by the School of Physical Sciences at the University of Kent will go on to form the basis of a citizen-science project, and the hope is that it will spur public interest in space science as well as provide an answer to what's out there. Reporting for Al Jazeera. This is Al Jazeera, and these are the top stories. A US judge has ruled that Donald Trump and his 18 co-defendants will be tried together on election interference cases in
10:31 pm
Georgia. The judge has said that the trial will start on October 23rd. Severe weather is causing catastrophic floods in Bulgaria, Greece and Turkey. At least 14 people have been confirmed dead and several are missing. Forecasters are predicting more rain in the coming days. At least 17 people have been killed in a Russian missile attack on the town of Kostiantynivka. Ukraine's Volodymyr Zelenskyy condemned the attack on the city, nearly 20 kilometres from the front line. The White House has condemned what it called Russia's brutal attack on Kostiantynivka. US Secretary of State Antony Blinken is in Kyiv for a two-day visit and has pledged additional military aid to Ukraine. The package includes Abrams tanks, HIMARS rocket systems, Javelin missiles and other munitions. "I am announcing new assistance totalling more than $1 billion in this common effort. That includes $665.5 million in new military and civilian security assistance. In total,
10:32 pm
we have committed over $43 billion in security assistance since the beginning of the Russian aggression." In the Democratic Republic of Congo, six soldiers are being tried in a court martial for their involvement in the killing of 56 protesters. Earlier, defence lawyers asked the constitutional court to stop the proceedings; they say the trial is politically motivated. The soldiers are accused of ordering the killings of protesters demanding that UN peacekeepers leave the DRC. Gabon's military leader has begun releasing political prisoners after Brice Oligui Nguema granted them amnesty during his inauguration on Monday. He has also met with opposition leader Albert Ondo Ossa. Nguema seized power moments after President Ali Bongo was declared the winner in disputed elections. And those are the headlines on Al Jazeera. Coming up next, in just a few short moments, is The Stream. Do stay with us. "I am very delighted to participate in the All
10:33 pm
Africa Moot." The continent's brightest legal students, pitting their minds against each other, the best legal minds in Africa. "It would really be interesting to see how we gather together for a tournament unlike any other." The court is now in session, with Africa Moot, on Al Jazeera. Hello and welcome to The Stream. What you're watching is Josh Rushing, generated by AI. On today's show we look into the dangers of AI-powered disinformation and how it could be used to manipulate the public. Fears of a new wave of disinformation loom as sophisticated AI programs become increasingly adept at making images and text.
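The show never explains how these text generators work internally; as a loose, toy-scale illustration only, here is a minimal bigram sketch (all function names here are ours, not from any real system) of the core idea behind statistical text generation: pick each next word from patterns observed in training data. Large language models do this at vastly greater scale and sophistication.

```python
# Toy bigram text generator: a vastly simplified cousin of the large
# language models discussed in this episode. Illustrative sketch only.
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words observed directly after it."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the bigram table to produce plausible-looking text."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:  # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("ai tools can generate text and ai tools can generate images "
          "and tools can distort what we see")
model = train_bigrams(corpus)
print(generate(model, "ai"))
```

Every word the sketch emits was seen in its tiny training corpus; the real systems discussed below differ in scale and architecture, but share this statistical character.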
10:34 pm
How could AI be used to distort our sense of reality, and what's being done to protect the public from misinformation and malicious actors? And what you're watching now is the real Josh Rushing. Or is it? Well, that's the point of today's show, so let's talk to some experts about that. With us to talk about the future of AI and disinformation: from Cambridge in the UK is Henry Ajder, an expert broadcaster in synthetic media research; in New York, Sam Gregory, executive director of WITNESS, an organization that looks at the intersection of video and human rights; and in Austin, Texas, the chief operating officer at Opus AI, an artificial intelligence company. Hey, one more guest at the table: that's you. So if you're watching this on YouTube, and
10:35 pm
we've got a live producer right there in the box, waiting to get your questions to me so I can get them to the experts. So why don't we do this together? Right. Okay. Sam, can you start us off? I want to talk about the letter that came out today, that's news, but let's save that for just a second, because what gets me is, it seems like mere weeks ago ChatGPT was like the thing, and then a couple of weeks later we're already on to GPT-4, and things seem to be changing really fast. And I know the deal with AI is, the fear is that you've crossed a tipping point before you realize it. So can you just catch up, maybe, people who aren't obsessed with this like me: where are we right now in AI development, and why is this important? So we've had four or five years of really foundational research that started to create the ability to do things like create fakes, like a face of someone who doesn't exist, or someone doing something they never did; started to create the ability to have
10:36 pm
interactions with chatbots that gave you plausible, human-like answers; started to make it easier to falsify or clone someone's voice. And what's happened in the last year is we've seen really rapid progress of these tools getting into the public eye and becoming accessible. Since the summer of last year we've gone from a tool called Midjourney, which came out in July 2022, to, last week, creating an image of the Pope in a puffer jacket, or a purported arrest of President Trump. In parallel, we've seen the expansion of tools like ChatGPT that apparently allow you to communicate with what appears to be someone who is giving you answers and maybe doesn't appear like a machine, but it's actually a machine based on a large language model. Right, right, right. I want to bring in a video comment. This is from a professor at Princeton University, Arvind Narayanan. Check this out. There is an open letter making the rounds today, calling for a moratorium on new
10:37 pm
AI. Unfortunately, this is a distraction from the real issues. The idea that AI is going to break free and kill us: that's pure speculation. The real harms actually happening are totally different. AI companies train these models on the backs of people's data but give them nothing in exchange. We don't have to solve that today, we don't need six months: tax the companies and compensate the artists. Unfortunately, we don't seem to have the political will to consider anything like that. Right. So I want to show the audience my laptop. This headline says: Getty Images is suing Stable Diffusion for a staggering $1.8 trillion; that's $150,000 per image for all the images they took. So this is what he was talking about in the video comment, this kind of internet scraping. Henry, I think you can touch on that, but he began his comment by mentioning the letter today that Elon Musk was also a part of. Can you bring us up to date on that letter, please? And then maybe you can touch on this idea that he brought in. Yeah, certainly.
10:38 pm
So, this letter was published today. It was an open letter featuring, I believe, over a thousand industry experts and researchers, as well as academics, including, as you mentioned, Elon Musk, but also people like Jaan Tallinn, a co-founder of Skype, essentially asking for a moratorium, for the brakes to be put on AI development, particularly in the generative space. Not just focused on models like GPT-4, but also moving on to other areas: as you mentioned, Midjourney, text-to-image, and so on. And this has really come off the back of this arms race, if you can call it that, which we've been seeing, where, as I mentioned, in the space of less than a year Midjourney is on version 5, and all of the big tech companies are piling in to try and build better or more accessible models. And perhaps, you know, the concern is that there isn't enough proactive safety and ethics consideration going on here, and the fear is it will be reactive, but too little, too late. With regards
10:39 pm
to the comment from your video contributor around training: he is indeed right that these models require vast amounts of data. They consume immense amounts of data to get to the state that they're at; we're really talking, you know, hundreds of millions of photos, and indeed most of the web being scraped in the context of text. And many of the companies who licensed that imagery, and indeed artists, are saying: look, this is a violation of our copyright. It's obviously a controversial issue. Some people say: well, look, they're not actually copying the art, they're taking inspiration from it. But other people are saying: look, you know, we need to update copyright to make it fit for purpose in this new age of synthetic content, and using data in this way, without permission, without royalties, is something which we can't accept moving forward. All right, here at an AI company: are you in an arms race, and do you think there needs to be someone outside of the companies developing this stuff
10:40 pm
stepping in with some kind of regulatory measures? Definitely. So we work on text-to-video, which is kind of the next big wave of what you're going to see being generated, or created, in AI, or directed, as we call it. I think it's going to have to be a public-private partnership. Raising taxes on companies, raising taxes on anyone, is never the solution; neither is putting regulation in place haphazardly, because I don't think DC, or anybody, any of the politicians, really get the grasp of what the technology is. And on the other hand, a lot of people working on it may not know, may not understand, the social and political aspects of things going into it. So something like a public-private partnership that sets up some sort of funds, almost like a constitutional-law kind of thing in place, and these can vary country by country, or by geographic boundaries, on how we come up with solutions, is going to be the best, at least for me. I think that's going to be the best way to go about it. I'm going
10:41 pm
to bring in another video comment. This is Jesse Lehrich; he's a co-founder of Accountable Tech. The rapid escalation in the AI arms race really underscores how far behind the US has fallen when it comes to regulating tech. We've done nothing to grapple with the catastrophic societal harms social media giants have sown over the past 15 years; we don't even have a federal privacy law in place. And now the same companies are trying to capture the next market, rushing out half-baked generative AI tools that pose entirely novel threats to the information ecosystem. We're not prepared to grapple with them, and we do need accountability; that's the job of government. But in the short run, we all need to relearn our critical thinking skills and approach content with a healthy skepticism, especially if it seems to confirm your existing biases. So, Sam, I'm checking in on our YouTube audience right now,
10:42 pm
and it seems like the tone is fairly scary and dystopian. Is there a real reason to be fearful here, or is this being blown out of proportion? I think there's a real reason to be fearful. I've spent the last five years working globally on this issue, talking to people who already face these problems, right? Attacks on them with fake images, attacks using non-consensual sexual images of women, which is the biggest way people use these synthetic media, deepfake models already. And in all of those five years before this letter came out, folks were saying: we need action, we need to think about how this is going to be controlled, we need to think about the safeguards that are going to be built into these systems. So I think it's right for us to say, be fearful. Now, at the same time, we also have to recognize that when we create a hype cycle around this, it benefits certain people. It benefits people who get away with saying you can't trust what you see online. It benefits big companies. When we say, let's put a moratorium on development, it benefits the incumbents. Like Jesse said,
10:43 pm
so I think we have to be fearful, but also really thinking about what are the harms that we already know are going to be generated here. Let's not think about hypotheticals. A lot of that, for example, that Future of Life letter, was very focused on big-picture, future, hypothetical harms, when we know the harms already from what's happening already: the way in which social media has played out, the way in which mis- and disinformation are playing out. So it's time to respond to those needs right now, rather than chase hypothetical fears further down the line. Henry, as Sam pointed out rather astutely, though, who is it that's going to step in? Because, like, I look at Congress in the US and what their average age is; this is technology that I don't think they fully get. So who can step in to regulate this? Who should step in to regulate this? Yeah, I think the TikTok hearing the other day was another example of it, like the Zuckerberg hearing a few years back, which sort of highlighted some of the ignorance, the lack of knowledge, around emerging technologies. Regulation is
10:44 pm
a really tricky one. I mean, we have players around the world who are considering, or are actively working to implement, legislation. We have, notably in the EU, the EU AI Act, which would classify many of these generative technologies as what they call high-risk, which would then put further measures in place for creators of these tools, in terms of how they source data and how they disclose the outputs of the tools to audiences and to people online. But then we also have countries like China, who have introduced their deep synthesis provisions, which take it, you know, a step further and say: we're going to actively moderate the content in terms of whether it's fake or real, and we get to say if you're being, you know, essentially a criminal in publishing that content, which is perhaps a little bit too far, considering their track record on censorship. But I think your commenter is right that governments are currently lagging. You know,
10:45 pm
Sam and I have been warning for years about this problem, and trying to get, you know, legislation going, to get key stakeholders engaged. As is often the case, it takes a real shock, it takes a hype cycle like this, to wake people up, and I think the US, you know, has to decide what to do, as does the UK, to kind of get in line here. Yeah, but Henry, is it country by country? Can you trust a country-by-country solution for this? Because the internet seems to know no boundaries that way. Well, I'm not entirely sure what the alternative would be. I mean, the UN is the only body I'm aware of that could potentially try and get all of its member states involved in some kind of draft legislation to cover this kind of thing. But again, look at the dynamic between China and the US around chip manufacturing, and the arms race not just between companies but also between countries. I think it's very unlikely that you're going to get
10:46 pm
that kind of international consensus built that way, unfortunately, which leads to difficult challenges for countries trying to balance innovation with safety, which is a hard balance to strike. Stepping in there: yeah, I think, beyond what Sam said, there's also a different framework that's required on how we think about it. So there's the immediate danger. So, for example, people get phone calls where someone can, you know, copy the voice of somebody that they know, and educating people on having safe words, or having these secret phrases within their family, and not sharing information on the phone with a stranger, even if they do sound like somebody that you know. So there's an immediate conversation that needs to happen in terms of personal security. And also, when there is revenge porn, and a lot of this is used to abuse women, more education that goes into how the police deal with it, how easily and quickly complaints can be made. So there are the immediate sorts
10:47 pm
of things that need to be done, and then there are more policy-wide things. But at the same time, I think there's huge responsibility on the people who are building these technologies: how you philosophically deal with it and what you put in. So, for example, the creators whose data you're using: how do you incentivize those creators, how do you pay those creators, how do you create more opportunities for creators to build things that they can monetize? And I think a lot of these conversations are happening, but because of all the fearmongering around it, because, and I agree with Henry and Sam here, there's a lot of money to be made in fearmongering, right, the "it will get into the hands of the wrong people" talk, these more philosophical questions get knocked off the table. And I think we need frameworks for those way more than for "the machines are going to take over the world", because we're nowhere near that; it's still humans at the helm of every AI. All right, I'm going to stop the conversation here for a second, guys, because I want you to expand on something you said at the top, to make sure viewers hear it
10:48 pm
. Explain this idea that you need to have a safe word with your family. Like, why? What are you talking about? I want people to get this. Yes. For example, there's tons of technology available now where my voice can be used, you know, for a phone call, and somebody can call my dad and be like: hey Dad, I'm in this, you know, emergency, can you give me your bank account details, or can you give me your credit card information so I can pay for X, Y and Z? It's like the simplest scam ever. And what we need to have with our family is this conversation of: okay, if this happens, what's the first question that you're going to ask? And that question remains within the family. You don't write it anywhere; don't, like, you know, put it on your phone, don't put it on Google Docs. And that way, if you do receive one of these phone calls, you can check. Again, a lot of this goes back to very simple how-to-not-get-scammed 101. But I think
10:49 pm
the need to have these conversations with our kids, with our families, is there, and we don't have enough of that conversation. Sam, you want to jump in? Yeah, you know, one of the things that I think is really important is that we don't place too much pressure on individuals to handle this. I just came back from a meeting we ran last week with 30 activists and journalists from across Africa, talking about how to respond to these types of AI-generated images, and, you know, when we looked at images, we weren't able to necessarily discern that they were AI-generated. And what that tells us is that instead of trying to put the pressure on each of us, for example, to look at these pictures we see online and try and guess whether they're manipulated, we really need to think of this as a pipeline. And this is where placing the responsibility on tech companies and distributors matters. Like, is the tech company that's building one of these tools placing the signals within it that enable you to see that it was made with a synthetic tool? Are they providing watermarks that show that it was generated
10:50 pm
with AI? Is there detection, disclosure of training data? Do all of those travel across time, so that they're available to someone watching it later? Because otherwise we end up in this world where we start getting skeptical about every image: we look really closely, we try and apply our media literacy, and it undermines truthful images, right? And that's the biggest worry: that we create a world where we can't trust anything, because we create this assumption. And if we place that pressure on individuals, particularly individuals around the world who don't have access to sophisticated detection tools, who exist in environments where they're not being supported by the platforms very well, then they're going to be in a very tough situation. So let's not place the pressure exclusively on us as the viewers; we really need to push the responsibility up the pipeline, to the technology developers and the distributors. I want to show something that you're talking about here. This is a story about the "Great Cascadia earthquake of 2001". It has pictures of scared families while it's happening, the cities destroyed. I mean, look at some of these,
10:51 pm
right? This earthquake never happened. And if you really look at this photo, which is a great photo of someone on, like, a battered beach holding a Canadian flag, and you zoom in on it, the hand is backward on the right-hand side, right? But the smoke is so real. And what I'm really concerned about, Henry, is: here in the States we saw how misinformation and disinformation affected an election in 2016; I think it affected Brexit there as well, with Cambridge Analytica. What does Cambridge Analytica look like in AI, for future elections? Right, yeah, it's a good question, and I'll just say very briefly, before we get to Sam's comment: I get a lot of journalists reaching out to me, particularly over the last few days, saying, can you tell us how to spot a deepfake as an individual? And I kind of have to caveat everything I say by saying: look, these images, like the ones you just showed, are getting better and better. If you look at the images that Midjourney was generating back in July of last year, you know,
10:52 pm
the new ones that are coming out at the moment are, you know, leaps ahead in terms of realism. And so if we put the burden on the individual to look for the telltale signs, like the hands, which are improving all the time, it's going to give people false confidence that they can detect something, when actually it might be trained out of the models by the new versions. And in terms of the election context, it's a really interesting one. You know, again, Sam and I have been hearing every single election, US midterms, presidential, that this is going to be the one, the deepfake that causes chaos, right? The video is going to leak, or the audio clip is going to leak, the night before the election, and it's going to swing it, or it's going to cause, you know, chaos. And I'm wondering, as you're talking, we're showing the Trump-getting-arrested photos, just so you know. Good, go ahead. Yes, yes, like, for example, you can see those, which luckily didn't fool many people, in contrast to the image of Pope Francis, which we may get on to. But yeah, I think this election, I think, will be different. Not necessarily because I think that kind
10:53 pm
of worst-case scenario, the indistinguishable fake that even the media and experts, such as myself and others, can't detect, will arrive. But I think it's going to be different because we're going to see just a massive proliferation of this content, as we're seeing right now. You know, there were videos online of all of the presidents with their voices cloned, playing video games, and it's kind of eerily convincing on one level, right? And there are a lot of images, as you just showed, of the presidents in these kind of kooky scenarios, a lot of memes, a lot of artistic content, and then, as you said, some, you know, low-scale disinformation. Luckily, most of it is detectable right now, but the direction of travel and the speed of advances mean that that's not something we can be as sure about as we have been in previous elections: that it won't have a serious impact and really confuse, and potentially play into, the kind of fractured information ecosystem that we are currently experiencing, as you
10:54 pm
mentioned, in the US and here in the UK. Sam, I'll let you jump in there. Yeah, yeah, it feels like there's a big shift that's really important to name, around the ability to do it at volume, right? That hasn't been possible before, and then, potentially, the ability to personalize it. We've already seen, you know, the sense in which you can ask ChatGPT to do something in the style of someone. And when people have done research on this, they look at this ability to create volume, the ability to personalize, which of course can be used to target people. And now we're seeing this commercialized and made available to people. And so one worry is that this makes it a more vulnerable election, to maybe organized actors as well as people who just want to have fun with the process, like trolls. So we have to be careful at every level of this. I also want to name, though, one of the biggest threats that we've seen, and this is not election-specific, which is non-consensual sexual images. And we don't always know they've been used, and they're used to target women politicians, women journalists, women activists. And so that's definitely going to be happening under the surface, even if we don't see it happening visibly. And so we should name that: it's
10:55 pm
a threat that doesn't get the headlines the way disinformation does, but it's equally harmful to our civic space. YouTube is chiming in here. I'm looking at a comment from a viewer; it's one of my favorite parts of the show, when I read people's YouTube handles, these kind of crazy names. But they have real concerns: "My fear of AI is not that it is neutral, but that it's very racist, because of the contracts with police and militaries", I don't know if that's true, "and that racism is going to be recreated in art and writing. I'm an artist; there's a lot of text that can be created in record time, but also those biases, I guess, get sort of baked in." Can you talk for a moment about the way race and gender identity might play into what we're getting from AI here? Yeah, definitely. So, I think the easiest answer is: that's not true, or rather, there are biases that are built into, like, ChatGPT. We know the example where it
10:56 pm
would say all the nice things about Biden but nothing about Trump, and there are people at the back end who are fixing these every single day. Again, it's humans that build these, so you are going to see some biases in the data. But as more and more new versions of these models come out, there are different, almost like, stages being built where you can dictate what kind of opinion you want it to have. So, for example, if you want it to be more conservative, or if you want it to be, you know, more liberal, whatever you want it to be, you can change it and get opinions more in that direction. So I think as more and more of these models get trained, we're going to see more options for people to generate the things that they want. And is that going to be used to create harmful content? A hundred percent. But all the biases that we see are kind of baked into the data by the
10:57 pm
humans behind them; the models themselves are not biased, they reflect humanity. Henry, I'm going to ask you a question, and I'm going to give you 60 seconds to answer it, because the show is going to end whether you do or not. You know, they said chess was uniquely human, and computers would never be able to beat humans at chess, and of course they did, made it look easy. What does this tell us? What does AI tell us about being uniquely human? Is there something we learn about humanity from this as well? Thank you for the generous 60 seconds. Yeah, that's a really tough question. I think we've seen, time and time again, AI able to replicate certain aspects of human performance capabilities in quite narrow domains, and this, I think, is one of the first that has really flipped the narrative on its head: you know, low-skill, repetitive jobs were the ones supposed to be replaced by AI, and instead we're seeing the creative industries getting really shaken up by this. At the same time,
10:58 pm
I do think that AI is not able to be fundamentally creative in the way that we talk about it with humans. I don't think there is an intentionality, there's not a consciousness, which is acting on the world in that way. So, yeah, maybe some aspects are still ours. Henry, I want to thank you, and Sam, for being on the show today, and all the humans that joined us out there watching. We'll see you next time on The Stream. The world of high-frequency share trading exposed. "At times it's the machine that is basically trading." "I could have lost $30 million. It was
10:59 pm
a terrifying experience." How artificial intelligence raises the stakes and risks on the money markets. "As markets go faster and faster, we're opening up the possibility for an instability." On Al Jazeera. Is it time for the West to rethink its best options for the Ukraine-Russia war, and what would those options look like? What is US strategy when it comes to Iran? For almost 200 years, Americans have generally been stuck with two political choices; will that ever change? When it comes to US politics, we dig in. Imprisoned without trial, Al Jazeera journalists remain behind bars in Egypt: one detained since February 2020, another
11:00 pm
detained since August 2021. Al Jazeera calls for the immediate release of its journalists detained in Egypt. Journalism is not a crime. Hello, here are your top stories on Al Jazeera. A US judge has ruled that Donald Trump and his 18 co-defendants will be tried together on election interference cases in Georgia. The judge has said that the trial will start on October 23rd. Mike Hanna has more on the judge's ruling from Atlanta. The important point is that he was told by the prosecution that, whether two people are being tried or 19 people are being tried, the trial