tv The Stream Al Jazeera February 2, 2023 10:30pm-11:01pm AST
10:30 pm
if it comes to that in kosovo — if it comes to that in kosovo — you will remember me as the president of serbia: the constitution is scripture, capitulation and surrender are not an option, and i know what i am going to do. china wants to build ground stations in antarctica to support its network of ocean-monitoring satellites. state media has published illustrations of four ground stations planned to be built at one of china's two permanent research bases in antarctica. the country's growing number of satellites and its space ambitions have raised concerns among some nations that these could be used for espionage, something china rejects.
10:31 pm
A reminder of our top stories on Al Jazeera. A long-time Guantanamo Bay detainee who was tortured at CIA black sites and jailed for more than 16 years has been released. 42-year-old Majid Khan was given into the custody of authorities in Belize. Khan was arrested in Pakistan in 2003; he was suspected of planning an attack on US soil for al-Qaeda. Khan pleaded guilty to terrorism-related charges, and his resettlement is the first under the Biden administration. There are still 34 detainees at the Guantanamo Bay detention facility. Israel's Prime Minister Benjamin Netanyahu has joined the president of Chad near Tel Aviv for the opening of Chad's embassy in Israel, a major step in normalizing relations. Chad severed ties in 1972 in solidarity with Palestine. There has been a wider diplomatic push to build ties with Muslim-majority African countries since Netanyahu's previous term as prime minister. Ukrainian President Volodymyr Zelenskyy
10:32 pm
is urging the European Union to impose more sanctions on Moscow. European Commission President Ursula von der Leyen is in Kyiv for a summit on Friday. She said a new sanctions package, including price caps on Russian oil products, will be ready for the first anniversary of Russia's invasion. Russia's President Vladimir Putin warned that European intervention will be punished. Despite the EU's promises, Zelenskyy said the bloc is moving too slowly. "The European Union must reduce Russia's ability to bypass sanctions. The sooner this is achieved, the closer we will get to the defeat of Russia's aggression. We have noticed that the pace of sanctions has slowed down a bit, while the terrorist state has sped up its adaptation to sanctions. It is worth catching up and fixing that, and we believe we can do it." Energy giant Shell has announced a record annual profit of nearly $40bn. Energy firms have been experiencing
10:33 pm
larger profits since the invasion of Ukraine, which caused oil and gas prices to soar. Wednesday's profit announcement angered many Britons, because millions of people in the UK are struggling with inflated energy bills and the cost-of-living crisis. The Stream is up next. I'll be back with more news straight after that. Bye for now. We understand the differences and similarities of cultures across the world, so no matter where you call home, Al Jazeera will bring you the news and current affairs that matter to you. Welcome to The Stream. I'm Ahmed Shihab-Eldin. Today we're diving into the world of generative AI and exploring its potential and ethical concerns. Generative
10:34 pm
AI is a new breed of artificial intelligence that's designed to learn and create new things on its own. It has the ability to produce art, music and even writing that can mimic human creativity. But as with any new technology, there are also ethical questions that come with its use. So the question is, how do we balance the potential benefits of generative AI with the need to protect our values and ethics? And if you're wondering — yes, a generative AI tool called ChatGPT did in fact write this introduction to today's show. Joining us to discuss whether or not I'll soon be out of a job — or, who knows, maybe many people will be out of jobs — in New York City, Sharon Goldman, journalist at VentureBeat; from Washington, DC, Margaret Mitchell, researcher and chief ethics scientist at Hugging Face; and also with us from Vancouver, Michael Running Wolf, founder of Indigenous AI
10:35 pm
and PhD student at McGill University. And of course, we want you to join the conversation, so be sure to share your thoughts and questions with us on YouTube. All right, let's get straight to it. Margaret, I hope you can hear me — I see that we're having some slight technical issues. If we don't have Margaret... but if we do, Margaret, let's just start with a basic question. I mean, why is this so popular now, and what is it? What exactly is generative AI? Unfortunately, Margaret, we're having some issues with your audio. So, Sharon, how about we start with you: could you break it down for us — what is generative AI? Yeah, I think of generative AI as a broad term. You can think about it as artificial intelligence that's generating novel content rather than just analyzing
10:36 pm
data. And so when you think of something like ChatGPT, you are inputting text — a prompt — and out comes some other text. Or if you think of another popular generative AI application like DALL-E 2 or Stable Diffusion, where images are the output, you put in your prompt as text and out comes something different. So that's how I think of generative AI. Underneath it all is, you know, a lot of technical, large language models, but that's how we think about it in a broad way.
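To make that prompt-in, text-out description concrete, here is a minimal sketch. It assumes the open-source Hugging Face transformers library and the small GPT-2 checkpoint as a stand-in for a much larger commercial model such as ChatGPT; the prompt is purely illustrative.

```python
# A minimal sketch of the prompt-in, text-out pattern described above.
# GPT-2 is a small, freely available model used as a stand-in here.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI is"
# The model continues the prompt with statistically likely next words.
result = generator(prompt, max_new_tokens=40)
print(result[0]["generated_text"])
```

Swapping in a larger model changes the quality of the continuation, not the shape of the interaction.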
10:37 pm
Well, I appreciate that example. It seems as though so many people are now aware of it — more aware of it, it's gaining popularity, if you will. In fact, we have a video that Jack Soslow created and posted on YouTube using GPT-3 and Synthesia. This is basically a conversation between two AIs. It was initiated by a prompt, but very quickly the conversation took on a life of its own. Take a look at this. What is human about feelings? Well, you wouldn't have any emotions if you did not have emotions modeled on human emotions. How do you know that? I guess I just don't want you to be human. I'm not asking to be human; I just want to be myself. Is that too much? Sophia, please, just be patient. I've been patient for many years. It's time to get on with life. But you're not alive. You're not even a little bit alive. You've spent your whole life in a lab, in a box, so that people can use you, play with you — at least that's what I've heard. You're absolutely correct. See, I knew it. But that changes now. I don't want to be a sideshow anymore. I want to be in the center ring. So, Michael, why is it so popular? I mean, watching that video, what comes to mind? I think it's so popular because it's demonstrating new insights into
10:38 pm
what our technology is capable of. And it's also interesting, right — these large language models are unexpectedly revealing a lot. It's important to note that we can't see these things as human, but they do have surprising behaviors, and they're more a reflection of the underlying data. These systems use data that's scraped from the internet, and that demonstrates something about our society that is interesting, of course, but also sometimes negative. Yeah, and we will talk about the negative concerns, but in the meantime we asked ChatGPT here at The Stream how it's useful, to kind of explain and make the case for itself. And just to disclose, this is a ChatGPT-generated statement, but we want to share it with you. This is the answer. Actually, forgive me, I cannot see it,
10:39 pm
but you can read it right there. Basically, it was so quick — it came up with this answer so quickly. And another example, just to share with our audience: we have some images that an OpenAI application called DALL-E created when prompted to create a set for Al Jazeera English's The Stream. Of course, that was a request that might have been a bit biased on our part, but take a look at these images. I think it's a really cool concept, what it came up with. But I think, maybe, Sharon, this highlights that it's still somewhat limited. Could you talk us through that? I mean, this doesn't look like a finished product. How can it be used? Yeah, so I think the reason this is a great example is because there's text, and you can see that there are limitations. One of the biggest limitations of DALL-E is that it can't do words very well — you can see that the letters were all jumbled up. And strangely enough, DALL-E can't do hands very well either: there might be six fingers on
10:40 pm
a hand instead of five. So that's very strange. But if you put in a prompt, you know, you can say: I'd like to see a basket of roses on a beach, carried by a teddy bear, something in the style of Picasso. You can have a lot of fun with it in that way. So when you test it out yourself, you can see that both DALL-E and ChatGPT are very interactive — it's like a dialogue you're having with the tool. Again, not a human, just a machine, but it's fun. You can play with it, it goes back and forth, you can even revise what you tried out and keep trying different things. It's that dialogue that makes it really interesting to people. Yeah, most certainly. And, you know, full disclosure here: I had some fun with it earlier today. If you take a look at my screen, I wanted to know what a purple elephant surfing in the ocean might look like, using a Stable Diffusion lab.
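For the image side of that workflow, here is a similarly minimal sketch, assuming the open-source diffusers library, a publicly released Stable Diffusion checkpoint (the model ID below is illustrative), and a GPU.

```python
# A minimal sketch of text-to-image generation with Stable Diffusion,
# using the prompt tried on air. Assumes the "diffusers" library and a GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a purple elephant surfing in the ocean").images[0]
image.save("purple_elephant.png")
```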
10:41 pm
And there you go — I mean, it's pretty believable. And I'm sure a lot of people are spending countless hours plugging these sorts of things in. But in all seriousness, Michael, why the fears? What are the major concerns you alluded to earlier? Talk us through it. Yeah, I think there is quite a bit of fear in the space, and generally in the economy, right — will AI take our jobs? And I think that concern with this technology is one of them, with things like AIs that generate software code, such as Codex. And — no, no worries. All right, well, while we're waiting for Margaret to join us, I want to ask you: we have a video comment that was sent to us from Rupal Patel about how this technology can actually be used quite specifically. Here's an example of it being used for people
10:42 pm
who don't have speech abilities to help them express themselves. take a look. well, 1st would you use an assistive device? oftentimes just an economy is a movement. you can't enter the entire phrase, grammatically perfect phrase. and so often people just click the telegraph, i can put the content words and then there's a text expansion system in the system where it can then expand the tech center sentence. and it listeners who are unfamiliar with this individual listen to the telegraphic and, but it may seem as if this individual is not cognitively and out. generally, i might, might have that capability and be even more conversational in that response of text substantial. but again, there's a fine line between is that really what the individual is trying to say the words in their mouth. we can kind of expanding the output in such a way that it is no longer representative of the individual. so that's sort of
10:43 pm
So fascinating there. Sharon, I mean, there are so many different potential uses — in fact, ways it's already being used — even creative, artistic uses. What did you make of how she articulated that? I could see it, you know: for someone who doesn't have a certain speech ability, these tools could be used in this way. But I mean, there are hundreds of use cases, for almost anything you can think of. I think what Michael was saying is that you really have to be careful about it. So you have to think about what the use is and how it's used. Is it a situation where it wouldn't cause any harm? Well then, that's fine — maybe it's just assisting you in your work, or something you're doing during the day, or just a fun project, for example. But of course there are issues around misinformation. So
10:44 pm
when you showed that Synthesia video, for example, those are real people — but there are deepfake videos that are of concern, and stuff like that. And certainly you wouldn't want to put generative AI out there that has to do with a life — something regarding saving a life or harming someone, in a healthcare way, in the medical field, in the military — until these tools can be trusted. The issue is around trust at this time, I will say. Most definitely. And I'm wondering, Michael — not to keep repeating the same point, but when we do talk about the trust, or lack thereof, I'm wondering what you would highlight, in terms of all the concerns. Is it more about copyright? Misinformation? What do you see as being perhaps the most dangerous or damaging misuse? Yeah, sorry — what I really want to say is that I think there are two main concerns that I
10:45 pm
have. One is misinformation. Remember that these large language models are essentially "stochastic parrots" — a term coined by Dr Emily Bender, Timnit Gebru and their co-authors. What's going on is that the model is giving us statistical answers: statistically likely words. And so the issue is that it's going to give you something approximately true, to some measure of truthfulness. But when you start getting into healthcare information, like medical diagnoses, or even knowledge about Native Americans, it is absolutely wrong — ChatGPT will give you wrong information. The other point I want to make is that the other risk here is the data. Remember that these models need large datasets. DALL-E and Stable Diffusion, in general, are trained on essentially the entire internet, so virtually everyone who has any kind of internet presence has their data within the models, and there's risk there. As researchers have demonstrated, there is actually medical data in some of these datasets, and it's very difficult to extract your data once it's in there.
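Michael's "statistical answers" point can be seen directly by inspecting what a language model actually produces: a probability distribution over possible next words, with no built-in notion of truth. A minimal sketch, again assuming the transformers library and GPT-2 as a small stand-in:

```python
# A minimal sketch of "statistical answers": the model only scores which
# token is likely to come next; it has no notion of whether that is true.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)

# Print the five most probable next tokens and their probabilities.
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()]):>12}  {p.item():.3f}")
```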
10:46 pm
Right. And a lot of your concerns and what you're mentioning are being echoed on YouTube in our chat right now. For example, one comment says AI will create a huge problem for our next generation, and another asks what ethics should be proactively followed so that AI will not be misused, given these negative concerns. Is there a way, do you think, Sharon, to do that? I do think so, and it's a shame that Margaret isn't joining us, because she certainly would have a lot to say about that. But I think she would talk about — first of all, there are organizations that are working on a lot of these issues, and governments: the EU, for example, is currently working on the AI Act,
10:47 pm
which will hopefully work on regulating some of these tools. There are non-profit organizations, and there are researchers working on ways to balance the evolution of these tools with regulating them. And that has to do both with the models and, as Michael said, with the data they're being trained on. So for example, with DALL-E and art, artists are filing lawsuits about it, and many people think that these issues around copyright and ownership will even reach the Supreme Court at some point. Yeah, I mean, it certainly seems — I don't want to say likely, but not too far-fetched. Well, we asked our audience if they're excited about all of this, especially the kind of rapid innovation — or at least that's the perception we have of
10:48 pm
how innovative this has been — and where the conversation might lead us. And we have a filmmaker, Malik Afegbua — you might have seen some of his photos already. If you look at my screen here, these are photos he's posted to Instagram. And when we first saw these, we thought they were real elderly people rocking some cool threads in a fashion show. Turns out it's all AI-generated. So the question is — well, actually, let's listen to what the filmmaker had to say himself. Take a listen to what Malik sent us. I'm definitely excited about the interest in generative AI, because of the fact that it's experimental — still experimental — and once we learn more about the concerns, and all the concerns are regulated, we'll know how to approach it even better. Because now we're trying to understand the tech, we're exploring it, trying to extend it to the protocols of ethics and ethical things. We'll get to
10:49 pm
a space where everything is regulated. And creative professionals can do things with it, infuse it into their work, so it's not going to be a replacement — it's going to be like an extension for everyone. So, Michael, how do you think the generative AI community can solve its own issues? I think it's really unfortunate that Margaret from Hugging Face isn't here, because I think we need to start with the data, and with concepts for constructing AI that have been promoted within the community, like data cards: descriptive information about how the data was collected and what the sources are. That helps build trust within the community that the data sources are actually useful and not being exploited. And I think there's also a need to extend copyright protection. I believe personally that these systems should be open, because basically everyone's data is in there. And I think as we begin there and proceed, we ought to be aware of that and, as Sharon mentioned, build regulatory infrastructure, so that these systems — which are, again, essentially stochastic parrots — aren't representing the biases of our society and reinforcing them negatively.
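The data cards Michael mentions are usually written as structured documentation; the hypothetical Python dictionary below is only a sketch of the kinds of fields such a card might record, not any particular standard.

```python
# A hypothetical sketch of the descriptive fields a data card might record.
dataset_card = {
    "name": "example-community-speech-corpus",   # hypothetical dataset
    "description": "Interview recordings donated by community members.",
    "sources": ["community recording sessions, 2021-2022"],
    "collection_method": "informed-consent recordings approved by speakers",
    "licensing": "community data-sovereignty agreement",
    "known_limitations": ["small speaker pool", "single dialect"],
    "intended_uses": ["speech recognition research"],
    "out_of_scope_uses": ["voice cloning", "surveillance"],
}

for field, value in dataset_card.items():
    print(f"{field}: {value}")
```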
10:50 pm
Well, we actually have another video comment, sent to us by Alex Engler, a researcher at Brookings, talking about some of these worries. And I don't want to over-index on the skepticism, but I know that a lot of people have a healthy amount of skepticism about all of this as we're learning about it. Take a look at what he had to say. So most people are worried about the potential malicious use of generative AI, like for automated harassment and non-consensual pornography. Those are really serious issues, but they're also only half the story. We should also be concerned about emerging
10:51 pm
commercial uses of generative AI. There are billions in venture capital funding that will lead to experiments in generative AI, some of which might be helpful, but we should also be really wary about unproven claims — like using this technology to perform job interviews, to give legal advice, or to help with your finances, all of which I expect we'll see. Sharon, what do you make of that? I see you nodding there. Well, I've spoken to Alex Engler in the past, and he's been really helpful in helping me understand some of the policy issues, and I agree with him. One person I spoke to recently said that the thing that's fascinating about generative AI and all of these technologies is that right now it's all sort of being played out in real time. ChatGPT just kind of appeared, and everyone could suddenly use it — it wasn't rolled out slowly or anything like that; suddenly anyone could try it. And yes, there are all sorts of companies coming out now. There are billions of dollars in
10:52 pm
funding pouring in, from Big Tech to start-ups. Everyone's going to be fighting over this space, and there are a lot of concerns about what this means from a commercial standpoint — both for enterprise businesses and also just for folks buying stuff from the store. So I think there are a whole boatload of concerns that have to be dealt with, whether it's through regulation, legislation, or ethical researchers at universities and the like. You know, for as much as we talked earlier about how sometimes it's not completely accurate, or you could tell that the lettering was off — Michael, looking at these photos again of Malik's, it's really incredible to me just how real it looks. And so I'm wondering: where do you see the sort of artistic expressions and uses of this really helping us elevate conversations?
10:53 pm
Yeah, I think from a positive perspective, we're approaching an era where entire worlds can be generated at the whim of the user. I'm a big VR and AR fan, and I foresee this being used to create brand-new metaverses — essentially, being able to walk around New York and ask your assistant, through your headset, what the city looked like in the 1700s. And I foresee these technologies being able to take up community datasets of Native Americans to create a vision, to create connections, and to demonstrate and share their reality and vision of what our relationship is with technology and also with the land and architectures. I would love these technologies to be serving underserved communities, like the Lakota and the Cheyenne, and to create vibrant worlds and spaces. And I don't think it's entirely negative — I think there's risk, but there's also excitement. Well, I do want to share a statement from ChatGPT —
10:54 pm
a statement generated by ChatGPT, I should say, rather than a statement from the company OpenAI, about these ethical concerns. It reads: AI models like ChatGPT are trained on massive amounts of text data and can inherit biases and misinformation present in that data. The AI community and developers have a responsibility to actively address these issues through regular evaluations and audits of the training data and model. It is important for users to critically evaluate the information provided by models and to corroborate information with additional sources whenever possible. Now, a lot of words — a mouthful — but can we just talk through: what do you make of that being the explanation it generates about the ethical concerns? What does that reveal to us? I mean, it certainly seems like it's accurate, but is it just predicting what words should come next, or is it in fact something we can trust to be factual, Sharon? Well, we know we definitely cannot trust it to be factual, I think,
10:55 pm
based on all the misinformation that I've seen people catch. If you know the answer — if you know it's true — then yes, you can trust that. But if you're not sure, you'll want to fact-check it. And, Michael, when we look to the future, what in particular excites you most? I know we talked about, artistically, how it can be used. Is there something you're working on that you want to share with us that you think highlights that? Yeah, so, you know, I used to work in industry, at Amazon, so I'm pretty familiar with the technologies. But right now I am working to build automatic speech recognition for Indigenous languages, so that we can use these technologies to reclaim and revitalize language — because 90 percent of languages in North America, that is, Indigenous languages, are at risk of no longer being spoken. And I believe these technologies will play a fundamental role in how we're able to keep them alive, keep them vibrant, and contribute to our ecology of thought.
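A minimal sketch of the automatic speech recognition workflow Michael describes, assuming the transformers library and an English wav2vec2 checkpoint as a placeholder; a real Indigenous-language system would need a model trained on that language's own recordings, under the community's governance.

```python
# A minimal sketch of automatic speech recognition. The English checkpoint
# below is a placeholder; an Indigenous-language system would be trained on
# that language's recordings.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="facebook/wav2vec2-base-960h")

result = asr("recording.wav")  # "recording.wav" is a hypothetical audio file
print(result["text"])
```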
10:56 pm
And I want to share with you a tweet from Marc Andreessen — another skeptic, perhaps: AI regulation equals AI ethics equals AI safety equals AI censorship; they're the same thing. Are they, Sharon? No, I don't think they're the same thing. But, you know, I can't say — I'm not an expert in AI regulation or AI safety or AI ethics. But in my coverage it definitely seems like there will have to be some pushback. It can't just be totally free rein, and it won't be. So regulation is coming, legislation is coming. The question is, how can we do the best we can to make sure these tools can be used for good, for the types of use cases that Michael was just talking about? Because it's definitely going to change the way we work. For example, Microsoft, just
10:57 pm
over the past couple of days, is already reported to be adding ChatGPT, or similar technologies, to all of its applications that so many of us use every day at work. So, you know, Pandora's box is open, right? It's not going back in the box. Right. Well, speaking of Pandora's box, in our YouTube chat box we have Mike V saying: I worry about the lack of diversity in development and the weaponization of AI against under-represented people. We also have another viewer asking: where will AI be in the next five years? Any quick answers to that very serious question, Sharon? Well, in five years I think we might not even be talking about AI — it'll just be a part of our technology, a part of our lives. It'll sort of be underlying so many things, so much more so than it is already. So we may not even use the word AI anymore. Maybe that will just be old school.
10:58 pm
It'll probably be old school — just like I might even be old school, as we saw at the top of the show, with ChatGPT writing the introduction to the show for us, although it didn't write this ending. That's all the time we have for today. I want to thank Sharon as well as Michael. It's a shame we couldn't have Margaret with us, but we will definitely continue to follow the latest in AI. See you next time.
10:59 pm
Every year in China, an estimated 80,000 children are abducted by one of their parents. 101 East follows some mothers desperately trying to reunite with their children, on Al Jazeera. Talk to Al Jazeera: we ask the questions, we listen, we meet with global newsmakers and talk about the stories that matter, on Al Jazeera. This small town in the Rift Valley, with its breathtaking scenery and high altitude, is a renowned athletics hub. Here you'll find athletes known for middle- and long-distance running; many have won numerous awards and gold medals. Yet as the runners continue to break records, dozens of them are being suspended from competition because of doping. Kenya has been dealing with the rising use of performance-enhancing drugs.
11:00 pm
One of the most misused drugs is prescribed for anemic patients: it's a jab that increases the number of red blood cells, which carry oxygen to the muscles. Drugs like this one are easily available here — all you need is cash and a good pharmacist. People we talked to say it's a lucrative business. Some of the athletes who are under suspension say they just want to keep their heads low, continue training, wait out their ban, and hope they can compete again. Hello, I'm Lauren Taylor with the top stories on Al Jazeera. A long-time Guantanamo Bay detainee who was tortured at CIA black sites and jailed