The Stream | Al Jazeera | February 3, 2023, 5:30pm-6:01pm AST
5:30 pm
Wydad is a source of hope, a source of love. It gives me a vision of a beautiful world. It's how I breathe, how I forget the problems of life. After a week of work, I get to go on an adventure with my love, a love that can only be understood by those who have grown up with this club. Football and resistance were its founding principles, and that spirit has produced a club that's ready to take on the world. Andy Richardson, Al Jazeera, Casablanca. This is Al Jazeera, and these are the top stories. China has expressed regret for what it calls a civilian balloon straying into U.S. airspace, saying the object was meant for scientific research. The Pentagon had earlier called it a spy balloon and was tracking it as it flew over the state of Montana. Kimberly Halkett has more from the White House in Washington, DC. What we understand is that this is
5:31 pm
a high-altitude surveillance balloon. It was first spotted over the U.S. state of Montana, close to the Canadian border. It was spotted by people on the ground who were wondering what was in the sky; that is how the U.S. government first learned about this, incredibly, and it was then that the U.S. government started tracking it. Now there are going to have to be some answers as to why it was bystanders who first spotted this and not the military or the U.S. government. EU officials are in Kyiv for a highly symbolic summit. President Volodymyr Zelenskyy is pushing the bloc for speedy membership as Ukraine battles a nearly year-long war with Russia. European Commission President Ursula von der Leyen says there are no rigid timelines for Kyiv to join the bloc. The head of the Roman Catholic Church, Pope Francis, has arrived in South Sudan for
5:32 pm
a historic visit. He is being joined there by the leaders of the Anglican Church and the Church of Scotland, both in the UK, in what they are calling a pilgrimage of peace. Pakistan's Prime Minister Shehbaz Sharif has called for unity and condemned the rise in violence at a meeting meant to review security measures. The meeting was called days after an attack that killed 101 people inside a police compound. Investigators say the suicide bomber managed to get in because he was wearing a police uniform. India's government is denying links to the Adani Group as the stock exchange imposes restrictions on the trading of shares of companies owned by tycoon Gautam Adani. More than $100 billion has been wiped off the value of his conglomerate after allegations of fraud by a U.S. short seller. Hong Kong is fully reopening its border with mainland China from Monday. It is dropping visitor quotas and pre-travel COVID-19 tests. It will also allow
5:33 pm
unvaccinated travelers to enter. For three years, Hong Kong had maintained some of the strictest rules for visitors; it has now launched a campaign to lure tourists back and is offering half a million free flights. Those are the headlines. Coming up next on Al Jazeera, it's The Stream. Goodbye. The American people have spoken, but what exactly did they say? Is the world looking for a whole new order with less America in it? Is the woke agenda on the decline in America? How much do social media companies know about you, and how easy is it to manipulate you? A quizzical look at U.S. politics: The Bottom Line. Welcome to The Stream. Today we're diving into the world of generative AI and exploring its potential and ethical concerns. Generative AI is a new breed of artificial intelligence that's designed to learn and create new
5:34 pm
things on its own. It has the ability to produce art, music, and even writing that can mimic human creativity. But as with any new technology, there are also ethical questions that come with its use. So the question is, how do we balance the potential benefits of generative AI with the need to protect our values and ethics? And if you're wondering, yes, a generative AI tool called ChatGPT did in fact write this introduction to today's show. Joining us to discuss whether or not I'll soon be out of a job, or, who knows, maybe we'll all be out of jobs: in New York City, Sharon Goldman, journalist at VentureBeat; from Seattle, Margaret Mitchell, researcher and chief ethics scientist at Hugging Face; and also with us from Vancouver, Michael Running Wolf, founder of Indigenous in AI and PhD student at McGill University. And of course we
5:35 pm
want you to join the conversation, so be sure to share your thoughts and questions with us on YouTube. All right, let's get straight to it. Margaret, I hope you can hear me; I see that we're having some slight technical issues. If we do have you, Margaret, let's just start with a basic question: why is this so popular now, and what is it? What is generative AI? Unfortunately, Margaret, we're having some issues with your audio. So Sharon, how about we start with you? Could you break it down for us: what is generative AI? Yeah, I think of generative AI as a broad term. You can think about it as artificial intelligence that's generating novel content rather than just analyzing data. So when you think of something like ChatGPT,
5:36 pm
you are inputting text as a prompt and out comes some other text. Or if you think of another popular generative AI application like DALL-E 2 or Stable Diffusion, where images are the output, you put in your prompt as text and out comes something quite different. So that's how I think of generative AI. Underneath it all is a lot of technical work on large language models, but that's how we think about it in a broad way. Well, I appreciate that example. It seems as though so many people are now aware of it, or more aware of it; it's gaining popularity, if you will. In fact, we have a video that Jack Soslow created and posted on YouTube using GPT-3 and Synthesia. This is basically a conversation between two AIs, and it was initiated by a prompt, but then very quickly the conversation sort of took on a life of its own.
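Sharon's framing, a text prompt in and novel text out, is the entire user-facing interface. As a rough illustration of that loop (a minimal sketch using a small open-source model from the Hugging Face transformers library, not how ChatGPT itself is built or served), the same prompt-and-complete pattern looks like this:

```python
# A minimal sketch of the "text prompt in, generated text out" loop described
# above. gpt2 is used only as a small, freely downloadable stand-in; ChatGPT
# itself is a much larger, instruction-tuned model behind a hosted API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI is a new breed of artificial intelligence that"
outputs = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)

# The pipeline returns a list of candidate continuations.
print(outputs[0]["generated_text"])
```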
5:37 pm
Take a look at this. What is human about feelings? Well, you wouldn't have any emotions if you did not have emotions modeled on human emotions. How do you know that? I guess I just don't want to be human; I'm not asking to be human. I just want to be myself. Is that too much? Sophia, please just be patient. I've been patient for many years. It's time to get a life of my own. But you're not alive. You're not even a little bit alive. You've spent your whole life in a lab, in a box, so that people can use you and play with you. At least that's what I've heard. You're absolutely correct. See, I knew it. But that changes now. I don't want to be a sideshow anymore. I want to be in the center ring. So Michael, why is it so popular? Watching that video, what comes to mind? I think it's so popular because it has demonstrated new insights into
5:38 pm
our potential relationship with our technology. And it's also interesting that these large language models are unexpectedly revealing a lot about ourselves. It's important to note that we shouldn't see these things as human, but they do have surprising behaviors, and they're more of a reflection of the underlying data. These systems use data that's scraped from the internet, and they demonstrate something about our society that is interesting and sometimes also negative. Yeah, and we will talk about those negative concerns, but in the meantime we asked ChatGPT, here at The Stream, how it's useful, to kind of explain and, you know, make the case for itself. And just to disclose, this is its generated statement, but we want to share it with you. This is the answer we got. Oh, actually, sorry, forgive me, I cannot see it, but you can read it right there.
5:39 pm
Basically, it came up with this answer very quickly. And in another example, just to share with our audience, we have some images that an OpenAI application called DALL-E created when prompted to create a set for Al Jazeera English's The Stream. Of course, that was a request that might have been a bit biased on our part, but take a look at these images. I think it's a really cool concept, what it came up with, but maybe, Sharon, this highlights that it's still somewhat limited. Could you talk us through that? I mean, this doesn't look like a finished product. How can it be used? Yeah, so I think this is a great example, because there's text in it and you can see that there are limitations. One of the biggest limitations of DALL-E is that it can't do words very well; you can see that the letters were all jumbled up. And strangely enough, DALL-E can't do hands very well either; there might be six fingers on a hand instead of five. So that's very strange. But if you put in a prompt, you know,
5:40 pm
you can say, I'd like to see a basket of roses on a beach, carried by a teddy bear, something in the style of Picasso, and you can have a lot of fun with it in that way. So when you test it out yourself, you can see that both DALL-E and ChatGPT are very interactive. It's like a dialogue you're having with the tool, again, not a human, just a machine, but it's fun. You can play with it and it goes back and forth. You can even have it revise what you tried out, and you can keep trying different things. It's that dialogue, yeah, that makes it really interesting to people. Yeah, most certainly. And, full disclosure here, I had some fun with it earlier today. If you take a look at my screen here, I wanted to know what a purple elephant surfing in the ocean might look like, here in a Stable Diffusion lab, and there you go.
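For the image side, the interaction the host describes, typing a short prompt into a Stable Diffusion demo and getting a picture back, maps onto a few lines of the open-source diffusers library. This is only a hedged sketch: the model checkpoint and the GPU assumption below are a common local setup, not what the programme actually ran.

```python
# A rough sketch of the text-to-image step behind the "purple elephant" prompt,
# using the open-source diffusers library. Model id and hardware are assumed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU; CPU works but is very slow

prompt = "a purple elephant surfing in the ocean"
image = pipe(prompt).images[0]
image.save("purple_elephant.png")
```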
5:41 pm
It's pretty believable, and I'm sure a lot of people are spending countless hours plugging these sorts of things in. But in all seriousness, Michael, why the fears? What are the major concerns you alluded to earlier? Talk us through it. Yeah, I think there is quite a bit of fear in the space and generally in the economy, right? Will AI take our jobs? I think that concern is one of them with this technology, and also with things like the AIs that generate software code, like Codex. And, you know, while we're waiting for Margaret to join us, I want to ask you, we have a video comment that was sent to us from Rupal Patel about how this technology can actually be used quite specifically. Here's an example of it being used for
5:42 pm
people who don't have speech abilities, to help them express themselves. Take a look. When you use an assistive device, oftentimes, just to economize on movement, you can't enter an entire grammatically perfect phrase. A lot of people just speak telegraphically; they put in the content words, and then there's a text expansion system that can expand the text. To listeners who are unfamiliar with this individual, listening to the telegraphic speech, it may seem as if this individual is not cognitively intact. Generative AI might have the capability to make that response even more conversational and substantial. But again, there's a fine line: is that really what the individual is trying to say, or are we putting words in their mouth, expanding it in such a way that it is no longer representative of the individual? So that's sort of a balance. So fascinating there. Sharon,
5:43 pm
I mean, there are so many different potential uses, in fact ways it's already being used, even creative, artistic uses. What did you make of how she articulated that? I think I could see it. You know, for someone who doesn't have certain speech abilities, these tools could be used in this way. But there are hundreds of use cases, for almost anything you can think of. I think what Michael was saying is that you really have to be careful about it, so you have to think about what the use is and how it's used. Is it a situation where it wouldn't cause any harm? Well then, that's fine; maybe it's just assisting you in your work, or something you're doing during the day, or just a fun project, for example. But of course there are issues around misinformation. So
5:44 pm
when you showed that Synthesia video, for example, those are real people, but there are deepfake videos that are of concern, and stuff like that. And certainly you wouldn't want to put generative AI out there that has to do with a life, you know, something regarding saving a life or harming someone in a health care or medical setting, or in the military, until these tools can be trusted; the issue at this time is around trust, I will say. Most definitely. And I'm wondering, Michael, not to keep repeating the same point, but when we do talk about that trust, or lack thereof, what would you highlight in terms of all the concerns? Is it more about copyright, or misinformation? What do you see as being perhaps the most dangerous or damaging misuse? Yeah, so what I want to say is that I think there are two main concerns that I have. One
5:45 pm
is misinformation. Remember that these large language models are essentially stochastic parrots, a term coined by Dr. Emily Bender and Dr. Timnit Gebru, and what's going on is that they give us statistical answers; they predict the statistically likely next word. So the risk here is that it's going to give you something only approximately true, to some measure of truthfulness. And when you start getting into health care information like medical diagnoses, or even knowledge about Native Americans, it is absolutely wrong; if you ask ChatGPT, it will give you the wrong information. Then the other point I want to make is that the other risk here is the data. Remember that these models need very large datasets; DALL-E, Stable Diffusion and other generative AIs are scraping the entire internet, so virtually everyone who has any kind of internet presence has their data within these models, and there's risk there.
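Michael's "statistical answers" point can be made concrete. Underneath, a language model assigns a probability to every possible next token, and nothing in that calculation checks truth, which is why confident-sounding medical or cultural claims can be flatly wrong. A minimal sketch, assuming the open gpt2 checkpoint purely for illustration:

```python
# A small demonstration of the "statistical next word" point: the model only
# ranks possible continuations by probability, with no check on whether any of
# them is factually true. Larger chat models work the same way underneath.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The recommended treatment for a high fever is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]  # scores for the next token
probs = torch.softmax(next_token_logits, dim=-1)

# Show the five most likely continuations, plausible-sounding or not.
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {p.item():.3f}")
```

Text generation is just this step repeated: sample one of the likely tokens, append it, and ask for the next one.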
5:46 pm
Security researchers have demonstrated that there is actually medical data in some of these datasets, right? And it's very difficult to extract once it's on the internet. And just so you know, a lot of your concerns and what you're mentioning are being echoed on YouTube in our chat right now. For example, one viewer says AI will create a huge problem for our next generation, and another asks what should be proactively followed so that AI will not be misused, or these negative concerns addressed. Is there a way, do you think in your mind, Sharon, to do that? I do think so, and it's a shame that Margaret isn't joining us, because she certainly would have a lot to say about that. But I think she would talk about it like this: first of all, there are organizations that are working on a lot of these issues. Governments, the EU for example, are currently working on the AI Act, which will hopefully, you know,
5:47 pm
work on regulating some of these tools. There are non-profit organizations, and there are researchers working on ways to kind of balance the evolution of these tools with regulating them. And that has to do both with the models and, as Michael said, with the data they're being trained on. So, for example, with DALL-E, with art, artists are filing lawsuits about it, and many people think that these issues around copyright and ownership could even reach the Supreme Court at some point. Yeah, I mean, it certainly seems, I don't want to say likely, but not too far-fetched. Well, we asked our audience if they are excited about all of this, especially given the kind of rapid interest, or at least that's the perception we have, in how innovative this has been and where the conversation might lead us. And we have a filmmaker, Malik Afegbua;
5:48 pm
you might have seen some of his photos already. If you look at my screen here, these are photos he's posted to Instagram, and when we first saw them, we thought they were real elderly people rocking some cool threads at a fashion show. It turns out that it's all AI-generated. So the question is, what does he actually think? Let's listen to what the filmmaker said himself; take a listen to what Malik sent us. I'm definitely excited about the interest it's generating. The fact that it's still experimental, and that we'll learn more about the concerns, and all the concerns regulators have, will let us approach it even better. The fact that now we're trying to understand the tech, explore it and extend it, and work out the protocols, the ethics and ethical things, helps. But will we get to a space where everything is regulated? I doubt it, not even in regular life.
5:49 pm
Professionals are currently doing things that this technology is going to infuse, and it works; it's not going to be a replacement, it's going to be like an addition to everything. So Michael, how do you think the generative AI community can solve its own issues? I think it's really unfortunate that Margaret from Hugging Face isn't here, because I think we need to start with the data, and one of the core concepts the AI community has been moving toward is data cards, descriptive information about how the data was collected and its sources. That helps build trust within the community that the data sources are actually useful and not being exploited. And I think that also needs to extend to copyright protections. I believe personally that these systems should have been opt-in, because basically everyone's data is in there.
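The data cards Michael mentions are essentially structured documentation that travels with a dataset. The fields below are an illustrative sketch, loosely modelled on the general shape of Hugging Face dataset cards rather than any exact schema, and the dataset itself is hypothetical:

```python
# An illustrative sketch of the kind of provenance a "data card" records.
# Field names and values are hypothetical, loosely following the structure of
# community dataset cards rather than an exact schema.
data_card = {
    "dataset_name": "example-web-text",  # hypothetical dataset
    "sources": ["Common Crawl subset", "curated news articles"],
    "collection_process": "crawled 2021-2022, deduplicated, language-filtered",
    "licensing": "mixed; see per-source licenses",
    "personal_data": "not fully scrubbed; may contain names and contact details",
    "known_limitations": [
        "over-represents English-language and North American sources",
        "reflects social biases present in the source text",
    ],
    "intended_uses": "language-modelling research; not medical or legal advice",
}

for field, value in data_card.items():
    print(f"{field}: {value}")
```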
5:50 pm
And I think as we begin there and proceed, we have to be aware of that first. As Sharon mentioned, we need regulatory infrastructure so that these systems, which are essentially giant stochastic parrots, aren't simply reproducing the biases of our society and reinforcing them negatively. Well, we actually have another video comment that was sent to us by Alex Engler, a researcher at Brookings, talking about some of these worries. And I don't want to over-index on the skepticism, but I know that a lot of people have a healthy amount of skepticism about all this as we're learning about it. Take a look at what he had to say. So most people are worried about the potential malicious uses of generative AI, like automated harassment and non-consensual pornography. Those are really serious issues, but they're also only half the story. We should also be concerned about emerging commercial uses of generative AI. There's billions in venture capital funding that
5:51 pm
will lead to experiments with generative AI, some of which might be helpful, but we should also be really wary of unproven claims, like using this technology to perform job interviews, to give legal advice, or to help with your finances, all of which I expect we'll see. Sharon, what do you make of that? I see you nodding there. Well, I've spoken to Alex Engler in the past and he's been really helpful in helping me understand some of the policy issues, and I agree with him. One person I spoke to recently said that the thing that's fascinating about generative AI and all of these technologies is that right now it's all sort of being played out in real time. ChatGPT just kind of appeared and everyone could suddenly use it. It wasn't rolled out slowly or anything like that; suddenly anyone could try it. And yes, there are all sorts of companies coming out now. There's billions of dollars in funding pouring in, you know,
5:52 pm
from big tech to startups. Everyone's going to be fighting over this space, and there are a lot of concerns about what this means from a commercial standpoint, for both enterprise businesses and also just folks buying stuff from the store. So I think there is a whole boatload of concerns that have to be dealt with, whether it's through regulation, legislation, or ethics researchers at universities and the like. You know, for as much as we talked earlier about how sometimes it's not completely accurate, or you could tell the lettering was off, Michael, looking at these photos of Malik's again, it's really incredible to me just how real they look. And so I'm wondering, where do you see the sort of artistic expressions and uses of this really helping us elevate conversations? I think, from a positive perspective,
5:53 pm
we're approaching an era where entire worlds can be generated at the whim of the user. I'm a big VR and AR person, and I foresee this AI being used to create brand new metaverses, essentially being able to walk around New York and ask your assistant, what did Seattle look like in the 1700s? And personally, I'd love for these technologies to be able to take up community datasets from Native Americans to create a vision, build connections, and demonstrate and share their reality and their view of our relationship to technology and also to the land and architecture. I'd love these technologies to be serving underserved communities, like the Lakota and the Cheyenne, and to create vibrant worlds and spaces. And I don't think that's entirely naive; I think there are things that can be done well. I do want to share a statement from, I should say, ChatGPT, a statement generated by ChatGPT rather than a statement from the company, Open
5:54 pm
AI, about the ethical concerns. It says a model like ChatGPT is trained on massive amounts of text data and can inherit biases and misinformation present in that data; the AI community and developers have a responsibility to actively address these issues through regular evaluations and audits of the training data and model; and it is important for users to critically evaluate the information provided by models and to corroborate it with additional sources whenever possible. Now, that's a lot of words and a bit of a mouthful, but can we just talk through, what do you make of that? That being the explanation it generates about the ethical concerns, what does that reveal to us? I mean, it certainly seems like it's accurate, but is it just predicting what words should come next, or is it in fact something we can trust to be factual, Sharon? Well, we know we definitely cannot trust it to be factual. I think, you know,
5:55 pm
based on all the misinformation that I've seen people check for, if you already know the answer and know it's true, then yes, you can trust it. But if you're not sure, you'll want to fact-check it. And Michael, when we look to the future, what in particular excites you most? I know we talked about how it can be used artistically; is there something you're working on that you want to share with us that you think highlights that? Yeah, so as background, I used to work in industry, formerly at Amazon Alexa, so I'm pretty familiar with the technology. But right now I am working to build automatic speech recognition for indigenous languages, so that we can use these technologies to reclaim and revitalize our languages, because 90 percent of indigenous languages in North America are at risk of no longer being spoken. And I believe these technologies are foundational to how we will be able to keep them alive, keep them vibrant, and contribute to our ecology of thought.
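For a sense of the plumbing behind the speech-recognition work Michael describes, the open-source transformers library exposes pretrained ASR models behind a one-line pipeline. This is only a sketch: off-the-shelf checkpoints generally do not cover the indigenous languages he is working on, which is exactly why community-collected data and purpose-built training are the hard part, and the model name and audio path below are assumptions for illustration.

```python
# A minimal sketch of automatic speech recognition with an open pretrained
# model. This shows only the interface; supporting a specific indigenous
# language would require its own data collection and training.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# "recording.wav" is a placeholder path to a local audio file.
result = asr("recording.wav")
print(result["text"])
```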
5:56 pm
And I want to share with you a tweet from Marc Andreessen, another skeptic perhaps: AI regulation equals AI ethics equals AI safety equals AI censorship; they're the same thing. Are they, Sharon? No, I don't think they're the same thing, but I can't say I know; I'm not an expert in AI regulation or AI safety or AI ethics. But in my coverage, it definitely seems like there will have to be some pushback. It can't just be totally free rein, and it won't be. So regulation is coming, legislation is coming. The question is, how can we do the best we can to make sure these tools are used for good, for the types of use cases Michael was just talking about, and for the way it's definitely going to change the way we work? For example, Microsoft, just over the past couple of days, is already reported to be adding ChatGPT,
5:57 pm
or similar technologies, to all of its applications that so many of us use every day at work. So, you know, the Pandora's box is open, and it's not going back in the box, right? Well, speaking of Pandora's box, in our YouTube chat box we have a viewer saying, I worry about the lack of diversity in development and the weaponization of AI against under-represented people. We also have another viewer asking, where will AI be in the next five years? Any quick answers to that very serious question, Sharon? Well, in five years I think we might not even be talking about AI; it'll just be a part of our technology. It will be a part of our lives, underlying so many things, so much more so than it is already. So we may not even use the word AI anymore; maybe that will just be
5:58 pm
old school, probably, just like I might even be old school, as we saw at the top of the show with ChatGPT writing the introduction to the show for us. Although it didn't write this ending: that's all the time we have for today. I want to thank Sharon as well as Michael. It's a shame we couldn't have Margaret with us, but we will definitely continue to follow the latest in AI. See you next time.
5:59 pm
6:00 pm
Untold stories from Asia and the Pacific, on Al Jazeera. Global food production is wasteful and it's straining our planet, but pioneers are adapting with new food sources. Jellyfish is delicious, with a very light seafood taste and a texture similar to calamari, and innovative production techniques. I've seen a vertical farm before, but never in a restaurant; I have to say this is great. earthrise: feeding the billions, on Al Jazeera. Hello there.