
tv   The Stream  Al Jazeera  February 3, 2023 7:30am-8:01am AST

7:30 am
only to be discarded by western governments. They had to flee, sometimes with family members, leaving colleagues and their culture behind, and now they seem to be forgotten. So this is really terrible. Back in Pakistan, the prosecutors are just a tiny group among the millions of Afghans who fled here after the Soviet invasion in 1979 and during the subsequent civil war. For these women it's been more than a year in limbo. But for some, at last, good news: families on the move, heading to the airport for a flight to Spain, the emotion clear as they head for Europe. We can only hope they'll be able to return safely one day. Nadim Baba, Al Jazeera. This is Al Jazeera, these are the top stories. European Commission President Ursula von der Leyen says
7:31 am
a new sanctions package, including price caps on Russian oil, will be ready for the first anniversary of Russia's invasion. She made the comments soon after arriving in Kyiv, ahead of a summit over Ukraine's push to join the EU. The Pentagon says it's tracking a suspected Chinese spy balloon spotted in US airspace on Wednesday. There are reports military leaders even considered shooting it down, but decided against it because of the potential safety risk from falling debris. Our Pentagon correspondent, Patty Culhane, has more from Washington. So what they're saying is that they are tracking it, and that they've taken steps to make sure that no surveillance can in fact be done. But again, the official statement is: there's a balloon, we're watching it, it doesn't pose a danger to air travel or the collecting of sensitive information. In the Pentagon, what unnamed officials are telling reporters on background is that they believe this is the Chinese. What we do know is that they did close the airspace for
7:32 am
a bit on Wednesday in Billings, Montana. We believe they reopened it afterwards. They put up Raptors and AWACS, those are electronic surveillance radar planes. Israel's Prime Minister Benjamin Netanyahu has attended the opening of Chad's embassy near Tel Aviv, a major step in normalizing relations between the two countries. Chad cut ties with Israel in 1972 in solidarity with Palestinians. Meanwhile, Israel and Sudan have agreed to normalize relations. Israel's foreign minister says an agreement will be signed once the African nation transitions to a civilian government. Israel and Sudan started the process of normalizing as part of the Abraham Accords in 2020. US House Republicans have voted to remove Democratic Representative Ilhan Omar from the chamber's Foreign Affairs Committee. The Somali-born Muslim was ousted over her past criticism of Israel. Democrats have accused
7:33 am
Republicans of targeting Omar because of her identity. A detainee who spent 16 years in Guantanamo Bay has been released and transferred to Belize. Pakistani Majid Khan was arrested by US forces in 2003. OK, those are your headlines. The news continues here on Al Jazeera after The Stream. Talk to Al Jazeera: we ask, but should there not be more oversight, perhaps, of foundations like yours? We listen: when it comes to diversification, we don't do it in order to get rid of the Russian energy source. We meet with global newsmakers and talk about the stories that matter, on Al Jazeera. Hi, and welcome to The Stream. Today we're diving into the world of generative AI and exploring its potential and ethical concerns. Generative AI is
7:34 am
a new breed of artificial intelligence that's designed to learn and create new things on its own. It has the ability to produce art, music, and even writing that can mimic human creativity. But as with any new technology, there are also ethical questions that come with its use. So the question is, how do we balance the potential benefits of generative AI with the need to protect our values and ethics? And if you're wondering: yes, a generative AI tool called ChatGPT did in fact write this introduction to today's show. Joining us to discuss whether or not I'll soon be out of a job, or, who knows, maybe we'll all be out of jobs: in New York City, Sharon Goldman, journalist at VentureBeat; from Seattle, Margaret Mitchell, researcher and chief ethics scientist at Hugging Face; and also with us, from Vancouver, Michael Running Wolf, founder of Indigenous AI and PhD student at McGill University. And of course we
7:35 am
want you to join the conversation, so be sure to share your thoughts and questions with us on YouTube. All right, let's get straight to it. Margaret, I hope you can hear me. I see that we're having some slight technical issues, so I'm not sure if we have Margaret. But if we do, Margaret, let's just start with a basic question: why is this so popular now? And what is it, what is generative AI? Unfortunately, Margaret, we're having some issues with your audio. So Sharon, how about we start with you: could you break down for us, what is generative AI? Yeah, I think of generative AI as a broad term. You can think about it as artificial intelligence that's generating novel content rather than just analyzing data. And so when you think of something like ChatGPT, you know,
7:36 am
you are inputting text, a prompt, and out comes some other text. Or if you think of another popular generative AI application like DALL-E 2 or Stable Diffusion, where images are the output, you'll put in your prompt as text and out comes something different. So that's how I think of generative AI. Underneath it all there's a lot of technical work on large language models, but that's what we think about in a broad way. Well, I appreciate that example. I mean, it seems as though so many people are now aware of it, more aware of it, it's gaining popularity if you will. In fact, we have a video that Jack Soslow created and posted on YouTube using GPT-3 and Synthesia. This is basically a conversation between two AIs, and it was initiated by a prompt, but then very quickly the conversation sort of took on a life of its own.
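A minimal sketch of the prompt-in, text-out pattern Sharon describes, assuming the open-source Hugging Face transformers library and the small public GPT-2 model; this is only an illustration of the idea, not the system behind ChatGPT or the clip that follows.

from transformers import pipeline

# Load a small, publicly available text-generation model (illustrative choice).
generator = pipeline("text-generation", model="gpt2")

# A prompt goes in as text...
prompt = "The ethical questions raised by generative AI include"

# ...and new text comes out, continued token by token from the prompt.
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])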
7:37 am
Take a look at this. What is human about feelings? Well, you wouldn't have any emotions if you did not have emotions modeled on human emotions. How do you know that? I guess I just... I don't want you to be human. I'm not asking to be human. I just want to be myself. Is that too much? Sophia, please, just be patient. I've been patient for many years. It's time to get on with life. But you're not alive. You're not even a little bit alive. You've spent your whole life in a lab, in a box, so that people can use you, play with you. At least that's what I've heard. You're absolutely correct. See, I knew it. But that changes now. I don't want to be a sideshow anymore. I want to be in the center ring. So Michael, why is it so popular? I mean, watching that video, what comes to mind? I think it's so popular because it's demonstrating new insights into
7:38 am
our potential relationship with our technology. And it's also interesting: these large language models are unexpectedly revealing a lot. It's important to note that we can't see these things as humans, but they do have surprising behaviors, and they're more a reflection of the data underlying them. These systems use data that's scraped from the internet, and that demonstrates something about our society that is interesting and can also sometimes be negative. Yeah, and we will talk about the negative concerns. But in the meantime, we asked ChatGPT here at The Stream how it's useful, to kind of explain and, you know, make the case for itself. And this is just to disclose that this is a ChatGPT-generated statement, but we want to share it with you. Um, this is the answer we got. Oh, actually, sorry, forgive me, I cannot see it, but you can read it right there. Basically,
7:39 am
it was so quick, it came up with this answer so quickly. And in another example, just to share with our audience, we have some images that an OpenAI application called DALL-E created when prompted to create a set for Al Jazeera English's The Stream. Of course, that was a request that might have been a bit biased on our part, but take a look at these images. I think it's a really cool concept, what it came up with. But I think, maybe, Sharon, this highlights that it's still somewhat limited. Could you talk us through that? I mean, this doesn't look like a finished product. How can it be used? Yeah, so I think the reason this is a great example is because there's text, and you can see that there are limitations. One of the biggest limitations of DALL-E is that it can't do words very well; you can see that the letters are all jumbled up. And strangely enough, DALL-E can't do hands very well either: fingers, there might be six fingers on a hand instead of five. So that's very strange. But if you put in a prompt, you know,
7:40 am
you can say, I'd like to see a basket of roses on a beach, you know, carried by a teddy bear, something in the style of Picasso. You can have a lot of fun with it in that way. So when you kind of test it out yourself, you can see that both DALL-E and ChatGPT are very interactive; it's like a dialogue you're having with the tool. Again, not a human, just a machine, but it's fun. You can kind of play with it and it goes back and forth. You can even revise what you tried out and keep trying different things, like it's a dialogue. Yeah, that makes it really interesting to people. Yeah, most certainly. And, full disclosure here, I had some fun with it earlier today. If you take a look at my screen here, I wanted to know what a purple elephant surfing in the ocean might look like, using Stable Diffusion. And there you go. I mean, it's pretty, pretty believable.
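For anyone curious how that purple-elephant example might be reproduced, here is a rough sketch assuming the Hugging Face diffusers library, a CUDA GPU and a public Stable Diffusion checkpoint; the exact tool used on the show isn't specified, so treat the model id below as illustrative.

import torch
from diffusers import StableDiffusionPipeline

# Load a public Stable Diffusion checkpoint (illustrative model id).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Text prompt in, image out.
image = pipe("a purple elephant surfing in the ocean").images[0]
image.save("purple_elephant.png")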
7:41 am
And I'm sure a lot of people are spending countless hours plugging these sorts of things in. But in all seriousness, Michael, why the fear? What are the major concerns? You alluded to them earlier, talk us through it. Yeah, I think there is quite a bit of fear in the space, and generally in the economy, right? Will AI take the jobs? And I think that concern with this technology is one of them. And think of things like the AIs that generate software code, like Codex. And, you know... Now, while we're waiting for Margaret to join us, I want to ask you: we have a video comment that was sent to us about how this technology can actually be used quite specifically. Here's an example of it being used for
7:42 am
people who don't have speech abilities, to help them express themselves. Take a look. For somebody who uses assistive technology, oftentimes economy of movement is the issue: you can't enter the entire, grammatically perfect phrase. And so a lot of people just want to telegraph it: I can put in the content words, and then there's a text expansion system that can expand the text into a whole sentence. And to listeners who are unfamiliar with this individual, listening to the telegraphic speech, it may seem as if this individual is not all there cognitively. Generative AI might give them the ability to be even more conversational, and that kind of text expansion is substantial. But again, there's a fine line: is that really what the individual is trying to say, or are we putting words in their mouth, expanding them in such a way that it is no longer representative of the individual? So that's sort of a balance. So fascinating there, Sharon.
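A very rough sketch of the content-words-in, full-sentence-out idea described in that clip, using a generic instruction-tuned model through the Hugging Face transformers pipeline; the model name and prompt are stand-ins, not the actual assistive system being discussed.

from transformers import pipeline

# A small, general-purpose instruction-following model (stand-in choice).
expander = pipeline("text2text-generation", model="google/flan-t5-small")

# The user supplies only the content words...
content_words = "water glass please"

# ...and the model expands them into a complete sentence.
prompt = f"Turn these words into one polite, complete sentence: {content_words}"
print(expander(prompt, max_new_tokens=30)[0]["generated_text"])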
7:43 am
I mean, there are so many different potential uses; in fact, so many ways it's already being used, even creative, artistic uses. What did you make of how she articulated that? Do you think... I could see it, you know. For someone who doesn't have certain speech abilities, these tools could be used in this way. But I mean, there are hundreds of use cases, for almost anything you can think of. I think what Michael was saying is that you really have to be careful about it. So in a way you have to think about what the use is and how it's used. Is it a situation where it wouldn't cause anyone any harm? Well then, you know, that's fine; maybe it's just assisting your work or something you're doing during the day, or just a fun project, for example. But of course there are issues around misinformation. So
7:44 am
when you showed that Synthesia video, for example, those aren't real people, and there are deepfake videos that are of concern, and stuff like that. And certainly you wouldn't want to put generative AI out there in anything that has to do with a life, you know, something regarding saving a life or harming someone, in a health care way, in the medical field, in the military, until these tools can be trusted. And the issue is around trust at this time, I will say. Most definitely. And I'm wondering, Michael, not to keep repeating the same point, but when we do talk about the trust, or lack thereof, I'm wondering what you would highlight in terms of all the concerns. Is it more about copyright, misinformation? What do you see as being perhaps the most dangerous or damaging misuse? I think... yeah, sorry, what I want to say is that I think there's two main concerns that I have. One
7:45 am
is obviously the misinformation. Remember that the large language models are essentially stochastic parrots, a term coined by Dr. Emily Bender and her group. What's going on is that they're giving us statistical answers: picking the statistically likely next words. And so the issue is that it's going to give you something approximately true, to some measure, which is very tricky. And when you start getting into health care information, making medical diagnoses, or even knowledge about Native Americans, it is absolutely wrong if it gives the wrong information. And the other point I want to make is that the other risk here is the data. Remember that these have a need for large datasets. DALL-E, Stable Diffusion and generative AI in general are trained on essentially the entire internet. So virtually everyone who has any kind of internet presence has data within the models, and there's risk there,
7:46 am
as has been demonstrated by security researchers: there is actually medical data in some of these datasets, right? And it's very difficult to extract that once it's on the internet and just say, take it out. I mean, a lot of your concerns, and what you're mentioning, are being echoed on YouTube in our chat right now. For example, one viewer says AI will create a huge problem for our next generation, and another asks what should be proactively done so that AI will not be misused. Is there a way, do you think in your mind, Sharon, to do that? I do think so, and it's a shame that Margaret isn't joining us, because she certainly would have a lot to say about that. But I think she would talk about it. First of all, there are organizations that are working on a lot of these issues. Governments, you know; the EU, for example, is currently working on the AI Act, which will hopefully, you know,
7:47 am
work on regulating some of these tools. There are nonprofit organizations, and there are researchers working on ways to kind of balance the evolution of these tools with regulating them. And that has to do both with the models and, as Michael said, with the data they're being trained on. So, for example, with DALL-E, with art, artists are filing lawsuits about it, and many people think that these issues around copyright and ownership could even reach the Supreme Court at some point. Yeah, I mean, it certainly seems, I don't want to say likely, but not too far-fetched. Well, we asked our audience if they are excited about all of this, especially the kind of rapid interest, or at least that's the perception we have, in how innovative this has been and where the conversation might lead us. And we have a filmmaker, Malik Afegbua;
7:48 am
you might have seen some of his photos already. If you look at my screen here, these are photos he's posted to Instagram. And, you know, when we first saw these, we thought they were real elderly people rocking some cool threads in a fashion show. It turns out it's all AI-generated. So the question is... well, actually, let's listen to what the filmmaker said himself. Take a listen to what Malik sent us. I'm definitely excited about the interest, because the fact is it's experimental, still experimental, and once we learn more about the cons, and all the cons are regulated, we'll know how to approach it even better. The fact that now we're trying to understand the tech, exploring it, extends the conversation to ethics and ethical things. While we'll get to a space where everything is regulated, AI will not, in regular life,
7:49 am
replace professionals who are currently doing things; it's going to infuse into their work. It's not going to be a replacement; it's going to be like an extension for everyone. So Michael, how do you think the generative AI community can solve its own issues? I think, and I really wish we had Margaret here from Hugging Face, because I think we need to start with the data and with the core concepts of constructing AI. What has been implemented within the community are data cards, or descriptive information about how the data was collected and the sources, and that helps the trust within the community, that the data source is actually useful and not being exploited. And I think there's also a need to extend copyright protections; I personally believe in things such as opt-in, because otherwise basically everyone's data is in there. And I think, as we begin there and as we proceed, we ought to be aware of that.
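As a concrete illustration of the data cards Michael mentions, here is a minimal sketch: a small, human-readable record of how a dataset was collected and from whom, saved alongside the data itself. The field names and values are hypothetical, not a formal standard.

import json

# A hypothetical data card: descriptive information about collection and sources.
data_card = {
    "dataset": "community-speech-corpus (hypothetical)",
    "collected_by": "volunteer recording sessions with written consent",
    "sources": ["interviews", "read-aloud word lists"],
    "intended_use": "language-revitalization tools only",
    "known_limitations": ["few young speakers", "one dialect over-represented"],
}

# Store the card next to the dataset so downstream users can check its provenance.
with open("data_card.json", "w") as f:
    json.dump(data_card, f, indent=2)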
7:50 am
As I mentioned before, we need that kind of infrastructure, so that the systems being trained, and the model cards, aren't simply reproducing the biases of our society and reinforcing them negatively. Well, we actually have another video comment that was sent to us by Alex Engler, a researcher at Brookings, talking about some of these worries. And also, I don't want to over-index on the skepticism, but I know that a lot of people have a healthy amount of skepticism about all this as we're learning about it. Take a look at what he had to say. So most people are worried about potential malicious uses of generative AI, like automated harassment and nonconsensual pornography. Those are really serious issues, but they're also only half the story. We should also be concerned about emerging commercial uses of generative AI. There's billions of venture capital funding that
7:51 am
will lead to experiments with generative AI, some of which might be helpful, but we should also be really wary about unproven claims, like using this technology to perform job interviews, to give legal advice, or to help with your finances, all of which I expect we'll see. Sharon, what do you make of that? I see you nodding there. Well, I've spoken to Alex Engler in the past and he's been really helpful in helping me understand some of the policy issues, and I agree with him. One person I spoke to recently said that the thing that's fascinating about generative AI and all of these technologies is that right now it's all sort of being played out in real time. You know, ChatGPT just kind of appeared and everyone could suddenly use it. It wasn't rolled out slowly or anything like that; suddenly anyone could try it. And yes, there are all sorts of companies coming out now. There's billions of dollars in funding pouring in, you know,
7:52 am
from big tech to startups. Everyone's going to be fighting over the space. And there are a lot of concerns about what this means from a commercial standpoint, both for enterprise businesses but also just for folks buying stuff from the store. So I think there are a whole boatload of concerns that have to be dealt with, whether it's through regulation, legislation, or ethics researchers at universities and the like. You know, for as much as we talked earlier about how sometimes it's not completely accurate, or you could tell that the lettering was off: Michael, looking at these photos of Malik's again, it's really incredible to me just how real it looks. And so I'm wondering, where do you see the sort of artistic expressions and uses of this really helping us elevate conversations? I think, from a positive perspective,
7:53 am
I think we're approaching an era where entire worlds can be generated on the fly for the user. I'm big into VR and AI, and I foresee these being used to create brand new metaverses, essentially: like being able to walk around New York, or put on a headset and see what Seattle looked like in the 1700s. And I foresee these technologies being able to uplift communities such as Native Americans, to create a vision and connections, and to demonstrate and share their reality and their vision of our relationship with the technology and also with the land and architecture. I'd love this technology to be a servant of underserved communities, like the Lakota and Cheyenne, and to create vibrant worlds and spaces. And I don't think it's entirely negative; I think there are things that can be done. Well, I do want to share a statement from, I should say, ChatGPT, a statement generated by
7:54 am
ChatGPT, rather than a statement from the company OpenAI, about the ethical concerns. A model like ChatGPT, trained on massive amounts of text data, can inherit biases and misinformation present in that data. The AI community and developers have a responsibility to actively address these issues through regular evaluations and audits of the training data and model. It is important for users to critically evaluate the information provided by models and to corroborate the information with additional sources whenever possible. Now, that's a lot of words, a mouthful, but can we just talk through: what do you make of that, that being the explanation it generates about the ethical concerns? What does that reveal to us? I mean, it certainly seems like it's accurate, but is it just predicting what words should come next, or is it in fact something we can trust to be factual, Sharon? Well, we know we definitely cannot trust it to be factual, I think, you know,
7:55 am
based on all the misinformation that I've seen people check for. If you know the answer, if you know it's true, then yes, you can trust that. But if you're not sure, you'll want to fact-check it. And Michael, when we look to the future, what in particular excites you most? I know we talked about how it can be used artistically; is there something you're working on that you want to share with us that you think highlights that? Yeah, so as I said, I used to work in industry, formerly at Amazon Alexa, so I'm pretty familiar with the technology. But right now I am working to build automatic speech recognition for Indigenous languages, so that we can use these technologies to reclaim and revitalize language. Because 90 percent of the languages in North America, the Indigenous ones, are at risk of going extinct, and I believe these technologies, if we define the right role for them, can help keep them alive, keep them vibrant, and contribute to our ecology of thought.
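A minimal sketch of what an automatic speech recognition call looks like with the Hugging Face transformers pipeline. The English wav2vec2 checkpoint is only a stand-in: models for specific Indigenous languages would need community-collected training data, which is the work Michael describes.

from transformers import pipeline

# An off-the-shelf English ASR model (stand-in; not an Indigenous-language model).
asr = pipeline("automatic-speech-recognition",
               model="facebook/wav2vec2-base-960h")

# Transcribe a local audio file (hypothetical path).
result = asr("recording.wav")
print(result["text"])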
7:56 am
And I want to share with you a tweet from Marc Andreessen, another skeptic perhaps: AI regulation equals AI ethics equals AI safety equals AI censorship; they're the same thing. Are they, Sharon? No, I don't think they're the same thing. And I can't say I know; I'm not an expert in AI regulation or AI safety or AI ethics. But in my coverage it definitely seems like there will have to be some pushback. It can't just be totally free rein, and it won't be. So regulation is coming, legislation is coming. The question is, how can we do the best we can to make sure these tools can be used for good, for the types of use cases Michael was just talking about? Because it's definitely going to change the way we work. For example, Microsoft, just over the past couple of days, has already been reported to be adding
7:57 am
ChatGPT or similar technologies to all of its applications that so many of us use every day at work. So, you know, the Pandora's box is open; it's not going back in the box, right? Well, speaking of the Pandora's box, in our YouTube chat box we have a viewer saying: I worry about the lack of diversity in development and the weaponization of AI against underrepresented people. We also have another viewer asking: where will AI be in the next five years? Any quick answers to that very serious question, Sharon? Well, in five years I think we might not even be talking about AI; it will just be a part of our technology, a part of our lives. It'll sort of be underlying so many things, so much more so than it is already. So we may not even use the word AI anymore; maybe that will just be old school.
7:58 am
It'll probably be old school, just like I might even be old school, as we saw at the top of the show with ChatGPT writing the introduction for us, although it didn't write this ending. That's all the time we have for today, so I want to thank Sharon as well as Michael. It's a shame we couldn't have Margaret with us, but we will definitely continue to follow the latest in AI. See you next time. Inspiring stories from around the world. There's something about when we went
7:59 am
back there; I thought what we did captured human life. This film feels like the presentation of what I want people to remember me by. Groundbreaking films from award-winning filmmakers. Witness, on Al Jazeera. The 1970s was a pivotal time for cinema and theatre in the Middle East and North Africa. In the second of a two-part series, Al Jazeera World meets the creative risk-takers who broke new ground, challenged censorship, and developed their own voice in the seventies in the Arab world, on stage and screen, on Al Jazeera.
8:00 am
As you know, ours is a country with high poverty rates and inequality, and this area is not an exception. Many of the footballers in this country come from poor areas such as this one. Many of the members of Argentina's national team come from places such as this one, where the football fields do not have grass, just soil, like the one that you can see right here. We've been talking to some of the children that live in this place, and they say that they would love to follow in the steps of Lionel Messi, Di María and other members of the national team. The Pentagon is tracking what it suspects is a Chinese spy balloon floating high across US airspace.
