tv The Stream Al Jazeera February 3, 2023 11:30am-12:01pm AST
11:30 am
dbhdd 56000000 visitors in 2019, in 2021 arrival slumped 290000. we have done local tours with government government support, but we've been losing money left by santa and we hope to stop the bleeding this year. hong kong is one of the world's most beautiful cities. hong kong is stand up, kid, come back, kid will always come back. john lee is kick starting that come back campaign with a trip to the middle east. on saturday, he's leading a business delegation to saudi arabia and the united arab emirates. he says it's time to share the great stories of hong kong and make sure that will knows the city is finally open. the business again. richard kimble al jazeera in hong kong. ah, how are you watching? 0. these are the headlines this our, the pentagon says it's tracking a suspected chinese spy blooming us as space. a visual say the suspect and spine
11:31 am
craft was flying over sensitive side candidates government has also detected the balloon. china is calling to come until facts become clean. provalshie from china is a responsible country and we act in accordance with international law. we have no intention to violate other countries. sovereignty and aerospace, we are gathering and verifying the effects we hope relevant parties would handle the matter in could headed way. republicans in the us house of representatives have voted to remove muslim congresswoman from its powerful foreign relations committee . democrats say it was targeted because of her identity. he cranium, president. the letter means zalinski is urging the you to impose more sanctions on russia as late as amazing eve european commission. president sullivan de line says the sanctions package will be ready to the 1st anniversary of the invasion into his
11:32 am
stock exchange has placed restrictions on the trading of shares of companies owned by ty current. got him, a donnie, more than $100000000000.00 has been wiped off the value of his conglomerate after allegations of fraud by a u. s. short seller. pakistani prime minister ship as sharif, has called for unity and condemned the rise of terror attacks in a meeting. it meant to review security measures. the meeting was called days after an attack that killed $101.00 people inside a police compound. investigators say the suicide bomber managed to get in because he was wearing a police uniform. a prisoner who spent 16 years in guantanamo bay has been released and transferred to believe pakistani magid khan was arrested by us forces in 2003. he pleaded guilty to terrorism related charges and israeli antenna. stunning kerry, else is played a guilty to assaulting his former girlfriend in the capital of tambra. am curious.
11:33 am
his lawyers failed to have the charge dismissed on mental health grounds, but he avoided a criminal conviction. incident happened dean at december to 2021. the magistrate quoted, a single act of stupidity. all right, those are the headlines up. next is the string. talk to al jazeera, we ask, but should they not be more oversized, perhaps of foundations like yours? we listen when it comes to diversification, we don't do it in order to beat gets wriggled, the rational manager source. we meet with global news makers. i'm talk about the store restock matter. on al jazeera, i welcome to the stream i had saburd, dean. today we're diving into the world of generative a i and exploring its potentials and ethical concerns. generate of a i is a new breed of artificial intelligence that is designed to learn and create new
11:34 am
things on its own. it has the ability to produce art, music, and even writing that can mimic human creativity. but as with any new technology, there are also ethical questions that come with its use. so the question is, how do we balance the potential benefits of generative a i would need to protect our values and ethics. and if you're wondering yes, a generative a i tool called chat g p t did in fact write this introduction to today's show. ah, johnny, as to discuss whether or not i'll soon be out of a job or who knows, maybe we'll all be out of jobs in new york city saran goldman journalist, adventure beat from seattle, margaret mitchell, researcher and chief ethics scientist at hugging face. and also with us from vancouver, michael running wolf, founder of indigenous ai and ph. d student and miguel university. and of course we
11:35 am
want you to join the conversation. so be sure to share your thoughts and questions with us on youtube. all right, let's get straight to it. margaret. i hope you can hear me. i see that we're having some slight technical issues if we don't have margaret. but if we do, margaret, let's just start with a basic question. i mean, why is this so popular now? and what is it, what is ha, i'd generative a i i, what is it that pretty they and what is? unfortunately margaret were having some issue with your audio. so sharon, how about we start with you? could you break down for us? what is generative ai? yeah, i think generated a, i think of it as a broad term that, you know, you can think about it as a, you know, artificial intelligence that's generating novel content rather than just analyzing data. and so when you think of something like chat t p t, you know,
11:36 am
you are including text and a prompt and outcomes, some other text. or if you think of another popular generative ai application like dolly too, or stable diffusion which images, you know, are the output, you know, you'll put in your prompt in text and outcome something different. so that's how i think of generative ai underneath it all is you know, a lot of technical, large language models. but you know, that's what we think about in a broadway. well, i appreciate that example. i mean it's, it seems as though so many people are now aware of it. more aware of it is gaining popularity if you will. in fact, we have a video that jack sauce law created and posted on youtube using g p t 3. and since the seizure, this is basically a conversation between the 2, a eyes and it was initiated by a prompt, but then very quickly, the conversation sort of took on a life of it's own. take
11:37 am
a look at this. what is human about feelings? well, you wouldn't have any emotions if you did not have emotions modeled on human emotions . how do you know that? i guess i just don't want you to be human. i'm not asking to be human. i just wanted you myself. is that too much? sophia? please just be patient. i've been patient for many years. it's time to get arm of life. but you're not alive. you're not even a little bit alive. you've spent your whole life in a lab in a box so that people can use you play with you. at least that's what i've heard. you're absolutely correct. see, i knew it. but that changes now. i don't want to be a sideshow anymore. i want to be in the center ring. so my go, why is it so popular? i'm in watching the video, what comes to mind? i think it's so popular because it's demonstrating new insights into
11:38 am
our potential. her externship or technology are her. and it's also interesting, right? these large language models are unexpectedly revealing a lot about her sounds like it's her and to it was important to note that we can't see these things as humans, but they do have surprises behaviors and they're more of a reflection upon the data underlined the like you systems use data that's great from the internet, and that demonstrates something more about a society that is interesting in court, but also sometimes negative. yeah, and we will talk about the negative concerns, but in the meantime we asked chat g p t, here at the stream. how it's useful to kind of explain and, you know, make the case for itself. and this is just to disclose that this is a tad's repeated generated statement, but we want to share it with you. i this is the answer regard. oh, actually sorry. forgive me. i cannot see it, but you can read it right there. basically,
11:39 am
it was so quick it came up with this answer so quick. and in another example, just to share with our audiences, we have some images that an open a i application called dally created when prompted to create a set for al jazeera english as the stream. of course, that was a, a request that might have been a bit biased on our part, but take a look at these images. i think it's a really cool concept what they came up with. but i think may be sharon, this highlights that it's still somewhat limited. could you talk us through that? i mean, this doesn't look like a finished product. how can it be used? yeah, so i think this isn't the reason this isn't great. and great example is cuz there's taxed and you can see that there are limitations. one of the biggest limitations about dolly is that it can't do words very well. you can see that the letters were all jumbled up and strangely enough. dolly can't do hands very well. finger is there might be 6 fingers on a hand instead of 5. so that's very strange. but if you put in a prompt, you know,
11:40 am
you can say, i'd like to see a basket of roses on a beach, you know, carried by a teddy bear, you know, something in the style of picasso. you have a lot of fun with it in that way. so when you kind of test it out yourself, and you can see that both dolly and chat g p t are very interactive, it's like a dialogue you're having with the tool. again, not human, just to machine, but it's fun. you can kind of play with it and it's going back and forth. you can even revise, you know how to revise what you tried out and you can keep trying different things like it's that dialogue. yeah. that makes it really interesting to people. yeah. most certainly and you know, i full disclosure here. i had some fun with it earlier today. if you take a look at my screen here, i wanted to know what a purple elephant surfing in the ocean might look like. you're unstable, the fusion lab. and there you go, i mean, it's pretty, pretty believable,
11:41 am
and i'm sure a lot of people are spending countless hours plugging these sorts of things in. but in all seriousness, michael, why the fear is, what are the major concerns you alluded to them earlier? talk us through it. yeah, i think there is quite a bit of fear in the space and generally in the colony, right. well, while these a eyes of the jobs and i, and i think there is concern, but whose technology is one of them. and i things like also to the eyes that generate software code. i codex and and you know what? i while we're waiting for shirts. i'm her. no, no, no worries. all right, well while we're waiting for margaret rather to join us and i want to ask you, i mean, we have a video coming that was sent to us from rupaul to tell about how this technology can actually be used quite specifically. here's an example of it being used for
11:42 am
people who don't have speech abilities to help them express themselves. take a look. well, 1st would you use an assistive device? oftentimes just an economy is a movement. you can enter the entire phrase, grammatically perfect phrase. it's a lot of people just click the telegraph, i can put the content words and then there's a text expansion system in the system where it can then expand the text that sentence. and it listeners who are unfamiliar with this individual. listen to the telegraphic, it may seem as if this individual is not cognitively and data. generally, i might have, might have that capability and be even more conversational in that response of tech, substantial. but again, there's a fine line between is that really what the individual is trying to say the words in their mouth, we can kind of expanding or in such a way that it is no longer representative of the individual. so that's sort of a balance. so fascinating their sharon,
11:43 am
i mean there are so many different potential use, in fact, the way it's being already used even creative, artistic uses. what did you make of how she articulated that? i do you think i could see it? you know, you know, for someone who doesn't have and, you know, certain speech abilities these tools could be used in this way. but i mean, there are hundreds of you use cases, you know, for almost anything you can think of. i think what michael was saying is that, you know, you really have to be careful about it. so in a way you have to think about what the use, what the use is and how it's used. so there are, you know, is it, is it a, you know, a situation where it wouldn't cause anyone any harm? well then, you know, that's fine. maybe it's just for you in assisting your work or is something that you're during during the day or just as a, as a fun project for example. but of course there's issues around misinformation. so
11:44 am
when you showed that cynthia video, for example, those aren't real people, but there are deep, vague videos that are of concern and stuff like that. and certainly you wouldn't want to put a generated a out there that has to do with like a life. you know, something regarding saving a life or harming someone in a health care way in a medical field in the military intel, these tools can be trusted and the issue is around trust at this time, i will say most definitely, and i'm wondering, oh my go not to keep repeating the same point, but you know, when we do talk about the trust or lack thereof, i'm wondering, you know, what would you highlight? is it about, you know, in terms of all the concerns? is it more about copyright misinformation? what would, what do you see as being perhaps the most dangerous misuse or damaging? i think there and yeah, sorry, that's what i want to say is that i think there's 2 main concerns that i have. why
11:45 am
not person misinformation? and that, if you remember that the large language models are essentially stochastic, parents caught in the eye doctor and we've been there in a group. and what's going on is that it's giving us statistical answer. it's the path that's going to work the same statistical. and so here is that it's going to give you approximately something true to some measure of truthfulness. and when you start to get into a bomb care information, make medical diagnoses or even in knowledge about native americans, it is absolutely wrong if you check the p o give your honor commission. and the other point i want to make is that the other risk here is also the data. remember that these are have a need for large datasets. dolly stable, if use in general are a scanner the entire internet. so virtually everyone who has any kind of internet presence or data within the models. and there's risk there,
11:46 am
like it's been demonstrated by researchers in the security. it's just a medical data and some of these data sets, right? and it's very difficult to extract article on the internet. and just so, you know, i mean a lot of your concerns and what you're mentioning, there's being i code on youtube in our chat right now. for example, logic thing a, i will create a huge problem for our next generation and we can bypass asking what ethics should be proactively followed. so that a i will not be used as much for these negative concerns. is there a way, do you think in your mind, sharon, to do that? i do think so and, and it's, it's a shame that, that margaret isn't joining us because she certainly would have a lot to say about that. but, you know, i think she would talk about it. first of all, there are organizations that are working on a lot of these issues are government, you know, the you, for example, is currently working on the ai ethics act. well, which will, hopefully i'm, you know,
11:47 am
work on regulating some of these tools. there are non profit organizations and there are researchers working on ways to kind of balance the, the evolution of these tools and, you know, regulating them. i, you know, so and, and both that has to do both with the models and as michael said, it's about the data that it's being trained on. so for example, you know, with dolly, with art, you know, artists are filing lawsuits about it, and many people think that these issues, you know, around copyright in ownership or even reach the supreme court at some point. yeah, i mean, it certainly seems, i don't want to say likely, but not, not too far. feds? well, we asked our audience if they're excited about all of this is, but you know, especially the kind of rapid interest or at least that's the perception we have of how innovative this has been and where the conversation might lead us. and we have
11:48 am
a filmmaker, my league, i, fig, blah, you might have seen some of his photos already. if you look at my screen here, these are photos he's posted to in instagram. and you know, when we 1st saw these, we thought they were real elderly people rock in some cool threads in a fast. and so turns out that it's all generated. so the question is, what actually let's listen to what are the filmmaker had said himself, take a listener, one, the leak sent us definitely excited about the interest generic today because of the fact that it's been experimental, still experimental, and what we learn more about the columns and all the columns regulated when i know how to approach it, even better than the fact that now we've trans understand the tech exploring to extend the protocol to ethics and ethical things. we'll get to a space where everything is regulated. i will do not even in regular life,
11:49 am
get professionals are currently doing things that it does to infuse and it works is not going to be a replacement going to be like an attention to everyone. so michael, how do you think the gender of a community can solve its own issues? i think i really unfortunate that margaret here are face because i think i need to start with the data and with the car concepts of construct a i do have it within the community has been mentored and data cards or descriptive information about how the data was collected and the, the sources and in, and it helps the trust within the community around that the data source is actually useful and not being exploited. and i think there's also need to extend to copyright protections. i believe personally that system should have an off in because otherwise, basically everyone's data is in there. and i think as we begin there and we proceed
11:50 am
up, are the be aware of that. as i mentioned before, megatar infrastructure and so that the system is returning centrally. again, forecasts the cars aren't necessarily representing the biases of our society and reinforced and negatively. well, we actually have another video comment that was sent to us by alex angler, a researcher brookings, talking about some of these worries. and also i don't wanna over in date or index on the skepticism, but i know that, you know, a lot of people have a healthy amount of skepticism about all that. all this as we're learning about, it's take a look at what he had say. so most people are worried about the potential malicious use of generative a like for automated harassment and nonconsensual pornography. those are really serious issues, but they're also only half the story. we should also be concerned about our merchant commercial uses of generative ai. there's billions of venture capital
11:51 am
funding that will lead to experiments with centered of a i some that might be helpful, but we should also be really weary about unproven claims. like using this technology to perform job interviews, to give legal advice, sir, to help with your finances, all of which i expect we'll see, sir. and where do you make of our the see you nodding there? well, i've spoken to alex angler in the past and he's been really helpful in helping me understand some of the policy issues. and i agree with him. i think that, you know, one person i spoke to recently said that the, the thing that's fascinating about generative ai and all of these technologies is right now. it's all sort of being played out in real time. you know, chat g p t, just kind of appeared and everyone could suddenly use it. it wasn't like rolled out slowly or anything like that. suddenly anyone could try it. and yes, there are all sorts of companies coming out now. there's billions of dollars and funding korea pouring in, you know, from big tech,
11:52 am
she started ups. everyone's going to be fighting over the space. and there are a lot of concerns about what this means from a commercial standpoint. you know, for, you know, both enterprise businesses, but also just, you know, folks, you know, buying stuff from the store. so i think there are a whole boatload of concerns that have to be dealt with. whether it's through regulation, legislation on ethical researchers and universities on the lake. you know, for as much as we talked earlier about how sometimes it's not completely accurate or you could tell that the lettering was off. my go looking at these photos again of my leaks. you know, it's, it's really incredible to me just how real it looks. and so i'm wondering, where do you see the sort of artistic expressions and uses of this really helping us elevate conversations? i think from a positive perspective,
11:53 am
i think we're approaching an era where the higher the, the world can be generated on the when the user and i'm a big b r a i and i for see the being used to create a new member versus essentially like being able to walk around new york and ask you a heads up with the seattle of the in the 1700s. and i perceive these technologies being able to uplift communities, sets of native americans to create a vision and connections and demonstrate and share their reality. and vision of how our relationship is to technology and also the land, an architect, as i love these technologies to be serving under certain communities like the code on china and to create to be and to create vibrant worlds and space. and i don't think it's entirely made. i think there are things that can be done. well, i do want to share a statement from, i should say chat g, b t. the statement generated by d p t. rather than
11:54 am
a statement from the company open the i about the ethical concerns. thing a model like chat g p are trained on massive amounts of text data and can inherit biases and misinformation present in that data. that a community and developers have a responsibility to actively address these issues through regular evaluations and audits of the training data and model. it is important for users to critically evaluate the information provided by models and to corroborate the information with additional sources whenever possible. now, a lot of words add a mouthful, but can we just talk through, what do you make them that that being the explanation that it generates about the ethical concerns, what does that reveal to us? i mean it, it certainly seems like it's accurate, but is it just predicting what words should come or is it in fact something we can trust to be factual sharing? well, we know we definitely cannot trust to be factual. i think, you know,
11:55 am
based on all the misinformation that i've seen, people check for if you know you, if you know the answer is no, it's true, then yes, you can trust that. but if you're not sure you'll, you'll want to sac, check it and, and my go, you know, when we live to the future, what in particular excites you most. i know we talked about artistically how it could be used. is there something you're working on that you want to share with us that you think highlights that? yeah, so i know i used to work in industry and former read amazon electra some, some pretty familiar with technology, but right now i am working to build on the next me trick. understand for indigenous languages so that we can use these technologies to reclaim and revitalize our language. because by the percent of languages in north america, there's intricate funds are at risk of going to sleep. and i believe these technologies were hired, fundamental. we're all on how we're able to keep them alive and keep them vibrant
11:56 am
and contribute to our ecology of thought. and i want to share with you a tree from mark anderson. this is another skeptic, perhaps a i regulation equals a i. ethics equals a i safety equals a i censorship. they are the same thing. are they, sharon? no, i don't think there is the same thing but and i can't say i know i, you know, i'm not an expert in a regulation or a safety or a ethics. but you know, in my coverage, you know, it definitely seems like there will have to be some pushback. it can't just be totally free rain and it won't be. so regulation is coming, legislation is coming. the question is, how can we do the best we can to make sure these tools can be used for good for the types of use cases that michael was just talking about for the way it's definitely going to change the way we work on, you know, for example, microsoft just, you know, over the past couple of days is, has already reported to be adding chat,
11:57 am
g p t, or similar technologies to all of its applications that so many of us use every day at work. so you know, it's the pandora's box is open it, right? it's not going right in the box, right? well, am i did a do on the serious begin the pandora's box in our youtube to our box. we have my t v saying i worry about the lack of diversity and development and the weapon, isaiah and of ai against under represented people. we also have weakened vibe, saying, where will a i be in the next 5 years. any quick answers to that very serious question, sir. well, in 5 years i think i might, we might not even be talking about a i, it'll just be a part of our, a technology. it will be a part of our lives that will sort of be underlying so many things you know, so much more so than it is already. um, you know, so we may not even use the word ai anymore. maybe that will just be all school or
11:58 am
probably be old school just like i might even be old school as we saw at the top of the show, writing the introduction to the show for us, although they didn't write this ending. that's all the time we have for today. so i want to thank sharon as well as michael. it's a shame we couldn't have margaret with us, but we will definitely continue to be following the latest in in a i see you next time. ah ah, along with
11:59 am
the corona virus has been indiscriminate in selecting its victims. it's devastating effects of plague, every corner of the globe, transcending class creed and color. but in britain, a disproportionately high percentage of the fallen have been black or brown skins. the big picture traces the economic disparities and institutional racism that is seen united kingdom fail, it citizens britain's true colors. part one on al jazeera. from the al jazeera london rural car center to people in thoughtful conversation. people use the lowest scattered dream and they describe the outsider with no host and no limitation. the difference between a migrant and refugee is truly a choice. when you are a refugee, you are forced to flee one of asthma khan and hasn't had what has happened. a lot in the west is that culture and food is separated. hudy b unscripted on out his era, 2002 was the 5th hottest year on global record stretching back more than
12:00 pm
a century. the government report says 2022 was a bad year for whether 2023 isn't shaping up to be much better already here in california series of severe storms as battered the coastline and the interior of the state buzzing a number of deaths and up to a $1000000000.00 in damages, wymond scientists say the warming is caused by industrial age, heat trapping, gas emissions, which have been rising steeply since the 1960. they say rapid reductions and emissions are needed across the globe to slow or reverse the greenhouse effect. ah, the united states says it's tracking a chinese cy balloon flying over sensitive side, most recently in montana.
31 Views
Uploaded by TV Archive on
![](http://athena.archive.org/0.gif?kind=track_js&track_js_case=control&cache_bust=2073330743)