Inside Story | Al Jazeera | April 1, 2023, 3:30am-4:01am AST
3:30 am
Tuesday, a big drop in temperatures as it clouds over, and that rain does set in. Rain setting in across some parts of the Philippines over the next few hours, so some lively showers just pushing back in, with the possibility of some localized flooding: the usual seasonal downpours around the equator. Indonesia, meanwhile, sees more of the wet weather over the next few days. Now, we have seen some very wet weather recently across northern parts of India, particularly towards the northeast, and that orange weather warning is in force here, pushing out across West Bengal, easing into Bangladesh and the far northeast of India. Still some showery rain just peppering the far north of the region around the foothills of the Himalayas, and still unsettled for northern parts of Pakistan.

It's a billion-dollar money laundering operation. The gold mafia is bigger than any one government, with financial institutions, regulators and governments complicit.
3:31 am
In a four-part series, Al Jazeera's Investigative Unit goes undercover in southern Africa... Gold Mafia, part two, on Al Jazeera.

Is artificial intelligence spiralling out of control? Today we explore the dramatic rise of AI development, with tech leaders advocating a pause. Can we strike a balance between progress and possible threats? This is Inside Story.

Welcome to the program. As large language models like ChatGPT-4 unleash unprecedented capabilities, generating eerily human-like responses, they also have the potential to create
3:32 am
unforeseen threats. AI's rapid rise sparks urgent questions about our ability to manage it.

And those were not my words, none of them. We asked artificial intelligence to write the introduction to this program. Our point: AI has just got a whole lot smarter than it used to be, maybe even too smart. And that's why some tech leaders, including Elon Musk and Apple co-founder Steve Wozniak, are calling on all AI labs to, quote, "immediately pause for at least six months the training of AI systems more powerful than GPT-4." This open letter was a response to OpenAI's recent release of GPT-4, the more advanced successor to its widely used chatbot, ChatGPT. The letter goes on to say: "Powerful AI systems should be developed only once we are confident that their effects will be positive and that their risks will be manageable. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium... AI research and development should be refocused on making today's
3:33 am
powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal."

So, our guests today: in Vancouver, Gary Marcus, an emeritus professor at New York University and the founder of Robust.AI and Geometric Intelligence, a startup acquired by Uber. In Edinburgh, Atoosa Kasirzadeh, director of research at the University of Edinburgh's Centre for Technomoral Futures. And in Los Angeles, Ramesh Srinivasan, a professor at the University of California and founder of the Digital Cultures Lab research group.

Gary, let's go to you first. You signed this open letter. Tell us why.

Let me start with that. I don't actually expect there to be a moratorium, but I think the situation is urgent, and I think the letter has gotten this onto everybody's agenda, and that's terrific. I don't actually think machines are too
3:34 am
smart; I think they're too unreliable, untrustworthy, and they make stuff up all the time. Somebody just committed suicide after a dialogue with a chatbot. But I think there are all kinds of dangers here. Maybe they'll be too smart eventually. I don't think they are now, but they're already being widely adopted, there's almost no regulation, and the corporations are fairly irresponsible about it. And so this is a perfect storm. And so even though I don't expect a pause per se, I think that we really need to look at what we're doing here.

Atoosa, you also signed the letter.

Yeah, I think this letter demonstrates something really important: that many scientists and innovators are now really fearful about what generative AI models can do, and they see that they have to listen to the warnings that AI ethicists and regulators have been giving over the years. And that kind of confession, for me, is absolutely valuable. But the actions they suggest taking seem to me bizarre, and I don't think they're highly likely to be materialized. I actually argue that it's
3:35 am
practically impossible to see the materialization of what they say, given the national and global political and economic arrangements in the world. But the good thing is that now we can have good conversations with innovators and scientists, talking about possibilities for coordinated, collaborative efforts, because I think that's really what we need if we want to regulate and govern these systems in a useful way.

Ramesh, do you see all the AI labs and governments and militaries around the world that are using this already, do you see everybody just pausing this for six months?

Well, I support the idea of a moratorium, but I don't think that it's likely to be followed. And I also agree with what Gary just said: that what we have with these systems is not actual artificial intelligence. We have behavioral mimicry, right? These systems are not intelligent. They don't reason, they don't process,
3:36 am
they don't feel. This is more about using large amounts of data to generate patterns based on historical data sources. So it's very, very different. And I think, in general, it's not merely about a moratorium on AI. It's about a large-scale, economy- and democracy-focused notion of regulation of all the big tech, data-service-oriented companies.

Ramesh, what difference does it make whether it's actual intelligence in the way that we're capable of, or whether it produces the same effects? Because when I'm chatting with ChatGPT, I can't really tell the difference.

That's true, because it's mimicking human behavior. But it tends to mimic mass cultural patterns. It lacks any creativity. It lacks any sense of morality whatsoever. You can basically teach the system that the world is flat or that the world is round, and it will do either. So in a sense, what's actually being sacrificed is what actually makes us human. The cause of AI was really about trying to understand, and I speak as a former AI developer,
3:37 am
human intelligence, and to build machines that could do things in that way. And as humans, we're highly creative, but we don't know exactly why we're creative; we have the capacity to be irrational at times. These systems might hallucinate, because they basically kind of short-circuit by having too much content, which is what they call hallucinating, right? But at the same time, the systems are actually flattening human creativity and human intelligence. So in that sense, they're actually deeply destructive, in my mind, to the capacity to learn and grow, let alone to questions of diversity, democracy, and the economics associated with this, which I think are extremely important to discuss.

OK, so Gary, you have founded two AI companies, right? So you have worked with this technology. Before we launch into how dangerous it could be, or whether it needs to be stopped: how does it actually work? Because I think a lot of us are just discovering this in the last few weeks. It's old news
3:38 am
to the AI community, but most of us are not part of the AI community.

I think the first thing people have to realize is that AI is not magic. It's a set of tools, and each of those tools has strengths and weaknesses. It would be like talking about a toolkit as if the whole thing does everything you need; you really need to understand particular tools and where you use them. There are lots of species of AI out there. For example, classical symbolic AI, which has been around for 75 years, powers the route planning in your GPS system. It has nothing to do with the neural networks that are popular right now. The new systems, called neural networks, and there are actually lots of kinds of neural networks, have their own strengths and weaknesses. They're very good at image recognition. They're not very good, as the other guests said, at reasoning, and they're not very reliable. They're not very good at truth, so they make stuff up all the time. They can be exploited by bad actors to make, excuse me, crazy conspiracy theories.

OK, but the way the system that is most talked about now,
3:39 am
the way it works is essentially by mimicry, as was suggested there. They absorb a lot of data and they try to match the patterns, but they don't have a detailed understanding of what they're talking about. So they make things up, like saying that Elon Musk died in a car crash in 2018, when he didn't, really. Or a biography of me by Bard, one of the recent systems, that made up a title for my book, made up quotes about me, made mistakes about what the argument in that book was, and so forth. So one of the problems here is they're just not trustworthy. When they start to give people medical advice, for example, some of that advice is probably not going to be good, but it's going to look authoritative, as if it were true and knowledgeable, and so forth.

But Gary, if I can jump in: isn't what you're saying there that they still make mistakes? This technology is still fairly new, and like any technology, it gets refined, right? It gets improved upon.

Well, some of this is actually getting worse. So, for example, it is apparently getting worse, according to
3:40 am
a NewsGuard study that was released earlier this week, at discriminating truth from falsehood. So it's getting more and more plausible; if you read it, it seems more and more true. But it's not actually getting more and more true. And I think, for technical reasons that I can go into if you give me the time, the fundamental architecture is just not good at facts. It's good at mimicry of large statistical patterns, but not at representing individual facts about the world, and it's not really getting better at that. In fact, if I could just say one more thing that is really, really important: we did not call for a ban on all of artificial intelligence. We called for a ban on one specific thing, and encouraged, as you said in your opening statement, but a lot of people missed it, research on other things, like how to make this stuff trustworthy and reliable. So we're not saying don't do any AI at all, which is how almost everybody is interpreting it. If you actually read the letter, it's only one specific project, which is making a more and more plausible but uncontrollable beast that could be used for all kinds of bad purposes.

Atoosa, if the technology is not actually intelligent,
3:41 am
and it's just mimicry, and it maybe doesn't have the wow factor that perhaps first-time users will find in it, then why is it so urgent for us to regulate it, and to put rules and norms around it, and have watchdogs around it?

I think one of the biggest problems, and to me it sounds like the most important problem, is that these technologies are very good at producing information that doesn't answer to any conception of truth. So they are really good at producing misinformation and disinformation, and very good at disseminating different kinds of propaganda and conspiracy theories. And so you can imagine there are many malicious actors in the world that can use these systems in order to create stories that sound plausible and interesting to communities of people. And
3:42 am
basically, they can prevent any kind of useful, interesting, democratic conversation between groups of people, because they can prey on that. So they can be used for really interesting things: if you want to learn about quantum mechanics and you have no idea about it, you can go and have a chat with these systems, and many of those conversations are low-risk, right? But many of the conversations pertain to our religious beliefs and political beliefs, and the notion of truth, when we go to the realm of politics and history and religion, is a completely different notion compared to when we are talking about the truth of the existence of a table in this room or not. And because these systems, as the previous speakers mentioned, are really good at mimicry, they can mimic political information, a political statement that someone must have given. And so they can be used
3:43 am
in order to enforce, reinforce, some kinds of biases. Again, these systems are trained on some specific data. They can be fine-tuned; the general models can be fine-tuned on specific kinds of tasks. And so you can just easily imagine the misuse of these systems.

Then let's put one of those misuses up on screen. We've got pictures of the Pope, and this went viral over the last few days, so I'm pretty sure you've all seen them: pictures of the Pope in a long, white, extravagant puffer jacket. And people thought he had so much swag and he was so fashionable. This is completely made up. It's not his puffer jacket; I don't even know if the jacket actually exists in real life. But the point is that this was not a photograph. This was generated by AI. We can show you another picture: Donald Trump being arrested, or seeming, it looks like, to be arrested, maybe at the foot of Trump Tower in New York; it's unclear. But the point is that the visual, and I'm not even going to call it a picture,
3:44 am
looks very believable. So you talk about truth: those are just some early-day examples of things that, just in the last few days, people have believed they have seen, and actually it turned out they were generated by AI. Ramesh, what happens if we find ourselves in a world where we don't know if we can trust the words we hear and the pictures we see?

It's deeply destructive. And I think that it's also important to note the basis of all of these systems: the way these systems work is based on all of our personal data, right? So we live in a world where our personal data, without us even really being aware of it, is being collected indiscriminately by third parties, corporations, and states, pretty much all the time. Fast computers and essentially infinitely cheap, cloud-based storage are what create these sorts of systems, and they use patterns and correlation to generate truths and false truths. And one thing we've learned quite clearly with many of our big tech
3:45 am
platforms, especially our social media platforms, is they don't just order the world in any sort of rational or neutral way, if that were even to exist. They tend to prioritize content that will grab our attention, hijack our dopamine and our cortisol, and sensationalize.

Let me turn to Gary, because Gary, I want to put to you the doomsday scenario that I'm sure you're already aware of, because it's making the rounds in the AI community. And this was one that was detailed by Eliezer Yudkowsky. He's widely regarded as a founder in the field of alignment. Alignment is kind of the buzzword in AI, for those of you who are new to it; it involves aligning the robots with humans' interests and values. So he's a leading researcher in that field, and he wrote an article saying this, quote: "Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is
3:46 am
that literally everyone on Earth will die. Not as in 'maybe possibly some remote chance,' but as in 'that is the obvious thing that would happen'... The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries." How do you feel about that? Everybody would die if we let AI development continue unchecked?

I think we have no idea. I think that's overstated. I think he's suggesting it as a 100 percent probability, and it's a much smaller probability, but it's not zero. We don't have...

Wait, hang on a second. You're saying the probability that everybody would die as a result of AI is not zero, in your view?

There is a chance. I don't think it's large, but it's not zero, and I am going to explain my answer. What we have seen in recent weeks is that the corporations don't really care, that we have no governance in place,
3:47 am
that we don't really know what the systems are capable of, and they're only going to get more powerful. You know, one example that probably a lot of people saw is that GPT-4 lied, basically. Not literally so, because it has no intentions, but it tricked a human user into doing CAPTCHAs for it by saying that it was a person with a visual impairment, when it was actually a machine. And so that's an example of the way that these systems can actually trick humans into doing things. I don't think that the Terminator scenario is the right one to worry about, where machines develop some kind of will and want to take us over. But I do think that bad actors can use these things in ways that we just have no idea about. New things are happening every day, the systems are getting more powerful, and so that's why I favor at least a little bit of a pause, to get a handle on what's going on here, especially in the recognition that governments don't yet know what to do, and that the corporations are not really being super responsible about it. I mean, imagine that this technology were 100,000 times more efficient, more powerful,
3:48 am
more sensible, and a bad actor gets hold of it. What does that mean? We don't actually understand. So I don't think, as a scientist, I can say the probability is zero. As a scientist, I can say it's unlikely, a lot of things would have to come together in particular ways, but I can't say here's a guarantee that that's not a possible outcome, especially given the political climate that I'm seeing.

Ramesh, can artificial intelligence ever become sentient?

These types of AI systems are not sentient on any level in the ways in which humans operate. They can fool you, they can do tasks, they can act like human beings. But in reality, what they're actually doing is just patterning themselves based on stuff that's being collected about us, based on surveillance of us, right? All the while exploiting our personal data, hijacking a digital economy that's ever more unequal, especially on a global level. And, you know,
3:49 am
when the systems are sort of going off the rails, hiring people in Kenya at $1 an hour to try to clean it up, just like Facebook did with content moderators in the Philippines. So let's move completely away from this sensationalizing of the catastrophe, using the blue-pill, red-pill nonsense that ran in The New York Times this week, and let's read these systems for what they are: obfuscating human creativity, dumbing things down, and essentially exploiting a data economy that is highly unequal and unjust.

OK, so Atoosa, what's the worst-case scenario, the reasonably likely worst-case scenario, that you're looking at and that you hope can be averted, if we're going to just set the doomsday scenario aside, because that appears to be far out there? I mean, what do you think is the likely thing we need to prepare for, that would happen if we don't have guardrails in place?

I need to kind of clean up our conception
3:50 am
of existential risk first. I think this conception of existential risk coming out of Terminator is, in my view, just a story. What happens, the real existential risk, is that we have this gradual kind of accumulation of all of the ethical and social and political and economic risks that is happening here. So gradually you're developing better and better systems that can do things that erode our cognitive abilities, that put us in a position where we have to ask questions about what it is we need to do. And that is where we face existential risks. So this gradual accumulation of ethical, social, and political risks is what brings us to existential dangers. If you buy into this kind of analysis, and that's the position that I work on, then we need to think about
3:51 am
multiplicity and figure out ways to tame the systems. But not just the systems: the companies and the institutions that give rise to the production of these systems. And we need to make a reconfiguration of economic and political arrangements, I think mainly in the United States, but also in other parts of the world. I say the United States because many of these generative AI models that have captured the imagination of the public in the world are coming from there.

So what does that look like, to reconfigure political and economic arrangements? What do you mean, concretely?

I mean a change to this situation where one country in the world, or a specific part of one country, decides to develop these systems and then releases them for free, and they basically say the systems are going to benefit all of humanity, without having a concrete sense of: what is humanity?
3:52 am
What is all of humanity, and what does it mean to benefit all of humanity?

So do you mean it should be government-controlled, then?

I think there should be government control, but there should also be global coordination. It shouldn't be the case that people from different countries just come together and talk together and then they go; there should be some global coordination that materializes some of these concerns and safeguards into the development of the systems. This narrative of "development in the name of innovation is great for the whole world" and "let's make it benefit humanity" should be stopped, and I think all of us need to react and reflect on this narrative and propose a different dream, a different imagination, that we want to move towards. And then, as they say, AI systems can become useful for humanity.

So Gary, Atoosa mentioned a word that's been conspicuously absent from this conversation so far, which is innovation. And again, you created two AI-based companies, so you must believe on some pretty fundamental level that this
3:53 am
innovation can also be used for good, right? Or at the very least in ways that are not harmful to humanity. What are, perhaps, the opportunities that it presents? I mean, give us some hope.

I think that we can have hope that we'll build better AI. I think there are some positive uses of this particular technology that we have right now, these large language models. But I think that large language models are fraught with risk. I don't think they're going to kill us all; you know, I said not zero, but I don't think it's very likely. But I do think it's likely that it's going to have some pretty dystopian effects on the fabric of society. With misinformation, the thing that doesn't really get emphasized is the vast quantity in which this is going to happen; you'll see it at a much greater scale than we've ever seen before. We're going to see cybercrime at a new scale that we haven't seen before. I think the risks we should be focusing on, at least for now, are near-term, but they're immense. And yes, there is some value in these current systems. They can, for example,
3:54 am
help computer programmers program faster. But I'm not sure that's worth the risk. At the same time, I'm not saying stifle innovation. I'm saying the innovation should be around how to make these systems truthful, ethical, trustworthy, et cetera; how to make them stable systems that we can use. Right now, they're like bulls in a china shop: they're powerful, they're reckless, and we don't have any real control over them. We should be striving towards an AI that we control, rather than one that's unpredictable and dangerous, and that's kind of what we have now. So I'm not saying every innovation is a good one. I'm saying innovation can be a good thing. We need to innovate something different than this particular technology, or we need to innovate better guardrails around the technology; that might be OK too. We might have to invent new policies around the misinformation, new tools to detect it, in the way that we have tools to detect spam. So there are lots of things we might innovate on, but there's this idée fixe about this one tech, and this one tech has problems;
3:55 am
we shouldn't just assume it's the right one. We've had 15 different tools in the history of AI. This one happens to do some things well, and there's a lot of money behind it. That doesn't mean it's the right innovation. We do need innovation, but maybe not this.

So, in the spirit, obviously, of testing the tool and the system, I asked it to generate some questions to ask you guys, and it did come up with a few good ones. If you could create AI to tackle one global crisis, which would you choose? Let me go to Ramesh for this one. Tackle one global crisis with AI: what global crisis, and how would it do it?

I would ask the system to reckon with what the components of the economy are that are supporting such an AI system, right? So here are the two major challenges that I see, which are very interconnected
3:56 am
to this technology. One is the global economics associated with it, right? We live in a sort of digital economy that's becoming more and more feudal by the moment, where personal data is being harvested for corporate valuation; sometimes it's not even associated with profitability. What do we want to do about that? And what about the climatic concerns as well? Recognizing, as the previous guest mentioned before, that we have to have global dialogue about these technologies, to ensure that these technologies actually support a global community. Because the only way we're going to deal with climate issues is by figuring out how we're going to share the Earth and repair the Earth, and that involves the type of global dialogue, drawing in many different voices around the world, that is the opposite of what's occurring with these sorts of systems. So those are critical issues that I think have to be connected to every major
3:57 am
technology platform. First and foremost, we need to think about economic welfare and democratic welfare.

Alright: fix its own business model, and think about the environment. Ramesh, thank you. Atoosa, I'm going to be very unfair in what I'm about to do to you: I'm going to give you 40 seconds to tackle this one last AI-generated question. How do you envision the role of AI in shaping the future of humanity? Good luck.

I would like it to learn about humanity. And to learn about humanity, it needs to go and learn from many different communities and their perspectives, what they want and how they see the world. And I think the training data these systems receive, if they want to get a view about different parts of the world, should include rich narratives, the narratives of people from all around the world. And I would say, after you get that concept of humanity,
3:58 am
then we can come to it and talk about the amazing...

Thank you so much for your answer. I will let ChatGPT-4 know that I've got some good answers to its questions. Thank you, guys, all of you: Gary Marcus, Atoosa Kasirzadeh, and Ramesh Srinivasan, thank you so much for joining us on Inside Story today. And you can see the program again any time by visiting our website, aljazeera.com. And for further discussion, go to our Facebook page, that's facebook.com/AJInsideStory. You can also join the conversation on Twitter; our handle is @AJInsideStory. From me and the whole team here, goodbye for now.
3:59 am
Unprompted and uninterrupted discussions, from our London broadcast centre, on Al Jazeera...

With neither side willing to negotiate, is the Ukraine war becoming a forever war? Is America's global leadership increasingly fragile? What will US politics look like as we head to the presidential election of 2024? The quizzical look at US politics: The Bottom Line.

A landmark case has sent shockwaves around the world. "It's enormous, it's phenomenal, historical." It paved the way for the potential to penalize climate inaction. "Is this a wake-up call for the government?" "This is really something that can mark a turning point." earthrise meets the citizens using the law to hold governments and corporations to account. "If they don't want to do it by asking,
4:00 am
then let's go to the court." The Case for the Climate, on Al Jazeera.

Many of us are living with the effects of ecological breakdown. So what of the stories in which technology holds the promise of salvation for the planet? Millionaires, big tech, and an unwavering faith in innovation. Ali Rae investigates whether this techno-optimism is helping or hindering the fight against climate change. "It's a distraction, self-delusion. It is, I think, just masking over it." All Hail the Planet, episode two, on Al Jazeera.

Al Jazeera... explore an abundance of world-class programming, designed to inform, motivate, and inspire you.