
tv   France 24  LINKTV  April 4, 2023 5:30am-6:01am PDT

5:30 am
cyril: is artificial intelligence spiraling out of control? today, we explore the dramatic rise of ai development. with tech giants advocating a pause, can we strike a balance between progress and possible threats? this is "inside story." ♪ welcome to the program. as large language models like gpt-4 unleash unprecedented capabilities, generating eerily human-like responses, they also
5:31 am
have the potential to create unforeseen threats. ai's rapid rise sparks urgent questions about our ability to manage it. i'm cyril vanier, and those were not my words. none of them. we asked artificial intelligence to write the introduction of this program. our point: ai has just gotten a whole lot smarter than it used to be, maybe even too smart. and that's why some tech leaders, including elon musk and apple co-founder steve wozniak, are calling on all ai labs to "immediately pause for at least six months the training of ai systems more powerful than gpt-4." now, this open letter was in response to openai's recent release of gpt-4, a more advanced successor to its widely used ai chatbot, chatgpt. the letter goes on to say powerful ai systems should be developed only once we are confident that their effects will be positive and that their risks will be manageable. this pause should be public and verifiable and include all key actors. if such a pause cannot be enacted quickly, governments
5:32 am
should step in and institute a moratorium. ai research and development should be refocused on making today's powerful state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal. ♪ so, our guests today: in vancouver, gary marcus, an emeritus professor at new york university and the founder of the startups robust.ai and geometric intelligence, the latter acquired by uber. in edinburgh, atoosa kasirzadeh, director of research at the university of edinburgh's center for technomoral futures. and in los angeles, ramesh srinivasan, a professor at the university of california, los angeles, and founder of the digital cultures lab research group. gary, let's go to you first. you signed this open letter. tell us why. >> perfect, let me start with that. i don't actually expect there to be a moratorium, but i think the situation is urgent, and i think the letter has gotten this on
5:33 am
everybody's agenda, and that's terrific. i don't actually think machines are too smart. i think they're too unreliable and untrustworthy. they make up stuff all the time. somebody just committed suicide after their dialogue with a chatbot. i think there are all kinds of dangers here. maybe they'll be too smart eventually. i don't think they are now, but they're already being widely adopted. there's almost no regulation, and the corporates are fairly irresponsible about it, in my view. and so, this is a perfect storm. and so, even though i don't expect a pause per se, i think that we really need to look at what we're doing here. cyril: atoosa, you also signed the letter. >> yeah, because i think this letter demonstrates something really important: that many scientists and innovators are now really fearful about what generative ai models can do, and they see that they have failed to listen to the warnings of activists and regulators over the years. and that kind of confession of their fear is, for me, absolutely valuable.
5:34 am
the actions they suggest taking seem bizarre to me, and i think they're highly unlikely to materialize. i actually argue that it is practically impossible to see the materialization of what they say, given the national and global political and economic arrangements in the world. but the good thing is that now we can have good conversations with these innovators and scientists, talking about possibilities for a coordinated, collaborative effort, because i think that's really what we need if we want to regulate and govern these systems in a useful way. cyril: ramesh, do you see all the ai labs and governments and militaries around the world that are already using this, do you see everybody just pausing this for a moment, for six months? >> well, i support the idea of the moratorium, but i don't think that that's likely to be followed. and i also agree with what gary just said, that what we have with these systems is not actual artificial intelligence. we have behavioral mimicry,
5:35 am
right? these systems are not intelligent. they don't reason. they don't think. they don't process. they don't feel. this is more about using large amounts of data to generate patterns based on historical data sources. so it's very, very different. and i think in general, it's not merely about a moratorium on ai; i think it's about a large-scale, economy- and democracy-focused notion of regulation of all the big tech, data-surveillance-oriented companies. cyril: ramesh, what difference does it make whether it's actual intelligence in the way that we're capable of, or whether it just produces the same effects? because when i'm chatting with chatgpt, i can't really tell the difference. >> that's true, because it's mimicking human behavior, but it tends to mimic mass cultural patterns. it lacks any creativity. it lacks any sense of morality whatsoever. you can basically teach the system that the world is flat or the world is round, and it will do either. so, in a sense, what's actually being sacrificed is what actually makes us human.
5:36 am
so the cause of ai, and i speak as a former ai developer, was really about trying to understand human intelligence and trying to build machines that could do things in that way. and as humans, we're highly creative, but we don't know exactly why we're creative. we have the capacity to be irrational at times. these systems might "hallucinate," as they call it; they basically kind of short-circuit from having too much content. but at the same time, these systems are actually flattening human creativity and human intelligence, so in that sense, they're actually deeply destructive, in my mind, to the capacity to learn and grow, let alone to questions of diversity, democracy, and the economics associated with this, which i think are extremely important to discuss. cyril: okay, so, gary, you have founded two ai companies, right, and so you have worked with this technology. before we launch into how dangerous it could be, whether it needs to be stopped, how does it actually work?
5:37 am
because i think a lot of us are really just discovering this in the last few weeks. it's old news to the ai community, but most of us are not part of the ai community. >> i think the first thing people have to realize is that ai is not magic. it's a set of tools, and each of those tools has strengths and weaknesses. it would be like talking about a toolkit as if the whole thing does everything you need. you really need to understand particular tools and where you use them. there are lots of species of ai out there. for example, classical symbolic ai has been around for 75 years; it powers the route planning in your gps system. it has nothing to do with the neural networks that are popular right now. the new systems are called neural networks. there are actually lots of kinds of neural networks, and they have their strengths and their weaknesses. they're very good at pattern recognition. they're not very good, as the other guests said, at reasoning; they're not very reliable; they're not very good at truth. so they make stuff up all the time. they can be exploited by bad actors into making up crazy conspiracy theories.
5:38 am
but the system that is most dominant now, the way it works is essentially by mimicry, as suggested there. they absorb a lot of data and they try to match the patterns, but they don't have a detailed understanding of what they're talking about, so they make things up, like saying that elon musk died in a car crash in 2018 when he didn't really, or a biography of me by one of the recent systems that made up a title for my book, made up quotes about me, and made mistakes about what the argument in that book was, and so forth. so one of the problems here is they're just not trustworthy. when they start to give people medical advice, for example, some of that advice is probably not going to be good. but it's going to look authoritative, as if it were true and knowledgeable and so forth. cyril: if i can jump in, isn't what you're saying that they still make mistakes, but this technology is still fairly new, and like any technology, it gets refined, right, it gets improved upon? >> well, in some ways, it's actually getting worse.
5:39 am
so, for example, it's apparently getting worse, according to a newsguard study released earlier this week, at discriminating truth from falsehood. so it's getting more and more plausible; if you read it, it seems more and more true. but it's not actually getting more and more true. and i think, for technical reasons (if you give me the time, i can go into the fundamental architecture), it's just not good at facts. it's good at mimicry of large statistical patterns, but not at representing individual facts about the world. it's not really getting better at that. cyril: okay. >> in fact, if i could just say one more thing, it's really important: we did not call for a ban on all of artificial intelligence. we called for a ban on one specific thing, and encouraged, as you said in your opening statement but a lot of people missed, researching other things, like how to make this stuff trustworthy and reliable. so we're not saying don't do any ai at all, which is how almost everybody is interpreting it; if you actually read the letter, it's only one specific project, which is making a more and more plausible but uncontrollable beast that could be used for all kinds of bad purposes.
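to make the "mimicry" point concrete, here is a minimal python sketch. it is a toy, hypothetical illustration, not the architecture of gpt-4 or any real system; the corpus, the mimic function, and its sample output are all invented for this example. a bigram generator continues text purely by sampling which word followed the previous one in its training data, so its output can sound fluent while being completely unconstrained by facts, which is the failure mode described above.

    import random
    from collections import defaultdict

    # toy training text: pure word statistics, no model of the world
    corpus = ("the pope wore a white puffer jacket . "
              "the pope met the press . "
              "elon musk survived a car crash . "
              "elon musk wore a jacket .").split()

    # record which words follow each word in the data
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def mimic(start, length=8):
        """continue text by pattern-matching alone: fluency, not truth."""
        out = [start]
        for _ in range(length):
            options = follows.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

    # can emit plausible-looking falsehoods such as "elon musk wore a car crash ."
    print(mimic("elon"))

nothing in that loop checks whether a generated sentence is true; statistical continuation and factual grounding are simply different operations, which is one way to see why greater plausibility alone does not make such a system more truthful.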
5:40 am
cyril: atoosa, if the technology is not actually intelligent and is just mimicry, and maybe it doesn't deserve the wow factor that first-time users find in it, then why is it so urgent for us to regulate it, to put rules and norms around it, and to have watchdogs around it? >> sure. i think one of the biggest problems, and to me it sounds like the most important problem, is that these technologies are very good at producing information that is not anchored to any conception of truth. and so, they are really good at producing misinformation and disinformation, and very good at facilitating the spread of different kinds of propaganda and conspiracy theories. and so, you can imagine there are many malicious actors in the world that can use these systems in order to create and craft stories that sound plausible and
5:41 am
interesting for communities of people. and basically, these systems can prevent any kind of useful, interesting democratic conversation between groups of people, because they can propagate propaganda, as i said. of course, these systems can also be used for really interesting things. if you want to learn about quantum mechanics and you have no idea about it, you can go and have a chat with the systems, and many of those conversations are low stakes, right? but many of the conversations pertain to our religious beliefs, political beliefs, and the notion of truth. when we go to the realm of politics and history and religion, truth is a very complicated, completely different kind of notion compared to when we are talking about whether a table exists in this room or not. and because these systems, as the previous speakers mentioned, are really good at mimicking humans, they can mimic political information, political statements that some humans have
5:42 am
given, and so they can kind of be used to enforce or reinforce certain kinds of biases. again, these systems are trained on some specific data, and you can take the general models and fine-tune them on specific kinds of text, so you can easily imagine how many misuses of these free systems there can be. cyril: let's put one of those misuses up on screen. we've got pictures of the pope, and this went viral over the last few days, so i'm pretty sure you've all seen them: pictures of the pope in a long, white, extravagant puffer jacket, and people thought he had so much swag and he was so fashionable. this is completely made up. it's not his puffer jacket. i don't even know if the jacket actually exists in real life. but the point is that this was not a photograph; this was generated by ai. we can show you another picture: donald trump being arrested in a scene, it looks like he's being arrested maybe at the foot of trump tower in new york. it's unclear, but
5:43 am
the visual, i'm not even going to call it a picture, looks very believable. so you talk about truth; those are just some early examples of things that, in just the last few days, people have seen and believed, and it turned out they were generated by ai. ramesh, what happens if we find ourselves in a world where we don't know if we can trust the words we hear and the pictures we see? >> it's deeply destructive, and i think it's also important to note that the basis of all of these systems, the way they work, is all of our personal data, right? so we live in a world where our personal data, without us even really being aware of it, is being collected indiscriminately by third parties, corporations, and states pretty much all the time. fast computers and essentially infinitely cheap cloud-based storage are what create these sorts of systems, and they use patterns and correlation to generate truths and false
5:44 am
truths. and one thing we've learned quite clearly with many of our big tech platforms, especially our social media platforms, is that they don't order the world in any sort of rational or neutral way, if that were even to exist; they tend to prioritize content that will grab our attention, hijack our dopamine and our cortisol, and sensationalize us. cyril: let me turn to gary. gary, i want to put to you the doomsday scenario that i'm sure you're already aware of, because it's making the rounds in the ai community. and this was one that was detailed by eliezer yudkowsky. he's widely regarded as a founder in the field of alignment. alignment, kind of the buzzword in ai, for viewers, involves aligning the robots with human interests and values. he's a leading researcher in that field, and he wrote an article saying this: "many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart ai
5:45 am
under anything remotely like the current circumstances is that literally everyone on earth will die. not as in maybe possibly some remote chance, but as in that is the obvious thing that would happen. the moratorium on new large training runs needs to be indefinite and worldwide. there can be no exceptions, including for governments or militaries." how do you feel about that, that everybody would die if we let ai development continue unchecked? >> i think we have no idea. i think that's overstated. i think he's suggesting it as a 100% probability, and it's a much smaller probability, but it's not zero. we don't have -- cyril: wait, let me pause for a second. you're saying the probability that everybody would die as a result of ai is not zero, in your view. >> there's a chance that that could happen, but it's not zero. i'm going to explain my answer. what we have seen in recent weeks is that the corporates
5:46 am
don't really care that we have no governance in place, that we don't really know what these systems are capable of. and they're only going to get more powerful. one example that probably a lot of people saw is that chatgpt basically lied, not literally so, because it has no intentions, but it tricked a human user into doing a captcha for it by claiming to be a person with a visual impairment, when it was actually a machine. and so that's an example of the way that these systems can actually trick humans into doing things. i don't think that the terminator scenario is the right one to worry about, where machines develop some kind of will and want to take us over. but i do think that bad actors can use these things in ways that we just have no idea of, and new things are happening every day. the systems are getting more powerful. and so, that's why i favor at least a little bit of a pause to get a handle on what's going on here, especially in the recognition that governments don't yet know what to do and that the corporates are not really being super responsible about it.
5:47 am
i mean, imagine that this technology were 100,000 times more efficient, more powerful, more sensible, and a bad actor got hold of it. what would that mean? we don't actually understand. so i don't think, as a scientist, i can say the probability is zero. as a scientist, i can say it's unlikely; a lot of things would have to come together in different ways. but i can't say, here is a guarantee that it's not a possible outcome, especially given the political climate that i'm seeing. cyril: ramesh, can artificial intelligence ever become sentient? >> these types of ai systems absolutely are not sentient on any level, in the ways in which humans operate. they can fool us, they can dupe us, they can act like human beings, but in reality, what they're actually doing is just patterning themselves based on stuff that's being collected about us, you know, based on surveillance of us, all the while exploiting our personal data and hijacking a digital economy that's more
5:48 am
unequal, especially on a global level. and when these systems sort of go off the rails, their makers hire people in kenya at one dollar an hour to try to clean it up, just like facebook did with content moderators in the philippines. so, let's move completely away from this sensationalizing, catastrophizing, blue-pill, red-pill nonsense that i read in the new york times this week, and let's see these systems for what they are: obfuscating human creativity, dumbing things down, and essentially exploiting a data economy that's highly unequal and unjust. cyril: okay, so, what's the worst-case scenario? what's the reasonably likely worst-case scenario that you're looking at and that you hope can be averted, if we're going to just set the doomsday scenario aside, because that appears to be out there. far out there. what do you think is the likely thing we need to prepare for
5:49 am
that would happen if we don't have guardrails in place? >> we first need to kind of clean up our conception of existential risk. i think this conception of existential risk coming out of a "terminator" scenario is, in my view, just a fictitious story. and what happens is that we do face existential risk, but the reason that we face existential risk is that we have this gradual kind of ignoring of all of the ethical and social and political and economic risks that have been there. so gradually, we are developing better and better systems that can do things that erode our cognitive abilities and that kind of put us in a position to ask questions about what it means to be human. and that is where we face existential risk. so this gradual accumulation of ethical, social, and political risk brings us to the existential
5:50 am
dangers. so if you buy into this kind of analysis, and that's a position that i defend and work on, then we need to think about a multiplicity and plurality of ways to tame these systems. but not just the systems; basically, the companies and the institutions that give rise to the production of these systems. and we need to make a reconfiguration of economic and political arrangements, mainly in the united states but also in other parts of the world; i say the united states because many of these generative ai models that have captured the public's imagination around the world are coming from there. cyril: sorry, what does that look like, to reconfigure political and economic arrangements? what do you mean concretely? >> i mean that it should not really be the case that one country in the world, or a specific part of a country, decides to develop these systems and then releases them for free, and basically says these systems are going to benefit all
5:51 am
of humanity, without having a concrete sense of what humanity is, what all of humanity is, and what it means to benefit all of humanity. cyril: so do you mean that it should be government controlled, then? >> yeah, i think there should be government control, but there should also be meaningful global coordination. it shouldn't be the case that just some people from different countries come together, talk, and then go; there should be some global coordination that materializes some of these concerns and safeguards the development of these systems. this narrative that developing these systems in the name of innovation is great for the whole world, and let's make ai systems that benefit all of humanity, that kind of narrative should be stopped. and i think all of us need to react and reflect on this narrative and propose a different dream, a different imagination that we want to move towards, and then these ai systems can become useful for humanity. cyril: so, gary, atoosa did mention a word that's been conspicuously absent from this conversation so far, which is innovation. and again, you created two ai-based companies, so you must
5:52 am
believe on some pretty fundamental level that this innovation can also be used for good, right, or at the very least in ways that are not harmful to humanity. what are perhaps the opportunities that it presents? i mean, give us some hope. >> i think that we can have hope that we'll build better ai. i think there are some positive uses of this particular technology that we have right now, these large language models. but i think that large language models are fraught with risk. i don't think they're going to kill us all. you know, i said not zero, but i don't think it's very likely. but i do think it's likely that it's going to have some pretty dystopian effects on the fabric of society. the misinformation: the thing that didn't really get emphasized is the vast quantity in which this is going to happen. we're going to see it at much greater scale than we've ever seen before. we're going to see cybercrime at a new scale that we haven't seen before. i think the risks we should be
5:53 am
focusing on, at least for now, are near-term, but they're immense. yes, there is some value in these current systems. they can, for example, help computer programmers program faster. but i'm not sure that's worth the risk. at the same time, i'm not saying stifle innovation. i'm saying the innovation should be around how to make these systems truthful, ethical, trustworthy, and so on: how to make them into stable systems that we can use. right now, they're like bulls in a china shop. they're powerful. they're reckless. we don't have any real control over them. we should be striving towards an ai that we control, rather than one that's unpredictable and dangerous, which is kind of what we have now. so i'm not saying every innovation is a good one. i'm saying innovation can be a good thing. we need to innovate something different than this particular technology, or we need to innovate better guardrails around the technology. that might be okay, too. we might have to invent new policies around misinformation, new tools to detect it, in the way that we have tools to detect spam. so, there are lots of things we might innovate on.
5:54 am
but this one technology has problems. we shouldn't just assume it's the right one. we've had 15 different tools in the history of ai. this one happens to do some things well, and there's a lot of money behind it. that doesn't mean it's the right innovation. we do need innovation, but maybe not this one. cyril: so, in the spirit, obviously, of testing the tool and the system, i asked it to generate some questions to ask you guys. it did come up with a few good ones: if you could create ai to tackle one global crisis, which would you choose? let me go to ramesh for this one. >> tackle one global crisis with ai. cyril: what global crisis, and how would it do it? >> i would ask the ai system to recognize what the components are of the economy that is supporting such an ai system. here are the two major
5:55 am
challenges that i see that are very interconnected with this technology. one is the global economics associated with it. we live in a sort of digital economy that's becoming more and more feudal by the moment, where personal data is being harvested for corporate valuation, sometimes not even associated with profitability. ai, what do you want to do about that? and what about the climate concerns as well? recognizing, as the previous guest mentioned, that we have to have global dialogues about these technologies to ensure that they actually support the global community. because the only way we're going to deal with climate issues is by figuring out how we are going to share the earth and repair the earth. and that involves a type of global dialogue, using many different voices around the world, that is the opposite of what is occurring with these sorts of systems.
5:56 am
i think those are critical issues that have to be connected to every major technology platform; first and foremost, we need to think about economic welfare and democratic welfare. cyril: fix its own business model and think about the environment. thank you. atoosa, i'm going to give you 40 seconds to tackle this one last ai-generated question: how do you envision the role of ai in shaping the future of humanity? good luck. >> i would like the ai to first learn about humanity, and in order to learn about humanity, the ai needs to go and talk to, or learn from, many different communities: their perspectives, what they want, and how they see the world. and i think the training data that these systems receive, if they are going to be about different parts of the world, should include rich and deep narratives of people from all
5:57 am
around the world. and then, i would say, after you get that conception of humanity, we can continue and talk about this question. cyril: amazing. thank you so much for your answer. i will let chatgpt know that it got some good answers to its questions. thank you to all of you: gary marcus, atoosa kasirzadeh, and ramesh srinivasan. thank you so much for joining us on "inside story" today. you can see the program again anytime by visiting our website, aljazeera.com. and for further discussion, go to our facebook page, facebook.com/ajinsidestory. you can also join the conversation on twitter; our handle is @ajinsidestory. from me, cyril vanier, and the entire team here in doha, bye for now. ♪
"@@4zñ ■
