LINKTV, June 5, 2023, 5:30am-6:01am PDT: "Inside Story" (Al Jazeera)
mohammed: artificial intelligence, a powerful technology that can transform human lives. but industry leaders warn that the tools they built could one day pose an existential threat to humanity. so how can governments regulate ai without stifling innovation? this is "inside story." ♪

hello, and welcome to the program. i am mohammed jamjoom. a tool that can write poems, create art, advance scientific discoveries, and even do your personal shopping. our world is experiencing an artificial intelligence revolution, and it is happening at a breathtaking pace. but the technology's capacity to perform human functions has also raised concern about the threat it poses to millions of jobs. there are also fears it has the potential to undermine democracies. governments around the world are moving quickly to set codes of conduct and possible regulations. but can they act fast enough to contain ai's risks and harness its power to be a force for good? we will get to our guests in a moment, but first, this report.

reporter: with almost every major technological leap forward comes concern about the consequences. from the early days of the world wide web in the 1990s, to the explosion of people browsing facebook in 2005, to the major social media sites that followed a decade later, there were fears about their impact on society and jobs. this time around, the rapid advancement of artificial intelligence tools such as chatgpt has caused alarm, even from the tech leaders creating them.
>> if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. we want to work with the government to prevent that from happening.

reporter: experts seem to agree that ai has the potential to both transform human lives and threaten human extinction.

>> they will be able to manipulate us into doing whatever they want. it would be like you manipulating a two-year-old. so they don't need to pull levers or press buttons. the idea of air-gapping them and everything will be ok isn't going to work.

reporter: the nonprofit center for ai safety lists some of the biggest risks. they include ai being used to develop advanced weapons, being abused by government bodies to monitor and censor citizens, and, a more immediate risk, ai-generated misinformation that could destabilize society and interfere in elections. but ai's transformative powers are also revolutionizing medicine and science, making people more efficient and improving their quality of life. researchers have used the technology to find new antibiotics and to help a paralyzed man walk once again. governments are working together on proposed regulations to mitigate the risks while harnessing the technology's benefits. the eu and the u.s. have said they are planning to release a code of conduct to develop common ai standards.

>> there is almost always a gap when new technologies emerge between the time at which those technologies emerge and have an impact on people, and the time it takes for governments and institutions to figure out how to legislate or regulate about them. and we feel the fierce urgency of now, particularly when it comes to generative ai.

>> within the next weeks, we will advance a draft of an ai code of conduct, of course also taking industry input.

reporter: artificial intelligence is developing as fast as, if not faster than, governments' attempts to regulate it. the question is, can we keep up? ♪
mohammed: for more on this, let's bring in our guests. in london, david krueger, assistant professor in machine learning and computer vision at the university of cambridge. he helped draft the statement on ai risk. in new york, sarah myers west, managing director of the ai now institute, which focuses on the social implications of artificial intelligence. and in los angeles, ramesh srinivasan, professor of media and information studies at the university of california, and the founder of the research group digital cultures lab. a warm welcome to you all, and thanks so much for joining us today on "inside story."

david, let me start with you today. a few days back, you, along with tech leaders and scientists, issued a warning about the perils posed by artificial intelligence. let me go ahead and read that statement in full. it says, mitigating the risk of extinction from ai should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. so my question is, why was this warning just one sentence, and how difficult was it to get the other signatories on board?

david: yeah, thanks for asking. there are actually two reasons i wanted to make just a simple one-sentence statement like this. one is that, in the past, there has been a lot of discussion of ai risks and a growing amount of concern about various risks of ai. but the risk of human extinction in particular, and in particular from artificial intelligence getting out of control, is one that i think there has been something of a taboo about discussing. i have spoken to a number of people in private who acknowledge that they think this is a serious concern, but were concerned about expressing it publicly because they felt that they might be viewed poorly and judged by their colleagues, and that it might also affect their job prospects.
so we wanted to break that taboo and give everyone who was feeling these kinds of concerns a place to voice them. the other reason that we decided to focus on such a short statement is that, in the past, a lot of similar statements have included a lot of extra detail, which i consider sort of unnecessary baggage, about exactly how ai might lead to extinction, or what we should do about that. and i think those are discussions that we should definitely be having, but people have a lot of different views on that, and i think we should focus on what we agree on here as a way of starting a richer discussion about those things.

mohammed: so david, if you are trying to get a conversation going around this issue, with a warning this stark, do you expect that you and your colleagues will be, at some point, issuing recommendations to go along with it?

david: no, i don't think so, necessarily. i mean, i do hope that people think about this and bring ideas to the table, but a big part of what we are trying to do is engage the rest of society -- policymakers, civil society, and everyone else who has a lot of their own expertise, knowledge, networks, and positions they can use to help address this problem, and who actually have much better ideas about what specifically should be done, in particular in terms of regulation and international cooperation, than people like me who are primarily technical researchers. just to elaborate --

mohammed: go ahead, david, go ahead.

david: this is similar to what geoffrey hinton, the turing award winner, was saying. he's just here to sound the alarm and is hoping that other people will pick up on this and try to figure out what we can do about it.

mohammed: sarah, there is obviously a lot of talk right now about the potential existential threats to humanity posed by artificial intelligence. do you believe that the ways in which these discussions and debates about these concerns are being framed are actually constructive?
sarah: i think it is helpful that we are all agreeing on the need for intervention. but i am concerned that the focus on existential threats is going to draw attention and resources towards concerns that are positioned far out in the future, even as people are being harmed by ai systems in the here and now. ai is already widely in use. it is shaping people's lives, their access to resources, their life chances -- whether they are going to receive quality health care, access to financial resources or credit, well-paying jobs -- and it impacts their education. these are the kinds of concerns that we really need to be foregrounding, and particularly addressing the ways that ai, as it is widely in use today, is functioning to ramp up patterns of inequality in society while rendering them even harder to see.

mohammed: sarah, let me follow up with you. you are talking about specific concerns in the here and now when it comes to ai. i want to ask you specifically about things like racial and gender prejudices. how much concern is there right now when it comes to ai around those specific issues?

sarah: we have decades of research already pointing to widespread challenges with gender and racial biases in tech broadly, and specifically within artificial intelligence technologies of many different sorts. so i think there is widespread concern, and more than enough evidence to be able to act on. in the united states, we have already seen some activity by regulators to this exact effect. the white house issued its executive order on racial equity, and the equal employment opportunity commission is focusing on racial and gender-based discrimination, as well as discrimination against the disability community, in the use of algorithmic systems in hiring practices. so this is already a fairly robust debate, and one that we need to continue to push forward.
mohammed: ramesh, governments are now racing to regulate ai, but what are the concrete steps that can be taken in order to ensure that creativity and innovation are not stifled?

ramesh: yeah, i am really glad you asked that. if we want to discuss concerns around extinction, i think we should probably discuss what is happening with our climate and the incredible species loss we are seeing all around the planet right now. that is happening right now, and that is imminent. as far as what is actually occurring, i think it is very important to note that the way these so-called generative ai models function is that they use our personal data -- they are trained on human data -- and they essentially identify and replicate patterns, and therefore reinforce the sorts of biases and forms of discrimination that were just being spoken about. so i think if we want to really think in a forward-looking way, we have to think in a public way, and we have to think in a global way. and as the story mentioned earlier, we see again and again technologies being introduced predominantly by private, for-profit or corporate valuation-oriented kinds of innovators, and the rest of us just have to deal with the effects of that. so, another vision would be a design-oriented vision where we have stakeholders around the world, around the planet, actually, together designing, regulating, and auditing technologies such as this, so we can ensure it has the sorts of checks and balances built into it, so that it actually serves specific purposes that help all of us, that lift our species up, rather than elevate a few of us, likely at the cost of almost everybody else.

mohammed: ramesh, you mentioned checks and balances, which leads me to my next question. the u.s. and the eu are drawing up a voluntary code of conduct for artificial intelligence.
but if this is to be effective going forward, doesn't the tech industry have to be involved? and if so, does that call upon the tech industry to regulate itself? because that is not something that has really worked out all that well in the past, has it?

ramesh: you are absolutely right, it does not work out all that well, because the technology industry's entire goal is to not be regulated, or to be regulated in a way that supports its own purposes. i think we all remember quite well when the cambridge analytica scandal broke out, when mark zuckerberg said, you know, we will just build better ai, whatever that means. we will just make our technologies more efficient or more innovative -- i am paraphrasing there -- whatever that means. so, when you hear these calls for regulation, i am not trying to be inherently cynical, but we have to understand that when sam altman and openai say, hey, we want regulation, that is a great thing for them, because the regulatory apparatus that they would be developing with the state, if it were even to be collaborative, would likely be one that suits them and places them in a top dog position. certainly, we should absolutely not take any bait that the tech industry or the generative ai industry -- which is a small set of players, actually; it is some sort of oligopoly, i could say -- would regulate itself. we should not fall for that bait. there are plenty of scholars out there, including my two colleagues here on the show, who know plenty about ai and who -- if our advice had power or teeth behind it -- could push certain types of regulations that would actually be more innovative, because the benefits of these technologies could then flow to all of us, and to our planet.

mohammed: sarah, i saw you nodding along to a lot of what ramesh was saying there. it looked like you wanted to jump in, so go ahead.

sarah: yeah, i wanted to underscore what ramesh said, both about the value of having a conversation that really foregrounds the public, and about the environmental harms of generative ai systems.
i think one thing that is really key to remember is that there is nothing about the future of artificial intelligence that is inevitable, and we have a number of examples to point to where public involvement or public pushback has effectively shaped the trajectory of ai. the one that struck me as ramesh was talking was in the netherlands, where local communities were upset about the environmental impact of the construction of data centers that were drawing on their local groundwater supplies, both reducing the amount of clean water and polluting the water. and as a result, the netherlands instituted a temporary pause on the construction of data centers for hyperscalers, the kinds of companies that are building generative ai. so it is really important to remember that that kind of pushback is very effective in shaping the way that this technology works, and in ensuring that it is accountable and responsive to the public interest.

mohammed: sarah, just to break this down a little bit for our viewers, because some of this can get pretty complex, when you are talking about generative ai systems, how is that differentiated from other ai systems that we are all using in our daily lives right now?

sarah: it is a great question, and i think it is important to always ground discussions about ai in material reality. ai as a field has been around for over 70 years, almost 80 years, and it has meant very different things over the course of that history. what we are talking about with ai in the present day is a set of data-centric technologies that rely on massive pools of data, look for patterns within them, and run on large-scale computational power. and these are resources that only the largest tech companies really have access to. now, generative ai functions within that same definitional space.
it works in roughly the same way. but instead of looking for patterns within text to, say, recommend a decision, it uses those patterns to replicate patterns of human speech, or to generate images that look similar to other images within the data set. so, it is the same basic principle, only it is about the creation of text that, again, mimics the way that we talk. it doesn't have the deeper contextual understanding.

mohammed: all right, let's just take a step back for a moment and look at some of the other concerns being raised about ai right now. investment bank goldman sachs says ai could affect 300 million jobs, but it may also mean a productivity boom, increasing the total value of goods and services produced globally by 7%. the rapid growth of this technology is raising concern. according to a reuters/ipsos poll, 61% of americans believe it could threaten civilization. so, ramesh, let me start with you here. the technology's capacity to perform human functions is raising concern about the threat that it poses to millions of jobs -- what industries in particular, what jobs in particular are most at risk right now?

ramesh: many, many industries. almost any industry, when we are talking about openai, gpt, and generative ai models -- we are talking about almost any job. and i want to really think about workers and people here in this discussion, and people around the planet whose work relies upon drafting, or authoring, or writing-oriented jobs, or service-oriented jobs. so you can imagine call center workers, content moderation workers, administrative work, legal assistant work, how this might affect the insurance industry, and so on. so, there are material, specific, very direct, clear and present challenges -- let's call them that -- issues that are facing us right now.
so we really need to think, as this technology rolls out, about how workers are going to be protected around the world, and about the macroeconomics associated with this. because already, we live in a world where eight people hold wealth equivalent to that of four billion, and many of those eight are connected to the vectors of new technology. so i think it is extremely important that we think about what the jobs of the future are going to look like, and how the global economy is going to be shaped by this. i think it is worth noting also that the content moderators who are working on gpt, in nairobi, kenya, are being paid pennies on the dollar, so to speak. this is the same pattern the tech industry has followed for many years -- for example, with the exploitative, ptsd-inducing content moderation work in the philippines and other places that was connected to facebook, which you all have reported on.

mohammed: david, this reuters/ipsos poll i mentioned a few minutes ago, it says that 61% of americans believe ai could threaten civilization. from your perspective, why has the level of alarm around ai been growing so much these past few months?

david: i think the main reason it has been growing is progress in ai capabilities, specifically with chatgpt and gpt-4. historically, a lot of researchers were willing to dismiss these risks as too far off to worry about -- which, by the way, i think is a big mistake, and i wish that we had taken climate change seriously when the consequences were decades off, for instance. and i think we are at risk of making the same mistake here. but i think a lot of researchers recently saw the progress that has been made by scaling up existing methods and decided that, actually, maybe powerful ai systems -- that is, systems that are smarter than people and able to take control if they, for some reason, were to try to do that -- might be coming in a matter of years or decades. and so, it's something that we urgently need to work on now. i think another factor is probably the race to deploy these systems despite the known issues that the other guests have mentioned, and a sort of failure of responsibility on the part of the large developers in big tech.
and so i think a lot of people were hoping and expecting that this technology would both progress more slowly and be deployed more responsibly.

i actually wanted to respond to a few things that came up earlier as well. i think the conversation we've had here is fairly characteristic of how this conversation has gone in the past, where we are talking about a bunch of different risks, and nobody is really addressing the elephant in the room, which is this extinction risk. and i think i have given my reason for that: people have said maybe it's too far off, or it just does not seem that plausible. but what we are seeing now is a growing number of researchers, including some of the most prominent ai scientists -- and not just big tech ceos, but over 100 ai professors such as myself -- saying that this is a serious concern, and that, in fact, it is a priority to start working on it now. so, we need to plan ahead, even if we think these risks are years or decades away. and i don't view this as something that needs to compete for attention with addressing the present-day risks. i don't think this is a zero-sum game for attention. i think we all want regulation. we have to have discussions about what kind of regulation, but i think the more focus the rest of society has on ai and its impacts -- both the present impacts and the future ones, anticipating what could be coming down the line -- the better. and i think it also benefits everyone who's working on any of these risks to have society more clued in and paying more attention.

mohammed: so, david, if we're talking about the need for regulation, and, as you said, the extinction risk that should be addressed, we know that there are lots of governments right now that are trying to figure out ways to regulate. we know that the eu is currently at the forefront of all of this. they are trying to enact what they are calling the ai act.
they are hoping to get that passed by the end of the year, but it wouldn't go into effect for two to three years from now at the earliest. so, how much concern is there that ai, which is progressing at this breathtaking pace, is developing faster than it can be controlled, and at a much quicker pace than the discussions that are going on right now to try to regulate it?

david: i think there is a lot of concern. coming back to my point about how we need to start now, i have said in other interviews that this letter is overdue, and i think, really, it is a shame and reflects poorly on the machine learning research community that we were not having this discussion earlier. so, even if you don't think that advanced ai -- sometimes called agi, for artificial general intelligence, a hypothetical future system that can do everything people can do -- might be a risk, and i think most people can understand why there is at least some concern there, you still owe it to the public to communicate that when we talk about this being far away, we are still talking about a matter of decades in a lot of cases, and that is something that could happen within our lifetimes, even for people who think it is far off. and i think this is going to be an incredibly transformative technology. what we have seen so far with chatbots, and with all the deployment that ai has seen in a number of sectors, is really just scratching the surface of where this technology could go. i think most people in the field understand this. i am not sure why they have not been communicating it to the public more in the past, but i think it is great that we are now seeing that happen more.

mohammed: sarah, if we're talking about how this will be regulated going forward, you are obviously going to need international cooperation in order to regulate ai. i want to ask you about how difficult that is going to be.
because countries that are developing legislation are going to need to cooperate with each other, including with countries that they are in competition with. one example, of course, is the u.s. and china. i mean, are these countries going to be able to work together to do this?

sarah: well, i think what has been helpful is that regulators around the globe are standing up and paying attention, and also that they recognize that seeing greater accountability within the tech industry is in the national interest. there has been a rise in attention to industrial policy for the tech industry, and to putting accountability at the center. now, there are ongoing dialogues between regulators that have been happening for quite some time around tech-related issues. we are seeing different rulings take place -- microsoft/activision being one case in point, where the eu came down differently from the u.k. but i think what is important is that there's a global consensus that regulation of the tech industry is in the broad public interest, and ongoing dialogues that are facilitating that kind of conversation moving forward.

mohammed: ramesh, it looked to me in the last couple of minutes like you wanted to jump in, so please go ahead.

ramesh: yeah, i just want to raise two major issues. first of all, i want to be very clear that these systems -- and when i speak of these systems, i mean openai's and other sorts of generative ai systems -- are not intelligent in the way humans are, despite the fact that they can mimic human intelligence and, in some cases, sort of fool us into thinking that they are intelligent. they are not creative, and they are not necessarily associated with meaning making, and those two questions i think are very, very important to think about today. the other issue i want to mention is, when we talk about extinction in relation to these technologies, i think it is very, very important we actually look at the scenarios by which those concerns are actually valid.
otherwise, if we focus on an alarmist frame rather than looking at the specific ways in which these technologies are threatening aspects of our lives around this planet, we block our ability to take the type of aggressive action that is needed right now. so i think we are all on the same page that action is needed now. the question is, what are the harms and risks, and what are the ways in which we can innovate forward so that not only the industry is advancing, but all of our lives are advancing as well in relation to these technologies?

mohammed: david, we have less than a minute, but i know you wanted to get a point in there, so please go ahead.

david: yeah, yeah. so, first of all, if anyone wants to claim these systems are not creative, i just encourage listeners to go out and play with them themselves. you can get them to generate all sorts of interesting things, so i think they certainly show some form of creativity. i don't know if there is some mystical notion of creativity or meaning making beyond that which they are lacking, but i don't think that they require anything like that to pose a risk. and second of all, on this point about focusing on concrete risks: the problem is that we are talking about risks that are coming years down the line that we want to start preparing for now. i don't have a crystal ball, so i can't say exactly what form it is going to take. but i think we do have to worry that ai systems are going to get smarter than people, and that is going to mean that we lose our ability to control them, and that's going to mean that we might go extinct. so what do we do about that now? well, i think we should maybe be less focused on innovating and more focused on what we can do to control the development of more powerful ai systems, and on how we can use the systems and capabilities we already have for socially beneficial things, instead of just racing to make ai smarter and smarter, which a lot of the field is still focused on doing.

mohammed: all right. we have run out of time, so we are going to have to leave the conversation there. thanks so much to all of our guests, david krueger, sarah myers west, and ramesh srinivasan. and thank you, too, for watching. you can see the program again any time by visiting our website, aljazeera.com. and for further discussion, go to our facebook page, that's facebook.com/ajinsidestory.