Inside Story, Al Jazeera, June 2, 2023, 10:30am-11:01am AST
10:30 am
Reviewing the headlines and dissecting what they say; exposing how the media is used to shape the story; how political powers can suppress free speech; how we are under constant surveillance: The Listening Post, your guide to the media, on Al Jazeera. Artificial intelligence: a powerful technology that can transform human lives. But industry leaders warn that the tools they have built could one day pose an existential threat to humanity. So how can governments regulate AI without stifling innovation? This is Inside Story.
10:31 am
Hello and welcome to the program, I'm Mohammed Jamjoom. A tool that can write poems, create art, advance scientific discoveries, and even do your personal shopping: our world is experiencing an artificial intelligence revolution, and it's happening at a breathtaking pace. But the technology's capacity to perform human functions has also raised concern about the threat it poses to millions of jobs. There are also fears it has the potential to undermine democracies. Governments around the world are moving quickly to set codes of conduct and possible regulations. But can they act fast enough to contain AI's risks and harness its power to be a force for good? We'll get to our guests in a moment, but first this report from Alex Baird. With almost every major technological leap forward comes concern about the consequences: from the early days of the World Wide Web in the nineties, to the explosion of people browsing Facebook and other major social media sites a decade later, there were fears about the impact on
10:32 am
society and jobs. This time around, the rapid advancement of artificial intelligence tools such as ChatGPT has caused alarm, even from the tech leaders creating them. "If this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening." Experts seem to agree that it has the potential both to transform human lives and to threaten human extinction. "They'll be able to manipulate us into doing whatever they want; it will be like a human manipulating a two-year-old. So the idea that we can just pull the plug on them isn't going to work." The non-profit Center for AI Safety lists some of the biggest risks, which include AI being used to develop advanced weapons, being abused by government bodies to monitor and censor citizens, and, a more immediate risk, the generation of misinformation that could destabilize
10:33 am
society and interfere in elections. But AI's transformative power is also revolutionizing medicine and science, making people more efficient and improving their quality of life. Researchers have used the technology to find new antibiotics, and to help a paralyzed man walk once again. Governments are now trying to mitigate the risks, with proposed regulations to harness its benefits. The EU and the US have said they are planning to release a code of conduct to develop common AI standards. "There's almost always a gap when new technologies emerge, between the time in which those technologies emerge and have an impact on people, and the time it takes for governments and institutions to figure out how to legislate or regulate around them. And we feel the fierce urgency of now, particularly when it comes to generative AI. Within the next weeks we will advance a draft of an AI code of conduct, of course
10:34 am
also taking in the industry's input." Artificial intelligence is developing as fast, if not faster, than governments, it seems, can regulate. The question is: can we keep up? Alex Baird, Inside Story. All right, for more on this let's bring in our guests. In London, David Krueger, assistant professor in machine learning and computer vision at the University of Cambridge; he helped draft the statement on AI risk. In New York, Sarah Myers West, managing director of the AI Now Institute, which focuses on the social implications of artificial intelligence. And in Los Angeles, Ramesh Srinivasan, professor of information studies at the University of California and founder of the research group Digital Cultures Lab. A warm welcome to you all, and thanks so much for joining us today on Inside Story. David, let me start with you today. A few days back you, along with tech leaders and scientists, issued
10:35 am
a warning about the perils posed by artificial intelligence. Let me go ahead and read that statement in full. It says: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." So my question is, why was this warning just one sentence, and how difficult was it to get the other signatories on board? Yeah, thanks for asking. So there are actually two reasons I wanted to make just a simple one-sentence statement like this. One is that in the past there has been a lot of discussion of AI risk and a growing amount of concern about various risks of AI, but the risk of human extinction in particular, and in particular from artificial intelligence sort of getting out of control, is one that I think there has been something of a taboo about discussing. So I've spoken to a number of people in private who acknowledge that they think this is
10:36 am
a serious concern, but who were worried about expressing it publicly because they felt that they might be viewed poorly and judged by their colleagues, and that it might also affect their job prospects. So we wanted to break that taboo and give everyone who was feeling these kinds of concerns a place to voice them. The other reason we decided to focus on such a short statement is that in the past, a lot of similar letters or statements have included a lot of extra detail, which I consider sort of excess baggage, about exactly how AI might lead to extinction or what we should do about it. And I think, you know, those are discussions that we should definitely be having, but people have a lot of different views on that, and I think we should focus on what we agree on here, as a way of starting a richer discussion about those things. So David, if you are trying to get a conversation going around this issue with a warning,
10:37 am
to start, do you expect that you and your colleagues will at some point be issuing recommendations to go along with it? You know, I don't think so, necessarily. I mean, I do hope that people, you know, think about this and bring ideas to the table. But a big part of what we're trying to do, I think, is engage the rest of society: policymakers, civil society, and everyone else who has, you know, a lot of their own expertise and knowledge and networks and connections that they can use to help address this problem, and who might actually have much better ideas about what specifically should be done, in particular in terms of regulation and international cooperation, than people like me, who are primarily technical researchers. So Sarah, there is obviously a lot of... Yep, go ahead. Go ahead, David. This is similar to what Geoffrey Hinton, the Turing Award winner, has been saying: you know, he's just here to sound the alarm, and he's hoping that, you know, other people will pick up on this and try and figure out what we can do about it. Sarah, there is obviously a lot of talk right now about the potential
10:38 am
existential threats to humanity posed by artificial intelligence. Do you believe that the ways in which these discussions and debates about these concerns are being framed are actually constructive? What I think is helpful is that, you know, we're all agreeing on the need for intervention. But I am concerned that the focus on existential risks is going to draw attention and resources toward concerns that are positioned far out in the future. People are being harmed by AI systems in the here and now. AI is already widely in use; it's shaping people's lives, their access to resources, their life chances: whether they're going to receive quality health care, access to financial resources or credit, or well-paying jobs, and it impacts their education. These are the kinds of concerns that we really need to be foregrounding, and particularly interrogating the way that AI, as it's widely in use today, is functioning to ramp up patterns of inequality in society while rendering them
10:39 am
even harder to see. So let me follow up with you. You're talking about specific concerns in the here and now when it comes to AI; I want to ask you specifically about things like racial and gender prejudices. How much concern is there right now when it comes to AI around those specific issues? We have decades of research already pointing to widespread challenges with gender and racial biases in tech broadly, and specifically within artificial intelligence technologies of many different sorts. So I think that there is widespread concern, and, you know, more than enough evidence to be able to act on. And in the United States, we've already seen some activity by regulators on this exact front. You know, the White House just issued an executive order on racial equity, and the Equal Employment Opportunity Commission is focusing on racial and gender-based
10:40 am
discrimination, as well as discrimination against the disability community, in the use of algorithmic systems in hiring practices. So this is already a fairly robust debate, and one that we need to continue to push forward. Ramesh, governments are now racing to regulate AI. But what are the concrete steps that can be taken in order to ensure that creativity and innovation are not stifled? Yeah, I'm really glad you asked that. I think if we want to discuss concerns around extinction, we should probably discuss what's happening with our climate and the incredible species loss we're seeing all around the planet right now. That's happening right now, and that's imminent, as far as what is actually occurring. I think it's very important to note that the way these so-called generative AI models function is that they use our personal data. They're trained on human data, and they essentially replicate the patterns they identify, and therefore reinforce the
10:41 am
sorts of biases and forms of discrimination that were just being spoken about. So I think if we want to really think in a forward-looking way, we have to think in a public way, and we have to think in a global way. And as Sarah mentioned earlier, we see again and again technologies being introduced predominantly by private, for-profit, or corporate-valuation-oriented kinds of innovators, and the rest of us just have to deal with the effects of that. So another vision would be a design-oriented vision, where we have stakeholders around the world, around the planet, actually together designing, regulating, and auditing technologies such as this, so we can ensure they have the sorts of checks and balances built into them, so they actually serve specific purposes that lift all of us, our species, up, rather than elevate a few of us, likely at the cost of almost everybody else. Ramesh, you mentioned
10:42 am
checks and balances, which leads me to my next question. The US and the EU are drawing up a voluntary code of conduct for artificial intelligence. But if this is to be effective going forward, doesn't the tech industry have to be involved? And if so, does that call upon the tech industry to regulate itself? Because that's not something that's really worked out all that well in the past, has it? You're absolutely right, it doesn't work out all that well, because the technology industry's entire goal is to not be regulated, or to be regulated in a way that supports its own purposes. I think we all remember quite well, when the Cambridge Analytica scandal broke out, Mark Zuckerberg saying, you know, we'll just build better AI, whatever that means; you know, we'll just make our technologies more efficient or more innovative. I'm paraphrasing there, whatever that means. So when you hear these calls for regulation, I'm not trying to be inherently cynical, but we have to understand that when
10:43 am
a Sam Altman and OpenAI say, hey, we want regulation, that's a great thing for them, because the regulatory apparatus they would be developing with the state, if it were even to be collaborative, would be one that would likely suit them and place them in, like, a top-dog position. So we should absolutely not take any bait that the tech industry, or the sort of generative AI industry, which is a small set of players, actually a sort of oligopoly you could say, would regulate itself. We should not fall for that bait. There are plenty of scholars out there, including my two colleagues here on the show, who know plenty about AI and who, if we actually had power or some teeth in our advice, could actually push certain types of regulation that would actually be more innovative, because the benefits of these technologies could benefit all of us and benefit our planet. Sarah, I
10:44 am
saw you nodding along to a lot of what Ramesh was saying; it looks like you want to jump in, so go ahead. Yeah, I wanted to underscore what Ramesh said about the value of having a conversation that really foregrounds the public harms, as well as the environmental harms, of generative AI systems. I think one thing that's really key to remember is that there's nothing about the future of artificial intelligence that is inevitable, and we have a number of examples to point to where public involvement or public pushback has effectively shaped the trajectory of AI. The one that sort of struck me as Ramesh was talking was in the Netherlands, where, you know, local communities were upset about the environmental impact of the construction of data centers that were drawing on their local groundwater supplies, reducing the amount of clean water, including drinking water, as a result. The Netherlands then instituted a temporary pause on the construction of data centers for hyperscalers,
10:45 am
the kinds of companies that are building generative AI. So I think it's really important to remember that that kind of pushback is very effective in shaping the way this technology works, and in ensuring that it's accountable and responsive to the public interest. Sarah, just to break this down a little bit for our viewers, because some of this can get pretty complex: when you're talking about generative AI systems, how are they differentiated from the other AI systems that we're all using in our daily lives right now? It's a great question, and I think it's important to always sort of ground discussions about AI in their material reality. AI as a field has been around for over 70 years, almost 80 years, and it's meant very different things over the course of that history. What we're talking about with AI in the present day is data-centric technologies that rely on massive pools of data, look for patterns within them, and run on large-scale computational power,
10:46 am
and these are resources that only the largest tech companies really have access to. Now, generative AI functions within that same definitional space, in that it works in roughly the same way. But instead of looking for patterns within text to, say, recommend a decision, it's using those patterns to replicate patterns of human speech, or to generate images that look similar to other images within the dataset. So it's the same basic principle, only it's about the creation of, say, text that, you know, mimics the way that we talk; it doesn't have the deeper contextual understanding. All right, let's just take a step back for a moment and look at some of the other concerns being raised about AI right now. Investment bank Goldman Sachs says AI could affect 300 million jobs, but it may also mean a productivity boom, increasing the total value of goods and services produced globally by 7 percent. The rapid growth of this technology is raising concern: according to
10:47 am
a Reuters/Ipsos poll, 61 percent of Americans believe it could threaten civilization. So Ramesh, let me start with you here. The technology's capacity to perform human functions is raising concern about the threat it poses to millions of jobs. Which industries in particular, which jobs in particular, are most at risk right now? It's many, many industries. If we're talking about the OpenAI GPT generative AI models, we're talking about almost any job. And I want to really think about workers and people here in this discussion, people around the planet whose work relies upon drafting or authoring, sort of writing-oriented jobs, or service-oriented jobs. So you can imagine call center workers, content moderation workers, administrative work, legal assistant work, how this might affect the insurance industry, and so on. So,
10:48 am
you know, there are very specific, very direct, present, you know, clear and present challenges, let's call them, facing us right now. So we really need to think, as this technology rolls out, about how workers are going to be protected around the world, and about the macroeconomics associated with this. Because already, you know, we live in a world where eight people have wealth equivalent to that of four billion, and many of those are connected to the vectors of new technology. So I think it's extremely important that we think about what the jobs of the future are going to look like, and how the economy globally is going to be shaped by this. It's worth noting also that the content moderators who were working on GPT were paid, you know, pennies on the dollar, so to speak, in Nairobi, Kenya. And this is the same pattern the tech industry has followed for many years, for example with exploitative, PTSD-inducing content moderation work in the Philippines and other places
10:49 am
that were connected to Facebook, which you all reported on. David, this Reuters/Ipsos poll that I mentioned a few minutes ago, which says 61 percent of Americans believe AI could threaten civilization: from your perspective, why has the level of alarm around AI been growing so much these past few months? I think the main reason it's been growing is because of progress in AI capabilities, specifically with ChatGPT and GPT-4. So I think historically a lot of researchers were willing to dismiss these risks as too far off to worry about, which, by the way, I think is a big mistake. I wish that, you know, we had been taking climate change seriously when the consequences were decades off, for instance, and I think we're at risk of making the same mistake here. But I think a lot of researchers recently saw the progress that has been made by scaling up existing methods, and decided that actually maybe powerful AI systems, systems that are smarter than people and able to take control if they for some
10:50 am
reason were to try to do that, might be coming in a matter of years or decades, and so it's something that we really need to work on now. I think another factor is probably just looking at the race to deploy these systems despite the known issues that the other guests have mentioned, and a sort of failure of responsibility on the part of the large developers and big tech companies. So I think a lot of people were hoping and expecting that this technology would both progress more slowly and also be deployed more responsibly. I actually want to respond to a few things that came up earlier as well. I think the conversation we've had here is fairly characteristic of how this conversation has gone in the past, where, you know, we're talking about a bunch of different risks and nobody is really addressing, you know, the elephant in the room, which is the extinction risk. And I've given my reason for that, which is, I think people have said, you know, maybe it's too far off or it just doesn't seem plausible. But what we're seeing now
10:51 am
is that a growing number of researchers, including some of the most prominent AI scientists, and not just big tech CEOs but over a hundred AI professors such as myself, are saying this is a serious concern, and in fact it's a priority to start working on it now. So we need to plan ahead even if we think these risks are years or decades away. And I don't view this as something that needs to compete for attention with addressing the present-day risks; I don't think this is a zero-sum game for attention. I think we all want regulation, and then discussions about what kind of regulation. And I think actually the more focus the rest of society has on AI and its impacts, both the present impacts and the future ones, anticipating what could be coming down the line, the better. And I think that also benefits everyone who's working on any of these risks, to have society more clued in and paying more attention. So David, if we're talking about the need for regulation, and, as you said, the extinction risk that should be addressed: we know that there are lots of
10:52 am
governments right now that are trying to figure out ways to regulate. We know that the EU is currently at the forefront of all this; they're trying to enact what they're calling the AI Act. They're hoping to get that passed by the end of the year, but it wouldn't go into effect for at least two to three years from now, at the earliest. So how much concern is there that AI, which is progressing at this breathtaking pace, is developing faster than it can be controlled, and that it's developing at a much quicker pace than the discussions that are going on right now to try to regulate it? I think there's a lot of concern. That's, I guess, coming back to my point about how we need to start now. I mean, I've said in other interviews that I think this letter is overdue, and I think it's a real shame, and reflects poorly on the machine learning research community, that we weren't having this discussion earlier. So even if you don't think that
10:53 am
advanced AI, sometimes called AGI, for artificial general intelligence, which is a hypothetical future system that can do everything that people can do; even if you don't see why that might be a risk, and I think most people can understand why there is at least some concern there, but if you don't think it's a risk, I think you still owe it to the public to communicate that. When we're talking about this being far away, we're still talking about a matter of decades in a lot of cases, and that's something that could be happening within our lifetimes, even for people who think it's far off. And I think this is going to be an incredibly transformative technology, and what we've seen so far with chatbots, and with all of the deployment of AI we've seen in a number of sectors, is really just scratching the surface of where this technology could go. And most people in the field understand this, and I'm not sure why they haven't been communicating this to the public more in the past, but I think it's great that we're now seeing that happen more. Sarah, if we're
10:54 am
talking about how this will be regulated going forward, you're obviously going to need international cooperation in order to regulate AI. I want to ask you about how difficult that's going to be, because countries that are developing legislation are going to need to cooperate with each other, and they're going to need to cooperate with countries that they are in competition with. One example, of course, is the US and China. I mean, are these countries going to be able to work together to do this? Well, I think what's been helpful is that regulators around the globe are standing up and paying attention, and also that they recognize that, you know, seeing greater accountability within the tech industry is in the national interest. There's been a rise in attention to industrial policy for the tech industry that puts accountability at the center. Now, there are ongoing dialogues between regulators that have been happening for quite some time around tech-related issues. We are seeing different
10:55 am
rulings take place, Microsoft-Activision being one case in point, where the EU came down differently from the UK. But I think what's important is that there's, you know, a global consensus that regulation of the tech industry is in the broad public interest, and, you know, ongoing dialogues that are facilitating that kind of conversation and moving it forward. Ramesh, it looked to me over the last couple of minutes like you wanted to jump in, so please go ahead. Yeah, I mean, I just think there are two major issues. First of all, I just want to be very clear that these systems, and when I speak of these systems I mean OpenAI's and other sorts of generative AI systems, are not intelligent in the way humans are, despite the fact that they can mimic human intelligence and in some cases sort of fool us into thinking that they are intelligent. They're not creative, and they're not necessarily associated with meaning-making, and those two points, I think, are very, very important to think about today. I think the other issue I want to mention is,
10:56 am
when we talk about sort of extinction in relation to these technologies, I think it's very, very important that we actually look at the scenarios by which those concerns are actually really valid. Otherwise, if we focus on a sort of alarmist frame, rather than looking at the specific ways in which these technologies are threatening aspects of our lives around this planet, we actually block our ability to take the type of aggressive action that's needed right now. So I think we're all on the same page that action is needed now. The question is, what are the harms and risks, and what are the ways in which we can, you know, move forward, so that not only the industry is advancing, but actually all of our lives are advancing as well, as we lean into these technologies. David, we have less than a minute, but I know you wanted to get a point in there, so please go ahead. Yeah, well, first of all, I think if anyone wants to claim these systems are not creative, I'd just encourage listeners to go out and play with them themselves. You can get them to generate all sorts of interesting things, so I think they certainly show some
10:57 am
form of creativity. I don't know if there's some mystical notion of creativity or meaning-making beyond that, but I don't think they require anything like that to pose a risk. And second of all, on this point about focusing on concrete risks: you know, the problem is that we're talking about risks that are coming years down the line, that we want to start preparing for now, and I don't have a crystal ball, so I can't say exactly what form it is going to take. But I think we do have to worry that AI systems are going to get smarter than people, that we're going to lose our ability to control them, and that that's going to mean we might go extinct. So what do we do about that now? Well, I think we should maybe be less focused on innovating and more focused on what we can do to control the development of more powerful AI systems, and on how we can use the systems and capabilities we already have for socially beneficial things, instead of just racing to make AI smarter and smarter, which a lot of the field is still focused on doing. All right, we have run out of time, so we're going to have to leave the conversation there. Thanks so much to all of our guests: David Krueger, Sarah Myers West, and Ramesh Srinivasan. And thank you for watching. You can see the program again
10:58 am
any time by visiting our website, aljazeera.com. And for further discussion, go to our Facebook page at facebook.com/AJInsideStory. You can also join the conversation on Twitter; our handle is @AJInsideStory. From me, Mohammed Jamjoom, and the whole team here, bye for now. Join the global conversation.
10:59 am
This is a dialogue; we don't always talk to people that have different opinions than we do. Everyone has a voice here. Society doesn't do enough to recognize and celebrate women. What would it look like to have an American occupation of a Middle Eastern country? The Stream, on Al Jazeera. When the shots came from the Holiday Inn, we heard some noise; this was new to us. My colleague was at one of the most dangerous intersections in Sarajevo. You didn't come in through the front entrance; that was what happened to the people who were shot, they came in through the wrong entrance. The nightly pyrotechnics... I said to the cameraman, let's get the hell out of here. Sarajevo's Holiday Inn: War Hotels, on Al Jazeera. When the news breaks: devices like these, you can imagine, being used right across the front line. When people need to be
11:00 am
heard, and the story needs to be told. It was always very hard for me to find a job, because I come from a very poor neighborhood. With exclusive interviews and in-depth reports from the heart of the story, Al Jazeera has teams on the ground to bring you more award-winning documentaries and live news. The US announces its first sanctions in the recent conflict in Sudan, and, along with Saudi Arabia, suspends peace talks, as the Sudanese capital Khartoum and cities in West Darfur witness more fighting around the clock. This is Al Jazeera, live from Doha.