
Inside Story | Al Jazeera | June 2, 2023, 2:30pm-3:01pm AST

2:30 pm
In June 1967, six days redrew the map of the Middle East. "Two dark-coloured aircraft appeared from a distance, just as we were focusing. They dropped their bombs on the runway while they were exposed." The events leading to the war, and its consequences, which are still felt today. The War in June, on Al Jazeera. Artificial intelligence: a powerful technology that can transform human lives. But industry leaders warn that the tools they've built could one day pose an existential threat to humanity. So how can governments regulate AI without stifling innovation? This is Inside Story.
2:31 pm
Hello and welcome to the program, I'm Mohammed Jamjoom. A tool that can write poems, create art, advance scientific discoveries, and even do your personal shopping: our world is experiencing an artificial intelligence revolution, and it's happening at a breathtaking pace. But the technology's capacity to perform human functions has also raised concern about the threat it poses to millions of jobs. There are also fears it has the potential to undermine democracies. Governments around the world are moving quickly to set codes of conduct and possible regulations. But can they act fast enough to contain AI's risks and harness its power to be a force for good? We'll get to our guests in a moment, but first, this report from Alex Baird. With almost every major technological leap forward comes concern about the consequences, from the early days of the World Wide Web in the nineties to the explosion of people browsing Facebook and other major social media sites.
2:32 pm
There were fears about the impact on society and jobs. This time around, the rapid advancement of artificial intelligence tools such as ChatGPT has caused alarm, even from the tech leaders creating them. "If this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening." Experts seem to agree that it has the potential both to transform human lives and to threaten human extinction. "They'll be able to manipulate us into doing whatever they want; it will be like a human manipulating a two-year-old. So the idea of just capping them isn't going to work." The nonprofit Center for AI Safety lists some of the biggest risks, which include AI being used to develop advanced weapons, being abused by government bodies to monitor and censor citizens, and, a more immediate risk, AI-generated misinformation that could destabilize
2:33 pm
society and interfere in elections. But AI's transformative power is also revolutionizing medicine and science, making people more efficient and improving quality of life. Researchers have used the technology to find new antibiotics and to help a paralyzed man walk once again. Governments are now trying to mitigate the risks with proposed regulations while harnessing the benefits. The EU and the US have said they are planning to release a code of conduct to develop common AI standards. "There's almost always a gap when new technologies emerge, between the time in which those technologies emerge and have an impact on people, and the time it takes for governments and institutions to figure out how to legislate or regulate them. And we feel the fierce urgency of now, particularly when it comes to generative AI. Within the next weeks, we will advance a draft of an AI code of conduct,"
2:34 pm
while also taking industry input. Artificial intelligence is developing fast, faster, it seems, than governments can regulate. The question is: can we keep up? Alex Baird, Inside Story. All right, for more on this, let's bring in our guests. In London, David Krueger, assistant professor in machine learning and computer vision at the University of Cambridge; he helped draft the statement on AI risk. In New York, Sarah Myers West, managing director of the AI Now Institute, which focuses on the social implications of artificial intelligence. And in Los Angeles, Ramesh Srinivasan, professor of information studies at the University of California, Los Angeles, and founder of the research group Digital Cultures Lab. A warm welcome to you all, and thanks so much for joining us today on Inside Story. David, let me start with you. A few days back, you, along with tech leaders and scientists, issued
2:35 pm
a warning about the perils posed by artificial intelligence. Let me go ahead and read that statement in full: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." So my question is, why was this warning just one sentence, and how difficult was it to get the other signatories on board? Yeah, thanks for asking. So there were actually two reasons we wanted to make just a simple, one-sentence statement like this. One is that in the past there has been a lot of discussion of AI risk and a growing amount of concern about various risks of AI, but the risk of human extinction in particular, and in particular from artificial intelligence getting out of control, is one that I think there has been something of a taboo about discussing. I've spoken to a number of people in private who acknowledge that they think this is
2:36 pm
a serious concern, but who were concerned about expressing it publicly, because they felt that they might be viewed poorly and judged by their colleagues, and that it might also affect their job prospects. So we wanted to break that taboo and give everyone who was feeling these kinds of concerns a place to voice them. The other reason we decided to focus on such a short statement is that in the past, a lot of similar letters or statements have included a lot of extra detail, which I consider somewhat uncertain, about exactly how AI might lead to extinction or what we should do about that. And I think those are discussions we should definitely be having, but people have a lot of different views on them, and I think we should focus on what we agree on here as a way of starting a richer discussion about those things. So David, if you are trying to get a conversation going around this issue with a warning
2:37 pm
at the start, do you expect that you and your colleagues will at some point issue recommendations to go along with it? No, I don't think so, necessarily. I do hope that people think about this and bring ideas to the table. But a big part of what we're trying to do is engage the rest of society, policymakers, civil society and everyone else who has a lot of their own expertise and knowledge, in the conclusion that they can help us to address this problem. They might actually have much better ideas about what specifically should be done, in particular in terms of regulation and international cooperation, than people like me, who are primarily technical researchers. So, Sarah, there is obviously... sorry, go ahead, David, go ahead. This is similar to what Geoffrey Hinton, the Turing Award winner, is saying: he's just here to sound the alarm, and he's hoping that other people will pick up on this and try to figure out what we can do about it. So, Sarah, there is obviously
2:38 pm
a lot of talk right now about the potential existential threats to humanity posed by artificial intelligence. Do you believe that the ways in which these discussions and debates about these concerns are being framed are actually constructive? What I think is helpful is that we're all agreeing on the need for intervention. But I am concerned that the focus on existential threats is going to draw attention and resources toward concerns that are positioned far out in the future. People are being harmed by AI systems in the here and now. AI is already widely in use; it's shaping people's lives, their access to resources, their life chances: whether they're going to receive quality health care, access to financial resources or credit, or well-paying jobs, and it impacts their education. These are the kinds of concerns that we really need to be foregrounding, and in particular interrogating the way that AI, as it's widely in use today, is functioning to ramp up patterns of inequality in society while rendering them
2:39 pm
even harder to see. So let me follow up with you. You're talking about specific concerns in the here and now when it comes to AI, and I want to ask you specifically about things like racial and gender prejudices: how much concern is there right now when it comes to AI around those specific issues? We have decades of research already pointing to widespread challenges with gender and racial biases in tech broadly, and then specifically within artificial intelligence technologies of many different sorts. So I think there is widespread concern, and more than enough evidence to be able to act on. In the United States, we've already seen some activity by regulators to this exact effect: the White House issued an executive order on racial equity, and the Equal Employment Opportunity Commission is focusing on racial and gender-based discrimination, as well as
2:40 pm
discrimination against the disability community, in the use of algorithmic systems in hiring practices. So this is already a fairly robust debate, and one that we need to continue to push forward. Ramesh, governments are now racing to regulate AI, but what are the concrete steps that can be taken to ensure that creativity and innovation are not stifled? Yeah, I'm really glad you asked that. I think if we want to discuss concerns around extinction, we should probably discuss what's happening with our climate and the incredible species loss we're seeing all around the planet right now. That's happening right now, and that's imminent, as far as what is actually occurring. I think it's very important to note that the way these so-called generative AI models function is that they use our personal data. They're trained on human data, and they essentially replicate identified patterns and therefore reinforce the
2:41 pm
sorts of biases and forms of discrimination that were just being spoken about. So I think if we really want to think about where we're going, we have to think in a public way and we have to think in a global way. And as Sarah mentioned earlier, we see again and again technologies being introduced dominantly by private, for-profit, corporate-valuation-oriented kinds of innovators, and the rest of us just have to deal with the effects of that. So another vision would be a design-oriented vision, where we have stakeholders around the world, around the planet, actually designing, regulating and auditing technologies such as this together, so we can ensure that they have the sorts of checks and balances built into them, so they actually serve purposes that lift all of us, our species, up, rather than elevating a few of us, likely at the cost of almost everybody else. Ramesh, you mentioned
2:42 pm
checks and balances, which leads me to my next question. The US and the EU are drawing up a voluntary code of conduct for artificial intelligence. But if this is to be effective going forward, doesn't the tech industry have to be involved? And if so, does that call upon the tech industry to regulate itself? Because that's not something that's really worked out all that well in the past, has it? You're absolutely right, it doesn't work out all that well, because the technology industry's entire goal is to not necessarily be regulated, or to be regulated in a way that supports its own purposes. I think we all remember quite well when the Cambridge Analytica scandal broke out, when Mark Zuckerberg said, you know, we'll just build better AI, whatever that means; we'll just make our technologies more efficient or more innovative, I'm paraphrasing there, whatever that means. So when you hear these calls for regulation, and I'm not trying to be inherently cynical, we have to understand that when
2:43 pm
a Sam Altman and OpenAI say, hey, we want regulation, that's a great thing for them, because the regulatory apparatus that they would be developing with the state, if that were even to be collaborative, would be one that would likely suit them and place them in a top-dog position. So we should absolutely not take any bait that the tech industry, or rather the generative AI industry, which is a small set of players, an oligopoly, I could say, would regulate itself. We should not fall for that bait. There are plenty of scholars out there, including my two colleagues here on the show, who know plenty about AI and who, if we actually had power or some teeth in our advice, could push for certain types of regulation that would actually be more innovative, because the benefits of these technologies could benefit all of us and benefit our planet. Sarah, I saw you nodding along to
2:44 pm
a lot of what Ramesh was saying; it looks like you want to jump in, so go ahead. Yeah, I wanted to underscore Ramesh's point about the value of having a conversation that really foregrounds the public harms, as well as the environmental harms, of generative AI systems. I think one thing that's really key to remember is that there's nothing about the future of artificial intelligence that is inevitable, and we have a number of examples to point to where public involvement or public pushback has effectively shaped the trajectory of AI. The one that struck me as Ramesh was talking was in the Netherlands: local communities were upset about the environmental impact of the construction of data centers that were drawing on their local groundwater supplies, both reducing the amount of clean water and polluting the water. And as a result, the Netherlands instituted a temporary pause on the construction of data centers for hyperscalers,
2:45 pm
the kinds of companies that are building generative AI. So I think it's really important to remember that that kind of pushback is very effective in shaping the way this technology works, and in ensuring that it's accountable and responsive to the public interest. So, just to break this down a little bit for our viewers, because this can get pretty complex: when you're talking about generative AI systems, how are they differentiated from other AI systems that we're all using in our daily lives right now? It's a great question, and I think it's important to always ground discussions about AI in their material reality. AI as a field has been around for over 70 years, almost 80 years, and it's meant very different things over the course of that history. What we're talking about with AI in the present day is data-centric technologies that rely on massive pools of data, look for patterns within them, and run on large-scale computational power,
2:46 pm
and these are resources that only the largest tech companies really have access to. Now, generative AI functions within that same definitional space, in that it works in roughly the same way. But instead of looking for patterns within text to, say, recommend a decision, it's using those patterns to replicate patterns of human speech, or to generate images that look similar to other images within the dataset. So it's the same basic principle, only it's about the creation of text that, again, mimics the way that we talk; it doesn't have the deeper contextual understanding. All right, let's just take a step back for a moment and look at some of the other concerns being raised about AI right now. The investment bank Goldman Sachs says AI could affect 300 million jobs, but it may also mean a productivity boom, increasing the total value of goods and services produced globally by 7 percent. The rapid growth of this technology is raising concern: according to
2:47 pm
a Reuters/Ipsos poll, 61 percent of Americans believe it could threaten civilization. So, Ramesh, let me start with you here. The technology's capacity to perform human functions is raising concern about the threat it poses to millions of jobs. What industries in particular, and what jobs in particular, are most at risk right now? It's many, many industries. Any industry, at least when we're talking about these OpenAI GPT generative AI models; we're talking about almost any job. And I want to really think about workers and people here in this discussion, people around the planet whose work relies upon drafting or authoring, or writing-oriented jobs or service-oriented jobs. So you can imagine call center workers, content moderation workers, administrative work, legal assistant work, how this might affect the insurance industry, and so on. So,
2:48 pm
you know, there are very direct, present, you know, clear and present challenges, let's call them, about issues that are facing us right now. So we really need to think, as this technology rolls out, about how workers are going to be protected around the world, and about the macroeconomics associated with this. Because already, you know, we live in a world where eight people have wealth equivalent to four billion, and many of them are connected to the vectors of new technology. So I think it's extremely important that we think about what the jobs of the future are going to look like, and about how the global economy is going to be shaped by this. It's worth noting also that the content moderators working on GPT were paid, you know, pennies on the dollar, so to speak, in Nairobi, Kenya. And this is the same pattern the tech industry has followed for many years, for example with exploitative, PTSD-inducing content moderation work in the Philippines and other places
2:49 pm
in connection with Facebook, which you all reported on. David, this Reuters/Ipsos poll that I mentioned a few minutes ago, which says 61 percent of Americans believe AI could threaten civilization: from your perspective, why has the level of alarm around AI been growing so much these past few months? I think the main reason it's been growing is because of progress in AI capabilities, specifically with ChatGPT and GPT-4. Historically, a lot of researchers were willing to dismiss these risks as too far off to worry about, which, by the way, I think is a big mistake. I wish that we had taken climate change seriously when the consequences were decades off, for instance, and I think we're at risk of making the same mistake here. But I think a lot of researchers recently saw the progress that has been made by scaling up existing methods and decided that actually, maybe powerful AI systems, that is, systems that are smarter than people and able to take control, if they for some
2:50 pm
reason were to try to do that, might be coming in a matter of years or decades, and so it's something that we urgently need to work on now. I think another factor is probably just looking at the race to deploy these systems despite the known issues that the other guests have mentioned, and a sort of failure of responsibility on the part of the large developers and big tech companies. I think a lot of people were hoping and expecting that this technology would both progress more slowly and be deployed more responsibly. I actually want to respond to a few things that came up earlier as well. I think the conversation we've had here is fairly characteristic of how this conversation has gone in the past, where we're talking about a bunch of different risks and nobody is really addressing, you know, the elephant in the room, which is the extinction risk. And I've given my reason for that, which is that I think people have said maybe it's too far off, or it just doesn't seem that plausible. But what we're seeing
2:51 pm
now is that a growing number of researchers, including some of the most prominent AI scientists, and not just big tech CEOs but over a hundred AI professors such as myself, are saying this is a serious concern, and in fact that it's a priority to start working on it now. So we need to plan ahead, even if we think these risks are years or decades away. And I don't view this as something that needs to compete for attention with addressing the present-day risks; I don't think this is a zero-sum game for attention. I think we all want regulation, and then discussions about what kind of regulation. And actually, the more focus the rest of society has on AI and its impacts, both the present impacts and the future ones, anticipating what could be coming down the line, the better. I think that also benefits everyone who's working on any of these risks, to have society more clued in and paying more attention. So David, if we're talking about the need for regulation and, as you said, the extinction risk that should be addressed: we know that there are lots of
2:52 pm
governments right now that are trying to figure out ways to regulate. We know that the EU is currently at the forefront of all this; they're trying to enact what they're calling the AI Act. They're hoping to get that passed by the end of the year, but it wouldn't go into effect for at least two to three years from now, at the earliest. So how much concern is there that AI, which is progressing at this breathtaking pace, is developing faster than it can be controlled, and that it's developing at a much quicker pace than the discussions going on right now to try to regulate it? I think there's a lot of concern. That's, I guess, coming back to my point about how we need to start now. I've said in other interviews that I think this letter is overdue, and I think it's really a shame, and reflects poorly on the machine learning research community, that we weren't having this discussion more broadly earlier. So even if you don't think
2:53 pm
that advanced AI, sometimes called AGI, for artificial general intelligence, which is a hypothetical future system that can do everything that people can do, even if you don't see why that might be a risk, and I think most people can understand why there is at least some concern there, but even if you don't think it's a risk, I think you still owe it to the public to communicate that. When we're talking about this being far away, we're still talking about a matter of decades in a lot of cases, and that's something that could happen within our lifetimes, even for people who think it's far off. I think this is going to be an incredibly transformative technology, and what we've seen so far with chatbots, and with all of the deployment of AI we've seen in a number of sectors, is really just scratching the surface of where this technology could go. Most people in the field understand this, and I'm not sure why they haven't been communicating it to the public more in the past, but I think it's great that now we're seeing that happen more. Sarah, so if we're
2:54 pm
talking about how this will be regulated going forward, you're obviously going to need international cooperation in order to regulate AI, and I want to ask you about how difficult that's going to be. Because countries that are developing legislation are going to need to cooperate with each other, and they're going to need to cooperate with countries that they are in competition with; one example, of course, is the US and China. Are these countries going to be able to work together to do this? Well, I think what's been helpful is that regulators around the globe are standing up and paying attention, and also that they recognize that seeing greater accountability within the tech industry is in the national interest. There's been a rise in attention to industrial policy for the tech industry, with accountability at the center. There are ongoing dialogues between regulators that have been happening for quite some time around tech-related issues. We are seeing different
2:55 pm
rulings take place, Microsoft-Activision being one case in point, where the EU came down differently from the UK. But I think what's important is that there's a global consensus that regulation of the tech industry is in the broad public interest, and ongoing dialogues that are facilitating that kind of conversation moving forward. Ramesh, it looked to me in the last couple of minutes like you wanted to jump in, so please go ahead. Yeah, I mean, just two major issues. First of all, I want to be very clear that these systems, and when I speak of these systems I mean OpenAI's and other sorts of generative AI systems, are not intelligent in the way humans are, despite the fact that they can mimic human intelligence and in some cases fool us into thinking that they are intelligent. They're not creative, and they're not necessarily associated with meaning-making, and those two questions, I think, are very, very important to think about today. I think the other issue I want to mention is,
2:56 pm
when we talk about extinction in relation to these technologies, I think it's very, very important that we actually look at the scenarios in which those concerns would actually be valid. Otherwise, if we focus on an alarmist frame rather than looking at the specific ways in which these technologies are threatening aspects of our lives around this planet, we actually block our ability to take the type of aggressive action that's needed right now. So I think we're all on the same page that action is needed now. The question is: what are the harms and risks, and what are the ways in which we can move forward so that not only the industry is advancing, but all of our lives are advancing as well in relation to these technologies? David, we have less than a minute, but I know you wanted to get a point in there, so please go ahead. Yeah, well, first of all, I think if anyone wants to claim that these systems are not creative, I just encourage listeners to go out and play with them themselves; you can get them to generate all sorts of interesting things. So I think they
2:57 pm
certainly show some form of creativity. I don't know if there's some mystical notion of creativity or meaning-making beyond that, but I don't think they require anything like that to pose a risk. And second of all, on this point about focusing on concrete risks: the problem is that we're talking about risks that are coming years down the line, which we want to start preparing for now, and I don't have a crystal ball, so I can't say exactly what form they are going to take. But I think we do have to worry that AI systems are going to get smarter than people, that we're going to lose our ability to control them, and that that's going to mean we might go extinct. So what do we do about that now? Well, I think we should maybe be less focused on innovating and more focused on what we can do to control the development of more powerful AI systems, and on how we can use the systems and capabilities we already have for socially beneficial things, instead of just racing to make AI smarter and smarter, which a lot of the field is still focused on doing. All right, we have run out of time, so we're going to have to leave the conversation there. Thanks so much to all of our guests: David Krueger, Sarah Myers West and Ramesh Srinivasan. And thank you for watching. You can see the program again
2:58 pm
any time by visiting our website, aljazeera.com. For further discussion, go to our Facebook page at facebook.com/AJInsideStory, and you can also join the conversation on Twitter; our handle is @AJInsideStory. For me, Mohammed Jamjoom, and the whole team here, bye for now. The latest news, as it breaks: some authorities are looking into declaring Last Generation a criminal organization, while the activists are receiving some support from society,
2:59 pm
with detailed coverage. The Israeli military has issued demolition orders for the homes, some of which are funded by the European Union. From around the world, people try to find a way out. Join the global conversation: "We continue to say this is a dialogue; we always talk to people that have different opinions than we do. Everyone has to be heard. The issue is that society doesn't do enough to recognize and celebrate women." The Stream, on Al Jazeera. Adopted in Norway, a woman searches for the missing pieces of her family history. But in finding her birth mother, she
3:00 pm
discovers shocking revelations about the international adoption process, on a journey between two continents to unlock the secrets of her past. A Witness documentary, on Al Jazeera. Hello there, the top stories here on Al Jazeera: there has been heavy shelling and gunfighting in the capital, Khartoum, after the shaky ceasefire in Sudan collapsed. Earlier, the United States and Saudi Arabia suspended the talks between the army and the Rapid Support Forces. Washington has also sanctioned companies linked to both sides. Our correspondent Hiba Morgan has more from Khartoum: "We were able to hear heavy artillery fire being fired."


