
tv   Inside Story  Al Jazeera  June 15, 2023 2:30pm-3:01pm AST

2:30 pm
It's described as a significant step toward setting a rulebook for artificial intelligence: European MEPs pass a draft law to keep a check on the risks posed by AI. The AI Act could serve as a framework for creating a global watchdog, but could it also suppress innovation? This is Inside Story.
2:31 pm
Hello and welcome to the program. I'm Mohammed Jamjoom. The world is experiencing an artificial intelligence revolution. It's changing the way we live and work, but the rapid growth in AI is also raising global concerns. Are our jobs under threat? Can it create disinformation? Will it amplify bias and misinformation? The UN and even governments are grappling with such questions. Now the European Parliament has taken the first step to address these concerns. It has passed a draft of what will be called the AI Act. The legislation would ban systems considered to pose unacceptable risks to human lives, but that could put European governments on a collision course with American tech giants who are investing billions of dollars in the technology. I think that you've all heard, and probably agree, that AI is too important not to regulate, and too important to regulate badly.
2:32 pm
A good regulation that we all agree on as soon as possible must be a common objective, and of course we need sound enforcement. We need these obligations that this law gives us to be a real thing on the ground, for people to be safe. We are the first to tackle this fast-moving and evolving technology with concrete legislation, proposing how to address those powerful foundation models, to build trust and provide transparency around these systems. The absolute minimum that we need to offer here is transparency: it must be clear that this content has not been made by a human. We also go one step further and ask the developers of these large models to be more transparent and to share information with providers on how the systems were trained and how they were developed. Natasha Butler has been following developments at the European Parliament. Here's her report from Strasbourg. The EU's proposal is aimed at
2:33 pm
ensuring that AI systems are safe and ethical, and also that tech companies comply with rules and regulations; if they don't, they would face very large fines. Now, the idea of the law is that it would categorise AI systems into different levels of risk. There would be low-risk systems with fewer rules and regulations, but the higher the risk, the more regulations will be in place. And then there will be a category of AI systems deemed an unacceptable risk, and among them would be facial recognition in public in real time. There would be exceptions to this: for example, police could use biometric surveillance in public in real time if, say, looking for a missing child. But essentially it would be banned. Many MEPs say that it simply doesn't fit with EU values, that it would be an erosion of rights, but there are some who say that it has its place in terms of security and policing. Nevertheless,
2:34 pm
there are so many challenges with AI, because it is such a fast-developing technology. What policymakers are really weighing is how you balance protecting people's rights with trying to also support innovation, because what they don't want is for big EU AI companies to abandon the European Union and go, for example, to the United States, leaving the EU behind in this tech AI race. Natasha Butler for Inside Story, Strasbourg. All right, for more on all of this, I'm joined by our guests. In Chicago is Atoosa Kasirzadeh, director of research at the Centre for Technomoral Futures at the University of Edinburgh; she's also a senior policy fellow on social and policy implications of generative AI for UK Research and Innovation. In New York is Mark Simpson, tech entrepreneur and CEO of Pillar, an interview intelligence platform using AI technology. And also in Chicago is David Krueger,
2:35 pm
professor of machine learning at the University of Cambridge; his work focuses on existential safety in AI. A warm welcome to you all, and thanks so much for joining us today on Inside Story. Atoosa, let me start with you today. This draft law that has been passed by European MEPs: how significant a step is this towards setting a rulebook for artificial intelligence going forward? I think it's a significant first step, for sure, but there is a very difficult and long way ahead of us. I am really happy to see how Europe, especially in working on this act, is really trying to think very carefully and responsibly about the regulation of AI in the world. So I think this is definitely significant, because it also shows to the other continents and the other countries of the world that we are able to do something. So this is a first step; let's continue to walk together. David,
2:36 pm
the last time I spoke with you, earlier this month, you, along with tech leaders and scientists, had issued a warning about the perils posed by artificial intelligence. Does this draft law that has passed make you feel that the warnings that you and others have issued about AI are now being taken more seriously? I wouldn't say that. I mean, I'm also happy to see this legislation being passed, but it's been in the works for a long time, since long before we made that statement, and it's certainly not geared specifically at existential risk. I think it's more due to the kinds of problems we were already seeing, you know, five years ago when people started drafting the law. But there have been a lot of other signs that people are starting to think more about existential risk, so I am encouraged by recent developments, not just this one in particular. David, if I could follow up with you: what are some of the other signs that you're seeing that make you believe that others are now factoring in these so-called existential threats? I think, because I'm talking to a lot of people, including, you know,
2:37 pm
on media like this; but I guess also the UK government has made public statements about this. I mean, I'm not sure exactly what their plans are, or that I know they're doing all of the right things there, but at least they're saying that they are taking it seriously, and trying to convince leaders, at least within, you know, the US, to talk about this. I think it's still maybe missing the full international scope that we need to really handle this effectively; we need to include other countries like China as well. But it might be a step in the right direction. Mark, so if governments are now racing to regulate AI, what are some of the concrete steps that can be taken, from your perspective, that would make sure that creativity and innovation aren't stifled? Yeah, we welcome this new policy from the EU. I think it just has to be applied in the right way,
2:38 pm
in the right way for innovation. And the right way to approach this is just making sure there is regulation around some of the core laws and some of the standards that are already out there. There are still many laws in place, and you cannot use AI in a way that breaks them any more than anything else. If we follow the laws currently in place right now and apply those in a way which is fair to AI and fair to people, I think this could be a very encouraging step forward. Atoosa, AI of course has also become a focus of concern when it comes to its potential to create misinformation, and also when it comes to deepfake technology: these are AI-generated images and videos that mimic people. How worried, from your perspective, should we be about this at this particular time? Yeah, I think we should definitely be very worried, especially with the advancements and the developments of the generative models that
2:39 pm
we're seeing. We are going to get better and better synthetic media: basically, content like images or text or videos or audio voices that are generated by these technologies. And sometimes it is very hard for humans to judge whether the content that they are seeing on social media or other platforms is generated by an AI system or is coming from a credible source. And I think when we get to a situation where, at a societal scale, we fail in judging and distinguishing between the kind of content that is coming from credible sources and the kind of content that is generated by an AI system, then we are basically at the verge of a very existential crisis, if I can call it that, because then there are very real threats to democratic institutions. What does
2:40 pm
it mean to trust each other? What does it mean to trust different governments and different institutions, in any country and also globally? When I look at that, I don't have a positive vision of what comes after that stage. David, I saw you nodding along to some of what Atoosa was saying there, and certainly she used the word existential; that's a word that you and your colleagues have used when talking about the threat posed by AI. How much concern do you have that AI is developing faster than it can be controlled? And also, did you want to jump in and add to what Atoosa said? Yeah, absolutely. I have a lot of concern about that; I mean, that's why we created that statement. So yeah, definitely that's my number one concern: that, in general, we're just scratching the surface of how extreme things could get with AI, both in terms of the raw intelligence or power of the systems, but also the social impact that that will lead to. So I think misinformation is definitely one issue that I think is important, and that could
2:41 pm
have the kind of catastrophic consequences that Atoosa mentioned, and I think it could play a role in future existential risk scenarios. I think it's interesting that we haven't seen it have even larger impacts already with the techniques that are out there, but I think we could be about to cross a threshold where it starts to have some really huge impacts on our information ecosystem. So I think about, you know, the 2024 election in the US, the American election. I think there have also been growing reports of people using these systems for fraud, which is another risk that is related to this: they can use the ability to synthesize voices and imitate people's voices and things like that. And I believe that, similar to tactics that Russia, or operatives in Russia, have used in past elections, there are people trying to sow discord using more advanced
2:42 pm
generative techniques on social media platforms already; this is something I heard about recently. Generally speaking, I think this issue of degrading trust is a really strange one. I think it will probably push people, by necessity, to put more trust in large institutions like governments or the largest media companies, which, you know, has a number of consequences. Right now, I think we don't have to do that, because we can still sort of verify for ourselves, when we see some content, that that thing really happened, it seems. But in the future we might still be able to maintain that kind of verifiability for a while using technical tools, but those tools will not be things that are widely understood. So you'll have to defer to authority about whether or not to trust that some content is legitimate, more and more of the time. Right, right. Mark, the rapid growth in
2:43 pm
AI is raising global concerns when it comes to many different issues. I want to ask you, as a tech entrepreneur: when people ask you whether, due to AI, their jobs are now more under threat than before, what is your response? As with any monumental sort of shift, and I do see a lot of the same as with the industrial revolution and the other shifts historically, yes, there will be some jobs that are threatened. But I see it more as a job shift and a skill shift. And what about the opposing view as well: AI on our side is there to help us do our jobs and to help us bring more value to society. There has been a big boom around AI in the market we are in, finding jobs and getting talent forward as well. So I do believe it's going to be a shift, a significant shift in the skills that are needed,
2:44 pm
but I believe AI can help us if we look forward into the world and use it as a guide. Concern is also being expressed by the United Nations Secretary-General, who has backed calls for a global watchdog to ensure that artificial intelligence technology is safe, secure, responsible and ethical. Alarm bells over the latest form of artificial intelligence, generative AI, are deafening, and they are loudest from the developers who designed it. These scientists and experts have called on the world to act, declaring AI an existential threat to humanity, on a par with the risk of nuclear war. We must take those warnings seriously. Atoosa, the UN Secretary-General has also backed a proposal for the creation of an international watchdog to monitor AI. From your vantage point, how long might that take to become a reality?
2:45 pm
So I just want to say that this whole process is going to be absolutely complicated, and basically many organizations, like the UN, need to listen to the voices of lots of different experts. Just saying that we're going to have a watchdog to monitor AI does not mean too much, because AI can mean anything: basically, any simple task, any kind of work that a system can do, any kind of reasoning done by a system. So what does it mean to say we want to monitor AI? There is this very complicated task of interpreting this notion. The very statement that we need to think about the conception of a watchdog to monitor AI is great. But when we compare, for example, this proposal with the proposal for a watchdog for monitoring the development of nuclear energy, we see there are so many differences. In both cases there are proposals for watchdogs to monitor
2:46 pm
a kind of system. But in the case of monitoring nuclear energy, we can distinguish and restrict, basically inhibit, the use of nuclear energy and nuclear technologies for military purposes; that makes sense. But if you say, okay, let's inhibit the use of AI systems for military purposes, that just doesn't make sense, because I think most of the countries in the world are using one or another version of AI in their military development. So we need to be very careful when it comes to the interpretation of these terms, and I think this is going to be a very complicated process. The best we can do is just to bring different voices, different stakeholders together, and do some very complicated brainstorming. Otherwise the statements are just going to be statements that make no sense, and when you really want to put them into operation, we can't really do much. Mark,
2:47 pm
are you reacting to a lot of what Atoosa was saying there? Please go ahead if you want to jump in. Yeah, thank you, Atoosa; I certainly agree with a lot of what you said. It's very easy to say we will monitor this; a watchdog is a great headline, but the implementation of that watchdog is going to be hard. We certainly need to hear more about the thoughts on how this is going to be monitored, and even whether it is actually going to be useful. David, let me get your thoughts on this. Do you think that it's realistic that a watchdog can be created, and can it be created on the kind of timeline needed to actually be effective? Because AI, of course, is developing very, very rapidly. Yeah, I mean,
2:48 pm
the way I think about that kind of question these days is not "can it be done?" but "must it be done?", and I think the answer to that is yes; we need all these things to happen as soon as possible. I don't know much about how things work at the EU and the UN, actually, so I don't know how long you would normally expect something like this to take. But just to respond to those questions of what to monitor and how it makes sense: I agree that this is something there should be a lot of discussion about, but I think it's actually pretty easy right now to monitor at least the largest models. Those are very difficult to create, and only a few organizations have the capacity to do it; arguably, only OpenAI right now has the capacity to build a model like GPT-4. And so I think, if what we're most concerned about is the largest, you know, frontier models, it could be pretty easy to start monitoring models like that, basically any time, if we just have the political will to do it.
2:49 pm
You can monitor the large amount of compute that is needed to build them, and you can monitor the production and distribution of that compute. We're talking about tens or possibly hundreds of millions of dollars' worth of computer chips, specialized high-end chips, that need to be co-located in the same site. That might change as we get cheaper, faster chips and better technologies, so monitoring might become more difficult over time, but I think we can start right now. So I think in that sense it is actually more similar to nuclear weapons, if you care most about just the biggest models, which are the ones that I think are most likely to pose extinction risk in the near term, because they're the ones that have the most intelligence, that are potentially poised to become intelligent enough to get out of control. Atoosa, aside from the draft law that was passed, you also have the US and the European Union drawing up
2:50 pm
a voluntary code of conduct for artificial intelligence. But as far as I understand, for this to be effective, it would need the tech industry to basically sign up as well. And what I'm curious about is: if you need the tech industry to be involved, can they be trusted to put up safeguards for themselves, to try to regulate themselves? So I think we definitely need to have different forms of regulation; some kind of self-regulation obviously is needed. Many research labs and many companies already have some version of self-regulation: they have some kind of safety and ethics team, or safety or transparency team, or policy team. So the research labs have some researchers and experts who are trying to somehow self-regulate. So we have, I think, a minimal level of self-regulation within the companies. But that's just not enough, because these systems can really disrupt human society at a global scale,
2:51 pm
and so we definitely need to bring the conversation to a more serious and more advanced stage, where we go and ask a tech CEO, for example: what do you mean exactly by self-regulation? So when Sam Altman, as one example, talks about regulation and comes and says this is really important, the next stage is that he needs to come and say what exactly he means by self-regulation, and by regulation in general, and what he does not mean. Otherwise we are going to just toss the words around and talk about things, and meanwhile the development of AI goes forward; unfortunately, I think we would fail in properly regulating it. So we need to get much more concrete, bringing different stakeholders to the table: bringing people who have done lots of work on responsible and ethical and safe AI for many years, bringing those experts' views to the table, and the views of the CEOs of tech companies to the table, and putting them into a dynamic and serious conversation. Otherwise I don't think we're going to end up somewhere
2:52 pm
good. David, I can see that you wanted to jump in a lot there, so please go ahead. Yeah, I agree with everything Atoosa just said. Just to add to that a little bit: I think, no, we absolutely cannot trust tech companies to regulate themselves. I think that's just, on the face of it, kind of absurd at the point that we're at, when even the CEOs of these companies are saying we need regulation, and they're not just talking about self-regulation. Now, I think there are good people working on some of the policy and ethics teams at some of these companies who may have some good ideas, so we should listen to them and bring them into the conversation. But, like I said, we can't leave it up to them. I think the dynamic here is that there are just very powerful incentives to build more powerful AI systems, and at some point we may need to stop building more powerful AI systems. And if your business as
2:53 pm
a company is founded on building more and more powerful AI systems, then that's going to be a significant threat to your business. And that means that I don't think, in a competitive, profit-driven marketplace, companies are going to take that kind of action when it's necessary. So we really need regulation that is able to step in and say: what you're doing is not safe, it's not trustworthy, it's unacceptable to develop or deploy this kind of system, and even if that's going to really hurt your bottom line, you just can't do it. Whereas a lot of the self-regulation and regulation that we've seen so far is more like: well, you can do it, but you have to do it this way, or do a little bit of extra stuff on top. It's not really drawing the hard lines and saying: at this point, you are not allowed to develop or deploy this kind of system anymore. But that's where we need to go, to my mind. Mark, whenever one talks about regulation, concerns are raised about the
2:54 pm
possibility of over-regulation. I want to ask you about that specifically. If it's perceived that there is over-regulation on the part of the EU with this AI Act, would you essentially start seeing some big tech firms deciding to leave the EU and go and set up shop in a country with fewer regulations, or possibly even the US? I mean, is that one of the concerns for the EU right now in all of this? It certainly can be, so yes. And I just want to pick up on one thing that I disagree with David on there, on the regulation side. If we limit the use of AI or limit the development of AI, we would lose our competitiveness on a global scale. I think the focus for regulation should be around the fundamentals: around the legal side, around the core, around safety standards and those issues. And as long as we're working in collaboration in the right way, I think it's a very, very encouraging step. If we are limiting our ability, or our compute
2:55 pm
ability, we would just fall behind other nations or other areas as well, and that would be a very, very worrying situation to be in. In terms of the way in which I look at regulation: we are also serving thousands of companies, so what exactly does self-regulation in tech look like right now? We would not have customers if we did anything unethical; we do nothing illegal. So alongside the self-regulation that we've had for years, we welcome governments getting involved and putting a little bit more solidity behind what we're doing. Atoosa, we don't have a lot of time left, but we've spoken a lot about the complexities around this issue, the complexities in trying to build a framework for regulation, for legislation. I want to ask you: how do you go about trying to strike
2:56 pm
a balance between progress and threat? That's a very good question, and I just want to say: you should not expect me, or anyone else, to give you a simple answer to it. But I want to say one thing that I imagine is really important, and that is that governments should not just listen to the way CEOs of the companies talk about regulation, and should not just build up their ideas around how the CEOs want regulation to work. There is this unfortunate narrative that people in different governments do not understand technology, and that the CEOs and people who come from big companies do understand technology. I don't think the space is this binary. There are people like me, there are people like David; there are so many other people in this space, so many people who over the years have done lots of work on ethical AI and responsible AI, people who know about technology to
2:57 pm
a certain degree, who know about regulation and policy and historical and social complexities. They need to be brought in, to bring a very loud voice into this space. Unfortunately, that's not what's happening in the US; that's not what's happening in the UK. And I really hope that that changes, because I think if it changes, then we are going to have more productive and hopefully optimistic conversations about how to resolve this super complicated trade-off problem as AI continues to evolve. All right, well, we have run out of time, so we're going to have to leave the discussion there. Thanks so much to all of our guests: Atoosa Kasirzadeh, Mark Simpson and David Krueger. And thank you for watching. You can see the program again any time by visiting our website, aljazeera.com, and for further discussion go to our Facebook page, that's facebook.com/AJInsideStory. You can also join the conversation on Twitter; our handle is @AJInsideStory. For me, Mohammed Jamjoom, and the whole team here, bye for now.
