
Inside Story  Al Jazeera  June 15, 2023 10:30am-11:01am AST

10:30 am
"It is adopted. Congratulations." It's described as a significant step toward setting a rulebook for artificial intelligence: European MEPs pass a draft law to keep a check on the risks posed by AI. Could the AI Act serve as a framework for creating a global watchdog, or could it also suppress innovation? This is Inside Story.
10:31 am
Hello and welcome to the program. I'm Mohammed Jamjoom. The world is experiencing an artificial intelligence revolution. It's changing the way we live and work, but the rapid growth in AI is also raising global concerns. Are our jobs under threat? Can it create disinformation? Will it amplify bias and misinformation? Even governments are grappling with such questions. Now the European Parliament has taken the first step to address these concerns: it has passed a draft of what will be called the AI Act. The legislation would ban systems considered to pose unacceptable risks to human lives, but that could put European governments on a collision course with American tech giants who are investing billions of dollars in the technology. "I think that you've all heard, and probably agree, that AI is too important not to regulate, and too important to badly regulate.
10:32 am
A good regulation that we all agree on as soon as possible must be a common objective, and of course we need sound enforcement. We need the obligations that this law gives us to be a real thing on the ground, for people to be safe. We are the first to tackle this fast-moving and evolving technology with a concrete legislative proposal on how to address those powerful foundation models, to build trust and provide transparency and oversight of these systems." "The absolute minimum that we need to offer here is transparency. It must be clear that this content has not been made by a human. We will also go one step further and ask the developers of these large models to be more transparent and share information with providers on how the systems were trained and how they were developed." Natasha Butler has been following the developments at the European Parliament. Here's her report from Strasbourg. This proposal is aimed at
10:33 am
ensuring that AI systems are safe and ethical, and that tech companies comply with rules and regulations; if they don't, they would face very large fines. The idea of the law is that it would categorize AI systems into different levels of risk. Low-risk systems would face fewer rules and regulations, but the higher the risk, the more regulations would be in place. Then there would be a category of AI systems deemed an unacceptable risk, and amongst them would be facial recognition in public in real time. There would be exceptions to this: for example, police could use biometric surveillance in public in real time if they were looking for a missing child. But in the main it would be banned. Many MEPs say that it simply doesn't fit in with EU values, that it would be an erosion of rights, but there are some who say that it has its place in terms of security and
10:34 am
policing. Nevertheless, there are so many challenges with AI, because it is such a fast-developing technology, and what policymakers are really asking is: how do you balance protecting people's rights with trying to also support and encourage innovation? Because what they don't want is for big EU AI companies to abandon the European Union and go, for example, to the United States, leaving the EU behind in this tech race. Natasha Butler, for Inside Story, Strasbourg. All right, for more on all of this, I'm joined by our guests. In Chicago is Atoosa Kasirzadeh, director of research at the Centre for Technomoral Futures at the University of Edinburgh; she is also a senior policy fellow on the social and policy implications of generative AI for UK Research and Innovation. In New York is Mark Simpson, tech entrepreneur and CEO of Pillar, an interview intelligence platform using AI technology. And, also in Chicago, is David Krueger,
10:35 am
professor of machine learning at the University of Cambridge, whose work focuses on existential safety in AI. A warm welcome to you all; thanks so much for joining us today on Inside Story. Atoosa, let me start with you. This draft law that has been passed by European MEPs: how significant a step is this towards setting a rulebook for artificial intelligence going forward? I think it's a significant first step for sure, but there is a very difficult and long way ahead of us. I am really happy to see how Europe, especially in working on this act, is trying to think very carefully and responsibly about the regulation of AI. So I think this is definitely significant, because it also shows to the other continents, and to other countries in the world, that we are able to do something. This is a first step; let's continue to walk together. David,
10:36 am
the last time I spoke with you, earlier this month, you, along with tech leaders and scientists, had issued a warning about the perils posed by artificial intelligence. Does this draft law that has passed make you feel that the warnings that you and others have issued about AI are now being taken more seriously? I wouldn't say that. I mean, I'm also happy to see this legislation being passed, but it's been in the works for a long time, since long before we made that statement, and it's certainly not geared specifically at existential risk. I think it's more geared at the kinds of problems we were already seeing five years ago, when people started drafting the law. There have been a lot of other signs that people are starting to think more about existential risk, so I am encouraged by recent developments, but not this one in particular. David, if I could follow up with you: what are some of the other signs that you're seeing that make you believe that others are now factoring in the so-called existential threats? I think it's because
10:37 am
I've been talking to a lot of people, including on media like this. But also the UK government has made public statements about this. I'm not sure what exactly their plans are, or that I know that they're doing all of the right things, but at least they're saying that they are taking it seriously and trying to convene leaders, at least with the US, to talk about this. I think it's still maybe missing the full international scope that we need to handle this effectively; we need to include other countries, like China, as well. But it's moving in the right direction, maybe, some would say. Mark, so if governments are now racing to regulate AI, what are some of the concrete steps that can be taken, from your perspective, that would make sure that creativity and innovation aren't stifled? Yeah, we welcome the policy from the EU; I think it just has to be applied in the right way:
10:38 am
the right way for innovation and the right way for competitiveness. There is so little regulation around some of the tools and some of the products that have come out, but there are still many laws in place that mean you cannot use any kind of technology, whether it's AI or anything else, for bad purposes. If we follow the laws currently in place right now and apply those in a sensible way, I think this could be a very encouraging step. Atoosa, AI, of course, has also become a focus of concern when it comes to the potential to create misinformation, also when it comes to deepfake technology: these AI-generated images and videos that mimic people. How worried, from your perspective, should we be about this at this particular time? Yeah, I think we should definitely be very worried, especially with the advancements and developments of the generative models that
10:39 am
we're seeing. We are going to get better and better synthetic media: content like images, text, videos or audio voices that are generated by these technologies. And sometimes it is very hard for humans to judge whether the content that they are seeing on social media or other platforms was generated by an AI system or is coming from a credible source. I think when we get to a situation where, at a societal scale, we feel paralysed in judging and distinguishing between the kind of content that comes from credible sources and the kind of content that is generated by an AI system, then we are basically at the verge of a very existential crisis, if I can call it that, because then the very foundations of democratic institutions are threatened. What does
10:40 am
it mean to trust each other? What does it mean to trust different governments and different institutions, in any country and also globally? I don't have a positive vision of the world after that stage. David, I saw you nodding along to some of what Atoosa was saying there, and certainly she used the word "existential"; that's a word that you and your colleagues have used when talking about the threat posed by AI. How much concern do you have that AI is developing faster than it can be controlled? And did you want to jump in and add to what Atoosa said? Yeah, absolutely. I have a lot of concern about that; that's why we created that statement. That's my number one concern: in general, we're just scratching the surface of how extreme things could get with AI. It's not just the raw intelligence or power of the systems, but also the societal impact that that will lead to. I think misinformation is definitely one issue that is important and that could
10:41 am
have the kind of catastrophic consequences that Atoosa mentioned, and I think it could play a role in future existential risk scenarios. I think it's interesting that we haven't seen it have even larger impacts already with the techniques that are out there, but I think we could be about to cross a threshold where it starts to have some really huge impacts on our information ecosystem. I think about the 2024 election in the US; I'm an American. There have also been growing reports of people using these systems for fraud, which is another related risk: the ability to synthesize and imitate people's voices and things like that. And I believe that, similar to tactics that Russia, or operatives in Russia, have used in past elections, there are some who are trying to sow discord using more advanced
10:42 am
generative techniques on social media platforms already; this is something I heard about recently. Generally speaking, I think this issue of degrading trust is a really strange one. I think it will probably push people, by necessity, to put more trust in large institutions like governments or the largest media companies, which has a number of consequences. Right now we don't have to do that, because we can still verify for ourselves, when we see some content, that that thing really happened. In the future we might still be able to maintain that kind of verifiability for a while using technical tools, but those tools will not be widely understood, so you'll have to defer to authority about whether or not to trust that this material is legitimate, more and more of the time. Right. Mark, the rapid growth in
10:43 am
AI is raising global concerns when it comes to many different issues. I want to ask you, as a tech entrepreneur: when people ask you if, due to AI, their jobs are now more under threat than before, what is your response? As with any monumental technology shift, and I do see a lot of the same as the industrial revolution and the other shifts historically, yes, there will be some jobs that are threatened. But I see it more as a job shift and a skill shift. We're going to lose some, but there is the opposing view as well: AI is here to help us do our jobs and to help us move society forward. There is a boom around the tools in the market that are helping people find jobs and get paid more for them as well. So I do believe it's going to be a shift, a significant shift in the skills that are needed,
10:44 am
but I believe, as we look forward into AI and into the world, it will be positive. Concern is also being expressed by the United Nations Secretary-General, who has backed calls for a global watchdog to ensure that artificial intelligence technology is safe, secure, responsible and ethical. "Alarm bells over the latest form of artificial intelligence, generative AI, are deafening, and they are loudest from the developers who designed it. These scientists and experts have called on the world to act, declaring AI an existential threat to humanity on a par with the risk of nuclear war. We must take those warnings seriously." Atoosa, the UN Secretary-General has also backed a proposal for the creation of an international watchdog to monitor AI. From your vantage point, how long might that take to become a reality?
10:45 am
So, I just want to say that this whole process is going to be absolutely complicated, and many organizations, like the UN, need to listen to the voices of lots of different experts. Just saying that we're going to have a watchdog to monitor AI does not mean too much, because "AI" can mean anything: basically, any kind of system that can do a simple task or some kind of reasoning is an AI system. So what does it mean to say we want to monitor AI? There is this very complicated task of interpreting the notion. I think the very sentiment, that we need to think about the conception of a watchdog to monitor it, is great. But when we compare, for example, this proposal with the proposal for a watchdog for monitoring the development of nuclear energy, we see there are so many differences. They are both proposals for watchdogs to monitor a kind of
10:46 am
system, but in the case of monitoring nuclear energy we can distinguish, and thereby inhibit, the use of nuclear energy and nuclear technologies for military purposes; that makes sense. But if you say, okay, let's inhibit the use of AI systems for military purposes, that just does not make sense, because most of the countries in the world are already using one or another version of AI in their military development. So we need to be very careful when it comes to the interpretation of these terms. I think this is going to be a very complicated process, and the best we can do is to bring different voices and different stakeholders together and do some very complicated brainstorming. Otherwise the statements are just going to be statements that make no sense, and when you really try to put them into operation, we can't do much. Mark,
10:47 am
you seemed to be reacting to a lot of what Atoosa was saying there, so please go ahead if you want to jump in. Yeah, thank you. Atoosa, I certainly agree with a lot of what you said. It is comparatively easy to monitor the building of a nuclear weapon, where the components have to come together; it's very, very easy, by contrast, to build an algorithm and to build AI. So a watchdog is a great headline; the implementation of that watchdog is going to be hard, it's going to be very difficult, and we certainly need to hear more about the thinking around how this is going to be monitored for it to actually be useful. David, let me get your thoughts on this. Do you think that it's realistic that a watchdog can be created, and can it be created on the kind of timeline needed to actually be effective? Because AI, of course, is developing very, very rapidly. Yeah, I mean,
10:48 am
the way I think about that kind of question these days is not "can it?" but "it must". And I think the answer is yes: we need all these things to happen as soon as possible. I don't know much about how things work at the UN, actually, so I don't know how long you would normally expect something like this to take. But just to respond to those questions of what to monitor and how it makes sense: I agree that there should be a lot of discussion about it, but I think it's actually pretty easy right now to monitor at least the largest models. Those are very difficult to create, and only a few organizations have the capacity to do so; arguably, only OpenAI right now has the capacity to build a model like GPT-4. So if what we're most concerned about is the largest, frontier models, it could be pretty easy to start monitoring models like that, basically any time, if we just have the political will to do it:
10:49 am
you can monitor the large compute that is needed to build them, and you can monitor the production and distribution of that compute. We're talking about tens or possibly hundreds of millions of dollars' worth of computer chips, specialized high-end chips, that all need to be co-located in the same place. That might change as we get cheaper, faster chips and better technologies, which might make monitoring more difficult over time, but I think we can start right now. So in that sense it is actually more similar to nuclear weapons, if you care most about the biggest models, which are the ones that I think are most likely to pose extinction risk in the near term, because they're the ones that have the most intelligence and are potentially poised to become intelligent enough to get out of control. Atoosa, aside from the draft law that was passed, you also have the US and the European Union drawing up
10:50 am
a voluntary code of conduct for artificial intelligence. But, as far as I understand, for this to be effective it would need the tech industry to basically sign up as well. What I'm curious about is whether you believe that, for the tech industry to be involved, they can be trusted to put up safeguards for themselves, to try to regulate themselves. So, I think we definitely need to have different forms of regulation, and some kind of self-regulation obviously is needed. Many research labs and many companies already have some version of self-regulation: they have some kind of safety and ethics team, a transparency team or a policy team, so they have resources, researchers and experts who are trying to somehow self-regulate. So we have, I think, a minimal level of self-regulation within the companies, but that is just not enough, because these systems can really disrupt human society at a global scale. So we definitely need to bring the conversation to
10:51 am
a more serious and more advanced stage, where we go and ask a tech CEO, for example: what do you mean, exactly, by self-regulation? When Sam Altman, as one example, talks about regulation and says this is really important, the next stage is that we need to ask what exactly he means by self-regulation, and by regulation in general, and what it does not mean. Otherwise we are going to just toss the words around and talk about things, and then the development of AI goes forward and, unfortunately, I think we would fail to properly regulate it. So we need to get much more concrete, bringing different stakeholders to the table: bringing people who have done lots of work on responsible, ethical and safe AI for many years, bringing those experts' views to the table alongside the views of the CEOs of tech companies, and putting them into a dynamic and serious conversation. Otherwise I don't think we're going to end up somewhere good. David,
10:52 am
I can see that you want to jump in on a lot there, so please go ahead. Yeah, I agree with everything Atoosa just said. Just to add to that a little bit: it seems to me that, no, we absolutely cannot trust tech companies to regulate themselves. I think that's made obviously kind of absurd at the point that we're at, when even the CEOs of these companies are saying we need regulation, and they're not just talking about self-regulation. Now, there are good people, I think, working on some of the policy and ethics teams at some of these companies, and they may have some good ideas, so we should listen to them and bring them into the conversation. But, like I said, we can't leave it up to them. The dynamic here is that there are just very powerful incentives to build more powerful AI systems, and at some point we may need to stop building more powerful AI systems. And if your business as
10:53 am
a company is founded on building more and more powerful AI systems, then that's going to be a significant threat to your business. And that means that I don't think, in a competitive, profit-driven marketplace, companies are going to take that kind of action when it's necessary. So we really need regulation that is able to step in and say: what you're doing is not safe, it's not trustworthy, it's unacceptable to develop or deploy this kind of system, and even if that's going to really hurt your bottom line, you just can't do it. Whereas a lot of the self-regulation and regulation that we've seen so far is more like: well, you can do it, but you have to do it this way, or add a little bit of extra stuff on top. It's not really drawing these hard lines that say: at this point you are not allowed to develop or deploy this kind of system anymore. But that's where we need to go. Mark, whenever one talks about regulation, concerns are raised about the
10:54 am
possibility of over-regulation, and I want to ask you about that specifically. If it's perceived that there is over-regulation on the part of the EU with this AI Act, would you essentially start seeing some big tech firms deciding to leave the EU and go and set up shop in a country with fewer regulations, or possibly even the US? Is that one of the concerns for the EU right now in all of this? It certainly can be, yes. And I just want to pick up on one thing that I disagree with David on, on the regulation side. If we limit the use of AI, or limit the development of AI, we would lose our global competitiveness. I think the focus of regulation should be around the fundamentals: around what is legal, around safety standards and those issues. And as long as we're working in collaboration in the right way, I think it's a very, very encouraging step. If we were the only ones limiting our ability, or our compute
10:55 am
ability, and that were not followed by other nations or other areas as well, that would be a very, very worrying situation to be in, in terms of the way in which we look at regulations. We're also seeing thousands of companies... Atoosa asked what exactly self-regulation means. There is a lot of self-regulation in tech right now: we would not have customers if we did anything unethical, if we did anything illegal. So there is the self-regulation that we've had for years, and we welcome governments getting involved and putting a little bit more solidity behind that. Atoosa, we don't have a lot of time left, but we've spoken a lot about the complexities around this issue: the complexities in trying to build a framework for regulation, for legislation. I want to ask you, how do you go about trying to strike
10:56 am
a balance between progress and threat? That's a very good question, and I just want to say that you should not expect me, or anyone else, to give you a simple answer to it. But I want to say one thing that I imagine is really important, and that is that governments must not just listen to the way the CEOs of these companies talk about regulation, and must not build up their ideas around how the CEOs want regulation to work. There is this unfortunate narrative that people in various governments do not understand technology, while the CEOs and the people who come from big companies do understand it. I don't think this binary is right. There are people like me, there are people like David, there are so many other people in this space who, over the years, have done lots of work on ethical AI and responsible AI, and these communities of people know about technology to
10:57 am
a certain degree, know about regulation and policy, and know the historical and social complexities. They need, I think, to bring a very loud voice into this space. Unfortunately, that's not what's happening in the US; that's not what's happening in the UK. I really hope that that changes, and I think if it does, then we are going to have more productive and hopefully optimistic conversations about how to resolve this super complicated trade-off problem, as you correctly mentioned. All right, well, we have run out of time, so we're going to have to leave the discussion there. Thanks so much to all of our guests: Atoosa Kasirzadeh, Mark Simpson and David Krueger. And thank you for watching. You can see the program again any time by visiting our website, aljazeera.com, and for further discussion go to our Facebook page: that's facebook.com/AJInsideStory. You can also join the conversation on Twitter; our handle is @AJInsideStory. For me, Mohammed Jamjoom, and the whole team here: bye for now.