tv France 24 AM News LINKTV June 16, 2023 5:30am-6:01am PDT
5:30 am
>> it is adopted, congratulations. >> it is described as a significant step towards setting a rulebook for ai. european mps have drafted a law to keep a check on the risks posed by ai. the ai act could serve as a framework for creating a global watchdog, but could also suppress innovation. this is "inside story". ♪ welcome to the program, i am
5:31 am
mohammed jamjoom. the world is experiencing an ai revolution that is changing the way we live and work. the rapid growth in ai is creating global concerns. are our jobs under threat, can it create disinformation, and will it amplify lies and misinformation? the european parliament has taken the first step to address these concerns. it has passed a draft of what will be called the ai act. the legislation would ban systems considered to pose unacceptable risks to human lives. that could put european governments on a collision course with american tech giants who are investing billions of dollars in the technology. >> you have all heard and agreed that ai is too important not to regulate, and it is too important to regulate badly. a good regulation that we all agree on as soon as possible must be a common objective. and of course, we need sound enforcement.
5:32 am
we need these obligations that this law gives us to be a real thing on the ground, for people to be safe. >> we are among the first to tackle this fast-evolving technology with a concrete legislative proposal of how to address those powerful foundation models, to build trust and provide transparency and oversight of these systems. the absolute minimum that we need to offer is transparency. it must be clear that this content has not been made by a human. we also go one step further and ask developers to be more transparent and share information with providers on how systems were trained. >> here is natasha butler's report from strasbourg. >> the proposed law is aimed at ensuring that ai systems are safe and ethical, and also that tech companies comply with rules and regulations. if they don't,
5:33 am
they would face very large fines. the idea of the law is that it would categorize ai systems into different levels of risk. there would be low-risk systems that have fewer rules and regulations, but the higher the risk, the more rules and regulations. then there would be a category of ai systems deemed to pose an unacceptable risk. amongst them would be facial recognition in public, in real time. there would be exceptions to this. for example, police could use biometrics in public if they are looking for a missing child. but largely, it would be banned. many say it does not fit with the eu's values and would be an erosion of rights. there are some who say it has its place in terms of security and policing. there are so many challenges with ai because it is such a fast-developing technology. what lawmakers have been weighing is how do you balance protecting
5:34 am
people's rights with supporting innovation. what they don't want is for big eu ai companies to abandon the european union and go to the united states. then the eu would be left behind. >> for more on all of this, i am joined by our guests. in chicago is atoosa kasirzadeh, the director of research at the centre for technomoral futures at the university of edinburgh. she is also a senior policy fellow on the social and policy implications of generative ai for u.k. research and innovation. in new york is mark simpson, ceo of pillar.hr. and also in chicago is david krueger, professor of machine learning at the university of cambridge, whose work focuses on safety in machine learning. thank you so much for joining us
5:35 am
today on inside story. this draft law that has been passed by european mps, how significant a step is this towards setting a rulebook for ai? >> it is a significant first step, but there is a very difficult and long way ahead of us. i am happy to see how europe, especially in working on this act, is trying to think carefully and responsibly about the regulation of ai in the world. this is a significant act. it also shows to the other continents and countries in the world that we are able to do something. this is a first step; let's continue the walk together. >> david, the last time i spoke with you, you along with tech leaders and scientists had issued a warning about the perils posed by artificial intelligence. does this draft law that has
5:36 am
passed make you feel that the warnings that you and others have issued about ai are now taken more seriously? >> i would not say that. i am also happy to see this legislation being passed, but it has been in the works for a long time, long before we made that statement. it is not geared specifically at existential risk. it is more geared at the kind of problems we were already seeing when people started drafting the law. there have been a lot of other signs that people are starting to think more about existential risks. i am encouraged, but not by this one. >> what are the other signs you are seeing that make you believe that others are now factoring in the existential threats? >> just talking to a lot of people, including on media like this. also, the u.k. government has made public statements about this. i am not sure what exactly their
5:37 am
plans are, or that they are doing the right things, but at least they are saying that they are taking it seriously and trying to convene leaders with the u.s. to talk about this. it is missing the full international scope that we need. to really handle this, we need to include other countries like china as well. it might be a step in the right direction. too soon to say. >> if governments are racing to regulate ai, what are the concrete steps that can be taken from your perspective that would make sure that creativity and innovation are not stifled? >> we welcome this new policy from the eu; it just has to be applied in the right way. the right way for innovation and for creativity is making sure there is regulation around some of the bad actors. there are already many laws in
5:38 am
place that say you cannot use any kind of technology, whether it is ai or anything else, for bad acting. if we follow the laws that are currently in place right now, and apply those in a way that is relevant to ai, this could be a very encouraging step forward. >> ai is also becoming a focus of concern when it comes to the potential to create misinformation and deepfake technology, ai-generated images and videos that mimic people. how worried, from your perspective, should we be about this? >> i think we should be very worried, especially with the advancements in the development of generative models that we are seeing. we are going to see better and better synthetic media being produced, content like images and text or videos and audio voices
5:39 am
that are generated by this technology. sometimes it is very hard for humans to judge whether the content that they are seeing on social media or other platforms is generated by an ai system or by a credible source. when we get to a situation where, at scale, a societal scale, we feel paralyzed in judging and distinguishing between the kind of content that is coming from credible sources and the content coming from an ai system, we are basically at the verge of a very existential crisis. because then the very trust in democratic institutions, what it means to trust each other, to trust different governments and different institutions in any country, globally, would be loosened.
5:40 am
i do not have a positive vision of the life after that stage. >> i saw you nodding along. she used the word existential, a word that you and your colleagues have used when talking about the threat posed by ai. how much concern do you have that ai is developing faster than it can be controlled? did you want to add to what she was saying? >> i have a lot of concern about that. that is why we created that statement. that is my number one concern, that we are just scratching the surface of how extreme things will get with ai, in terms of the raw intelligence and the social impact. misinformation is one issue that is important, and it could have catastrophic consequences. it could play a role in future existential risk scenarios.
5:41 am
it is interesting that we have not seen it have a larger impact already, but we could be about to cross a threshold where it starts to have really huge impacts on our information ecosystem. i think about the 2024 election in the u.s., because i am an american. there have also been growing reports of people using these systems for fraud, which is another risk that is related to this fake news and the ability to synthesize and imitate voices. i believe that, similar to tactics that russia or operatives in russia have used in the past, people are trying to sow discord using advanced generative ai techniques on social media platforms already. i think this issue of eroding
5:42 am
trust is a huge one. it will push people to put more trust in large institutions like governments or the largest media companies, which has a number of consequences. right now we do not have to do that, because we can still verify for ourselves when we see content. in the future we might still be able to maintain that verifiability, but those tools will not be widely understood. you will have to defer to authority about whether or not to trust. >> mark, the rapid growth in ai is raising global concerns when it comes to many different issues. as a tech entrepreneur, when people ask you if, due to ai, jobs
5:43 am
are more under threat, what is your response? >> as with any monumental shift in the market, which i do see ai as, the same as the industrial revolution and other shifts, yes, there will be certain jobs that are under threat in the market. i see it more as a job shift and a skills shift in the market, rather than we are all going to lose our jobs. there is the opposing view as well, that ai will work alongside us to help us do our jobs and add more value to society. there is a potential boom around the corner. i do believe there will be a shift, and a significant shift in the skills that are needed. i believe ai can help us if we look forward into the world in years and decades to come. >> concern is also being expressed at the united nations. the
5:44 am
secretary-general has backed calls for a global watchdog to ensure that artificial intelligence technology is safe, secure, responsible and ethical. >> alarm bells over the latest form of ai, generative ai, are deafening, and they are loudest from the developers who designed it. the scientists and experts have called on the world to act, declaring ai an existential threat to humanity, on par with the risk of nuclear war. we must take those warnings seriously. >> the u.n. secretary-general has backed a proposal for the creation of an international watchdog to monitor ai. from your vantage point, how long might that take to become a reality? >> this whole process is going to be complicated.
5:45 am
basically, many organizations need to listen to the voices of lots of different experts. just saying we are going to have a watchdog to monitor ai does not mean too much, because ai can mean anything. in simple terms, any kind of algorithmic system that can do reasoning is an ai system. what does it mean to say we want to monitor ai? then there is a complicated task of interpreting this notion. the statement that we need to think about the conception of a watchdog to monitor ai is great, but when we compare this proposal with the proposal for monitoring the development of nuclear energy, we see there are differences between these two watchdogs. they are both watchdogs, but in the case of the monitoring of nuclear energy, we can
5:46 am
distinguish or inhibit the use of nuclear energy for military purposes. that makes sense. but if you say, let's inhibit the use of ai systems for military purposes, that does not make sense, because most of the countries in the world are using one or another version of ai in their military development. we need to be very careful when it comes to the interpretation of this term. this is going to be a very complicated process; the best we can do is to bring in different voices and do complicated brainstorming. otherwise the results are going to be statements that make no sense, and when we want to put them into operation, we cannot do much. >> i can see you reacting to what she was saying. go ahead. >> thank you, i agree with a lot of what you said.
5:47 am
it is easy to monitor the building of a nuclear weapon: you can see where the products are going, and they are very hard to come by. it is easy to build an algorithm, and to build ai. a watchdog is a great headline; it is the implementation of that watchdog that will be hard, it will be complicated. and we need to hear more about the thoughts around how this is going to be monitored and what actions will follow. >> let me get your thoughts on this. do you think it is realistic that a watchdog can be created, and can it be created in the kind of timeline needed in order to be effective? because ai is developing very rapidly. >> the way i think about it is, it is not can it, but must it? the answer is yes, we need all of these things to happen as soon as possible. i do not know much about how
5:48 am
things work at the u.n. yet. i do not know how long it is normally expected to take. what to monitor, i agree, is something that we should have a discussion about, but i think it is easy right now to monitor the largest ai models, because those are difficult to create. only a few organizations have the capacity to create them. only openai has been able to create a model like gpt-4. what we are most concerned about is the largest frontier models, and it could be easy to monitor models like that, if we just have the will to do it. you can monitor the large amount of compute that is needed to build them, and you can monitor the production and distribution of that compute. we are talking about tens or
5:49 am
hundreds of millions of dollars of computer chips, specialized high-end chips that all need to be colocated in the same facility. that might change as we get cheaper and faster chips and better technology. that might make monitoring difficult over time, but if we start right now, then in that sense it is more similar to nuclear weapons, if you care about the biggest models, which are the ones that are most likely to pose extinction risk. they are potentially poised to become intelligent enough to get out of control. >> aside from the draft law that was passed, you also have the u.s. and european union drawing up a voluntary code of conduct for artificial intelligence. as far as i understand, for this to be effective, it would require the tech industry to sign up as well. what i am curious about is, if
5:50 am
you believe that for the tech industry to be involved, can they be trusted to be putting up safeguards for themselves? >> we need to have different forms of regulation; self-regulation obviously is needed. many research labs and ai companies already have some versions of self-regulation. they have some kind of safety and ethics team or safety and transparency team. they have researchers and experts that are trying to self-regulate. we have a minimal level of self-regulation, but that is not enough, because this technology can really disrupt human society at a global scale. we need to bring the conversation to a more serious and more advanced stage, where we go and ask the tech ceos,
5:51 am
what do you mean by self-regulation? when one person talks about regulation and says this is important, the next stage is, what does self-regulation mean, and what does it not mean. otherwise we will just pass the words around and talk about things while the development of ai goes forward, and unfortunately i think we would fail in properly regulating it. we need to get more concrete by bringing different stakeholders to the table, bringing people who have done a lot of work on responsible and ethical and safe ai for many years. bringing those experts to the table along with the views of the ceos of tech companies, and putting them into dynamic conversation. otherwise i do not think we are going to end up somewhere good. >> i can see that you wanted to jump in a lot there. >> yeah, i agree with everything she said. to add to that, the short answer
5:52 am
is, we cannot trust these companies to regulate themselves. that is just, on the face of it, absurd at the point where we are at, where the ceos of these companies are saying, we need regulation, and they are not just talking about self-regulation. there are good people working on the policy and ethics teams at these companies, and they have good ideas, so we should listen to them and bring them into the conversation. but we cannot leave it up to them. the dynamic here is that there are very powerful incentives to build more powerful ai systems. at some point, we may need to stop building more powerful ai systems. if your business as a company is founded on building more and more powerful ai systems, that is going to be a significant threat to your business. that means that i do not think
5:53 am
in a competitive, profit-driven marketplace companies are going to take that action when it is necessary. we need regulation that is able to step in and say: what you are doing is not safe, it is not trustworthy, it is unacceptable, and even if that is going to hurt your bottom line, you cannot do it. whereas a lot of the self-regulation and regulation we have seen so far is, you can do it but you have to do it this way, or do extra stuff on top. it is not really drawing hard lines in the sand, saying at this point you are not allowed to develop or deploy this system anymore. but that is where we need to go. >> whenever one talks about regulation, concerns are raised about the possibility of overregulation. i want to ask you about that. if it is perceived that there is overregulation on the part of the eu with this ai act, would
5:54 am
you essentially start seeing big tech firms deciding to leave the eu and go and set up shop in a country with less regulation? is that one of the concerns for the eu? >> it certainly is a concern, and i want to pick up on one thing where i disagree with david. with regulation that limits the use of ai, we would lose a global competitive advantage. the focus for regulation should be on the fundamentals: legal, ethical, and safety standards, and those issues. as long as we are working in collaboration, then i think it is a very encouraging step forward. i do think that limiting our ability, which puts us at a disadvantage to other nations, would be a worrying situation.
5:55 am
in terms of, for me, the way in which i look at regulation, we are helping thousands of companies to work more ethically as it stands. there is a lot of self-regulation in tech right now; we would not have customers if we did anything unethical or illegal. the self-regulation that we have had for years will still remain. i welcome governments getting involved and putting more solidity behind that work. >> we do not have a lot of time left. we spoke a lot about the complexities around this issue, trying to build a framework for regulation, legislation. how do you go about trying to strike a balance between progress and threat? >> that is a good question. i want to say i do not think there is a single answer; you could not expect me to give you a single answer to this question.
5:56 am
i want to say one thing that i imagine is really important. that is, i hope governments do not just listen to the way the ceos of big companies talk about regulation, and do not build up their ideas about regulation from how the ceos want regulation to work. there is an unfortunate narrative that some people in different governments do not understand technology, and the ceos and people who are coming from big companies understand technology. i do not think the space is this binary. there are people like me, like david, and there are other people in this space who have done loads of work on ethical and responsible ai. these communities of people who know about technology and know about regulation and policy and historical, social complexities, they need to bring a very loud
5:57 am
voice into this space. unfortunately, that is not what is happening in the u.s. or the u.k. i hope that changes. if that changes, we are going to have more productive and hopefully optimistic conversations about how to go and resolve this super complicated problem. >> we have run out of time. we will have to leave the discussion there. thank you to all of our guests, and thank you for watching. you can see the program anytime by visiting our website at aljazeera.com. you can also join the conversation on twitter; our handle is @ajinsidestory. bye for now. ♪
6:00 am
woman: in the 1960s and the 1970s, there was a loosely affiliated group of artists living in los angeles who grew up working through painting and influenced by abstract expressionism, but by the mid-sixties, they were looking for ever subtler kinds of effects, and so you could almost say that light was their medium. man: rather than paint and canvas, you've got something that has three dimensions and is full of ambiguity and full of mystery. different man: irwin and larry bell and helen pashgian, all of these artists, are at the top of their game now.