TV: To the Point, Deutsche Welle, June 8, 2023, 9:30pm-10:01pm CEST
9:30 pm
I worked with it for 30 years. I didn't know. Now I know. The never-ending story of asbestos starts June 21st on DW. Since the launch of ChatGPT, it's clear that what once seemed to be the stuff of science fiction is here now, with the potential to transform our lives. Some say artificial intelligence will enhance human capacity, while others, including some of the very researchers developing the new technology, warn that it could drive humanity to extinction. They're calling on policymakers to act
9:31 pm
quickly and decisively to regulate the risks. But is that even possible for a technology capable of learning at a pace that far outstrips human intelligence? Do the benefits of AI in education, medicine and elsewhere outweigh the perils? We're asking: artificial intelligence, are machines poised to take control? Hello and welcome to To the Point. It is a great pleasure to introduce our guests. Raúl Rojas is professor of artificial intelligence at the Free University of Berlin, working on technologies including autonomous driving, bionics and brain-computer interfaces. John Stocker is a chief technology correspondent.
9:32 pm
And joining us virtually from Hamburg is Judith Simon. She's professor for ethics in IT at the University of Hamburg, and she also sits on the German Ethics Council. And it's a great honor to welcome a special guest, a pioneering researcher who has been referred to as the father of modern AI. He's scientific director at the renowned AI lab IDSIA: Jürgen Schmidhuber joins us virtually from Switzerland. Let me ask all of you to give us a very quick take on how you see this balance between risks and benefits. Raúl, the scientists warning of possible extinction compare AI to societal-scale risks such as pandemics and even nuclear war. Would you agree? Well, I think that's a little exaggerated, because the risks I see come more from everyday life. I see risks regarding employment: what are we going to do with people who are being displaced by the new
9:33 pm
information technologies? I see a risk regarding information, whether what we read on the internet is true or not. I see risks regarding the isolation of people, and privacy. I see risks of this type, and I think that regulation is required. And I think the biggest threat to humankind is still a different one, more related to climate change. Thank you, and we'll come back to regulation a little bit later. Let me go over to Judith, if I may. As an ethicist of IT, which risks worry you the most, and do you see the balance between these and the potential benefits as more negative or more positive? I see it quite balanced, because if you think about it, AI is a basic technology that can be used for various purposes; we're talking about pattern recognition, mostly,
9:34 pm
for various purposes. So it has lots of advantages and, of course, lots of disadvantages as well, depending on how it's used. And I think much of the debate is, to a certain degree, shirking a bit of responsibility by portraying AI as a natural force, and by focusing on very long-term doomsday scenarios rather than actually looking at how AI is currently already being used and implemented, and at what the current threats are, both in terms of bias and discrimination, but also infringement of privacy. And, of course, without downplaying the positive side, we should focus more on these very concrete, immediate dangers instead of looking very far ahead into the future. Thank you very much. And over to John: the warnings have proliferated since the launch of the chatbot, the dialogue technology ChatGPT. Why is that? This technology, of course, has been in development for a long time now. That's very true. I think one particular reason is that ChatGPT and its web interface make it very approachable for everyone. Everybody can use it and
9:35 pm
just enter a query, and then you'll get an answer, and that is very close to the reality of people. I think that's one key aspect. The other aspect is that we're now seeing that the technology is also coming for the work of knowledge workers, of white-collar workers, if you will: lawyers, journalists, diplomats, the people at this table. Yes, very true. And I think those are the two key reasons why there's so much attention on this issue now. And indeed, many of us have become amateur AI behavioral researchers, in the sense that we're going to ChatGPT, testing it and seeing if we can beat it. We'll come back to that a bit later. Generative AI, like that which is driving ChatGPT, is still in its early days. It's sometimes loony and erroneous; its missteps have provoked as much snickering as worry. The products of its creativity may look harmless, but they are convincing, and therein lies danger. A lot of people thought this photo
9:36 pm
of Pope Francis in a luxurious puffer jacket was real, but you can tell by the distortion of his fingers that it was AI. Whether it's Russian President Vladimir Putin kneeling in front of the Chinese leader, Xi Jinping, or ex-US President Donald Trump getting arrested by the police: lies and manipulation are only a few keyboard clicks away. AI tools like Midjourney and DALL-E create photorealistic images that can sometimes even fool professionals. This image won photographer Boris Eldagsen a Sony World Photography Award, but he turned down the prize, because he created the image with AI to spark a debate: what does it mean when you can no longer trust what you see in pictures?
9:37 pm
And in fact, you can find more information on how to spot AI-generated images on our dw.com innovation site, with fact checking. One of those who is worried about precisely that risk is the CEO of the company at the center of the storm, the co-founder of OpenAI, whose technology powers ChatGPT: "If this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening. It's one of my areas of greatest concern: the more general ability of these models to manipulate, to persuade, and to provide sort of one-on-one, you know, interactive disinformation." Let me go over now to Professor Schmidhuber and ask how I can be certain that you yourself are real and not an AI-generated avatar. For now, you would have to take my word for it. On the
9:38 pm
other hand, what is the value of the word of an avatar? You didn't sign the open letters that have been floating around the internet, and I gather from what you've said elsewhere that you see more promise than peril in artificial intelligence. Yes, absolutely. I'm an optimist, and I refer to all the cases where AI is already making human lives longer and healthier and easier. Many of those who are now warning are mostly seeking attention, because AI dystopias grab more attention than documentaries about the benefits of AI. What about AI-supported weapons systems? Have you no concern that there's danger in an autonomous system that makes split-second decisions
9:39 pm
where no human oversight can adequately control or second-guess what's going on in that black box? It is true that we have new types of weapons. You can buy yourself a drone for 300 euros, and maybe you attach a little gripper to it, and then it flies over to your neighbor's grounds, and then maybe it puts some poison into his coffee, or something like that. On the other hand, the police are using the same technology to track that, and we have an existing regulatory framework of laws that make you go to jail in case you get caught. I am much more worried about a 60-year-old, truly existential threat in the form of hydrogen bombs, which can wipe out 10 million people within one flash, without any AI. Let me ask you about another risk. It concerns many people: the classic
9:40 pm
science fiction dystopia of robots essentially running amok. You yourself created a form of artificial curiosity that definitely could outstrip human intelligence; in fact, that's your area. Are we really able to implant in such applications an ethical compass, to ensure that no matter how they develop, they will do no harm? Well, as long as you know, as a programmer, what that ethical compass is, you can program it. On the other hand, you put ten ethicists in one room and they have ten different opinions about what is ethical and what's not. As long as humans don't get their act together and agree on what is actually correct and what isn't, I see little hope that we will
9:41 pm
come to build AIs that implement a thing which is not well defined. And we will in fact see whether human beings can get together to do the necessary policymaking and regulation a little bit later in the discussion. But let me ask you one last question, and it relates to the world of work, which Raúl mentioned as one of his key concerns. How do we make sure that job loss doesn't produce massive social destabilization? We have a long history of that, of what happened when jobs were lost. Let me see: 200 years ago, almost all jobs were in agriculture; 60 percent of all people were in agriculture, and today it's maybe 1.5 percent. But nevertheless, unemployment rates are really low, maybe 5 percent or something like that, because lots of new jobs were created. And this is going on as we speak; all the time, new jobs are being created, because
9:42 pm
Homo ludens, the playing man, likes to invent new ways of interacting with other people and to make jobs and professional activities out of that. So that's not going to stop, and as long as you keep learning to adapt to the new situation, you won't have to worry. Professor Schmidhuber, thank you very much for joining us on the program. Thank you. And let me now get a take from the three of you on what was clearly a very techno-optimistic perspective on AI. I'd like to begin, if I may, with Raúl. Pick up on any point that you wish to address, but I'd definitely like to hear your take on the weapons systems, because clearly AI could be used to augment and support larger as well as smaller weapons systems and thereby shrink the window for de-escalation in
9:43 pm
a confrontational situation. Yes, there are of course information technology and artificial intelligence, and I just want to acknowledge that you can use them to improve the quality of life of people, or you can use them for weapons systems. Many people have protested in the past about the use of weapons in general and, more to the point of this program, about the use of artificial intelligence for weapons. But I see the problem right now more in everyday life, as I said before; I see the problem in jobs. The current technological revolution is moving much faster than the second industrial revolution or the first. It took 100 years for telephones to take over, for example in the US, so that everybody had a telephone. It took 70 years for most of the population to have a car. It took also 100 years for the electrical network to be distributed in the US
9:44 pm
. Now it takes ten years for smartphones to reach most people, and it took two weeks for ChatGPT to reach a million users. So the acceleration of the process is such that we have to be aware that this is not like 200 years ago; it is not like the second industrial revolution or the first one. And Judith Simon, I must say, I've just been in Brussels talking to some of the policymakers who will be negotiating the new legislation on AI, and the speed that Raúl just described is absolutely one of the major concerns. So let me ask you: with a technology moving this fast, how hard is it to get under the hood, as one might say, and understand exactly how the algorithms arrive at the outputs that they produce? Because if we want to try to ensure accountability and also ethical
9:45 pm
behavior on the part of AI, clearly we need to understand what's inside. The problem is really that many of these large systems are not understandable; you cannot truly understand how these systems reach their decisions or how they make their predictions. I don't think that we will have to have explainability for all systems in order to ensure accountability or regulation of all forms. So I think there must be forms of regulation and accountability even if we don't understand the systems. Right, so for certain systems we may require explainability, and this can only go so far. We may say, for instance, in a judicial system, but also maybe in science, that we want to understand how a system reached its conclusions. But there is very often a price to pay, in terms of accuracy perhaps; explainability sometimes comes at a cost. So we will have to decide where we need it and where we don't need it, and for what reasons. But I think, apart from that, we need to have regulation in
9:46 pm
place, irrespective of the question of whether you can always look under the hood. And I want to come back to the regulatory issue, but let me ask you another question about the technology itself. I recently had the interesting task of a dialogue with the chief technology officer of OpenAI, and she said that even as generative AI is now moving towards multimodality, that means processing not just language but images, sound inputs and much more, it will still remain prone to what she called hallucinations. What does that mean? Because we've talked about explanatory capacity, but if an AI is ready to lie, who can explain it to you? I think what people must understand is the way these systems function. What happens is, basically, just think of ChatGPT: this system is analyzing large amounts of data and is trying to understand how the
9:47 pm
text is structured, how certain types of text such as novels or crime stories, but also everyday communication and instructions, are built, and it produces highly plausible content, but without any relation to truth. So everything that is being written is just plausible patterns of content, whether speech, code or even a picture. And that explains why we have these hallucinations: the whole underlying system is not really representing something that exists, but making up something out of patterns. And if you understand this rationale, then it becomes clear that, of course, we may keep adjusting certain things, for instance feeding in sources for texts et cetera, but down the line the whole system is just the generation of plausible content, not of truth. So that takes us back, essentially, to the short report that we saw earlier on the deepfakes, on these pictures and images that look very convincing but in fact are simply also lies. But there is certainly
9:48 pm
a double threat to democracy here, say many observers: both those kinds of images and the degree to which they can be used for propaganda, disinformation and manipulation, but then also the whole other set of issues around the future of work and job loss. How do we get out in front of that? How should we respond proactively? It's true that the genie is out of the bottle, and now it's about how to react. I think there are two things that are important. First, it's important to promote what you could call AI literacy among the general population. People need to understand how AI works, the basics of it, and how they can use it and put it to good use in their own lives. That's important. And the second field that is important is regulation. We need smart regulation that fosters innovation, yes, because there are vast benefits, as Mr. Schmidhuber pointed out, but that also mitigates the risks for people on the ground. And we
9:49 pm
need regulation that makes sure that fundamental rights are protected. What that regulation could look like, I want to discuss in just a moment with the three of you. But let me just ask a little bit about where the industry stands today. Raúl, as we heard, leading researchers have been issuing warnings, including spokespeople for the leading companies in this area, whether it's Microsoft, OpenAI or others. But are they walking the talk? Are they now slowing down and prioritizing safety over speed to market? I don't think so. As I was saying before, the big difference between AI in the nineties and AI today is that AI in the nineties was an academic program; it was done at the universities. Then many people moved from the universities to the big companies, and now companies like Google, Amazon and Facebook have become the leaders in the field, and there's no way for the universities to compete now against these big companies,
9:50 pm
and it's an existential problem for these companies. For example, ChatGPT is owned in part by Microsoft, and Google lost part of its value in the stock market just because they didn't have an equal alternative. And so for these companies it's existential if they do not develop at the same speed as the other companies, especially considering the Chinese companies, who are not going to abide by European laws or by American ones. Again, that's a point we'll come back to in just a moment. But if I can go to Judith and pick up on this point: in fact, some critics say that the biggest everyday risk, as Raúl called it earlier, lies not in the technology so much as in the business models that it will serve, whether it's turbo-capitalistic search engines that are expert at manipulating us to buy
9:51 pm
things, or whether it's surveillance capitalism that puts facial surveillance into workplaces and can measure everything from workers' efficiency to their moods. Yes, I would totally agree. I mean, AI doesn't do anything on its own; it's always people deploying technology for specific purposes and for certain utilities. And that's the problem. So we should not be getting upset about AI, but about people using AI in order to manipulate or control people, and that is indeed the real danger. So I think we must be very careful about all these narratives of AI being like a force of nature that we can do nothing about. There is really a lot of shirking of responsibility in these discussions. So I think it is important to recognize that it's people doing things, taking decisions and being responsible for what they're doing. Shirking responsibility is something that many have accused politicians of for quite some time, but it looks now like they are awakening from their torpor. The European Union will in fact soon begin negotiations on the AI Act, which it hopes will become
9:52 pm
a global gold standard for risk-based regulation. And interestingly enough, for the first time it is also working on standards and technical norms at the same time that it negotiates the legislation, which it has never done before. Do you think that regulators can get out in front of the wave? Well, to be completely honest, the regulation of technology is never really in front of the wave, because technology moves fast and you can't fully predict what's going to happen. That being said, I think the EU is on a good path, because at the core of this legislative package that's now being negotiated is a risk-based approach: the idea to regulate artificial intelligence and its applications according to the risk that they pose to the safety and to the fundamental rights of users. And I think that is the right approach, and the approach that will allow
9:53 pm
lawmakers to adapt the regulation over the next couple of years as the technology evolves. Of course, the EU's perspective on risk is not necessarily the perspective of, say, China, which, Raúl, you mentioned a moment ago. And there is a lot of concern that we could see a race to the bottom, in which authoritarian states develop and use this technology, use AI, for purposes that we would consider off limits: massive social surveillance, for example. Optimists hope that global standards, global technical norms, might be able to prevent that. Are countries like China taking part in standard-setting? Are they amenable to global governance when it comes to AI? Well, I don't think so. I think China is more interested in copying the technology that is coming from the US and from Europe, and in fact they were very successful in developing an ecosystem of companies which mirror
9:54 pm
those in the US. So they are self-sufficient, because they have their own versions of Facebook and of the other platforms. And one worrying aspect of that development is that the Chinese are using so-called social points: they are watching what people are doing, and you can get good points or bad points according to your behavior. And that's the kind of misuse that we could have in the future. And Judith, just in terms of the success of regulation: if we look at something like generative AI, that's a technology that finds its way into multiple applications across a very complex value chain. How can responsibility be allocated for errors or malfunctions? Because this is a question where the original creators may bear part of the accountability, but shouldn't it also lie with the applications? That's a hard question, and it's really not yet resolved. So basically, to
9:55 pm
a certain degree, what is happening in the EU is that they're trying to regulate sector-specific AI applications. So that's one point: you can't regulate AI all at once, because it's a basic technology; you need to look into very specific applications. And the second question you were addressing is who is in charge when systems are misused. To a certain degree, this is not an entirely novel problem, because also for other products you have to determine what is a problem of usage and what was a problem that was already in the product before it was sent out. So there are some precedents you can draw upon, but it will be an increasingly difficult problem. And the question is really: what type of regulation do we need for what type of AI application? Sometimes we may need ex ante regulation, where you basically say that we need to regulate in advance, so that only products which have passed scrutiny, which comply with certain quality standards, including freedom from bias, et cetera, can go onto the market; and for others it will be ex post,
9:56 pm
with inspection only when something goes wrong. And for some systems that continue to develop in real time, you may have to have real-time assessments. But that concerns a minority of cases. Thank you. And very briefly, if I may, let me come back to our title: are machines poised to take control? What do you think, can we get this under control? We can, but it's important to have this debate now, and it's important to come up with smart rules now, to make sure that we reap the benefits of artificial intelligence, that we mitigate the risks, and that we don't let it take control. Thank you very much. Thanks to all of you for being with us, thanks to our viewers. See you soon.
9:57 pm
9:58 pm
Coming up on DW: more people than ever are on the move worldwide, in search of a better life or fleeing crises. Find out about their stories. Stay up to date and don't miss our highlights: the DW program, online at dw.com/highlights.
9:59 pm
Overcoming divisions: register for the DW Global Media Forum 2023, in Germany and online. In an increasingly fragmented world, with a growing number of digitally amplified voices, overcoming divisions is the task for tomorrow's journalism. Register now and join us for this discussion at the 16th edition of DW's Global Media Forum.
10:00 pm
This is DW News, live from Berlin. Renewed fighting breaks out in southern Ukraine: Kyiv accuses Russia of shelling Kherson, where people are being evacuated from flooding caused by the destruction of the Kakhovka dam. Also coming up: EU ministers back a plan to tighten Europe's policies on asylum seekers. But is this the breakthrough deal many have been hoping for?
Uploaded by TV Archive on