
tv   To the Point  Deutsche Welle  June 9, 2023 12:30am-1:01am CEST

12:30 am
Some young children work instead of going to class; others can attend classes only after they finish working. Millions of children all over the world have no chance to go to school. We ask why, because education makes the world more just. Make up your own mind. DW, made for minds.

Since the launch of ChatGPT, it's clear that what once seemed to be the stuff of science fiction is here now, with the potential to transform our lives. Some say artificial intelligence will enhance human capacity. Others, including some of the very researchers developing the new technology, warn it could drive humanity to extinction. They are calling on policymakers to act quickly
12:31 am
and decisively to regulate the risks. But is that even possible for a technology capable of learning at a pace that far outstrips human intelligence? Do the benefits of AI in education, medicine and elsewhere outweigh the perils? We're asking: artificial intelligence, are machines poised to take control?

Hello and welcome to To the Point. It is a great pleasure to introduce our guests. Raúl Rojas is professor of artificial intelligence at the Free University of Berlin, working on technologies including autonomous driving, bionics and brain-computer interfaces. Joining him is the chief technology correspondent of a major German outlet [name unintelligible in the recording],
12:32 am
and joining us virtually from Hamburg is Judith Simon. She is professor for ethics in IT at the University of Hamburg, and she also sits on the German Ethics Council. And it's a great honor to welcome a special guest, a pioneering researcher who has been referred to as the father of modern AI. He is scientific director at the renowned AI lab IDSIA, and Jürgen Schmidhuber joins us virtually from Switzerland.

Let me ask all of you to give us a very quick take on how you see this balance between risks and benefits. A number of scientists warning of possible extinction compare AI to societal-scale risks such as pandemics and even nuclear war. Would you agree?

Well, I think that's a little exaggerated, because the risks I see come more from everyday life. I see risks regarding employment: what are we going to
12:33 am
do with people who are being displaced by new information technologies? I see risks regarding information: whether what we read on the internet is true or not true. I see risks regarding the isolation of people, and risks to privacy. Those are the risks I see, and I think that regulation is required. The extinction of humankind, I think, is rather science fiction; that threat relates more plausibly to climate change and to nuclear war.

Thank you, and we'll come back to regulation a little bit later. Let me go over to Judith, if I may. As an IT ethicist, which risks worry you the most? And do you see the balance between these and the potential benefits as more negative or more positive? I see it as quite balanced, because if you think about it, AI is a basic technology that can be used for various purposes; we are talking mostly about pattern recognition used
12:34 am
for various purposes. So it has lots of advantages and of course lots of disadvantages as well, depending on how it is used. And I think much of the debate is, to a certain degree, shirking responsibility: by portraying AI as a natural force and by focusing on very long-term developments rather than looking at how AI is already being used and implemented today, and at what the current threats are, both in terms of bias and discrimination and in terms of infringements of privacy. And of course, without downplaying the positive side, we should focus more on these very concrete, immediate dangers instead of looking very far ahead into the future.

Thank you very much. And over to you: the warnings have proliferated since the launch of the chatbot, the dialogue technology ChatGPT. Why is that? This technology, of course, has been in development for a long time now. That's very true. I think one particular reason is that ChatGPT and its web interface make it very approachable for everyone. Everybody can use it: you
12:35 am
could just enter a query and you'll get an answer, and that is very close to people's everyday reality. I think that's one key aspect. The other aspect is that we're now seeing that the technology is also coming for the work of knowledge workers, of white-collar workers, if you will: lawyers, journalists, diplomats. The people at this table! Yes, very true. And I think those are the two key reasons why there's so much attention on this issue now. Indeed, many of us have become amateur AI behavioral researchers, in the sense that we're going to ChatGPT, testing it and seeing if we can beat it. We'll come back to that a bit later.

Generative AI like that which is driving ChatGPT is still in its early days. It's sometimes loony and erroneous, and its missteps have provoked as much snickering as worry. The products of its creativity may look harmless, but they are convincing, and therein lies danger. A lot of people thought this photo
12:36 am
of Pope Francis in a luxurious puffer jacket was real, but you can tell by the distortion of his fingers that it was AI. Whether it's the Russian president Vladimir Putin kneeling in front of the Chinese leader Xi Jinping, or ex-US president Donald Trump getting arrested by the police: lies and manipulation are only a few keyboard clicks away. AI tools like Midjourney and DALL-E create photorealistic images that can sometimes fool even professionals. This image won photographer Boris Eldagsen a Sony World Photography Award, but he turned down the prize because he had created the image with AI, to spark a debate: what does it mean when you can no longer trust what you see in pictures?
12:37 am
And in fact, you can find more information on how to spot AI-generated images on our dw.com innovation site, with fact checking. One of those who is worried about precisely that risk is the CEO of the company at the center of the storm, the co-founder of OpenAI, whose technology powers ChatGPT: "If this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening. It's one of my areas of greatest concern: the more general ability of these models to manipulate, to persuade, and to provide sort of one-on-one interactive disinformation."

Let me go over now to Professor Schmidhuber and ask: how can I be certain that you yourself are real and not an AI-generated avatar? For now, you would have to take my word for it. On the
12:38 am
other hand, what is the value of the word of an avatar? You didn't sign the open letters that have been floating around the internet, and I gather from what you have said elsewhere that you see more promise than peril in artificial intelligence. Yes, absolutely. I'm an optimist, and I refer to all the cases where AI is already making human lives longer and healthier and easier. Many of those who are now warning are mostly seeking attention, because they know that AI doomsday tales grab more attention than documentaries about the benefits of AI in healthcare.

What about AI-supported weapons systems? Are you not concerned that there is danger in an autonomous system that makes split-second
12:39 am
decisions, where no human oversight can adequately control or second-guess what's going on in that black box? It is true that we have new types of weapons. You can buy yourself a drone for 300 euros, attach a little gripper to it, fly it over to your neighbor's garden and then maybe put some poison into his coffee, or something like that. On the other hand, the police are using the same technology to track such things, and there is an existing regulatory framework: the laws that make you go to jail in case you get caught. I am much more worried about a 60-year-old and truly existential threat in the form of hydrogen bombs, which can wipe out ten million people in one flash, without any AI.

So let me ask you about another risk. It concerns many people: the classic
12:40 am
science fiction dystopia of robots essentially running amok. You yourself created a form of artificial curiosity that could ultimately outstrip human intelligence. Are we really able to implant in such applications an ethical compass, to ensure that no matter how they develop, they will do no harm? Well, as long as you know, as a programmer, what that ethical compass is, you can program it. On the other hand, put ten ethicists in one room and they will have ten different opinions about what is ethical and what is not. As long as humans don't get their act together and agree on what is actually correct and what is not, I see little hope that we will come and
12:41 am
build AIs that implement a thing which is not well defined. And we will in fact see whether human beings can get together to do the necessary policymaking and regulation a little bit later in the discussion. But let me ask you one last question, and it relates to the world of work, which Raúl mentioned as one of his key concerns. How do we make sure that job loss doesn't produce massive social destabilization? Well, we have a long history of jobs getting lost. Two hundred years ago almost all jobs were in agriculture; 60 percent of all people worked in agriculture, and today it is maybe 1.5 percent. Nevertheless, unemployment rates are really low, maybe 5 percent or something like that, because lots of new jobs were created. And this is going on as we speak; all the time new jobs are being created, because
12:42 am
man, the playing animal, likes to invent new ways of interacting with other people and to make jobs and professional activities out of that. So that's not going to stop. And as long as you keep learning and adapting to the new situation, you won't have to worry. Professor Schmidhuber, thank you very much for joining us on the program. Thank you.

Let me now get a take from the three of you on what was clearly a very techno-optimistic perspective on AI. I'd like to begin, if I may, with Raúl: pick up on any point that you wish to address, but I would also definitely like your take on weapons systems, because clearly AI could be used to augment and support larger as well as smaller weapons systems, and thereby shrink the window for de-escalation in
12:43 am
a confrontational situation. Yes. Information technology, and artificial intelligence in particular, is a dual-use technology: you can use it to improve people's quality of life, or you can use it for weapons systems. Many people have protested in the past about the use of such weapons in general and, more to the point of this program, about the use of artificial intelligence for weapons. But as I said before, I see the problem in ordinary, everyday life; I see the problem in jobs. The current technological revolution is moving much faster than the second industrial revolution or the first. It took about a hundred years for the telephone to take over in the US, so that everybody had a telephone. It took about 70 years for most of the population to have a car, and it took also about a hundred years for the electrical network to be distributed across the US.
12:44 am
Now it takes ten years for smartphones to outnumber people, and it took only weeks for ChatGPT to reach a hundred million users. The acceleration of the process is such that we have to be aware that this is not like 200 years ago; it is not like the first industrial revolution or the second.

Judith Simon, I must say, I've just been in Brussels talking to some of the policymakers who will be negotiating the new legislation on AI, and the speed that Raúl just described is absolutely one of their major concerns. So let me ask you: with a technology moving this fast, how hard is it to get under the hood, as one might say, and understand exactly how the algorithms arrive at the outputs that they produce? Because if we want to try to ensure accountability and also ethical
12:45 am
behavior on the part of AI, clearly we need to understand what's inside. The problem is really that many of these large systems are not understandable: you cannot fully understand how the systems reach their decisions or how they make their predictions. But I don't think that we have to have explainability for everything in order to ensure accountability or regulation of all forms. There must be forms of regulation and accountability even if we don't understand the systems. For certain systems we may require explainability, and this can only go so far. We may say, for instance in the judicial system, but maybe also in science, that we want to understand how a system reaches certain conclusions. But there is often a price to pay, maybe in terms of accuracy: explainability does not come without a cost. So we will have to decide where we need it and where we don't, and for what reasons. But apart from that, I think we need to have regulation in place,
12:46 am
irrespective of the question of whether you can always look under the hood. I want to come back to the regulatory issue, but let me ask you another question about the technology itself. I recently had the interesting task of a dialogue with the chief technology officer of OpenAI, and she said that even as generative AI is now moving towards multimodality, meaning it processes not just language but images, sound inputs and much more, it will still remain prone to what she called hallucinations. What does that mean? Because we talked about explanatory capacity, but if an AI is ready to lie, who can explain that to us? I think what people must understand is the way these systems function. What happens, basically, if we just think of ChatGPT, is that the system is analyzing large amounts of data and is trying to understand how the
12:47 am
text is structured, how certain types of text such as novels or crime stories, but also everyday communication, are constructed, and it produces highly plausible content, but without any relation to truth. Everything that is being written is just a plausible pattern of content, of speech, of code, or for that matter of pictures. And that explains why we have these hallucinations: the underlying model is not representing something that exists, but making something up out of patterns. If you understand this rationale, then it becomes clear that you may be able to change certain things, for instance by feeding in sources for the texts et cetera, but in the end the whole system is just the generation of plausible content, not of true content.

So that takes us back, essentially, to the short report that we saw earlier on the deepfakes: these pictures and images that look very convincing but in fact are simply also lies. And there is certainly
12:48 am
a double threat to democracy here, say many observers: both those kinds of images and the degree to which they can be used for propaganda, disinformation and manipulation, but then also the whole other set of issues around the future of work and job loss. How do we get out in front of that? How should we respond proactively? That's true, because the genie is out of the bottle, and now it's about how to react. I think there are two things that are important. First, it's important to promote what you could call AI literacy among the general population. People need to understand how AI works, the basics of it, and how they can use it and put it to good use in their own lives. That's important. And the second field that is important is regulation. We need smart regulation that fosters innovation, because there are vast benefits, as has been pointed out, but that also mitigates the risks for people on the
12:49 am
ground, and we need regulation that makes sure that fundamental rights are protected. What that regulation could look like, I want to discuss in just a moment with the three of you. But let me just ask a little about where the industry stands today. Raúl, as we heard, leading researchers have been issuing warnings, including spokespeople for the leading companies in this area, whether it's Microsoft, OpenAI or others. But are they walking the talk? Are they now slowing down and prioritizing safety over speed to market? I don't think so. As I was saying before, the big difference between AI in the nineties and AI today is that in the nineties it was an academic domain; it belonged to the universities. Then many people moved from the universities to these companies, companies like Google, Amazon and Facebook, and now they are the leaders in the field, and there is no way for the universities to compete. Now
12:50 am
it is a matter for these big companies, and it is an existential problem for them. For example, ChatGPT is owned in part by Microsoft, and Google lost part of its value on the stock market just because they don't have an equal alternative. So for these companies it is essential not to fall behind while the other companies keep developing at the same speed, especially considering the Chinese companies, who are not going to abide by European laws or by American ones.

Again, that's a point we'll come back to in just a moment, but if I can go to Judith and pick up on this point: in fact, some critics say that the biggest everyday risk, as Raúl has called it earlier, lies not in the technology so much as in the business models it will serve, whether it's turbo-capitalist search engines that are expert at manipulating us to buy
12:51 am
things, or whether it's surveillance capitalism that puts facial surveillance into workplaces to measure everything from workers' efficiency to their moods. Yes, I totally agree. I mean, AI doesn't do anything on its own; it is always people deploying technologies for certain purposes and for certain utilities. And that's the problem. We should be getting upset about people using AI in order to manipulate or control people, and that is indeed the real danger. So I think we must be very careful about all these narratives of AI being some force of nature that we can do nothing about. A lot of that is really about shirking responsibility in these discussions. It is important to recognize that it is people doing things, taking decisions and being responsible for what they are doing.

Shirking responsibility is something that politicians have been accused of for quite some time, but it looks now like they are awakening from their torpor. The European Union will in fact soon begin negotiations on the AI Act, which it hopes will become
12:52 am
a global gold standard for risk-based regulation. And interestingly enough, for the first time it is also working on standards and technical norms at the same time that it negotiates the legislation, which it has never done before. Do you think that regulators can get out in front of the wave? Well, to be completely honest, the regulation of technology is never really in front of the wave, because technology moves fast and you cannot fully predict what is going to happen. That being said, I think the EU is on a good path, because the core of this legislative package that is now being negotiated is a risk-based approach: the idea of regulating the different applications of artificial intelligence according to the risk that each poses to the safety and the fundamental rights of users. I think that is the right approach, and I think that is the sort of approach that will allow
12:53 am
lawmakers to adapt the regulation over the next couple of years as the technology evolves. Of course, the EU's perspective on risk is not necessarily the perspective of, say, China, which Raúl mentioned a moment ago. And there is a lot of concern that we could see a race to the bottom, in which authoritarian states develop and use AI for purposes that we would consider off limits, massive social surveillance for example. Optimists hope that global standards, global technical norms, might be able to prevent that. Are countries like China taking part in standardization? Are they amenable to global governance when it comes to AI? I don't think so. I think China is more interested in copying the technology that comes from the US or from Europe, and in fact they have been very successful in developing an ecosystem of companies which mirror
12:54 am
those in the US. Instead of Facebook, for example, they have a couple of platforms of their own. And one worrying aspect of that development is that the Chinese are using so-called social scoring: they watch what people are doing, and you get good points or bad points according to your behavior. That is the kind of misuse we may have to fear in the future.

And Judith, just in terms of the prospects of regulation: if we look at something like generative AI, that is a technology that finds its way into multiple applications across a very complex value chain. How should responsibility be allocated for errors or malfunctions? Because this is a question where the original creators may bear part of the accountability, but shouldn't it also lie with those who build the applications? That is a hard question, and it is really unresolved. Basically, to
12:55 am
a certain degree, what is happening in the EU is that they are trying to regulate sector-specific AI applications. That is one part: you cannot regulate AI all at once, because it is a basic technology; you need to look into very specific applications. And the second question you were addressing is who is in charge when systems are misused. To a certain degree this is not an entirely novel problem, because for other products, too, you have to distinguish between what is a problem of usage and what was a problem already in the product before it was sent out. So there are precedents you can draw on, but it will be an increasingly prevalent problem. And the question is what type of regulation we need for what type of AI application. Sometimes we may need ex ante regulation, where you basically say: we need to regulate in advance, so that only products which have survived certain scrutiny, which comply with certain quality standards, including freedom from bias et cetera, can go onto the market. For other things there will be ex post regulation, with inspection only when something goes wrong. For some systems you
12:56 am
may have to have real-time assessments, if they continue to develop in real time, but that concerns a minority of systems. Thank you. And very briefly, if I may, let me come back to our title: are machines poised to take control? Can we get this under control? We can, but it is important to have this debate now, and it is important to come up with smart rules now, to make sure that we reap the benefits of artificial intelligence, that we mitigate the risks, and that we don't let it take control. Thank you very much. Thanks to all of you for being with us, and thanks to our viewers. See you soon.
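The mechanism described in the discussion above, that systems like ChatGPT generate statistically plausible continuations of text rather than verified facts, can be illustrated with a deliberately minimal toy model. This is a sketch of the principle only: real systems use transformer neural networks over subword tokens, not the bigram counts below, and the tiny corpus is invented purely for illustration.

```python
import random
from collections import defaultdict

# Toy corpus: the model only ever sees word patterns, never facts.
corpus = ("the pope wears a white robe . the pope lives in rome . "
          "the pope wears a puffer jacket .").split()

# Count which words follow which (a bigram model).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Sample a plausible-sounding continuation, with no notion of truth."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# e.g. produces grammatical-looking text such as "the pope wears a ..."
print(generate("the"))
```

To this model, "white robe" and "puffer jacket" are equally plausible continuations of "the pope wears a"; nothing in the statistics says which one is true, which is exactly why outputs can be fluent and wrong at the same time.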
12:57 am
A special edition of Conflict Zone with Tim Sebastian. This is a border crossing point from Moldova into Ukraine. The fiercest fighting of the moment between Ukraine and Russia is roughly a hundred kilometers away. The big question that dominates here is what
12:58 am
the next target might be. Conflict Zone, in 30 minutes on DW. Profiteering instead of responsibility: the global business of asbestos. "The people that knew this don't deserve to be treated with any kind of courtesy by the governments of the world." The never-ending story of asbestos starts June 21st on DW. Our documentaries are just a click away on YouTube: see the world as you have never seen it before.
12:59 am
1:00 am
This is DW News, live from Berlin. Renewed fighting breaks out in southern Ukraine: Kyiv accuses Moscow of shelling Kherson, where residents are fleeing floods caused by the destruction of a dam. EU ministers back plans to tighten Europe's policies on asylum seekers, but is this the breakthrough deal many
