
tv   Us and Them  Deutsche Welle  September 2, 2024 3:15am-3:46am CEST

3:15 am
or bring about humanity's extinction. That's a big question. For more news, download the DW News app or visit our website, dw.com. I'm David Levitz. Thank you for watching.

Not just another day. So much is happening all at once. We take time to understand. This is The Day: an in-depth look at current events, analyzed by experts and critical thinkers. The Day, weekdays on DW.

There is a significant risk of human extinction from advanced AI systems.
3:16 am
Japan faces a super-aging society and its problems. I would like to solve diseases like stroke.

We don't currently know how to steer these systems, how to make sure that they robustly understand human values in the first place, or even follow them once they do understand them.

I would like to create such kinds of technologies to contribute to human society. Even human brains can be connected to cyberspace.

This is a rapidly evolving technology. People do not know how it currently works, much less how future systems will work. We don't really have ways yet to make sure that what's being developed is going to be safe.

[Partially unintelligible.] This very small group of people are developing really powerful technologies we know very little about. People's concerns about generative AI stem from a fear that, if left unchecked,
3:17 am
AI could potentially develop advanced capabilities and make decisions that are harmful to humans. As the world grapples with the implications of this rapidly evolving field, one thing is certain: the impact of AI on humanity will be profound.
3:18 am
The new AI-connected state can be a nice fusion between the human side and the technology side. We are one of the first to work on the cyborg side, trying to create cybernics technology, especially focusing on the medical and healthcare fields, for humans and human societies. My name is Yoshiyuki Sankai. I'm a professor at the University of Tsukuba in Japan, and also the CEO of Cyberdyne. Let's create bright futures for humans and human societies with such kinds of AI systems.
3:19 am
I personally want to have an impact on making the world better, and working on AI safety certainly seems like one of the best ways to do that right now. Many public intellectuals, many professors and scientists across industry and academia recognize that there is a significant risk of human extinction from advanced AI systems. We've seen in recent years rapid advancements in making AI systems more powerful, bigger, more generally competent, and able to do complex reasoning. And yet we don't have comparable progress in safety guardrails, or monitoring, or evaluations, or ways to know that these types of systems are going to be safe. My name is Gabriel Mukobi; you can call me Gabe. I am a grad student at Stanford, and I do AI safety research and lead Stanford AI Alignment. This is our student group and research community focused on mitigating the risks of advanced AI systems, like mitigating weapons of mass
3:20 am
destruction. These more catastrophic risks, unfortunately, do seem pretty likely. Many leading scientists tend to put single-digit or sometimes double-digit chances on existential risk from advanced AI. Other possible worst cases could include non-extinction events, rather very bad things like locking in totalitarian states, or disempowering many people, concentrating power to where many people do not get a say in how AI will shape and potentially transform our societies.

AI has become such a divisive topic. There are a lot of valid concerns. Some believe it could lead to job losses, increased inequality, and even unethical uses of AI. However, AI also has tremendous potential to benefit humanity. It could help us tackle some of the world's biggest problems, such as climate change, disease,
3:21 am
and poverty.

[Street interviews; largely unintelligible.] So is AI going to save us? Is it going to save money? I don't really know how the technology works yet.
3:22 am
The device is able to detect the very important human intention signals sent from the brain to the periphery. When a human wishes to move, the brain generates intention signals. These intention signals are transmitted through the spinal cord to the muscles, and finally we can connect our systems and humans. Twenty countries now use these devices as a medical device.

I think there are definitely great ways AI technology is used in medicine, for example. There's cancer detection that's possible because of image recognition systems using AI, which allows for detection without invasive tests, which is really fantastic, and early detection as well. No technology is inherently
3:23 am
good or evil. It's only what we humans are doing with it. Of course we should be thinking about long-term impact in terms of the direction in which we're taking the technology, but at the same time we also need to think about it less in a technical sense and more in terms of it impacting real-life humans today. Japan, I think, is quite optimistic about AI technology. There's a lot of hype at the moment. It's like a shiny new toy that everybody wants to play with. Whenever I go to the US or Europe, or the EU countries, there's far more an air of fear or concern. I was quite surprised, to be honest.
3:24 am
Readings on Wednesdays happen every Wednesday; usually some guest we bring in or some other AI safety researcher presents. [Laughter.] It's kind of like our research lab. Does anyone happen to have an HDMI-to-USB-C adapter, something to plug in your... oh, you did plug in? Yeah, you're right. Sorry, I'm hallucinating. The Wednesday meetings are really good for inviting new people, nice to meet some new students, talk about why you're interested in AI safety, or not. So if you're wanting to synthesize smallpox, or something chemical like mustard gas, you can do that. Access is already high, and it will just be increasing across time, but there's still an issue of needing skill. So basically we need something like, you know, a top Ph
3:25 am
D in virology to create a new pandemic that could take down civilization. There are some sequences online, which I won't disclose, that could kill millions of people. It will become more and more dangerous. Yes. So with the access thing, a lot of people bring up labs: oh, maybe you don't just need to be a top PhD, you also need some kind of biolab to do experiments. Is that still a thing? So this would, um, depend on how good the cookbook is, for instance. And certainly there are people who come in with disagreements, like: oh, powerful AI is not coming for a long time, or it doesn't seem as important to work on these things, we can just build or accelerate, or whatever. There's a large potential, especially for people doing engineered pandemics, to cause a wide range of harm in the coming years. Now there are other instances of catastrophic misuse that people are expecting, too. One is with cyberattacks. We might have AI systems in the coming years that are really good at programming,
3:26 am
but also really good at exploiting zero-days and all of these other software vulnerabilities in secure systems. Maybe the top use case of AI will be making money. You might see a lot of people being defrauded of money, might see a lot of attacks on public infrastructure, threats against individuals in order to exploit them. It could be a wild west of digital cyberattacks in the coming years. Beyond that, there is a pretty big risk that AI systems could actually get out of the control of their developers. We don't currently know how to steer these systems, how to make sure that they robustly understand human values in the first place, or even follow them once they do understand them. They might learn to value things that are not exactly aligned with what we want, like having Earth be suitable for life, or
3:27 am
making people happy. I was fortunate to have a pretty supportive family, especially a few years ago when this was a lot less mainstream. So there was only some uncertainty of: hey, is this actually going to be something that's helpful in the first place? Are you going to have a stable job? Things like this. But as time has gone on, as we've seen a lot more capabilities advancements and a lot more people raising the alarm about AI safety and the risks, it tends to be that every few days my mom will send me something like: hey, have you seen this new thing? As for some of the worst-case risks, a lot of experts think there's a pretty significant chance of them. Many scientists put single- or double-digit chances on existential risk from advanced AI. There's a recent interview where the US FTC chair said that she's an optimist, so she has a 15 percent chance that AI will kill everyone.

My vision is a little bit different. We could create AI systems
3:28 am
that support humans. I think the generative AI system is different from a simple programming system; it has growing, evolving functions. These AI technologies should serve human beings and human needs, and help keep our society just. We human beings have some problems: aging, disease, and accidents. With AI systems and such technologies, we try to support those functions.
3:29 am
[Conversation at a gathering of elderly residents; largely unintelligible.] Japan now faces a super-aging society. Here, the ages add up to almost 470 years. Wow. I apologize about
3:30 am
getting cut off. [Largely unintelligible. The speaker describes using AI in his work, for example checking contracts and licenses, and says he is somewhat scared of how the law, including criminal law, will handle it.]
3:31 am
[Partially unintelligible.] I would like to support these aging societies. In my childhood, my parents bought me a microscope and various kinds of parts, and every day I spent a lot of time having such kinds of experiences. I also loved to read science fiction books; I read them when I was small.

If you've heard about AI in the last couple of years, chances are the technology you heard about was developed here. The breakthroughs
3:32 am
behind it happened here. The money behind it came from here. The people behind it live here. It's really all been centered here in the Bay Area. A lot of the startups that are at the leading edge of AI, so that's OpenAI and Anthropic and Inflection, names you might not yet be familiar with, are backed by some of the big companies you already know that are at the top of the stock market: so that's Microsoft, Amazon, Meta, Google. And, you know, these companies are based here, many of them in the Bay Area. So for all of the discussion that we've seen about AI policy, there's actually very little that tech companies have to do. A lot of it is just voluntary. So what we are really
3:33 am
depending on, as guardrails, is the benevolence of the companies themselves. The AI field, I think, is an example of a lot of the young people who are coming to move in now, who are not ideological, who are really interested in the technology, fully aware of its potential harms, and see this as the most important thing that they can do with their time: their opportunity to work on what many of them call the Manhattan Project of their generation. You have to realize that, unlike some other very general technologies that have been developed in the past, AI, especially the frontier systems, is mostly being pushed by a small group of scientists in San Francisco. And this very small group of people are developing really powerful technologies we know very little about. So this may
3:34 am
be due to a lot of historical tech optimism, especially among the startup landscape in the Bay Area. A lot of people are kind of used to this move-fast-and-break-things paradigm that sometimes ends up making things go well. But as is the case if you're developing a technology that affects society, you don't want to move so fast that you actually break society.

This guy wants a global and indefinite pause on the development of frontier artificial general intelligence, serving up posters so that people can get more information. The issue is complicated; a lot of the public does not understand it, and the government does not understand it. You know, it's really hard to keep up with the development. Another interesting thing is that most of us working on this have no experience in activism. What we have mostly is technical knowledge and familiarity with AI, and it makes us concerned. AI safety advocates are very much the minority,
3:35 am
and actually a lot of the biggest safety names are working at AI labs. You know, I think some of them do great work, but they're still much more under the influence of the broader corporation that's driving towards development. I think that's a problem. I think that somebody from the outside ought to be telling them what they need to do. And unfortunately, the case with AI now is that there aren't external regulatory bodies that are really up to the task of regulating AI. From the same people, now you're hearing: this thing could kill us all, and I am going to keep building it. I think part of the reason you have so much resistance to the AI safety movement is because of the distance between people who talk about their genuine fear of the consequences and the risks to
3:36 am
humanity, if they build this AI god. So much of the debate around AI has these really religious undertones. That's part of why they say that it can't be stopped and shouldn't be stopped. It really feels like, you know, they talk about it in that way: I'm building a god. And they're building it in their own image. Right.

I love humans and human society, but I also love science fiction. I would like to create such kinds of technologies to contribute to human society. And so I loved to read science fiction books, and I also love to watch science fiction movies.
3:37 am
One of those movies was The Terminator, from the US. In most cases, the technology always attacks humans. But in the end, such kinds of technologies should work for humans and human society.

In the movie Terminator, a classic movie, Cyberdyne is a fictional tech company that created the software for Skynet, the AI system that becomes self-aware and goes rogue. Cyberdyne's role in the story is to represent the dangers of AI getting out of control and to serve as a cautionary tale for the real world. Is Cyberdyne named after the firm in Terminator? No; by the time I knew the story, the company's name had already been decided. Cyberdyne Systems? Yes.

Obviously, at some literal level, maybe you can unplug
3:38 am
some advanced AI systems, and there are definitely a lot of hopes; people are trying to actively make it easier to do that. Some of the regulation now is focused on making sure that the data centers have some good off-switches, because currently a lot of them don't. In general, this might be tougher than people realize. We might be in a state in the future where we have pretty advanced AI systems widely distributed throughout the economy, propping up people's livelihoods. Many people might even be in relationships with AI systems, and it could be really hard to convince people that it's okay to unplug some widely distributed system like that. There are also risks of a military arms race around developing autonomous AI systems, where we might have many large nations developing wide stockpiles of autonomous weapons. And if things go bad, just like in the nuclear case where you could have this really big flash war, down the road you might have a bad case where very large stockpiles of autonomous weapons suddenly end up killing a lot of people from very small triggers. So probably
3:39 am
a lot of catastrophic misuse will involve humans in the loop in the coming years. It could involve using very persuasive AI systems to convince people to do things that they otherwise would not do. It could involve extortion, or cybercrime, or other ways of compelling people to do work. Unfortunately, probably a lot of the current ways that people are able to manipulate other people into doing bad things might also work with people using AI, or with AI itself manipulating people to do bad things, like blackmail. Yes.

The important thing is how we humans change. [Partially unintelligible.] Of course, we want to extend the brain with technologies and new partnerships. So what's next for us human beings?
3:40 am
Another brain, alongside our brains, exists in cyberspace. Also, fortunately, today we have new partners and friends: AI robots and so on.

What worries me a little bit more about this whole scenario is that AI technology doesn't necessarily need to be a tool for global capitalism, but it is; that's the only way in which it's being developed. And so in that model, of course, we're going to be repeating all the kinds of things that we've already done in terms of empire building, and people being exploited, and natural resources being extracted. All these things are going to repeat themselves, because AI is only another kind of thing to exploit. I think we need to think about ourselves not just as humans who are inefficient, humans who are unpredictable, humans who are
3:41 am
unreliable, but finding beauty or finding value in the fact that we are unpredictable, that we are an unreliable whole.

So probably, like most emerging technologies, there will be disproportionate impacts on different kinds of people. A lot of the Global South, for example, hasn't had as much space in how AI is being shaped and steered. At the same time, though, some of these risks are pretty global. When we especially talk about catastrophic risks, these could literally affect everyone. If everyone dies, everyone is kind of a stakeholder here; everyone is potentially a victim.

Twenty percent is like the total correctness on the quizzes... students versus how many non-CS students... Do you still plan to just keep doing research? I know there was the PhD-versus-industry question. You know, I am somewhat uncertain about grad school and things, where I think I could be successful, but also, maybe with
3:42 am
a timeline and various other considerations, trying to cash out impact in other ways might be more worth it. The median OpenAI salary is supposedly $900,000 US dollars, which is quite a lot. So yeah, it seems the industry people definitely have a lot of resources. And fortunately, all the top AGI labs that are pushing forward AI capabilities also hire safety people. I think a reasonable world where people are making sure that emerging technologies are safe is necessarily going to have to have a lot of safeguards and monitoring. Even if there's a small risk, it seems pretty good to try to mitigate that risk further, to make people more safe.

[Partially unintelligible.] When I was young, there were no AI systems and no computer systems. That's a completely different situation. Today's young people live with
3:43 am
AI and learn with it. With AI and such technologies, we support the growing-up process.

People have been pretty bad at predicting progress in AI. Ten years in the future, there might be even wilder paradigm shifts. People don't really know what's coming next. But there's still some chance.

The vast majority of AI researchers are focused on building safe, beneficial AI systems that are aligned with human values and goals. While it's possible that AI could become superintelligent and pose an existential risk to humanity, many experts believe that this is highly unlikely, at least in the near future.
3:44 am
Get ready for an exciting adventure. [Program promo; partially unintelligible.] These podcasts and videos shed light on the dark and devastating colonial harm inflicted by Germany: people exploited, lives and livelihoods destroyed. What is the legacy of this widespread racist oppression
3:45 am
today? History we need to talk about. Hear the stories: Shadows of German Colonialism.

Her passion is ballet. Living the dream in her homeland Peru, where most families are poor, is actually unthinkable, but one woman makes it possible with her foundation and her dedication. Her dancers are about to perform at Lima's national theater, but at the final rehearsals there is still work to do.
