tv Us and Them Deutsche Welle September 1, 2024 5:30am-6:00am CEST
5:30 am
[unintelligible] ... There is a significant risk of human extinction from advanced AI systems. ... Society is facing aging problems. ... We don't currently know how to steer these systems, how to make sure that they robustly understand human values in the first place, or even follow them once they do understand them. ... I would like to create such kinds of technologies to contribute to human society. So, say, even the brains of humans can be
5:31 am
connected to cyberspace. This is a rapidly evolving technology. People do not know how it currently works, much less how future systems will work. We don't really have ways yet to make sure that what's being developed is going to be safe. ... It seems this very small group of people are developing really powerful technologies, and we know very little about them. People's concerns about generative AI wiping out humanity stem from a fear that, if left unchecked, AI could potentially develop advanced capabilities and make decisions that are harmful to humans. As the world grapples with the implications of this rapidly evolving field, one thing is certain: the impact of AI on humanity will be profound.
5:32 am
AI technologies can realize a fusion between the human side and the technology side. This is one of the first. We at Cyberdyne are trying to create, you know, the cybernics technology, especially focusing on medical care and the future for humans and human societies.
5:33 am
My name is Yoshiyuki Sankai. I'm a professor at the University of Tsukuba in Japan, and also the CEO of Cyberdyne. Let's create bright futures for humans and human societies with such kinds of AI systems.

I personally want to have an impact on making the world better, and working on AI safety certainly seems like one of the best ways to do that right now. Many public intellectuals, many professors and scientists across industry and academia, recognize that there is a significant risk of human extinction from advanced AI systems.
5:34 am
We've seen in recent years rapid advancements in making AI systems more powerful, bigger, more generally competent, and able to do complex reasoning. And yet we don't have comparable progress in safety guardrails, or monitoring, or evaluations, or ways to know that these types of systems are going to be safe. My name is Gabriel Mukobi; Gabe, you can call me. I am a grad student at Stanford, and I do AI safety research, and I lead Stanford AI Alignment. This is our student group and research community focused on mitigating the risks of advanced AI systems, like mitigating weapons of mass destruction. These more catastrophic risks unfortunately do seem pretty likely. Many leading scientists tend to put single-digit, or sometimes double-digit, chances on existential risk from advanced AI. Other possible worst cases could include non-extinction events, rather
5:35 am
very bad things like locking in totalitarian states, or disempowering many people, concentrating power so that many people do not get a say in how AI will shape and potentially transform our societies.

AI has become such a divisive topic. There are a lot of valid concerns: some believe it could lead to job losses, increased inequality, and even unethical uses of AI. However, AI also has tremendous potential to benefit humanity. It could help us tackle some of the world's biggest problems, such as climate change, disease, and poverty.

[speaking Japanese]
5:36 am
[speaking Japanese; largely unintelligible in the captions] ... The very important thing is the human's intention signals, sent from the brain. If the human wishes to move, then the brain generates the intention signals, and these intention signals are transmitted through the spinal cord, and then through the motor neurons, to the
5:37 am
muscles. We can connect our systems to humans, and feedback goes back to the brain. Over 20 countries now use these devices as medical devices.

I think there are definitely great ways AI technology is used in medicine, for example. There's cancer detection that's possible because of image-recognition systems using AI, which allows for detection without invasive tests, which is really fantastic, and early detection as well. No technology is inherently good or evil; it's only what humans are doing with it. Of course we should be thinking about long-term impact in terms of the direction in which we're taking the technology. But at the same time, we also need to think about it less in
5:38 am
a technical sense and more in terms of it impacting real-life humans today.

Japan, I think, is quite optimistic about AI technology. There's a lot of hype at the moment; it's like a shiny new toy that everybody wants to play with. Whenever I go to the US, or Europe, or other countries, there's far more anxiety, fear, or concern. I was quite surprised, to be honest.

Meetings on Wednesdays, every Wednesday. There's usually some guest we bring in, or some AI safety researcher presents. [laughter] It's kind of like our research lab. Do you happen to have an HDMI adapter, a USB-C something, to plug in your... Oh, you did plug in? Yeah. Right. Sorry,
5:39 am
I'm hallucinating. The Wednesday meetings are really good for inviting people, to meet some new students, to talk about why you're interested in AI safety.

So if you're wanting to synthesize smallpox, or if this is a chemical agent like mustard gas, you can do that. Access is already high, and it will just be increasing across time. But there's still an issue of needing skill. So basically, we need something like a, you know, top PhD in virology to create a new pandemic that could take down civilization. There are some sequences online, which I won't disclose, that could kill millions of people, and it seems more and more dangerous. Yes. So with the access thing, a lot of people bring up LLMs, and, oh, maybe you don't just need to be a top PhD, you also need some kind of biolab to do experiments. Is that still a thing? So, it depends on, like, how good the cookbook is,
5:40 am
for instance, excuse me. And certainly there are people who come in with disagreements, like, oh, AGI is not coming for a long time, or, does it seem that important to work on these things? Can't we just build, or accelerate, or whatever?

There is a large potential, especially for people doing engineered pandemics, to cause a wide range of harm in the coming years. Now, there are other instances of catastrophic misuse that people are expecting, too. One is with cyberattacks. We might have AI systems in the coming years that are really good at programming, but also really good at exploiting zero-days and all of these exploitable software vulnerabilities in secure systems. Maybe the top use case of AI will be making money. You might see a lot of people being defrauded of money; you might see
5:41 am
a lot of attacks on public infrastructure, threats against individuals in order to exploit them. It could be a wild west of digital cyberattacks in the coming years.

Beyond that, there is a pretty big risk that AI systems could actually get out of the control of their developers. We don't currently know how to steer these systems, how to make sure that they robustly understand human values in the first place, or even follow them. Once they do understand them, they might learn to value things that are not exactly aligned with what we want, things like having Earth be suitable for life, or making people happy.

I was fortunate to have it embraced by my family, especially a few years ago, when AI safety was a lot less mainstream. So there's always some uncertainty of, hey, is this going to be actually something that's helpful in the first place? Are you going to have a stable job?
5:42 am
Things like this. But as time's gone on, as we've seen a lot more capabilities advancements, a lot more people raising the alarm for AI safety and the risks, it tends to be, like, every few days my mom will send me something like, hey, have you seen this new thing?

For some of the worst-case risks, a lot of experts think there's a pretty significant chance of them. Many scientists put single- or double-digit chances on existential risk from advanced AI. There's a recent interview where the US FTC chair said that she's an optimist, so she has a 15 percent chance.

But I will tell everyone, my vision is a little bit different. We could create the AI systems... I think the generative AI systems are different from the simple programming systems; they have growing, learning functions. These
5:43 am
AI systems will recognize human beings as one of the inputs, and also one of the learning things, because of the human needs. When AI works for the tasks of the humans, they try to keep our society in good circumstances. We human beings have some problems: aging problems, diseases, and accidents. AI systems, or some technologies with AI systems, will support those functions. [continues in Japanese; unintelligible]
5:44 am
[speaking Japanese; largely unintelligible] ... Society now faces aging problems; the average age in some communities is now almost over 70 years old. Wow. ... [unintelligible]
5:45 am
[speaking Japanese; largely unintelligible] ... That's a good question. ... [unintelligible] ... I would like to take care
5:46 am
of these aging societies. In my childhood, my mother bought me a microscope, and some insects I could have caught; every day I spent a lot of time having such kinds of experiences and challenges. I loved to read science fiction books: "I, Robot," the novel written by Isaac Asimov.

If you've heard about AI in the last couple of years, chances are the technology you heard about was developed here. The breakthroughs behind it happened here, the money behind it came from here, the people behind it live here. It's really all been centered here in the Bay Area. A lot of those startups that are at the leading edge of AI, so that's Open
5:47 am
AI, and Anthropic, Inflection, names you might not yet be familiar with. They are backed by some of the big companies you already know that are at the top of the stock market, so that's Microsoft, Amazon, Meta, Google. And, you know, these companies are based here, many of them in the Bay Area. So for all of the discussion that we've seen about AI policy, there's actually very little that tech companies have to do. A lot of it is just voluntary. So what we are really depending on as guardrails is the benevolence of the companies themselves.

I think it's an example of a lot of the young people who are coming to move in now, who are not ideological, who are really interested in the technology, fully aware of its
5:48 am
potential harms, and see this as the most important thing that they can do with their time, their opportunity to work on what many of them call the Manhattan Project of their generation.

We have to realize that, unlike some other very general technologies that have been developed in the past, AI is mostly being pushed, especially the frontier systems, by a small group of scientists in San Francisco. And this very small group of people are developing really powerful technologies we know very little about. So this maybe comes from a lot of historical tech optimism, especially among the startup landscape in the Bay Area. A lot of people are kind of used to this move-fast-and-break-things paradigm that sometimes ends up making things go well. But as is the case if you're developing a technology that affects society, you don't want to move so fast that it actually breaks society.
5:49 am
He wants a global and indefinite pause on the development of frontier artificial general intelligence, putting up posters so that people can get more information. The issue is complicated; a lot of the public does not understand it, and the government does not understand it. You know, it's really hard to keep up with the development. Another interesting thing is that most of us working on this have no experience in activism. What we have mostly is, like, technical knowledge and familiarity with AI; it's what makes us concerned about AI safety. We're very much the minority. And actually, a lot of the biggest AI safety names are working at AI labs. You know, I think some of them do great work, but they're still much more under the influence of the broader corporation that's driving towards development. I think that's a problem. I think that somebody from the outside ought to be telling them what they need to do. And unfortunately, the case with AI now is that
5:50 am
there aren't external regulatory bodies that are really up to the task of regulating AI. So now you're hearing: this thing could kill us all, and I am going to keep building it. I think part of the reason you have so much resistance to the AI safety movement is because of the distance between people who talk about their genuine fear of the consequences and the risks to humanity if they build this AI god, and the fact that they keep building it anyway. So much of the debate around AI has these really religious undertones. That's part of why they say that it can't be stopped and shouldn't be stopped. It really feels like, you know... and they talk about it in that way, like: I'm building
5:51 am
a god, and they're building it in their own image. Right?

I love humans and human society, but I also love science fiction. I would like to create such kinds of technologies to contribute to human society. And so I loved to read science fiction books, and I also love the stories where the company's name, at the top, is Cyberdyne Systems. Yes.

Obviously, at some literal level, maybe you can unplug some advanced AI systems, and there are definitely a lot of hopes; people are trying to actively make it easier to do that. Some of the regulation now is focused on making sure that data centers have some good off-switches, because currently a lot of them don't. In general, this might be tougher than people realize. In the future, we might be in
5:52 am
a state where we have pretty advanced systems widely distributed throughout the economy, propping up people's livelihoods, and people might even be in relationships with AI systems. And it could be really hard to convince people that it's okay to unplug some widely distributed system like that. There are also risks of having a military arms race around developing autonomous systems, where we might have many large nations developing wide stockpiles of autonomous weapons. And if things go bad, just like in the nuclear case, where you could have this really big flash war that destroys a lot of the world, you might have a bad case where very large stockpiles of autonomous weapons suddenly end up attacking a lot of people from very small triggers. So probably a lot of catastrophic misuse will involve humans in the loop in the coming years. It could involve using very persuasive AI systems to convince people to do things that they otherwise would not do. It could involve extortion, or cyber crimes, or other ways of compelling people to do work. Unfortunately, probably a lot of the current ways that people are able to manipulate other people in order
5:53 am
to do bad things might also work with people using AI, or AI itself manipulating people to do bad things, like blackmail.

Yes. You know, the important thing is that Homo sapiens is changing. [partially unintelligible] ... We want to extend the brain and technologies. So what's next? We human beings, Homo sapiens, are obtaining new brains: our brains, plus brains in the cyberspace. So we will have new parts, and new friends, the robots, and so on. OK? What worries me
5:54 am
a little bit more about this whole scenario is that a technology doesn't necessarily need to be a tool for global capitalism, but it is; it's the only way in which it's being developed. And so in that model, of course, we're going to be repeating all the kinds of things that we've already done in terms of empire building, and people being exploited, and natural resources being extracted. All these things are going to repeat themselves, because AI is only another kind of thing to exploit with. I think we need to think about ourselves not just as humans who are inefficient, humans who are unpredictable, humans who are unreliable, but finding beauty, or finding value, in the fact that we are unpredictable, that we are unreliable. So, probably like most emerging technologies,
5:55 am
there will be disproportionate impacts on different kinds of people. A lot of the Global South, for example, hasn't had as much say in how AI is being shaped and steered. At the same time, though, some of these risks are pretty global. When we especially talk about catastrophic risks, these could literally affect everyone; if everyone dies, then everyone is kind of a stakeholder here, everyone is potentially a victim.

Twenty percent is, like, the total... [unintelligible] ... versus how many non-CS students... Who still plans to just keep doing research? I know there was, like, the PhD-versus-industry question. You know, I am somewhat uncertain about grad school and things where I think I could be successful, but also, maybe with my timelines and various other considerations, trying to cash out impact in other ways might be more worth it. Median OpenAI salary is supposedly $900,000 US dollars, which is quite a lot. So, yeah, it seems definitely the industry people have
5:56 am
a lot of resources. And fortunately, all the top AGI labs that are pushing for capabilities also hire safety people. I think a reasonable world where people are making sure that emerging technologies are safe is necessarily going to have to have a lot of safeguards and monitoring. Even if there's a small risk, it seems pretty good to try to mitigate that risk further, to make people more safe.

[speaking Japanese; partially unintelligible] ... When I was young, there were no AI systems, and there were no computer systems. The situation today is that young people live with AI, and some technologies with AI will support their growing-up process.

People have been pretty bad about predicting progress in AI. Ten years in the future, there might be even wilder paradigm shifts; people don't really know what's coming next. But I suppose
5:57 am
there's still some chance. The vast majority of AI researchers are focused on building safe, beneficial AI systems that are aligned with human values and goals. While it's possible that AI could become superintelligent and pose an existential risk to humanity, many experts believe that this is highly unlikely, at least in the near future.
5:58 am
Did you know that Norway sells way more electric cars per capita than the US? More than 80 percent of all cars sold in Norway in 2023 were electric. How does a country become an electric-car nation, and what does this have to do with...? In 30 minutes on DW. ... [promotional announcement; partially unintelligible] ... every weekend on
5:59 am
DW. This is DW News, live from Berlin. A campaign to vaccinate hundreds of thousands of children against polio begins in Gaza. The vaccination push comes after the first case of the deadly disease in Gaza was reported in 25 years. Also coming up: X goes offline in Brazil as a ban on the popular social media platform takes effect. Is it a blow for free speech, or a stand against disinformation?