SophieCo. Visionaries, RT, December 13, 2019, 3:30am-4:00am EST
3:30 am
Hello and welcome to SophieCo. Visionaries, I'm Sophie Shevardnadze. With technology evolving at a cosmic speed, and artificial intelligence no longer just a Hollywood dream, is the path ahead of us a dangerous one? Will our lives still be real? I'm here at Oxford University to ask all these questions to one of the most prominent thinkers in this field, Nick Bostrom. It's really great to have you with us. So, you're a philosopher, you're an author who writes about what's going to happen to us, possibly. One of the ideas that you put forward is this idea of a vulnerable world,
3:31 am
right? So correct me if I'm wrong, but if I get this correctly, it's basically that humanity may come up with a technology that may drive us to extinction, and therefore we would need complete surveillance.

Well, that might be an oversimplification, but the vulnerable world hypothesis is the hypothesis that at some level of technological development it gets too easy to destroy things, so that by default, once civilization reaches that level of development, we get to a state where the world gets destroyed. There are a couple of different ways in which this could be true. Maybe the easiest way to see it is if it just becomes too easy, at some level of development, even for a small group or an individual to cause mass destruction. Imagine if nuclear weapons, for example, instead of requiring these rare, difficult-to-obtain raw materials like
3:32 am
plutonium or highly enriched uranium, imagine if there had been an easy way to do it, like baking sand in the microwave, and you could have unleashed the energy of the atom. If that had turned out to be the way things are, then maybe at that point civilization would have come to an end.

Then, with surveillance: from what I understand, you can't really predict the future, nothing can. You can survey people and watch what they're doing, but they will still be inventing things under surveillance, and you will only know that it is detrimental once something has gone wrong, so the fact of surveillance wouldn't really prevent it.

Well, if one thinks that the world at some level of technology is vulnerable in this sense, one then obviously wants to ask: what could we possibly do in that situation to prevent the world from actually getting destroyed? And it does look like, at least in some of these scenarios, ubiquitous surveillance would be the only thing that
3:33 am
could possibly prevent that. Now, would even that work? Well, it depends on the specifics. You'd have to think about just how easy it would be to cause destruction. If you could just snap your fingers, or say a magic word, and the world blows up, well, then maybe surveillance wouldn't suffice. But suppose it's something that takes several weeks, and you have to build something in your apartment, and maybe it requires some skill: at that point you could imagine a very fine-grained surveillance infrastructure having the capability to intercept it. Also, how much destruction is created if somebody does this? Is it one city that blows up, or the whole of the Earth? Maybe you could afford a few slipping through the net. So you'd have to look at the specifics. Now, of course, surveillance in itself is also a source of risk to human civilization. You could imagine various kinds of
3:34 am
totalitarian regimes becoming more effective, more permanent.

But surveillance in itself is a totalitarian regime.

What do you mean?

I mean, if you're surveilled 24/7, that in essence is a giant computer police state.

Well, it depends, I think, on what this information would be used for. If some central authority, say, micro-manages what everybody is allowed to do with their lives, then certainly that would be totalitarian to an unprecedented degree. But suppose it was a kind of passive surveillance, and people just went on with their lives, and only if somebody actually tried to create this mass-destruction thing would there be a response. Maybe it would not look so totalitarian.

I don't think that's really realistic, though,
3:35 am
because as soon as someone is in charge of this total surveillance, even if it's passive like you're saying, for very specific things like total destruction of a city or the world, they would for sure take advantage of it if possible. That's just the way humans are made.

Yeah, well, I think to varying degrees there are institutional checks and balances in different countries. Right now we have a lot of very powerful tools, and in some places of the world they're used by despots, and in other parts of the world they're used by more democratically accountable and liberal governments, and things in between. Certainly it would be the case that if you created this kind of extremely fine-grained surveillance infrastructure, it would create a very substantial danger that, either immediately or after some period of time, it would be captured by some nefarious group or turned to oppressive purposes. Certainly, I think
3:36 am
that is one major reason why people are, rightly in my view, very suspicious of these surveillance technologies. And yet it could still be the case, because it's not something we get to choose, that the world is so configured that at some level of technology destruction is much easier than creation or defense, and it could just be that in that situation the only thing that would prevent actual destruction would be very fine-grained surveillance.

Forgive me for doubting this a little, just because I've seen with my own eyes, a little bit, what a police state is, and it never really works unless it's total. And the world is so diverse, and we're all so different, and I've seen with my own eyes that human imperfections and disorganization somehow always grow through any restrictions or norms, just like grass through pavement, you know?

Yeah,
3:37 am
well, so what is it precisely that you're not convinced about: that there could be some level of technology at which destruction becomes easy, or that some possible surveillance could prevent the world from getting destroyed?

The idea that this total surveillance would also have to interact somehow with everyone, that's what is not convincing to me.

Right, so I think there it becomes a matter of degree: against which set of scenarios would you be able to protect the world through surveillance? So take today's world, where massive destruction is possible but it's also very hard, with nuclear weapons, let's say, so that we have a reasonable ability even with present-day surveillance technology to detect if some nation is building a secret program. If you then roll that back, so that you require fewer raw materials, less big installations, fewer people working on this, it gets harder and
3:38 am
harder to detect with current technology. But this is a very rapidly advancing field, with recognition software: you could have cameras that could monitor, in principle, everybody. So you could imagine, to take an extreme case, just as a kind of theoretical possibility, that everybody wore a collar all the time with cameras and microphones, so that literally all the time, whatever you were doing, some AI system could classify what actions you were taking, and if somebody were detected to be doing this kind of forbidden action, an alarm could be triggered and some human alerted.

Well, my problem with AI is that it is created, in essence, by beings that are flawed, by human beings. So how can it be something better, or more perfect, than human beings, able to not miss anything?
3:39 am
Because I'm thinking: if flawed beings are creating artificial intelligence, and artificial intelligence is simulating human beings, then it is also a flawed being and it's going to miss something.

Well, I mean, I'm not sure it would have to simulate human beings, but, depending on which particular scenario we were looking at, it may or may not be necessary to not miss a single thing. I mean, if you're looking at the kind of much worse global warming scenario, it's fine if a few people drive cars even in that world, right? As long as the majority kind of stop doing it, you wouldn't even need new surveillance technology there, you would just need a carbon tax or something. If you move to the other extreme, where a single individual alone can destroy the whole world, obviously there it would be essential that not a single one slipped through. But then it depends on how hard it would be for that single individual: would they need to do some very distinctive activity, accumulate some special raw materials? Then maybe it would become possible to have the kind of surveillance
3:40 am
that could avoid that. Today, obviously, our law enforcement capabilities are very limited, but I do think there are quite rapid advances in using AI to recognize imagery, to recognize faces and to classify actions, and then you could imagine that being built up over a period of 10 or 20 years into something quite formidable.

So you would be voluntarily submitting the human race to the robots. That's what I'm talking about.

I mean, I'm not advocating it, I'm just noting that there are certain scenarios, if the world unfortunately turns out to be vulnerable in that way, where it looks like either it will actually get destroyed or people will put this surveillance in place. Now, that might, depending on what kind of surveillance technology you have, allow different ways of configuring it. Maybe it would be almost completely automated, or, in the near term certainly, it would require a lot of human involvement, in one way or another, to check things that have been flagged
3:41 am
by algorithmic means, for example, and then maybe respond.

Let's take a short break right now, and when we're back we'll continue talking about whether we're living in a world simulated by computers or not. Stay with us.
3:42 am
3:43 am
And we're back with Nick Bostrom. Nick, you know a lot about artificial intelligence, much more than me. Do you think we can program
3:44 am
artificial intelligence to be this benevolent Platonic king, this, I don't know, enlightened monarch? Or is anything that has to do with control, or total control, inevitably repressive and bad?

Well, I mean, I don't think we would know how to do that today. Of course, we can't even build AIs that can do all the things that humans can do today. But if, say, next year somebody figured out a way to make AIs do all the jobs that humans can do, like some big breakthrough, I don't think we would know yet how to align them with human values. That is still a technical problem that people have been working on for the last few years, but with some significant way still to go: getting methods for scalable AI control, so that no matter how smart the AI becomes, even if it maybe becomes far smarter than we are, because there is the ability for it to become smarter than us, I think, eventually,
3:45 am
by that time you would want to also have the ability to make sure that it still acts in the way you intended, even when it has become intellectually far superior. Ultimately, that's a technical problem that needs to be solved with technical means. But then, if you solve that, you still have what we could call the political problem, or the governance problem. So it would enable humans to get the AIs to do what they want, but we still need to figure out how to ensure that this new powerful technology is used primarily for beneficial purposes, as opposed to waging war or oppressing one another. And that part is not a technical problem, it's kind of a political matter.

Judging from the history of humanity, if you're saying there's a slight possibility that AI can become more intelligent than us, way more intelligent, it's not going to be humans trying to control AI and making it
3:46 am
do all these things that they want it to do; it's the AI controlling the humans and doing whatever it would want.

Well, I mean, in the ideal case the AI would be aligned with human values, in as much as we would specify what it is that we want to achieve and the AI would help us achieve it.

Do you think AI could ever simulate real feelings and memories? Do you think it can ever really predict a human brain, something as chaotic as a human brain, because we don't really know how it works?

I mean, I don't think that would be necessary for alignment, to have a very detailed... I mean, we humans can't do that with one another, and we can still be friends with one another, or help other people, and so forth. So that doesn't require the ability to create 100 percent accurate predictions.
3:47 am
So, you have this other theory, from before the vulnerable world, that we may all be living inside some sort of a matrix, and there are layers, maybe: a simulation.

Right, that's actually something I published back in 2003, and it's an argument that tries to show that one of three propositions is true, so it doesn't tell us which one. Proposition one, the first alternative, is that all civilizations at our current stage of technological development go extinct before they reach technological maturity. So it could be that they are out there, far away, other civilizations, but they all fail to reach technological maturity.

You know, because human nature doesn't change, and technology goes further, but humans use it to destroy the world.

Yeah, that could be
3:48 am
the case, and in a very robust way, so that even if you have thousands of human-like civilizations out there, they would all succumb before they reach technological maturity. So that's one way things could be. The second alternative is that amongst all civilizations that do reach technological maturity, they all lose interest in creating these kinds of, what I call, ancestor simulations. These would be detailed computer simulations, at such a fine-grained level of granularity that the people in the simulations would be conscious and have experiences like ours. Maybe some civilizations do get there, but they're just completely uninterested in using their resources to create these kinds of simulations. And the third alternative, the only one remaining, I argue, is that we are almost certainly living in a computer simulation right now.

Built by someone who wanted to build a simulation of us? And that's the most probable one?

Well, the simulation argument
3:49 am
doesn't say anything about which of these is true or most likely; it just demonstrates this constraint, that if you reject all three of them you have a kind of probabilistic incoherence. Now, the full argument involves some probability theory and stuff, but I think the basic idea can be conveyed relatively intuitively. Suppose the first alternative is false, so that some non-trivial fraction get through to maturity. Suppose the second alternative is also false, so that some of those who have got through to maturity do use some of their resources to create simulations. Then you can show, because each one of those could run a lot of simulations, that if some of them get through, there will be many, many more simulated people like us than there would be people like us living in the original history.

Even right now, all six billion of us?

Yeah, but not just that. You could show
3:50 am
that at technological maturity, even by using just a tiny fraction of, say, one planet's worth of computing resources, even just for one minute, you could run tens of thousands of simulations of all of human history.

And we could talk more about how that simulation is even possible if we don't understand our brain well. I mean, obviously today we can't do what you say.

Well, the simulation argument makes no assumption about the timescale: be it 20,000 years or 20 million years, it still holds. And so, because each simulating civilization would be able to run, using a tiny fraction of its resources, hundreds of thousands, millions of runs through all of human history, almost all beings with our kinds of experiences would then be simulated ones rather than non-simulated ones, and conditional on that,
3:51 am
we should think we are probably one of the simulated ones. So, in other words, what that means is: if you reject the first two alternatives, it seems you are forced to accept the third one, which then shows you can't reject all three; in other words, at least one of them is true. So that's the structure of the simulation argument.
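(To make the "probability theory and stuff" concrete, here is a minimal sketch in the notation of Bostrom's 2003 paper; the symbols below come from that paper and are not used on air. Let $f_P$ be the fraction of human-level civilizations that reach technological maturity, $f_I$ the fraction of mature civilizations interested in running ancestor simulations, and $N_I$ the average number of such simulations an interested civilization runs. The fraction of all observers with human-type experiences who are simulated is then

\[
  f_{\mathrm{sim}} \;=\; \frac{f_P \, f_I \, N_I}{f_P \, f_I \, N_I + 1},
\]

and since a mature civilization could make $N_I$ astronomically large, at least one of $f_P \approx 0$, $f_I \approx 0$, or $f_{\mathrm{sim}} \approx 1$ must hold, corresponding to the three alternatives described above.)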
3:52 am
OK, so you answered my first question about how anything could simulate a human brain, because you're saying there is no time span. So again, two questions: if we're living in a simulation, why would the future AIs even make one? Just for fun?

I mean, there are so many possible reasons you could imagine. You could imagine scientific exploration, like wanting to know counterfactual history, what would have happened if things had gone differently; that could be both theoretically interesting and maybe useful for trying to understand other extraterrestrial civilizations you might encounter. You could imagine entertainment reasons: we humans do our best with novels that bring you into a world, we put on theater plays and make movies and computer games, in many cases making them as realistic as we can. Of course, we can't make them perfectly realistic now, but if you had that ability, maybe we would make them perfectly realistic. So that would be another example. Maybe even some kind of historical tourism: you can't actually time travel, but maybe you could build an exact simulation of the past and interact with that, and it would be as if you had traveled to the past and you could experience what it would be like. And there are other reasons as well; we don't necessarily know very much about what would motivate or drive some kind of technologically mature, posthuman civilization and why they would want to do different things with their resources.

Then I guess the core question is: even if we're living
3:53 am
in a simulation, does it really matter to us, to me and you and everyone around us? I mean, Buddhists say the whole world is an illusion. So does this cancel out the things we live through, good or bad, like love and feelings and problems?

No, no, no. I think, to a first approximation, if you became convinced you're living in a simulation, you probably should carry on as if you were not living in a simulation, for most everyday things: if you want to get into your car, you still have to take out the car key and open the door, etc. So I think that's true. But I think there might be some respects in which new possibilities would exist if you are in a simulation that wouldn't really exist if you're not in a simulation. For example, we think the universe can't just suddenly pop out of existence, right? Conservation of energy and momentum and so forth. Whereas, of course, if you are in a simulation, if somebody pulls the plug on the simulation, the whole thing ceases to
3:54 am
exist. So the possibility of the world suddenly ending in that way, and I'm not saying whether that's likely or not, at least seems like a possibility. Other things as well: you could imagine things like an afterlife, which is clearly possible in a simulation, because you could just rerun the same person in another simulation and so forth, or various interventions by the simulators. In some ways, actually, this set of possibilities is kind of structurally similar to what theologians have been thinking about in terms of a supernatural relationship to a creator and so forth; kind of analogues of that arise within this simulation theory stuff, although I don't think there is any logically necessary connection one way or the other. It's still
3:55 am
kind of intriguing that you get this kind of parallel set of possibilities, in some respects not exactly the same, but in some ways kind of similar.

OK, now I understand the whole theory, because I wasn't really putting it together before, but it does make sense now. And it's still all related somehow to artificial intelligence, because what would be simulating us, right? It would be some sort of AI.

Yes.

So, a lot of scenarios today are linked to this doomsday when artificial intelligence takes over; or, on the contrary, a lot of people are saying that artificial intelligence is actually the solution to a lot of our problems, like hunger and disease and global warming. Where do you stand? Are you worried about where we are going?

I think both are possible outcomes.
3:56 am
People in the field have asked me whether I'm an optimist or a pessimist, and I've started to refer to myself as a frightened optimist. I do think that this transition is not something we should avoid. I see it more as a kind of gate through which we need to pass, and all the possible paths to a really grand future for humanity go through that gate at some point and involve the development of greater-than-human machine intelligence.

Sort of like a purgatory before paradise?

Not a purgatory, but a necessary transition, which however will be associated with significant risk, including existential risk, the risk that we actually permanently destroy ourselves or what we care about in making this transition. Unless we destroy ourselves in some other way before, I think this transition will happen, and our focus should be on getting our act together as much as possible in whatever
3:57 am
time frame we have remaining, whatever it is: try to do the research to figure out scalable methods for AI control; to the extent we can, try to get the global order into a shape that can foster more collaboration, and in particular, within the AI community and so forth, develop a common set of norms and the idea that it should be for the common good of all. And then making sure we don't destroy ourselves before we even get a chance with AI would be good as well, and trying to grow up and become wiser in whatever intervening number of years we have.

Thank you so much for this wonderful insight. It's been a pleasure talking to you.

Pleasure talking to you.
3:58 am
Most people think that to stand out in this business you need to be the first one on top of the story, or the person with the loudest voice. In truth, to stand out in the news business you just need to ask the right questions and demand the right answers.
3:59 am
4:00 am
Boris wins big as the Conservative Party secures a majority in the British Parliament and Labour suffers at the polls, a highly disappointing night for them; our live coverage from Westminster returns in less than a minute. In other news later this hour: at least 71 soldiers are killed in a militant attack on a military base in Niger, exposing a deep crisis for French foreign policy in its former colony. And protests sweep across India after it passes a controversial religion-based citizenship law that excludes Muslims; at least two people have been killed.