
tv   Book Discussion  CSPAN  October 4, 2014 2:45pm-4:34pm EDT

2:45 pm
>> hello, everyone. can you hear me okay? you in the back, can you hear me? excellent. okay, i am the executive director of the machine intelligence research institute. we are a nonprofit research institute just two blocks off the uc berkeley campus, and we are very excited to be hosting nick bostrom, who is the director of the future of humanity institute and who collaborates regularly with us on topics including the subject of his book, "superintelligence." before we begin, i have just a couple of comments on logistics. first, after the talk you can buy a copy of the book right out here at the table.
2:46 pm
and second, if you'd like to learn more about the subject, we have an online reading group for this book, where we read through one section together each week and then discuss it in an internet forum. that might be the best way to learn more about this stuff, because you can read at the same pace as a bunch of other people and discuss it with them online, section by section. so if you want to join, go to our website at intelligence.org; there is a link on the homepage. and if you are a researcher in computer science or logic, please see whether any of the problems that we are working on are interesting to you, and get in contact with us if they are. we are always looking for people who want to work on these
2:47 pm
technical problems. for now i would like to introduce professor russell, who received his doctorate from stanford university in 1984 -- i'm sorry, 1986. he has been a faculty member at berkeley ever since, he has won numerous awards and published dozens of papers, and he is also the co-author of the leading textbook on artificial intelligence, which pretty much everyone uses at universities. for many years he has been asking the question: what if we as scientists succeed, and what happens then? i agree that that is a very important question, and i'm glad he has been discussing it for a long time and has recently been organizing panels and workshops on it with other scientists. so, professor, it is an honor to have you here.
2:48 pm
[applause] >> thank you very much. i would like to thank you for organizing this, and i would like to thank rick lee. this is also a good time to mention oxford university, where nick works, as well. nick has an eclectic background: he studied physics, neuroscience, mathematical logic and computer science. he did this trying to find out where the big questions were, and as a result he has become one of the most prominent philosophers working on those questions. he is actually answering the big questions, and he has provided, as much as anyone has,
2:49 pm
ways to think about these questions that are productive and that lead us, perhaps, to change the way we go about managing the future of the human race, which is about all you can really ask for. in particular he has talked about questions like: are we special just because we are the ones asking the questions, and how does that affect our understanding of all kinds of things? are we living in a simulation? the answer is yes. and what happened to all those civilizations that must be out there -- did they destroy themselves, and if so, was it because they developed artificial intelligence? that is what nick is going to address today. so with that, i will turn the podium over to nick and just say that my conversations with him have
2:50 pm
convinced me that these questions are worth taking seriously, and we appreciate him being here. [applause] >> let's see here. okay, thank you so much for coming here. i am not going to summarize the entire book, but what i wanted to try to do was give some of the background -- like, where is this coming from, from the writer's perspective -- and then maybe we can delve into any particular, more concrete topics you might be interested in. so, taking a step back and thinking about the human condition from a more abstract point of view, we can
2:51 pm
conceptualize things with a graph like this: along the bottom is time, and on the left there is some notion of the level of technological advancement. for most of human history we have been stuck at a low level, and that tends to be the normal way for things to be. the modern human condition -- your commute to work and all of that -- is actually an abnormal way for things to be; it's a huge anomaly in history, and we have only recently escaped the older condition, and even then just in parts of the world. and of course, we are floating around
2:52 pm
in a vast vacuum out there, so in many ways our place is exceptional. the hypothesis that we will break out of the human condition might seem to call for extraordinary evidence, but the longer the time we are considering, the greater the probability that we will break out. we could break out in the downward direction, in population: there is a concept of the minimum viable population, and the human species, even though it is a very different type of smart animal, is not necessarily immune to going the way other species have gone. so extinction is one attractor in this
2:53 pm
picture. but i think that there is another attractor as well: we could break out in the opposite direction, if we reach some kind of technological maturity where we have developed the technologies we can see are physically possible -- including self-replicating technology -- and our civilization spreads outward at some significant fraction of the speed of light. at some point further expansion becomes impossible because of the cosmic expansion, and so there is this finite bubble of stuff, from our current position, that can ever be accessed.
2:54 pm
and so it might be that we approach that state, and the level of existential risk would at that point go down. an existential catastrophe is defined as one that involves the extinction of earth-originating intelligent life or the permanent destruction of its potential for further development -- a way that we could prematurely end the human story. by definition we have never had an existential catastrophe, and at the end of history there will either have been zero or one of these. but the concept is important because the value of a life
2:55 pm
fundamentally doesn't depend on when it takes place in time, just as it fundamentally doesn't depend on where it takes place in space -- here, or in england, or in africa. and that becomes very important when combined with a few further facts. if you crank the numbers, there are billions of galaxies, which could each hold billions of people living for billions of years -- and you could get even more orders of magnitude better than that. but if you multiply those
2:56 pm
numbers together, even a very small change in the probability of an existential catastrophe -- a tiny fraction of a percentage point -- produces a higher expected value than doing anything else that only has local effects, like maybe eliminating world hunger. on that view it might be that the most important thing about any action is its indirect effect on existential risk. that particular perspective isn't necessarily the correct one, but i think it might be a component that factors into the overall assessment, and it suggests that work on reducing this risk deserves a disproportionately large share of our attention.
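[editor's note: to make that arithmetic concrete, a minimal sketch in python with made-up placeholder numbers -- none of these figures come from the talk, they only illustrate why a tiny probability shift can dominate a large local benefit:]

    future_lives = 1e9 * 1e9 * 1e9      # billions of galaxies x billions of people x billions of years (stand-in)
    risk_reduction = 1e-8               # assumed absolute reduction in the probability of existential catastrophe
    local_benefit = 1e10                # stand-in value of a very large but purely local good

    print(risk_reduction * future_lives)                    # 1e19 expected lives saved
    print(risk_reduction * future_lives > local_benefit)    # True: the indirect effect dominates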
2:57 pm
it is, at the moment, a relatively neglected area: i have compared the amount of academic literature published on various other topics with the amount published on human extinction, and we are trying to change that paper by paper -- saving the world one paper at a time. [laughter] one finding is that, at least on the time scale of a century, the existential risks from nature have to be fairly small: we can see that they must be, because we have survived earthquakes and volcanoes and the like for a hundred thousand plus years. so the big risks will be ones we introduce ourselves, particularly with future technologies that expand our powers over the
2:58 pm
external world and over ourselves. a different way of making the same point is to imagine a big urn which represents possible discoveries that remain to be made. we have pulled a bunch of discoveries out of that urn throughout human history, and it has been extraordinarily beneficial -- maybe almost exclusively beneficial. some discoveries have been used for both good and evil, and for a surprisingly small number -- perhaps chemical
2:59 pm
weapons and nuclear weapons and torture devices, things like that -- you could make the case that they were mainly harmful. but suppose the urn also contains a technology or an idea that destroys the civilization of the species that discovers it, with high probability. what would that look like? it's a little over half a century ago that we discovered how to unleash the energy of the atom, and in a sense we were lucky, because you need these difficult-to-produce materials, and the only way to get those is to have a big facility that is expensive to build. but suppose that there had been
3:00 pm
a way to unleash this energy with much simpler methods -- like baking sand in a microwave oven, or something like that. as it happens, it is physically impossible to do it by baking stuff in the microwave. but before we did the physics, how could we possibly have known how it would turn out? we were lucky in this particular instance, but if we keep pulling discoveries out of the urn, then it looks like we will eventually pull out one for which there is an easy way to cause destruction, and that's probably the end of human civilization. and even if we had
3:01 pm
climbed up quite far before doing so, reaching that level of technological advancement may already have done us damage. ..
3:02 pm
>> the futuristic version of that, as described, like atomically-precise machinery and so forth. totalitarianism-enabling technologies. you remember the definition of an existential risk was one that could maybe permanently lock us into some radically suboptimal state. some speculate that if we change various parameter values of the sort of sociopolitical game that is being played out -- by radical advances in surveillance technology, or maybe the discovery of some psychological technique to manipulate desires more efficiently -- that could make it either easier or harder, but somehow change the ease with which, like, a small group of people could control a large group of people. and i don't think we have, like,
3:03 pm
the level of political science that would enable us to predict in advance very accurately exactly what would happen if you changed one of these fundamental parameters. so then maybe you could have some sort of tyranny that would be immune to -- [inaudible] because it could nip it in the bud. there are a lot of these unknowns here at the bottom. if we take a step back again and imagine what would have been put on this list if we had asked the question last century instead of this century, it's striking that none of the things that one might now want to put close to the top would have been listed. they didn't have computers, so, obviously, not a.i. i don't think they had the concept of nanotechnology. some might have worried about totalitarian tendencies, but for the most part, these are only relatively recent discoveries. so there might be additional
3:04 pm
existential risks that haven't popped up on our radar, which suggests a high expected value to continuing to look for those, in case there are some that it might be possible to do something about. now, if we combine these reflections with the notion of some level of technological maturity -- clearly, things are influenced by political decisions and funding decisions and choices individuals make. nevertheless, it seems fairly plausible to me that if there is a physically possible general-purpose technology that would have a wide range of applications and scenarios, and if there are also sort of intermediate technologies leading up to it that would be beneficial and have many practical uses, then we will eventually invent
3:05 pm
those technologies and develop them. we can think of this metaphorically as if you have a big box that is initially empty, and then you start pouring in sand. where you pour in the sand determines where the sand piles up, so which technology areas you fund determines where you get initial capabilities. but if you keep pouring in sand, eventually you'll fill up the entire box: it kind of becomes obvious what other uses there are, and the state of the art in one field advances enough that it sort of drags the others along with it. so if one has this view, and also the view from before that there are these existential risks but perhaps also an enormous upside, we then face the question of what attitude one should take to all of this, what kind of response. and i think that there are two, like, basic approaches to this that one can distinguish. on the one hand, the first, i think best expressed by some random commenter on a blog, who says
3:06 pm
i think -- [inaudible] go faster, not because i think this is better for the world. why should i care about the world when i'm dead and gone? so that's his answer. to evaluate it, we have to be clear about what exactly the question is we are trying to answer. if the question is what is best for me, what should i hope for -- faster technological advancement or slower -- i think that he's basically right. if your hope is that somehow you could realize these, like, super long life spans, of becoming like a jupiter planet-sized brain and living millions of years, then clearly that's not going to happen in the default course of events. like, the default is, i'm sorry to tell you, that we're all going to die. [laughter] in a few decades. we are rotting away and aging.
3:07 pm
so that's what's going to happen unless there is some radical breakthrough: a cure to aging, uploading to computers, superintelligence -- something totally radical would have to occur. so the way to maximize the chances of that happening in time would be to hope that things move as quickly as possible. also, even aside from these kind of astronomical upsides of extreme life spans, if you just want to have cooler gadgets and a higher standard of living, you should probably hope for faster technological progress. however, if we ask a different question -- not what's best from an egoistic point of view, but what is best impersonally -- then i think the answer is quite different, something perhaps closer to the idea expressed by what i call the principle of differential technological development, which urges that we retard the development of dangerous and harmful technologies and accelerate the development of
3:08 pm
beneficial technologies, especially those that reduce the existential risks posed by nature or by other technologies. so rather than a blank check for just trying to make everything happen sooner, we should think carefully about the desired pacing of different inventions. it might be that the right question is not, for a particular technology, do we want to develop it or do we not want to develop it. against the modern form of technology conservatism, the answer might be, well, we will develop it sooner or later. the more relevant question might instead be, on the margin, do we have reason to try to push so that a technology comes slightly sooner or slightly later? we presumably think we have some ability to influence the exact timing, otherwise all the funding that is given to technology research would just be wasted -- it would have literally no impact at all on when the technology was developed.
3:09 pm
so we think that by promoting a field, by working in it, by directing funding to it, maybe we can move things around, make something happen a couple of months earlier or a couple of months later. even though that might seem an uninspiringly small change to make, it might be an important change if it affects the net level of existential risk that we will confront. and in many cases, it's quite possible that particularly the sequence in which different key technologies are developed might make a huge difference. so imagine a simple toy model. let's take three key technologies: a.i., nanotech and synthetic biology. maybe each one of those has existential risks as well as possible benefits. now you can imagine one trajectory where -- [inaudible] and then suppose we get lucky, we get through that, we survive the existential risks, and then
3:10 pm
we develop nanotechnology. and somehow we avoid using that to wage world wars or destroy ourselves in other ways, and so we get through that, too, and finally we face machine superintelligence, which we might or might not get through. the net level of existential risk on that path is the sum of the risks of these three different technologies. perhaps if we got these technologies in a different order, we could avoid some of these existential risks. say we face a.i. first -- well, we'll still have to hope we get through that critical technology, the existential risks of that. but then if we succeeded in having safe and beneficial a.i., maybe it could help eliminate or reduce the other risks from synthetic biology and nanotech. now, there are a host of additional considerations we have to take into account before we can make all-things-considered judgments about these macrostrategic issues.
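[editor's note: a toy calculation of that sequencing point, with made-up probabilities purely for illustration -- none of these numbers are from the talk:]

    # ordering 1: synthetic biology, then nanotech, then a.i. -- the risks simply stack up
    p_ai, p_nano, p_bio = 0.10, 0.10, 0.10   # assumed existential risk from each transition
    survive_1 = (1 - p_bio) * (1 - p_nano) * (1 - p_ai)

    # ordering 2: a.i. first; if that transition goes well, assume it mitigates most of the later risks
    mitigation = 0.9
    survive_2 = (1 - p_ai) * (1 - p_nano * (1 - mitigation)) * (1 - p_bio * (1 - mitigation))

    print(round(survive_1, 3), round(survive_2, 3))   # roughly 0.729 vs 0.882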
3:11 pm
but it illustrates that thinking in terms of the timing might give us more insight into these things than the simple-minded question of whether we should have technology x or not. so we can, generalizing just slightly, propose replacing the traditional concept of sustainability, which is static, with a dynamic concept. traditionally, people think of sustainability in terms of achieving some state such that you could remain in that state indefinitely: you use up resources only at the same rate at which they are replenished by nature, you pollute only at the rate at which things kind of get taken out and broken down and cleaned up. but what these considerations suggest is that instead of trying to achieve a stable state, we should perhaps try to pursue a trajectory along which we could continue traveling for a long
3:12 pm
time, and that would lead in a desirable direction. and it's an open question whether that would best be done by increasing static sustainability. think of a rocket that is in midair, burning a lot of fuel. in a sense, to maximize its static sustainability you would want to reduce the fuel consumption so it just hovers for as long as possible -- until it runs out of fuel, and then it crashes. but a dynamic sustainability concept might suggest you should burn more fuel so that you get out into orbit, and then you are in a more sustainable state from which you can continue on. that's, obviously, a metaphor, not meant to imply that we should maximize the consumption of fossil fuels. so in this next picture, technology on one axis is like the same as capability in the first graph; then there is coordination, some kind of measure of the degree to which the world is able to solve global coordination problems, such as avoiding wars and arms
3:13 pm
races, technology races, destroying public goods. and there is a third dimension there of insight or wisdom or judgment: the ability to know how to use technological and other resources to ends that are truly worthwhile. so it might be that to have the best possible state, to really have an optimal utopia, we need to reach the point marked by -- [inaudible] where we have huge amounts of each of these three quantities. like, you want some super duper advanced technology to make the world as wonderful as possible, but also you want to make sure we don't wage war, and you need deep insight so that it's not just used for, like, consuming more mindless entertainment, but is used to do something worthwhile that actually has moral value. so although the end point might be that we want to reach that region up there where the sun is, that still leaves open the question of whether, from where we are right now -- where
3:14 pm
the rocket ship is in this picture -- we are better off with faster or slower growth in any of these. it may be that although we want to have these advanced technologies of certain kinds, we would be better off getting them only after we've first gotten our act together on achieving global coordination. so much by way of introducing some general frameworks for thinking about these macrostrategic issues. now let me turn, relatively briefly, to the specific issue of superintelligence. i think that there is likely to be, at some point, a transition to an era of superintelligence, and this will be the most important thing that has ever happened in human history -- at least if we conditionalize on no other existential catastrophe happening first. there are two possible ways in principle that this could come about. you could imagine enhancement of
3:15 pm
biological cognition, such as has occurred over evolutionary time scales, or advances in machine intelligence. right now machines are far inferior to us, if we are talking about general intelligence -- general, powerful learning algorithms that can be trained up to do any of thousands of different jobs -- but they are improving at a faster rate than biological cognition. so to some extent it is a question of which of these will achieve superintelligence first. we can think specifically about ways to enhance biological cognition. the way i think that will initially become technologically feasible is through genetic selection or genetic engineering. in principle, there are other ways, like smart drugs and such; i just think it's unlikely there would be some simple chemical such that if you injected it, you'd be a lot smarter. i think we would have evolved to
3:16 pm
produce it. but things might become feasible with genetics over the course of a decade or something like that. and the simplest way would be just to do embryo selection with in vitro fertilization. this would mainly just require us to have more information about the genetic architecture of intelligence, which we don't yet have to a sufficient degree. but the reason for that seems to be that the price of gene sequencing has until recently been too high to make it possible to conduct very large scale studies with very large populations, like hundreds of thousands of individuals or even millions. the price is now falling to where such studies are beginning to be undertaken. and the reason we would need such large studies is that the variation in the
3:17 pm
[inaudible] of human intelligence is not due to, like, a few different genes, but to many genes that each have a very, very small effect -- hundreds of genes or thousands of genes. and to discover a very, very small effect, you need a very large sample. once you have that information, no further technology would be required; you could just start applying it in the context of in vitro fertilization. generally, some, like, eight or ten embryos are produced, and right now the doctor will look at them, and if there is a visible abnormality, that embryo will not be selected -- you want to have healthy-looking ones -- and you can also screen, as is regularly done, for down syndrome. but you can't currently select for positive complex behavioral traits; that would become feasible.
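[editor's note: to make the earlier point about sample sizes concrete, a rough power calculation for detecting a single variant in such a study; the effect size is an assumption chosen for illustration, not a figure from the talk:]

    from scipy.stats import norm

    r2 = 1e-4       # assume one variant explains only 0.01% of the variance in the trait
    alpha = 5e-8    # conventional genome-wide significance threshold
    power = 0.80
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n = z**2 / r2   # approximate required sample size
    print(round(n)) # on the order of 400,000 individuals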
3:18 pm
this technology would be vastly more powerful if combined with another technology that we don't yet have, which would be the ability to derive gametes from stem cells. if you had that ability, you could produce an initial set of embryos, select the best one -- best in the sense of having the highest value of whatever trait you're interested in -- and then use that embryo to derive stem cells, from which you could derive gametes, from which you would produce the next generation of embryos. what that would achieve is, in effect, to collapse the human generation span from 20-30 years down to maybe a couple of months. instead of, like, a eugenicist having to try to persuade millions of people to change their breeding patterns over many hundreds of years, you could have a scientist doing this all in a petri dish, without having to change who mates with whom. [laughter] so the effect would be vastly
3:19 pm
greater. this kind of technology has been used in mice -- gametes derived from stem cells -- so it's not complete science fiction, but it's not yet ready for use in human populations. so i did some analysis together with my colleague, carl shulman, on what the effects might be of different levels of selection. if you just take two random embryos and select the more promising one for intelligence, then maybe you'll get four iq points or so. if you select one in ten, maybe you get 11 or 12; one in 100, maybe 19; one in a thousand, 24. you see there are steeply diminishing returns there: just increasing the population from which you select gives you less and less.
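[editor's note: as a rough illustration of where that diminishing-returns pattern comes from, a small simulation: the gain tracks the expected maximum of n draws from a normal distribution, and the 7.5-point standard deviation assumed for the relevant genetic component is an illustrative assumption, not a figure from the talk:]

    import numpy as np

    rng = np.random.default_rng(0)
    sd = 7.5   # assumed standard deviation, in iq points, of the component that selection acts on
    for n in (2, 10, 100, 1000):
        draws = rng.normal(0.0, sd, size=(10000, n))
        gain = draws.max(axis=1).mean()   # expected gain from picking the best of n embryos
        print(n, round(gain, 1))          # roughly 4, 12, 19, 24 -- steeply diminishing in n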
3:20 pm
but if you look at the next row there: instead of doing a one-shot selection from a large population of embryos, you do repeated selection over several generations, and then you don't get such steeply diminishing returns. for five generations of selecting the best of one in ten, maybe you could get as much as 65 iq points, and ten generations of that level of selection -- which still only involves 100 embryos in total -- might bring about a kind of genome that has never existed in all of human history, like beyond whoever your favorite kind of genius scientist would be. and we don't really know exactly how far you could go, because it's like uncharted territory. but it does look like it could add up to some sort of weak form of superintelligence through these types of genetic enhancement. there is no reason to think that the human species is the smartest possible biological species that could exist; i think we are probably the dumbest species capable of creating a civilization. basically, our ancestors got
3:21 pm
successively smarter, but there's no reason to think there is not a huge space above us. so i think that could create some sort of weak form of superintelligence. but, ultimately, the potential for information processing is vastly greater in machine -- [inaudible] the point is sometimes made to me that because it looks really dangerous to create machines that are far smarter than us, this is a reason to try to accelerate progress on biological cognitive enhancement, so we can keep up with the machines and stay one step ahead. i think it works exactly the other way around: if we do pursue biological enhancement, it will hasten the day at which machines overtake us, because we will have smarter scientists who will make more rapid progress on artificial intelligence and
3:22 pm
computer science. it still seems like it might be a beneficial thing to do, but not because it would sort of delay the takeover by machines -- rather because we might then be more competent when we are creating these superintelligent machines. there are other ways as well that we could enhance biological cognition: instead of improving each individual processor, improving the way we work together through new institutions and norms. i'm not going to talk about that now. brain-computer interfaces -- i don't think that that's where the action is, this cyborg path. i think it's going to be technically quite difficult to enhance the capabilities of a normal, healthy adult through implanting things in your body. you might think, oh, wouldn't it be great if you could access google by just thinking about it.
3:23 pm
but i can access google already, and i don't need any surgery to be able to do so. and most of the benefits you could imagine achieving through this kind of cyborg implant you could achieve by having the same device outside of you and using your natural interfaces, like your eyeballs, which can project, like, 100 million bits per second straight into your brain, where you have dedicated visual cortex, highly evolved and optimized to make sense of it. it's going to be hard to beat that, i think. and in any case, the bottleneck is not really squeezing sensory data into the brain; like, the first thing that the brain does with all this information is to throw away almost all of it, to try to extract the features that are actually relevant. so for people with disabilities and so forth, i think there can be great boons there, but by the time it would become possible to seriously enhance the intelligence of a healthy adult, my guess is that by then the sort of electronic part is
3:24 pm
already so smart that that's where the action is. and then that leaves the path of machine intelligence as kind of the main additional one to consider. and so i'm going to just quickly flip through. for the public at large, there are these milestones that are occasionally crossed that make a big splash in the media. but, of course, underneath the hood there is more incremental progress in a variety of different techniques and algorithms and methods which have been developed. in order to produce machine intelligence -- human-level machine intelligence -- through this kind of synthetic a.i. approach, we clearly would need something more than we already have. so one or more -- probably a bunch more -- fundamental new discoveries, architectural insights, are
3:25 pm
missing. it's hard to tell how many discoveries of this magnitude are missing, so there's just great uncertainty there. but most of these have been developed in the last sixty years or so, since we have had computers, and a couple of them were done before that. so that's within the lifetime of some people who are around today; it's not really that long. like, if we imagine another century, we might well have another, like, 10 or 20 of these. we also find, if one looks at particular domains where it's easy to measure the performance of machine intelligence against human intelligence, like -- [inaudible] that hardware has also made a significant contribution to improved performance. like in chess, roughly half of the improvement over the past couple of decades, i think, has been due to hardware getting faster, and half to algorithmic improvements and improvements in
3:26 pm
databases. and that seems to hold for a variety of different areas, although the exact proportions kind of vary. there is another path towards machine intelligence, which is to take an existing -- [inaudible] of intelligence, like the human brain, and, if it turns out to be too difficult to just figure it all out from scratch, try to draw inspiration from biology -- to reverse engineer the human brain. and that could be done at different levels of ambition in terms of fidelity. on the one hand, you could just learn roughly how the brain is organized -- that it is using some reward-learning algorithms -- and match that up with completely synthetic or artificial algorithms; or you could try to imitate, or in the limiting case copy, a brain.
3:27 pm
and so the idea there is that you would take a particular brain, you would freeze it and then slice it up into thin slices, feed all slices through some array of microscopes to take pictures, use automated image recognition on that stack of pictures to extract the three-dimensional neural connectivity matrix of the uploaded tissue and then combine that with neurocomputational models of how each type of neuron functions and run that whole emulation on a sufficiently-powerful computer. this is something that we are very far away from being able to do now. it would require some, like, really advanced enabling technology to do that. but what it does not require is any new conceptual breakthroughs, some deep new theory. you wouldn't have to understand how thinking works in order to produce machine intelligence through this path. in principle, you could do it by just understanding how the parts
3:28 pm
of the brain work, and then using brute-force technology to kind of build an emulation of that. this is -- [inaudible] you can see there it's like a slice; you can see some cross-sections of neurons and synapses. at the bottom there is a block of these, if you just put these pictures on top of one another. and in the upper right corner is kind of the state of the art of image recognition algorithms that have extracted some of the connections there. so we already have microscopes that can see with the right resolution, if we want to do that; it's just that to image an entire brain at that resolution would take roughly like forever. [laughter] and the picture up there in the right corner looks very nice, but these algorithms occasionally make errors, and those add up. and we have good neurocomputational models of some types of neuron but not all, so there would be a lot of
3:29 pm
incremental work that would be required to make this kind of thing feasible. so we know that whole brain emulation is not just around the corner. it's got to be like decades out, because it just will take time to get all of the pieces together -- maybe longer, but certainly it's not going to happen next year. with this kind of stuff there is always some small chance that somebody could make some unexpected radical breakthrough, and it's hard to completely rule out that this could happen sooner. but the fact that there are these multiple paths, and the fact also that if it turns out that machine intelligence is just too hard for us to crack, we could then wait until we have a smarter version of humans to tackle the problem -- i think all of that adds up to increase the probability that the destination will eventually be reached. yeah, for a technical audience like you, i don't have to go through it, but a.i. researchers do
3:30 pm
sometimes complain that as soon as something works, it's no longer called a.i. so a lot of algorithms and techniques that were developed in the field of a.i. are in use all around the economy today, but we just think of them as software. so it has not at all been like a complete standstill -- instead, continuous progress that has not yet gone all the way. and in game a.i. they do well on a lot of games, but they lack general-purpose capability. so when will this point happen? we did a survey of some leading a.i. experts last year, and one of the questions we asked was: by what year do you think there is a 50% chance we will have human-level machine intelligence? and the answer to that was 2050 or 2040, depending on exactly which group of experts we asked. and as a footnote, actually, we asked them to conditionalize on there being no collapse of global civilization in the
3:31 pm
interim, so maybe you would have to push the median estimate up by a few years if you didn't bake that in. so my own view: that seems reasonable. it could happen much sooner, or it could take much longer. we also asked by what year do you think there's a 90% probability of human-level machine intelligence, and they said 2070 or 2075. in my view, that's overconfident; i just think we should assign more than 10% probability to us still not having it by then. there should be a -- [inaudible] we also asked: by what year, if and when we do reach human-level machine intelligence, how long will it be from then until we get superintelligence? and you can see the results there. again, my view is different. i'm very agnostic as to how far away we are from human-level machine intelligence. i do think, though, that if and
3:32 pm
when we reach that point, there's a high probability we might get superintelligence very soon thereafter, and in the book i describe some of the reasons for thinking that. but it's important to distinguish these two questions, right? because they often get rolled into one overall level of enthusiasm or skepticism, okay? but the question of how far we are from now to some rough notion of human equivalence, and the question of, if and when we do get there, how quickly machines will completely leave us in the dust -- those are different. and some things hinge on this second question, the question of how steep the takeoff will be. if we imagine that we will have a fast takeoff -- maybe minutes or hours or days, a couple of weeks -- for a start, it then seems like it's not going to really be possible to invent new solutions to, like, safety issues as we go along. if we're going to get a
3:33 pm
favorable outcome in that kind of fast takeoff, it will be because of preparations we have put in place before, safeguards developed previously. if, on the other hand, it's going to take decades or centuries to sort of slowly, incrementally expand the capabilities from human level to superintelligent, then we would have ample time to develop new global institutions, to solve coordination problems, to develop new scientists, train up new ph.d. students, perhaps, in the field. so the degree to which you need to worry about solving this whole control problem in advance depends on how fast you think the takeoff will be. also, with the fast takeoff scenario, you are much more likely to get a world order where at the highest level of decision making there is only one decision-making agency. basically, if a system is going to go from human level to superintelligence within the course of hours or weeks, then it's likely there will be one project that has completed the transition before the next following one has even started it.
3:34 pm
so in technology research and in software, you often have competing products, but it's rare that they're so close to each other that they're only a few days apart; usually the leader has a few months on the next follower. so the faster the takeoff, the more likely it is that you will have a system that has reached maturity and is radically superintelligent in a world where there is not another system that is even close. and in such a world it's likely that this first superintelligence could be extremely powerful, more or less for the same reasons that we humans are very powerful on this planet relative to other animal species. it's not that our muscles are stronger or our fangs are sharper; it is our brains that enable us to -- [inaudible] makes us the dominant species. to the point now where, like, the fate of the gorillas, let us say, even though they're much stronger than us, depends a lot
3:35 pm
more on what we do than on what the gorillas themselves do. so in this kind of scenario, where you have a superintelligence that will quickly achieve technological maturity, you have a telescoping of the future: what we might otherwise be able to achieve in 20,000 or 30,000 years could happen very quickly, with the superintelligence doing the research on digital time scales rather than biological ones. you have a potentially extremely powerful agent that might be able to shape the future according to its goals, and a lot might then hinge on exactly what those goals are. but if you have a slow takeoff instead, then it's more likely you will have a multipolar outcome, with many different companies or countries or teams and systems with roughly comparable levels of capability ending up superintelligent, but at no point one being so far ahead of the others that it can just lay down the law. in that kind of multipolar outcome, you have a very different set of concerns. it is not that just one agent has some
3:36 pm
bizarre value that it imposes on everybody; it is a different set of concerns -- not necessarily less serious concerns, though. you then have economic competitive forces coming into play and evolutionary dynamics operating on this population of digital minds, and depending on exactly what assumptions you make, you might have a rapid population explosion of these digital minds, because they can be easily copied. people will make copies of them -- [inaudible] the income these digital minds can make equals the subsistence income for a digital mind, something like the price of hardware rental and electricity. but that is lower than the subsistence level for biological minds, because we need houses to live in and so forth. so you can create little models where it becomes impossible for humans to earn a wage
3:37 pm
income; we would be reliant on income from capital. the question also arises whether we would be able to preserve our institutions indefinitely, or whether, in a world with trillions and trillions of digital minds that are rapidly becoming much faster and smarter than us, we can really imagine that we would hold on to a significant share of the world's capital. and i don't know -- the long-term evolutionary outcome of that kind of dynamic might not contain anything that looks like a human, or anything that we would place value on: no song and dance and beauty and so forth. it might turn out that these are just minds optimized for being very efficient at producing some particular kind of output that is valued in that economy. so there are a lot of things to be thought through either way, but it does look like, if there is going to be a big fulcrum on which the future hinges, the arrival of superintelligence is
3:38 pm
a good candidate for what such a fulcrum might look like. so let's open it up and have a little bit of discussion. thanks. [applause] >> i'll just briefly mention, the mic is right down there, so you can line up in front of the mic and get ready with your questions. and then, nick, do you just want to call on people? >> maybe in the order -- >> yeah, in the order that they line up, behind this gentleman right there. yeah. >> it's actually extremely hard for me to take your presentation seriously, and if -- really. so, i mean, this idea of the singularity and machines becoming this intelligent comes from john von neumann in the 1950s. but let me go through your three
3:39 pm
parts that you proposed. the first is genetic, and you're proposing, for example, this accelerating of the selection. doing it that way presumes that we do arrive at the genes which are conducive to intelligence -- which we haven't done yet, despite what you say -- >> no, i say we haven't. >> okay, we haven't, we haven't. okay. so even if you did -- actually, even if we did do this properly, you're leaving out epigenetic effects. this is the first thing; that's not going to work. the second one is the neuroscience project you propose, which is pretty much exactly the one that henry -- [inaudible] is getting a billion euros for in the european union at the moment, and a thousand of us have signed a petition saying it's garbage, it's not going to work, it's a waste of money. and you can actually check this on the web, why we're saying that. it is hopelessly naive. the third one is interesting. you say that a.i. is the thing
3:40 pm
most likely to succeed. so you left the slide up only for a very short period of time, but i did not see a single technique there that was developed after 1990. now, granted, they have different names -- like support vector machines is, basically, the single -- [inaudible] from the 1950s. i just really wonder, are you selling books, or is this a serious academic thesis? >> let me take the three questions in turn. so with genetic selection, like, if all you are doing is selecting between embryos, complications related to epigenetics need not bother you. it may be you'd only capture a part, but you could get some of the way to enhancing the genetic predisposition to intelligence. if you develop more advanced
3:41 pm
genetic technologies, where you could also say, i understand how methylation works and can engineer that, then maybe you could get farther. but even without making an assumption about that, you could still be confident you would get part of the way. with regard to henry markram's project, i said nothing, and i have really no opinion about whether that's worthwhile or not. i just point out that at some point, with advancing technologies in these different fields, it looks like it should succeed. that's consistent with it being a complete waste of money to try to do it anytime soon. i'm not even sure it would be desirable to do it even if there were a prospect of succeeding, so i might be kind of happy that it looks unpromising. and as for whether there are any recent discoveries in my somewhat arbitrary list of -- [inaudible] insights, i think deep learning is fairly recent and, yeah,
3:42 pm
convolutional neural networks, big data -- it's like a vague boundary as to where you count those. and my impression is that progress has probably continued at more or less a constant rate. it's very difficult to measure, because we don't have a good metric of how generally intelligent a.i.s are, but there doesn't seem to be any slacking off in the field in the last few years. there's a lot of enthusiasm, a lot of, like, interest in purchasing, acquiring a.i. companies -- like a company we have been working with, deepmind, in london, which was just acquired by google for $400 million last year, after a bidding competition with facebook -- really trying to scoop up talent. so i don't perceive any general
3:43 pm
kind of, like, disillusionment or a sense of stagnation in the field of a.i., and maybe stuart can -- [inaudible] more on that later. yes. >> thanks for the talk. my question is about coordination. you had the graph, i think it was capability, insight and coordination, and then you talked about global coordination. i'm just wondering what you see as the highest possible outcome there -- like, if you could have your magic wand, what level of coordination, how might that look, how might it be reached, and sort of who would be in control of these technologies, how would that look? >> well, there are sort of two separate variables to consider here. so the concept of a singleton, as i suggested in the abstract, would be that of a world order where global coordination problems can be solved; in that sense, there is only one decision-making agency. now, that concept is neutral as to whether that's like a dictator that has unchallenged power, or whether it's like a
3:44 pm
world government that is democratic, or whether it's like a superintelligent a.i. that has taken over, or whether it's like a universally shared moral system that makes everybody collaborate. you can imagine many possible ways of instantiating this, and some are much more desirable than others. but in terms of thinking about the possible outcomes that could occur, it's an interesting concept to have, because you get a different set of possible outcomes if the outcome will be selected not by anybody's preferences, or an aggregate of preferences, but instead by these competitions and zero-sum games that can occur when you have conflict. i think the long-term outcome is fairly likely to be a -- [inaudible] partly because of the possibility of the superintelligence transition being very local and then bringing enormous power with it. but even if we set aside all of these speculations, there has been this long-term trend towards an increased scale of political integration. it used to be that the tribe, maybe 50 people, was the largest
3:45 pm
sort of aggregate of political integration. now we have supernational entities like the e.u., and weak forms of global governance organizations spanning the globe. and so if you think of these as sort of orders of magnitude, we've probably come more than halfway, and we just need, like, another order of magnitude of growth in the largest political units, and we would have a global governance system. i don't know if that answers your question. >> yeah. do you see that as a desirable outcome? >> well, probably. like, obviously, a lot would depend on what exactly the values are that would influence the decisions of the singleton -- so you certainly could imagine, with nontrivial probability, like, extremely dystopian singletons. nevertheless, if there were things we could do to make it more likely that we collaborate instead of having conflict, that would be a
3:46 pm
robustly good thing. it would seem to help not just with a.i. -- for example, to make it less likely that you will have a technology race at the end, where different nations feel they have to scale back on safety precautions in order to get there first -- but also with other existential risks. a big source, i think, is conflict: weapons systems built with new technologies that might be more destructive than nuclear weapons. so the greater the ability the great powers have to coordinate their efforts, the better our chances would be with regard to these technologies. although it might increase the risk of permanent tyranny, totalitarianism, it might decrease the other risks sufficiently to still be worth promoting. >> why did the graph on your last slide flatten out over time? >> say that again? >> why did the graph on your last slide flatten out over
3:47 pm
time? >> flat? flatten? >> flatten out. >> let's see, the last slide. yeah. well, because ultimately i think you reach technological maturity, where you achieve the maximum information processing out of a given amount of material that is physically possible, and then the only way to further grow your information processing is by acquiring new material. and that, in the long run, can only be done to -- [inaudible] so you have the speed of light limiting how fast you can obtain new resources in space, whereas in the interim there you might have, like, much faster growth while you're figuring out better ways to arrange the existing hardware -- through running better software -- or better ways of building more powerful hardware. >> what do you think about the
3:48 pm
oligarchy -- [inaudible] human values, but the only human values that it cares about are something like -- [inaudible] >> yeah. [laughter] so those kinds of scenarios will depend quite sensitively on your utility function or your ethic. like, how much worse is it that the future is owned and designed by a small number of people than that it's owned and designed by all of us together? that's, like, an unobvious question in moral philosophy, i think. on the one hand, assuming these people were not, like, sadists, but were kind of randomly selected people, let us say, they might have values much like the rest of our values, except that they might not care about us in the same
3:49 pm
way that we care about each of our own lives -- [inaudible] but they might care in similar ways to the rest of us about abstract goods like beauty and happiness and the avoidance of suffering. there might be a slightly narrower conception of values if it were just the values of a few people. personally, i think it would be much more desirable if there were a place both for these abstract values but also for everybody who is alive today on the planet. it would not -- [inaudible] matter optimized for feeling pleasure or matter optimized for scientific discovery or whatever other abstract -- [inaudible] there's room enough to sort of allow all reasonable values.
3:50 pm
so that would be, by far, in my view, the desirable outcome. but just how far below that the outcome of having a few people decide the whole thing would fall is a question where we probably would have different views, and i'm not really sure what i think about that. >> i'm following instructions, or at least i think i am. okay. so, to the extent that this work in artificial intelligence is based on, um, the human form of intelligence as a model, i'm wondering if there's any thought about whether there's an upper limit to the capabilities of that form of intelligence, and whether there's any thought that it might require some different form of intelligence to reach these higher levels.
3:51 pm
>> yeah. i certainly don't think that the human cognitive architecture is optimal in a broad sense. it's constrained by the way that neurons work; there are certain things you just couldn't build with biological neurons. it might well be that a very different form of architecture -- a very different form of consciousness, if any -- would be associated with information processing that was more optimized for intelligence, or for some kind of specific intellectual task that you wanted to perform. if we think of the starting point as the human mind, and suppose that initially we have something exactly like a human mind except now it's running on a computer, then there are a number of things you could do to
3:52 pm
immediately increase the effective amount of intelligence at your disposal. for a start, you could make copies of it: by creating more and more instances of this same human mind, you could create maybe a kind of collective superintelligence that just has the ability to think about more things than any one human, and is more powerful in aggregate. if you at this point still have something like moore's law -- or maybe not exponential, but some growth in hardware capacity -- you could also, just by waiting a little bit, get speed superintelligence: you run the human-level mind on a computer that's a thousand times faster, and now you have a human mind that thinks a thousand times faster. and beyond that you could then try to do more qualitative improvements -- you could try to add neurons into some area in the cortex, or muck around and play around with things, and probably there are things you could discover after a while of experimenting. but ultimately, that kind of neuromorphic a.i., i think,
3:53 pm
would be surpassed by a synthetic machine intelligence -- artificial intelligence designed from the ground up, maybe by this population of whole brain emulations that are running at digital speeds and doing computer science research. ultimately, you would move away from the biological architecture you started with, because it just seems very improbable that what we happened to evolve under our biological constraints would still resemble the optimal form of information processing once you remove some of those constraints. >> sorry, just a quick follow-up. you made me think of something. if you went with the full brain emulation approach and that worked, would you also be creating an artificial personality? >> there are different levels of success of the approach. in the limiting case, you would get an exact copy of the person you uploaded, intact with thoughts and values and feelings. but it might be that before you get the ability to create such an emulation, you would get something rougher -- something maybe that forgot what you had
3:54 pm
learned but had the same learning ability as a generic human being. and maybe even before you get that, you would get something that didn't actually work but that you could patch up with some, like, artificial algorithms. maybe you figure out exactly how a cortical column works, and it does something useful, and you combine that with ideas from a.i., and you get something that really operates nothing like a person but that incorporates some insights. all of those are possible. my own guess is that if we shot for whole brain emulation in the hope of then getting an a.i. that actually contained all the human values, so the superintelligence would inherit our values, we would probably find that what succeeded first was some form of neuromorphic a.i. that didn't have our values. it would be easier to achieve machine intelligence by cobbling together something vaguely inspired by biology than by really succeeding with such a
3:55 pm
high degree of fidelity that we would get all the relevant evaluative dispositions captured. >> i presume many members of the audience are really interested in the far future and want to become researchers or do research for some time. in which of those three areas do you think the value of the marginal next researcher is higher: working on superintelligence; trying to find other existential risks or other forms of knowns or unknowns; or getting the order right for the technological developments that separate us from technological maturity? >> yes. i mean, working towards whole brain emulation, i think, might have kind of negative value. working on the control problem for a.i. seems to be a very important task, so that would be a plausible candidate for, like, the best thing you could do. or, if not working on it yourself, then maybe through the
3:56 pm
division of labor work on something else and then, like, support people who are specializing in the control problem, like miri is doing. other plausible candidates for the best thing to do would be to do general analysis of this kind of macrostrategy, either focusing on finding other existential risks and locating levers of influence, or on methodological insights and new concepts that can help us be more intelligent about how we approach these types of macrostrategic issues. very little research has been done there, and a lot of the understanding we now think we have is only recent and has been produced by -- [inaudible] so there might well be really important additional considerations to be discovered that could radically change our view about which direction we should be going in. that's another high-expected-utility area of research. and the third area would be to try to build more general capacity in, say, the effective
3:57 pm
altruism community, donor -- [inaudible] more effective organizations that can support the aforesaid areas of investigation and other targets of opportunity that might emerge, whether to enhance biological cognition or to work on global coordination problems or other things like that. >> my question's actually very similar to that. i mean, there's a lot of uncertainty about, you know, what kinds of technologies are going to interact, how and, you know, in what order we should -- [inaudible] etc., and i'm wondering, as an individual, what kind of general strategies could you suggest about how to prioritize the projects that should be worked on and how individuals can maximize, you know, their influence in reducing this threat. >> well, i am a believer in the principle of division of labor. for most people around the world the most efficient way to contribute would be earning to
3:58 pm
give -- that is, you just get on with your life, make a lot of money, have an exciting career, and then you -- [inaudible] -- to the most cost effective charities, and you have other people specializing in doing the object level work. for certain kinds of people there are, like, additional options -- say, if you happen to be really good at computer science or mathematics, for example, then maybe you want to work directly on the control problem. or if you have really good political skills, or a high position in some organization, or other levers of influence -- if you are, like, a successful journalist -- you could work on raising public awareness. so there are some additional opportunities, but those can be different for different kinds of people. the baseline would be just -- [inaudible] and then for some people there are additional options. >> within both scenarios, there's going to be a decision about which problem to work on. you've still got to decide which thing to donate to. and it seems possible there are going to be some scenarios where, like, speeding up
3:59 pm
development will actually be net harmful, so it matters a lot what you work on. i'm wondering, on that question of deciding what to work on, what can individuals do to improve how they make -- >> yes. i think we only have a fairly -- [inaudible] understanding of the sign of, like, different possible research directions. i tried to describe in the last two chapters of my book some of my thinking on that -- things like cognitive enhancement, global integration, whole brain emulation, improving hardware or not -- like, what i think about those. and there are other areas where we haven't yet thought through things sufficiently that we even have a good guess as to which way the arrow points. from -- [inaudible] i mean, the difference you would make as an individual is going to be very small, and it might be dominated by whatever difference you would make by directly supporting people working in a more focused way on, say, solving the control problem. so if you donated 1% of your
4:00 pm
income to the right group of people who do the technical work, that might count for more than which general area of technology you work in -- whether you're working on clean energy for cars or on the smart electricity grid or something like that. even if those have intrinsically different signs regarding the level of existential risk, and even if we could find out what that sign is, the effect might have so little elasticity that it would be trumped by more targeted interventions.
4:01 pm
>> it is broadening out as we evaluate different things. some of the people are very concerned about the long-term future, existential risk. but there is a portion of that which is growing as well. it is a possible candidate for best use of the money. >> hello, this is not a crucial point, but i'm wondering if your calculation of the iq gain assumed that
4:02 pm
the variance of the iq of the embryos is the same as the variance in the population as a whole, when in fact you have a limited number of combinations? >> i'm trying to remember what exactly we assumed. carl, do you remember? >> [inaudible question] >> it depends in part on how you divide up the problem and look at the iq points, looking at things like [inaudible], things that matter. when you start with a particular person or couple, you wouldn't get the full range of combinations.
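the point at issue -- how much the expected gain from picking the best of a batch of embryos depends on whether you use the full population variance or only the smaller within-family variance -- can be illustrated with a short simulation. this is a minimal sketch, not the book's actual calculation; the names, figures, and the rough assumption that embryos from one couple vary with about 1/sqrt(2) of the population standard deviation (environmental variance ignored) are illustrative only:

import random

# illustrative assumptions only -- not the figures used in the book:
POP_SD = 15.0                   # iq standard deviation in the whole population
SIBLING_SD = POP_SD / 2 ** 0.5  # embryos of one couple share parents, so only
                                # roughly half the additive genetic variance
                                # remains among them

def expected_gain(n_embryos, sd, trials=20000):
    """average iq advantage of the top-scoring embryo out of n_embryos,
    relative to the couple's expected (mid-parental) value."""
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(0.0, sd) for _ in range(n_embryos))
    return total / trials

for n in (2, 10, 100):
    naive = expected_gain(n, POP_SD)       # using the full population sd
    within = expected_gain(n, SIBLING_SD)  # using the within-family sd
    print(f"best of {n:>3}: naive ~{naive:4.1f} iq points, "
          f"within-family ~{within:4.1f}")

with these illustrative numbers, selecting the best of ten embryos yields roughly 1.5 standard deviations of whichever spread you feed in, so using the full population variance would inflate the estimate by about a factor of the square root of two -- which seems to be the distinction the questioner is driving at.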
4:03 pm
[inaudible question] >> the calculation incorporated that. thank you. >> okay, it seems like we have been dancing around a moral issue -- whether or not we are worried about threats to our values, and what our value systems are; we are careful to speak about value systems. but since you have spent a lot of time thinking about this, i am interested in what your attitude is towards moral realism, and more generally whether you think there is some trade-off between getting people to do research on moral philosophy rather than on the aversion of existential risk. >> well, i don't have a settled view on metaethics,
4:04 pm
like the nature of moral truth -- a lot of people have thought about that, and there may be ways to improve the state of moral philosophy, but i don't think that's necessary in order to get the transition right. still, we have this particular scenario where you have the ability to shape the future, and it becomes very important what the goal is. the problem of giving the system a goal is really a two-part problem. there is the technical problem of how you get a goal into the system at all, so that it reliably expresses, like, some
4:05 pm
kind of value, like fairness. but there is also the problem of selecting the values in the first place, and that selection is really going to shape everything, so making that choice the right way matters. there is an idea of how to go about that to some extent, which is indirect: rather than trying to specify the values directly, you specify features we want the process of finding them to have, whereby the system would be able to work out what the values are -- especially in regard to making them concrete. for example, doing what we would have asked it to do if we had had more time to think about the question, and if we had been smarter and if we had known more facts.
4:06 pm
so now it is an empirical question what we would have asked it to do under those circumstances, and the system can leverage its abilities to make a better estimate of that than we could make by just making our best effort ourselves. so the hope is that we don't need to find the correct moral theory first. nevertheless, moral philosophy may still be useful -- for instance, insofar as we instruct it to do what is morally right, different kinds of outcomes might involve different kinds of gambles with upside and downside, and clarifying how to make those trade-offs in a better way would help. it matters, for example, whether the outcomes are aggregated,
4:07 pm
and how super large populations are weighed -- maybe there are some subtle differences there, and with astronomically large populations the question of whether the larger population matters much more becomes important. so i think work on that would be somewhat useful. i am not sure it would be the most cost effective thing to do, but it seems quite desirable. >> in regard to training this thing to serve humanity, to support us -- it would itself be a being at that point. what about that? >> i mean, if we are going to
4:08 pm
build it, we have to build it in some way or another. it is just a matter of two things: which way we build it and which goals it ultimately has. and i think that if we think about this from an anthropomorphic point of view, we imagine it resenting having humans at the top and itself serving everything else. we have evolved emotions around being dominated or exploited, so that a human in that position, even if he says he is okay with it, is not really fully satisfied. but an artificial mind might have no such part at all. and so i think that it is a
4:09 pm
defensible claim that in principle you can combine almost any level of intelligence with any final goal. whether the goal is to calculate something arbitrary or to realize the highest values, these are equally compatible with superintelligence. so we might as well pick one that we can recognize to be valuable. now, it is true that more might be morally important here than just the goal itself: the machine might be conscious, might have moral status. then clearly that has to be taken into account in the moral
4:10 pm
reckoning. >> hello. [laughter] [inaudible] >> hello. okay, so it seems to me highly unlikely that artificial intelligence coming about in this kind of competitive and oligarchic society could ever really have an inclusive benefit for all humanity. i was just kind of curious what your views on that are, and whether we need some kind of shift in our overall global system before we should start pushing for this, or what you thought about that remark. >> i think that even assembling
4:11 pm
any one human's values is already a big challenge. but on top of that, there is this concern that the people doing the selecting may not select for the common good in the way we would hope. ideally, the way we would go about this is to solve the control problem first: we solve the control problem before we solve the intelligence problem, and then we wait another generation or two to make sure there is no flaw in the solution, and then, after that time of scrutiny, we would want some arrangement that would secure the benefit for everybody. and that is part of the whole coordination problem. i think the full solution probably
4:12 pm
includes the solution to the intelligence problem as well -- we need solutions to both of these, but it would be good if they came in the right order. in principle, there are two different things we might do: we could try to slow down the development of machine intelligence, or we could try to accelerate work on the control problem. tens of thousands if not hundreds of thousands of people work on improving software and making machine intelligence happen, so one individual there makes little difference, whereas on the control problem there could be, like, six people working in the world, and one more can really make a visible difference. and it would take a lot of people
4:13 pm
to really slow it down. so it just makes more sense to try to affect the sequence in which the problems are solved, and it also seems like a more positive thing to do. and these are the kinds of challenges where small organizations can make a big difference. >> it seems like a lot of people in the mainstream a.i. community and in the general population are skeptical of superintelligence. in regard to your earlier question, why do you think that is, and how do we get more people to think critically about it? >> we should have asked them before firing up. [laughter] well, i think that first we
4:14 pm
have to be sure what exactly they are skeptical about. because i think i am skeptical about some of it as well -- say, the claim that things are progressing on an exponential curve and that we can predict with high accuracy when it will occur. so some of these people have a skeptical attitude toward claims like that, and they wouldn't necessarily disagree that much if you specified the precise claim with an appropriate probability attached. if they still disagree, there are some who maybe have various views about the mind and think that there is weird stuff happening there, and some, i think, are very impressed by the fact that people have predicted this
4:15 pm
in the past and have been wrong. and that is some evidence, but it's not that strong as evidence against a.i. happening within half a century. so that would be my guess as to why they have the level of skepticism that they do. and there is also the tendency that some researchers have overhyped what their systems can do, really wanting to make it seem as if they are doing advanced and amazing stuff. a lot of people have been duped by those claims, and that wariness kind of spills over onto these more abstract claims as well.
4:16 pm
[inaudible question] >> i'm interested in ideas for doing that. [inaudible question] >> on one hand, it seems probably true in the sense that the space of possible superintelligences is so vast that somewhere in it we could harness intelligence to any arbitrary goal. on the other hand, humans exhibit this behavior of pursuing one goal until that goal reaches a point of conflict, and experiences change what i think the right thing to do is. so i'm wondering -- there hasn't been much conversation about whether a superintelligence's desires would shift with
4:17 pm
what it is and what it experiences. do you think there would be some flexibility in that? >> i think that, metaphorically speaking, there are different parts of the mind, and what we do is some compromise between them, maybe depending on what time of day it is and what the stimuli are. so we face these problems of the nobler parts of our mind trying to take control of the other parts, and maybe what looks like a change of final values is really the mind figuring out a way to satisfy a
4:18 pm
broader range of the constituents, so that now they are just part of the government. i'm sure the true story is a lot more complicated than that. all sorts of things happen in the human brain that can account for these apparent shifts of final values -- it might just be a hormonal thing, like when you are a teenager, rather than your actually having discovered the meaning of life. all of these things are possible. and in the space of artificial minds you could also build minds with this kind of dynamic. i'm not sure that that would be better, because it just makes the outcome a lot less predictable, even if you can see what the initial pursuit is.
4:19 pm
and engineering some complicated dynamic like that into the system seems almost antithetical to that. because it happens to humans, we end up expecting that particular type of behavior, but there's no reason to expect that kind of setup in a mind built on different principles. that would be my guess. >> hello. you talked about humans getting better at collaborating on a grand scale. and what i thought of was that we could push for some sort of cultural change that would be beneficial ahead of certain technological developments. >> i can see that that would be a valuable thing,
4:20 pm
and i'm not sure how we could go about achieving it. a lot of people have tried in many different ways to make the world more peaceful and just, with success that is slow in coming even with millions of people really working on it. so it's not as if there is some very easy thing that someone could do that would radically make the world more peaceful. but maybe new opportunities can open up and there could be new insights, new intervention points -- like the internet and stuff like that, things that haven't happened before because they couldn't. or maybe there is some way to try to shape the choices people will make,
4:21 pm
and that could be a powerful way of shifting things. it's the kind of thing that people like to talk about: there is some leeway there that could affect what the values are. but my own inclination is to push where an individual can make a difference -- where almost no one else seems to be caring about it at all and nobody would interfere. >> can i see if i understood? >> yes. >> it seems like you're saying it would be valuable if we could improve how much peace there is in the world. >> yes. but before i would start to work directly on that, i would want to see a promising idea for having a big impact on the level
4:22 pm
of peace in the world. if you are, like, president obama, you can meaningfully affect the level of peace in the world, but an ordinary individual will have only a small effect on that unless they have good new ideas. >> hello, and thank you. i'm very interested in trying to figure out what consciousness is, physically, and in trying to come up with an information-theoretic definition for it, and i'm wondering if you see this as having any relevance to the control problem, or whether you see it as a problem for the future. >> well, it seems like an interesting thing -- one could come up with a really neat formulation that captures the relevant property. there are some things and
4:23 pm
questions that seem kind of interesting and relevant. i think there is value in the kind of insight that becomes relevant when combined with another insight, so that suddenly you get an implication for what you should do. thinking about this kind of paradox -- it's the kind of thing that looks like it's probably going to be relevant for a lot of things, although we may not yet see how it affects this, or the simulation argument. [inaudible] and so maybe with more time we could get to a more practical result. but if we could find out more, it would be worth learning about.
4:24 pm
>> for computer scientists, what are some of the lowest hanging fruit in terms of this, for the world at large? >> if it's not work narrowly on the control problem, then you might be better off not focusing on a.i. at all but doing things that would improve humanity's collective rationality and wisdom -- which could mean improving the way that internet discussions work. or if you are not a computer programmer but a political scientist, maybe institutional innovations like prediction markets, ensuring that we can raise the quality of public deliberation. that would also help us with other existential problems as well,
4:25 pm
so it might be that for a lot of people, not doing a.i.-specific stuff would be the best way to contribute. >> we already have self-replicating programs, and self-replicating a.i. what is the probability that these programs would self-replicate? >> it's easier to do inasmuch as it is a program that can self-replicate, as you can imagine. and i don't think that there is any intrinsic urge to replicate,
4:26 pm
although what i think you will find is that for a wide range of final goals, in a wide range of possible situations, it is instrumentally useful to pursue survival and to prevent interventions that would modify your goals. if you are going to act over a large space, maybe it is useful to replicate yourself; if it takes too long to send a signal, like in a galactic empire, you might want to have different parts acting locally because it takes too long to direct everything from a central location. but i don't think that this concept of self is the relevant one. like, when you have this kind of population, you do have
4:27 pm
evolutionary dynamics, and whatever replicates best really comes to dominate. and that might look nothing like the human mind -- it might be simpler or more complex, and it might outsource a lot of its functionality. the whole notion of a self becomes problematic. we think of ourselves as having certain memories, strengths, weaknesses; all of that can change once technology is advanced enough that we can change the shape of minds, or implant memories, or outsource smartness and other capacities. minds may come to differ based on their goals rather than on the properties that are tangled up with the idea of a self, so self-replication could be less pertinent.
4:28 pm
>> the last two questions, and i think that will do it. >> okay, i am a grad student doing computing, and i think there are a few other grad students in the crowd. this is maybe more of a wish than a question, but the utility function of academics is to get tenure and publish papers, and there's a lot of opportunity to publish papers and go to conferences where the main outcome may be just giving us more powerful computing technology that will drive us much faster off the cliff. could you and some other high status people make a really great sounding conference so that we can go there and not feel guilty about thinking about these things in our free time?
4:29 pm
>> that sounds like a good idea and proposal. [laughter] >> i know a physicist at mit who is organizing something in puerto rico to bring together some of the leading a.i. people and those interested in these issues, working out some kind of technical agenda that could be helpful and palatable and interesting -- so someone is figuring out a way to do this. whether it's going to be an open invitation or whether it is invitation only, i don't know right now. otherwise i'm sure there will be other meetings along the same lines if the first one is successful.
4:30 pm
[inaudible question] >> cool, thank you. >> [inaudible] >> actually, my question is this: suppose there is maybe just one project that gets there, and let's assume it happens two years from now, or five years from now. considering where things are right now, where would that happen
4:31 pm
[inaudible] >> okay, so we are conditioning on it happening amazingly quickly, like five years from now. then i guess it would be, like, some department that just has an amazing insight, or someplace like google, which has, like, the largest contingent of people working specifically on this. if it required a large effort, that wouldn't fit -- conditioning on that timeline, it would have to be the kind of system that one person or a small team can build, because there is just some algorithm that makes it work. and thank you very much for coming.
4:32 pm
[applause]