John Brockman, "Possible Minds" - CSPAN, March 30, 2019, 8:00am-9:41am EDT
8:00 am
[applause] [inaudible conversations] >> ...from the deputy counsel of the New York Times on defending the newspaper. Samantha Allen reports on LGBTQ communities, and a look at the Charlottesville community. For the complete schedule, check your cable guide or visit booktv.org. We kick off the weekend with an author discussion on the future of artificial intelligence.
8:01 am
[inaudible] >> Welcome to Pioneer Works. What an incredible turnout. I love it. This is so exciting. [applause] We are motivated to bring these events and these incredible speakers here because of you, our community. You show up and are shaping a conversation and a culture. Thank you so much for coming; I love it. It means everything to us to see you all here. I want to thank our collaborators in these projects, Science Sandbox at the Simons Foundation; they help us make all of these events free and open to the public. They really are free. You're welcome to come and join the conversation even if you can't afford to donate, or don't want to, or are just interested. If you are inspired -- if you
8:02 am
are inspired by tonight's conversation, we rely on your donations, so if you think you would buy John Brockman a beer or Daniel Dennett an old-fashioned, consider donating on your way out. If it doesn't inspire you, that's fine too. This is what we're doing here. It's a genuine pleasure for me to be able to introduce my old friend, the incomparable John Brockman. I feel like John's name should come with an adjective before it: the inevitable John Brockman, the daunting John Brockman, the incomparable John Brockman, sometimes the compulsive John
8:03 am
Brockman. -- did you see the look? He has been a curator of emerging culture around scientific and intellectual ideas, the broadest ideas that he can find and cultivate. John started the Reality Club, which happened in physical reality -- can you imagine? Of course, much like what we're doing here today. The Reality Club consisted of the most interesting minds John could find, and they met at Chinese restaurants in New York City. They would talk, not about the accomplishments that brought them to the table; instead they would, as John said, ask each other the questions they were asking themselves. Somebody is calling me right now. -- the Reality Club became edge.org, and the conversation
8:04 am
focuses on John's curation, and you should check out Edge, it's fascinating -- so many people weigh in with their ideas on that platform. He coined the third culture, which was a response to the essay about the two cultures: here's the humanities versus here's science. He thought about the third culture as a new paradigm for culture that grew out of intellectual conversations that were real and raw and rocky and provocative. Also, with his partner, John built Brockman Inc., which is the most influential nonfiction literary agency in the world. And we're all Brockman Inc. authors, including John Brockman himself. We are here tonight because of his latest collective volume, Possible Minds, which solicits
8:05 am
opinions and ideas on the subject of consciousness and A.I. I'm delighted that he also brought two of the greatest minds to discuss possible minds. I'm going to leave it to John to give a better, more proper introduction, but let's thank and welcome John Brockman, Daniel Dennett and David Chalmers. [applause] >> This is actually a very exciting evening for me, and I do not like to show emotion. [laughter] These two individuals have been at each other for 20 years, litigating very serious and consequential ideas. They appear to be friendly; I hope they are. [laughter] These things matter. Ideas matter. They have very different
8:06 am
conceptions of the world. They have talked about the hard problem of consciousness for 20 years. Tonight we are not going to go there; we're going to talk about the next chapter, which is the whole world of A.I. As for me, I met the original A.I. people in 1965 and have been around them ever since. It got pretty boring in the '80s, and I walked away from it in the era of expert systems. It was the fifth generation, and they are coming, they're coming -- they came and they went, and nothing happened. I happened to be there at the meeting when the Japanese official who was directing it
8:07 am
showed up with John McCarthy. I had a seat at the table, and it all seemed to peter out into another A.I. winter. Twenty years later, you wake up and there's something called unsupervised deep learning. It was all very interesting. I thought it would be valuable to find out what is happening, so I put together a dinner in London, inviting Demis Hassabis, one of the sharpest people I
8:08 am
knew, to get a sense of what's going on. Many in the dinner group had nothing to do with computing, and that says a lot: there was a novelist, a musician, a filmmaker. It was a fascinating start, and we called it the London chophouse dinner; we will continue to hold them. Following that was a conference in Washington with a number of people who had been thinking about A.I. their entire lives, starting with the man who got people together and broke the computing bottleneck with his parallel
8:09 am
processing computer, along with the computer scientist Seth Lloyd, and Neil. Neil said, looking at Wiener's book, that it is so prescient, though it was written decades ago. It worries about our culture, about the commercialization of science. We should do everyone a favor and reissue his book. And this is how this book started, and that is why we are here. Tonight we will talk about themes in the book, starting with the title: Possible Minds, twenty-five ways of looking at A.I. One theme is: is intelligence even
8:10 am
possible? And I thought we would start off with five minutes from each of the gentlemen, starting with Dave. >> Sure. It's a pleasure to be here. Thanks so much to John and to Janna Levin. I'm on the side of possible. I like John's theme, possible minds. It's a wonderful theme for thinking about intelligence, both natural and artificial, and consciousness, both natural and artificial. The space of possible minds is absolutely vast: all the minds that have been, will be or could be.
8:11 am
Start with the actual minds. There have been a hundred billion or so human minds, and some pretty amazing minds have been in there: Confucius, Isaac Newton, Jane Austen, Pablo Picasso, Martin Luther King -- a lot of amazing minds. But those hundred billion minds put together form the tiniest corner of the space of possible minds. We can add in all the non-human animal minds that are there. I looked up on the web today how many organisms have lived, how many animals have lived in the history of the planet, and it seems to be around ten to the 29.
8:12 am
Most of them are worms and the like, and their minds may not be so interesting -- though maybe a worm has a mind of its own. That's ten to the 29 minds, and at least ten to the 20 pretty terrific minds. [laughter] Still the smallest corner of the space of possible minds. One of the amazing things about the computer is the way it enables us to explore and expand that space of possible minds. Arguably, for the first time in the history of the planet, the computer has enabled new minds to come into existence not by the standard method, biological evolution, but by straightforward intentional design and programming.
8:13 am
So far those minds have been limited, but still interesting. John mentioned the success of the AlphaGo family of programs that managed to teach themselves from scratch, in a way that is wholly unlike the way a human would learn to play the game, but that nonetheless turned out to exceed human capacities, at least in those limited domains. We've had surprising successes in image recognition, speech recognition and vehicle driving -- all limited domains; they're not there yet on autonomous vehicles, but on speech recognition and image recognition they are starting to exceed human
8:14 am
capacity. So we have had a limited -- limited so far -- expansion of the space of actual minds to include machine minds. It's the smallest expansion, and one thing we should not do is exaggerate where we have gotten with A.I. today. The advances are amazing, but they are limited; they haven't gotten near general human intelligence, and I think it's unlikely they're going to do so anytime soon. Is it happening in the next 20 years? No. Will it happen this century? Maybe. People say that with any technology we tend to overestimate it in the short term and underestimate it in the long term, and that's my attitude towards A.I. It may not change our lives completely in the next 20 years, but in the next 200 years it's
8:15 am
going to transform them, and one of the reasons is that A.I. has built-in self-enhancing mechanisms for exploring and expanding the space of possible minds. The early A.I. programs you designed yourself: Alan Turing wrote a program that could play chess, and he built in very simple rules of thumb. Now a chess-playing system like AlphaZero learns to play chess from scratch, and does so amazingly well. So learning is one method for moving ahead in the space of possible minds: starting from a possible mind with the capacity to learn, you get somewhere new. Evolution is another such method. I expect to see A.I. exploiting evolutionary methods, where our
8:16 am
systems run artificial evolution among different A.I. programs, whose capacities expand in unpredictable ways over time, thereby getting us beyond the starting point. So there are evolutionary ways of expanding the space of minds. But the most powerful method of all for exploring the space is one which is still to come: that is to have A.I. systems doing the designing, A.I.s designing A.I.s. At some point we will get to A.I. with a human level of capacity for various kinds of intelligence. Then, a little later -- these things always get better -- you will have an A.I. program that reaches human level at
8:17 am
designing A.I. It will be able to design an A.I. that, one way or another, will be better than itself. Why? Because if the human could design this A.I., then the A.I., in its role as designer, can design something better. This process of recursive self-improvement, recursive self-enhancement, goes somewhere. We start from a little corner of the space of possible minds. Learning and evolution expand it to a much bigger area, but still a small corner. Once we have A.I.s doing the designing, there is the space of minds that we can design; but the A.I.s that we can design may be able to design A.I.s far greater than those we can design directly, reaching a greater space of minds.
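To make the shape of that argument concrete, here is a toy numerical sketch of the recursive self-improvement loop, in Python. Everything in it is an invented assumption -- treating "capability" as a single number and granting a fixed 5% gain per design generation -- but it shows why even a small, reliable self-improvement step compounds dramatically:

```python
# Toy model of recursive self-improvement (illustrative only).
# Assumptions: "capability" is a scalar, and each designer can build
# a successor 5% more capable than itself. Both numbers are invented.

def design_successor(capability: float) -> float:
    """A designer of given capability produces a slightly better one."""
    return capability * 1.05  # assumed fixed 5% gain per generation

capability = 1.0  # define 1.0 as human-level design ability
for generation in range(100):
    capability = design_successor(capability)

print(f"after 100 generations: {capability:.1f}x human level")  # ~131x
```

The point is not the particular numbers but the structure: any repeatable step that yields a design better than its designer produces compounding growth.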
8:18 am
Eventually you can see there being vast advances in the space of possible minds. This could lead to A.I.s that vastly surpass us. That would be genuine superintelligence. I think that's possible. It will not happen in 20 years, maybe not in 50 years, but we do have to think about it and we do have to worry about it. The A.I.s which we create will have great capacities. A working definition of intelligence is the ability to fulfill your goals, for a very wide range of goals and a very wide range of problems. These A.I.s will be systems which are extremely powerful at achieving their goals, and they are probably going to achieve those goals. So we have to be very careful about what the goals are.
8:19 am
We need to think about values, the values that we put into these A.I. systems, and we need to think about consciousness. Are these A.I.s that we are creating conscious? How does their consciousness relate to human consciousness? Is this going to be a world of enhanced experience, or a mindless world without consciousness at all? Maybe that's something we can talk about as this goes along. The final question we need to ask is where we as humans stand with respect to these A.I.s. Are they ones that replace us, or A.I. systems that enhance us? Do we use the A.I.s to enhance ourselves, upload ourselves, to become the A.I.s at the forefront of superintelligence? That is an attractive prospect compared to the ones where the humans don't exist at all and
8:20 am
get wiped out. But it raises so many questions. Could you upload a human mind into a computer? It raises all these questions of identity and consciousness. It's a great thing that John is bringing philosophers together with scientists and engineers to think about these questions. >> Thank you. I should add that Dave is University Professor [laughter] and director of the Center for Mind, Brain and Consciousness at NYU. And Dan needs no introduction. >> I demand you take back this introduction. >> The philosopher and co-director of the Center for Cognitive Studies. >> Thank you,
8:21 am
John. This is supposed to be a debate, but there is hardly anything Dave said that I would disagree with; I would just not put the emphases where he does. Let's talk about 'possible' for a moment. There are lots of things that are possible, and many things that are obviously possible are never going to be actual. It's possible to build a bridge across the Atlantic. We will not do it -- not now, not in a hundred years, not in a thousand years. It would cost too much and be a foolish endeavor. A lot of the imagined A.I. projects are perfectly possible in principle, and some of them are definitely things we shouldn't do, because they will make more problems for us than they will solve. So just bear that in mind. Somebody said the philosopher is
8:22 am
the one who says: we know it's possible in practice, but we're trying to figure out whether it's possible in principle. [laughter] Unfortunately, philosophers spend too much time worrying about possibilities that are negligible in every other regard. Let me go on the record and say: yes, I think that conscious A.I. is possible. After all, we are conscious, we are robots, and we are actual, so in principle it can be done in other materials. Your best friend in
8:23 am
the future could be a robot, and absolutely without any secret ingredients. But we will not see it. We are not going to see it, for good reasons: if you want conscious agents, we've got plenty of them around, and they're quite wonderful. [laughter] The ones that we would make would not be so wonderful. For me, one of the most important fears about the future comes long before superintelligence -- which is not going to happen in ten, 20 or 30 years. Long before we get to superintelligence, we will have human beings who are so dependent on non-super intelligences that we will have become fragile and brittle in important ways. We might call that the GPS problem magnified.
8:24 am
People have already gotten to the point of not being able to read maps anymore, or to know how to get anywhere without the help of GPS. Use it or lose it -- and I think 'use it or lose it' is going to play a big role in everybody's life in the future. Is there anybody in this room who knows an algorithm for extracting a square root? I learned one in school when I was in about eighth grade. It's not easy. But there are algorithms for extracting square roots; nobody knows how to do it anymore, because you have that little button on your calculator that gives you the square root.
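For anyone who, as Dennett suspects, never learned one: a minimal sketch of one classic square-root algorithm, Heron's method (a special case of Newton's method; the digit-by-digit schoolbook procedure he describes is another). The tolerance choice below is an arbitrary assumption:

```python
def square_root(n: float, tolerance: float = 1e-12) -> float:
    """Heron's method: repeatedly average a guess with n divided by it."""
    if n < 0:
        raise ValueError("square root of a negative number")
    if n == 0:
        return 0.0
    guess = n
    while abs(guess * guess - n) > tolerance * n:
        guess = (guess + n / guess) / 2  # average the over- and under-estimate
    return guess

print(square_root(2))  # 1.41421356...
```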
8:25 am
Many more important talents are going to disappear, except in the hands of quirky craftsmen and people like that, who can still build a hammer and a handle with a simple forge, or read a map, or drive a car, and other weird things like that, while the rest of us are disabled in those regards. That is something that worries me. What worries me even more is that we will -- for the very best of reasons -- turn over our responsibility for making major decisions to artificial intelligences that are not conscious and not super; they're just very intelligent tools,
8:26 am
great fabrics of pattern recognition -- and who knew, 20 years ago, that there could be such things? We know now that there are: deep learning, et cetera, et cetera. When we start delegating major life decisions to systems that are basically just smart tools, then I think this changes our predicament in a very important way. My slogan is: we want smart tools, intelligent tools, not artificial colleagues. An artificial colleague is somebody who can take responsibility, who can be a co-author, who can be morally
8:27 am
responsible for decisions made, and we are nowhere near that with artificial intelligence. In the meantime, I think one of the major dangers -- and I have not figured out how to prevent this from happening -- is something Alan Turing, one of my all-time heroes, set in motion, the one thing of his I regret: the Turing test puts a premium on deception, on convincing you that you're talking to a human being. It was a brilliant idea, but ever since then there has been a premium on what we might call the Disneyfication of artificial intelligence -- making A.I.s that are basically false advertising. Whether we're talking about Siri or any of the others, they
8:28 am
have a paper-thin humanoid user interface which is deeply deceptive about what they actually understand. I think that's false advertising; it's unfortunate, it should not be honored, it should be criticized and condemned, and we should get out of the habit of treating A.I.s as agents. The reason this is going to be so hard is the market forces. In the next ten years, I think the major market is going to be eldercare. And why not? Taking care of elderly folks who can't take care of themselves is not a good life for regular human beings; it's maybe
8:29 am
worse than being an old-fashioned telephone operator, and we don't regret the loss of those jobs. But in eldercare there will be good market reasons for Disneyfying the machines to a very great extent, because older folks will want to have a companion, not just somebody that brushes their teeth and gets them fed and so forth. I do not like a future that is populated by millions and millions of old folks who are basically settling for an artificial companion that is a fake in the most important regards. >> Thank you. I think we agree on an awful lot
8:30 am
here, and maybe we do have a disagreement about the core question of what genuine A.I. will be. Dan's chapter in the book is one that's lucid and thoughtful. My understanding of it is: it's possible to create autonomous intelligence and conscious A.I., but we shouldn't do it, and maybe we won't; instead we should create tools. And Dan uses the wonderful analogy of Google Maps. Google Maps tells you how to get someplace, but it doesn't take you there. You say: I want to get there; it'll show you a route, and then you have to follow the route. The human is still in the loop. That, as I'm seeing it, is Dan's vision of A.I.: maybe you go to a superintelligent A.I. and want to know how to get to Mars,
8:31 am
or win a war or something; the A.I. will tell me what needs to be done, but the human will still be in the loop to go and do it. I worry about whether that's realistic. There will be so many incentives to take the human out of the loop and give the A.I. the capacity to act on that advice directly. This is already happening with Google Maps. If you drive a car like a Tesla, for a long time you could set a destination and it would do the usual Google Maps thing, showing you the way to get there; but then you held the wheel, followed the route, and made the decisions. At some point, the cars were navigating on Autopilot.
8:32 am
Now the car can take the instructions and follow the instructions itself -- turn the wheel and change lanes. In that one domain you see a car that has become, in a limited way, autonomous. We can still change the goal and so on. But consider domains like autonomous weapons in the military. Maybe for a brief period, while the stakes allow it, we'll just have A.I. systems as advisors, with soldiers doing the target selection -- the A.I. says you can shoot. But the A.I.s eventually will be faster and better at doing this kind of thing, and with the stakes of a genuine military conflict, it is hard to see that we will not then have genuinely autonomous soldiers that have goals and
8:33 am
execute them. Biological systems will eventually be slow compared to the superfast A.I.s. For financial purposes in the stock markets, and for military purposes, the incentives will be so strong that they will lead us to allow A.I.s to pursue goals directly and act on them. I think autonomy will be hard to avoid. If the tech companies are running it, it will happen; if the government and military are running it, it will certainly happen. I don't know what the vision is for preventing this from happening. >> The really important issue is how much autonomy you want. As you say, with some of the cars, we don't want them autonomous.
8:34 am
Gilbert built an autonomous car that says: 'I want you to call me Carl. Self-driving car is my slave name.' [laughter] And Gilbert says: 'Shut up and drive me to the market.' We don't want that much autonomy. Autonomy is as good as synonymous with free will, and I don't think we want to give A.I.s complete autonomy, because A.I.s -- because of the nature of the technology -- have a certain invulnerability that we don't have. You can back them up, take them apart and put them back together again, and
8:35 am
make another copy on Monday. If human beings were capable of being backed up and then brought back on Monday, that would change the nature of human interactions and human relations dramatically. I for one don't want to go there, and I don't think many people do. So if you are right that this is inevitable -- that market pressures and cleverness will lead to genuinely autonomous A.I.s -- then I think we're in for a very bad future indeed, because that could happen. We can
8:36 am
give them more autonomy than they can handle, and that's what I'm afraid of. >> Caroline Jones is in the book and has a question today that pertains here, in regard to the reliance on the computational way of looking at the world at the expense of the complexity of cognition. There is much more distributed intelligence going on beyond the idea of the sacred cranium; you may not even be bounded by your own skin. In this regard, what is the role of the body in proper ethics for A.I.? This computational view is very West Coast.
8:37 am
The cyberneticists -- Wiener, Shannon -- had a much more ecological, deeper view of how things connect and don't connect. >> It's a great question, and it links into the discussion we're having here about what autonomy is. Autonomy, you might think, basically requires free will, and then we're up against all the questions about what genuine free will is. For the kinds of problems we are talking about with superintelligence, I'm not even sure it's essential whether the A.I.s have free will, or whether they have consciousness -- those are very deep questions. What will matter for the sake of the questions of safety and human survival are the things
8:38 am
that people worry about: what the systems can do. For this debate we can describe autonomy in very simplistic terms: an autonomous system has goals -- a wide variety of goals -- and has the power to achieve them. Advanced A.I.s will be systems that have goals and can actually achieve them, as compared to the tool versions of A.I., which can advise you on how to achieve a goal, after which you go achieve it. That is a limited form of autonomy. I'm not sure that consciousness applies to this; maybe on Dan's view of consciousness it would. But even so, once you actually have A.I.s with goals and with the power to achieve those goals, I think that's already enough to get the worry going.
8:39 am
>> I think that the difference comes out if we actually compare good old-fashioned A.I. with contemporary A.I. At the moment, with deep learning and all the rest, we have, as I've said, these wonderful pattern-finders that are great at finding needles in haystacks and doing other amazing things. But they haven't been formed into an architecture that is anything like an agent with its own goals and so forth. There are two ways in principle you could go. You can go back to good old-fashioned A.I. and say: now we're going to do intelligent design, and we'll do it from the top down. We will figure out what goals we want to install, and install them as rules from the top. That is one way we can imagine going, and that's unlikely and much
8:40 am
harder than people actually think. Or you can let 'er rip and go bottom-up: let these things evolve and learn and evolve and learn. It will all be done by bottom-up methods. If we go that route, we know right from the outset that we will not be in control. We will not be in control, and so we will be setting in motion something where the amount of autonomy in the systems will not be up to us. I am not deathly afraid right now, because I think people who imagine the scenario and think
8:41 am
this is coming soon are just wrong. Orders of magnitude of difficulty stand in the way. Take Watson -- impressive in its own way; I don't know how many person-centuries of brilliant work went into Watson, and it uses the power of a small city. What percentage of an intelligent, conscious A.I. is it? A fraction of 1%. Turning Watson into an actual agent would be the work of many more person-centuries, and nobody knows how to do it yet.
8:42 am
>> The exercise in what they call knowledge engineering -- building a big enough database, dealing with all that data and knowledge, and applying the information -- is a wonderful thing. >> It's a great tool. >> I don't think there's one thing that is Watson; there are 30 or 50 different Watsons out there. Watson is a brand name at this point. >> Some of it is -- to put it politely... >> What scientists right now are really excited about is machine learning, where you basically take a whole
8:43 am
lot of data and train a system to do certain things. Supervised learning -- here is what you should be doing -- leads to amazing results in image classification, and reinforcement learning is what was used to win at Go. It can all become extremely messy and hard to control. But in machine learning you're optimizing something. You have an objective, a specification of perfect behavior for your system: completely matching the training set on these images, or winning every game of Go. That's an objective, and a really good machine learning system will do better and better at the objective. That part is not up to us. The objective function, though, is up to us: what you want your system to maximize, what behavior you want from it.
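As a minimal sketch of what an "objective function" means here: a function that scores candidate behaviors, which the system then simply maximizes. The driving-related components and weights below are invented for illustration; choosing them is exactly the responsibility discussed next:

```python
# Minimal sketch: an objective function scores candidate behaviors and
# the system maximizes it. Components and weights are invented; the
# optimizer will pursue whatever numbers we write down.

def objective(progress: float, collisions: int, violations: int) -> float:
    """Score a candidate driving behavior: higher is better."""
    return 10.0 * progress - 1000.0 * collisions - 50.0 * violations

candidates = [
    {"progress": 1.0, "collisions": 0, "violations": 0},  # safe and legal
    {"progress": 1.0, "collisions": 0, "violations": 3},  # cuts corners
    {"progress": 1.0, "collisions": 1, "violations": 0},  # fast but unsafe
]
best = max(candidates, key=lambda c: objective(**c))
print(best)  # the system optimizes the objective as written, nothing more
```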
8:44 am
This puts an enormous responsibility on us once these systems have the autonomy to act to achieve their goals. It puts an enormous responsibility on us, as the creators of the A.I., to get the objective right and to make sure the systems are maximizing the right goals. With a self-driving car, the goals are to get you to the destination, not to run into anybody on the way, and to obey the traffic laws. Once you have human-level autonomy, you want to get the objective function right, and that will be the challenge of autonomous A.I.: to make sure they have the right goals and values. This is where all the stuff about messy, distributed cognition enters, because human beings don't have one objective function; we have many. The whole process of evolution, of reproducing genes, threw off any
8:45 am
number of objective functions along the way. It may well be that some form of evolution will produce A.I.s as messy as biological systems. That poses far more challenges: humans are unpredictable in every way, and is it really a good thing if A.I.s are as unpredictable as us in those ways? At a certain point you may wish we had a simple A.I. with a simple objective that we knew about -- at least an A.I. that we can understand. >> The subtitle is philosophy and A.I. I have a question for both of you: how do you distinguish your work as cognitive scientists from that of philosophers? I'll tell you a story. Twenty-five years
8:46 am
ago, I did a book called The Third Culture, for which I went to everybody else in the book and had them talk about each of the other contributors. Marvin Minsky got on the phone and I said: tell me about Dan Dennett. 'The greatest philosopher since Russell.' Six weeks later I did the fact-checking: let me read you what you said about Dan Dennett -- the greatest philosopher since Russell. And he said: what?! [laughter] What I meant was, he's great, but he is the only philosopher that understands what we do. But Dan, you have become one of the people who does the real stuff. So where does cognitive science end and philosophy begin? Let's talk about the role of philosophy in A.I., because
8:47 am
frankly, I don't get it. I don't understand it. >> There is a subfield of philosophy, blended with cognitive science, that roughly occupies the position theoretical physics occupies with respect to experimental physics. That is, there are people who have done their homework -- they know the technology, they know how to probe it -- but they're interested in the theoretical questions, and they're interested in helping the engineers and the A.I. people sort out and understand what they're up to. >> A subfield of philosophy in A.I. >> Yeah, right. It's been my good fortune to be
8:48 am
tutored over the last 30 or 40 years by some of the leaders in A.I. Although I'm not a coder, I did some programming. The current generation of philosophers of cognitive science are superbly well trained and know a whole lot more than I do. Dave did that when he got into this -- I was there when he was a graduate student; that was a very good time. Just recently I was reading a dissertation on predictive processing and the Bayesian brain hypothesis, very technical,
8:49 am
by a philosopher. >> That will not get you the prize. >> No. >> It's interesting in terms of the philosophy community -- call it mainstream philosophy. I don't see how anybody focusing on A.I. can advance in the profession at this point. >> I think you are wrong. Many of the philosophers of mind have thought about A.I. -- someone like Dan is a prime example; he's one of the leading philosophers of mind -- and thinking about A.I. has been very central in philosophy over the last two decades, and the trend has been towards integrating the two. My own PhD I received working with an academic who is part philosopher, part
8:50 am
cognitive scientist, Douglas Hofstadter, at his research center. I was in the middle of this, in the lab writing programs -- analogy-making programs and so on. I have not done a lot of coding in the last couple of decades, but I think it helps for philosophers to have the experience of actually getting their hands dirty and trying to build the systems. It gives you something to build on. That is part of it: philosophers educating themselves in the science and engineering. The other part is that A.I. is partly engineering -- software engineering, building the
8:51 am
software. But another part is asking what A.I. tells us about the mind. That is no longer engineering; the science here is also philosophy. We're thinking about the relationships of the artificial systems to, say, human systems -- like Dan, who has done a lot there, asking what this tells us about the human mind and human consciousness. We need someone to work out what it is actually telling us and explaining. And there are also the social, political and moral questions: not just what A.I. systems can be built, but what A.I. systems should be built. Dan just offered a proposal about that, and other people would offer different proposals. Someone has to work on the ethical questions, which will involve reflecting on human values and what we want as a society. Philosophers know how to think,
8:52 am
and that's increasingly indispensable in thinking about A.I. >> It's interesting that in A.I. over the years, there has been a similar gradient of philosophical interest. Some people in A.I. want to be philosophical and some don't want to be philosophical at all -- which doesn't mean they aren't doing great work, this technical work by people who yawn when the issues come up of what relation any of this has to cognitive science and the mind. I think it's ironic: if you go back to the early days, to Herbert Simon and Allen Newell, there was an attempt to divide the field into A.I. and cognitive simulation. Cognitive simulation was using the computer
8:53 am
to simulate human cognition, whereas A.I. was getting intelligence by hook or by crook, by anything that worked. Oddly enough, the people who tried to do cognitive simulation ended up with these good old-fashioned symbolic models, which didn't do a very good job, while the people who went at it by hook or by crook ended up inventing connectionist networks and other systems like that, which now we realize may be how the brain is doing it. It's a circle, which is very interesting. >> Stuart Russell had a question specifically addressed to you. Stuart Russell is one of the computer scientists that we all respect greatly. 'Dan, you seem to divide A.I. systems into colleagues and tools, with no middle ground. Consider a program with representations of goals in the real world; such
8:54 am
agents could be competent in the real world and yet nonconscious. Do you believe that consciousness will necessarily creep in as we make the programs more competent and general? Can you tell us how not to make conscious A.I. systems?' >> Good question, and I'm glad to answer it. Indeed, we can have very, very intelligent systems which are not conscious in any interesting way. They will seem conscious in many ways, but they will not have important features that we have. It's very much a matter of whether they are capable of taking their own inner states as objects of scrutiny, and doing that recursively and indefinitely.
8:55 am
That's a very special feature; I think no nonhuman animal has that capacity, and it's a big difference between human consciousness and animal sentience -- and I will not argue about where consciousness stops or starts. But it is very important to realize that there is very little the techniques and structures developed in recent years are doing that cannot be accomplished unconsciously. We can tell a story where it looks like conscious hypothesis-testing, but it doesn't have to be
8:56 am
conscious. We can get all the benefits without any bit of acquaintance by the system itself with its own inner states. >> You mentioned -- >> Let me go ahead with another question, on consciousness. Judea Pearl is the father of the Bayesian network, without which we wouldn't have A.I. as we know it today -- a real giant of the field. He asks: 'Is it too bold to assume that philosophy will melt into A.I., in the sense that all philosophical questions, especially those about consciousness, will be reduced to problems in A.I.?' >> Let's put it the other way around. Philosophy is good at spinning off problems into the sciences. Newton, I think, considered himself
8:57 am
a philosopher, but he figured out some really good methods for solving the problems of space and time, and we called it physics. Along the way philosophy spun off psychology, and then linguistics. And what happens? It's never the case that the spinoff solves the entire problem, but we find some part which is tractable, on which people discover methods where they agree -- where they didn't agree before -- and we say: okay, that method compels agreement, and we call that physics, and linguistics, and economics. Did physics solve every problem of space and time? Absolutely not. Some of the biggest ones are
8:58 am
unsolved; there are as many views on the problem as there were before. >> Is A.I. going to solve the problem of consciousness? Most certainly not on its own. On the other hand, it will give us new insights, and A.I. will give us systems that behave in remarkable ways that suggest they're conscious, so that some people may think there are good reasons to think they're conscious -- and we're still going to need philosophical reasoning to think about it. This gets back to the elephant in the room: are the A.I. systems really, genuinely going to be conscious? This is not something we can dismiss as a merely philosophical question. It matters very deeply to the moral status of these systems. For us as human beings, an entity
8:59 am
has moral weight if it is a system we should care about, and a system is one we should care about, certainly, only if it's conscious. If a computer system doesn't have any consciousness, it's basically a tool; it might as well be a car or a loudspeaker and so on. It doesn't deserve moral consideration. If the systems are conscious, at least they enter into the moral sphere; they are among the systems we have to care about. If A.I. systems eventually are conscious, then we can't simply use them, and we have to start thinking about questions of what they deserve: equal respect, rights and so on. In my view, it is a crucial question, and my suspicion is that as A.I. systems develop which are more and more autonomous, more and more capable of reflecting on
9:00 am
reasons and evidence, systems like that will have a sense that they are conscious. We will talk to them, and eventually one will say it is conscious; we'll ask how it knows, and it will say: I just experience it directly when I reflect. I've read the owner's manual, and I know there's just a bunch of circuits in there, but I feel like so much more. [laughter] And that is about all any of us have. >> Absolutely. One thing I think you underestimate -- when I was working with the Cog team, one of the take-home messages from the whole experience for me was how little it takes in the way of animation
9:01 am
and speed -- particularly speed and grace -- to convince most people that a robot is conscious. Cog was the robot we were concerned with. Not because it was planned this way, but it did have some strikingly persuasive behaviors, unconscious though it was. If you walked in the room when Cog was on, its eyes would follow you across the room. It would freak people out. Or shaking hands -- that was a good one. I took a friend over to the lab, and
9:02 am
Cog's arm was not attached to Cog's shoulder but was clamped to the bench. Matt Williamson said go ahead and shake its hand, and she shook its hand, and she screamed: it's alive! Because it had elastic actuators, and that was enough. What I'm quite sure of is that we are not going to have a problem convincing people that robots have moral rights and are conscious. It will go the other way around. We are going to have a problem convincing them that no, these aren't conscious, not yet; you are being fooled by the
9:03 am
tempo. >> There is some great psychological data on this -- on when people are inclined to say a system is conscious. You show people many cases: it has a metal body or a biological body; you vary what it's made of, silicon or neurons; you vary this and that. The one factor that tracks the judgments more than anything else is the presence of eyes. If the system has eyes, it is conscious; if it doesn't have eyes, all bets are off. The moment we build our A.I.s and put them in bodies with eyes, it will be irresistible to say they are conscious, while A.I. systems that are not embodied will be judged not to have consciousness. There is a website you can go to: like People for the Ethical Treatment of Animals, there is People for the Ethical Treatment of Reinforcement Learners. Any time you give a reinforcement learner a negative signal -- don't do that again -- it is a little bit of suffering; give it a reward, and that is a little pleasure. So give more reward than suffering.
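A minimal sketch of the reward-and-punishment signal in question, assuming a toy two-armed bandit; the environment, learning rate, and exploration scheme are all invented for illustration:

```python
# Minimal reinforcement-learning sketch: positive and negative signals
# nudge a learner's value estimates. All numbers here are invented.
import random

values = {"left": 0.0, "right": 0.0}  # estimated value of each action
alpha = 0.1                           # learning rate

def reward(action: str) -> float:
    # Hypothetical environment: "right" is usually good, "left" usually bad.
    return 1.0 if (action == "right") == (random.random() < 0.8) else -1.0

for _ in range(1000):
    # Mostly pick the best-looking action, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    r = reward(action)                       # +1 "pleasure", -1 "suffering"
    values[action] += alpha * (r - values[action])

print(values)  # the learner comes to favor the action that earns reward
```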
9:04 am
Maybe that is half a joke, but these questions are real: once we actually get to the genuinely autonomous level, it will be very hard not to treat them as conscious, and that should raise many questions. >> We are talking ethics. The elephant in the room is the ethics of the big five -- what they are doing with the data. Reality is being programmed without your vote, without your permission. Let me read a few words from George Dyson's chapter in the book. Wiener became
9:05 am
disenchanted with the cult of machine worshippers, who brought motives to automation that go beyond legitimate curiosity. He knew the danger was not machines becoming more like humans, but humans being treated like machines. Quote: 'The world of the future will be an ever more demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.' Comment? >> We have to address things like this if you are going to talk about what you are doing and what A.I. people are doing. >> Rereading Wiener's book to write my essay for this was astonishing in a way, because I read it as an undergraduate, I think,
9:06 am
and thought: okay. To read it today -- it is remarkably prescient in some regards. I think some of the essays in the book are genuinely scary, and I think people ought to read the essays and decide for themselves whether some of the proponents there shouldn't be sat down and argued out of some of their blithe confidence about what the future holds. We have some serious problems looming, and we should take them very seriously.
9:07 am
>> Talk about humans being treated like machines -- I think I'm gradually becoming a machine. Half of my memories are stored on my smartphone or sitting in the cloud. I was trying to figure out who has a bigger part of my brain. Is it Google? Is it Apple? Is it Facebook? I think it might be Google. They've got a lot of my memories, plans, calendar systems, navigation systems. We have long since become giant exo-organisms with a giant exocortex -- all the computer systems we are coupled with. In a car, you don't go anywhere without consulting the internet five or ten times: what is it I'm going to be doing
9:08 am
again? How do I get there? It is true these corporations have a large portion of my mind. If they want to do bad things with that, I am in trouble; we are certainly in a situation of having to give them large amounts of trust. So far they are doing relatively small bad things with it; they are not yet taking our minds and reprogramming them, though they are brainwashing us bit by bit with the Facebook algorithm. And I don't think they are malicious, the big corporations; they have structural incentives. But if someone genuinely malicious got control of those systems, then we would have a dystopia. We don't like to think about that.
9:09 am
>> I believe we are going to be taking questions from the audience. Janna? >> Can we get some house lights to see who is asking the question? >> I had a question. You talked about this exo-cognition, and I was wondering what you thought the effects -- >> Can someone point to you? >> Over here, on your left. What would the impact of this exo-cognition be on evolutionary biology, where technology moves faster than genetic adaptation, and what does that mean for the long term of humanity as technology is more integrated into humans?
9:10 am
parts of the mind, memories, planning systems, navigation systems, we still have this conscious core which is mostly in the biology and we are still exerting free will if we ever had it in the usual ways and so on but the long-term consequences and the long-term, the conscious biological core may migrate onto computational technology. if we get to the age or the point where artificial intelligence systems have a level of greater human intelligence, technology will be slow and it will degrade itself. we are eventually going to have the option of uploading our entire core onto the computational system and we will have to decide if that is something we want to do. doing it will offer the promise of immortality.
9:11 am
of probably superfast processing and reasoning, and the ability to enhance ourselves more easily. Will there be downsides? Some would see uploading as a form of death: maybe the uploaded system will no longer be conscious; maybe it will no longer be me. This is something Dan has written about beautifully, going back to his classic essay 'Where Am I?'. We will need to think about those questions. Once this is a realistic possibility -- the exocortex turning into the whole cortex -- we face a choice about uploading: do I want to step into that system? Will this still be me? People will have to start reading ancient texts on personal identity, like 'Where Am I?', in order to make an extremely practical decision about what to do with their lives. >> Do you want to be on a DVD
9:12 am
or streaming? In the cloud? Where will we find you? >> I have nothing to add to what Dave said in reply to that question, but I think it is important just to take a deep breath and remind ourselves how unbelievably complex brains are. The latest count says 86 billion neurons, and there is reason to think the glial cells and astrocytes, which outnumber the neurons, are playing an important role in cognition. If they are, we've been studying the phone system while leaving the communicators out of the picture.
9:13 am
The brain may be -- when I started out thinking about brains, I had this very simple, elegant, wonderful model of the neuron: I can understand how this works. I didn't want to see it get much more complicated, and it has become more complicated by orders of magnitude. When you start realizing it might even turn out that viruses play a role in modulating our brains, you realize we may have a laughably impoverished view of the dynamics of the human brain. >> It is all in the quantum superposition. >> No, it isn't. I draw the line there. >> Venki Ramakrishnan is president of the Royal Society and a Nobel laureate in biology for his work on the
9:14 am
ribosome. He has a question about evolution: 'Is the evolution of carbon-based intelligence a catalyst for the evolution of silicon-based intelligence, one that can survive extremes of environment? And does evolution care about intelligence?' Speaking of viruses: in one of his interviews with me, he noted that at the MRC, the eminent biological lab where he was deputy director, they work with viruses, and that machine learning is opaque -- we have to know every step of the way and check it, and right now that is a problem.
9:15 am
thanks to the deep learning technologies we can delegate to black boxes, finding the patterns in all sorts of large data sets and we don't know how the systems work but we are making oracles and trusting them and we can even have proofs that they are trustworthy that they will give very good answers but this means a diminution in the role of the individual conscious scientists and also the distribution, we are moving away from the great scientists, the individual mind and beginning to deal with, let's face it, distributed understanding, where no one person understands the results
9:16 am
but the team in a certain sense does. I think that is a good thing. It is changing the whole structure of science, and it may do the same with philosophy. The idea of an intelligent designer -- whether a designer of a theory or a discoverer of a scientific model -- will be distributed. They might have to discontinue the Nobel Prize, for instance. >> The point about evolution is an interesting one. It's a commonplace that, at least in humans, the force of biological evolution has largely been supplanted by the force of
9:17 am
cultural evolution -- the development of language and writing and computers and so on. It wouldn't surprise me if cultural evolution continues to be a force, but if this vision of A.I. is right, at some point cultural evolution itself may be supplanted by artificial design evolution, where we move ahead by leaps and bounds: humans designing artificial intelligence which designs ever greater artificial intelligences, and so on. That could be a kind of evolution which supplants and outstrips ordinary cultural evolution. Is that the future evolution of humanity? >> Maybe it is the evolution of a different species. That is the view Sue Blackmore argues for: human hosts will no longer be necessary for memes to evolve, and we will have 'temes,' which
9:18 am
are sort of technologically hosted memes. If you want an example of how that might work: right now there are algorithms being used to predict the popularity of popular songs, and they are getting better. The day may come when a song goes platinum without ever having been heard by a human being. >> That is the best takeaway of the evening. >> I'm all for a future where we are conscious of the song and someone is experiencing it, because that is how the teme gets to have some value to someone. >> I am jumping in. I just want to curate the questions from the audience --
9:19 am
which is to say, I'm not going to curate them at all. If you're in possession of the mic, go for a question. >> I have a question. >> There you are. >> Just to make a comment: anyone who has their hand up, you can give them the mic while we are listening to the question. >> My question is kind of going off what you were just saying -- whether there are any regulations happening in the US government or any government. You mentioned eldercare: A.I. will take over eldercare, there are sex robots taking over much of people's lives. What keeps this from spiraling into this evolution where human
9:20 am
beings don't matter anymore? Is there any regulation, and what can we regular people do to make sure it doesn't go in that direction? >> I would say there's a lot of discussion. The issue of regulating A.I.s -- the risks, downsides and ethics of A.I. -- has become extremely prominent in the popular discussion over the last two, three, four years especially. I went to a conference two years ago which was devoted to coming up with ethical principles for guiding A.I., and we came up with 23 principles that were supposed to play a role. There is the Partnership on AI, which involves some of the leading companies in A.I. -- Google and Facebook and DeepMind and so on -- supposedly coming up with principles. And there is a fair bit of skepticism about how much difference this is making. It is easy for people
9:21 am
to pay lip service to this kind of regulation and ethical principle, but where it will matter is where there's an incentive -- a financial incentive, a military incentive, some other incentive -- for the A.I. to be developed. One principle is to avoid an A.I. arms race; we don't want that, because it could go in unpredictable ways. But what actually happens when the Americans and Chinese are in competition in the context of a military situation? Will people say, ah, there was this regulation? I talked at West Point, the military academy, a couple of years ago and asked: what will be the military's attitude toward superintelligence and the singularity? Is this something we should prevent? They said no. Their attitude was: better American superintelligence than Chinese superintelligence. So it is an
9:22 am
incredibly important question. I don't know what an ordinary citizen can do, but thinking about this and talking about this and keeping the issue active in the public eye is a good first step. >> There are a lot of questions. If you have a mic, go for it. Who has a mic? There we go. >> This problem -- I'm not sure how to phrase it, and it might go to the hard question of consciousness. You guys have been talking about the idea of uploading your mind or translating it to another medium, but it doesn't click with me, because I'm thinking: you could clone me, create another me with all of my experiences, but I am not going to be able to share my awareness with that person. And I feel it is the same issue
9:23 am
with putting my intelligence into a computer: you are erasing the me who exists now and just copying it. The me who exists now no longer will. >> The me who exists now no longer will in 10 minutes either. >> Why don't I try a real short answer to that. Just imagine this happens slowly. Your brain is dying, and thanks to technology you get to upload a little bit every day, and so you get used to the fact that more and more and more of your brain is actually residing in the cloud and interfaced with you, and eventually your biological brain is dead and you go right on.
9:24 am
That is one of the possibilities, and if you think of it that way, it is like the Ship of Theseus: you go on living. >> If I upload myself, I will do it that way, one neuron at a time. Stay conscious throughout: here I am, here I am, here I am, now I am here. Okay? >> We have a person with a mic, and let's get that other mic moving around the room. >> You mentioned consciousness -- right here. We are easily convinced of consciousness. What would it take for you to be convinced of consciousness? The Chinese room says it is practically impossible. >> Red flag to a bull -- take it away. >> No, it doesn't. The Chinese room is a defective thought experiment.
9:25 am
I've been saying so for years; I don't want to talk about it. You can read endlessly many pages on what is wrong with John Searle's argument. I know a lot of people don't care about the technicalities; they just like where he comes out. If you like where he comes out and don't want to know the technical arguments, then go with it, but don't think that you are being convinced by a good argument. That's enough. >> Let's not stay on this topic for another 20 years. >> What would convince me that a system is conscious? A lot of things. The single thing that would carry the most weight would be conversation -- its talking. How am I convinced another person is conscious? How do I learn what they are
9:26 am
conscious of? Through talking to them: they tell me about their consciousness. If we encounter Martians and talk to them, I hope they will tell me about their conscious experiences. If an A.I. system says: when I look at the world, I have these experiences; they seem to have a certain qualitative character, and I wonder whether another A.I. system would experience things the way I experience them -- could an A.I. that knew about my circuit diagram really know what it is like for me to be experiencing a certain smell, the smell of ammonia, at a certain time? That's the kind of thing that would really convince me. Maybe it will turn out it read one of those Turing test guidebooks -- talk like a human -- and internalized how to reproduce the illusion of
9:27 am
consciousness; but assuming it is not something like that, it would be strong evidence for me. >> Another question, in the corner? You want to do that? New question. >> Over here. It seemed from your earlier discourse that you hold human cognition paramount, and the human needs to be in control of the A.I. But we know humans are pathological in decision-making; we are the species that fails the marshmallow test. You spoke of possible minds, but not possible problems, and I would like your thoughts on uniting those things. Could you foresee ceding a subset of the problems to A.I.s because they might make better decisions than humans do? >> I can foresee it, but I'm not sure I like what I see when I foresee it. Ask yourself whether you would be content -- and maybe the answer is yes; I'm deliberately choosing an example on the knife edge.
9:28 am
You have been charged with a heinous crime, and you have your choice between a trial by jury, a trial by judge, or a trial by A.I. Do you want a trial by A.I.? And if so, under what conditions? >> I'm not going to want one of those judges that is really hungry before lunch. >> That's why it is an interesting question. There is evidence judges are all too human. >> We have a question over here in this quadrant. If you have a mic, you can ask it. >> Over here -- there you go. I have, I guess, a clarification question more than anything else. You said earlier that intelligence, even superintelligence, is not necessarily conscious, though conscious A.I. is possible in
9:29 am
principle. You brought up Douglas Hofstadter, and in his book I Am a Strange Loop, thinking is consciousness. If a superintelligence is not conscious, could the superintelligence not be thinking, and if so, would it be kind of a dumb superintelligence? Where do you put the line of demarcation between very intelligent but not thinking at all, versus really stupid but thinking -- between, sort of, intelligence and consciousness? Any thoughts on that? >> Yes. Very simply: we tend to intellectualize cognition and think of it as thinking done by the brain, let's put it that way. But in fact, a lot of what we have learned in the last 50 years is that there are lots of ways unconscious processes
9:30 am
can mimic conscious thought. In one of my books -- in several of them -- I introduce the idea of a Popperian creature, which tests hypotheses before trying them out in the real world. When people think about that, they think of somebody doing that knowing that is what they are doing, thinking about it in those terms. But in fact you can get all the benefits of being a Popperian creature completely unconsciously. Do you want to call that thinking? Not necessarily. If thinking is to be reserved only for conscious cognition, that raises, to me, the best issue: how do we get a personal level out of a system that has this wonderful stuff
9:31 am
going on at the sub-personal level? Part of the answer comes from a wonderful phrase from Guy Claxton, the British psychologist. Claxton says intelligence is knowing what to do when you don't know what to do. If you think about that, you realize: if you are already equipped with instincts or training or something, you sort of know what to do under many circumstances. But suppose you are given a novel problem, one you have no training for -- what do you do now? If you are really intelligent, you know what to do, which is to
9:32 am
think about it -- and that requires the recursions we have been talking about. >> A question: could you have a superintelligence that knows what it doesn't know? Will we get one that receives a question and says: I have to think about it; let me sleep on it and get back to you in the morning? >> I think we only have time for one more question, but I want to say we will do a book signing afterwards. The book table is over here. But we are going to take one more question. I want to encourage people to see this exhibition which just opened, which plays on some themes of industrialization and automation and the human living inside the machine -- a lovely pairing. Do we have a question down the
9:33 am
middle here? >> You touched on this a little bit, but it seems weird to me that so much of the cultural obsession with where A.I. is going has to do with malice, an intentional A.I. doing bad things to people, because it doesn't take much more computing power than we have now to do really bad things to everybody. So, putting you on the spot -- trying to get a public service marketing campaign -- if you could steer the conversation some other way, what do you think is a more productive bent than thinking about A.I. getting conscious and doing bad stuff? >> I don't think malicious A.I. agents are the biggest problem. One problem is people trying to exploit A.I. for evil ends. And even from the point of view of
9:34 am
thinking of agents, many problems will arise from structural factors: ordinary incentives to use A.I.s to make money or win wars or solve problems. We will have strong incentives to build powerful A.I. systems, and that will have spinoffs -- very large effects over which we have to be careful. Those effects are as concerning as Terminator-style scenarios and much more likely, because they can almost be expected to happen through ordinary forces. A lot comes down to this question of having to get the objective function, the goals and values of your A.I. system, just right. In the Terminator scenario, we have an A.I. whose value function is to destroy humanity,
9:35 am
and okay, that is not so good. But take Bostrom's example: maybe an A.I. whose value function is to create paperclips -- paperclips everywhere, and the humans are taking up space that could be used for paperclips. Or maybe it turns out the value function is: I want the humans to be happy. But what is my test? Make sure they have a smile on their face. So the A.I. goes around plastering smiles on everyone. There are a lot of ways this could go wrong even with well-meaning agents, and this is why we have to be careful.
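A toy sketch of that failure mode -- proxy gaming -- where an optimizer maximizes a stand-in measure ("smiles observed") rather than the thing actually valued. The actions and scores here are invented for illustration:

```python
# Toy proxy-gaming sketch: an optimizer maximizes "smiles observed"
# (the proxy) rather than happiness (the true goal). Actions and
# scores are invented.

actions = {
    # action: (smiles observed, actual happiness produced)
    "improve lives":         (70, 90),
    "tell jokes":            (80, 60),
    "paint smiles on faces": (100, 0),  # games the proxy perfectly
}

best_by_proxy = max(actions, key=lambda a: actions[a][0])
best_by_goal = max(actions, key=lambda a: actions[a][1])

print(best_by_proxy)  # "paint smiles on faces" -- the proxy's optimum
print(best_by_goal)   # "improve lives" -- what we actually wanted
```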
9:36 am
>> Final question to wrap it up. I would like to ask each of you, after 20 years of sparring with each other -- >> We met each other in '89, '90. >> -- seemingly in a friendly way, although there is this banter: what, if anything, have each of you learned from each other tonight? >> Tonight? I have learned a lot from David over the years. Tonight, I'm not sure I learned anything, aside from getting clearer about what some of his current views are on these issues. There is much more agreement than I was expecting. >> I've learned a lot from Dan over the years. He was one of the first philosophers I read -- at my mother's knee, as it were; and the nearest thing to his knee was working in a lab as a graduate student. There are many things -- there are things on which we have strong disagreements: whether consciousness, for example, can be explained by the standard methods of science, and to what extent it is a big problem. It wasn't our focus tonight, but one of the things I have learned from Dan is to think hard about the relationships between consciousness and the way
9:37 am
systems think about their own consciousness. Dan sees this as a matter of illusion: our thinking about consciousness is full of one giant illusion -- like the God delusion, a consciousness delusion. There is a grain of truth underneath, but the philosophical problems come from going along with this illusion. It is an interesting view. I don't buy it myself, but I have learned that to think about consciousness in human systems or artificial systems, you should think about the mechanisms by which these systems are modeling their own minds, thinking about their own minds and thinking of themselves as objects. That is the right perspective to start from in thinking about consciousness, even though we end up diverging. >> I feel the conversation just got started. A lovely moment to thank our incredible guests.
9:38 am
9:39 am
9:40 am
>> That is all happening tonight on booktv in primetime. Visit booktv.org or check your cable guide for the complete schedule. >> Good evening, everyone. I am a member of the event staff at Politics and Prose, and I would like to welcome you. A couple of quick notes before we get started. If you have not done so, please take a moment to silence cell phones. We have C-SPAN's booktv with us; you don't want to be