Max Tegmark, "Life 3.0" — C-SPAN, November 20, 2017, 11:28pm-12:37am EST
[applause] The importance of discussing artificial intelligence may very well be crucial to human destiny. Trying to assess everything that could go wrong is not being alarmist. On the contrary, deep thinking, careful analysis, and advance planning will allow us to ask what kind of future we want, and ultimately enable us to create a future with AI that is positive for humans and machines. We are fortunate that these conversations are happening now, and that two of the thought leaders driving these conversations are our special guests tonight. First we will hear from Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy and author, with Andrew McAfee, of two best-selling books, "The Second Machine Age" and "Machine, Platform, Crowd: Harnessing Our Digital Future." Next we will hear from Max Tegmark, cofounder of the Future of Life Institute and author of the new book just coming out, "Life 3.0: Being Human in the Age of Artificial Intelligence," which I understand is in its second week on "The New York Times" bestseller list. Congratulations. The two gentlemen will then have a conversation together, and finally there will be time for questions from the audience before we conclude with a reception and a book signing. Without further ado, please join me in giving a warm welcome to Erik Brynjolfsson and Max Tegmark. [applause]
>> Thank you, Jack and Susie, for supporting us. I especially want to thank Max — we've only known each other about three years, but he is one of my best friends. He has a joyful mind, and when he said let's do this, I jumped at the opportunity, mainly because it's fun to talk to him. We'll have a fun conversation, and I also want to hear your questions and comments. We live in very unusual times. As you have read and seen, cars are driving themselves and people are talking to their phones. It's a little bumpy — they're not really that good at it yet — but machines are beginning to talk to us. Over a roughly ten-year time frame we've gone from machines not understanding us at all to understanding us very well. That's a unique time, and we are lucky to participate in it. It opens amazing possibilities. This could be the best decade in the history of humanity, or one of the worst, because the power being unleashed by artificial intelligence is unlike anything before. Let's define what we're talking about. You can think of artificial intelligence as techniques to imitate the capabilities of the human mind.
Could you speak to a machine and not know whether you're interacting with a human or a machine? Machines are not passing that test yet, but they're getting closer. Within AI is a category called machine learning, and this is what's driving a lot of the excitement. Old-fashioned AI was an area where we would teach the machines what to do: this is how you play checkers or chess, these are the rules, and the machines would follow those instructions. Now the learning revolution is taking over. Instead of us telling the machine step by step what to do — which frankly didn't work that well — machines are learning for themselves how to solve problems. The way they do that is we give them many examples: this is a dog, this is a cat, this is a dog, this is a cat. We usually do it ten thousand or a million times, and eventually the machine says, I see the pattern. We have so much digital data that we can show them many examples of fraud, or successful moves, or faces, and eventually they start learning the patterns. A particular subcategory — the biggest driver of the recent progress — is deep learning with deep neural networks, loosely based on the human brain.
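To make the labeled-examples idea concrete, here is a minimal supervised-learning sketch in Python. It is only illustrative: the tiny dataset, the two made-up features, and the network size are all invented for the example, and it uses the scikit-learn library rather than any system mentioned in the talk.

```python
# A toy "this is a dog, this is a cat" classifier: learning from labeled
# examples instead of hand-coded rules. The data below is invented purely
# for illustration (two hypothetical features per animal).
from sklearn.neural_network import MLPClassifier

# Hypothetical features: [weight_kg, ear_length_cm]
X = [
    [30.0, 10.0], [25.0, 9.0], [40.0, 12.0],   # dogs
    [4.0, 6.0], [5.0, 5.5], [3.5, 6.5],        # cats
]
y = ["dog", "dog", "dog", "cat", "cat", "cat"]

# A small neural network "loosely based on the brain," trained on examples.
model = MLPClassifier(solver="lbfgs", hidden_layer_sizes=(8,),
                      max_iter=2000, random_state=0)
model.fit(X, y)

# After seeing enough examples, the model has "seen the pattern."
print(model.predict([[28.0, 11.0], [4.2, 6.0]]))  # expected: ['dog' 'cat']
```

In a real system the examples number in the millions and the network is far deeper, but the workflow — labeled examples in, learned pattern out — is the same.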
There is also a category called reinforcement learning, where machines can learn new strategies on their own. One example comes from a company called DeepMind. On the cover of a science magazine, they described giving a machine-learning algorithm the raw pixels of Atari games. How many of you have ever played Breakout? They gave the machine the raw pixels — they didn't say what the game was; the machine had to figure that out. They gave it the raw pixels, a controller to move left or right, and the score, and said: look at the score; your job is to move the controller around and try to make the score as high as possible. At first, the machine wasn't very good. It would sometimes get lucky and hit the ball; other times it would miss it. Whenever it was successful and the score went higher, the machine in effect said, I have to do more of that. After 300 games it was pretty good, almost never missing — as good as a good teenager. So they let it run for a while. I don't play Breakout a lot, but there's a strategy, and the machine figured out how to send the ball behind the wall so it bounces around up there. It learned to play better than the designers. Imagine a newborn baby being handed the game in the hospital, and by the end of the day it's beating everybody. That's a cool little example.
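As a rough illustration of the trial-and-error learning just described, here is a self-contained tabular Q-learning sketch in Python. It is not DeepMind's system (which used deep networks on raw pixels); it is a minimal stand-in with an invented five-position toy game where the only feedback is the score, so the agent must discover on its own that moving right pays off.

```python
import random

# Toy "game": positions 0..4 on a line; being at position 4 earns a point.
# The agent is told nothing about the rules, only the score it receives.
N_STATES, ACTIONS = 5, (-1, +1)          # move left or move right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

def step(state, action):
    """Apply the action and return (next_state, reward)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def greedy(state):
    """Pick the action that has looked best so far, breaking ties at random."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

for episode in range(300):               # "after 300 games it was pretty good"
    state = 0
    for _ in range(20):
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        nxt, reward = step(state, action)
        # Whenever the score goes up, "do more of that": nudge the estimate.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# The learned policy should now be "move right" (+1) in every state.
print([greedy(s) for s in range(N_STATES)])
```

The design choice is the same one described in the talk: nothing about the goal is coded in; only actions, observations, and a score.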
The same technique works for other games — it worked on Space Invaders and Pac-Man. Anything with a quick feedback loop, where the score changes quickly, it was able to learn on its own very fast. And you can use these techniques for other things. You could think of a data center, where Google has its computers running, as a big video game: data is coming in about temperatures, the score is how efficiently it's running, and your controllers are the valves you can adjust. Google had a bunch of smart PhDs working on this, and they thought they had it running as efficiently as possible, but when they put the machine learning on it, it got dramatically better. Before, it ran one way; with the machine learning turned on it was 40% more efficient, and when they turned it off, it went back to the way it was before. The machine figured out how to run the data center better than the geniuses at Google. So then you think: why not do this for all kinds of factories, for steel finishing lines, for makers of any object? There's room to improve all sorts of categories.
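The data-center story amounts to framing an engineering system as a game: observations are sensor readings, actions are control settings, and the "score" is efficiency. Here is a minimal sketch of that framing in Python. Everything in it — the toy thermal model, the numbers, the class name — is invented for illustration and has nothing to do with Google's actual system.

```python
import random

class ToyCoolingEnv:
    """A made-up cooling 'game': pick a valve setting, get an efficiency score."""

    def __init__(self):
        self.temperature = 30.0                          # degrees C, arbitrary start

    def step(self, valve_setting):
        """valve_setting in [0, 1]. Returns (observation, reward)."""
        heat_in = random.uniform(1.0, 3.0)               # fluctuating server load
        cooling = 4.0 * valve_setting                    # more valve = more cooling
        self.temperature += heat_in - cooling
        energy_cost = 2.0 * valve_setting                # cooling uses energy
        overheating_penalty = max(0.0, self.temperature - 35.0)
        reward = -(energy_cost + overheating_penalty)    # the "score" to maximize
        return self.temperature, reward

# Any learner that only sees observations and rewards can, in principle,
# search for a better valve policy than a hand-tuned rule like "always 0.7".
env = ToyCoolingEnv()
total = 0.0
for _ in range(100):
    obs, reward = env.step(0.7)
    total += reward
print(round(total, 1))
```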
There are three areas where machine learning has made big breakthroughs, and we talk about them in the book. One is in recognizing what it sees — and if you're going to get snacks afterwards, be careful what you're reaching for.
These are some muffins — but not all of them are muffins, and sometimes even we make mistakes. At Stanford they have ImageNet, a set of 14 million images, each labeled by humans as to what it shows. In 2010, machines trying to say what was in each image were wrong about 30% of the time. Today they're wrong about 2.6% of the time, and the jump came when they started using these deep machine learning algorithms. Humans are at about 5%, and we haven't improved much — we're running pretty much the same hardware and software. It matters because for many tasks it's better to have a machine do it than a human: cheaper, or more accurate. For instance, you can use them to diagnose diseases. You have examples of patients who don't have cancer and patients who do, and the machine will figure it out as well as a pathologist; one study looked at skin cancer and the machine did better than the humans. I mentioned voice recognition — that's just in the past year, since 2016. Humans have about a 5% error rate; machines are in the ballpark, not quite better than humans.
That's opening up economic possibilities. Once a machine can see and recognize things like a pedestrian or a bicyclist, it starts to become feasible to give it control of the car. When they first started doing this, the systems made an error about once per second — not what you'd want in a car. Now it's about once per 30 million, which is years without making a mistake. Very soon we'll see more of these on the road. I've ridden in them, and I feel quite comfortable being driven down the road, making a left turn through traffic, and ultimately I think it will be much safer. There are on the order of 30,000 deaths a year from human drivers today; we could drop that by 90 or 99%. Machines are probably not 100%, so we will have to face some ethical questions when they do make mistakes. Rodney Brooks, who used to be at the MIT AI lab, has a company in Boston called Rethink Robotics. Its robot Baxter works for about $4 an hour doing simple tasks. No computer programming: you show Baxter what you want it to do, and after a few examples it gets it. Of course it can work seven days a week, 24/7. I've watched another robot sorting things — soft objects like clothing — faster than humans, and that will replace work in those areas. The same goes for medical diagnosis, and in the legal area, where
they found 360,000 hours of routine legal work. What does this mean for the economy? There's good news and there are big challenges. Technology makes the pie bigger, but there's no economic guarantee that everyone will benefit. It's possible for some people to be made worse off than they were before, and that's part of what's been happening — and it could get worse. Productivity is growing and GDP is at a high, but median family income is lower than it was in the '90s. There were reports out recently of an uptick of over 3% last year, and depending on the adjustment it may have matched the previous high; if you normalize it, it's still lower than in 1997. How can the median be so much lower? Because the median is the 50th percentile, not the average: half the people are higher and half are lower, and the median can stay flat while a bunch of wealth goes to the top 1%. That's what has happened as computers kicked in — and the 1% have their own 1%. The share of income going to the top is at a new record high; the only time it was close was right before the Great Depression.
So we're having an economic challenge: the pie is getting bigger, but the distribution is becoming more skewed. Part of that has to do with tax policy or international trade, but most economists see the way technology is being used as the number one driver. That's not inevitable. We have an opportunity to rethink how we organize the economy. We could make everyone richer at the same time; we could all be better off. But there are choices we will need to make as a society to take advantage of that, and business as usual will not solve the problem. We're trying to address it by understanding the drivers, doing research, and running the Inclusive Innovation Challenge. You can come to another event, where the governor and other people will talk about how we can use technology to create shared prosperity for the many, not just the few. Let me leave you with a closing thought: these technologies are wondrous, and they give us opportunities. They can be used for good and can create vast wealth, but they don't automatically lead to a distribution that makes people better off. It's important for us to think hard about what we can do to shape this into a better society. Thank you.
>> Thank you.
>> Thank you, Erik, for your friendship and your kind introduction. We'll see if the technology cooperates as we switch over. I'm going to continue the story further into the future and talk about what will happen if AI keeps getting smarter. But first let's go back and look at the big picture. 13.8 billion years ago, our universe was very boring: almost uniform plasma everywhere, and nobody there to witness or enjoy it. Gradually it clumped into galaxies, stars, and planets. About 4 billion years ago, life appeared on Earth. The first life was dumb — it couldn't learn anything in its lifetime. Then came life like us, which can learn. If we use the metaphor of thinking of ourselves as a computer of sorts, then learning corresponds to uploading new software into our minds. I can upload a Spanish module if I want to learn Spanish, but a bacterium can't do that. It's this ability to learn — to change our own software — that allowed cultural evolution and made us the most powerful species on the planet.
Life 3.0 would be able to also design its own hardware. We humans are trying to head in that direction — we can get implants or artificial knees — but true Life 3.0 doesn't really exist yet. We heard from Erik about machines getting smarter. Traditional artificial intelligence — the kind that beat the world chess champion — used to work by people taking their own intelligence and coding it into a program, which won simply because it could think faster and remember more. The recent progress has been driven by machine learning: you take simple systems inspired by human brains and train them with data. You feed them pixels and out comes a description; put in a picture like this one, and it will say it's a herd of elephants walking across a grass field. Or look at computer games, as we saw. Once a computer can learn to play Atari games, that already tells you there's room for growth. If you are a robot, you can think of life itself as a game where you get rewarded for certain things. The same company trained a simulated robot to see if it could learn to walk. They just gave it points for moving forward.
They didn't tell it anything about the concept of walking; they did it in simulation, and this is what happened. Nobody taught it to do that — it learned by itself, trying a variety of different ways to run and jump and so on. The same goes for anything that is a bit like a game: playing the stock market, or a sport. I like to think of these intellectual tasks as forming a landscape, where the height of each point is how hard that task is for machines, and the sea level is how good machines are at present. Our human chess-playing skills have long since been submerged. And of course the worst kind of career advice to give to our children is to encourage them to go into jobs that are about to get submerged, because the sea level is rising: machines keep getting better. It's fascinating to wonder what's going to happen. Some people think machines will never be able to do certain things; others think machines will eventually be able to do it all. And then what?
We have interesting choices to make — and a few choices we should not make, at least not by accident. So with a bunch of colleagues I cofounded the Future of Life Institute; two of the cofounders are here tonight, and Erik is also involved, someone we're very happy to have. The organization's goal is to help create the best possible future with technology by thinking hard in advance about what we need to do. I'm optimistic that we can create a wonderful future with science and technology as long as we win the race between the growing power of the technology and the growing wisdom with which we manage it. But to win that race we have to change strategies. In the past we stayed ahead by learning from mistakes: we invented fire, screwed up a bunch of times, and then invented the fire extinguisher. As technology gets more powerful, we reach a threshold where it's so powerful that you no longer want to learn from mistakes. You want to plan ahead and get things right the first time. Nuclear weapons are in the category where we don't want to learn from our mistakes — have an accidental war and say whoops — and superhuman intelligence is in that category too. Some people call that being an alarmist. I call it safety engineering. When NASA thought through
everything that could possibly go wrong with the first mission to the moon, that wasn't alarmism — it was what led to the success of the mission. They thought through what could go wrong to make sure it didn't. So what am I suggesting we should do? First of all, I think we should try hard to get a treaty against lethal autonomous weapons. The biologists are happy: if I ask you about biology, you probably think of new cures rather than bioweapons. [inaudible] If I ask about chemistry, you probably think of new materials rather than chemical weapons, because those scientists came out forcefully and persuaded the politicians of the world to enact bans. We physicists have a more iffy scorecard here — when you read about Kim Jong-un and Putin and Trump and nuclear weapons, we feel pretty responsible for this. AI researchers feel strongly that they want to be like the biologists and chemists: keep the power of AI focused on things like curing cancer and doing wonderful stuff, rather than making it cheaper to murder people anonymously. If you build something like that, it drives the price of anonymous killing toward zero, and the laws of economics will lead us to a place we don't want to be. So that would be number one on my list, and I think there's hope for it, because the superpowers have a
lot to lose. Second, we should try our best to make sure this growing pie is used to make everybody better off — and I look forward to talking with Erik more about this. And third, I think we have to invest in AI safety research. What do I mean? There are many nerdy technical problems to solve to transform today's computers into trustworthy AI systems. Raise your hand if your computer has ever crashed. That's a lot of hands. How does that feel? Not good — frustrating. But it's much worse than frustrating if that's what happens while AI is controlling the U.S. nuclear arsenal, for example, or other key infrastructure. It's incredibly important that we up our game in terms of making things work robustly. Another challenge is to make sure AI's goals are aligned with ours. It doesn't have to be a frightening thing for there to be another entity smarter than us. When we were all about this big, we were all in the presence of more intelligent agents: our parents. But if you tell your future self-driving car to go as fast as possible, and then you're being chased by police helicopters and it says, "that's what you asked for," you begin to see how hard it is to get machines to understand our goals. We all know how tough it can be
to get children to adopt our goals even when they understand what we want. Also, my kids are less interested in Legos now than when they were little. We don't want machines to get bored with our goals the way teenagers get bored with Legos. To summarize why we need to take seriously the possibility that machines may become smarter than us, let me show a short video.
>> Will artificial intelligence ever replace humans? It's a hotly debated question.
Some claim machines will outsmart humans and destroy humanity. Others say don't worry, AI will just be another tool we can use and control, like our current computers. Here we are going to share the takeaways of the conference and separate myth from fact.
>> First off, machines have long been better than us at things like arithmetic or weaving — things that are mechanical and repetitive. Why should I believe there aren't some things that will always be impossible for machines to do?
>> We've tended to think of intelligence as something mysterious that can only exist in biological organisms. But there's no law of physics that says it's impossible to do information processing better than humans do, and some machines already do some things better than humans. That suggests we've only seen the tip of the iceberg, and that there's far more intelligence left to unlock — intelligence that will help determine whether humanity flourishes or flounders.
>> How do we end up on the right side of that? What should we be concerned about?
>> The concern is superintelligence that doesn't share our goals. If a heat-seeking missile is coming at you, what matters isn't whether it hates you but how well it does what it does — it's competence. Superintelligent AI would be very good at attaining its goals, and the most important thing for us to do is make sure those goals are aligned with ours.
Cats and dogs have done a great job of aligning their goals with the goals of humans — I can't help thinking that kittens are the cutest particles in our universe. We can do even better than cats and dogs by ensuring that AI adopts our goals.
>> And when is that going to arrive?
>> Superintelligence doesn't have to be negative. If we get it right, it could be the best thing that ever happened to humanity. Everything I love about civilization is the product of intelligence, so humanity might flourish like never before. Most researchers think superintelligence is decades away, but the research needed to get it right may also take decades, so we need to start right away. We need to figure out how to make machines learn our goals, adopt our goals, and retain our goals. And what about when we disagree — should we vote, should it be what the president wants, what the AI decides? The question of what sort of future we want to create shouldn't be left just to AI researchers.
>> Thanks. So how do I get involved, to make sure we don't end up living in a dictatorship of superpowers?
>> I'm interested in continuing the conversation, to hear what kind of future you want to create with this technology. That's a question that should not be left to geeks like
myself — it affects us all. Thank you so much. [applause]
>> Max, let's start the conversation, and then we'll ask the audience to join in. In the video you were asked when you think artificial general intelligence will be here, and you dodged the question. Scientists hate to make predictions about the future, but audiences love them. I was at the conferences where we did polls, so you can be more precise about what people at those conferences say about when we might have machines like that. Maybe define AGI, and tell us where you put yourself.
>> What do we mean by intelligence? What I mean is simply how good a machine is at accomplishing its goals. We have narrow intelligence — a calculator can multiply numbers fast — versus artificial general intelligence, which is as broad as ours. So we asked leading AI researchers: by when do you think machines could outdo us at all goals? The answers were interesting. First of all, there was disagreement. That's important — it means we really don't know. Anyone who says for sure it's going to happen real soon is exaggerating, because many AI researchers don't think it will happen for, like, a hundred years. But those who say for sure it won't happen in their lifetime are exaggerating too, and this group included leaders whose work we showcased here. My own feeling: the average in one poll was 2055, and in another it was 2047.
>> And that second poll was two years later — the progress has been that quick. A matter of decades is reasonable, so we should put thought into when it's going to happen, even though we're not sure. What if someone told you an alien intelligence would land on planet Earth in 2055? That might wake us up.
In some ways this is an alien intelligence.
>> Yes, but we're better off, because if it were just some spaceship coming, it wouldn't matter what we do. This is different: we're creating it ourselves. We get the opportunity to decide what that intelligence is going to be. It's a great opportunity to seize, not squander.
>> You convinced one person of that, and that is Elon Musk, who came by and found this to be an important challenge. He spoke at MIT and said we are summoning the demon, and that never goes well. It's a fascinating story in the book — tell us about how you got him to make a donation.
>> It is a fascinating story. Not too long ago I could never have dreamt I'd be doing a project with Elon Musk. I realized that he was thinking seriously about this, and since we were planning this conference to bring AI researchers together, I reached out and asked if he'd be interested in a phone call. It became clear quickly that not only did he care about it, he gets it. The media likes to portray him as a doomsayer and scaremonger; nothing could be further from the truth. He has incredible optimism about the potential of humankind.
He's betting his energy and money on things most people think are impossible, like life on another planet, making cars electric, and putting solar panels throughout the world. Someone who really thinks a lot about the longer-term future is not going to dismiss something that might ruin everything in 30 years. I tried hard to persuade him that what we needed to do was get the AI community engaged, and assure them it wasn't about trying to stop AI research. We talked about winning the race not by slowing down the power but by speeding up the wisdom. There was research that wasn't being funded, and he kindly agreed to be the first person to fund it. We had $10 million.
>> That's a big number.
>> It's small compared to what governments spend on research, but it was the first money ever spent on this. We decided to give it out with a short deadline, so the research would actually happen. We got about 300 teams applying.
>> You almost didn't get to make the announcement that night.
I was watching you having these discussions back and forth, and I thought something big was brewing. What was the reason you had to sit on it?
>> SpaceX was attempting the first-ever landing of a rocket booster on a barge — this had been a dream of his — and they said, you cannot distract the media from this by having another big announcement. So finally a friend of ours found a diplomatic solution: he would make the announcement at the conference, but we could not tell the world until after the rocket landed.
>> He swore us to secrecy. But he never told us the number, and that number was really the headline.
>> It really transformed the way researchers think about this. They go to a conference and feel that this is cool, something they can help with, rather than just being under attack.
>> You also mentioned North Korea and some of the risks, and you've mentioned having a project to reduce nuclear risks. It's not nearly as hypothetical. Was it today that North Korea sent another missile over Japan?
Do you have a concrete thing we can do about that?
>> First, even bringing it up is a step in the right direction. It's the same phenomenon: we humans use science to understand our world better, and we use that to amplify our power with technology, so we need to be ready for it. You would never walk into a kindergarten and say, here's a box of hand grenades, want to play with them? Sometimes when I look at statements from Kim Jong-un, or tweets from a certain person, that's what it feels like — except the box holds 4,000 nukes. In hindsight, maybe this could have been handled better.
>> How? Today, even if Russia and the U.S. want deterrence, to make sure the other side doesn't attack them —
>> How many nuclear weapons do you need to deter Putin? Maybe 100, 500? If you took out the 500 biggest cities, there wouldn't be much left, and the Russians don't have to take out more than that to deter us. But Putin doesn't have 900 — he has 7,000. Why does Russia have 7,000? Because we have 7,000. If President Trump and Putin got together and said, let's cut down to 1,000 each, deterrence would be unchanged. It would not in any way reduce our deterrence against Kim Jong-un. We just need more awareness of the risks these technologies pose.
>> As physicists and technologists keep inventing more powerful technology: with a stone or a spear you can hurt a few people; with a gun or a machine gun, a few more; a nuclear bomb can kill millions. The next wave may be existential. There are technologies — not just AI but biotechnology and others — that could give us a red button that says "will destroy humanity, please do not push." Suppose we handed one of those to every person on the planet: how long would we survive?
>> I'm sure there were people in the stone age who thought it was a good idea to kill others in the stone age. Now we're amplifying that power, so we need to make sure this stuff doesn't fall into everybody's hands. I think this is why we should make sure we don't get into a military arms race with AI. Conventional military weapons are expensive to build; these things would cost not much more than an Amazon delivery drone.
>> Even physical things — someone could put in an address and a photo of an ex-girlfriend. That's a horrible situation we don't want to be in. If the superpowers realize that mass-producing these will mainly help terrorists and non-state actors,
then I hope they will clamp down.
>> Before we turn it over to questions, I want to make sure we get back to some higher notes. You spoke optimistically about how technology has the possibility to grow the pie, and you've said shame on us if we can't make sure everyone is better off. I want to ask more specifically: how would you like to see that happen? In this country there are people against wealth redistribution.
>> My long term isn't as long as yours — maybe the next five or ten years rather than 20 or 30.
In the short term there's no shortage of work that only humans can do. We need to think about, as some of these jobs get automated, how we get people into the new jobs. The first part is education — helping people learn new skills, really reinventing education. Machines are bad at creativity and emotional intelligence and at motivating people, and we don't teach those things in schools very well; schools seem to take creativity out of kids. It has to be a lifelong thing, with people continuously learning new things. And as technology changes, it's a conceptual shift: less memorizing facts and following instructions, and more learning how to plan and discover. We should do more action learning. That would be one part. Another is more entrepreneurship. You might be surprised: if you look at the data, entrepreneurship and innovation have gone down. Despite all the technology, there has been less new business formation and fewer companies listed. That makes things difficult, because it's always a losing strategy to try to hold the old economy in place. As technology automated old work, we didn't just say let's see if we can hang on. We — and I mean
entrepreneurs — invented new goods and services. Joseph Schumpeter called it creative destruction. But there have been more and more regulations and barriers; in Boston a law was passed putting a special tax on this kind of work to slow down the transition. These are small examples of people being uncomfortable with change. Now, on redistribution: we have a tax system that has shifted more money to richer people, even though most of the wealth and income gains have gone to the top 1%.
The technology and the tax system are combining to exacerbate this. That's a social choice; certainly there's a value judgment about how we want to balance things. Countries like Sweden have been successful at investing in health and education — not necessarily giving people money, but making life a little more pleasant no matter where you are born. In the long run, I share your optimism that machines will be able to do everything humans can. People say machines will take all of the jobs, and my reaction is: shame on us if we turn that into a bad thing. We're talking about a world of vast wealth and no need to work. We could spend our time having discussions at the Museum of Science, playing, doing sports. The problem is the income distribution, and that's a social choice.
>> I agree with all of this, and I agree that if we end up with a life of long vacations, that could be great. Of course we have to think about a society where people have meaning and purpose — though many people have never worked a day in their lives and feel plenty of meaning and purpose. So I agree. But will we be able to
do it? Do you think we will be able to do this? The latest tax reform being discussed seems like it's going in the opposite direction.
>> One of the reasons for having this discussion is that it's not a matter of us making predictions; it's a matter of us making choices. It's up to us to decide. The way it works in a democracy is that there's no dictator — we the people decide. Politicians basically say, I like your ideas, but unless the voters want it, we can't go ahead. So it starts with changing the conversation, and having this conversation. I have no doubt about the technical feasibility; the political feasibility is a choice.
>> That's a good note. Now we'd be interested to hear your questions — there should be some microphones here. We'll be delighted to hear the questions and comments that you have.
>> We'll try to keep the questions brief so that everyone can ask theirs.
>> The first question.
>> Good evening. I have a question about the economic consequences of artificial intelligence as machines take over jobs. Right now a lot of people will
be out of a job — truck drivers, for example. You say it's a political choice, but nobody seems to be talking about it; on television you don't hear it. You hear rumblings about basic income, maybe shared ownership. What are your ideas regarding that?
>> Part of the political dysfunction and anger — not all of it, but part — is driven by this economic trend. Median income is stagnant, so half the people are worse off. I don't blame people for being angry; they see that things are not getting better. Over the next decade the underlying forces will get stronger. Tens of millions of people will lose their jobs. The question is whether there will be millions of new jobs to take their place, and that's not automatic. So far we've done okay at creating new jobs, but they're not as good. Employment is higher now than it used to be; the issue is more around wages than unemployment. Can we create new and better jobs? There is work to be done in childcare, healthcare, teaching, science, and the arts. We can afford to spend more, and the money is there.
We can also unleash the private sector more. There were over 300 companies and organizations in the challenge, and we announced winners that are creating jobs; the government can help support that. And there's something called the earned income tax credit. It's basically the basic income people talk about in Silicon Valley, but tied to work: if you work, your wages are supplemented. It takes a low-wage job and turns it into a $12 or $15 an hour job — not by raising the minimum wage, but by having all of us contribute. Then the employer has no disincentive to hire, and the person gets more money. A lot of people want to be part of society in a contributing way — they don't feel satisfied just being sent checks, and my sociologist friends say that having a way to contribute is part of it. There's more that we could do.
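As a back-of-the-envelope illustration of the wage-supplement idea just described, here is a tiny Python sketch. The phase-in rate, cap, and wages are invented numbers for illustration, not the actual EITC schedule, which is computed annually from family income and size rather than hourly.

```python
def supplemented_wage(market_wage, phase_in_rate=0.4, max_supplement=5.0):
    """Hypothetical work-tied supplement: society tops up each hour worked.

    All parameters here are illustrative assumptions, not real policy values.
    """
    supplement = min(market_wage * phase_in_rate, max_supplement)
    return market_wage + supplement

# Under these made-up numbers, a $9/hour job becomes roughly a $12.60/hour job,
# while the employer still pays only the $9 market wage.
print(supplemented_wage(9.0))  # 12.6
```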
>> Thank you both for being here. In recent years many people have taken to the streets in an attempt to solve problems by asking those in power to solve them, and I see most technological advances in recent years as working against that. What do you see as the most seamless, least violent way to
overthrow the entrenched concentration of power among only a few individuals? Can we use artificial intelligence for that, and how would you apply it?
>> All technology is a double-edged sword — even fire. Certainly these modern technologies can be used for positive social change. What's happening is that the anger felt by more and more people who see their lives getting worse, even though the pie is growing, is very real. Couple that with major cuts in the availability of higher education, and you have angry people who don't have the opportunity to learn new skills — and for anyone looking to exploit that anger, it's a dream come true. We've been seeing that people are hungry for change, and they will vote for whoever promises the most change. Barack Obama's slogan was change, and in a sense Donald Trump's was also change, because people didn't feel they got enough change. Brexit was the option of biggest change. Ultimately, to be helpful, people could think of ways of building a social movement that taps into this anger and shows people solutions that work. Erik's book, Machine, Platform, Crowd, talks about some of these drivers.
>> So maybe what happened at other times in history could happen here: if those who are in power don't deliver shared prosperity, people will come at them with pitchforks and have them overthrown.
>> Next question, in the front.
>> In very simple game theory, defecting can be the way to win, and any intelligent computer can learn that.
>> How do you make people cooperate rather than fight? One of the most powerful ways to make people cooperate — why do people agree to give up freedom to get married? Because they think of the cool things that marriage will enable. It's the same with all collaboration: all parties realize that if I put this petty thing aside and collaborate, I can do great stuff. But what we're doing as a society is the opposite. If you go to the movies, the visions of the future are dystopias like Terminator. That is exactly the opposite
of what we have to do. Where do we want to be in the future? Start there, then think of the challenges, and the reasons for the sort of future you are excited about. I'm hoping this can foster the type of collaboration we are talking about.
>> That seems like the perfect place to end — with the idea of cooperation, with
how we work together to envision a positive future. So I will take the prerogative of saying that we will continue the conversation at a reception and a book signing; follow the signs to the bottom floor. Erik and Max will be there to talk to. [laughter] And we can all talk to one another, because there are amazing comments and concerns and hopes that were raised here. [applause]