tv Charlie Rose Bloomberg December 21, 2016 10:00pm-11:01pm EST
10:00 pm
>> from our studios in new york city, this is "charlie rose." charlie: what is artificial intelligence? >> so, it is what we are all doing when we make machines do things which we usually ascribe to human intelligence. it's all about taking the things which we thought were brilliant things that only went on inside our brains, and making computers do them instead. we often break up artificial intelligence into two subsets of work. many of the researchers are working on what we call autonomy, which is having
10:01 pm
machines survive by themselves and figure out stuff by themselves when humans can't help them. many of the other folks here work on what we call cognitive assistance, which is all about machines helping humans. charlie: when people think about artificial intelligence, they think you're trying to get at human intelligence. what is the barometer to measure progress in a.i.? andrew: my favorite barometer is, how much does the boring part of my life get done for me by machines? 100 years ago, people would have said, if a machine can multiply two numbers together -- we have all grown up where we don't regard that as being a particularly intelligent thing. 20 years ago, it was beating humans at chess. nowadays it is things like reacting like a human in your language. the barometer keeps changing, if you like.
10:02 pm
the thing that we humans think we are unique at, automating it, finding out if we can get computers to do it instead. charlie: are there similarities between how a.i. systems learn and how our brains learn? andrew: yes. a lot of the biggest advances in artificial intelligence in the last 20 years have actually been inspired by looking at what goes on in the brain. often the brain really reacts to mistakes. if we do something and it doesn't work out, stuff in the brain says, i'm not going to do this next time, i will do something else. a lot of our machine learning algorithms do the same thing. charlie: is that, say, touching something that is hot and recoiling from it, and learning not to put your hand there? andrew: the reflex action is a good one. there are much more mundane things, like if you say to a search engine, i want to buy a
10:03 pm
chevy tahoe, and the search engine shows you a vacation in lake tahoe. there's going to be evidence you are not particularly satisfied. it will change itself a little bit and, after a few hours, stop making the same mistake. charlie: what are the other differences between a.i. learning and brain learning? andrew: one of the things is computers have got really good memories. we are much fuzzier in our memories. when computer scientists are working on a.i. and trying to build intelligent systems, they do cheat by taking advantage of computer memory. to be good at games like chess, humans study and learn the principles. computers can think once, for about a month, through every single possible endgame, all the quadrillion possible situations they can be
10:04 pm
in, and memorize what they will do in each of those situations. charlie: and recall it instantly? andrew: that is an approach to a.i. that is cheating, because it is absolutely nothing like what humans do. it turns out that to memorize every possible scenario in chess would take hundreds of years on a fast computer. charlie: you got a grant from the federal government to study the brain. andrew: that's right. charlie: what are you looking to learn about the brain that will fuel what you are doing here in your research?
10:05 pm
andrew: one of the big ones is how the visual system works. humans are still ridiculously good at understanding things around them. the highest resolution cameras are wonderful for sensing what's around, but understanding and knowing that i should be paying attention to you right now and not some light moving around behind you, that the brain does for me. i don't even have to think about it. watching what happens in the neurons in the brain is interesting. charlie: people think about sight as a function of the eyes. it is a function of the brain, to a large degree. andrew: yes. that's right. we are learning a lot. it's interesting how much useful computation happens here. more happens here and more happens way back here. biology has built a very clever system where the sensing and computing are together. we engineers like to have a more organized way of doing it.
10:06 pm
often we do break it up into these different modules. charlie: how intelligent is artificial intelligence today, as we sit here in your lab? andrew: it can do things which 20 years ago we thought you had to be a professional, highly trained human to do. interestingly, it can't do some things which a one-year-old -- charlie: give me an example of both. what would be an example of something a highly trained professional can do that today, through artificial intelligence, we can duplicate? the best example. andrew: you are a great lawyer, you are very busy, you ask your assistant, go find everything this judge has ruled on relating to this lawsuit and see if we have a chance of finding something that matches. the assistant goes off and does it. that can be automated now. that seems like intelligent work you need training for. charlie: you can duplicate that now. andrew: yes, and it's quite amazing. it is so much more creative than
10:07 pm
doing something like playing chess. charlie: that would be useful for me, to know everything you've ever said about artificial intelligence. i can go back and look at it, and it would give me a sense of how you have responded and where your curiosity is, but do it at such a rapid speed. andrew: exactly. it is very much those kinds of questions that involve real judgment and how you will summarize an argument. if i say to you, what do you think of washington, d.c.? it does not do any good for you to give me a massive list of everything you know about washington, d.c. you have to summarize it. charlie: has that been around for a while? andrew: it's getting better every year, and this is the big race between google, apple, microsoft, and facebook. they are all trying to get their systems to be better than each other's and do these kinds of general answering of questions. charlie: do they all classify that as artificial intelligence, or is it simply trying to be on the frontier of computer technology and the digital revolution? andrew: all four of those
10:08 pm
companies come to places like carnegie mellon and m.i.t. begging us to give them artificial intelligence researchers. charlie: they want your graduates. andrew: yes. charlie: what is the most surprising thing today, something that amazes even you in terms of the velocity of change? andrew: here are a couple of big surprises. we have worked so hard in robotics on almost every aspect of being able to understand the world, getting our robots to act swiftly and act safely, and they still cannot reliably pick up a cup of coffee. simple things where we use our hands -- charlie: why is it so hard to do that? andrew: we are not quite sure. we have tried various things with grippers and claws to do this. human dexterity still blows us away. our current understanding is that our fingers and our hands are amazing instruments.
10:09 pm
we are sensing all over them so that as your hand is coming in to grip something, it's doing lots of computations. is this going to slip, is this going to fall? we roboticists really need to get our act together for our robots to do the same thing. charlie: how far away are you from having the capacity to do that? andrew: it's one of the big unsolved things. i'm going to estimate a 50-50 chance we will be really good in five years. charlie: on the other hand, there are things the human brain can do that machines are not even close to. and i assume that has to do with emotions, consciousness, what else?
10:10 pm
what's a better example? andrew: we have absolutely no idea about making a system which just makes decisions about what it wants to do in the world. everything we have built at the moment is a fancy calculator. it is something where we humans have said, your purpose is to do this, and do it well. this thing that we humans do where we kind of take responsibility for every aspect of our lives and make our decisions every day, that kind of level of autonomy is not something i'm seeing anywhere in the industry at the moment. my gut tells me we are at about 1935 in aeronautics. there are concepts like supersonic flight and the possibility of moving masses of ordinary people.
10:11 pm
we're very confident. there are industries employing hundreds of thousands of engineers building intelligent systems, and we can see these glimpses of what happens when we've got our act together in terms of understanding what humans want and being able to give answers for that. charlie: you said apple, google. i assume amazon, any company that is heavily into technology and has products, therefore knows the benefits of knowing more about how to put artificial intelligence into those products so they can do more things. when they come here to talk to you about students and what they need, what is it they say to you
10:12 pm
is important for them? what are they looking for in talent? andrew: here is one of the things companies are always telling me they need: not just people who understand math, but people who really like solving actual problems people are facing right now. i very rarely have someone come in and tell me, i need someone who is an expert in a fancy technical aspect. they will say, i need someone who's going to build me a system which can give an early warning if the crops look diseased in ecuador. charlie: that is artificial intelligence, using sensors and everything else. andrew: that's right. the best people in this industry are the ones who are trying to solve big problems through a.i., not just build a.i. for its own sake. charlie: if you look ahead to where a.i. may go -- everyone talks about personal systems. what is the world of a.i. going to look like for your daughter 10 years from now? andrew: here's the fun way of looking at it. super celebrities walk around with a team of people with clipboards managing everything.
10:13 pm
we will have technology matching all this up for us. you won't be dialing for an uber. charlie: they will know where i am and where i have to be, and they will know where i want to go. andrew: that's right. this idea that we have fancy concierges helping us in our lives, that will be kind of fun. i think the important things are, somebody slips and falls late at night. a drone will be there to help them almost immediately. charlie: they will know this person has fallen. andrew: exactly. we will have machines looking out for us to make sure we are ok. charlie: is that near? andrew: it's already here, in that many people have sensors on them now.
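the kind of sensor monitoring described here can be sketched in a few lines of code. this is a hypothetical illustration only, not anything moore describes building: it assumes a wearable reports acceleration magnitude in g, and that a brief near-free-fall reading followed by an impact spike signals a fall. real products use learned models rather than fixed thresholds like these.

```python
# Illustrative only: a naive threshold-based fall detector over a stream
# of acceleration magnitudes (in g) from a hypothetical wearable sensor.
# The thresholds (0.4 g free fall, 2.5 g impact) are arbitrary assumptions.

def detect_fall(samples, free_fall=0.4, impact=2.5, window=5):
    """Return True if a near-free-fall reading is followed, within
    `window` samples, by a hard impact spike."""
    for i, a in enumerate(samples):
        if a < free_fall:  # device briefly in free fall
            # look for an impact spike shortly afterwards
            if any(b > impact for b in samples[i + 1:i + 1 + window]):
                return True
    return False

steady = [1.0, 0.98, 1.02, 1.01, 0.99]      # normal standing/walking
fall = [1.0, 0.9, 0.3, 0.2, 3.1, 1.0, 1.0]  # drop, then impact
print(detect_fall(steady))  # False
print(detect_fall(fall))    # True
```

a system like this would then alert a caregiver, or, in the future moore imagines, dispatch a drone.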
10:14 pm
10:17 pm
about this, think about the idea of all these assistants working for these celebrities. they are going to be out of a job. what's going to happen to them? >> that is something we spend a remarkable amount of time talking about. think back to the days when agriculture was a massively labor-intensive world. i don't think we feel bad that it does not require hundreds of people to bring crops in from the field anymore. what we are conscious about is that we will cause disruption while things change. nowadays a great career is wedding planner. that's an interesting job, right? it would have seemed ludicrous that ordinary people would have professional wedding planners working for them even 50 years ago. why do we have that now? the rest of technology enabled the more grungy pieces of work to get automated. this is the pattern which you see coming up. one of my kids loves doing videogame reviews, and that is a job, if you like, which never would have existed in the past.
10:18 pm
charlie: when you build artificial intelligence to beat a game, the results we have seen with go. is this an intelligence that is simply focused on that one thing? is it, for lack of a better word, an idiot savant? andrew: yes. these things are total idiots. when we researchers go after games, it's often because they are a clear example of a specific kind of mental event. chess and go are ones where you the human see the whole board, you have time to think about what your next move is, and no one is hiding anything. one of the places where a.i. got interesting recently in games is poker. poker is an example of a different kind of mental
10:19 pm
activity where you are playing a game, but some of the things you do, you are not doing them because they make the most sense immediately. they are not what some people would call rational. when i'm playing poker, i read your face. what i am playing for is two aspects. one is assessing the other people. the other is deliberately misleading. the computer playing chess always does what it thinks is the best thing. a good computer playing poker will occasionally be making mistakes or trying to mislead its opponent in various ways. when you see people using a.i. for games, it is to look at different aspects of intelligence. the first one was problem-solving. the things with poker and other games of gambling are negotiation and deception. charlie: can you give artificial intelligence judgment?
10:20 pm
andrew: you can give artificial intelligence goals. you can say, we want you to do this, and while you are doing it, you are never allowed to do that. then you can ask a computer to figure out what is the right way to accomplish that. charlie: as you know, in many graduate schools, especially in terms of geopolitical strategy, there's a lot of role-playing. how does artificial intelligence fit in that? if russia invades latvia, the u.s. should do what? andrew: for those things, one of the most important things that a.i. can do is it teaches us what we must never reveal. for example, it actually has to be the case that some of our actions are unpredictable. if you play poker and you always bluff, or you play poker and you never bluff, both of those are a disaster.
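the point about never always bluffing and never never bluffing is the classic mixed strategy from game theory: randomize your choice so no single decision can be predicted, even by an opponent who has watched you for a long time. a minimal sketch (the 30% bluff rate here is an arbitrary assumption for illustration, not a figure from the interview):

```python
import random

# Illustrative mixed strategy: bluff with a fixed probability so that an
# opponent observing many hands can learn only the long-run rate, never
# predict any individual decision. The 0.3 rate is an arbitrary choice.

def should_bluff(rng, bluff_rate=0.3):
    """Decide randomly whether to bluff on this hand."""
    return rng.random() < bluff_rate

rng = random.Random(0)  # seeded only to make the sketch reproducible
decisions = [should_bluff(rng) for _ in range(10_000)]
print(sum(decisions) / len(decisions))  # close to 0.3 over many hands
```

the long-run frequency is visible to everyone, but, as the conversation goes on to say, what matters is that the strategy itself is not knowable in advance for any one move.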
10:21 pm
you have to sometimes bluff, sometimes not bluff. similarly in policy, in these international games, it cannot be the case that the other stakeholders out there know your strategy in advance. one of the most interesting things for us a.i. folks is that we are programming a.i.'s which are deliberately not being predictable in the way they deal with these things. great example: if we are fighting spam emails, when we build a.i. to spot email likely to be spam, those a.i.'s are being pretty clever about not acting in a way which tips off the spam email writers. charlie: what, in the most essential way, is the challenge of creating human intelligence? what is it that is a great impediment to progress? andrew: well, i want to be clear about something.
10:22 pm
i don't think many of us are interested in trying to duplicate human intelligence. it's called artificial intelligence. artificial is there for a reason. what we are often trying to do here is save lives or increase safety and comfort by having computers which can react very quickly to emergencies and actually make us safer and more comfortable. charlie: that can be essential to medical research or medical treatment. andrew: exactly. you can see patterns. charlie: an insight into what they might have -- supercomputers and artificial intelligence can do that analysis faster than anything. andrew: exactly right. a very interesting thing going on is using computers to spot patterns as to which kinds of people are going to be treated successfully by which kinds of treatments. the computers are noticing all
10:23 pm
kinds of little things going on in the background that none of us humans have spotted before, in terms of the human genome or other aspects of the way they live, and how those are influencing what kind of therapy will be successful. charlie: when you think about things the brain does, like consciousness, will you ever get there? andrew: consciousness, none of us engineers know anything about. it is a complete mystery to me, personally, as a human, and i would not know where to start when it comes to creating a technological version of that. charlie: i'm going to ask whether machines will ever be able to do this. create original works of art. andrew: yes. charlie: explain how that would happen. andrew: so, this has been around for a while. i can have a robot which throws a can of paint across the floor and call it art. if on the other hand you wanted
10:24 pm
to have a computer program produce imagery which the critics love, i do also think that is quite possible, even now. if we did a blind test of computer art versus human art, maybe not now, maybe five years, 10 years from now, a blind test, you will not be able to tell -- charlie: what is a blind turing test? andrew: how do you know if a computer is acting like a human? put either a computer or a human behind a wall talking to you through messaging, and we find out if you can tell whether it's a human or a computer. charlie: and if you can't, it passes the turing test? andrew: yes. that is one of the old sort of benchmarks for whether a.i. has reached human intelligence. these days we don't talk about
10:25 pm
it very often because it looks fairly easy to do. charlie: you are telling me that you can sit on one side of the curtain and on the other side of the curtain is a supercomputer having the qualities of artificial intelligence, and by text messages you cannot tell whether it is a human brain or artificial intelligence? andrew: that is the turing test. at the moment, it has not been a convincing victory for any computer. but it's not one of the hardest things to imagine. charlie: interesting point. no one has ever built a system that passed the turing test. will they? andrew: i think so. it's very hard. i'm really confident that within 20 years, convincingly, we will
10:26 pm
pass the turing test. but by the time it happens, it will just be like chess or calculating multiplication tables. people will say, it's just a computer doing that. it's not real intelligence. charlie: continuing my list of chores and things machines may be able to do. household chores. andrew: that one is harder, actually. i would much prefer to be given the task of doing original works of art than household chores. as i mentioned earlier, picking stuff up in a mess, picking up a floppy object like a piece of clothing and folding it -- i know it seems mundane, but we humans are geniuses at that. the robots we have built are still pretty crummy when it comes to these everyday acts of manipulation. you will have autonomous trucks
10:27 pm
driving along the freeway at 80 miles per hour before you have a robot butler that can clean up after your kid and fold their clothes. charlie: is it going to make us face this question: what the hell are we going to do with all our free time? andrew: we are going to face a question like that. i want to warn you that this would be the same question that agricultural laborers were asking in 1880. we don't know what happens next. every time it comes up, we like to believe we are the special case. so far that hasn't happened. we do have one important choice. suppose we all had a system that meant we could do a better job at each of our jobs and only have to work 30 hours per week. would you take it? that is one way we could spend the benefit we get from a.i.
10:28 pm
we could be working 30-hour weeks instead of, in the united states, 60-hour weeks, and share the benefit that way. or, we could say, half of you keep working 60-hour weeks; the rest of you, you are unemployed. that to me sounds like a much worse future. charlie: this will be a tough question. how do we handle the job market implications of artificial intelligence? andrew: it's interesting that now, at places like carnegie mellon, students, as part of what they learn, are no longer just thinking about the technology. they are thinking about the societal implications. charlie: the kids who come here to study artificial intelligence at a place like carnegie mellon -- is there a basic core course, computer science? what is the course they are studying? andrew: when they're in high school, we need them to have loved math. math is the center of a.i.
10:29 pm
once they come to a place like carnegie mellon, they learn about algorithms, which is how computers organize how they are going to solve tasks. then they study things like computer vision and machine learning, disciplines which are about replacing different parts of the human brain. charlie: at what age did your daughter learn to code? andrew: at 10 she was able to write code which i thought was pretty respectable. charlie: did she do that on her own, or because you encouraged her to do it? andrew: she is surrounded by friends and family who are geeks, so she became a geek herself. charlie: like father, like daughter? andrew: yes. charlie: are we facing what many might call a fourth industrial revolution? andrew: i think it is -- charlie: or is it here?
10:30 pm
andrew: we are in the first few years of it, ever since things like travel agents became irrelevant. it's easier to do it with a computer now, for all of us. i think it will be 2020 when we see that whole areas of what we used to think only people could do can now be done better. charlie: is this part of the sharing economy? andrew: the sharing economy is an interesting side aspect of a.i., where you are getting groups of people together to solve problems -- let me describe it more clearly. the sharing economy is a way in which people can do what they are really good at. if i'm really good at writing and you are really good at
10:31 pm
planning, we make sure you do what you are good at and i do what i am good at, and then we both win. having the computers help organize us so the right people are doing the right things -- charlie: sharing economy. the same way that uber pairs somebody who needs to go somewhere with somebody who wants to take somebody somewhere. two separate talents. what are going to be the companies and industries and sectors that are disrupted most by artificial intelligence? andrew: the ones which really come up in my mind are what might have originally been called white-collar work, like the legal profession and some parts of the medical profession with extremely high training and expertise. so, these are ones where computers are actually able to make some of these judgment calls. a doctor facing lots of contradictory information actually figuring out what the
10:32 pm
problem is, or a lawyer who has to sift through a vast amount of information to find the actual solution to a difficult problem. interestingly, something like a nurse or teacher, whose real job is understanding the people they are interacting with over time -- that i would find much harder to automate. i don't see those disappearing for decades. charlie: artificial intelligence is changing life as we know it. andrew: yes. charlie: marvelous. andrew: yes. charlie: and when you sit around and blue-sky this, 10 years from now, 15 years from now, 25 years from now -- just take 25 years from now, right before 2050. where do you think artificial intelligence will be?
10:33 pm
andrew: i hope the world will be a much safer place. charlie: safer? andrew: when disasters happen, there will be fleets of robotic devices coming in to render aid, fast triage to get people to safety, autonomous vehicles coming to pick up severely injured folks, large pieces of heavy equipment coming in to move things out of the way. if you could imagine a world in which, just like now we have learned to build houses to protect us from the elements, we are using machines to give us far greater protection. then the 50% of the planet currently living their lives in fear almost every year as to what will happen to them may have a much more secure and pleasant life. ♪ charlie: you can't stop
10:36 pm
technology. andrew: that's right. and if we in the united states said, we are not going to do this, we could sit on our hands and let europe and asia do it, but we will not want to do that. that's not what the united states is all about. charlie: who stops us? our collective will? is it a legislative function, some ethics board that decides, this far but no further?
10:37 pm
andrew: the place we need legislative help is asking some uncomfortable questions to which we will need answers in order to save lives. for example, when you are programming an autonomous vehicle to minimize casualties in an accident, who decides whether that car should be protecting the driver or the person being crashed into? i don't want the engineers deciding that. i don't want me to decide that. charlie: you want congress to decide that? andrew: it sounds impossible, but i want congress to decide that. charlie: what worries you the most about this forward progress of artificial intelligence? andrew: i worry that it's very stressful for people to live through times of change, and
10:38 pm
this is a time of change, and it will cause great anxiety. for all the economic theories and experience we have saying that disruption occurs, people get new jobs, life goes on, and we progress, the frightening part is that during the disruption, a lot of people are displaced and they have a harder time than they might have had otherwise. charlie: that is the kind of impact amazon had, didn't it? amazon was a new business model, but it was using technology. it was all online and it was disruptive to the business of bookstores around the world. andrew: yes. this really is the story of technology, and the story of the united states, over the last 200 years. we have constantly found better, more effective ways to do things. but you cannot do that and not think about the consequences to all the people who have been trained to do something which we are now automating. charlie: this is what intrigues people, this question. people like elon musk.
10:39 pm
you have stephen hawking saying it could spell the end of the human race. stephen hawking saying that. elon musk said it is the biggest existential threat we face. here are pretty smart guys saying, watch out. do we know what we are creating? andrew: it is worth being extraordinarily careful about all these things. i would put that up there with genetic modification of foods, with -- if we broadcast stuff out into interstellar space, some other alien civilization might spot us. these very long-term existential questions are worth thinking about. i want to make a distinction: at the moment, what we are building here in places like the robotics institutes around the world are the equivalent of really smart calculators, which
10:40 pm
solve specific problems. charlie: artificial intelligence that is smarter than you are -- is that bad? andrew: if i was really worried about that, i would already be really unhappy. i know there are billions of people smarter than me out there at the moment. we all know there are many smarter people and smarter organizations than all of us. charlie: artificial intelligence that can outthink the human population. my question is, who controls the artificial intelligence? we talk about artificial intelligence being created by engineers, scientists. could it go out of control? andrew: no one knows how we would go about building something that
10:41 pm
frightening. that is not something our generation of a.i. folks can do. it is quite possible that someone 30 or 80 years from now might start to look at that question. at the moment, with the word artificial in artificial intelligence, i am dreadfully worried about releasing software for autonomous car driving which turns out to have a serious bug which means that on a leap day, all the cars suddenly stop on the freeway because they had a bug in their code. that could kill tens of thousands of people. that is a very real question, and responsible engineers have got to have responsible ways of validating and improving their systems to prove they are safe. charlie: what is the difference between artificial intelligence and super intelligence? andrew: artificial intelligence is a real technology, just like steel or hydroelectric power,
10:42 pm
which we are using at the moment to make our lives safer. super intelligence is an intriguing science-fiction concept, like meeting aliens or having nano-bots crawling through our veins. charlie: but we have seen thing after thing after thing which in science fiction became reality. andrew: that's absolutely true. plenty of stories do involve the frankenstein story, and we are all concerned about this model of an eager scientist who produced a compound they thought would do good that turns out to do bad. in modern engineering, you do not want to release something without testing it. it's illegal to release something life-safety-critical without having very detailed -- charlie: to make sure you understand the consequences and collateral damage.
10:43 pm
andrew: if you look at what is happening in a large company, or even in universities -- a large company that is producing a complicated piece of equipment, much more than half the effort goes into testing it. in many projects, 10% of the work is inventing the new thing and 90% of the work is in testing to find out any scenarios which can cause trouble. that is what frightens me: if we a.i. people, in the excitement of getting this stuff out there, don't test enough, and some of our robots, instead of saving lives, inadvertently hurt people. charlie: that's what keeps you up late at night? andrew: yes. charlie: that fear that we have not tested it, it gets out of control, it runs like wildfire. andrew: in the early days of computing -- computing in
10:44 pm
medicine has saved millions of lives. but in the early days, there were some computer programs which accidentally made pieces of medical equipment go out of control and actually killed patients. what happened then is the computer scientists realized you have to have detailed testing procedures. we don't have to make the same mistake with robots. we are already smart enough to know we have to test this stuff. i frankly think it is not ethical to release software for autonomous vehicles on the road, for example, right now, until we have some governmental standards for safety. charlie: but it's coming. andrew: yes. charlie: what about this? through the power and progress and rapid increase in potential of artificial intelligence, some mad person who happens to have all the smarts in the world takes advantage of all the other
10:45 pm
learning in the world, and programs some robot, or some other kind of thing, to do destructive acts. perhaps racial, perhaps antisocial, perhaps terrifying communities. is that a scenario? could that happen? andrew: absolutely. every piece of technology which can improve the human condition can also be used to damage the human condition. i absolutely believe, unfortunately, right now there are people in various parts of the world figuring out how to put explosives onto drones. amateurs, even, learn how to do this stuff on the internet. just as we have medical
10:46 pm
treatments where there have to be controls on disease agents, we as a society have to understand that technology will be used by evil people as well as good people. charlie: just think about the horror. we already have people who are terrorists who are willing to die for their cause. they're willing to blow themselves up. think about how much larger the potential would be, if you could multiply that, and somehow get inside of -- and do things on such a large scale. andrew: so, i am deeply worried about all kinds of acts of terrorism. and there are many folks who are using tools from artificial intelligence and machine learning to help quickly react to prevent these kinds of disasters. for example, after the boston
10:47 pm
marathon bombing, there was limited visual information about possible suspects. but it was possible to then use computer vision to support automatic methods to sift through all the information from all the videography around at the time to help quickly determine potential suspects. so, while i agree there is a real danger of people using robots for evil, the solution is not for us to sit on our hands and say, we had better not do robot work. we have to figure out how to use robots to protect people. charlie: how much of artificial intelligence is already being used by the military? for example, just the capacity to use anything that is autonomous to advance into places the military might consider too risky otherwise. andrew: i'm not the right person to talk about all aspects of the
10:48 pm
military. for the last 30 or 40 years, cruise missiles have been using artificial intelligence to route themselves efficiently. charlie: and drones too. andrew: in understanding the world, for surveillance and getting a sense of dangers, a.i. has been used for more than a decade in those areas. the military is also investing in experimental robotics. not just flying drones, but ground-based robots, water-based robots, and underwater robots able to do surveillance when it is too dangerous. charlie: i have got to believe the military is doing this. they want to know what the other person is doing. then you can figure out what they are doing, and then you can figure out the antidote to it. andrew: yes. one thing that was noticed
10:49 pm
during the wars of the last decade, some u.s. soldiers asked friends from home to send them remote-controlled vehicles, because they actually felt safer piloting a toy vehicle into an area in order to see what was going on. charlie: because of landmines. andrew: yes. this was just hobbyist folks in the military. now there are government programs to use small robots to protect the lives of troops. charlie: the interesting thing, it seems to me, is that this is not just what nation-states are doing. this is what a whole range of people are doing who have become very smart at operating supercomputers, accessing the internet, or creating software. and that is creating a whole range of people who can do a whole range of things.
10:50 pm
andrew: yes. and when you look at the world this way, the countries or organizations which have got the trained, smart people are the ones who are going to prosper in this situation. i would not like to be a country that, for example, had very few mathematicians. that would indicate all the technical people are eventually going to be in china. china has a lot more people, a lot more mathematicians. charlie: they have an emphasis on math and science there. andrew: that's right. charlie: computer sciences especially. andrew: yes. if i'm looking at what the natural resources are for a successful country in the 21st century, it is the number of math-trained brains. it is not the amount of oil. charlie: could i make this even more precise? as we look at the contest, it's not a zero-sum game among nations.
10:51 pm
those that have the most capable, proficient, and innovative use of artificial intelligence are going to be in a commanding place. andrew: yes, absolutely. you see this even now; it is small groups of smart people who start the $100 billion companies. i really do profoundly believe that the united states, which has led for the last 60 or 70 years in technology, can still lead here, and i want it to, because i want us to build the automated planet that respects human life and values. you mentioned earlier the question of where the military goes to find these brilliant people. they are going to places like silicon valley. the main thing a lot of us are thinking about now is the care and feeding of all the young
10:52 pm
tech geniuses. the reason folks are going to places like silicon valley is that the young tech geniuses want to live in cool places. that is why pittsburgh has become cool now; we are getting a massive influx of a.i. people. charlie: the same way with austin, texas, and a few places like that. andrew: exactly. if you want to build up a great a.i. work force, you need to have an environment where people can really explore crazy ideas. charlie: isn't it in our national interest to share? andrew: if you want to get your idea out there and used by billions of people, your best bet is to do a startup in the united states with viral marketing which gets the whole world using it. in that sense, yes. everyone wants the rest of the world to share what they are doing. there are trade secrets and military secrets at the same time, and one of the things you learn in either of these
10:53 pm
environments is, when you come across some great technical idea, you are not going to hold onto it forever. you've got to use it right now. something else is going to happen, or the technology is going to change and wipe away that advantage. charlie: you know the business side as well as the scientific side. is it realistic to expect companies like google -- the question i asked earlier -- to want to be as secretive as possible about this, because the competition is so intense? andrew: it does make sense to protect new technology. usually it is by keeping it secret rather than patenting it these days. charlie: that's the secret these days? andrew: correct. here is the fascinating thing about the game at the moment. do you remember how getting these a.i. experts was the most important thing? you have to keep them happy and motivated. telling someone to come and work for you to do something which --
10:54 pm
your parents are never going to see or know about -- is not motivating. to really get the best people into your companies or organizations, you actually do want to give them visibility as to what they are doing. that's why it is not the case that you have very long-term secrets about technology anywhere. for those of us who are employing a.i. experts, part of what we are doing is telling them, you are saving the world, you are changing the world, and we want you to be part of it. charlie: and you become heroic and popular if you do. are we at risk of creating an a.i. arms race? andrew: there has been a technology race in computer science for decades, and it's going strong at the moment. charlie: the race has gotten more intensive. andrew: yes.
10:55 pm
charlie: is it all behind closed doors? andrew: no. interestingly, much of the most exciting stuff going on in a.i. still gets published. there are annual conferences. there is something called the american association for artificial intelligence; if you are doing something cool in a.i., the best thing that can happen to you is to get a paper accepted by that conference, and you are on the world stage of what is going on. charlie: because everybody in the world that has resources and is a competitor in terms of the big ideas in america, they want smart people. it's like an nfl draft. andrew: it's very much like that. at carnegie mellon, we are now planning on sending talent scouts out to the high schools and even middle schools to find these people. charlie: you know what i like about this?
10:56 pm
we want people to care more about science. we want young people to be as interested in science as they are in becoming a rock star, or an nfl star, or an nba star. so that science, because of its consequences, is a place where people know they can do well, do good, and be celebrated. andrew: when you are programming a robot, it's like magic. the thing i tell middle schoolers is, the closest thing to getting to go to hogwarts is getting to do robotics and a.i. charlie: what excites you the most? what gets you revved up? andrew: part of the reason i moved from industry to carnegie mellon is that the whole game over the next few decades is won or lost depending on the talent of the people in these systems. if they do it well, the year 2040 could be the best year to be alive in the history of the human race.
11:00 pm