Book TV: Gary Marcus, "Rebooting AI" (C-SPAN, November 16, 2019, 2:00pm-2:42pm EST)
2:00 pm
roosevelt on scientific developments, including nuclear research, that helped to win the war. Any other questions? [applause] I guess I am to make myself available to sign books if anyone wishes one. The books will be signed out in the library in front of the store. Thank you all. [applause] [background sounds] And now on C-SPAN Book TV, more television for serious readers.
2:01 pm
>> Good evening, everyone, and welcome, and thank you for supporting your local and employee-owned bookstore. C-SPAN is recording tonight, so please make sure your phones are silenced. [silence] I do want to let you know about a couple of other events coming up, including a friend of the store reading from his new novel, and on Tuesday Stephen Kinzer is going to present a history of the CIA. We also have some off-site events; tickets are still available for Monday's and Wednesday's conversations. Tonight we welcome Gary Marcus, author of Rebooting AI, which argues that a computer beating a human in Jeopardy does not signal that we are on the doorstep of fully autonomous cars or superintelligent machines. Taking inspiration from the human mind, the book explains what we need to advance AI to the next level, and suggests that if we are wise along the way, we won't need to worry about a future of machine overlords.
2:02 pm
The book has been called "finally, a book that tells us what AI is, what it is not, and what AI could become if only it were ambitious and creative enough," and praised as a lucid and well-informed account. Gary Marcus is the founder and CEO of Robust AI and was founder and CEO of Geometric Intelligence. He has published in journals including Science and Nature, was perhaps the youngest professor at NYU, and is the author of Guitar Zero and now, with Ernest Davis, Rebooting AI. Thank you. [applause] [background sounds] >> We had some technical difficulties here. I am here to talk about this new book, Rebooting AI, which some of you
2:03 pm
might have seen in the New York Times. The subtitle is Building Artificial Intelligence We Can Trust. I say we all should be worried about that question, because people are building a lot of artificial intelligence, and I don't think it's AI that we can trust. The way we put it in the book: artificial intelligence has a trust problem. We are relying on AI for more and more, but it hasn't yet earned our confidence. We also suggest, and I want to suggest tonight, that there is a hype problem. A lot of AI is overhyped these days, often by people who are very prominent in the field. So Andrew Ng, a leading figure in deep learning, a major approach to AI these days, has said: "if a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future." That's a profound claim: anything you can do in a second, we can get AI to do. If it were true, the world would be on the verge of changing altogether. It may be true someday, 20 or 50 or 100 years from now, but I'm going to try to persuade you that it is not
2:04 pm
remotely true now. The trust problem is this: we have things like driverless cars that people think they can trust, but they shouldn't actually trust them, and people die in the process. This is a picture from a few weeks ago in which a Tesla crashed into a stopped emergency vehicle. That has actually happened five times in the last year: a Tesla on Autopilot has crashed into a stopped vehicle by the side of the road. It's a systematic problem. Here's another example from the robot industry, and I hope this never happens to my robots. This is Knightscope's security robot, and it basically committed suicide by walking into a little pool of water. [laughter] >> If machines can do anything a person can do in a second, well, a person can look at a puddle in a second and say, maybe I shouldn't go in there. These robots can't. There are other kinds of problems, like bias, that people have been talking about lately. You can do a Google image search for the word "professor" and you get something like this.
2:05 pm
Almost all of the professors that come up are white males, even though the statistics in the United States are that only about 40 percent of professors are white males, and if you look around the world, it's much lower than that. These are systems that take in a lot of data, with no idea whether the data is any good, and just reflect it back out, thus perpetuating false stereotypes. The underlying problem with artificial intelligence right now is that the techniques people are using are simply too brittle. Everybody's excited about something called deep learning. It's really good for a few things, actually for many things: object recognition, for example. You can get it to recognize that this is a bottle and maybe this is a microphone, or to recognize my face and distinguish it from my uncle Ted's face, I hope. Deep learning can also help some with radiology. But it turns out that all the things it is good at fall into one category of human thought or intelligence: the category of things we call perceptual.
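Deep learning itself is far more sophisticated, but the perceptual category described here, labeling a new example by its resemblance to labeled training examples, can be caricatured in a few lines of Python. The feature vectors and class names below are invented purely for illustration:

```python
import math

def nearest_neighbor(train, x):
    """Classify x with the label of the most similar training example,
    a bare-bones stand-in for perceptual categorization by resemblance."""
    return min(train, key=lambda item: math.dist(item[0], x))[1]

# Toy "objects" as (width, height, roundness) feature vectors.
train = [
    ((2.0, 9.0, 0.20), "bottle"),
    ((2.2, 8.5, 0.30), "bottle"),
    ((1.0, 7.0, 0.90), "microphone"),
    ((1.2, 6.5, 0.80), "microphone"),
]

# A new example close to the training data is classified correctly.
print(nearest_neighbor(train, (2.1, 8.8, 0.25)))  # prints "bottle"
```

The catch, as the talk goes on to show, is that this kind of resemblance matching says nothing useful about examples far from everything the system was trained on.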
2:06 pm
You see something, and you have to identify further examples that look the same or sound the same, and so forth. But that doesn't mean that one technique is useful for everything. I wrote a critique of deep learning a year and a half ago, called "Deep Learning: A Critical Appraisal"; you can find it online for free. The summary of it is that deep learning is really brittle, for the kinds of reasons I'm about to show. >> Even though everybody is excited about it, that doesn't mean things are perfect. First I will give you a realistic counterpart to Ng's claim. If you are running a business and want to use AI, you need to know what it actually can do for you and what it cannot. Or if you are thinking about AI ethics and wondering what machines might be able to do soon and what they might not, I say it's important to realize the limits. Here's my counterpart: if a typical person can do a mental task with less than one second of thought, and we can gather an enormous amount of data
2:07 pm
that is directly relevant, then we have a fighting chance to get AI to do it, so long as the test data are not too terribly different from the data we trained the system on, and the domain doesn't change much over time. That is a recipe for games, and games are fundamentally what AI is good at right now. AlphaGo is the best Go player in the world, better than any human, and it fits exactly what these systems are good at. The domain, the game, hasn't changed in 2,500 years; it's a perfectly fixed set of rules, and you can gather as much data as you like, almost for free: the computer plays itself, or different versions of itself, which is what DeepMind did in order to make the best player, and you can just keep playing to keep gathering more data. Let's compare that to a robot that does elder care.
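The self-play recipe just described, generating endless labeled data by having the system play itself, can be sketched in Python. The game here is a toy Nim variant, not anything DeepMind actually used, and random play stands in for a real learning policy:

```python
import random

def self_play_game(policy):
    """Play one game (take 1-3 stones from a pile of 21; taking the last
    stone wins) and label every recorded position with the final outcome."""
    pile, player, history = 21, 0, {0: [], 1: []}
    while pile > 0:
        action = policy(pile)
        history[player].append((pile, action))
        pile -= action
        if pile == 0:
            winner = player
        player = 1 - player
    # The winner's moves become positive examples, the loser's negative.
    return [(state, action, +1 if p == winner else -1)
            for p in (0, 1) for state, action in history[p]]

random.seed(0)

def random_policy(pile):
    return random.randint(1, min(3, pile))

dataset = []
for _ in range(1000):  # every extra game is more labeled data, for free
    dataset.extend(self_play_game(random_policy))
print(len(dataset), "labeled positions from 1000 self-play games")
```

Every additional game adds labeled positions at no human cost, which is why fixed-rule games are such friendly territory; the elder-care robot discussed next gets no such free, consequence-free trials.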
2:08 pm
You don't want a robot that does elder care to collect an unbounded amount of data through trial and error, working some of the time and failing the rest of the time. If it works 95 percent of the time at putting grandpa into bed and drops grandpa 5 percent of the time, that's lawsuits and bankruptcy. Trial and error is not going to fly for the AI that would drive an elder-care robot. The way deep learning works, with something called a neural network like the one depicted at the top, is fundamentally by taking big data and making a statistical approximation. It takes labeled data: you show it a bunch of pictures of, say, Tiger Woods and a bunch of pictures of Angelina Jolie, then you show it a new picture of Tiger Woods that isn't too different from the old pictures, and it correctly identifies that it is Tiger Woods and not Angelina Jolie. This is the sweet spot of deep learning. People got really excited when it first became popular: deep learning will soon give us super-smart robots. But I believe you have already seen some examples
2:09 pm
showing that robots are not really all that smart, or at most are smart in a narrow way. The promise has been around for several years but has not been delivered on. In fact, there are lots of things that deep learning is bad at, and most of what I'll show is about that. On the right are some training examples: you teach a system that these things are elephants. If you showed it another elephant that looked a lot like those on the right, the system would have no problem at all; it would know it's an elephant. But suppose you showed it the picture on the left, a silhouette. The way the deep learning system responds is to say "person." It mistakes the silhouette of an elephant for a person, and it's not able to do what you would do, which is first to recognize it as a silhouette and second to realize that the silhouette is of an elephant. This is what you might call extrapolation, or generalization, and deep learning can't really do it. What's troubling is that every day we are seeing more uses of deep learning systems to make judgments about whether people should stay in jail or whether they should
2:10 pm
be given particular jobs, and so forth, when it's really quite limited. Here's another example making the same point about unusual cases. If you show it a picture of a school bus lying on its side in a snowbank, it says, with great confidence, that it is a snowplow. What that tells you is that the system cares about things like the texture of the road, and still has no idea what the difference between a snowplow and a school bus really is. It is fundamentally mindless statistical summation and correlation; it doesn't know what's going on. The example on the right was made by some people at MIT. If you are a deep learning system, you say it's an espresso because there is foam there. To you it is obviously a baseball, but the system picks up on the texture of the foam and says espresso. Another example: if you show a deep learning system a banana with a psychedelic sticker placed in front of the banana, it calls it a toaster. This kind of technology is
2:11 pm
starting to control our society, and if you are not worried, you are not paying attention. Can we get the next slide? Maybe not. We are going to have to go without slides because of technical difficulties, but I will continue; there's nothing else I can do. All right, let's try once again here. [background sounds] Okay. The
2:12 pm
next thing I was going to show you is a picture of a parking sign with stickers on it. It would be better if I could show you the actual picture, but presenting slides over the web is not going to work, so imagine a parking sign with stickers on it. The deep learning system calls it "a refrigerator filled with lots of food and drinks." It is just completely off; it is picking up something about the colors and textures, but it doesn't really understand what's going on. Then I was going to show you a picture of a dog doing a bench press with a barbell. Yes, something has gone wrong with the slides. [laughter] What can I say. [background sounds] I would need a backup laptop and I just couldn't get one fast enough; I don't think they are going to be able to fix it, so the slides are gone. So suppose I showed you a picture of a dog with a barbell, lifting the barbell. A deep learning system can tell you there is a barbell there and a dog there, but it can't tell you, hey,
2:13 pm
that is really weird: how did that dog get so ripped that it could lift that barbell? It has no sense of the things it is looking at. Current AI is also out of its depth when it comes to reading, so I'm going to read a short little story that Laura Ingalls Wilder wrote. It is about Almanzo, a nine-year-old boy who finds a wallet full of money dropped in the street. Almanzo's father guesses that the wallet might belong to somebody whose name is Mr. Thompson, and Almanzo finds Mr. Thompson. So here's the text that Wilder wrote: Almanzo turned to Mr. Thompson and asked, "Did you lose a pocketbook?" Mr. Thompson jumped. He slapped a hand into his pocket and fairly shouted, "Yes, I have! Fifteen hundred dollars in it, too. What do you know about it?" "Is this it?" Almanzo asked. "Yes, that's it," he said, snatching the pocketbook. He opened it and hurriedly counted the money. He counted all the bills over twice, and then he breathed a great sigh of relief
2:14 pm
and said the boy hadn't stolen any of it. You form a mental image of all this. It might be very vivid or not so vivid, but you can infer a lot of things: why Thompson worried the money had been stolen, where the money might be, why he reached into his pocket looking for the wallet. You know that a wallet occupies physical space, that if your wallet is in your pocket you'll recognize it, and that if you don't feel anything there, it won't be there, and so forth. From all of this you can make a lot of inferences about things like how everyday objects work and how people work, and you can answer questions about what is going on. There is no AI system yet that can actually do that. GPT-2 is the closest. Some of you might have heard of it; OpenAI is famous because Elon Musk helped found it, and its premise was to give away all of its AI for free.
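GPT-2 is vastly larger and more capable, but the failure mode at issue, stringing together words that co-occur without any model of the situation, can be illustrated with a toy bigram generator in Python. The mini corpus is invented for the demo, loosely echoing the Almanzo story:

```python
import random
from collections import defaultdict

corpus = ("almanzo found the wallet . almanzo returned the wallet . "
          "mr thompson counted the money . mr thompson kept the money "
          "in a safe place .").split()

# All the model ever records is which word follows which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(1)
word, out = "almanzo", ["almanzo"]
for _ in range(8):
    word = random.choice(follows[word])  # locally plausible next word
    out.append(word)
print(" ".join(out))
```

Every adjacent pair in the output is attested in the corpus, so it reads as locally fluent, yet nothing in the model keeps the money from wandering back into a safe place.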
2:15 pm
That's what makes the story so interesting. They gave everything away for free until they made this thing called GPT-2, and then they said it is so dangerous that we can't give it away: an AI system that was so good at human language that they didn't want the world to have it. Some people figured out how it worked and made copies of it, so you can now use it on the internet. My collaborator Ernie Davis, my co-author, and I fed the Almanzo story into it: Almanzo has found the wallet, he has given it to the guy, the guy has counted the money, and now he's super happy. What you do is feed it a story and it continues it. We spent a lot of time, many hours, on this, and one continuation it produced had Thompson going to get the money from the safe place where he had kept it. This just makes no sense. It is perfectly grammatical, but if he just found his wallet, what is the money doing in a safe place? The words are correlated across some vast database, but that is completely different from understanding. The second part of the talk, which I will do without visuals, is called looking for
2:16 pm
clues. The first clue, I'd say, as we develop AI further, is to realize that perception, which is what deep learning does well, is just part of what intelligence is. Some of you, especially here in Cambridge, know Howard Gardner's theory of multiple intelligences: there is verbal intelligence and musical intelligence and so forth. For cognitive psychologists there is also common sense, planning, attention, many different components, and what we have right now is a form of intelligence that is just one of those. It is good at certain things, perception and certain kinds of game playing, but that doesn't mean it will do everything else. The way I think about it is that deep learning is a great hammer, and we have a lot of people looking around saying, because I have a hammer, everything must be a nail. Some things really are nails, like chess and Go and so forth, but there has been much less progress
2:17 pm
on language. So there has been exponential progress in how well computers play games, but there has been nearly zero progress in getting them to understand conversation. That is because intelligence itself has many different components, and no silver bullet can solve all of it. The second thing I want to say is that there is no substitute for common sense; we need to build it into our machines. Here I wanted to show you a picture of a robot in a tree with a chainsaw, cutting on the wrong side of the branch, so that it is about to fall down, if you can picture that. This would be very bad, and you would not want to solve it with the popular technique called reinforcement learning, where you have many, many trials: every one of those mistakes would be a disaster. The next thing I was going to show is a picture of something called a yarn bowl, which is like a little bowl that holds a ball of yarn, with the strand coming out of a hole.
2:18 pm
As soon as I describe it to you, you have enough common sense about how physics works to understand what I might want to do with it, and you can recognize even a really ugly one that looks totally different, because you get the basic concept. That is what common sense is about. Next I was going to show you a picture of a Roomba, you know, the vacuum-cleaner robot, and then a picture of a dog doing its business, as you might say. The robot doesn't know the difference between the two, and so we get the "poopocalypse," something that has happened not once but many times: the robot knows no difference between what it should clean up and dog waste, and it spreads the dog waste all the way through people's houses. It has been described as the Jackson Pollock of artificial intelligence, and it is a common-sense disaster. What I really wish I could show you most is my daughter climbing through chairs sort of like the ones you have now. My daughter at the time was four years old, and you are sitting in chairs where there is a space
2:19 pm
between the bottom of the chair and the back of the chair. When she was four years old she was just tall enough to fit through, and she did. She didn't do it by reinforcement learning, and she didn't do it by imitation: I was never able to climb through a chair, and couldn't today even if I were in good shape and exercising a lot. Those of you who watched the television show The Dukes of Hazzard remember people climbing through the window to get inside the car; she had never seen that. She invented the goal for herself. I'd say this is the essence of how human children learn things. They set goals: can I do this? What happens if I do that? Can I walk on the small ridge on the side of the road? My son did that at five and six, on and off all day long, making up games like this and trying them out. The experiments kids run are essentially unlimited. So she squeezed herself through the chair, doing a little problem solving along the way. This is very different from collecting a lot of data with a lot of labels, the way deep learning works right now. I would suggest that the way we
2:20 pm
want to move forward is to take some cues from how kids do this. The next thing I was going to do is quote Elizabeth Spelke, who teaches at Harvard down the street. She has made the argument that if you are born knowing that there are objects and sets and places and things like that, then you can learn about particular objects; but if all you have is pixels and videos, you can't really do that. You need a starting point. This is what people call the nativist hypothesis, the opposite of the blank-slate hypothesis. Then I had a video of a baby ibex climbing down a mountain. Nobody wants to say that human infants have anything built in other than temperament, that a kid is or is not shy; people who say that human infants are built with notions of space are still arguing the point. But nobody has any problem thinking that animals might work this way. So I was going to show a baby ibex climbing down a mountain a few hours after it was born. Anybody embracing this view has to realize that something is built into the brain of that baby ibex:
2:21 pm
there has to be, for example, an understanding of three-dimensional geometry from the minute the ibex comes out of the womb. Similarly, it must know something about physics. Maybe it doesn't know everything; it can calibrate, figuring out just how strong its legs are and so forth. But as soon as it's born, it knows a great deal. For the next video you'll have to go look online for IEEE's reel of robots failing. It shows a bunch of robots doing things like opening doors and falling over, or trying to get into a car and falling over. It's sad that I cannot show you this right now, but you get the point: current robots are really quite ineffective in the real world. And the runs I was going to show were things that had all been simulated beforehand. It was a competition, and everybody knew exactly what the events were going to be; the robots just had to open the doors and turn the dials and such, and the teams had done it all in computer simulation. But when they got to the real world, the robots failed left and right.
2:22 pm
The robots couldn't deal with things like friction and wind and so forth. So, to sum up: I know a lot of people are worried about AI right now, about robots taking over our jobs and killing us all and so forth. There's a line in the book which is something like this: worrying about all of that stuff now would be like someone in the 14th century worrying about highway traffic fatalities, when people would have been better off worrying about hygiene. What we really should be worried about right now is not some vast future scenario in which AI is much smarter than people and can do whatever it wants. We should be worried about the fact that we are using current AI, which understands very little, in things like decisions about jobs and decisions about jail sentences. Anyway, on this topic of robots attacking, I suggest a few things you can do. The first one is just to close the
2:23 pm
door. Robots right now actually can open doors; there was a competition to teach them how to do that. But if the door is locked, there is not even a competition yet on how to have robots unlock the door. It will be another seven or ten years before people start working on doors where the lock is a little bit jammed and you have to jiggle the knob, and stuff like that. So just lock the door. Or put up one of the stickers that I showed you; you will completely confuse the robot. Or talk to it in a foreign accent in a noisy room; robots don't get any of that stuff. The second thing I want to say is that deep learning is a better ladder than we had before. It lets us climb to certain heights. But just because it is a better ladder doesn't mean it will necessarily get you to the moon. It seems to me we have a real problem here: what we have to discern, as listeners and readers and so forth, is the difference between a little bit of AI and some magical form of AI that simply hasn't been invented yet.
2:24 pm
So let me close, and then I'd love questions, as many as you have. If we want to build machines that are as smart as people, I'd say we need to start by studying small people, human children, and how they are flexible enough to understand the world in a way that AI isn't yet able to do. Thank you very much. [applause] >> Yes, questions. >> I'm a retired orthopedic surgeon, and I got here just in time, because they're coming out now with robotic surgeries, which are more and more common in knee replacements. Do you have any information about where that is headed and how good it is? >> The dream is that the robot can completely do the surgery itself. Right now most of that stuff is sort of an extension of the surgeon, like any other tool. In order for robots to be able to do full-service surgery, they need to really understand the
2:25 pm
underlying biology of what they are working on, so that they understand the relations between the different body parts they are working with. Our ability to do that right now is limited, for the kinds of reasons I have been talking about. There will be advances in the field, but I would expect it to be more on the timescale of sending people to Mars, whenever that is, before we have the sort of robot surgeon we see in science fiction. We are nowhere near that point. It will happen someday; there is no principled reason why we can't build such things, machines with a real understanding of biology and a sort of medical training. But we don't have the tools right now that would allow them to absorb the medical training. It reminds me of the famous experiment in cognitive development where a chimpanzee was raised in a human environment, and the question was, would it learn language? The answer was no. If you sent a current robot to medical school, it would learn diddly-squat, so it wouldn't help it become a robotic surgeon.
2:26 pm
Other questions? [inaudible conversation] >> ...given the current limitations of AI, driverless cars? >> I'm sorry, I didn't mention self-driving cars very much. They are a really interesting test case. It seems like it may be logically possible to build them, but empirically you keep hitting what we call outlier cases, and that follows directly from what I was saying before: if the training data, the things you teach the model or system on, are too different from what the system sees in the real world, it doesn't work very well. In the case of the tow trucks and fire trucks that Teslas keep running into, it is probably in part because they are mostly trained on ordinary data, where all the cars are moving along the highway, and when the system sees something it hasn't seen before, it really doesn't understand how to respond. So I don't know whether driverless cars are ultimately
2:27 pm
going to prove to be closer to something like chess or Go, bounded enough to get the system to work with current technology, or more like language, which to me seems completely outside the current range. People have been working on them for 30 or 40 years; Waymo, under Alphabet along with Google, has been working on it for a decade. There is progress, but it is relatively slow, and it looks a lot like whack-a-mole: people solve one problem, and it causes another problem. The first fatality in a driverless car was a Tesla that ran underneath a semi trailer that took a left turn across a highway. First of all, that is the problem of being outside the training set, whatever unusual cases that involved. But I have been told, and I don't have proof of this, that what happened is that the Tesla thought the tractor-trailer was a billboard, and the system had been programmed to ignore billboards, because otherwise it would slow down so often that it would be
2:28 pm
rear-ended all of the time. So one problem, slowing down for billboards, was solved, and another problem popped up. Whack-a-mole. The story so far is that driverless cars are a lot like that: people make a little bit of progress, they solve one particular problem, but they don't solve the general problem, and in my view we don't have general enough techniques yet. Instead of trying to solve the general problem, people are saying, I will just use more data and more data and more data. They get a little bit better, but we need to get a lot better. Right now the cars need a human intervention about every 12,000 miles, last I checked. That sounds impressive, but it turns out that humans have a fatality only about every 134 million miles, on average. If you want to get to a human level, you have a lot more work to do. It is just not clear that grinding out the same techniques is going to get us there, and this is again the metaphor: having a better ladder is not necessarily going to get you to the moon. >> My question is about machine learning as we are using it right now.
2:29 pm
I'm an astronomer, and it has been said that if you are just doing pattern recognition, you don't really learn anything. So the question is: would you say we are making progress on having machine-learning kinds of programs be able to tell us why they are making the decisions they make, in enough detail to be useful? >> There's a lot of interest in that. This may change, but right now there is a tension between the techniques that are most effective and the techniques that produce what we call interpretable results. As you know, the best current techniques for a lot of perceptual problems, say, identifying whether this object looks like another asteroid you have seen, are deep learning, and deep learning is about as far from interpretable as you can possibly imagine. People are making progress at making that a little bit better, but right now, if you want the best results, you give up interpretability. There are a lot of people worried
2:30 pm
about this problem, and I have not seen any great solution to it so far. I don't say it is unsolvable in principle, but right now we are at a moment where the ratio between how well the systems work and how little we understand them is pretty extreme. Going back to the driverless cars: there will be cases where somebody is going to die, and somebody is going to have to tell the parent of a child, or someone like that, that the reason they died seems to be that parameter number 317 had a negative value. It might be completely meaningless and unsatisfying, but that is sort of where we are right now. Other questions? >> Your thoughts on applications to healthcare diagnostics, on the racial bias people are concerned about, and on the fact that we can't afford the cost of misdiagnosis? >> That's three different questions. The first, and if I forget the others you will have to remind me, is whether you can use this stuff for medical diagnosis. The answer is yes, but it relates to
2:31 pm
the last, which is how costly misdiagnosis is; the more costly it is, the less you can rely on these techniques. Of course, it is also not the case that human doctors are completely reliable. The advantage machines have right now is in something like radiology in particular, because they are pretty good at pattern recognition. They can be as good as radiologists, at least in careful laboratory conditions, though nobody, as far as I know, has fielded a working real-world system that does radiology in general. What exist are more like demonstrations: I can recognize this particular pattern. So in principle deep learning has an advantage over people there. But there is a definite disadvantage, too, in that it can't read the medical charts. There is a lot of what we call unstructured text in medical charts, doctors' notes and stuff like that, just written in English rather than being a nicely structured chart. Machines can't read this stuff well. Maybe in a limited way they can recognize keywords and stuff
2:32 pm
like that. A really good radiologist is like a detective, looking at an electronic health record and saying: I realize there is a slight asymmetry here, but it traces to the accident that person had 20 years ago, and trying to put together the pieces in order to arrive at an interpretation, a story about what is going on. Machines can't yet do anything like that. Again, nothing is impossible, but it is not going to roll out next week. Some of the first cases of AI really rolling out in medicine are probably going to be radiology things you can do on a cell phone where you don't have a radiologist available, in developing countries where there are not enough doctors. The systems might not be perfect, but if you can reduce false alarms to some degree, you can get decent results where before you maybe couldn't get any results at all. So we will start to see that. Pathology is going to take longer, because we don't actually have the data: pathologists have not been digital, and they are only starting to go digital.
2:33 pm
Radiologists have been digital for a while. Then there are things like, if you ever watched the television show House, trying to put together a complex diagnosis of a rare disease or something like that. Systems are not going to be able to do that for a long time. There have been attempts like that, but they just were not very good, missing things like heart disease when it was obvious to a first-year medical student. That goes back to the difference between having a lot of data and having understanding. If you have correlations but do not understand the underlying biology and medicine, you can't really make those diagnoses, so we just don't have the tools yet for really high-quality automated medical diagnosis. That is a ways off. >> At the company where I work, I am sort of the AI skeptic on the team. Part of what I am working on is scoping which tasks in our small industry automation and machine
2:34 pm
learning would be helpful for, and which tasks, like our testing or doing help desk, are wider problems, wider problems that are, like you are saying, not possible right now with current methods. How would you, since I am always looking for ways to explain it and get the idea across, draw the line? >> I'd say the fundamental difference is that some problems are closed-world: the possibilities are limited, and the more limited the world is, the more current techniques can handle it. Other problems are open-ended; they can involve arbitrary knowledge. Driving is interesting because in some ways it is closed-ended, in that there are only so many roads, if we are talking about ordinary driving
2:35 pm
in ordinary circumstances -- but it's open-ended because there could be a police officer with a hand-lettered sign saying that the bridge is out. there are so many possibilities, and in that way it is open-ended. the driverless cars that work well so far stick to the sort of closed end, where there is a lot of conventional data, and they work very poorly when they're forced to go outside of, informally, their comfort zone. the systems have a comfort zone -- i'm being a little glib about it -- and when they go outside of it, it doesn't work out well.
>> [inaudible conversation]
>> ...a billion years of evolution...
>> ...so are we going the wrong way, or just not using enough data?
>> i don't see it that way.
2:36 pm
the way i see it is that there were a billion years of evolution, and what they built -- the genome -- is a rough draft of the brain. if you look at developmental biology, it is clear that the brain is not a blank slate. it is very carefully structured. there are any number of experiments that illustrate this in all kinds of ways -- deprivation studies and various other things. what evolution has done is shape a rough draft, and that rough draft is built to learn specific things about the world. think about ducklings looking for something to imprint on as soon as they are born. our brains are built to learn about people and objects and so forth. what evolution has done is give us a really good toolkit for assimilating the data that we get. you could say that with more and more data and more and more time you might get at the same thing -- maybe -- but we are not very good at replicating a billion years of evolution.
2:37 pm
and that is a lot of trial and error that evolution did. we could try to replicate it with enough cpu or gpu time and enough graduate students and so forth, but there is another approach to engineering, called biomimicry, where you look to nature, see how it solved problems, and try to stay close to the ways in which nature solved them. that is fundamentally what i am suggesting: we should look at how biology, in the form of human brains or other animals' brains -- the baby ibex's brain -- manages to solve problems. not because i want to build a literal identical human -- we don't need to build more people; i have two small people at home, and they are fine. they are great, and they are great learners. we want to build ai systems that take the best of what machines do well, like computing really fast, with the best of what people do, which is being flexible -- being able to read and so forth. then we could do things that no human being can do right now, such as integrating the medical
2:38 pm
literature. there are 7,000 papers, something like that, published every day; there is no doctor who can read them all. it's impossible. for now, machines can't really read them either -- they can do keyword matching -- but if machines could genuinely read, then we could scale them the way we scale computers, and we could revolutionize medicine. i think that to do that we need to build in basic things like time and space and so forth, so that the machines can make sense of what they're reading.
>> how are you thinking about fixing the problem? building these new modules -- what form will they take, and are they going to be the same kind of systems people currently use, or something completely different?
>> i would say we don't have the answers. we've tried to pinpoint the problems, tried to identify the different domains, like space and time and
2:39 pm
causality. those are the ones we have been working with. the second thing i will say is that one of the most fundamental needs is a way to represent knowledge in learning systems. historically, there were things called expert systems that were very good at representing knowledge: if this thing is true, then do this other thing; if these two things are happening, then it's likely that such-and-such is happening. the knowledge looks a little bit like sentences in the english language. then we have deep learning, which is very good at representing correlations between individual pixels and labels and things like that, but very poor at representing that kind of knowledge. what we argue is that we need a synthesis: learning techniques that allow you to be responsive to data in ways that the traditional techniques were not, but that represent what they learn in terms of abstract knowledge, so that you can, for example, teach something by saying "an apple is a kind of fruit" and have it make use of that fact.
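to make the contrast concrete -- this is an illustrative sketch, not anything from the book or the talk -- the expert-system side can be written as a handful of sentence-like triples plus one inference rule, in a few lines of python:

```python
# sketch of explicit, sentence-like knowledge of the kind expert systems
# represent and pixel-to-label deep nets do not. facts are
# (subject, relation, object) triples, readable almost like english.
facts = {
    ("apple", "is_a", "fruit"),
    ("fruit", "is_a", "food"),
    ("food", "can_be", "eaten"),
}

def forward_chain(facts):
    """derive new triples until nothing changes: 'is_a' is transitive,
    and 'can_be' properties are inherited down the 'is_a' taxonomy."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (a, r1, b) in facts:
            for (c, r2, d) in facts:
                if b == c and r1 == "is_a" and r2 == "is_a":
                    new.add((a, "is_a", d))    # is_a is transitive
                if b == c and r1 == "is_a" and r2 == "can_be":
                    new.add((a, "can_be", d))  # properties inherit down is_a
        if not new <= facts:
            facts |= new
            changed = True
    return facts

derived = forward_chain(facts)
print(("apple", "is_a", "food") in derived)     # True: inferred, never stated
print(("apple", "can_be", "eaten") in derived)  # True
```

the point of the toy: telling the system one new sentence ("an apple is a kind of fruit") immediately licenses further conclusions, with no retraining -- exactly the kind of behavior that is hard to get from a model that only represents correlations.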
2:40 pm
that is something we can do a little bit of today, but we really don't have systems with any good way of being taught something explicitly -- like the fact that a wallet occupies physical space, and that a wallet inside a pocket moves with the pocket, so it behaves differently from a wallet that is not inside the pocket. we don't have a way of telling a machine that right now.
>> i'm thinking, if we don't take the deep learning approach and don't learn all of this from data, and we try to build it in a different way -- do we encode things about time today, and then tomorrow space, and so on?
>> i think we need to do all three of those, i'll say that up front. but what i would say is that there is a lot of knowledge that needs to be included, and it doesn't all have to be hand-coded, because
2:41 pm
there are core domains. if you have a framework for representing time, you can represent the things that happen in time; if you don't even know that time exists, correlations between pixels are not going to give you that. and you can do the math: the number of words in the english language that a typical person knows is something like 50,000, and maybe there are ten pieces, or a hundred pieces, of knowledge that go with each of those words. then you are talking about millions of pieces of knowledge -- you are not talking about trillions. it would be a lot of work to encode them all. cyc tried to do it; maybe we can do it in a different way than they did, but it's not an unbounded task. it's just not what people want to do right now. it is so much fun to play around with deep learning and to get good approximations, not
2:42 pm
perfect answers. nobody has the appetite to do it right now, but it could be that there's just no way to get there otherwise. that is my view, and there is a long tradition of nativists saying that the way you get off the ground is to have something innate that allows you to bootstrap the rest. i don't think it's an intractable problem; i think it's a scoping problem. we need to pick the right pieces and bootstrap the rest. if you look at cognitive development, this is true for babies: they have strong basic knowledge in these domains, and they develop more knowledge from there. we could start with the kinds of core knowledge that developmental psychologists have identified, work on those, and see where we are. any more questions?
>> [inaudible]
>> well, if not, thank you very much. [applause] there will be a book signing somewhere right over there.
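a side note on the vocabulary estimate from a few minutes earlier: the "millions, not trillions" figure is just multiplication, and the range can be written out. the 50,000-word vocabulary and the ten-to-a-hundred facts per word are the speaker's rough numbers, not precise ones:

```python
# back-of-envelope version of the estimate in the talk: ~50,000 words in a
# typical english speaker's vocabulary, with roughly 10 to 100 pieces of
# commonsense knowledge attached to each word.
vocabulary_size = 50_000
facts_per_word_low, facts_per_word_high = 10, 100

low = vocabulary_size * facts_per_word_low    # 500,000
high = vocabulary_size * facts_per_word_high  # 5,000,000

# millions of pieces of knowledge, not trillions
print(f"{low:,} to {high:,}")  # prints: 500,000 to 5,000,000
```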