Gary Marcus, "Rebooting AI" (C-SPAN), September 29, 2019, 5:59pm-6:45pm EDT

5:59 pm
...the technology in 1920. And now we have the technology, and we are still debating whether a 12th-grade education is enough. It's obviously not enough, and all of the signs from the economy and the labor market are signs that it is not enough. Unlike our predecessors, who were able to respond to those signs by saying let's educate our young people, we are fighting about it and turning it into questions of identity and politics and partisanship, when clearly there is just a sign that our young people need our support, need our help, and need more education and skills in order to survive in the current economy. A century ago we heard the signs and responded in a rational and collective fashion; now we are hearing them and responding in
6:00 pm
an irrational, selfish fashion. >> "Afterwords" airs Saturdays at 10:00 p.m. and Sundays at 9:00 p.m. eastern on Book TV on C-SPAN2. All previous "Afterwords" episodes are available as podcasts and to watch online at booktv.org. >> Good evening, everyone. Welcome to Harvard Book Store, and thank you for supporting our bookstore. Before we begin: C-SPAN is filming, so please make sure all cell phones are on silent. I want to let you know about a couple of other events. On Tuesday, Stephen Kinzer is going to present his history of the CIA. We also have events next week, and tickets are available for Monday's conversation between Celeste -- and -- and for the talk on Wednesday. Tonight we welcome Gary Marcus, author of "Rebooting AI," which argues that a computer beating a
6:01 pm
human in Jeopardy does not signal that we are on the doorstep of fully autonomous cars. Taking inspiration from the human mind, the book explains what we need to do to advance to the next level, and suggests that if we are wise along the way, we will not need to worry about a future of machine overlords. Finally, a book that tells us what AI is, what it is not, and what it could become if we are ambitious enough: a lucid and deeply informed account. Gary Marcus is the founder and CEO of Robust.AI and was the founder and CEO of Geometric Intelligence. He has published widely in scientific journals and is perhaps the youngest professor emeritus at NYU. [applause] >> This is not what we wanted to see, but we'll see if it will go.
6:02 pm
>> This is not good. Okay, maybe it will be all right. We've had technical difficulties. I'm here to talk about this new book, "Rebooting AI," and some of you may have seen an op-ed I had this weekend in The New York Times called "How to Build Artificial Intelligence We Can Trust." We should all be worried about that question, because people are building artificial intelligence that they don't think we can trust. Artificial intelligence has a trust problem. We are relying on AI more and more, but it hasn't yet earned our confidence. We also suggest there is a hype problem. A lot of AI is overhyped these days, often by people who are prominent in the field. Andrew Ng, one of the pioneers of deep learning, said the typical person can do a mental task with
6:03 pm
less than one second of thought, we can probably automate it using AI now or in the near future. That's a profound claim: anything you can do in a second, we can get AI to do. If it were true, AI would be on the verge of changing everything. It may be true someday, but I'm going to persuade you it's not remotely true now. The trust problem is this: we have things like driverless cars that people think they can trust, yet they should not actually trust, and sometimes they die in the process. This is a picture from a few weeks ago in which a Tesla crashed into a stopped emergency vehicle. That has happened five times in the last five years: a Tesla on Autopilot has crashed into a vehicle stopped on the side of the road. Here's another problem. I'm working in the robot industry, and this robot is a security robot that committed suicide by walking into a puddle.
6:04 pm
So you have Andrew Ng saying machines can do anything a person can do in a second, and yet a person can look at the puddle and say maybe I should not go in there, and the robots can't. We have other kinds of problems, like bias. You can do a Google image search for the word professor, and you get back something like this, where almost all of the professors are white males, even though in the United States only about 40% of professors are white males, and around the world the number is much lower than that. You have systems taking in data without knowing whether the data is good, and they just reflect it back out, perpetuating cultural stereotypes.
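A minimal sketch of that last point, assuming scikit-learn and NumPy; the data is synthetic and the labels are hypothetical stand-ins, not any real search pipeline. A model trained on skewed data simply reflects the skew back:

```python
# Sketch: a model trained on skewed data reproduces the skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Features carry almost no signal; 90% of the training labels are class 1.
X_train = rng.normal(size=(1000, 5))
y_train = (rng.random(1000) < 0.9).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# On fresh, equally ambiguous inputs, the model overwhelmingly predicts
# the majority class: it mirrors its training data, good or bad.
X_new = rng.normal(size=(1000, 5))
print("fraction predicted as majority class:", model.predict(X_new).mean())
```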
6:05 pm
The underlying problem with artificial intelligence is that the techniques people are using are too brittle. Everybody is excited about deep learning, and it's good for a few things. Object recognition: you can get deep learning to recognize that this is a bottle and this is a microphone, and you can get it to recognize my face and distinguish it from Uncle Ted's face. Deep learning can help some with radiology. But it turns out that all of the things it is good at fall into one category of human thought or human intelligence: things where you have to look at something and identify things that look the same or sound the same. That does not mean that one technique is useful for everything. I wrote a critique of deep learning a year and a half ago that you can find online, called "Deep Learning: A Critical Appraisal," and Wired wrote a summary of it, saying there are downsides to deep learning; even though everybody is excited, it's not perfect. First, here is a more realistic counterpart to Andrew Ng's claim. If you are running a business and wanted to use AI, you would need
6:06 pm
to know what AI can do for you. Or if you're thinking about AI ethics and wondering what machines will be able to do soon, I think it's important to realize there are limits on the current systems. If the typical person can do a mental task with less than one second of thought, and we can gather an enormous amount of directly relevant data, then we have a fighting chance to get AI to work, so long as the test data, the things we actually work on, aren't too different from the things we trained the system on, and the system doesn't change much over time, and the problem you're trying to solve doesn't change much over time. This is a recipe for games. What AI is good at is fundamentally things like games. AlphaGo is the best Go player in the world. The system hasn't changed; the domain, the game, hasn't changed in 2,500 years.
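That recipe can be seen in a minimal sketch, assuming scikit-learn and NumPy; the two-Gaussian data here is synthetic and purely illustrative. The model does fine when test data resembles training data and degrades when the distribution shifts:

```python
# Sketch: accuracy holds when test data matches training data, and
# collapses when the world shifts away from what the model was trained on.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two Gaussian classes; `shift` moves the whole test distribution."""
    X0 = rng.normal(loc=0.0 + shift, size=(n, 2))
    X1 = rng.normal(loc=2.0 + shift, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

X_train, y_train = make_data(500)
model = LogisticRegression().fit(X_train, y_train)

X_same, y_same = make_data(500, shift=0.0)        # looks like training data
X_shifted, y_shifted = make_data(500, shift=3.0)  # the world has changed

print("accuracy, same distribution:   ", model.score(X_same, y_same))
print("accuracy, shifted distribution:", model.score(X_shifted, y_shifted))
```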
6:07 pm
You have a perfectly fixed set of rules, and you can gather the data for free: you can have the computer play against different versions of itself, which is what DeepMind did, or you can keep playing and keep gathering more data. Compare that to a robot that does eldercare. You don't want a robot that does eldercare to collect data by trial and error, working some of the time and not working at others. If an eldercare robot works 95% of the time putting grandpa into bed and drops him 5% of the time, you're looking at lawsuits and bankruptcy. That's not going to fly. The AI that would drive an eldercare robot, when it works, is something called a neural network, and it works by taking big data and making statistical approximations. So you label a bunch of pictures of Tiger Woods, a bunch of pictures of golf balls, and a bunch of pictures of Angelina Jolie, and you show it a new picture that is a little different, and it correctly identifies it as Tiger Woods and not Angelina Jolie. This is the sweet spot of deep learning.
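That labeled-classification setting looks roughly like this in code. A minimal sketch, assuming PyTorch, a recent torchvision (0.13 or later for the weights API), and a hypothetical local file "photo.jpg":

```python
# Sketch: a pretrained network maps a new image onto the fixed set of
# categories it was trained on -- nothing more.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(image), dim=1)

# The model can only say which known category fits best; it has no notion
# of what the objects in the photo actually are or what they are for.
top_prob, top_class = probs.topk(1)
print("predicted class index:", top_class.item(),
      "p =", round(top_prob.item(), 3))
```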
6:08 pm
People got excited when deep learning started getting popular; Wired magazine had an article about it. But we've already seen an example of a robot that's not smart. This has been around for several years but not delivered on. There are things that it misses even in perception. On the right are some training examples: you teach the system that these things are elephants. If you show it something like the elephants on the right, you'd say wow, it knows what an elephant is. But if you show it the picture on the left, the response it gives is "person." It mistakes the silhouette of the elephant for a person, and it is not
6:09 pm
able to make sense of the trunk, because the image is unlike what it was trained on. This is extrapolation, and deep learning can't do it. Yet we are trusting deep learning every day. It's getting used in systems that make judgments about whether people should stay in jail, or whether they should get particular jobs, and so forth, and it's quite limited. Here's another example, making the same point with actual cases. This one is identified, with great confidence, as a snowplow. The system cares about things like the texture and the road; it has no idea what a school bus is or what buses are for. It is fundamentally mindless. The thing on the right was made by some people at MIT. If you are a deep learning system, you would say it's an espresso
6:10 pm
because there's foam there. It picks up on the texture of the foam and says it's espresso; it doesn't understand that it's a baseball. Another example: you show it a banana, and you put a sticker in front of the banana, a psychedelic toaster, and because there's more color variation in the sticker, the deep learning system calls it a toaster. A banana with a sticker in front of it is too complicated. All deep learning can do is say which category something belongs to; that's all it does. If you're not worried that this is starting to control society, you're not paying attention. Give me one
6:11 pm
second here to look at my notes. So, I was next going to show you a picture of a parking sign with stickers on it. It would be better if I could show you the actual picture, but presenting slides over the web is not going to work. A parking sign with stickers on it, and the deep learning system calls it a refrigerator filled with food and drinks. It notices colors and textures but does not understand what's going on. Then there's a picture of a dog that's doing a bench press. Something has gone wrong. Thank you for that.
6:12 pm
>> I would need a Mac laptop, and I think I just cannot do it fast. >> I don't think they're going to be able to edit it. Just go on. >> So, a picture of a dog with a barbell, and it's lifting the barbell. The deep learning system can tell you that there is a barbell there and a dog, but it can't tell you: that's weird, how did the dog get so ripped that it could lift the barbell? Current AI is even more out of its depth when it comes to reading. I will read you part of a short story that Laura Ingalls Wilder wrote. It's about a nine-year-old boy who finds a wallet full of money dropped on the street, and his father guesses the wallet might belong to someone named Mr. Thompson. The boy finds Mr. Thompson. Here is what Wilder wrote. Almanzo turned to Mr. Thompson and asked, did you
6:13 pm
lose a pocketbook? Mr. Thompson jumps and slaps his hand to his pocket and says, yes, I have, with $1,500 in it, too. What do you know about it? Almanzo asks, is this it? And he says yes. He opens it and counts the money, counting all the bills twice, and then breathes a sigh of relief and says, that boy didn't steal any of it. When you listen to the story you form a mental image of it; it might be a vivid one or not. But you know enough to answer a lot of questions, like whether the boy had the money stolen or where the money might be. You understand why Mr. Thompson reaches into his pocket looking for the wallet. You know wallets occupy physical space, and that if your wallet is in your pocket you will recognize that. You know these things and can make inferences about how everyday objects work and how people work, and so you can answer questions about
6:14 pm
what's going on. There is no AI system yet that can do that. The closest thing we have is a system called GPT-2, released by OpenAI. Some of you may have heard about OpenAI; it's famous because Elon Musk founded it on the premise that they would give their AI away for free. That was the story, until they made this thing called GPT-2. They said GPT-2 is so dangerous we can't give it away. It was supposedly so dangerous they did not want the world to have it, but people figured out how it worked, made copies of it, and you can use it on the internet. So my collaborator Ernie Davis and I fed the Almanzo story into it. Remember, Almanzo has found the wallet, the guy has counted the money and is now super happy. We fed in the story, and it continues: "It took a lot of time, maybe an hour, for him to get the money from the safe place where he hid it."
6:15 pm
It makes no sense. It is perfectly grammatical, but if the boy found the wallet, what is it doing in a safe place where somebody hid it? The words "safe place" and "wallet" are correlated in a vast database, but that is different from the understanding that little children have.
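An experiment in this spirit can be reproduced with the publicly released GPT-2 weights. A minimal sketch, assuming the Hugging Face `transformers` library; the story text is paraphrased, and sampled continuations will vary from run to run:

```python
# Sketch: feed GPT-2 a story and let it continue the text.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

story = (
    "Almanzo turned to Mr. Thompson and asked, 'Did you lose a pocketbook?' "
    "Mr. Thompson slapped his hand to his pocket, counted the money twice, "
    "and breathed a sigh of relief: 'That boy didn't steal any of it.'"
)

input_ids = tokenizer.encode(story, return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=input_ids.shape[1] + 40,  # continue for roughly 40 tokens
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)

# The continuation is fluent, but the model has no grasp of wallets,
# money, or people -- only word correlations.
print(tokenizer.decode(output[0], skip_special_tokens=True))
```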
6:16 pm
The second half of the talk, which I will do without visuals, is called looking for clues. The first clue, as we develop AI further, is to realize that perception, which is what deep learning does well, is only part of what intelligence does. You may know Howard Gardner's theory of multiple intelligences: there's verbal intelligence, musical intelligence, and so forth. As a cognitive psychologist I would say there are things like common sense and planning, many different components. What we have is one form of intelligence, just one of those components. It's good at the things that fit with that, at certain kinds of gameplaying. It doesn't mean it can do everything else. The way I think about this is that deep learning is a great hammer, and we have a lot of people looking around saying, because I have a hammer, everything must be a nail. Some things do work with that, like Go and so forth, but there has been much less progress on language. There has been exponential progress in how well computers play games, but zero progress in getting them to understand conversations. That's because intelligence itself has different components, and no silver bullet will solve it. The second thing I wanted to say is that there's no substitute for common sense. We need to build common sense into our machines. The picture I wanted to show you is of a robot in a tree with a chainsaw, and it's cutting on the wrong side, if you can picture that, so it's about to fall. Now, this would be very bad, and we would not want to
6:17 pm
solve it with reinforcement learning, trial and error with a fleet of a hundred thousand robots; that would be bad, as they said in Ghostbusters. Then I was going to show you this really cool picture of a yarn feeder, which is like a little bowl for yarn with some string that comes out of a hole, and as soon as I describe it to you, you have enough common sense about how physics works to understand it. Then I was going to show you a picture of an ugly one and say you can recognize this even though it looks totally different, because you get the basic concept. Then I was going to show you a picture of a room with a vacuum cleaner robot, and then a picture of Nutella, and of a dog doing its business, as you might say. The Roomba doesn't know the difference between the two, and then the coup de grace: this has happened not once but many times. Roombas that do not know the difference between Nutella that they should clean up and dog waste have spread the waste all the way
6:18 pm
through people's houses: the Jackson Pollock of artificial intelligence, a common-sense disaster. Then what I really wish I could show you most is my daughter climbing through chairs, sort of like the ones you have now. My daughter was four years old, and you're sitting in chairs where there is a space between the bottom of the chair and the back of the chair. Now, she did not do this with what we call reinforcement learning. I was never able to climb through the chair; I'm a little too big, even if I'm in good shape and exercising a lot. And for those who know her, it's not as if she watched the television show The Dukes of Hazzard and saw someone climb through a window; she'd never seen that. She just invented a goal for herself, and this is the essence of how human children learn things: can I do this? Can I walk on the small ridge on the side of the road? I have two children, five and six and a half, and all day long they
6:19 pm
make up games: what if it were like this, or can I do that? So she tried it and learned it, essentially in one minute. She squeezed through the chair, got a little stuck, did a little problem solving. This is different from collecting data with labels. I would suggest that if AI wants to move forward, we need to take clues from kids about how they do these things. The next thing I was going to do was quote Elizabeth Spelke, who teaches at Harvard, up the street. She has made the argument that if you are born knowing there are objects and places, then you can learn about particular objects, but if you just have pixels and videos, you can't do that. You need a starting point. This is what people call the nativist hypothesis. There is a video of a baby ibex. Nobody ever wants to think that
6:20 pm
humans are built with notions of space and causality, and the argument is that AI should be too; but nobody has a problem thinking animals are built that way, so I show a baby ibex climbing down the side of a mountain a few hours after it's born. Anybody who sees this should see that there is something built into the brain of that baby. There has to be an understanding of three-dimensional geometry from the minute it comes out of the womb. Similarly, it must know something about physics and its own body. It does need to get calibrated and see how strong its legs are, but as soon as it is born it knows a lot. Then there are the robots that fail, a bunch of robots failing at opening doors and falling over. It's sad that I cannot show you this right now, but you get the idea. Current robots are really quite
6:21 pm
ineffective in the real world. The videos I was going to show were of things that had all been practiced in simulation. It was a competition that DARPA ran, and everybody knew what the events were going to be: the robots would open doors and turn dials and things like that, and the teams had done them in computer simulation. When the robots got to the real world, they failed left and right. They could not deal with things like friction, wind, and so forth. So, to sum up: I know a lot of people are worried about AI right now, and you read about robots taking over our jobs and killing us all. There's a line in the book that worrying about all that stuff now would be like someone in the 14th century worrying about highway fatalities, when people would have been better off worrying about hygiene. What we should really be worried about is not some vast future scenario in which AI is much smarter than people and can do whatever it wants, and I can talk more
6:22 pm
about this, but we should be focused on the limits of current AI and how we are already seeing it in things like jobs and jail sentences and so forth. So, on the topic of robot attack, I suggest a few things. The first is to close the door: robots right now cannot actually open doors. There is a competition right now to teach them how to do that. If that doesn't work, lock the door. There's not a competition yet to have them pick locks, and it will be seven or ten years before people work on that, so just lock the door. Or put up a sticker like the ones I showed you; you will completely confuse the robot. Or talk in a foreign accent in a noisy room; the robots don't get any of this. The second thing I wanted to say is that deep learning is a better ladder. It lets us climb to certain heights.
6:23 pm
Just because something is a better ladder doesn't necessarily mean it will get you to the moon. We have a helpful tool here, but we have to discern, as listeners, readers, and so forth, the difference between a little bit of AI and some magical form of AI that has not been invented yet. To close, and then I would love questions: if we want to build machines as smart as people, we need to start by studying small people, human children, and how they are flexible enough to understand the world in a way that AI is not yet able to do. Thank you very much. [applause] Questions? >> I am a retired orthopedic surgeon, and I got out just in time, because they're coming up now with robotic surgery, which is prominent in knee replacements. Do you have information about where that is headed and how
6:24 pm
good it is, et cetera? >> Well, the dream is that the robot can completely do the surgery itself. Right now most of that stuff is an extension of the surgeon. Like any other tool, in order for robots to really be full-service, they need to understand the underlying biology of what they are working on. They need to understand the relation between the different body parts they are working with, and our ability to do that is limited for the reasons I'm talking about. There will be advances in that field, but I would not expect, when we send people to Mars, whenever that is, that we will have a robot surgeon like you have in science fiction. We are nowhere near that. It will happen someday; there is no principled reason why we can't build such things and have machines with better understanding, but we don't have the tools right now to have them absorb medical training. It reminds me of a famous
6:25 pm
experiment in cognitive development in which a chimpanzee was raised in a human environment, and the question was, would it learn language, and the answer was no. If you sent a current robot to medical school, it wouldn't learn diddly squat. >> Other questions? >> Do the current limitations of AI apply to self-driving cars? >> Yes. Self-driving cars are a really interesting test case. It seems like it is logically possible to build them, but empirically the problem you get is outlier cases, and it follows directly from what I was saying: if your training data, the things you teach the model or the system, are too different from what you see in the real world, the system doesn't work well. The case of the tow truck and
6:26 pm
the fire truck that the Teslas keep running into is probably in part because they are mostly trained on ordinary data: they are moving fast on the highway, the car sees something it has not seen before, and it doesn't understand how to respond. I don't know whether driverless cars are ultimately going to prove to be closer to something like chess or Go, or whether it's going to be more like language, which seems completely outside the range. People have been working on them for 30 or 40 years; there is progress, but it's relatively slow progress, and it looks a lot like whack-a-mole. People solve one problem, and it causes another. The first fatality from a driverless car was a Tesla that ran underneath a semi trailer that took a left turn. First of all, the problem was that
6:27 pm
it was outside the training set; it was an unusual case. And second of all, I have been told, I don't have proof of this, but I have been told, that what happened is that the Tesla thought the tractor-trailer was a billboard, and the system had been programmed to ignore billboards, because if it didn't, and billboards were there so often, it would get rear-ended all the time. One problem was solved, and another problem popped up: the game of whack-a-mole. So what has happened so far is that driverless cars are like whack-a-mole. People make a little bit of progress, but they don't solve the general problem, and to my mind we don't have general techniques to solve these problems. People say, I will just use more data, and they get a little bit better. But we need to get a lot better. Right now the Waymo cars need a human intervention about every 12,000 miles. That sounds impressive, but humans only have a fatality about
6:28 pm
every hundred million miles, on average. If you want to get to human level, you have a lot more work to do, and it's just not clear that grinding out the same techniques is going to get us there. This is again the metaphor: having a better ladder will not get you to the moon. >> My question is about machine learning and using it for discovery. I'm an astronomer, so we have started to use it in science. The question is, if you're just doing pattern recognition, you don't learn anything. Do you think we're making progress on having machine learning kinds of programs be able to tell us how they are making decisions, in enough detail to be useful? >> There is interest in that. It may change, but right now there's a tension between techniques that are relatively effective and techniques that produce interpretable results. Right now, for a lot of perceptual problems, if I want to identify whether this
6:29 pm
looks like another asteroid, deep learning is the best at that, and it is about as far from interpretable as you can imagine. People are making incremental progress on interpretability, but there's a trade-off: you get better results when you give up interpretability. There are people worried about this problem, and I have not seen a great solution to it. I don't think it's unsolvable in principle, but right now we are at a moment where the ratio between how well the systems work and how little we understand them is extreme. We will have cases where somebody dies, and somebody will have to tell the parent of a child, the reason your child died is that parameter 317 was a negative number when it should have been positive. It will be completely meaningless. That is where we are right now. Other questions?
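One way to see the trade-off being described, as a minimal sketch assuming scikit-learn. A shallow decision tree can print its reasoning as rules, while a neural network of broadly comparable accuracy cannot; the dataset is a stock demo set, not a real scientific or diagnostic system:

```python
# Sketch: interpretable rules vs. an opaque network on the same task.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
net = make_pipeline(StandardScaler(),
                    MLPClassifier(max_iter=1000, random_state=0)).fit(X_tr, y_tr)

print("tree accuracy:", tree.score(X_te, y_te))
print("net accuracy: ", net.score(X_te, y_te))

# The tree's decisions print as human-readable if/then rules; the
# network's thousands of weights offer no comparable story.
print(export_text(tree))
```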
6:30 pm
>> Your thoughts on healthcare diagnostics, and also on the fact that we can't afford to have any misdiagnosis? >> I guess that's three different questions, and I may forget one. The first is, can you use this for medical diagnosis, and the answer is yes, but it relates to the last, which is how important a misdiagnosis is. The more important it is, the less we can rely on these techniques. It's also the case that human doctors are not completely reliable. The advantage machines have, for something like radiology in particular, is that they're good at patterns, at least in careful laboratory conditions. Nobody really has, as far as I know, a working real-world system that does radiology in general; they're more like demonstrations that can recognize this particular pattern. In principle deep learning has an advantage over people there. But it also has a disadvantage:
6:31 pm
it can't read the medical charts. There's unstructured text, doctors' notes and stuff like that, which is written in English rather than being a bitmapped picture of a chart, and machines can't read that stuff at all, or they can maybe recognize keywords a little bit. A really good radiologist is like a detective, sort of like Sherlock Holmes: well, I see this asymmetry here that shouldn't be there, and it relates to this accident the patient had 20 years ago, and he tries to put together the pieces in order to have an interpretation or a story about what's going on. Current techniques don't do that. Again, I'm not saying it's impossible, but it's not going to roll out next week. So the first cases of AI really having an impact on medicine are going to be radiology that you can do on a cell phone, where you don't have a radiologist available. In countries where there are not
6:32 pm
enough doctors, the systems may not be perfect, but you can try to reduce the false alarms to some degree and get decent results where you couldn't get results at all. We will start to see that. Pathology will take longer, because we don't have the data; radiology has been digital for a while. Then there are things like, if you ever watched the television show House, where you're trying to put together some complex diagnosis of a rare disease, and systems are not going to be able to do that for a long time. IBM made an attempt at that with Watson, but it wasn't very good; it missed heart disease when it was obvious to a first-year medical student. And there it goes back to the difference between having a lot of data and having understanding. If you're just doing correlation and not understanding the underlying medicine, then you can't really do that much. We just don't have the tools yet to do really high-quality medical diagnosis; that's a ways
6:33 pm
off. >> Thank you for coming. I'm working now as a data analyst and data scientist, part of a small team in my organization, and part of what I'm working on is scoping which of our tasks automation using machine learning would be helpful for, versus harder tasks like forecasting, or solving wider problems that are, like you were saying, not solvable right now with current methods. How would you explain, or get across, the idea of the distinction between bounded versus unbounded problems? >> I think the fundamental difference is that some problems are a closed world.
6:34 pm
They are limited; the possibilities are limited. The more limited the world is, the more current techniques can handle it. Others are open-ended, where they can involve arbitrary knowledge or unusual cases. Driving is interesting because in some ways it's closed: you only drive on the roads, if we're talking about ordinary driving circumstances. But it's open-ended because there could be a police officer with a hand-lettered sign saying this bridge is out, and so on; there are so many possibilities that in that way it's open-ended. What you end up finding is that the driverless cars work well in the stuff that is closed, where there's a lot of conventional data, and they work very poorly when they're forced to go outside their comfort zone. These systems have a comfort zone, and when they go outside it, they don't work that well. >> You made a point about how
6:35 pm
machines learn from data and humans don't. Do you think that humans have an advantage because of evolution? >> A billion years of evolution. >> Could the problem be that maybe we are using data the wrong way, or not using enough data? >> I don't see it that way. I see it as: a billion years of evolution built the genome, and with it a rough draft of the brain, and if you look at developmental biology it is clear that the brain is not a blank slate. It's carefully structured. We don't understand all of it, but there are a number of experiments that illustrate it; you can do deprivation experiments where animals don't have any exposure to the environment. What evolution has done is shape a rough draft of the brain, not a final brain, one that is built to learn specific things about the world. Think about ducklings looking for something to imprint on the moment they are born.
6:36 pm
Our brains are built to learn about people and objects and so forth. What evolution has done is give us a good toolkit for assimilating the data that we get. You could ask, if I just bought more and more data and compute time, could I get the same thing? Maybe, but we are not very good at replicating a billion years of evolution. That is a lot of trial and error that evolution did. We could try to replicate it with enough CPU or GPU time, enough graduate students and so forth, but there is another approach to engineering, in which you look to nature and how it solves problems and try to take clues from the way nature solves them. That is fundamentally what I'm suggesting: we should look at how biology, in the form of human brains and other animals' brains, manages to solve problems. Not because we want to build literal replicas of people; we don't need to build more people.
6:37 pm
I have two small people, and they are great, and cuter and nicer. We want to build AI systems that take the best of what machines do, which is to compute fast, with the best of what people do, which is to be flexible. Then we can do things like solve problems that no human being can solve. Seven thousand papers are published every day; no doctor can read them all. It's impossible for humans, and right now machines can't read at all. But if we built machines that could read, and we could scale them the way we can scale computers, then we could revolutionize medicine. To do that, we need to build in basic things like time and space and so forth, so that the machines can make sense of what they read. Other questions? >> In how you are thinking about fixing the problem, building these new modules, what
6:38 pm
form will they take? Are they going to use the same structure that deep learning currently uses, or something completely different? >> The first thing I would say is that we don't have the answers, but we try to pinpoint the problems. We try to identify, in different domains like space, time, and causality, where the current systems work and where they don't. The second thing I would say is that the most fundamental need is for ways of representing knowledge in our learning systems. There is a history of things called expert systems that are good at representing knowledge: if this is true, then do this other thing; it is likely that such-and-such is happening. The knowledge looks a little bit like sentences in the English language. Then we have deep learning, which is good at representing correlations between pixels and labels but very poor at representing that kind of knowledge. What we argue is that we need a synthesis of the two: learning techniques that allow
6:39 pm
you to be responsive to data in ways that the traditional techniques were not, and knowledge representation that learning can work with, so that you can, for example, teach the system something by saying an apple is a kind of fruit and have it understand that. We have systems that can do a tiny bit of this, but we don't really have systems where we have any way of teaching something explicitly, like: wallets occupy physical space, and a wallet inside a pocket is going to feel different than a wallet not inside a pocket. We don't have a way of even telling a machine that right now.
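A minimal sketch of what "teaching a system that an apple is a kind of fruit" might look like with explicitly represented knowledge; everything here, the names and the facts, is illustrative, not any real system's API:

```python
# Sketch: explicit taxonomic facts plus a tiny inference step.
is_a = {}  # child category -> parent category

def teach(child, parent):
    """Record an explicit 'X is a kind of Y' fact."""
    is_a[child] = parent

def is_kind_of(thing, category):
    """Walk up the taxonomy to answer 'is X a kind of Y?'."""
    while thing is not None:
        if thing == category:
            return True
        thing = is_a.get(thing)
    return False

teach("apple", "fruit")
teach("fruit", "food")

print(is_kind_of("apple", "food"))  # True: apple -> fruit -> food
print(is_kind_of("apple", "tool"))  # False
```

The point of the sketch is that a single taught sentence licenses inferences the system was never shown directly, which is exactly what correlation over pixels and labels does not give you.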
6:40 pm
>> I'm thinking, if we don't take the deep learning approach and don't learn all this from data, and we try to incorporate some of this knowledge by hand, then how do we avoid playing a different game of whack-a-mole, where we impose some knowledge about time and tomorrow we realize we also need knowledge about space? >> I think we need to do all three. What I would say is that there is a lot of knowledge that needs to be encoded, but it doesn't all have to be hand-encoded. We can build learning systems. But there are some core domains, an idea I borrowed, that enable other things. If you have a framework for representing time, you can represent things that happen in time. If you don't know that time exists and just see correlations between pixels, that is not going to give you that. One way to estimate the scale is to think about the number of words in the English language that a typical native speaker knows; it's something like 50,000 if they have a big vocabulary. If there are maybe ten pieces, or 100 pieces, of common sense that go with each of those words, then you're talking about millions of pieces of knowledge, but not trillions of pieces of knowledge.
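The back-of-the-envelope arithmetic behind that estimate, written out:

```python
# Sketch: the rough scale of hand-encodable common sense.
vocabulary = 50_000         # words a native speaker with a big vocabulary knows
facts_per_word = (10, 100)  # rough range of common-sense facts per word

low, high = (vocabulary * f for f in facts_per_word)
print(f"{low:,} to {high:,} pieces of knowledge")  # 500,000 to 5,000,000
```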
6:41 pm
It would be a lot of work to encode them all. Maybe we can try to do it in a different way, but it's not an unbounded task. It's the kind of work people don't want to do, because it is so much fun to play around with deep learning and get good approximations, not perfect answers, so nobody has the appetite to do it right now. But it could be that there is no way to get there otherwise. And, I mean, that's my view, and there's a long tradition of nativism saying that the way you get into the game is that you have something that allows you to bootstrap the rest. I don't think it's a whack-a-mole problem; I think it's a scoping problem, where we need to pick the right core domains. If you look at the cognitive development literature, this is true for babies: they start with core knowledge, and then they develop more knowledge. I think we could at least start with the things the developmentalists have identified, work on those, and
6:42 pm
see where we are. Other questions? Thank you all very much. [applause] >> I think we'll have a book signing if anyone would like. >> Thank you for coming out. We have copies of the book at the front counter. >> Book TV recently spoke to Republican Congressman Steve Scalise of Louisiana about his recovery from gunshot wounds he suffered during a congressional baseball practice in 2017. >> It was a pretty powerful weapon, obviously, this 7.62-caliber bullet that hit me. You could take down a bear with that, and when I saw the size of the bullet later, when they showed it to me, I said, how am I alive? It really does make you wonder. But again, there were a lot of miracles that day, and that is one I detail in the book. Whatever your faith is, I have a strong faith, and it helped get me through it.
6:43 pm
But I chronicle some very specific things that happened on the ball field. Even if you don't have the same faith, you might say the first one or two were a coincidence, but by the time you get to the fifth and the sixth one, clearly there was a larger presence on that ballfield, and God did take care of us. One of those is that Brad Wenstrup usually did not stay until the end of practice. He normally has a meeting around 8 o'clock, so he leaves closer to seven to go shower, get ready, and go to the office. That morning his meeting was canceled, and he decided to stay for extra batting practice. He was in the batting cage on the first-base line. The shooter was hiding behind the third-base dugout. Brad was out of the line of fire, but he could see everything happening. His skills took over, and he knew as soon as the shooter went down that he had to come check on me, to see what had happened and whether he could do something to help me. Again, I would not be here if he was not there that day, but most days he would not have been
6:44 pm
there, because his schedule would have brought him somewhere else. One of the many miracles. >> Steve Scalise's book is called "Back in the Game." You can watch the rest of his interview by visiting our website at booktv.org. Type his name or the title of the book in the search box at the top of the page. >> Prime time starts now here on Book TV. First, you will hear from Mary Gray on workforce issues faced by tech companies like Amazon, Google, and Uber. Then, Andrew Pollack, the father of a student killed in the school shooting in Parkland, Florida, offers his thoughts on school safety and guns. That's followed by our author interview program, "Afterwords," with Paul Tough, who reports on the cost of college education. At 10:00 p.m., Gordon Chang weighs in on the dangers that South Korea may face from North Korea.
