tv Cade Metz Genius Makers CSPAN April 17, 2021 4:29pm-5:59pm EDT
4:30 pm
>> you're watching booktv on c-span2, with top nonfiction books and authors every weekend. booktv, television for serious readers. >> today we continue our ongoing focus on a.i. and innovation through a fresh exploration of a.i.'s history, the issues for today, the choices we face as digital citizens, and the future implications of all of it. before a.i. began to change our world for better or worse, a group of academics spent decades trying to build neural networks in the face of skepticism. they were pulled into a world they didn't expect along with the rest of us. in his book, the mavericks who brought a.i.
4:31 pm
to google, facebook and the world, "new york times" reporter cade metz uncovers the tale. it's a dramatic telling of the history. he's here today to explain those stories and address questions such as: the untold story that reveals the tenacity of the key people, communities and companies shaping the evolution of a.i.; what is driving the conflict between national interests and shareholder value, the pursuit of tech innovation and the human concerns about privacy, security and prejudice; how are decisions being made and who is benefiting. it's my pleasure to welcome cade. cade draws on his rich experience as a senior staff writer with wired magazine and the u.s. editor of the register, the british science and technology news site. he's based in the san francisco bay area and is a technology
4:32 pm
correspondent with the "new york times" and covers a.i., driverless cars, robotics, virtual reality and other emerging areas. here are his five numbers: nine years covering a.i., $44 million that google paid for jeff hinton, 37, 80, photos of a black woman tagged as gorilla on google, and one book telling the story, his book "genius makers." welcome, cade. it's my pleasure to welcome, for the first time, marguerite. she wrote for
4:33 pm
wired and kaiser health news and holds a ph.d in neuropsychology from columbia university. welcome. >> so glad to have you both here. we look forward to your conversation. take it away. >> it's great to see you again. it's been, gosh, maybe 16-18 months since we last hung out in san francisco. it's nice to reconnect over a.i., which is near and dear to our hearts. so, you write in the book that this is a story that is fundamentally about people. people who were toiling in labs, in obscurity, maligned and mocked for their ideas. what captivated you the most about this cast of characters? >> well, there were two moments i think that really inspired
4:34 pm
the book. the first was when i was in the four seasons hotel in seoul, south korea. marguerite, with those five numbers, alluded to this event, where a lab called deepmind, based in london, which had been bought by google, built this machine to play the ancient game of "go," what they call the eastern version of chess except it's exponentially more complicated. most people, whether they were "go" players or a.i. experts, thought a machine that could beat the best players of go was still decades away, but in 2016 this machine built by deepmind beat one of the world's top players. i was lucky enough to be at the four seasons hotel in seoul, and there was a moment when i realized that the people who had built this machine, who had spent years kind of cultivating the ideas behind it, and building
4:35 pm
this machine, could not understand what the machine was doing. they, like everyone else, were amazed and in some ways confused and caught unawares by this machine, and these individual people, including the leader of the lab, became the focus of this book. i knew that demis would be a character, but then i met jeff hinton, who is a generation older than demis, and he became the central thread. he in his own way is a fascinating character and worked on many of the same ideas as demis and demis' colleagues, for decades, and for anyone who knows jeff, he is a fascinating, engaging, strange character who
4:36 pm
has endured serious hardship over the years, among other things, and i thought to myself, if i can just get jeff on to the page, maybe this book will work. >> i actually wanted to ask you about a moment with jeff, who as you say is the connector of this group. he is almost like the protagonist of your book. about halfway through, during the chapter that delves into the medical uses of a.i., there's a small scene in which jeff lets out a small sigh and says early diagnosis is not a trivial problem, that we can do better, and why not let machines help us? and you write that this is specifically really important to him because of his wife's experience with pancreatic cancer. that moment comes back at the end of the book where he is at a
4:37 pm
party being celebrated, but you see him immersed in his note cards for a thank-you talk. can you talk about that moment? >> i love that you pinpoint those two very human moments, and you see the emotion that jeff shows in the sigh he lets out, and then, when he receives the award, instead of talking about himself he talks about his wife. those are the kind of moments that i really wanted to capture, and the reason those particular moments are so important is that they show the personal struggle that jeff faced to bring these ideas to the fore. it's just part of his personal struggle, but certainly the most powerful part. i think particularly the end, when he talks about his wife. this is someone who had two wives, by the way, die of cancer, and at the same time
4:38 pm
experienced his own physical hardship. as you learn in the first sentence of the book, he does not sit down because of a back problem, and this plays into the story as well: in the face of this extreme back problem he has to make these pilgrimages across america and the atlantic to realize these ideas. those are the types of moments that i feel show not only what he faces but what people like him often face when they're trying to realize their work. there are technical problems that need to be overcome, and there are personal obstacles that have to be overcome as well. >> the humanity in some sections of the book just jumped off the page, a sense of jeff being really loving but sometimes
4:39 pm
struggled, and told a lot of jokes -- humor as almost a deflective mechanism. others were a little more aggressive or adamant about their ideas, in ways that captured a little bit of their insecurity. it just reminded everybody that even though they're mavericks, there's a humanity there. and you do talk to other people in the book. how did their lives, their loves, dreams, opportunities, and obstacles affect the way they approached their work? >> what i loved is that it was different for each person. we're talking about essentially a single idea, this idea of a neural network, which dates back to the '50s, and all these different people trying to push the idea further, and their
4:40 pm
motivations and the way they dealt with it were so different, because they are humans. we're all different, and those personalities come out in so many different ways. you're right, sometimes it's humor. jeff is an incredibly funny person as well, and i like how that humor would attract those around him as he is struggling to get this idea out. he needs the help of others, and you're right, humor is not only entertaining but it's a way of convincing people to work on this project. as i got to know jeff, and more importantly as i interviewed his students, who knew him well, you saw him as this magnet, often through the humor but for other reasons too, who would draw people to him and help him
4:41 pm
realize this idea. other people pushed this idea forward in other ways. some people are so adamant about their idea and so upset by the obstacles that they really lash out, and you see that as well. that can work in some ways, and it can backfire in others. what i wanted to do is show all of that, and also how, as these folks pushed this idea forward, it would behave in ways they didn't expect, especially as it moved into these companies, and surprise even them, and they reached these moments where they don't necessarily know what to do. that is part of it as well. >> what surprised people like jeff, like yann, like andrew
4:42 pm
about the way that their technology caught on but also was let loose for the rest of us to experience? >> the great example -- we keep going back to hinton. the great example is that hinton, who was born in london, originally made his way to the united states and started to explore this neural network idea, first on the west coast, and then as a professor at carnegie mellon in pittsburgh. there's a moment in the mid-80s where jeff realizes he cannot do this a.i. work without taking money from reagan's defense department, and that's not something he wants to do. this is at the height of the iran-contra affair. he and his wife have strong beliefs and don't want this work to be used inside the military.
4:43 pm
so, they actually leave the country. he believes in this stance to the point where he goes to canada and sets up shop at the university of toronto as a professor, and it would have real implications for the whole field years on, when this idea finally started to work: most of the talent was centered around jeff and others in canada and not in the u.s. but as that idea starts to work, and jeff is sucked into google, in short order google starts to work with the defense department, and there's a big protest at the company, with some people who believe this is absolutely the right thing for google to do and others who are really upset and didn't expect to be working for a defense contractor. and there's a moment where jeff himself struggles with this. he was against it, but in working
4:44 pm
for this company he wasn't sure how much he should speak out. he ended up lobbying one of the founders, sergey brin, to push back, but he wasn't as public with his concerns as some others, and there's a moment where a google employee criticizes him for that. again you feel the humanity of the situation. all of us can relate to having our own beliefs on one side and then the motivations of the company we work for, which supplies our paycheck, on the other. how do you balance those? it's a hard thing. >> i want to go back to the topic of the military. a.i. has a long history of being funded by the military. can you talk about that? it goes back to before deep learning. >> i think this is important for
4:45 pm
silicon valley at large to remember. we're at this moment where we often see news stories that say silicon valley is opposed to working with the military, and there's certainly a portion of the valley that believes that. but the valley in many ways was built on military momentum. google was built in part on defense department funding. the internet came out of a dod project. one of the founders of hp worked in nixon's administration at the defense department. and so there's this mixed history. yes, silicon valley is largely a liberal place, and there have been times, particularly in recent years, where there have been protests against that kind of work, but that is only one side of the equation. i think it's a good way of thinking about this technology that i write about, not only when it comes to military use but all sorts of
4:46 pm
uses -- what are called dual uses of this technology. it can be used for good or ill, or in many ways you might not expect, and a lot of it is about point of view, and all of us struggling to figure out what is right and what is wrong. it's not always black and white. >> that's really important here, too, right? and i think going back to that first moment you mentioned earlier in our conversation, with demis in korea, where there's this national excitement about "go," which is like chess, and like other games where a.i. is trained and where it beats us. they're violent games, war games. why are those the games of choice? >> well, it's interesting.
4:47 pm
they're the games of choice because they're hard. it's as simple as that. they're chosen for technical reasons, chosen for historical reasons, and it's about the conquest, right? that's a real thing. and you can see this in demis. in the book, he himself is fundamentally a game player, and games are about competition. they're metaphors, often, for war, and the intensity of demis' ambition is palpable. he wants to win. that drives him as well as the technological aspects of this. but the other thing is, games are something that we all relate to,
4:48 pm
and there's a reason that people woke up to the gains of a.i. when that match happened. all of us played games as children and we can understand the way games work, a winner and a loser, and the idea of a machine beating us is not only interesting but it is scary. and you talk about the excitement in korea. it was palpable. what i often say, because it's true, is that it was one of the most amazing weeks i have experienced, and i was not a participant, just an observer. you could feel the excitement of an entire country, because "go" is a national game in korea, and you could feel the sadness when the korean player, a very human player, was getting beaten. you could feel the sadness envelop the country, and that fear, and the concern. it really brought out those
4:49 pm
emotions, and that is why that was such an inflection point. you're right, there's a dark side to this as well as a light side. >> one thing i found interesting about the stories, the people's stories, is this quest to improve upon our humanity, almost like we're an outdated pc. where does that come from? >> i mean, my father was an engineer, and we talked about this a lot. it's very easy when you're focused on the technology to see it as somehow separate from humanity. it's an attitude you see in silicon valley often, that the technology is sort of boxed off from everything else that is happening, and when you do that it's easy to see it only for the
4:50 pm
positive things it will bring and not see all the other consequences and effects of it. i think what we really need to do -- and this is part of what i wanted to do with the second half of the book in particular -- is show that any technology is bigger than itself. it is about how it weaves into the rest of our lives. anyone who has lived through the last four years can recognize that. facebook is not a complicated technology, but it has huge effects because we all use it. these technologies i write about in the book are far more complex, in part because -- and this gets me back to your first question -- we don't really understand how these things are operating in some
4:51 pm
cases. they learn skills on their own by analyzing data, more data than we can wrap our heads around. that means they'll do things we don't expect. as the technologies get more powerful and pervasive, their effect on our world will be greater than these relatively simple technologies we use today. >> going deep into the future here for an audience question: based on what you've learned, do you think machines will have greater than human general intelligence? >> the book goes into this. and what i do want to do in the book, as well as in a conversation like this, is make a clear distinction between the technologies of today and this idea that we're going to have a machine that can do anything the human brain can do. scientists call it agi,
4:52 pm
artificial general intelligence. that is still something we do not know how to get to. we have two labs now who say this is their stated mission. deepmind is one, in london; openai is another, in san francisco. their charters say: we are building this. but they, including demis, who is in this camp, don't necessarily know how to get there. it's aspirational. it is in the future, and because of that it is really hard to say when we might get there. but this is certainly not a near term thing. we have systems that can learn specific skills. we're talking about a machine that can recognize the words you say, or recognize faces and other objects in photos, and that kind of image recognition can help us build self-driving cars and other forms of robotics that can
4:53 pm
respond to some situations. it can help in health care, an area you know well. but that's different from a machine that can really reason, and that's one of the reasons self-driving cars are not on the road yet: they can't deal with the chaos and uncertainty that we as humans can deal with. so, i don't like to make a call on this, because it's impossible, right? it is in the future, and people are going to argue about it. it can be fun to argue about when that's going to happen. it's not going to happen soon. >> the deepmind lab is chock-full of neuroscientists. some went to school with me. and i think they would be among the first to tell you that we, as a neuroscience field, don't understand intelligence, real intelligence. how do you create something that you don't understand? >> this is why i always liked
4:54 pm
working with you and your staff. this is the key thing to understand, right? and it might not be obvious when you read stories about a.i.: we do not understand how the brain works. therefore, recreating the brain is, from the get-go, a task that is harder than you might think. the brain is such a mysterious thing. but what you do see is that it provides a certain inspiration to the field, and you see this in jeff hinton. he was inspired, as a student at cambridge, by the idea that you could recreate the brain, and in nurturing this idea of a neural network, he is really driven by this notion that he can take at least the way the brain works
4:55 pm
and apply that to machines. demis is taking that to another level. he has a team of neuroscientists who study the way the brain works and how that might be recreated. some people like demis and jeff see this as kind of a virtuous circle: as you better understand the way the brain works, that can help you build machines that work like it, and as the machines improve and you figure out ways that they can mimic human behavior, that can help you in understanding the brain. that works sometimes, but not always. >> another question from our audience. kate says: you describe modern social media as simpler than deep learning based a.i., but modern social media incorporate enormous amounts of a.i. and have done so for a long time.
4:56 pm
can you talk about the impact of a.i. on communication and the online world? >> absolutely. this is another problem with the a.i. field: you have to define your terms. a.i. is a term that dates back to the '50s and is applied to everything. what i meant by simpler is that the algorithms that are choosing your social media feed are simple compared to the neural network style systems, and the reason those are more complicated is that they learn their tasks in this really intense way. the example i always use to set the conversation: you take thousands of cat photos and you feed them into this neural network, just a mathematical system, and the network analyzes the photos
4:57 pm
and looks for patterns that define a cat. so it learns how to identify a cat, but we as humans can't really understand everything it is learning, and that's one thing when it's a cat photo. now think about medical images. these systems learn the same way when it comes to medical images. we don't see the flaws in those systems. we don't know the mistakes they're learning. now apply that to the internet, which the person who asked the question alluded to. these systems learn from everything we post to the internet, all the text, books, wikipedia, and we know the internet can be biased against women and people of color, and there is hate speech on the internet, and these systems that are learning language, which help drive the google search engine,
4:58 pm
are being used in chatbot design to carry on a conversation. they're learning the biases and the flaws and, by the way, other things we may not see, because of the way the systems are built. >> you alluded to the bias that these systems might be escalating or promoting at scale. about two-thirds of the way into the book we come to the chapter that deals with these very issues, and you have a quote from meg mitchell, from an interview with bloomberg, warning about the sea of dudes problem. can you talk about that? >> absolutely. that quote is interesting on many levels, one of which is that it came
4:59 pm
so early. another scene in that chapter, alluded to at the top of this call, is that moment where google identifies photos of a black woman, posted by a software engineer in brooklyn, as gorillas. that's 2015. we are still struggling to deal with that issue, years after meg said that in the pages of bloomberg, years after that incident, even though you have people like meg and timnit and so many others who have not only noticed this problem but really called attention to it. the fundamental thing to realize is that it's endemic to the technology. it has to work in this way to work the way the large companies want it to. it requires enormous amounts of data. and what that means is you can't
5:00 pm
just remove those flaws. you can try to start over, but how do you get data that doesn't have those flaws in it? it's such a difficult thing, and it exemplifies this moment we're going through now, not only at google and microsoft but the tech industry as a whole, where it's struggling to deal with that conundrum. ... >> it is really thoughtful, because in a way the machines are just an extension of us, coming from our own biases, all of
5:01 pm
us. are people starting to think more about that? since that point, probably even before that, others were speaking out about it. >> so the good news is they are thinking about it more, but there are these other forces. what is so interesting to me is you do have this moment in my book where meg and timnit really call attention to this. they're both hired by google, and led by meg they create this ethical ai team at google which is designed to fight this problem. recently, since my book was put to bed, they have both been ousted from google. and i have written about this in the pages of the times. so you have, on the one side, new people calling attention to this, and a lot of people taking notice, inside companies as well as out.
5:02 pm
you have these other corporate forces in various ways pushing against it, right? these companies have their own aims. companies are complicated. and they're driven by the profit motive, among other things, and it often comes into conflict with these other efforts. >> i find that so interesting. those networks were [inaudible] before they were adopted by amazon and google; their ideas are aligned. >> you are exactly right. it's interesting, you see on twitter, for instance, on a daily basis, people accusing someone like timnit of being an activist, which really raises my
5:03 pm
hackles, right? you see the same criticism: these neural network pioneers were sort of in the wilderness saying this technology is going to work when most of the industry thinks it won't, right? you could deride them as activists. these are all scientists. it makes a great analogy: in the face of pushback from all of these people around her, she is saying you need to pay attention to this. and she is not alone. >> why do you think the industry is having a hard time with this? >> change is hard, for one thing. the other thing is companies are designed in certain ways. one of the interesting things in the book is these companies develop
5:04 pm
individual personalities, almost. they respond to situations differently, formed by their history. but also, companies are designed to promote themselves, right? and to tell the world that what they are doing is positive. even if on one level they acknowledge the problems of the world, they do not want the problems to be on them. that's the heart of the clash between, say, timnit and google as they tried to publish a paper that called attention to some of these problems. well, google did not want google's name on that, right? that's part of the issue there. and again, this is not just a google problem. this is something that all of these companies are going to have to deal with. >> this is an audience question. is there a way to correct the biases in ai?
5:05 pm
>> guest: it's very, very hard. if you take these systems, these natural language systems, they literally spend months analyzing text from the internet -- months churning through thousands of books and articles and blog posts and everything. it works because of volume, right? it works because you throw as much data at it as you can. what that means is you cannot just weed out all of the stuff that might be problematic. and the industry is still struggling with how do we deal with that. right now you have to put band-aids on after the fact. you have to put filters on it. that is not always ideal. people have talked about: can we develop synthetic data, so to speak, that can train these in other ways? that is still an open question as well. technically it is such a hard
5:06 pm
problem. >> there is a theme in the book, at the startup clarifai, where a woman is trying to understand why the data does not work. can you walk us through that scene? >> this is deb, who was at the clarifai startup in new york. they were trying to build an image recognition system and a content moderation system. she herself is a black woman from ottawa, and so to her the problem is obvious: the company is taking these stock photos, which have been floating around the internet for ages, and using them to train the system, while almost all of the photos, or the majority of the photos,
5:07 pm
were of white men, right? so when she sees that, it is obvious to her that that is a problem. but it's not necessarily obvious to others, right? that is part of what we are dealing with here. the question of diversity in the tech industry has obviously been a problem for a while. in this field there is an added level to that problem, right? in that we have people who are choosing the data, right? white men have a certain perspective on the world, and that's going to inform the data they choose. that's one of the reasons we need a diverse population of people working on this, so they can see these issues the way it was completely obvious to her. >> you mentioned that timnit and meg were opposite people.
5:08 pm
there is a different scenario there around that project. can you talk about that? and also what struck me was the people that got let go in the wake of that. >> guest: it is true. it's the military project i mentioned earlier with jeff hinton: google started working with the department of defense on image recognition for drone footage. that can have a lot of uses, including surveillance as well as, potentially, autonomous weapons. it's a way to identify targets. that was a real concern to people. i think there are a lot of echoes here with the situation around bias, right? so many of the people who protested that, many of whom, you are right, were women, are no longer with the company,
5:09 pm
right? so, this is the way a company's personality sometimes exerts itself. it's a company where, at least in the early years, the employees were encouraged to speak their minds, right? to push back when they wanted to push back. you've seen these protests at google bubble over into the public sphere in ways they have not at other companies. but google is also a public company, and they have really pushed back against the protests. although they did pull out of the project, they ended up pushing out a lot of the people who protested it. there is a pattern here in this respect, even though that is a very different situation.
5:10 pm
>> [inaudible] what should be the government's role in bringing ai to society? >> i do think these are issues we have to deal with as a society as a whole. that means tech companies, that means individuals. but that means government as well. and we are starting to see government at least wake up to these ideas. even the dod has laid down what they call ethical guidelines for the use of this type of technology. but we need to keep thinking about this, right? i need to keep writing about it, and you do too. as a society we need to keep looking at what is really happening. it's easy to say -- and we've seen this from government agencies -- it's easy to say we are thinking about it, right? it's easy to say we have a framework in place.
5:11 pm
how much teeth does that framework have? how much is it really going to do? and when push comes to shove, is it really going to affect things? these are all questions that we all have to deal with, and they are only going to become more important. >> host: how do you audit an ai system where, as you mentioned earlier, we don't understand what goes in and what comes out? >> guest: there's a lot of disagreement on this. part of the problem is, when it comes to the bias problem, we do not have the data we need to really audit. ultimately, at this point, what it is about is really testing the system, right? seeing where it works and where it doesn't. that is not always the way internet technology has worked, right? it's more about getting it out to the world and patching it
5:12 pm
after the fact. but the way these systems are built now, the way to really audit is to really test. to see where it goes wrong and to fix those flaws. you know, that is a hard thing to do, but that's really what needs to be done nowadays. >> host: what is the importance of model interpretability? and do you see this in the future? >> guest: it's a very interesting field. this is the idea: we've got this giant network that trains on all this internet data; can we develop tools that allow us to really understand what it has learned? again, that is a really interesting area. you see this in academia, and in industry; a lot of people are working on this stuff, and there are some things that can be learned. at the same time, these networks are getting bigger,
5:13 pm
and bigger, and bigger, taking in more data, and it is becoming harder to understand what they are learning. i do not know how you ever get there. fundamentally, these machines are learning at a scale that we as humans cannot. that is just the reality of it. that's why these machines are powerful. they can learn from more data than we could ever learn from. they can learn skills to a level where we could never hardcode all that behavior in as engineers, right? so although interesting, that is not something that's going to pay off anytime soon, if ever, right? we really need to realize that. >> you mentioned ai and the way we learn. a baby will learn words, how
5:14 pm
to handle something with their hands, or to walk. but they're falling all over themselves all the time. >> guest: you are exactly right. there are some ways machines are superior to us, and they have been for a long time, and we keep developing new ways they are superior. but these are niche areas. the machine is not good at so many things that we are still good at. so, yes, a machine can analyze thousands of cat photos, right? and pinpoint all of those patterns we could never define. but they are not good at reasoning, and they are not good at just picking things up in a moment, like even a baby can. that is a great point. >> host: human intelligence has a lot to do with emotions. sometimes what we learn, how we learn, how fast we acquire that knowledge or
5:15 pm
forget it fully depends on human emotions. And so, to an audience question: human intelligence includes a drive toward a big sense of compassion. Are there artificial machine equivalents to emotion, whether intended or not? And then, adding on one of my own: how would that help with the quest for better artificial intelligence? >> guest: What I often say is we tend to see emotion in machines sometimes, right? They will exhibit a little piece of behavior that elicits something in us, that reminds us of what we see in humans. We tend to project those emotions onto machines, but the machines do not feel that, so to speak. That is among their many other
flaws. There are efforts to kind of re-create that, but that is also hard to do. What I will say, though, is I think we need to understand the way these machines are affected by our own emotions, right? Again, it is not just building technology for technology's sake. We need to think about how it affects us emotionally, how it affects us historically. There are so many things to consider when it comes to building these machines. Would it help if they had emotions? I do not know; that could create its own problems, and probably will. >> host: How has your own relationship with technology changed since you have been covering this? >> guest: What I try to bring to my
reporting, and even to the book as well, is a healthy skepticism, and an objectivity. You know, a story I often tell: I mentioned to you earlier that my father was an engineer at IBM. One of the things he worked on was the universal product code, the barcodes on all of your groceries. It is something everybody can relate to. The story of the creation of that technology is technically interesting, how they did this. But then he also had this great story about when they actually deployed it. Something happened that they did not expect: they put this in grocery stores, and there were literally protests on the sidewalks, because people thought this was the sign of the beast come true, from the Bible and the Book of Revelation, right? It was a really instructional moment for me, right?
You have on one side this technology that is built in this fascinating way and reaches a point where it can do things you did not expect technically. Then you put it out into the world, and it really does things you did not expect, and it interacts with people's emotions. [inaudible] As I have developed at the Times, I have to constantly remind myself of that, and constantly say: what am I forgetting about where this could go? Does the person who is talking to me now have the full picture? Don't I need to talk to someone else and see what their perspective is? That is what it is about: constantly widening your net to new people to understand
new things about technology, but also about the world in general and how that technology is going to affect that world. >> host: Thanks for bringing that up. We talked about Facebook earlier. A lot of the technology on that platform and others is about personalization, so we become these big bubbles, but we do not talk to people who do not share the same values. How is AI amplifying that, and what are the consequences for the future? >> guest: That is another great point, and I am actually writing a piece about this at the moment. You do see this in our daily lives when you use these services designed to give you what you want, right, whether it is a social network or a chatbot. I wrote another story during quarantine about chatbots,
which are using AI systems, these natural language systems we have talked about at length here, to learn to carry on a conversation. That is a powerful thing at this moment. We need interaction, right? And people are starting to use that. What the system does, the reason people respond to it, is it gives them what they want, right? It is telling them positive things, and that is what people respond to. But in all relationships we need both positive reinforcement and the negative, right? We need to be taken out of those bubbles. How is technology going to do that, whether it is a social network or a chatbot? How do you get people to use a chatbot that is going to be like a good sibling, that is going to point out your flaws (my sisters did this) and tell you where you need to be better,
right? That is an important part of our lives. Whether it is your sibling or your therapist, you do not need someone only telling you the good things. You do not want people in your life only creating that bubble for you. How do we step out of that? >> host: Are there people who are working on that problem, to give you a balanced outlook as opposed to a very personalized one? >> guest: People are at least recognizing the problem. As I wrote about that chatbot, for instance, I talked to experts in all sorts of fields, in therapy and in technology. Sherry Turkle is well known in this field, and she is adamant about this: these sorts of therapy bots are only reinforcing what you
already think, only giving you the positive. We need to step back and think about this. Luckily there are people like her at least calling attention to it. >> host: In your conversations with people over the last year of quarantine, that experience of being isolated, where technology may be part of the solution but not the entire story, has that changed the focus areas people are talking about, or what is getting them excited? What has been the effect of this on the field? >> guest: The effect is huge, right? I think about it a lot as I put together stories, and I think we all need to think about this. I think about it as I raise my two daughters. You see them rely on technology more and more. There are some things that are incredibly positive. I have a 13-year-old daughter
who has actually developed some of her relationships to an entirely new level through FaceTime. She will live on FaceTime with her cousins on another coast. That has been a real positive thing; you can see it. But in the long term this can also drag us down. We can rely on technology as a crutch in a way that maybe we shouldn't. It can be easy to stay at home and do Zoom like this. It is easier to be a reporter, right, and just get on Zoom. But the better stories come, the better questions come, and you find things, when you get out into the world, right? That is true not just of journalists but of anyone else. These are things we all need. It might be easier to say, if you are a company, things are
working well with everyone at home, let's just keep it that way. Maybe that is not the best thing. I am hoping, as much as possible, we can get back to a world where I am meeting people on the street and others are going back to the office, for all of these very human reasons that we talked about. >> host: How did you come to choose the main characters in the book? >> guest: What I eventually realized is that there is a common thread with Geoff, and there is a tiny circle around him. What I became fascinated by was this idea of the neural network, which at various stages was seen as so strange; even in the AI field there were not a lot of people who believed in it.
And since that circle is teeny tiny, that makes for a really good story on two levels, right? One, the fundamental good story, historically, is someone who believes in something even in the face of skepticism, right? But you also had this tiny group whose paths cross each other in these really surprising ways. There were these moments when I was doing the reporting, for instance, when I realized that Demis Hassabis, who created DeepMind, came out of this program at University College London called the Gatsby Unit, which is this blend of neuroscience and AI. Well, it was founded by Geoff Hinton, right? There were all of these moments that I discovered in my reporting where Geoff was there, right? I love to talk about the moment when he
and two students really showed this idea could work with image recognition, in 2012. Geoff, two years earlier, had been instrumental in making the idea work with speech recognition, in a completely different part of the world, at Microsoft. There is a great story in the book about him traveling there by train, because he does not sit down, to help Microsoft make it work. Then, on top of all of that, this tiny group suddenly became enormously invaluable when the idea started to work. You see this in the opening of the book, when Geoff literally auctions his services off to the highest bidder, there are all these companies bidding for him, and that set the price for the talent. So you have this tiny group of people, interesting in their own particular ways, and then they are suddenly
in demand, and they move into these companies. That became the center of the book, right, this tiny group that moved into industry. That is going to leave out a lot of interesting and important people, but that is part of any story that you do, any book that you write. >> host: Looking back at the history, Geoff is certainly the hub of this network. I had the chance to sit in on one of their workshops, the conference formerly called NIPS. One of the things I noticed at the time (I think it was 2012? 2013?) was that it was all guys. And the one woman who was part of the group was not there, and I do not know why. How does that
trickle down? >> guest: You see it; it is a dudes problem. You walk into a room like that (we were actually working together when you wrote that story, right?) and you can see it. And you could see it historically, right? All of these people we are talking about, the people who were instrumental, the people who were in that room when you were in that meeting, are the people who built the technology. The tiny group, they built it. That is the fundamental problem that is there as the technology starts to work. And then you have people like Timnit Gebru; there is a scene in the book which is similar to your experience. She walks into the NIPS conference and sees hundreds of people, there for a lecture, and she realizes there is no one who looks like her.
Out of hundreds of people, she counted five Black people, all of them men, all of whom she knew. And this is not in the U.S.; it is in Barcelona, right? This is a global community that we are talking about, and this issue is in many ways global. >> host: There was an article you wrote when we were at Wired about one of the first women engineers at Facebook. She said that she came to technology with the idea that it was a meritocracy and found that it wasn't. Is that something still playing out today? >> guest: It is absolutely playing out. You see it time and again, as with the situation at Google. What I think is positive,
though, is people are more willing to call it out, right? What you and I saw firsthand is that it is often so hard to convince people to call it out, right? There were consequences, and still are, for people who are willing to say the obvious. So on some level we see some progress, because people are willing to stand up and say: we need to think about this. That is so hard to do, but we are starting to see it. What we also need to do is listen when that happens, right? And that can be hard too, right? You have to be willing, willing again, to put yourself in uncomfortable situations. Me, as a reporter and a white man, I need to listen when those
moments come up, when people are critical of, say, the way I am doing things, the way I am building a story. It is one thing I have learned: you have a certain way of doing things, and you have to be willing to be challenged on that. Even if there are really good reasons why you do things, why you build technology or build a story, you have to be willing to step back over and over and over again and say: is there something else I need to think about? >> host: How did you decide what to put in the book?
>> guest: [inaudible] That means roping in so many other things we talked about, right? The development of the idea, and then all of the areas where it started to work. That is what is fascinating: the single idea is driving the change in area after area after area. Through the people, have it read like a novel to show all of that, but then show all these questions it has raised: the bias questions, the autonomous weapons question. The big question we have not talked about is the disinformation question. This is the other huge problem we are going to face: these systems can generate images, videos, blog posts, tweets, as well as conversations, that look like the real thing. If we think we have a disinformation problem now, once the machines perfect that, once we build machines that can do that 100 percent of the
time, that will change the way we look at the world. And then there are the geopolitical issues. This is a global thing, not an American thing. All the talent was outside the U.S.; they are all immigrants. U.S. companies jumped on it, but by the way, in that scene at the beginning of the book where Geoff Hinton auctions himself off to the highest bidder, there is a Chinese player right there. People say China followed; no, they were right there. There are all these geopolitical issues to consider. My aim was to rope all that into one book and set us up for everything to come: this is what has happened, these are the questions we are facing, including that big AGI question people say they are working on. How should we think about that?
That is what I wanted to do. So hopefully it is a good read for people who want a human story, but hopefully I can layer the bigger ideas on top of all of that. >> host: Can we talk about outside the U.S.? We have been very U.S.-focused here. How is this playing out in China, in Asia, in Europe? Can you talk about that? >> guest: Absolutely. One of the things I am fascinated by, and maybe this is surprising to some people: Geoff Hinton was an academic; he was a professor at the University of Toronto. And when he moved into Google, one of his stipulations was that he wanted to keep his professorship, and he wanted to keep behaving like an academic. You saw this when one of his old colleagues followed him.
Yann LeCun went to Facebook, and he followed Geoff's lead: he wanted to be able to publish. He talks about this a lot. What happened, remarkably, is that the sensibilities of these few individuals changed the course of the companies. What you saw at Google and Facebook and others is that they publish all of their latest research, right? The latest ideas get shared with everyone. What that means is they are available to everyone: whether you are in London or you are in China, the latest research is freely available. And the currency becomes who has the data to train on, who has the processing power to train the systems, and who has the talent. In some ways, a lot of people think China has the advantage. They have a huge population, and that means they are going to create more data.
They are going to create more talented AI researchers who can build the systems, and that is really what is important there. So you have this new landscape that you have to think about differently. This is not a 1950s Cold War landscape. We need to not think about this in terms of export controls or sealing off our borders to certain immigrants; it is not that kind of world. We in the U.S. are relying on immigrant talent; we always have, and that includes Chinese talent. If we bar our borders to researchers, we are shooting ourselves in the foot. At the same time, exactly the same with export controls: we are not exporting anything to
China, so what is that going to do? They have all the access to the research papers, right? So we need to think about this world differently. We certainly need to worry about espionage from other countries, and there are concerns particularly when it comes to military applications and the like. But this is not the world of absolutes that people might have thought our world was in the past. >> host: Going back to the misinformation you mentioned a bit ago: before it played out here, it played out elsewhere. Are there things happening outside our borders that are powered by AI that we should keep an eye on, to see where we are going in the future? >> guest: Absolutely. The prime example is in China,
where this technology, a neural network that can identify photos, among so many other things, is being used to target an ethnic minority, right? That is the type of thing that really raises concerns in this area. The same type of technology is being deployed here in the States, and luckily people have started to raise questions about it. China is the extreme example, but as you indicate, as we see this play out we have to think about how we are going to deploy these things, and how we are already starting to. You see this in my book, where there starts to be a rollout of face recognition technology, and really because of people's concerns, companies start to wake up to it. And some of them start to
respond, and at least say: we need to think about legislation. As usual, these companies only go so far, but at least they are publicly recognizing these types of issues. I think you are exactly right. We live in a world where this is developed everywhere and deployed everywhere. We cannot think about what we are doing solely by looking at our borders. >> host: We are also doing that here, right? Similar technology has been used for years to track undocumented immigrants. >> guest: That is exactly right. Again, that is another example. You might call that technology simpler, but it is often about the way these things are deployed. And then, as the technology becomes more powerful, those very issues become bigger and
bigger. >> host: What can private citizens do if they are worried about the use of AI by their local police departments and local governments? >> guest: I think one of the lessons here, and we said this earlier, is that these are problems we all need to deal with. The companies are not going to deal with this on their own; we know that. So it becomes our problem, right? We need to individually speak up about this. Journalists need to write about this. People need to call attention to it. >> host: Often the problems that AI
causes affect the disenfranchised or those already marginalized, and so the problems it causes may seem far away from someone else. Why should people like you and me, who are privileged, care? >> guest: We ought to step outside of our bubbles, right, and realize it is easy in Silicon Valley to forget everything else that is going on. It is easy to only look at what is happening in your bubble. We need to remember these are not technologies just for the privileged. These are technologies that are creeping into the daily lives of everyone, sometimes in unexpected
ways. We absolutely have to keep an eye on that. >> host: What I appreciate about the book is that in a way it was a history of this community, but also a story about people. >> guest: Absolutely. What I often say is that any good story is about people, and that includes stories about technology, though technology writing does not always work that way. Fundamentally, what I wanted to do was tell a story about people, and then, if I could do that, build these bigger ideas on top of it. >> host: If you were going to write a sequel in ten years, what do you think you would be writing about? Is it going to be neural networks again, or is there another technology that is going to push us forward
even more? >> guest: My inclination is always to do something completely different, so I may surprise you. One of the things I have been thinking about is quantum computing, which is another fascinating area that is coming to the fore. We will see. But this story, the AI story, is really only just beginning, right? We are only just understanding how these systems work and how they are being deployed. There is so much left to cover, and I will certainly be covering it at the Times. >> host: There are a lot of high-flying stories in the book, quite literally. Can we talk about that particular story, and then we can move on
after that? >> guest: That is a good place to end. You learn in the first sentence of the book that Geoff has back problems; he literally does not sit down. As a teenager he was lifting a space heater for his mother and he slipped a disc. By his late 50s the disc would slip so often that it was hard for him to function. There are all these stories from his students where they walk into his office trying to defend their PhD thesis and he is laid out on his desk, or on a cot by the wall. What that meant was he does not drive and he does not fly, because commercial airlines make you sit during takeoff and landing. This is another moment I was floored by when it came up in my research: Geoff had moved to Google, and Google was thinking about
acquiring DeepMind, and they wanted Geoff to vet the company and help them decide whether to spend what ended up being $650 million on it. Alan Eustace, who was then the head of engineering, devised something incredible, inspired by his own feats as a skydiver (he is an extreme skydiver): they basically strapped Geoff in place on a makeshift bed in a Gulfstream jet. That is how Geoff made it to London and ended up walking into the office. He told me that story. One of the great things, and the hard things, about our job is that we do it piece by piece, right? You get a little hint and you
go to the next person to get a little bit more, and once you have enough you go to the source: I have got this much, tell me the rest. That is the way the book often worked. >> host: One of your numbers is 44. That seemed like a crazy number at the time, but in retrospect, not even two years later, it seemed like a bargain-basement deal given how much was paid for DeepMind. Why the huge explosion for this talent? It is like acquiring an NFL quarterback. >> guest: That is right. The $44 million figure that
Geoff was acquired for, that was the hardest thing to pinpoint in the book. I was worried I would not be able to back it up, but that is how much they paid. You are right that it is a bargain price, but it is $44 million for three people, right? So individually that is a whole lot of money. Then the prices would explode. It is basically supply and demand. There is this small group of people who specialize in the field; you can argue whether the companies behaved rationally or not, but they were all intent on jumping on this area, and that meant the prices went sky high. Everyone wanted their own Geoff Hinton. You see it most interestingly with Microsoft, where they wanted their own incredible guy, and
one of the top executives ends up going to Montréal to get their own. You had this frenzy. It may happen in other areas soon, where the price suddenly skyrockets for the talent. >> host: Are engineers in this field still commanding that kind of money? >> guest: People in this field still can command a lot of money, because it bleeds into these other areas: self-driving cars, for instance, and self-flying drones, which are becoming big. It is an area people are looking at as it changes the way we look at technology and the
way technology is built. >> host: Is there anything you left out of the book that you wish had gotten in? >> guest: I often say everything is in there, and pretty much everything is. Unfortunately, there is one anecdote I cannot share; sometimes that is the way it works. And there is one anecdote I wish was in there, because it parallels this whole story. It involves a professor at Stanford, Fei-Fei Li. >> host: She is in the book. >> guest: She is. One thing people do not realize is really powerful: in order for these neural networks to work, as we talked about, you need the data and you need the processing power.
Geoff Hinton and his two students were bought for over forty-four million dollars after a key result on a contest called ImageNet. ImageNet is a collection of photos that allowed them to build a system that could recognize everything in those photos. That was her brainchild. I was talking at one point to her, and ended up talking to her advisor as well, and there was this moment between them where Fei-Fei said she wanted to build ImageNet, that she wanted to bet her career on this idea. He told her that was not the way to go, that she should not do it, that it was not the thing to bet her career on. And she did it anyway, right? That is another piece that had to be in place for this kind of thing to work. >> host: It is such a powerful thing that drove that success.
>> guest: It is complicated, but I wish it was in there, right? Sometimes you make choices for narrative and for flow, and you have to make these hard decisions. But, again, we need to step back and think about the decisions we made, right, whoever we are, and rethink them, and say: did I make a mistake there? Does that need to go in next time? That type of thing. >> host: How quickly did you write this book, and what have you learned? >> guest: I wrote it over the course of about two and a half to three years. I made the mistake of agreeing to write a book and agreeing to join the "New York Times" in the same week. I will never, ever do that
again; that was a mistake. You end up making your life far too complicated. What I have learned is that next time I am taking a book leave and concentrating on the book, rather than trying to do it in the mornings and late at night and standing in line at the grocery store thumbing through Google Docs on my phone. That is not the way to go. >> host: Tell us more about taking time off to write your project rather than doing it during your full-time job. >> guest: I think, in writing a book, you also learn more about your field. You also learn about the way we tell stories, and the way you give people a real idea of what is going on. It is a different skill from
daily reporting, and I have learned a lot about those things. Like I said, you also make choices that you regret, and you try to learn from that in the future. >> host: That is the perfect segue into the museum's initiative. You were asked to write down a single word that would give a young person some advice. I know that is a very personal word to you. Can you show us what that word is and tell the story behind it? >> guest: Absolutely. This is going to go back to my father again, who was an engineer at IBM and an amateur philosopher. My word is truth. There are some symbols below
which I will explain. My father loved the philosopher Mortimer Adler, who wrote a book called Six Great Ideas, laying out many of the ideas you should live your life by. My father always talked about how, according to this book, there are three ideas you should strive for in your daily life: truth, goodness, and beauty. He believed this to the point that he put symbols for each of these ideas on keepsake boxes made of oak for all of his grandchildren. This is the way you should live, and truth was the most important. He represented truth with a theorem, goodness with a heart, and beauty with a rose. Truth was so important because it informed what we understood to be goodness and beauty.
To understand those concepts, you need to understand what is true. And truth is something you do have to struggle with every day. I certainly struggle with it as a reporter. It is about constantly talking to new people, reevaluating what you heard the previous day, taking in what you are hearing from everyone, and sitting down at the end of the day and thinking: what do I believe is true, given everything I have learned and everyone I have talked to? It is a personal decision, but it is informed by everyone you speak to on a daily basis. >> host: That is great. You dedicated the book to your dad. >> guest: Absolutely. He died a few years ago, so what I often say is that the one person who would enjoy the book most is not around to read it. That is the one thing
that makes me sad. But otherwise he embodies the book in many ways, all the things he stood for. He was an engineer, but he believed in those bigger ideas, and he often raised awareness about how technology can affect our world in ways we might not expect. >> host: I think we will leave it at that; we could go on much longer. Thank you so much for inviting me to do this. It was a pleasure to hear about the book, and I hope it is very successful. >> guest: Thank you for doing this; it was a lot of fun. I will turn it over. >> A big thank you, Danielle. This was a terrific conversation, absolutely on point as we think about how technology is redefining what it means to be human and changing how we interact
with one another and, frankly, the planet. For the last four decades, the museum has established itself as the premier institution and trusted source for preserving and communicating the history of computing and its impact on the human experience. We serve a diverse array of audiences, and the key point here is that these live events are part of our public education and conversation programs that help all of us become better citizens in a modern world where technology is ever present. These conversations are preserved and available as full-length video on our YouTube channel. Key takeaways are available on our website, and they are preserved as a permanent part of our collection for publication, research, exhibits, and ongoing education. So, in quick summary, it is a beautiful thing to be able to think about history, because everything has it.
And as one of our previous guests, a noted historian, pointed out, history is an imperfect but indispensable guide to the future, and the only mirror and measuring rod we have for the present. So thank you, Cade, and thank you, Danielle, for a great conversation about the way the world is today with AI. We would love to have you back in a few years to revisit it. Thank you so much. Bye-bye, everyone.