
tv   Cade Metz Genius Makers  CSPAN  May 9, 2021 6:45am-8:15am EDT

6:45 am
>> today we continue our focus on ai innovation with a fresh exploration of ai history, as well as the choices that we face as digital citizens as ai begins to change our world, for better or for worse.
6:46 am
they spent decades trying to build neural networks, often in the face of enormous skepticism. from talking digital assistants to self-driving cars and automated healthcare, they were pulled into a world they did not expect, along with the rest of us, by google, facebook and others. the reporter uncovers this tale, and i think it is an often dramatic telling of this history: some of the untold stories of the key people, communities and companies shaping the field of ai; national interest, shareholder value and the pursuit of tech innovation; how these decisions are made and
6:47 am
who is benefiting. it is my pleasure to welcome cade. he brings rich experience as a senior staff writer at wired magazine and with the register. he is now based in the san francisco bay area, where he is a technology correspondent with the new york times covering ai, driverless cars and other areas. here are his five numbers: nine years covering ai; the $44 million that google paid; 37; eighty photos of a black woman tagged as a gorilla by google; and one book telling the whole story. cade, welcome.
6:48 am
our moderator for today's program is an editor at the wall street journal who has written about ai across multiple industries, and she has written for a few other publications as well. so glad you're here with us. >> so nice to see you again. it has been 16, 18 months since we last hung out. it is nice to reconnect.
6:49 am
this is a book that is fundamentally about people, people who were toiling in obscurity, maligned and mocked for some of their ideas. what captivated you the most about this cast of characters? >> there were two moments i think that really inspired the book. the first was at the four seasons hotel in seoul. a machine built by deepmind, a lab based in london that had been bought by google, was there to play this game of go, which is often called the eastern version of chess. most ai experts believed a machine that could beat the best players at this game was still decades
6:50 am
away. instead, it beat one of the world's top players, there at the four seasons hotel. and there was this moment when the people who built this machine, who had spent years cultivating the ideas behind it and building it, could not understand what the machine was doing. they were confused and caught unaware by their own machine. these individual people, including demis hassabis who, as the leader of this lab, became a focus of this book. i knew that demis would be a character. that was before i met geoff hinton, who is a generation older than demis, and he became the central thread.
6:51 am
he, in his own way, is a fascinating character who had worked on many of the same ideas as demis and demis's colleagues for decades. for anyone who knows geoff, he is a fascinating, engaging, strange character who has endured serious hardship over the years, among other things. i thought to myself, if i can just get geoff onto the page, maybe this book will work. >> i actually wanted to ask you about a moment involving geoff, who is kind of the connector of this group. about halfway through the chapter that deals with the uses of ai in healthcare, geoff lets out a small sigh. he says early diagnosis is not a
6:52 am
trivial problem. we can do better, so why not let machines help us? this was really important to him specifically because of his life experience with pancreatic cancer. that moment comes back at the end of the book, where you see him immersed in this. >> i love that you pinpoint those moments. you see the emotion that geoff shows, that side that he lets out. in the moment when he receives the award, he does not talk about himself. he talks about his wife. those are the kinds of moments i tried to capture, because they show the personal struggle
6:53 am
that geoff faced to bring these ideas to the fore. it is just one part of his personal struggle, and certainly the most powerful moment is at the end, when he talks about his wife. this is someone who had two wives die of cancer, while experiencing his own physical hardship, as you learn in the first sentence of the book: he is someone who literally does not sit down because of a back problem. this plays into the story as well. with this extreme back problem he has to make these pilgrimages across north america, and sometimes across the atlantic, to realize these ideas. those are the types of moments that i feel show not only what he faces, but what people like him
6:54 am
often face when they are trying to realize their work. there are technical problems, but there are personal obstacles that have to be overcome as well. >> i felt the humanity. some sections of the book just jump off the page: really loving, but sometimes a struggle. also jokes, almost as a defense mechanism. others were a little bit more aggressive or adamant about their ideas, in ways that captured a little bit of their insecurity. it reminds everybody there is a humanity there. you talked to other people for the book as well. how did their lives, dreams, opportunities and obstacles
6:55 am
affect the work? >> what i love is that it was different for each person. we are talking about essentially a single idea, one that dates back to the 50s, and about all of these different people trying to push that idea forward. the way they dealt with it was so different, because they are humans, and we are all different. the personality comes out in so many different ways. you are right, sometimes it is humor. geoff is an incredibly funny person, and you can see how that humor would attract those around him. as he struggled to get this idea out, he needed the help of others, and the humor is not only entertaining, but a way of
6:56 am
convincing people to work on this project. as i got to know geoff, and more importantly as i interviewed his students who knew him well, you saw that he was this magnet, often drawing people to him through the humor and having them help him realize this idea. other people pushed this idea forward in other ways. some people are so adamant about their idea, and so upset by the obstacles, that they really lash out. you see that as well. that can work in some ways and it can backfire in others. what i wanted to do was show all of that, and also how, as these folks
6:57 am
pushed the idea forward, it would behave in ways they did not expect and surprise even them. they reached these moments where they did not necessarily know what to do. that is part of it as well. >> what surprised people like that about the way their technology caught on? >> the great example, and it was almost inevitable that i would use it, is geoff hinton. born in london, he eventually made his way to the united states and started to explore this idea, first on the west coast and then as a professor at carnegie mellon in pittsburgh. there is this moment in the mid-80s where geoff and his wife at the time realized
6:58 am
that he cannot do this ai work without taking money from ronald reagan's defense department. that is not something that he wants to do. this is at the height of the iran-contra affair, and he and his wife have very firm beliefs on this. they do not want this work to be used inside the military. so they actually leave the country. he believes in this stance to the point where he goes to canada and sets up shop at the university of toronto as a professor. it would have real implications for the whole field: years on, when this idea finally started to work, most of the talent was centered around geoff and others in canada, not in the u.s. but as that idea starts to work and geoff is sucked into google, in short order google starts to
6:59 am
work with the defense department. there are protests at the company. some people believe this is absolutely the right thing for google to do; there are others who are really upset by it, who did not expect to be working for a defense contractor. there is a moment where geoff himself struggles with this. he was against it, but he was not sure how much he should speak out. he ended up lobbying one of the founders to push back on this, but he was not as public with his concerns as some others, and there was a moment where an employee criticizes him for that. again, you feel the humanity of the situation. we can all relate on some level to having our own beliefs on one hand, and on the other hand the motivations of the
7:00 am
company we work for. how do you balance those? it is a hard thing. >> i want to go back to the topic of the military. there is a long history there. can you talk a little bit about that? it goes way back to the inception of silicon valley. >> it really does, and i think this is important. we are at this moment where we often see news stories saying silicon valley is against working with the military, and there is a portion of the valley that believes that. but the valley in many ways was built on military money. google was built in part on defense department funding. the internet came out of a dod project. one of the founders of hp served in nixon's administration at the defense department. so there is this mixed history.
7:01 am
yes, parts of silicon valley push back against that kind of work, but that is only one side of the equation. and i think it is a good way of thinking about the technology that i write about, not only when it comes to military uses: there are dual uses for this technology, meaning it can be used for good and it can be used for ill, in ways you might not expect. and a lot of it is about individuals struggling to figure out what is right and what is wrong. it is not always black and white. >> the finances are important here too, right? going back to what you mentioned earlier about your conversation with demis in korea, where there is this national excitement about go,
7:02 am
which, like dota and some of the other games, is a wild game. why games? >> games are hard. it is as simple as that. they are chosen for technical reasons. they are chosen for historical reasons. and it is conquest, right? that is the real thing, and you can see this in demis. he himself is fundamentally a game player, and games are about competition. they are metaphors, often, for war. and the intensity of demis's
7:03 am
ambition is palpable. you could feel it in korea. he wants to win, and that is really what drives him, along with the technological aspects of this. and you know, the other thing is that games are something we all relate to, and i think that is one reason the world watched the ai win when the go event happened. all of us play games as children, and we understand on some level the way games work. >> watching it in korea was scary. >> what i often say, because it is true, is that makes you a participant; i was an observer.
7:04 am
you had a whole country's national game in focus, but you could also hear and feel the sadness when the korean, very human, player was getting beat. you could feel the sadness in the country, and the fear. it brought out those emotions, and that is why it was such an inflection point. but you are right, there is a dark side to this as well as a light one. >> one of the things i found interesting about the people's stories in this book was this quest to improve upon our humanity, almost as if we were an outdated pc. where does that come from? >> my father was an engineer and we talked about this a lot. it is very easy, when you are focused on the technology,
7:05 am
to see it as somehow separate from humanity. there is an attitude you see in silicon valley often, that technology is sort of boxed off from everything else that is happening, and when you do that, it is easy to see it only for the positive things it is going to bring and not see all the other consequences. what i wanted to do with the second half of the book is show that a technology is not separate: it weaves into the rest of our lives. anyone who has lived through the past four years can recognize that. we have relatively simple technologies. facebook is not a complicated technology, but it has huge
7:06 am
effects because we all use it. the technologies that i write about in the book are far more complex, in part, and this gets to your first question and the way i answered it, because we do not understand how these things operate in some cases. they literally learn skills on their own by analyzing data, more data than we can wrap our heads around, and they will do things we do not expect. as these get more powerful and pervasive, their effect will go beyond that of the relatively simple technologies we use today. >> trying to leap into the future here for an audience question: based on what you have learned, will machines have greater levels of general intelligence within the next five years, and what do you think? >> the book goes into this,
7:07 am
and what i want to do, in the book as well as in conversations like this, is make a clear distinction between what we have today and this idea that we can re-create the human brain, what scientists call agi, artificial general intelligence. it is still something we do not know how to get to. we have two labs now who say this is their stated mission: deepmind is one, and openai is another, in san francisco. in its charter it says it is building this. but they, including demis, who is in this camp, do not necessarily know how to get there. it is aspirational. it is in the future, and because of that it is really hard to say when we get there. but this is certainly not a near-term thing.
7:08 am
we have systems that learn specific skills. we are talking about systems that recognize the words you say or recognize faces, and that image recognition could help us build self-driving cars and other machines that respond to some situations. it can help with healthcare, an area you know well. but that is different from reasoning, and that is one of the reasons self-driving cars are not on the road yet: they cannot deal with all the chaos and uncertainty that we as humans can deal with. i do not like to make a call on this because it is impossible. it is in the future, and people are going to argue about it. it can be fun to argue about when it happens, but it is not going to happen soon. >> the deepmind lab is chock full of neuroscientists,
7:09 am
and i think they would be among the first to tell you that neuroscience as a field does not understand intelligence, real intelligence. how do you re-create something you do not understand? >> this is why i liked working with you and your staff. this is the key thing to understand, and it might not be obvious when you read stories about ai: we do not understand how the brain works, and therefore re-creating the brain is fraught from the get-go. it is a task that is harder than you might think. the brain is such a mysterious thing. but what you do see is that it provides a certain inspiration to the field, and you see this in geoff hinton. he was inspired as a student
7:10 am
at cambridge by the idea that you could re-create the brain. in nurturing this idea of a neural network, he is really driven by this notion that he can take the loose way the brain works and apply that to a machine. demis is taking that to another level. he has a team of neuroscientists studying the brain, the way it works, and how that might be re-created. some people like demis and geoff see this as a virtuous circle: as you better understand the way the brain works, that can help you build machines, and as the machines improve, that can help you understand the brain. that works sometimes, but not
7:11 am
always. >> another question from our audience: modern social media incorporates an enormous amount of ai. can we talk about the impact of ai on communication and the online world? >> this is another problem with the ai field: ai is a term that gets applied to everything. what i meant by simpler is that the algorithms that are, say, choosing what is in your social media feed are relatively simple compared to these neural network systems, and the reason those are more complicated is that they learn their tasks in this
7:12 am
really intense way. the example i always use, which has set up this conversation, is cat photos. you take thousands of cat photos and you feed them into a neural network, which is just a mathematical system, and this network analyzes all those photos and looks for patterns that define what a cat looks like. that is how it learns to identify a cat. we as humans cannot really understand everything it is learning, and that is just with cat photos. now think about medical images. these systems learn the same way when it comes to medical images. we do not see the flaws in those systems. we do not know the mistakes they are learning. now apply that to the internet, which the person who asked the question alluded to. these systems learn from everything we have posted to the internet:
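the learning process described here can be sketched in miniature. what follows is a hypothetical toy, a one-layer classifier on fake 4-pixel "photos" rather than any real production system, but it shows the core mechanic: the "patterns" end up in weights learned from labeled examples, not in rules anyone wrote by hand.

```python
# Toy sketch (hypothetical, not any real system): a one-layer classifier
# that learns to tell two kinds of 4-pixel "photos" apart purely by
# analyzing labeled examples. The learned patterns live in the weights.
import math
import random

random.seed(0)

def make_photo(is_cat):
    # "cat" photos are bright on the left, others bright on the right
    base = [1, 1, 0, 0] if is_cat else [0, 0, 1, 1]
    return [b + random.gauss(0, 0.1) for b in base]

data = [(make_photo(True), 1) for _ in range(50)]
data += [(make_photo(False), 0) for _ in range(50)]

weights = [0.0] * 4  # one weight per pixel, filled in by learning below

def predict(pixels):
    z = sum(w * x for w, x in zip(weights, pixels))
    return 1 / (1 + math.exp(-z))  # probability the photo is a "cat"

# Gradient descent on log loss: nobody hardcodes what a cat looks like;
# the network extracts it from the labeled photos.
for _ in range(200):
    for pixels, label in data:
        err = predict(pixels) - label
        for i in range(4):
            weights[i] -= 0.1 * err * pixels[i]

accuracy = sum((predict(p) > 0.5) == bool(y) for p, y in data) / len(data)
```

nothing here is cat-specific: swap in medical images and the same mechanics apply, along with the same difficulty of saying exactly what was learned.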
7:13 am
texts, books, all sorts of other things. we know the internet can be biased against women and people of color. we have hate speech, needless to say, on the internet. these giant systems, which are learning natural language now, systems that help drive the google search engine and are starting to be used in chatbots designed to carry on conversations, are learning those biases and flaws and other things that we may not see, that even their creators may not see, because of the way the systems are built. >> you alluded to the bias that ai might be escalating or promoting at scale. about two thirds of the way into the book we come to the chapter that deals with these very issues.
7:14 am
you have a quote from meg mitchell, from an interview she gave to bloomberg, warning about the "sea of dudes" problem. can you talk about that? >> that quote is interesting on many levels, one of which is that it came so early. another scene in that chapter, and this was alluded to at the top of this call, is the moment where google identifies photos posted by a software engineer in brooklyn as gorillas. that is 2015. we are still struggling to deal with that issue, years after meg said that in the pages of bloomberg, years after that incident, even though you have people like meg and so many others
7:15 am
who have not only noticed this problem but called attention to it. the fundamental thing to realize is that it is endemic to the technology. the technology has to be built this way if it is going to work the way these large companies want it to: it requires enormous amounts of data. and what that means is you cannot just remove those flaws. you can try to start over, but how do you get data that does not have those flaws in it? that is such a difficult thing, and it exemplifies this moment we are going through now, where not only google and microsoft but the tech industry as a whole is struggling to deal with that conundrum. >> a couple of years ago i had a conversation about this with kate crawford, and she mentioned that we have been biased to think of machines as infallible,
7:16 am
that what they say goes, because how could a machine make a mistake? what you pointed out is really thoughtful, because in a way the machines are just an extension of us, and we all come with our own biases. are people starting to think more about that? and before that 2015 incident, were other people speaking out about it? >> they are thinking about it more, but there are these other forces. what is so interesting to me is you have this moment in my book where timnit and meg are both hired by google and, led by meg, they create this ethical ai team at google, which was designed to fight the problem. recently, since my book was
7:17 am
put to bed, they have both been ousted from google, and i have written about this in the pages of the times. so even as you have people calling attention to this, and a lot of people taking notice, inside companies as well as out, you have these other corporate forces in various ways pushing against it. these companies have their own aims. companies are complicated and driven by profit motives, among other things, and that often comes into conflict with these other efforts. >> i find that so interesting because their ideas are powerful and insightful, much like the neural network pioneers' ideas were before they were adopted by amazon and google, and their ideas are similarly maligned.
7:18 am
>> you are exactly right. you see on twitter, for instance, on a daily basis, people accusing, say, timnit of being an activist, which really raises my hackles. you can make, and you see this in the book, the same criticism of those neural network pioneers who were sort of in the wilderness, saying this technology is going to work when most of the industry thinks it will not. you could deride them as activists too. they are all scientists. timnit, in the face of pushback from all these people around her, is saying you need to pay attention to this. thankfully she is not alone. >> why do you think the industry is having such a hard time with this moment?
7:19 am
>> change is hard, for one thing. the other thing is that companies are designed in certain ways. this is another thing you see in the book: these companies develop these individual personalities, almost, and they respond to situations differently, in particular ways. they are formed by their history, but companies are also trying to promote themselves and to tell the world that what they are doing is positive. even if they in one way acknowledge the problems in the world, they do not want those problems pinned on them, and that is part of the clash between google and timnit and meg, as they tried to publish a paper calling attention to some of these problems. google did not want google's name to be part of the issue.
7:20 am
and again, this is not just a google problem. this is something all these companies are going to have to deal with. >> an audience question: is there a way to correct biases in ai? >> it is very, very hard. like i said, it is endemic to the technology. these systems, these natural language systems, literally spend months analyzing texts from the internet: thousands of books, thousands of articles and blog posts. it works because of volume. it works because you throw as much at it as you can. that means you cannot just weed the flaws out. whether the industry and science as a whole
7:21 am
can find ways to weed them out, so to speak, and train these systems in other ways is still an open question. it is such a hard problem. >> there is a scene in the book, at clarifai, where an engineer is trying to understand why the data does not work. can you walk us through that scene? >> this is deb raji, who was at clarifai, a startup in new york trying to build an image recognition system and a content moderation system. she herself is a black woman,
7:22 am
so to her the problem is obvious: the company was taking stock photos that had been lying around for ages and using them to train the system, and the large majority of them showed white men. so when she sees that, it is immediately a problem, but it is not necessarily obvious to others. that is part of what we are dealing with here. the question of diversity in the tech industry has obviously been a problem for a while, and in this field there is an added level to that problem, in that we have people who are choosing the data. me, as a white man, i have a certain perspective on the world, and that could inform the data i choose.
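the skew described here, a training set made up mostly of photos of white men, can also be surfaced mechanically before any model is trained. this is a hypothetical sketch with invented records and field names, not any company's actual pipeline: it simply tallies who is in the data.

```python
# Hypothetical sketch: audit the make-up of a training set up front, so
# a skew like "mostly stock photos of white men" is visible before the
# system is trained, not after it fails in the real world.
from collections import Counter

# Stand-ins for the metadata attached to each training photo.
photos = [
    {"skin_tone": "lighter", "gender": "male"},
    {"skin_tone": "lighter", "gender": "male"},
    {"skin_tone": "lighter", "gender": "male"},
    {"skin_tone": "lighter", "gender": "female"},
    {"skin_tone": "darker",  "gender": "female"},
]

# Tally how many photos each (skin tone, gender) group contributes.
counts = Counter((p["skin_tone"], p["gender"]) for p in photos)
total = len(photos)

# Flag any group supplying less than, say, 25% of the data. Note that a
# group with ZERO photos (here: darker-skinned men) never even appears
# in the tally, which is its own warning sign.
flagged = sorted(g for g, n in counts.items() if n / total < 0.25)
```

a real audit would use richer categories and real metadata; the point is only that the imbalance one person saw instantly can also be reported as numbers, including the groups that are missing entirely.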
7:23 am
that is one of the reasons we need a diverse population of people working on this, so they can see these issues the way that deb can see them. it was completely obvious to her. >> you mentioned that timnit and meg were after the same goal. there was a different scenario around project maven. can you talk about that? what struck me was the fact that two of the people who got let go in the wake of that were also women. >> it is true. this is the military project that i mentioned earlier with geoff hinton. google started working with the department of defense on image recognition for drone footage. that can have a lot of uses, including surveillance as well as, potentially, autonomous weapons. it is a way to identify
7:24 am
targets, and that was a real concern to people. i think there are a lot of echoes here with the situation involving bias. so many of the people who protested that, many of whom, you are right, are women, are no longer with the company. google is a company where, and this is the way its personality sometimes exerts itself, at least in the early years the employees were encouraged to speak their minds and push back when they wanted to push back. that is one of the reasons you have seen these protests at google bubble over into the public sphere in ways they have not at other companies. but google is also a public company, and it has really pushed back against this. google ultimately did pull out of project maven, but
7:25 am
they ended up pushing out a lot of the protesters. there is a pattern, even though it is a different situation technically. >> an audience question related to government and ai: governments are using ai for surveillance and defense tech. what should government's role be in bringing ai into society in a healthy way? >> these are issues we have to deal with as a society as a whole. that means tech companies, that means individuals, but that means government as well. and we are starting to see government at least wake up to these ideas. even the dod has laid down what it calls ethical guidelines for the use of this type of technology. but we need to keep thinking about this.
7:26 am
i need to keep writing about it, and you do too, and as a society we need to keep watching what is happening. we have seen this from government agencies and companies, saying we are thinking about it. it is easy to say we have a framework in place. but how much teeth does that framework have, how much is it really going to do, and when push comes to shove, is it really going to affect things? these are all questions we have to deal with, and they are only going to become more important. >> how do you audit an ai system? you mentioned earlier some people do not understand what goes in and the answers that come out. >> there is a lot of disagreement on this. part of the problem, when it comes to dealing with the bias problem, is that we do not have
7:27 am
the data we need to audit it. ultimately, at this point, what it is about is really testing the system: seeing where it works and where it does not. that is not always the way internet technology has worked. it has been more about getting it out to the world and patching it after the fact. but the way these systems are built now, the way to really audit them is to test them, see where they go wrong, and fix those flaws. that is a hard thing to do, but that is really what needs to be done nowadays. >> what is the importance of model interpretability, and do you see this helping our needs in the future? >> i think it is an interesting field. this is the idea that we have
7:28 am
got this giant neural network and it trains on all this data: can we develop tools that allow us to understand what it has learned? again, it is really an interesting area of research, and you see it in academia and in industry. a lot of people are working on this, and there are some things that can be learned. but at the same time these neural networks are getting bigger and bigger and taking in more data, and it is becoming harder to understand what they are learning. i do not know how you ever get there. fundamentally, these machines are learning at a scale that we as humans cannot. that is just the reality of it. that is why these machines are powerful: they can learn from more data than we could ever learn from. they can learn these skills to a level that you could never hardcode as an engineer.
7:29 am
so i think that although interesting, that is not something that is going to pay off anytime soon, if ever. we need to realize that. >> you mentioned ai learns the way that we learn. do you want to caveat that? machines still cannot handle something with their hands or walk; they are falling all over themselves all the time. >> you are right. there are some ways machines are superior to us and have been for a long time, and we keep developing new ways they are superior, but these are niche areas. the machine is not good at so many things that we are still good at. so yes, a machine can analyze thousands of photos and pinpoint patterns we could never define on our own, but they are not good at
7:30 am
reasoning, and they are not good at just picking things up in a moment the way even a baby can. it is a great point. >> human intelligence has a lot to do with emotion. what we learn, how fast we acquire that knowledge or forget it, depends on human emotions. so, from an audience question: human intelligence is suffused with emotions such as a drive to win or a sense of compassion. are there machine equivalents to emotion, whether intended or not? and to add a question of my own: would that help our quest for artificial intelligence? >> what i often say is that we tend to see emotion in machines. there are behaviors that
7:31 am
will elicit something in us that reminds us of what we see in humans. we tend to project those emotions onto machines, but machines do not feel them, so to speak, among their many other flaws. there are efforts to kind of re-create that, but that is also hard to do. what i will say, though, is that i think we need to understand the way these machines interact with our own emotions. again, this is not just technology for technology's sake. we need to think about how it affects us emotionally, how it affects us historically. there are so many things to consider when it comes to building these machines.
7:32 am
will it help if they have emotions? i do not know. that could create its own problems, and probably will. >> how has your own relationship with technology changed since you have been covering this? >> what i try to bring to my reporting, and to the book as well, is a healthy skepticism and objectivity. you know, a story i often tell: i mentioned earlier that my father was an engineer at ibm, and one of the things he worked on was the universal product code, the barcode. it is on all your groceries. it is something everybody can relate to, and he had amazing stories about the creation of that technology. it was technically interesting how they did it, but he had this great story about when they deployed it and something happened they did not expect.
7:33 am
they put this in grocery stores, and there were literally protests on the sidewalks because people thought this was the sign of the beast come true, from the bible, the book of revelation. it was a really instructive moment for me. you have on one side this technology that is built in this fascinating way and reaches this point where it can do things you did not expect technically. then you put it out into the world, and it really does things you did not expect: it interacts with people's emotions, like we talked about earlier, and with history and literature. these are all things that we need to consider. as i wrote the book, and as i developed this beat at the times, i had to constantly remind myself of that, and constantly ask: what am i
7:34 am
forgetting about where this could go? does the person who is talking to me now have the full picture? do i need to talk to someone else and see what their perspective is? that is what it is about: constantly widening your net to new people who understand new things about the technology, but also about the world in general and how that technology will affect the world. >> we talked about facebook earlier, and a lot of the technology on that platform and others is about personalization. we become these beings in bubbles where we do not talk to people who do not share the same values or ideas. how is ai amplifying that, and what are the consequences for the future? >> it is another great point, and i am writing a piece about this at the moment, because you do see this in our daily
7:35 am
lives. when you use these services, they are designed to give you what you want, whether it is a social network or a chatbot. i wrote another story during quarantine about these chatbots, which are using the ai systems, the natural language systems we talked about at length, to learn to carry on a conversation. that is a powerful thing at this moment, when we need interaction, and people are starting to use them. the reason people respond is that the system gives them what they want. it is telling them positive things, and that is what people respond to. but in our relationships we need both the positive reinforcement and the negative. we need to be taken out of those bubbles.
how is technology going to do that, whether it's in a social network or a chat bot? how do you get people to use a chat bot that is going to be like a good sibling and point out your flaws? where you need to be better? that's the important part of our lives, whether it's a sibling or a therapist. you don't want someone only telling you the good things. you don't want people in your life only creating that bubble. >> are there people working on that problem, giving you a balanced look as opposed to a personalized look? >> people are at least recognizing the problem, and as i wrote about that chatbot, for instance, i talked to experts in all sorts of
fields. in therapy, in technology. sherry turkle, who is well known in this field, is adamant about this: these sorts of therapy bots are only reinforcing what you already think, only giving you the positives. we need to step back and think about this. luckily there are people like sherry at least calling attention to it. >> in your conversations with people the last year in quarantine, has that experience of being isolated, where maybe technology is part of the solution but not the entire story, has that changed the focus areas that you're talking about with people that are getting excited? what is going to be the effect of this on the field? >> the effect is huge and i think about a lot of other
stories, and we all need to think about this. i think about this as i raise my two daughters, and you see them rely on technology more and more. there are some things that are incredibly positive. i have a 13-year-old daughter who has actually developed some of her relationships to an entire new level through facebook. she will get on face time with her cousins on another coast. that's been a real positive thing, you can see it. but in the long term, this can also drag us down. you can rely on technology as a crutch in a way that maybe we shouldn't, and it can be easy to stay at home and do zoom like this. it's easier to be a reporter, right, to get on zoom. but the better stories, the better
questions, you find those things when you get out into the world. and it's not just journalists, it's anyone else. there are those things i think we all need. and it might be easier to say, if you're a company, things are going well, just keep it that way. maybe that's not the best thing. i'm hoping as much as possible we can get back to a world where i am meeting people on the street, where others are going back to the office, for all these very human reasons that we talked about. >> how did you come to choose the main characters in the book? >> what i realized is there was this common thread with jeff hinton. and that there was this tiny
circle around him. one thing i was fascinated by was this idea that at various stages a neural network was so strange, even in the ai field, that there were not a lot of people who believed in it. so that circle is tiny, and that makes for a really good story on two levels. one, the fundamental story historically is someone who believes in something even as it faces skepticism, but you also have this tiny group whose paths would cross each other in these really surprising ways. there are these moments when i was doing the reporting when i realized that demis hassabis, who created deepmind, came out of this program at university college london called the gatsby unit, which is this blend of
neuroscience and ai. that unit was founded by jeff hinton. there were all these moments i discovered in my reporting where jeff was there. we talked about the moment when he and two students showed image recognition could work through this idea in 2012. jeff, two years earlier, had been instrumental in making this idea work with speech recognition. he was at microsoft, and there's this great story about him traveling by train because he doesn't sit down. and on top of all that, this tiny group suddenly became enormously valuable when the idea started to work, and you see this in the opening of the book when jeff literally
auctions his services off to the highest bidder. there are all these companies bidding, and it set the price for the talent. you have this tiny group of people and each of them, they are people. they're interesting in their own particular way. and then they're suddenly in demand and they move into these companies, so that became the center of the book: this tiny group who then moves into industry. and that's going to leave out a lot of interesting and important people. but that's part of any story that you do, in any book that you write. >> looking back at the history we just discussed, jeff certainly is central because of the neural network. i had a chance to sit in at one of their workshops at the big neurips conference.
and one of the things that i noticed at the time, this was 2012, 2013, was that it was all guys. and the one woman who was part of the group actually wasn't there, i don't know why. how does that trickle down? >> talk about the dudes problem. you talk about that, and we were working together when you wrote the story, and you can see that. you can see that historically. all the people that we're talking about, the people who are instrumental, the people in that room when you were visiting that meeting, are the people who built the technologies. it's a tiny group and they built it, so that is the fundamental problem that is there as the technology starts to work. then you have people like
timnit gebru. there's a scene in the book which is similar to your experience. she walks into the neurips conference and sees hundreds of people who are there for a lecture, and she realizes that there's no one who looks like her. out of hundreds of people she counted five black people, and they were all men, all of whom she knew. and this is a conference not in the us, in barcelona. this is a global community that we're talking about and this issue is in many ways global. >> i remember a story you assigned me when we were working together at wired, to do an interview with one of the first employees at facebook. she said that she came to technology with the idea that
it was a meritocracy and found out that it wasn't. in fact, it's still playing out today. >> it's absolutely playing out. you see it time and again with the situation at google and timnit gebru. what i think is positive, though, is that people are more willing to call it out. what you and i saw firsthand is it was often so hard to convince people to call it out. there were consequences. there still are for people willing to say the obvious. so on some level we've seen some progress, because people are willing to stand up and say we need to think about this. and that is so hard to do. but we're starting to see that.
what we need to do is listen when it happens, and that can be hard too. you have to be willing to put yourself in uncomfortable situations. me as a reporter and as a white man, i need to listen in those moments, when people are critical of, say, the way i'm doing things or the way i'm building a story. that's one thing i've learned: you have a certain way of doing things and you have to be willing to be challenged on that. even if there are really good reasons why you do things and why you build technology or build a story, you have to be willing to step back over and over again and say, is there something else i need to think about? >> how do you want people to
remember your book? >> what i wanted to do, and there came a moment in the middle of it when i thought this is a bad idea, this is never going to happen, is build a book that told the definitive story of this moment in time. so much has happened on so many different levels, and that means roping in so many of the things we talked about: the development of the idea, and then all the areas where it started to work, and that's what's fascinating. this single idea is driving the change in area after area, and i wanted it, through the people, to read like a novel. show all that, but then show all these questions it has raised. the bias questions, the autonomous weapons questions. the question we haven't talked about is the disinformation question. this is the other huge
problem that we are going to face: these systems can generate images, videos, blog posts, tweets, as well as conversations that look like the real thing. if we think we have a disinformation problem now, wait until machines perfect it. once we have machines that can do that 100 percent of the time, we have to change the way we look at the world. then there are all the geopolitical issues. this is a global thing. all the talent was outside the us. they're all immigrants. the us companies jumped on it, but by the way, in that scene at the beginning of the book where jeff hinton auctions himself off to the highest bidder, there's a chinese player right then. they were right there. and there are all these geopolitical issues to consider.
my aim was to rope all that into one book and kind of level set us for everything to come. this is what has happened. these are the questions that we're facing. including that big agi question, where people say that's what they're working on. should we think about that? that's what i wanted to do. so hopefully it's a good read for people who want a human story, but hopefully i can layer the bigger ideas on top of all that. >> can we talk about outside of the us? it's been very us focused here. how has it been playing out in china, in asia, and europe? can you talk about that? >> one of the things i'm fascinated by, and maybe this is surprising to some people: jeff hinton was an academic.
he was a professor at the university of toronto, and when he moved into google, one of the stipulations was he wanted to keep his professorship and he wanted to keep behaving like an academic, and you saw this with one of his old colleagues who followed him. yann lecun talks about this a lot, and remarkably, the sensibilities of these few individuals changed the course of these companies. what you saw at google and facebook and others is that they publish all their latest research. so the latest ideas get shared with everyone. what that means is they're available to everyone, whether you're in london or you're in china. the latest research is freely available. the currency becomes who has
the data to train these systems, who has the processing power to train the systems, and who has the talent. in some ways, a lot of people think china has the advantage there. they have a huge population, and that means you can create more data, and they have more talented ai researchers who can build the systems, and that is really what's important. so you have this new landscape that you have to think about differently. this is not a 1950s cold war landscape. we need to not think about this in terms of export controls or sealing off our borders to certain immigrants. it's not that kind of world. we as the us are relying on immigrant talent. we always
have, and that includes chinese talent. if we seal our borders to chinese ai researchers, we are shooting ourselves in the foot. and if we have export controls so we're not exporting anything to china, what is that going to do? they have access to all the research anyway, so we need to think about this world differently. certainly we worry about espionage from other countries. there are concerns to take seriously when it comes to military applications and the like. but this is not the world of absolutes that people might have thought our world was in the past. >> going back to the misinformation you mentioned, before it played out here it played out elsewhere.
in the philippines, for example. are things happening outside our borders that are powered by ai that we should keep an eye on, to see where we are going in the future? >> absolutely. a prime example is in china, where this technology, neural networks which can identify faces in photos among so many other things, is being used to target an ethnic minority. that's the type of thing that really raises concerns in this area. this same type of technology is being deployed here in the states, and luckily we're starting to raise questions about it. china is this extreme example, but as you indicate, as we see this play out in extreme ways, as we see it play out outside our borders, we need
to think about how we are going to deploy these things, how we are already starting to. and you see this in my book, where there starts to be a rollout of face recognition technology, and really because of people like timnit these companies start to wake up to this and start to respond. and at least say we need to think about legislation. as usual, these companies only go so far, but at least they're publicly recognizing these types of issues. i think you're exactly right. we live in a world where this technology is developed everywhere and deployed everywhere. we can't think of what we're doing solely within our borders. >> we are also doing that here. a company that doesn't come up in the book is palantir, but their technology has been used for years to track undocumented immigrants.
>> that's exactly right, and this is an example of, you might say, technology that is simpler, if not the ai that we're talking about, but it's often about the way these things are deployed. and then as the technology becomes more powerful, those very issues become bigger and bigger. >> what can private citizens do if they're worried about ai, either in their local police department or their government? >> one of the lessons here, and i said this earlier, is these are problems we all need to deal with. the companies are not going to deal with them on their own, we know that, and governments are not going to deal with it on their own either. their interests often blend with the interests of the companies as well. and particularly if we have these situations where the corporations are pushing people out, that becomes our
problem. we need to individually speak up about this. journalists need to write about it, but also people need to call attention to it. >> often the problems that ai causes affect groups that are disenfranchised or marginalized already. and so the problems that it causes may seem like they are at arm's length, far away from everyone else. why should people like you and me, who are privileged, care about them? >> again, we all have to step outside of our bubbles and realize it's easy in silicon valley to just forget everything else that is going on. it's easy to only look at
what is happening in your bubble. we need to remember that these are not technologies just for the privileged. these are technologies creeping into the daily lives of everyone, sometimes in unexpected ways. we absolutely have to keep an eye on that. >> one of the things i appreciated about the book is that in a way, it wasn't just a history of this community but also a story of us. >> what i often say is that any good story is about people. that includes technology. and technology writing doesn't always work that way. but fundamentally what i wanted to do was tell a story about people, and then if i could do that, build these
bigger ideas on top of that. >> if you were going to write a sequel in 10 years, what do you think you'd be writing about? is it going to be neural networks again, or is there another technology and a new community of eccentrics that are going to push us forward even more? >> my inclination is always to do something completely different, so i may surprise you, but one of the things i've been thinking about is quantum computing. it's another fascinating area which is coming to the fore. we will see there, but this story, the ai story, is really only just beginning. and we're still only just understanding how these systems work and how they are being deployed.
i think there's so much left to cover, and i'll certainly keep covering it in the times. >> there's a lot of highflying stories in your book, quite literally. can we talk about that particular story, and we can move on to your shared word after that? >> that's a good place to end because we go back to jeff hinton. you learn in the first part of the book that he has this back problem. he literally does not sit down. as a teenager he was lifting a space heater for his mother and he slipped a disc, so by his late 50s the disc would slip so often it was hard for him to function. there are all these stories from his students where he's laid out on his desk while they're trying to defend their thesis, or he's lying by the wall.
what that means is he does not drive, and he does not fly, because the commercial airlines make you sit during takeoff and landing. and this is another moment i was floored by as it came up in my research: jeff had moved to google, and google was thinking about acquiring deepmind, and they wanted jeff to go to london to vet the company and help them decide if they were going to spend what ends up being $650 million on this company, and he says i don't fly. alan eustace, the head of engineering, devises this incredible contraption, which was inspired by his own feats as a skydiver, like an extreme skydiver, that basically strapped jeff in place on this makeshift bed in a gulfstream jet, and this is how jeff made it to london and ended up walking into the
office. >> it's pretty wild. who told you that story? >> one of the great things and very hard things about our job is that you do it piece by piece. you get a little hint that this has happened and you go to the next person and you get a little bit more, and once you've got enough, you can go to the source and say, i got this much, you might as well tell me the rest. that's the way the book often works. >> to follow up on deepmind, there's the 44 million that google paid for jeff and his students. that seems like a crazy number at the time, but in retrospect it's not even 2 years' salary.
the 44 million-dollar figure that jeff's company was acquired for, that was the hardest thing to pinpoint in the book. i was worried i wouldn't be able to back it up, but that's how much they paid. and you're right, it is a bargain price. $44 million for three people. individually that's a whole lot of money, but then the prices would explode, and it's basically supply and demand. there was this tiny group of people who specialized in this field, and we can argue about whether or not the companies behaved rationally, but they were all intent on jumping into this
area, and that meant the prices went sky high. everyone wanted their own, and you see it, most interestingly, with microsoft: if facebook had one and google had one, they wanted their own. there's this incredible guy we haven't talked about, one of the top executives at microsoft, going to montréal to try to get their own. you just had this frenzy. when that happened there are these moments, and they happen in other areas too, where the price suddenly skyrockets for the talent. >> is that still on the up? >> well -- >> we may have engineers looking for a job on the call. >> it's still a field where you can command a lot of money, because it leads into these other areas: self driving cars, for
instance, self flying drones, which are becoming big. it's definitely an area that people who are looking for a career should look at. these are skills that are in demand, and it is a change in the way we think about technology and the way technology is built. >> anything that you left out of the book because there wasn't space for it that you wish had gone in? >> i often say that everything is in there. and it pretty much is. unfortunately there is one great anecdote i just cannot share; sometimes that's the way it works. one anecdote i wish was in there, which i think is interesting because it parallels this whole story, involves fei-fei li, a professor at stanford. she's in
the book. but one anecdote that people don't realize, which i think is really powerful: in order for neural networks to work, you needed the data, and you needed the processing power. jeff hinton and his students, who were bought for $44 million, made their breakthrough on a contest called imagenet. and imagenet is a collection of photos that allowed them to build a system that could recognize everything in those photos, and that was her brainchild. i was talking to her and ended up talking to her advisor as well, and there was this moment between them where she said, i want to do imagenet, basically bet her career on this idea, and he told her that was not the way to go, that she should not do it, that wasn't the thing to
bet her career on, and she did it anyway. that's another piece that had to be in place for this kind of thing to work. >> why isn't it in the book? it's such a powerful thing that underlies all the men's success. >> like i said, it's complicated, but i wish it was in there. sometimes you make choices for narrative and for flow, so you make these hard decisions. but again, we need to step back and think about the decisions we have made, whoever we are, and rethink them and say, did i make a mistake there? does that need to go in the next one? >> a question from the audience here: how quickly did you write the book? what have you learned from
writing? >> i wrote it over the course of about two and a half, three years. i made the mistake of agreeing to write a book and to join the "new york times" in the same week. never ever do that, because that's a mistake. you end up making your life far too complicated. what i've learned is next time i'm taking a book leave and i'm going to concentrate on the book, rather than trying to do it in the mornings and late at night and standing in line at the grocery store while thumbing through google docs on my phone. that's not the way to go. >> so you learned more about actually taking time off to write your project rather than doing it as part of your full-time job. >> i mean, in writing a book you
learn more about your field. i think you also learn more about the ways we tell stories and the way you give people a real idea of what's going on. it's a different skill from daily reporting, and i have learned a lot about the arc of things. like i said, you also make choices that you regret, and you try to learn from that in the future. >> that's a perfect segue into the one word initiative. you were asked to write down a single word that gives a young person some advice. i know it's a very personal word to you. can you show us what that word
is and tell the story behind it? >> absolutely. this is going to go back to my father again, who was an engineer at ibm but an amateur philosopher. my word is truth, and there are some symbols below which i will explain. my father loved this philosopher named mortimer j. adler, who wrote this book called six great ideas, about many of the ideas you should live your life by. my father always talked about it, and he would echo this book, that there were three ideas that you should strive for in your daily life: truth, goodness and beauty. he believed this to the point where he put symbols for each of these ideas on keepsake boxes made of oak for all his grandchildren. he firmly believed this was a
way you should live, and truth was most important. he would represent truth with this symbol that was meant to represent the pythagorean theorem. beauty was the rose. truth was so important because it informed what we understood to be goodness and beauty. to understand those concepts you need to understand what is truth. truth is something that you do have to struggle with every day. i certainly struggle with it as a reporter. it's about constantly talking to new people and reevaluating what you've heard the previous day and taking in what you're hearing from everyone, and it is sitting down at the end of the day and thinking, what do i believe is true, given everything i've learned, given everyone i talked to. it's a personal decision, but it's informed by everyone you
speak to on a daily basis. >> that's great. you dedicate the book to your dad. >> absolutely. he died a few years ago, so what i often say is that the one person who would have enjoyed the book most isn't around to read it. that's the one thing that makes me sad, but otherwise this embodies the book in many ways, a lot of things he stood for. he was an engineer but he believed in those bigger ideas. he often raised those concerns that we talked about, about how technology can affect our world in ways that we might not expect. >> i think we will leave it at that. i don't want to run us over time here. i'm going to turn it back to dan'l, and thank you so much for inviting me to do this. it's been a pleasure to talk to you about the book.
i hope it's very successful. >> thank you for doing this, a lot of fun. >> dan'l, i'll turn it over to you. >> thank you so much. this has been a wonderful conversation, absolutely on point as we think about how technology is redefining what it means to be human and changing how we interact with one another and, frankly, the planet. chm for the last four decades has established itself as the premier institution and trusted source for preserving and communicating the history of computing and its impact on the human experience. and while we serve a diverse array of audiences, the key point here is these live events are a vital part of our public education and conversations that help all of us become better citizens in the modern world where technology is ever present. these conversations are preserved and available as full-length video on our youtube
channel, and key takeaways will be available on our website, and they are preserved as a permanent part of our collection for publication, research, exhibits and ongoing education. in quick summary, it's a beautiful thing to be able to think about history, because everything has it. as one of our previous guests pointed out, quoting a noted historian, it's an imperfect but indispensable guide to the future and the only mirror and measuring rod we have for the present. so thank you cade and daniela. it was a great conversation about the way the world is today, and we will be back in a few years to revisit it. thanks so much. >> here's a look at some of the best-selling nonfiction books according to quail ridge books in raleigh, north carolina.
>> some of these authors have appeared on booktv and you can watch their programs anytime at booktv.org.
