
tv   Cade Metz Genius Makers  CSPAN  December 15, 2021 2:04am-3:33am EST

2:04 am
c-span2. >> today we continue our focus on ai innovation with a fresh exploration of ai history, as well as the choices that we face as digital citizens.
2:05 am
before ai began to change your world, for better or for worse, these researchers spent decades trying to build their neural networks, often in the face of enormous skepticism. from talking digital assistants to self driving cars and automated healthcare, they were pulled into a world they did not expect, along with the rest of us, by google, facebook and the world's other tech giants. the reporter uncovers this tale. i think it is an often dramatic telling of this history, with some of the untold stories of the key people, the communities and the companies shaping the world of ai. the book weighs national interest, shareholder value and the pursuit of tech
2:06 am
innovation, how these decisions are made and who is benefiting. it is my pleasure to welcome cade metz. he brings rich experience as a senior staff writer and as the u.s. editor of the register. he is now based in the san francisco bay area, where he is a technology correspondent with the new york times and covers ai, driverless cars and other areas. here are his five numbers: nine years covering ai. the $44 million that google paid. 37. eighty photos of a black woman tagged as a gorilla by google. and one book telling the whole
2:07 am
story. welcome. the moderator for today's program is an editor for the wall street journal who has written about ai across multiple industries. she has written for a few other publications as well. so glad you're here with us. >> so nice to see you again. it has been 16, 18 months since we last hung out. it is nice to reconnect.
2:08 am
your book is fundamentally about people, people who were toiling in obscurity, maligned and mocked for some of their ideas. what captivated you the most about this cast of characters? >> there were two moments i think that really inspired the book. the first was at the four seasons hotel in seoul, watching a machine built by deepmind, a lab based in london that had been bought by google, play this game of go. go is often called the eastern version of chess. most people watching, even the ai
2:09 am
experts. most believed a machine that could beat the best players was still decades away. but this machine was beating one of the top players, at the four seasons hotel. and there was this moment when the people who built this machine, who had spent years cultivating the ideas behind it and building it, could not understand what the machine was doing. they were confused and caught unaware by their own creation. these individual people, including demis hassabis, who as the leader of this lab became a focus of this book. i knew that demis would be a character. that was before i met geoff hinton, who was a
2:10 am
generation older than demis, and he became the central thread. geoff, in his own way, is a fascinating character. he had worked on many of the same ideas as demis and demis's colleagues for decades. for anyone who knows geoff, he is a fascinating, engaging, strange character who has endured serious hardship over the years, among other things. i thought to myself, if i can just get geoff onto the page, maybe this book will work. >> i actually wanted to ask you about a moment with geoff. he is kind of the connector of this group. about halfway through the chapter that deals with the uses of ai, geoff lets out a
2:11 am
small sigh. he said early diagnosis is not a trivial problem. we can do better, so why not let machines help us? this was really important to him because of his life experience with pancreatic cancer. that moment comes back at the end of the book, when you see him immersed in this. he has also, at that moment -- >> i love that you pinpoint those moments. you see the emotion that geoff shows, that side that he lets out. there is the moment when he receives the award. he does not talk about himself. he talks about his wife. those are the kinds of moments i try to capture.
2:12 am
it shows the personal struggle that geoff faced to bring these ideas to the fore. it is just part of his personal struggle, but certainly the most powerful. at the end, when he talks about his wife. this is someone who had two wives die of cancer while experiencing his own physical hardship, as you learn in the first sentence of the book. he is someone who literally does not sit down because of a back problem. this plays into the story as well. with this extreme back problem, he has to make these pilgrimages across north america, and sometimes across the atlantic, to realize these ideas. those are the types of moments that i feel show not only what
2:13 am
he faces, but what people like him often face when they are trying to realize their work. there are technical problems, and there are personal obstacles that have to be overcome as well. >> i felt the humanity. some sections of the book just jump off the page. really loving, but sometimes a struggle. also jokes, almost as a defense mechanism. others were a little more aggressive or adamant about their ideas, in ways that captured a little bit of their insecurity. it reminds everybody there is a humanity there. you talked to other people for the book, too.
2:14 am
how did their lives, dreams, opportunities and obstacles shape the story? >> what i love is that it was different for each person. we are talking about essentially a single idea. it dates back to the 50s. it is about all of these different people trying to push the idea forward, and the way they dealt with it was so different, because they are humans. we are all different. the personality comes out in so many different ways. you are right, sometimes it is humor. geoff is an incredibly funny person as well. i like how that humor would attract those around him. as he is struggling to get this idea out, he needs the help of others.
2:15 am
the humor is not only entertaining, but it's a way of convincing people to work on this project. as i got to know geoff, and more importantly, as i interviewed his students who knew him well, you saw him as this magnet, often drawing people in with that humor and helping him realize this idea. other people pushed this idea forward in other ways. some people are so adamant about their idea, and so upset by the obstacles, that they really lash out. you see that as well. that can work in some ways and it can backfire in others. what i wanted to do was show all
2:16 am
of that. also how, you know, as these folks pushed the idea forward, the technology would behave in ways they did not expect, and surprise even them. they reached these moments where they did not necessarily know what to do. that is part of it as well. >> what surprised people like that about the way their technology caught on? >> the great example, and it is almost inevitable, is geoff. born in london, he eventually made his way to the united states and started to explore this idea, first on the west coast and then as a professor at carnegie mellon in
2:17 am
pittsburgh. there is this moment in the mid-80s when geoff and his wife at the time realize that he cannot do this ai work without taking money from ronald reagan's defense department. that is not something that he wants to do. this is at the height of the iran-contra affair. he and his wife have very firm beliefs on this. they do not want, you know, this work to be used inside the military. so they actually leave the country. he believes in this stance to the point where he goes to canada and sets up shop at the university of toronto as a professor. it would have real implications for the whole field. years on, when this idea finally started to work, most of the talent was centered around geoff and others in canada. it was not in the u.s. but as that idea starts to work and geoff is sucked into google,
2:18 am
in short order, google starts to work with the defense department. there are protests at the company. some people believe this is absolutely the right thing for google to do. there are others who are really upset by it, saying, i did not expect to be working for a defense contractor. there is a moment where geoff himself struggles with this. he was against it, but he was not sure how much he should speak out. he ended up lobbying one of the founders to push back on this, but he was not as public with his concerns as some others. there was a moment where an employee criticizes him for that. again, you feel the humanity of the situation. we can relate on some level to having our own beliefs on one hand and the motivations of the
2:19 am
company we work for on the other. how do you balance those? it is a hard thing. >> i want to go back to the topic of the military. there is a long history there. can you talk a little bit about that? it goes way back to the inception of silicon valley. >> it really does. i think this is important for silicon valley to remember. we are at this moment where we often see news stories saying that silicon valley is opposed to working with the military. there is certainly a portion of the valley that believes that. but the valley was built on military money. google was built in part on defense department funding. the internet came out of a dod project. one of the founders of hp worked in nixon's cabinet.
2:20 am
there is this mixed history. silicon valley is largely a liberal place. there have been times in recent years when there have been protests against that kind of work. but that is only one side of the equation. i think it is a good way of thinking about this technology that i write about, not only when it comes to military uses, but all sorts of uses. there are dual uses for this technology. it can be used for good, and it can be used in many ways that you may not expect. a lot of it is about point of view. people are struggling to figure out what is right and what is wrong. it is not always black-and-white. >> that is really important here. i think going back to that first moment that you mentioned earlier in our conversation, in korea,
2:21 am
where there was this national excitement. go is like chess and some of the other games on which ai is trained. they are wargames. they are violent games. >> it is interesting. they are the games of choice because they are hard. it is as simple as that. they are chosen for technical reasons, but they are also chosen for historical reasons. it is about the conquest. that is a real thing. you can see this in the book. demis himself is fundamentally a game player. games are about competition. they are metaphors, often, for war.
2:22 am
the intensity of the ambition is palpable. you could feel in korea that he wanted to win. that is really what drives him, as well as the technological aspects of this. the other thing is that games are something we all relate to. i think there is a reason that people tuned in to see the machine winning that event. all of us play games, as children and beyond. we can understand on some level the way games work. there is a winner and a loser. the idea of a machine beating us is not only interesting, it is scary. you talk about the excitement in korea. it was palpable. what i often say, because it is true, is that it was one of the most amazing
2:23 am
weeks i've ever experienced. you could feel the excitement of an entire country focused on this. you could also feel the sadness when the korean player, a very human player, was getting beat. you could feel the sadness and the fear and the concern. it really brought out those emotions. that is why it was such an inflection point. there is a dark side to this as well as the light. >> one of the things that i found interesting about the people's stories in the book is this quest to upgrade humanity, almost like an outdated pc. where does that come from? >> my father was an engineer.
2:24 am
we talked about this a lot. it is very easy, when you are focused on the technology, to see it as somehow separate from humanity. that is an attitude you see in silicon valley often: the technology is sort of boxed off from everything else happening. when you do that, it is easy to see only the positive things it will bring. you do not see all of the other consequences and effects. i think what we really need to do, and what i wanted to do with the second part of the book, is show that any technology is bigger than itself. anyone who has lived through the past four years can recognize
2:25 am
that. and these are relatively simple technologies. facebook is not a complicated technology. the technologies that i am writing about in the book are far more complex, in part because, and this gets back to your first question and the way i answered it, we do not really understand how these things operate in some cases. they literally learn skills on their own by analyzing data, more data than we can wrap our heads around. as those technologies get more powerful and more pervasive, their effect on our world will be greater than the relatively simple technology that we use today. >> based on what you learned writing this book, do you think machines will have greater-than-human general intelligence within the next 25
2:26 am
years? please explain what you think about that. >> the book goes into this. what i do want to do in the book is make a clear distinction between the technologies of today and this idea that we will have a machine that can do anything your brain can do. sometimes it is just called agi, artificial general intelligence. that is something that we do not know how to get to. we have two labs now, deepmind and openai in san francisco, that say this is their stated mission. the people there, including demis, don't know how to get there. it is really hard to say when we
2:27 am
might get there. this is certainly not a near-term thing. today we are talking about a machine that can recognize what you say, and recognize faces and other objects in photos. that image recognition can help us build self driving cars, other forms of robotics that respond to some situations, and tools for healthcare, an area that you know well. but that is different from a machine that can really reason the way we as humans can. i don't like to make a call on when that will happen, because it is impossible to know. it is in the future. people will argue about when it will happen. it is not going to happen soon.
2:28 am
>> the deepmind lab. some of the people there actually went to grad school with me. i think that they would be among the first to tell you that we do not understand intelligence, real intelligence. how do you re-create something that you do not understand? >> this is why i always liked working with you. this is the key thing to understand, and it may not be obvious when you read stories about ai. we don't understand how the brain works. so re-creating the brain, almost from the get-go, is a task that is harder than you may think. the brain is such a mysterious thing. what you do see is that it provides a certain inspiration
2:29 am
to the field. geoff was inspired as a student at cambridge by the idea that you could re-create the brain, and he spent his career nurturing this idea of a neural network. he is driven by this notion that he can take the way the brain works and apply it to machines. demis is now taking it to another level. he has a team of neuroscientists who study the brain and the way that it works, and how that may be re-created. some people see this as a circle. as you better understand the way the brain works, that can help you build machines that work like it. and as the machines improve and you figure out ways they can mimic human behavior, that can
2:30 am
help you better understand the brain. that works sometimes, but not always. >> you described facebook as relatively simple, but modern recommendation systems in social media also incorporate an enormous amount of ai. can we talk about the impact of that ai? >> a problem with the ai field is that you have to define your terms. the term ai is applied to everything. what i meant by simpler is that the algorithms choosing what is in your social media feed are relatively simple compared to these neural network style systems. the reason the neural networks are more
2:31 am
complicated is that they learn their task in this really intense way. the example i use to sort of set up the conversation is this: you take thousands of cat photos and you feed them into this neural network, which is a mathematical system. it analyzes those photos, looking for patterns, and it defines what a cat looks like on its own. that is one thing with cat photos. now it is medical images. the systems learn the same way when it comes to medical images. but we do not see the flaws in those systems. we do not know, one hundred percent, the mistakes they are making.
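the learning process described here, patterns inferred from labeled examples rather than hand-written rules, can be sketched in a few lines of python. this is a toy single-neuron model on made-up four-number "photos", nothing like the real systems discussed in the book; every value below is invented for illustration.

```python
# toy "images": 4-pixel brightness vectors; label 1 = "cat", 0 = "not cat".
# the point: we never write a rule for what a cat looks like --
# the model infers a separating pattern from labeled examples alone.
data = [
    ([0.9, 0.8, 0.1, 0.2], 1),
    ([0.8, 0.9, 0.2, 0.1], 1),
    ([0.1, 0.2, 0.9, 0.8], 0),
    ([0.2, 0.1, 0.8, 0.9], 0),
]

weights = [0.0] * 4
bias = 0.0
lr = 0.5  # learning rate

def predict(x):
    # weighted sum of pixels, thresholded at zero
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

# repeatedly nudge weights toward correct answers (perceptron rule)
for _ in range(20):
    for x, label in data:
        err = label - predict(x)
        for i in range(4):
            weights[i] += lr * err * x[i]
        bias += lr * err

print([predict(x) for x, _ in data])  # -> [1, 1, 0, 0]
```

the same structure also shows the flaw cade describes: the learned weights are just numbers, so nothing in the model tells you what it actually keyed on, or where it will fail on inputs unlike its training data.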
2:32 am
these systems learn from everything we have posted to the internet, and all sorts of other stuff. we all know that the internet can be biased. it can be biased against people of color. there is hate speech, needless to say, on the internet. these giant systems, which are learning natural language now, are starting to be used in chatbots. they are learning those biases and those flaws, and, by the way, other things that we may not see. even the creators of this technology may not see them, because of the way the systems are built. >> on the bias side, it may be escalating, or reproducing bias at scale. about two thirds of the way into
2:33 am
the book, we come to the chapters that deal with these very issues, starting from the interview meg gave to bloomberg, if i'm not mistaken, warning about the problem. can you talk about that? >> absolutely. that quote is interesting on many levels, one of which is that it came so early. another scene in that chapter, and this was alluded to at the top of this call, is that moment when google identifies photos posted by a software engineer in brooklyn as gorillas. that is 2015. we are still struggling to deal with that issue, years after meg said that in the pages of bloomberg, years after that incident, and even though you have people like
2:34 am
meg, and so many others, who have not only noticed this problem but called attention to it. the fundamental thing to realize is that it is endemic to the technology. the technology has to be built in this way if it is going to work the way these large companies want it to. it requires enormous amounts of data. what that means is you cannot just remove those flaws. you can try to start over, but how do you get data that does not have those flaws in it? it is such a difficult thing. it exemplifies this moment we are going through now, not only at google and microsoft, but in the technology industry as a whole, struggling to deal with that conundrum. >> a couple years ago i had a conversation about this, and she mentioned that we have this
2:35 am
bias to think of machines as infallible. how can the machine make a mistake? i think, as you point out, that is really thoughtful, because in a way the machines are just an extension of us, inheriting our own biases, all of ours. are people starting to think more about that since that 2015 quote? before that, people were probably not speaking about it as much. >> they are thinking about it more, but there are these other forces. it is so interesting to me. you do have this moment in my book where two researchers really call attention to this. both were hired by google. they create this ethical ai team
2:36 am
at google, which is designed to fight this problem. recently, since my book was put to bed, they have both been ousted from google. so you have, on the one side, people calling attention to this and a lot of people taking notice, inside the companies as well as out, and you have these other corporate forces, in various ways, pushing against it. these companies have their own aims. they are complicated, driven by the profit motive among other things. that often comes into conflict with these other efforts. >> i find that so interesting. their ideas are powerful, insightful, important, much like the neural networks were before they were adopted by
2:37 am
amazon and google. their stories are aligned. >> you are exactly right. you see on twitter, for instance, on a daily basis, people accusing them of being activists. you know, you could make, and you see this in the book, the same criticism of those neural network pioneers who were sort of out in the wilderness saying this technology will work when most of the industry thought that it would not. you know, you could deride them as activists too. these are all scientists. i think you make a great analogy. in the face of pushback from all of these people around her, she is saying, you need to pay attention to this.
2:38 am
thankfully, she is not alone. >> you know, change is hard. the other thing is, the companies are designed in certain ways. that is another thing that you see in the book. these companies develop these individual personalities, almost. they respond to situations differently, in particular ways that are formed by their history. they promote themselves and tell the world that what they are doing is positive. even if on one level they acknowledge the problems of the world, they don't want those problems to be pinned on them. that is the heart of the clash at google, when researchers tried to publish a paper that called attention to these
2:39 am
problems. that is part of the issue there. again, this is not just a google problem. this is something that all of these companies will have to deal with. >> this is an audience question. [inaudible] >> it is very, very hard. it is endemic to the technology. these natural language systems literally spend months analyzing text from the internet, thousands of books and thousands of articles and blog posts and everything. it works because of volume. it works because you want to throw as much at it as you can. you cannot just weed out all the stuff that may be problematic.
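one stopgap the industry reaches for instead is a surface filter over a model's output. here is a deliberately crude sketch in python, with an invented blocklist term and invented example strings, showing why such patches catch exact matches while leaving whatever the model actually learned untouched.

```python
# a toy post-hoc "band-aid" of the kind bolted onto trained systems:
# a blocklist scan over generated text. the blocklist term and the
# example sentences are invented for illustration.
BLOCKLIST = {"badword"}

def filter_output(text):
    # redact only exact, case-insensitive blocklist hits
    return " ".join(
        "[redacted]" if word.lower() in BLOCKLIST else word
        for word in text.split()
    )

print(filter_output("this reply contains badword twice: badword"))
# exact hits are redacted...
print(filter_output("this reply contains b4dword instead"))
# ...but a trivial variant slips straight through; nothing the model
# learned from its training data has actually changed
```

the sketch makes the limitation concrete: the filter patches symptoms one string at a time, which is why, as cade says, it is "not always ideal".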
2:40 am
the industry as a whole is still struggling with how to deal with that. right now you have to put band-aids on it, put filters on it. that is not always ideal. people have talked about whether we can develop synthetic data, so to speak. that is still an open question as well. technically, it is such a hard problem. >> there is a scene in the book where a woman is trying to understand why the data does not work. can you walk us through that scene? >> this is at clarifai, a startup in new york. they are trying to build an
2:41 am
image recognition system and a content moderation system. she herself is a black woman from ottawa. to her, the problem is obvious. the company had taken all of these stock photos, which had been floating around the internet for ages, and used them to train the system. the majority of the photos were of white men. so when she sees that, the problem jumps out. it is not necessarily obvious to others. that is part of what, you know, we are dealing with here. the question of diversity has obviously been a problem for a while, and there is an added level to that problem here. me, as a white man, i have a
2:42 am
certain perspective on the world. that will inform the data that i choose. that is one of the reasons we need a diverse population working on this: people who can see these issues the way she could see them. they were completely obvious to her. >> you mentioned that project with the defense department. there was a different scenario there. can you talk about that? two other people got let go in the wake of that. >> it is true. this is the project that i mentioned earlier, with geoff. google started working with the department of defense on image recognition for drone footage. that can lead to a lot of
2:43 am
uses, including surveillance and, eventually, autonomous weapons. that was a real concern to people. i think there are a lot of echoes here with the situation involving bias. so many of the people who protested that, many of whom, you are right, were women, are no longer with the company. this is the way the company personalities sometimes exert themselves. at google, at least in the early years, employees were encouraged to speak their minds, to push back when they wanted to push back. that is one of the reasons you see these things bubble over into the public sphere there in a way they have not at other companies. people have really pushed back about this.
2:44 am
and the company, you know, ended up pushing out a lot of people who protested. there is a pattern here in this respect, even though it is a very different situation technically. >> an audience question here related to government and ai: governments are using ai for surveillance and defense tactics. what should be the government's role? >> yeah, i do think these are issues that we have to deal with as a society as a whole. that means technology companies, maybe it means individuals, but it means government as well. we are starting to see government at least wake up to these ideas. even the dod has laid out ethical guidelines for the use
2:45 am
of this technology. we need to keep thinking about this. i need to keep writing about it, and you do, too. keep looking for what is really happening. it's easy for government agencies to say, we are thinking about it, we have a framework in place. but how much will it really do? when push comes to shove, is it really going to affect things? these are all questions that we have to deal with, and they will only become more important. >> how do you audit an ai system when, as you mentioned earlier, we don't fully understand what goes in and the answers that come out? >> there is a lot of disagreement on this.
2:46 am
part of the problem is that we do not have the data we need to really audit these systems. ultimately, at this point, what it is about is really testing the system: see where it works and where it does not. that is not always the way internet technology has worked. it has been more about getting it out into the world and patching it after the fact. with the way these systems are built now, you have to really test them, see where they go wrong, and fix those flaws. that is a hard thing to do, but that is really what needs to be done nowadays. >> what is the importance of understanding what these systems learn, and do you see this growing in the future? >> i think it is interesting. we have this giant network.
2:47 am
it trains on all of this internet data. can we develop tools that allow us to really understand what is learned? that is a really interesting area of research, and you see a lot of people in the industry working on it. at the same time, these networks are getting bigger and bigger and bigger. they are taking in more and more data, and it is harder to understand what it is that they are learning. fundamentally, these machines are learning at a scale that we as humans cannot. that is just the reality of it. these machines are powerful. they can learn from more data than we could ever learn from. they learn skills to a level where you could never
2:48 am
hardcode all of that behavior as an engineer. i think that although this research is interesting, it is not something that will pay off any time soon, if ever. we really need to realize that. >> you mentioned the way we learn. a baby will learn words, or how to handle something with her hands, or how to walk. what about the falling over? >> you are exactly right. humans have been superior in that way for a long time, even as we keep developing new machines. the machine is not good at so many things that we are still good at. a machine can analyze thousands of cat photos
2:49 am
and pinpoint all of those patterns that we could never define on our own. but a baby picks things up in a moment. it is a great point. >> human intelligence also has a lot to do with emotions. what we learn, how we learn it, how fast we acquire that knowledge or forget it. we depend on human emotions. an audience question: a drive to win. are there artificial machine equivalents to emotion, whether intended or not? and then i will add my own question: how would that help or hurt the quest for artificial intelligence? >> what i often say is we tend
2:50 am
to see emotions sometimes in machines. they will exhibit a little piece of behavior and that will elicit something in us that reminds us of what we see in humans. we project those emotions onto the machine. machines do not feel them, so to speak. that is among their many other flaws. there are efforts to re-create that, but that is also hard to do. what i will say, though, is that i think we need to understand the way our own emotions are affected by these machines. we should not build technology for technology's sake. we need to think about how it affects us emotionally, how it affects us historically. there are so many things to
2:51 am
consider when it comes to building these machines. would it help if they had emotions? i don't know. >> how has your relationship with technology changed? >> what i try to bring to my reporting, and to the book as well, is a healthy skepticism and objectivity. a story i often tell is that, you know, my father was an engineer at ibm. one of the things he worked on was the universal product code. it is on all of your groceries. he had these amazing stories about the creation of that technology. it was technically interesting how they did it. then he had this great story
2:52 am
about how they deployed it. something happened that they did not expect. they put this in grocery stores and there were protests on the sidewalks, because people thought that this was the sign of the beast. it was a really instructive moment for me. you have on one side this technology that is built in this fascinating way and reaches this place where it can do things that you did not expect, technically. then it really does things that you did not expect. it interacts with people's emotions, like we talked about earlier, and with literature. these are all things that we need to consider. as i have written the book and developed my beat at the times, i
2:53 am
have to constantly remind myself of that and constantly ask, what am i forgetting about where this could go? does the person who is talking to me now have the full picture? do i need to talk to someone else and see what their perspective is? it is about constantly widening your net to new people who understand new things about the technology, and also about how that technology will affect the world. >> we talked about facebook earlier. a lot of the technology on that platform and others is about recommendations. we have become these big bubbles where we don't talk to people who don't share the same values. how are we amplifying that, and what are the consequences for the future?
2:54 am
>> it is another great point. you do see this in our daily lives when you use these services. they are designed to give you what you want, whether it's a social network or a chatbot. i wrote another story during quarantine about chatbots that use the ai systems we have talked about at length here. it is a powerful thing at this moment when we need, you know, interaction. people are starting to use them. what these systems do, the reason people respond to them, is that they sort of give people what they want. they tell them positive things. but we need both positive
2:55 am
reinforcement and the negative. we need to be taken out of those bubbles. how is technology going to do that, whether it's a social network or a chatbot? how do you get people to use a chatbot that will, like a good sibling (and my sisters did this), point out your flaws, where you need to be better? that is an important part of our lives, a sibling or a therapist. you do not need someone only telling you the good things. you don't want people only creating that bubble for you. how do we step out of that? >> are there enough people working on that problem, to give you a balanced look rather than a very personalized outlook? >> i guess people are at least recognizing the problem. as i wrote about the chatbots,
2:56 am
for instance, i talked to experts in all sorts of fields: in therapy, in technology. sherry, you know, well known in this field, is adamant about this. these systems are only reinforcing what you already think. they only give you the positive. we need to step back and think about it. people are at least calling attention to it. >> for people in quarantine, has that experience of being isolated, where maybe technology is part of the solution but not the entire story, changed the way you talk about this with people? what has been the effect of this?
2:57 am
>> the effect is huge. i think about it a lot, and i think we all need to think about it. i think about it as i raise my two daughters. you rely on technology more and more. there are some things that are incredibly positive. i have a 13-year-old daughter who has actually developed some of her relationships through facetime. she will live on facetime with her cousins on another coast. that has been a real positive thing. you can see it. but in the long term, this can also drag us down. you can rely on technology as a crutch in a way that maybe you should not. it can be easy to stay at home and do zoom calls like this.
it is easier as a reporter to just get on zoom. but the better stories come, the better questions come, you find these things, when you get out into the world. and it is not just journalists; these are the things that i think we all need. it may be easier to say, you know, things are working well with everybody at home, let's just keep it that way. maybe that is not the best thing. i am just hoping we can get back to a world, you know, where i am meeting people on the street, and others are going back to the office, for all of these very basic human reasons we are talking about. >> how did you come to choose the main characters in the book?
>> well, what i eventually realized was that there was this common thread with geoff hinton. there is this tiny circle around him. what i have been fascinated by is this idea of a neural network, which at various stages was considered so strange, even, you know, in the ai field, that not a lot of people believed in it. that circle is teeny tiny. that makes for a really good story on two levels. it is a fundamental story: someone who believes in an idea despite all the skepticism. and you have this tiny group whose paths would cross each other in these really surprising ways. there were moments during my reporting when i realized
demis came out of this program at university college london, a blend of neuroscience and ai, founded by geoff. and there were all of these moments that i discovered in my reporting where geoff was there. we talked about the moment when he and two students really showed what image recognition could do. two years earlier, geoff had been instrumental in making this idea of speech recognition work in a completely different part of the world; he was at microsoft. there is a great story about him traveling by train, because he does not sit down, to make this work. and on top of all of that, this tiny group suddenly became
enormously valuable when the idea started to work. you see this at the opening of the book: he literally auctions his services to the highest bidder, and it set the price for the talent. you had this tiny group of people, and each of them, you know, they are people; they are interesting in their own particular ways. and then there is suddenly this demand, and they move into these companies. that sort of became the center of the book: this tiny group that moved into the industry. that leaves out a lot of interesting and important people. that is part of any story that you do, any book that you write. >> let me back up. he is certainly the hub of this network. i had a chance to sit in at one
of their workshops. one of the things that i noticed at the time, in 2012, was that it was all guys. a whole part of the community was not there. i do not know why. how does that trickle down to now? >> well, you walk into a room like that, and we were actually working together when you wrote that story, and you can see it. and you can see it historically. all of these people that we are talking about, the people that were instrumental, the people in that room when you were visiting that meeting, are the people that built the technologies. a tiny group, and they built it.
that is a fundamental problem that is there as the technology starts to work. and then you have people, similar to your experience, walking into the conference and seeing hundreds of people who are there for a lecture. you see this, too. she realizes that there is no one who looks like her. out of hundreds of people, she counted five who looked like her, and they were all men. this is in barcelona. this is a global community that we are talking about. this issue is in many ways global. >> i was doing an interview with one of the first employees of facebook.
she said that she came to technology and found out -- is that still playing out today? >> it absolutely is playing out. you see it time and again with these situations. what i think is positive, though, is that people are more willing to call it out. what you and i saw firsthand was that it was often so hard to convince people to call it out. there were consequences, and there still are, for people who are willing to say the obvious. you know, on some level we have seen some progress: some people willing to stand up and say we need to think about this.
that is so hard to do. we are starting to see that. we also need to listen when that happens. that can be hard, too. you have to be willing to put yourself in uncomfortable situations. me, as a reporter and a white man, i need to listen when those moments come up, when people are critical of the way i'm doing things or the way i'm building a story. that is one thing that i have learned. you have a certain way of doing things? you have to be willing to be challenged on that. even if there are really good reasons why you do things the way you do, why you build a technology or build a story the way you do, you have to be willing to step back, over and over and over again.
there's something i need to think about. >> how do you want people to remember your book? >> what i wanted to do, you know, and there came a moment when i thought, wow, this is a really bad idea, it is never going to happen, was to build a book that told the definitive story of this moment in time: what has happened, you know, on so many different levels. that means roping in so many of the things we have talked about, the development of the idea and all of these areas where it started to work. this single idea is driving it, area after area after area. i wanted to have it read like a novel and show all of the questions that it has raised: the bias question, the autonomous weapons question. the big question we have not
talked about was the disinformation question. these are the other huge problems we are going to face. these systems can generate images, videos, blog posts and tweets, as well as conversations that look like the real thing. once the machines perfect that, once we build a machine that can do that 100% of the time, that changes the way we look at the world. and then there are all the geopolitical issues. this is a global thing. it is not an american thing. a lot of the talent was outside of the u.s. the u.s. companies jumped on it. by the way, at the auction at the beginning of the book, when geoff sells his services, there is a chinese player right there. they were right there. all of these geopolitical issues
to consider. i wanted to rope all of that into one book and kind of level-set us for everything to come: this is what has happened, these are the questions that we are facing, including the agi question, and how do we think about all that. hopefully, it is a good read for people who want a human story, but hopefully i can layer the bigger ideas on top of all of that. >> yeah. outside of the u.s. — it has been very u.s.-focused here. how is it playing out in china and asia and in europe? can you talk about that? >> absolutely. one of the things that i am fascinated by, and maybe this is
surprising to some people: geoff hinton was an academic, a professor at the university of toronto. when he moved into google, one of his stipulations was that he wanted to keep his professorship and he wanted to keep behaving like an academic, publishing his research. you saw this with old colleagues of his who followed him into industry. otherwise he would not have been able to publish.
and this is not that kind of world. it includes chinese talent. we are shooting ourselves in the foot. at the same time, right, exactly, with the export controls, we are not exporting anything to china. what is that going to do? we need to think about this world differently. certainly we worry about espionage from other countries. there are concerns, particularly when it comes to military applications and the like. but this is not the world of absolutes that people may have thought our world was in the past. >> some of this, going back to the misinformation you mentioned a minute ago, has played out
elsewhere. are there things happening inside our borders that are powered by ai that we should keep an eye on? >> absolutely. the prime example is in china, where this technology, a neural network which can identify faces in photos, among so many other things, is being used to target an ethnic minority. that is the type of thing that really raises concerns in this area. the same type of technology is being deployed here in the states. china is an extreme example, but as we see this play out in
extreme ways, we need to think about how we will deploy these things, and how we are already starting to. you see this where there starts to be a rollout of face recognition technology. these companies start to wake up to it. some of them start to respond, or at least say we need to think about legislation. as usual, these companies usually only go so far, but at least they are publicly recognizing these types of issues. i think that you are exactly right. we live in a world where this technology is developed everywhere. we cannot think about what we are doing solely within our borders. >> we are also doing that here. the technology has been used for
years to track undocumented immigrants. >> that is exactly right. you might call it a simpler example than the one we are talking about. it is often about the way these things are deployed, and then, as it becomes more -- those very issues become bigger and bigger. >> what can private citizens do if they are worried about the use of ai by their local police department and local government? >> i think that one of the lessons here, and this came up earlier, is that these are problems that we all need to deal with. the companies will not deal with this on their own. we know that. the governments will not deal with this on their own either; their interests often blend with the interests of the companies as well. particularly, if we have these
situations where the corporations are pushing people out, that becomes our problem. we need to individually speak up about this. journalists need to write about it. other people need to call attention to it, too. >> often the problems that ai causes affect groups that are already marginalized. the problems it causes may seem like they are at arm's reach, far away, to someone else. >> well, again, we all have to step outside of our bubbles and realize that it is easy in silicon valley to forget everything else that is going on.
>> technology doesn't always work that way.
fundamentally, you want to tell a story about people, with the bigger ideas on top of that. >> if you were going to write a sequel in ten years, what would you be writing about? or is there another technology or a new community of eccentrics? >> my inclination is always to do something completely different, so i may surprise you. one of the things i have been thinking about is quantum computing, which is another fascinating area coming to the fore. we will see there. but this story, the ai story,
is really only just beginning. we are still only just understanding how these systems work and how they are deployed. there is so much left to cover, and certainly i will keep covering it. >> there are a lot of highflying stories in your book, some quite literally, so can we talk about that? >> that's a good place to end. you do learn in the book that geoff hinton has a back problem. he does not sit down. as a teenager he slipped a disc, and by his late fifties it would slip so often it was
hard for him to function. there are all these stories from his students: he's laid out on the desk while they are trying to defend their theses, or he's on a cot by the wall. and he does not drive or fly, because commercial airlines make you sit during takeoff and landing. this is another moment i was floored by as it came up in my research: he has moved to google, and they are thinking about acquiring deepmind, and they want him to go to london to help them decide if they should spend $650 million on this company. he says, i don't fly. well, the head of engineering devises an incredible contraption, inspired by his own feats as an extreme skydiver, that basically straps geoff in place on a makeshift
bed on a gulfstream jet, and this is how he made it to london and ended up walking into the office. >> that's wild. who told you that story? >> one of the great things, and the hard things, about our job is that you do it piece by piece. you get a little hint that this happened, and then you get a little bit more, and then you can go to the source and say, i got this much, tell me the rest. that's the way the book worked. host: to follow up on the $44 million that google paid for geoff. that seemed like a crazy number at the time, but in retrospect it would
seem like a bargain-basement deal, given how much they paid for deepmind. why this huge explosion in the fight for this talent? it's like acquiring an nfl quarterback. >> that $44 million figure that he was acquired for was the hardest to pinpoint in the book. i was worried i could not back it up, but that's how much they paid. you are right. that's $44 million for three people. individually that's a lot of money. but then the prices would explode, because it is basically supply and demand: there was a tiny group of people who specialized in this field. we can argue whether the companies behaved rationally
or not. but they were all intent on jumping on this area. that meant the prices went sky high. everybody wanted their own geoff hinton. microsoft most interestingly: facebook had one and google had one, and they wanted their own. there's one top executive we have not talked about who went to montréal to get their own. then you had a frenzy, and as that happened, there were these moments where the price skyrocketed for the talent. host: it is still a field where you
can command a lot of money, because it leads into other areas: self-driving cars, for instance, or self-flying drones, which are becoming big. it's definitely an area that people should look at. these are skills that are in demand, and it has changed the way we think about technology and the way it is built. >> anything you left out of the book you wish had gone in? >> i often say everything is in there, and it pretty much is. unfortunately there is an anecdote i cannot share. sometimes that's the way it works.
one anecdote i wish was in there, because it parallels the whole story, involves a professor at stanford. she is in the book, but there is one anecdote people don't realize that is powerful. in order for neural networks to work, you need the data and the processing power. geoff hinton and his two students, the ones who were acquired, had won the imagenet competition; imagenet is a collection of photos used to build a system that can recognize everything in the photos. that was her brainchild. i was talking to her, and i talked to her advisor as well. there was a moment between them when she said, i want to do
imagenet. she was betting her career on it, and he said that was not the way to go, she should not do it, that was not the thing to bet her career on. and she did it anyway. that's another piece that had to be in place for this kind of thing to work. >> that is such a powerful thing for success. >> yes. it is complicated, but i wish it was in there. sometimes you make choices for narrative and for flow. you make these hard decisions. but again, we need to step back and think about the decisions we made, whoever we are, and ask: did i make a mistake there? does that need to go in the next time?
host: how quickly did you write the book? >> i wrote it over the course of about two and a half or three years. i made the mistake of agreeing to write a book and joining "the new york times" the same week. never do that. that is a mistake. you make your life far too complicated. i learned that next time i will take a book leave and concentrate on the book, instead of trying to do it in the mornings and late at night. standing in line at the grocery store while thumbing through google docs on your phone is not the way to go. host: so you would take time off instead of keeping your full-time
job. >> in writing a book, you learn more about your field. you also learn more about the ways we tell stories and the way you give people a real idea of what is going on. it's a different skill from daily reporting. i have learned a lot about the arc of things. like i said, you also make choices that you regret, and you try to learn from that in the future. >> that's a perfect segue into the piece you were asked to write that gives a young
person some advice, around a word that's very personal to you. can you tell us what that word is and the story behind it? >> absolutely. this goes back to my father, who was an engineer at ibm but an amateur philosopher. my word is truth, and there are symbols below it, which i will explain. my father loved mortimer j. adler, the philosopher who wrote the book "six great ideas," about ideas you should live your life by. my father always talked about, echoing the book, three ideas you should strive for in your daily life: truth, goodness and beauty. he believed it to the point that he
put symbols on keepsake boxes for all of his grandchildren, for truth, goodness and beauty. and truth was so important because it informed what we understood goodness and beauty to be. understanding what is true is something you have to struggle with every day. it's about constantly talking to new people, reevaluating what you heard the previous day, taking in what you are
hearing from everyone, and sitting down at the end of the day and asking, what do i believe is true, from everyone i have talked to? it is a personal decision, but it is informed by everyone you speak to on a daily basis. >> that's great. you dedicated the book to your dad. >> and the one thing that makes me sad is that he's not around to read it, because the book embodies in many ways what he stood for. he believed in the bigger ideas, and he often raised those concerns we talked about, how technology can affect our world in ways we might not expect. host: we will leave it there,
so now i will turn it back. thank you for inviting me to do this. it's been a pleasure talking to you about the book, and i hope it is very successful. >> thank you for doing this. it has been a lot of fun. >> thank you, danielle. this was a great conversation, on point as we think about how technology is redefining what it means to be human and changing how we interact with one another and the planet. over the last four decades, chm has established itself as the premier institution for communicating the history of computing and its impact on the human experience. while we serve a diverse array of audiences, the key point is that the live events are a part of the public education and conversation, to help all of us become better citizens as technology becomes ever present.
these conversations are preserved on the youtube channel, and key takeaways are available on the website. they are preserved as a permanent part of our collection, for publication and ongoing education. it's a beautiful thing to think about history. as one of our previous guests, a noted historian, pointed out, it is an indispensable guide to the future.
