Book TV, C-SPAN, April 16, 2011, 7:00pm-8:00pm EDT
said no, i'm not going to spy for you, and they said, okay, well, we can't admit that we took you and we can't admit that you are alive. the best thing to do is to kill you, and that is exactly what they did. >> alex kershaw is the author of "the envoy: the epic rescue of the last jews of europe in the desperate closing months of world war ii." >> tonight "after words" with kevin williamson, author of "the politically incorrect guide to socialism." >> now on booktv, brian christian looks at the state of artificial intelligence today and reports on the annual loebner prize event where the turing test is administered to the most advanced computer programs in the world. from seattle, washington, this is about 50 minutes. [applause] >> thanks so much to elliott bay for having me. i've lived in seattle for the past four years, and so this is my neighborhood haunt. and thanks to you guys for coming out. this is awesome. um, so the book really tells two stories. the first story is about how the computer plays into this really
longstanding philosophical narrative, which is that humans have always been sort of obsessed and fascinated with their unique place in creation. what is it that makes us different and special and unique? and to answer this question we've typically, if you go back to aristotle and plato and these guys, we've typically tried to benchmark ourselves against animals. so what i think is so interesting about the 21st century is that the benchmark that we're using to figure out who we are has changed. we're much more interested in our relationship to machines than in our relationship to other animals. it's changing the way that we think of ourselves. and the other story is a much more personal one, which is that i got to be a part of one of the artificial intelligence community's main competitions, which is this contest called the loebner
prize. and so i was, essentially, part of the human defense. and so i found myself in this very strange position. so the book opens in the fall of 2009, and i'm in brighton, england, and having this very strange feeling of i've flown 5,000 miles from seattle to brighton just to have several five-minute-long instant message conversations which seems a bit like overkill. [laughter] my goal in these conversations is probably one of the strangest things that i've ever been asked to do, which is that i have to convince a panel of scientists that i'm human. um, and they are going to be somewhat skeptical. [laughter] so this, this is all part of what's called the turing test. so the computer science pioneer
alan turing back in 1950 as the computer was just in its infancy was already asking these really philosophical questions like can machines think? are they intelligent in the human sense? would it be possible someday to design a machine that actually could think? and if we did, how would we know? and so what he decides to do is just put philosophy off to the side and say i'm going to invent a practical test. we're just going to hold this test and have our answer. so the way it works is that you convene a panel of scientists, and they're having these five-minute-long textual chat conversations, sending messages back and forth. what they don't know is whether the messages that are coming back to them are from a human being or from a computer program claiming to be a human being. um, and it's their job over the five minutes to figure it out. so alan turing's famous prediction was that by the year
2000 computers would be fooling us about 30% of the time and that as a result we would, as he puts it, be able to speak of machines thinking without expecting to be contradicted. um, this prediction did not come true, and even at the year 2000 the top a.i. programs were maybe fooling the judges once a year if they were lucky. um, and so my ears really perked up in the year 2008 when the top computer program at this annual competition managed to fool three out of the 12 judges, or 25%, meaning that it was just one vote shy of passing this threshold and passing the turing test. so it was a narrow scrape for homo sapiens. [laughter] and so i had this feeling of, well, maybe that means that 2009
is going to be this pivotal year where machines finally cross that mark. and the feeling that i immediately started to have was not on my watch. [laughter] um, and so i wanted to see if there was, you know, something i could do to come to the aid of my fellow humans. and so i tried to get in touch with the organizing committee. um, so i should say that the way the contest is run, everyone who participates, whether you're a piece of software or a person, gets this score, and the score represents how confident the judges were that they were talking to a real person. so every year there's a computer that gets the highest score, which wins a research grant for the programmer and an award called the most human computer award. and this is the award that the contest is based on. and it's the one with all the
sort of scientific attention. but the strange thing is that there's also an award that goes to the real person who did the best job of persuading the judges that they were talking to a real person, which is called the most human human award. [laughter] and so i immediately became fascinated with what exactly is this award all about? and as i read back over the history of previous winners, one of the winners in 1994, one of the very first, was the science fiction author charles platt. and when asked how'd you do it, how'd you prove yourself more human than the other people in the contest, he said, well, it was easy. i was moody, irritable and obnoxious. [laughter] everyone else was mild-mannered and polite, and so i stood out. and to me that was hilarious but also bleak. [laughter]
and at some level, also, this call to arms of, well, okay, how do we be as human as we can be under the constraints of the turing test, and how does that translate to life, does it? so i got in touch with the organizing committee of this test. so i started at the top by reaching hugh loebner, the granter of the award himself. loebner is an eccentric millionaire who made his fortune in new jersey selling plastic portable roll-up lighted disco dance floors in the 1980s. [laughter] and decided at some point that part of what he wanted to do with his fortune was immortalize himself in the annals of science. and he's also, as i found when i asked him, very excited about the day when computers do all of our work for us.
and so he cites laziness as one of his main motivations for funding this prize. but i made the case for why i wanted to be part of the human confederate team, as we're called, and before i knew it, my name was on the roster. and so i was in this position of, okay, in six months i'm going to be one of the four actual people trying to take a stand against these machines passing this test. what am i going to do exactly to prevent that from happening? and the organizers' advice to me was pretty much what i had been told to expect, which was, well, you are human, so just be yourself. and those words sort of haunted me. just be yourself. i kept having this feeling that, you know, it represented, perhaps, a naive overconfidence in human instincts or, at worst,
actually fixing the fight. because the a.i. programs that we're going up against are in many cases the result of decades of work, and, again, so are we. [laughter] but the programmers who write these programs have done tremendous analysis on the past conversations of the test. they know which conversational routes lead to deep exchange and which ones fizzle out. they know how to steer the conversation toward the strengths of their program and how to avoid its weaknesses. and we all know intuitively that not all conversations are uniformly successful. there's this huge demand in our society for dating advice, conversation coaches, conflict resolution seminars, all of these things. um, that suggests paradoxically that communication is both our species' perhaps greatest cognitive strength and the place
with the greatest room for improvement. and that's what i found when i read the 2008 transcripts, where the humans are downright apologetic to each other that they can't make better conversation. [laughter] one says, i feel really bad, you must be really tired of talking about the weather. another says meekly, i'm sorry for being so banal. meanwhile, the computer in the other window is apparently charming the pants off the judges, who in no time are gushing lols and smiley faces. [laughter] and so my feeling was, we can do better. accordingly, i must say, my intention was to be as thoroughly disobedient to the organizers' instructions to go to brighton and just be myself as i possibly could. i looked back over the history of the tests, i looked at which conversations went sour, i studied the way these software
programs are composed and what sort of human conversation they have to be able to make in order to be viable, so i could try to play up precisely those things. and i talked to psychologists and linguists and computer scientists about what are the things about human conversation that are really hard to do on computers. now, ordinarily there wouldn't be anything strange about this. we train for tennis tournaments, we cram for exams, but given that the turing test is meant to evaluate how human i am, there's something sort of odd about this. it suggests that, you know, being human or being myself is more than just showing up. and so for me one of the interesting lessons was that it sort of is. and looking at these software conversations helped me get a sense of that. so before i get into that, i
feel that i should read a strange and more than slightly ironic cautionary tale. dr. robert epstein, the uc san diego psychologist, editor of "parsing the turing test" and co-founder of the loebner prize, subscribed to an online dating service in the winter of 2007. he began writing long love letters to a russian woman named ivana, who would respond with long letters of her own describing her family, her daily life and her growing feelings for epstein. eventually, though, something didn't feel quite right. long story short, epstein ultimately realized that he'd been exchanging lengthy love letters for over four months with, you guessed it, a computer program. poor guy. it wasn't enough that web ruffians spammed his e-mail box
every day, now they have to spam his heart. [laughter] on the one hand i want to simply laugh at the guy. he founded the loebner prize turing test, for christ's sake. what a chump. [laughter] then again, i'm also sympathetic. the unavoidable presence of e-mail spam in the 21st century clogs the inboxes and bandwidths of the world -- roughly 90% of all e-mail messages are spam, we're talking tens of billions a day, and you could literally power a small nation -- for example, ireland -- with the amount of electricity used to process the world's daily spam. but all of that spam does something, arguably, worse. it erodes our sense of trust. i hate that when i get e-mail messages from my friends, i have to expend a modicum of energy, at least for the first few words, deciding if i think it's really them writing.
we go through digital life in the 21st century with our guards up. all communication is a turing test. all communication is suspect. that's the pessimistic version, and here's the optimistic one. i'll bet that epstein learned a lesson, and i'll bet that lesson was a lot more complicated and subtle than "starting an online relationship with someone you've never met is a dumb idea." [laughter] i'd like to think at least that he's going to have a lot of thinking to do about why it took him four months to realize there was no actual exchange occurring between him and ivana, and that in the future he'll be quicker to the real-human-exchange draw, and that as a result his next girlfriend, who, hopefully, is a bona fide homo sapiens, may have ivana, in this very strange way, to thank.
um, so to help understand some of this anxiety about humans' relationship to computers, it's worth pointing out that up until about the 1950s, computers used to be human. so back before the word "computer" was a reference to the mechanical digital devices that proliferate on our desks and in our pants pockets and things, it meant something else, which was that it was a job description. computers, frequently female, worked in groups for research laboratories, corporations and the military. so groups of human computers were behind everything from the first calculations of halley's comet to the atomic bomb project. engineers and computers fell in
love all the time. [laughter] and, in fact, it's very strange, if you look back at these early papers in computer science before anyone knew what these gizmos were, to listen to people like turing having to explain for the first time to their audience what they're talking about. and so what they say is, well, you can imagine that this digital thing is kind of like a computer. and what they mean is it's kind of like someone who does math for a living. and so what i find so strange, then, is that living in the 21st century it is the human math whiz that is like a computer. so the mechanical version has not only become the default, but actually has supplanted the original as the literal term. we are now like computers, when they used to be like us.
so it's this strange twist, we now imitate our old imitators. um, the harvard psychologist daniel gilbert says that every psychologist must at some point in his or her career write a version of what he calls the sentence. specifically, the sentence -- which is always capitalized -- reads like this: the human being is the only animal that, blank. and so the history of humans' sense of self is, you might say, the history of these failed, debunked versions of the sentence. the twist now is that it's not really the animals that we're so concerned with. um, if you go back and read aristotle and descartes, they're really interested in trying to prove we have souls, and so
they say things like, well, okay, wolves can run through the jungle and avoid falling logs and hunt prey and form social groups and recognize their friends. that's, obviously, very easy to do. [laughter] we are capable of things like long division and remembering facts. [laughter] and the existence of the computer, i think, takes some of the wind out of that argument where, in fact, we've seen exactly the opposite which is that these very rigidly-logical step by step things like doing long division and factoring large numbers are, in fact, quite simple as long as you apply the method, and things like recognizing your mother are extremely complicated, and we're still developing systems to do it. the newest version of iphoto on the mac just released their face recognition software, and it's okay. so, meanwhile, you know, we've
been playing grand master chess for decades. so all of these really counterintuitive results are coming back to us, where some of what we thought was really easy is actually really hard, and some of what we thought was really hard is actually really easy. and i think what's fascinating about the turing test in particular is that it really cuts both ways. so there's a philosopher at oxford named john lucas, and he says when machines finally pass the turing test, it will not be because they're so intelligent, it will be because we are so wooden. that it'll be an indictment of our conversational skills rather than a testament to machine technology. um, and so all of these sorts of questions are swirling in my head as i try to get back to this central issue of i'm going to brighton to defend my species, what am i going to say? and for me a look back at the
way that some of these programs are built starts to suggest a couple different options. um, specifically, people who write software that competes in the turing test generally have to make one major trade-off, which is: are they going to compose ahead of time everything that the program is going to say and give it this very defined character and personality and voice? um, the trade-off with that is that you find there's a very limited range of stuff that your program can say, and so it gets very awkward if you try to push out of that sort of rigidly-defined structure. so you see a lot of turing test programs in the '90s that have extremely well-thought-out opinions about the second term of bill clinton. to one of these comments the judge replies, huh, well, that's interesting. you know, i really like pancakes. what do you think of that? and it just keeps going about
bill clinton. [laughter] um, the trade-off that you make if you go the other direction, so around the '90s and the first decade of this century this other paradigm emerged for how to create a -- [inaudible] which was basically to turn it loose onto the internet and have it simply observe what people say to it. so you could almost imagine this martian landing on the earth, and someone comes up and says, hi, and the martian pulls out this giant rolodex and says humans often begin a conversation by saying hi. interesting. [laughter] then the martian walks down the street and comes up to someone, and says hi. and that person says, hey, what's up. and the martian goes, humans often say, hey, what's up, when you say hi. and in this very painstaking
process, and over decades of waiting for people to talk to it, it builds this massive database of real stuff people say. and you find when you interact with this kind of software, this very uncanny ability to respond to all sorts of things. so, for example, i attempted to see if it would reply to me if i spoke in french. and i found that my french was not only understood, but actually corrected. [laughter] sort of condescendingly, it told me the grammatically correct version of the sentence that i had given it. [laughter] and you often find if you try to give it song lyrics, it will sing back to you. so i had a conversation where i started typing in lyrics from "bohemian rhapsody." so i said scaramouche, and the
robot cleverly replies, will you do the fandango? [laughter] and there's something eerie about a program that can do this. but part of what you learn as you interact with these programs is that it's not so much that you're not talking to a human as that you're not talking to *a* human -- a single human. and that turns out to be a really critical distinction. so if you ask it questions to which there is a correct answer, it will generally have that answer. if you say what country is paris in, it'll say france. how many legs do ants have? it'll say six. but if you start to ask it questions about itself, you get this very strange sense that you're talking to several thousand people that are being quickly cycled in and out. and this was what i discovered when i started to ask cleverbot about its love life. [laughter] so i say, do you have a boyfriend?
cleverbot: not at the moment, but i hope to find one soon. so i venture, i would like to be your boyfriend. [laughter] cleverbot: that would be difficult since i'm happily married. [laughter] curious, i asked again for clarification. user -- sorry, me: do you have a boyfriend? >> no, i'm still looking for love. do you have a husband? i don't have a husband because it would be illegal for me to have one since i'm male. [laughter] male? okay. what gender are you? female. [laughter] once again just for good measure, do you have a boyfriend? not at the moment, but i live in hope. can i be your boyfriend? no. i'm straight. [laughter] so the other thing that you have
to keep in mind when you're interacting with a program like this is that you're up against several million prior conversations. and, in fact, it becomes one of the main challenges: trying to figure out how to depart from everything that's been said in ten million previous conversations. because as long as you stay within that giant database that it has, it's going to have some response. maybe not internally consistent, but it's going to have some response. the objective is to actually push out of that and leave it totally stranded. in fact, this is a very similar problem to what happens in game theory. so if you look at grand master chess players or expert checkers players, every board starts in the exact same configuration. so there are only so many original moves that you can make. so if, you know, ten million games of chess have started in position one, you know, you make your first choice, and there are still a quarter of a million games that have been played in that position. and one of the big challenges to being a chess grand master is how to get in a completely new position and to get your opponent actually thinking rather than simply remembering the standard wisdom. so one of the stories that i tell in the book is the day that checkers died, which was in 1863 in glasgow, scotland. this was the world championship checkers match between james wyllie and robert martins. they were scheduled to have a 40-game series over the span of about two months. and the outcome was zero wins, 40 draws and zero losses on both sides. and, in fact, 21 out of the 40 games they played were the same game. [laughter]
move by move. the game had gotten to this point where there was such a giant pool of collective checkers wisdom, and the players were sort of reluctant, with the stakes that high, to play an inferior move just to get the other person out of book -- as it's called -- that they just didn't get out of book. they just played the book game 21 times, and the sponsors were extremely displeased. and no one, of course, knew who to give the world title to. so part of the challenge for the checkers community is to figure out how to keep the game worth playing. and so the real strategy, basically, is if you don't like the way checkers players are opening the game, open the game for them. and that's what's happened in top-level checkers play. i don't know if you guys have been following that. [laughter] but ever since the 1880s
they've been mandating the first few moves of the game, so that the two players will sit down, and they'll literally draw moves out of a hat and say, okay, you've got to do this, you've got to do this, you've got to do that, okay, now you can actually start the game. and this becomes a way to kind of salvage it, to force people back into a position where they have to do original thinking. and you see the same thing when you go up against cleverbot. how do you wrench the conversation into this totally original place where it has not encountered the subject matter or the specific language that you're using? um, and so for me that was part of the challenge. when i sat down, the first thing the judge said to me was, hi, how's it going? and i thought, no, we're in book. this is what every conversation begins with. you know, you fool, don't you realize that -- [laughter] there's going to be, like, thousands of entries in the hi, how's it going table in this database?
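the "book" idea the talk keeps returning to -- a database of previously played opening lines, plus the checkers-world fix of drawing forced openings out of a hat -- can be sketched in a few lines of python. the move labels and game counts below are invented for illustration, not real checkers theory:

```python
import random

# toy "opening book": move sequences seen in past games -> how many games
# reached that line. labels and counts are made up.
BOOK = {
    ("11-15",): 250_000,
    ("11-15", "23-19"): 180_000,
    ("11-15", "23-19", "8-11"): 120_000,
    ("9-13",): 40_000,
}

def in_book(moves):
    """a game is 'in book' while its move sequence still appears in the database."""
    return tuple(moves) in BOOK

def forced_opening(rng):
    """the checkers community's fix: draw the first moves out of a hat,
    pushing both players into less-analyzed territory."""
    openings = [("11-15", "23-19"), ("9-13", "22-18"), ("10-14", "24-19")]
    return rng.choice(openings)

game = ["11-15", "23-19"]
print(in_book(game))   # True: both players can still play from memory
game.append("4-8")     # an off-book reply
print(in_book(game))   # False: now they have to think
```

the same structure describes the "hi, how's it going" table: as long as your opening line is a key in somebody's database, you get a canned continuation.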
and so i think that's part of the challenge of human conversation, to get out of book in that same way, where, you know, you sit down over coffee with an old friend and you say, you know, how's it going, you know? oh, good. how's it going with you? and the goal is to figure out, you've got this kind of template conversation, which is like these standard questions and the standard answers, and you have to figure out sort of how to break that and get it into this totally fresh thing where you're actually thinking again and you're actually sort of responding freshly. and so in many ways the question of how to win at the turing test also becomes a question of how to relate to each other. and that, for me, was the really surprising verdict. um, that all of the answers that i got while i was looking at these things all sort of came back and gave me something about how to talk to people.
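the cleverbot-style bot described earlier -- answer by finding the closest match in a log of past human exchanges -- can be sketched the same way. the log entries and the match threshold here are invented for illustration, and a real system would index millions of exchanges rather than four:

```python
from difflib import SequenceMatcher

# hypothetical log of past exchanges: (something once said to the bot,
# the reply a human once gave).
LOG = [
    ("hi", "hey, what's up"),
    ("how's it going?", "good. how's it going with you?"),
    ("what country is paris in?", "france"),
    ("scaramouche, scaramouche", "will you do the fandango?"),
]

def reply(prompt, threshold=0.5):
    """return the logged reply whose prompt best matches the input;
    anything too far from the log leaves the bot stranded."""
    best, score = None, 0.0
    for said, answer in LOG:
        s = SequenceMatcher(None, prompt.lower(), said.lower()).ratio()
        if s > score:
            best, score = answer, s
    return best if score >= threshold else "..."

print(reply("hi"))           # in book: "hey, what's up"
print(reply("scaramouche"))  # close enough to a logged lyric: the fandango line
print(reply("my kazoo quartet disbanded over artistic differences"))  # out of book: "..."
```

getting "out of book" in conversation, on this model, just means handing the bot a prompt whose nearest logged neighbor falls below the match threshold.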
so to me the computer is the latest in this longstanding history of thinking about who we are and what makes us different, um, but it's also this really radical shift in the question where our reference point has totally changed. um, and so the computer's not only shedding light on these really age-old questions, but to some extent it's literally changing the answers. um, and it's corrected some longstanding errors like we were talking about with, um, a.i. sort of hit aristotle and descartes where it hurts in what i think is a really welcome way. and, you know, it's given us these fascinating verdicts like a.i. can land planes before it can ride bikes, and it can translate u.n. minutes before it can be shown a photograph of an object and tell you what that object is which is what any 5-year-old can do. and so i think artificial
intelligence and the turing test specifically is not just a pat on the back at, you know, how impressive we are and that there's all this unsung complexity to what we do, which i think is part of the important message, but at the same time it's not merely a pat on the back, it's also sort of this call to action that we should not only celebrate, but actually actively pursue these things. and that's part of what i think the beauty of these chatbot programs is, that, you know, the existence of spam forces me to be myself in order for you to click the link that i'm sending you. that it's not merely a question of etiquette or scale, but is actually now a part of online security to act like myself all the time. um, and, you know, i think more generally it's just this fascinating process where we create these systems in our own image, but there's always this gap where the approximation ends and the real thing begins. and so that gap always has
something new, i think, to teach us about who we are. so i'll stop there and take some questions from you guys. [applause] >> i have one. >> yes. >> full disclosure, i haven't read the book yet, but -- [inaudible] >> yes. >> i'm curious as you were driven to researching the book what beyond curiosity and the really fascinating -- [inaudible] were you looking to sort of reinstill some hope in the context of an otherwise sort of h.g. wells computers are taking over our lives story?
>> yes. >> [inaudible] >> well, i think traditionally the narrative of a.i. is seen as this very dehumanizing narrative. and so when you look at the portrayal of a.i. in movies, you get something that, basically, runs like: computers are going to slay us all with machine guns, black out the sky, seal us in hyperbaric chambers and siphon our body heat forever. [laughter] and, i mean, if this happens then i will, obviously, feel very foolish for saying this -- [laughter] but i feel a little bit more sanguine about a.i. than that. that, you know, i think in the context of the turing test, if you look at, generally, the way these a.i. contests are won, ibm and their deep blue computer clashed against the world human chess grand master garry kasparov several times until the computer won, and they were like, okay, we're done. and he was like, no, rematch. they were, like, no. sorry. once humans get beaten at something, we don't need to contest it ever again is the prevailing attitude. you know, pretty much the same thing just happened with jeopardy! where what i really want to see is the ibm jeopardy! supercomputer versus the show's writers, where they give the writers a vengeance match to create much more tricky questions with denser puns and these sorts of things. and so i think the same thing is true of the turing test. the first year that the loebner prize is awarded, hugh loebner is done. he pulls the funding, walks away. but to my mind that's actually a very exciting time. where we are sort of knocked to
the canvas conversationally and have this opportunity to do the really human thing, which is to pick ourselves back up and figure out how to adapt and improve and get better. and the beautiful thing about the turing test is also it hopefully means being better at life. so i sort of look forward to that. yeah. yes. >> are you looking forward to the time when you can be one of the judges? >> yeah. um, it would be nice to be one of the judges because you don't have the existential anxiety of having your humanhood in doubt. [laughter] so, yes, i mean, i think that would be a kick to come back and try to figure out what the strategy would look like from the opposite side. yeah. >> yeah. presumably, each one of these computers is submitted by a respected research entity of some kind. um, but then there must be every year at least, like, one that is
the least human program. >> yes. >> so could you talk a little bit about if there are any sorts of -- are they picking bad strategies to try to fool the judges, or are there things in common that make the programs bad year after year? >> um, that's a good question. what makes the programs bad. i can tell you, for example, that with these bots -- the one i most frequently refer to in the book is cleverbot, a single, giant, you know, mass of conversations mashed together -- one of the problems that happens when you're indexing tens of millions of conversations is that it takes a while. um, and not all conversations make sense when it takes several seconds for the other person to say anything. um, and so that becomes, actually, another way of trying to trip up the computers: to generate this form of conversation that's less like a kind of strict q&a kind of
deposition-style give me the right answer to this thing and i will wait until you do that. um, and more of the kind of, like, repartee sort of quick-witted style of conversation, where if the computer's got to reach back through the logs of ten million conversations just to be, like, ha ha, good one -- [laughter] um, then you do start to detect that, yeah, maybe no one's home back there. yeah. [laughter] yeah. >> if you were going to wager a guess, what year could the computers evaluate other computers versus human speech more accurately than the humans, even hitting 25 or 30% -- >> right. you mean the computers themselves will actually judge the test? >> at what point will they be a more accurate judge of their own intelligence being mimicked? >> yeah. well, a good question. in fact, i would say several years ago. um, basically, computer-run
turing tests are now a completely standard part of internet security, where i think most of us who have tried to enter a blog comment or something have gone up against this strange window that says tell me what this wiggly word is. and that's known as a captcha -- a completely automated public turing test to tell computers and humans apart. yeah. you get the gist of what that acronym is. and basically, now, on the internet with bots running amok, we have to do so many turing tests that we can't scale up to that kind of level, so we then use computer software to decide whether you are computer software. [laughter] um, and it's this very strange thing. and i, i think we're starting
to -- i mean, it would make me nervous to get to a point where just to enter into a blog post i would have to go into a fully-fledged five-minute-long turing test. but it's starting to happen. it happened to me, i was playing an online video game on the internet, and one of the server admins came on and was, like, can i just talk to you for a second? i just want to make sure you're not a script. i was, like, no problem, dude, i am a person. and he was, like, okay. [laughter] but it occurred to me this is really the tip of the turing test iceberg that, you know, in several years' time will you have to get grilled, you know, every time you sign in? i don't know. we'll see. yeah. >> you talk a little bit about -- [inaudible] in the book and how, you know, computers don't have existential problems with, you know, not having a body. can you expand on that a little bit more? >> yeah.
7:41 pm
um, i think in many ways what's been pretty healthy for the discipline of philosophy about a.i. is that it's actually bringing the body back in to the conversation. that if you look at, you know, these sort of old-school guys like plato and aristotle and up to descartes, you know, they have to write off animals because they have these ideological reasons that they need to discount everything that animals can do. so that, basically, amounts to discounting everything bodily about the human experience. and what you're left with is something like, you know, algebra. um, and i think intuitively that's not fully satisfying to us about what the human experience is really all about. um, and so i think it's, it's
7:42 pm
useful to then encounter a system like the ibm watson machine which, if it were a person, would have read every book and every issue of "the new york times" ever, but it has never left the tiny cubby in which it has done all that reading. so you can ask it something like in what year was this duke coronated and all that, and it knows all of that. but if you say, you're looking at the wall and you look down, now what do you see? it's like, i don't know, i've never looked at anything. and so i think there's something really healthy about that, that it's bringing the body back in. and these sorts of things, motor skills -- we don't think of motor skills as cognitively impressive. but it's really hard to create a robot that can walk on two legs. you know, the segway has three computers in it just designed to
7:43 pm
keep the, you know, handlebar upright. and that makes me feel really good about myself that i can -- [laughter] i can stand here, and it doesn't impress any of you guys only because you guys can also do that, and we're all impressive. [laughter] yeah. >> um, what do you think's going to happen when they win all the time? say it's 20 years from now and they've been winning the turing test and people still think it's a hunk of metal, you know, it's not thinking? will there -- what's -- [inaudible] the turing test for an actual act of cognition? >> um, i don't know. i generally try to dodge these sorts of questions. [laughter] but i don't know that i can get away with that. i think, actually, that if we get to a point where for all intents and purposes machine versions of intelligence are operating the way that normal intelligence is, we're still left with the thing that makes us different from each other
7:44 pm
which is that we are a product of these individual experiences that we've had. you know? so part of what it means to be human is to be a human, to be the product of a very specific life experience, that everything you know is rooted in something that happened to you and everything you don't know is rooted in things that did not happen to you. and, you know, we turn to something like wikipedia for information that's mutually verifiable, that everyone would agree that such and such actor was in this movie. but what you say to the person who you went and watched the movie with is not, you know, who was in it? who was the director of photography? you say, like, what did you think of that? how did what you saw accord with your background and idiosyncratic life history? so we're still left, i think, with that difference which seems
7:45 pm
to be productive. and i think still, ultimately, a life-affirming one. yeah. >> what chance do you think a.i. has at writing effective emotional scripts and integrating those? you've been talking a lot about thinking. >> yeah. >> you know, remembering, you know, actual dimension, but what chance of predicting ways of -- and they are relatively predictable. >> yeah. >> [inaudible] >> defining emotion rigorously is another thing that i try to not do. um, but i think in terms of this question of thinking versus feeling, so if i can slightly separate emotion from feeling something, um, one of my friends is a doctor, and he says, you know, if you're diagnosing someone for being sick, you've
7:46 pm
got a bunch of criteria. so it's what's your heart rate like, what's your temperature like, what's your blood cell count like. and they had a patient come in, and they're, like, whoa, this guy's totally sick. let's check him out. temperature normal, heart rate normal, blood cell count normal. but there was something about the guy that was off. and, you know, that, that kind of distinction starts to get to the point where, like, thinking blurs into feeling. i haven't made a conscious decision based on a flow chart of factors, but, like, i've just sort of, you know, assimilated all of this data and come up with this hunch that is actually really sophisticated even though i can't articulate to you, like, a defense of my hunch, you know? and, basically, you know, one of the big obstacles for a.i. in the 1960s when they were starting to roll out these programs that could make these
7:47 pm
logical flow charty sorts of things that we can do, there was this really bullish attitude of, like, cool. well, we're going to be done with this whole a.i. thing in, like, five years. needless to say, that didn't happen precisely because we found out that it was really hard to do these sorts of things. so, like, how do you tell a computer when a guy looks off, you know? or how do you tell a computer when it's encountering someone that it knows? what was the process that you went through to recognize your friend? you don't know, and so you can't repeat it. and that, that's been this whole sort of other paradigm shift in a.i. i think we have time, maybe, for one more. yeah. >> i had a question. the turing test, is there one for, like, literature or music? >> oh, you mean, like, can you compose a piece and then try to determine whether the piece was composed by a --
7:48 pm
>> [inaudible] >> yeah. um, this has been, i mean, in some sense there has been this very uncomfortable retreat being made by people interested in, like, which domains of human behavior are impossible to sort of break into with machines. and one of the famous critics of a.i. is a guy named douglas hofstadter who in the 1970s wrote this pulitzer-winning book where he talks about how computers will never be good at chess because chess requires courage and fear and a sense of danger. [laughter] and that didn't happen. um, and so then "the new york times" asks him for his opinion after that point, and he said, um, my god, i used to think that chess required thought. now i realize it doesn't. but computers will never be able to compose music because to compose music you need to have a
7:49 pm
heart, you need to have passion -- [laughter] and so i don't know. i think the jury's still out. but i think at the same time even if you get to the point where two compositions are indistinguishable, the fact that the -- i mean, there's this sort of argument that people make about eating meat. one hamburger came from a sustainable practice, and so, like, it feels better to eat that one even though it's the same, like, at the molecular level. it would still, i think, please me more to listen to a song knowing that someone was moved to write it and that i could feel that i was making some sort of connection to the composer through the piece rather than thinking that the piece was just sort of a set of ratios that had been kind of programmed. um, so in some ways perhaps the most intrinsically human quality
7:50 pm
of art is not any particular sophistication of composition per se so much as the impulse to make the art. so that, to me, is still the thing that's sacred. thanks. [applause] >> is there a nonfiction author or book you'd like to see featured on booktv? send us an e-mail at booktv at c-span.org or tweet us. >> nathan hodge is the author of "armed humanitarians: the rise of the nation builders." mr. hodge, define nation building. >> yeah. nation building is one of those tricky terms that nobody ever really wants to own, and that's one of the reasons that i chose to write about it. i'm not using it in the sort of political science or development sense. i'm using it in the way that
7:51 pm
people like george w. bush, barack obama or even dave petraeus would have used it, which is that this is a way of describing this kind of mission of armed nation building that we're involved in. it's been described in some ways as armed social work. and i'm trying to really describe this phenomenon to the ordinary reader who might have this idea when they look at the news and they see what journalists call the bang-bang from places like iraq and afghanistan, and show them another side of what's going on, the sort of "three cups of tea" side of war, what the military calls the non-kinetic side of things. i wanted to get at the experience of people who are getting their hands dirty doing these kinds of things, rebuilding schools, digging wells, building roads. fundamentally non-military things going on in iraq and afghanistan. >> so is the u.s. military currently doing non-military functions? >> you'd be surprised to see the extent to which they've embraced that mission, especially in places like afghanistan where
7:52 pm
doing these kinds of nation-building missions is really kind of a cornerstone of the exit strategy. creating a local government that's actually capable of delivering things like, you know, criminal justice. the big concern in a place like afghanistan is that the taliban could outgovern the coalition. so that's really where civilians who have non-military expertise need to be able to step in. >> where did the term nation building come from? >> well, nation building is one of those terms, again, it's kind of woolly, it's sort of very unsatisfying. precisely why i really wanted to dig into it. because back in the 1990s there was a lot of hand wringing sort of within the national military set that the nation was going to become -- [inaudible] and, in fact, when he was running for office in 2000 george w. bush said he didn't believe we needed a nation-building cadre, that the u.s. military shouldn't be involved in this kind of thing. by the end of his term, he'd embraced it to the extent that he'd even called for the
7:53 pm
creation of sort of a civilian nation-building response corps in his state of the union address. so it was really a dramatic turnaround, and in part it was just because this kind of sort of armed humanitarianism was seen as a way of getting out of the mess that we had gotten into in iraq. >> how is it that nation building became a political term, where george w. bush in 2000 said we don't nation build? >> right. or barack obama in december of 2009 saying that he wanted to send more troops to afghanistan, but with the caveat that the nation that he wanted to build was our own. you know, nation building in some circles is kind of a dirty word, you know? it's sort of, it's not what the military's supposed to be doing. they're supposed to be training for this high-end force-on-force conflict, the kind of conflict in a lot of ways the military trains and equips around and in some ways, i'd say, pines for in a way because it's simple, it's direct. your opponent wears a uniform.
7:54 pm
they've got formations that you can count. this is a lot more difficult, and it involves navigating a lot of sort of tricky differences, linguistic barriers; trying to get at these problems has proven a lot harder in practice than it is in theory. >> so what's been the reaction of the pentagon to its new role? >> interesting. you see some of the more recent remarks by secretary of defense robert gates. he talked a little bit about his worries that the military could become this sort of 19th century victorian constabulary. it's not at that point yet, but the military's trying to master those chores, those fundamental kind of nation-building taskings. but there's a worry, i think, that the pendulum may have swung too far in that direction, that there's a need to kind of go back and concentrate on the basics, the fundamentals. get back to, you know, sending tank rounds down range, that kind of thing. but there's a reasonable argument behind that, which is that these fundamentally are not military missions.
7:55 pm
i mean, these are missions for the development agencies. they're there for diplomats. part of the problem is that diplomats, aid workers aren't necessarily trained to operate in this kind of, you know, hostile environment where, basically, they're doing development work while being shot at. and there's been this sort of very difficult transformation for agencies like the department of state, for usaid, to send people out and get people to be willing to go out and volunteer on the sort of frontier in afghanistan, for instance. >> so, nathan hodge, does this diminish the role of the state department in our foreign policy? >> well, really what i try to raise in the book is there's kind of a fundamental disconnect between the ambition, that is, to send these sort of, you know, put more wingtips on the ground, so to speak, and the ability of, you know, agencies like the state department to do it. it's a simple matter of math.
7:56 pm
the department of defense at this point spends somewhere north of $700 billion a year. it's got -- just look at the japan relief operations that are going on right now. they've got the personnel, the equipment and the training to get to places in a hurry. i saw, and i describe a little bit in the book, what the haiti relief operation looked like from the military side as well. and part of the effort underway, and it's sort of put into bureaucrat speak, is we need to get these, you know, diplomats to get out there, and we'll all be together jostling along in the back of a humvee going to drink three cups of tea with an elder. well, it's not as simple as that, because what happens if you're getting shot at along the way? >> has it been an effective foreign policy tool? >> i would argue that it sends mixed messages about who we are as a nation. >> is that where armed humanitarians comes from? >> yes. it's meant to be, you know, it's a contradiction. and it sends a signal that, you
7:57 pm
know, for instance if we're talking about parts of the developing world where we think that civilian control of the military is an important principle, yet it's our military people who are doing the training, it says something a little bit interesting about kind of who we are. and i worry as well, especially when it comes to operating in places like this, that we adopt a little bit of a kind of fortress america mindset. i talk a lot about what they call force protection in the military. and, um, inevitably, because of the risk in some of these situations, that sometimes ends up putting barriers between you and the people you're trying to reach out to and help. >> now, you've referenced greg mortenson, "three cups of tea," several times. you also reference thomas barnett. who is he? >> in a lot of ways, the guy who's best known for a briefing called the pentagon's new map. he in the early 2000s was kind
7:58 pm
of a guy that captured the zeitgeist of the department of defense, and he had a couple of famous powerpoint briefings that he would go out and deliver to military audiences which explained how the post-9/11 world had shifted. but i drilled a little more into sort of what he was arguing, and part of what he was also getting at was that there needed to be something like kind of a nation-building cadre available and ready on-call to address what he called these gap states, these failing states. i think he called it the -- [inaudible] force. and his idea was you've got, you know, the leviathan, the army, you know, the big forces that go in and kind of do regime change fundamentally. they go knock over, you know, nations if called on to do so. but then you need people who are on-call, and they're kind of a mix of diplomat, aid worker, boy scout, u.s. marine, you know, kind of this mishmash of different things. but he was one of the early people who articulated it in a
7:59 pm
lot of ways and tried to explain what the new reality was to people in the department of defense. so he's a character, definitely, in the book. >> how does the center for a new american security play into your book? >> well, they became the locus of -- they sort of became the home for the counterinsurgency set. really the counterinsurgents in washington started as insurgents in a lot of ways. it was a rebellion by kind of the rank and file within the military establishment, an intellectual rebellion, not anything more than that. by people who had experienced, you know, their first tours in iraq and afghanistan and came back and were groping, you know, intellectually for answers as to why the u.s. military was failing, why were we losing in iraq? and they reached back, and they found sort of these intellectual antecedents in, for instance, french counterinsurgency theory that really did talk about the roles and missions and how you needed to sort of r