Book TV: 2018 Annapolis Book Festival. CSPAN, April 28, 2018, 4:00pm-5:01pm EDT
>> here are some of the current best-selling nonfiction books according to the new york times. topping the list is madeleine albright's fascism: a warning, her warning against the rising fascist tactics of world leaders. it's followed by russian roulette, a look at russian hacking. after that is i'll be gone in the dark, the late true crime journalist michelle mcnamara's search for the golden state killer, who is responsible for a dozen murders and some 50 sexual assaults in california during the 70s and 80s. in fifth position, our look at some of the best-selling books according to the new york times continues with educated. in seventh is barbara ehrenreich's thoughts on the fear of death, natural causes, followed by secret empires, a report on business deals made by prominent political families. wrapping up some of the best sellers from the new york times are jeff benedict's biography of professional golfer tiger woods, and fire and fury. some of these authors have appeared or will appear on book tv; after the programs have aired you can watch them on our website, booktv.org. >> the annual annapolis book festival continues with an author discussion on artificial intelligence.
[inaudible] >> welcome. this is our last panel to finish out the day. we'll be talking about living in the age of artificial intelligence. my name is carol -- at georgetown university. we'll be talking about the emerging role of artificial intelligence in society. we have an award-winning entrepreneur and the founder and ceo of sparkcognition. paul is a senior fellow and director of the technology and national security program at the center for a new american security, a former pentagon official and an army ranger. he's the author of army of none. we'll start out by giving the authors a few minutes to talk about their books, their views, and how they framed artificial intelligence. >> thank you. it's wonderful to be here. it's my first time in this wonderful city. as far as the motivation for the book: when i wrote it, the concern i had was that i could not see much in the literature that approached artificial intelligence from a philosophical perspective. there are many scientific aspects one must consider.
at the same time, artificial intelligence can be seen as a mirror to society. many of the problems and concerns we worry about: will these machines be able to outdo us? will they kill us? beyond that, with machines that can outperform us at most forms of labor humanity has taken pains to master, does this mean the economy will crumble, that there will be no jobs? in many discourses the reaction was that these machines are taking something fundamental away from us.
if you think about the last centuries, even the way we name ourselves: we have last names like porter and farmer and goldsmith. we're so invested in the economic labors we're capable of performing that we confuse the skill with our intrinsic value. when we look beyond the horizon, the skill may not be very valuable anymore, and that creates some inherent angst. with those questions in mind i set about looking at the machine in two ways. one is practical, talking about what artificial intelligence is doing now. the one difference i may bring to this book is that almost everything i talk about in the book, i'm involved in.
sparkcognition is an artificial intelligence company based in austin, and almost every collaboration, with the london stock exchange, with large hedge funds, with the largest utilities in america and europe, runs through the book, providing vignettes on what artificial intelligence can do. we have a good idea where these things are going.
to not understand artificial intelligence, and to not deal with the idea of a synthetic creation surpassing humanity in many ways, a synthetic creation doing for the first time things we thought only human beings could do, would be a mistake. this upends our ideas of soul and free will and so on. these are the discussions i attempted to balance: the practical, what's happening now, and the philosophical underpinnings. i'll save the conclusion for a subsequent question. one or two of the chapters focus on where artificial intelligence is being applied to warfare. a colleague and i worked on this over many years, and about two years ago we published a piece in a journal.
from that came this idea of what hyperwar is in the modern age. that's been captured in the book, as well as what we hear today about elections being hacked; the book also talks about these threats and how we can mitigate them. i will stop now and give paul an opportunity. >> paul, if you can talk about army of none. >> i'm paul, author of army of none. what we dig into are the issues he raised at the end, about what artificial intelligence means for conflict and warfare. so 90 countries have drones, as do many nonstate groups like the islamic state.
this number keeps going up, and 16 countries already have armed drones. one thing we see in science fiction, a lot of our vision for where this is going, is that we build these more intelligent machines and then have this fear that someday they'll turn against us. actually, today's advanced robotic systems are being born with a gun in their hand, and each generation is becoming more autonomous. look at the top-of-the-line car today: many have features like automatic braking, intelligent cruise control, lane keeping. each generation has more autonomy and more intelligence.
the book looks at what happens when a predator drone has as much autonomy as a self-driving car. what are we willing to delegate when we're talking about life-and-death decisions? so we look at what militaries are building around the globe. we have interviews with people at darpa and the pentagon, and we look at what other countries are doing, and how these developments are taking us right up to the threshold of the technological ability to cross the line and allow machines to make their own decisions. right now humans are in control of deciding whom to attack. a lot of these technologies would allow someone to flip a switch and take the human out of the decision-making process.
in the second half of the book we deal with what that means, looking at the law, at what the criteria are, at the ethical arguments made both for and against the technology. for obvious reasons you can see how it could be a bad idea. but there are arguments in favor, and they draw on self-driving cars: just as automation might reduce accidents and save lives on the road, could there be ways that machines in warfare make better decisions? i also look at concerns about stability and the interactions between states. look at stock trading, where most trades are done by machines and bots.
we've seen accidents come from that: unexpected interactions between algorithms that lead to horrifying flash crashes. exchanges have been able to mitigate the challenge by installing circuit breakers that take trading offline if prices change too quickly. that doesn't stop the problems, but it keeps them from causing too much trouble. there's no one to call timeout in war. so what happens if there's a flash war? we deal with that in the book.
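A minimal sketch of the circuit-breaker idea described here, assuming a single percentage-move threshold over a rolling window of prices; real exchanges use tiered, regulator-defined rules, and every name and number below is illustrative:

```python
from collections import deque

class CircuitBreaker:
    """Halts trading when prices move too far, too fast.

    Illustrative only: real exchanges use tiered, regulator-defined
    thresholds, not a single percentage rule like this one.
    """

    def __init__(self, max_move=0.05, window=10):
        self.max_move = max_move            # e.g. a 5% move triggers a halt
        self.prices = deque(maxlen=window)  # last N observed prices
        self.halted = False

    def on_price(self, price):
        self.prices.append(price)
        oldest = self.prices[0]
        if abs(price - oldest) / oldest > self.max_move:
            self.halted = True              # take the algorithms offline
        return self.halted

breaker = CircuitBreaker()
for p in [100.0, 101.0, 100.5, 99.0, 93.0]:  # a sudden 7% slide
    if breaker.on_price(p):
        print(f"halted at {p}: cooling-off period, humans review")
        break
```

The asymmetry is the point of the passage: a market has a referee who can halt trading and let humans review; war has no equivalent pause button.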
the end of the book looks at the history of arms regulation. there's an international effort underway, led by about two dozen states, calling for an international treaty to ban these weapons. i look back at the history of trying to ban weapons, up to the modern age, and ask how humanity can deal with these technologies, and whether there are ways to come together and cooperate. >> there's a lot to think about, and some of it is things people are afraid of. one thing you say in your book is that we're starting our contemplation of ai from a place of fear rather than opportunity. what are the real opportunities we have in society with ai? >> let me start with the point about fear. like i said, artificial intelligence has been a mirror.
it's allowed me to understand myself, and humans, better than i understood them previously. we're aware of the cognitive biases that exist: we think something, but reality is different. our brains are subject to cognitive bias, and books written on the subject, like thinking, fast and slow, describe a huge list of them. you can just read the book: the mind plays tricks on you, and then you realize you made the wrong call. it's an easily confused mind, and that exists across humanity. so there's something to improve there. it's a pretty big gap, but something to improve.
think about artificial general intelligence, that being the kind of ai that is at human level or beyond, that could solve general problems and could learn anything. it can be athletic, it can sing and read books, it can drive a car, do all the things humanity can do. artificial narrow intelligence is where we are. the systems that my company, or google, or amazon build fall into narrow intelligence. within a certain domain they can outperform human beings. a computer program has beaten the world's best go player, the best atari game players. within certain domains we can obliterate human performance with these programs.
but they do only what they were made for. that alone has the ability to impact the economy in a substantial way. for example, there's a report that came out about the u.k. economy, which is similar to ours in terms of sophistication. its view is that in the near term, by the early 2020s, there will be a two to three percent reduction in jobs in the u.k., but by the mid-2030s, fifteen years later, that number will rise to 30%. for this we don't need commander data from star trek. we just require narrow intelligence: the kinds of systems driving cars now, turbines that predict when they're about to fail so you need fewer technicians, and
aircraft that, 30 minutes before they land, tell you what's wrong, so someone comes up with an ipad and follows step by step what to do, no thinking involved. these things are really out there; it's narrow intelligence making a difference. these are positives on the one hand, and they have an economic component. in other words, what do we say to the truck driver who's had a pristine career, never an accident, who's now in his mid-50s and has worked hard all his life at an honest job? we say technology has arrived, autonomous trucks are here, too bad, go back to college and get a degree? that sort of thinking doesn't square with a fundamental understanding of who we are.
you can't treat someone like that in society. the real challenge is delivering the benefits of automation and ai, the many things i described in healthcare and green energy and getting people out of hazardous situations; there is an unending list of things machines can do for us. but all of that is connected to work human beings do today, so getting a human being out of harm's way is connected to taking someone's job away. the solution isn't to ban technology. it's to ask what value is. value, beyond the needs of human flesh, is human agreement.
we agree now that rap music is valuable, so people who make rap music make money. we agree that people who can hit a baseball out of the park should make money. but if you went back 500 years, these were not the collective agreements of society. so value is really a consequence of agreement. if that's the case, it can free us to develop new models where we compensate people not just for what we call labor today, but for things we didn't previously think were valuable, though they are. the book goes into detail; in a nutshell, it's our ideas and perspectives that we bring to the table.
so how do you take that and create a monetary system around it? >> let's go back to artificial general intelligence. a lot of what people think about is what they learned from popular media: they think about skynet and terminator. what is myth and what is reality? what do these systems look like today? >> you can assume that anything in science fiction is myth. what we're seeing is advanced robotic vehicles: aircraft, drones, underwater and ground vehicles. they're most advanced in the air or underwater, and most challenged on land because of the difficulty of navigating on the ground.
things like gps, or the ability to map the environment, may not exist in a military environment. many of these systems look like advanced missiles: missiles that can adapt in flight to their target, maneuver around threats, and talk with one another, so, swarms of intelligent missiles. they also look like cyber weapons, which don't look like anything at all but can have a tremendous effect. there's an incredible program darpa ran a couple of years ago called the cyber grand challenge. it brought competitors together to build computers that scan software on their own for vulnerabilities that could be hacked or patched. they competed in a tournament to find these things.
they're now at the level where they're not as good as the top human hackers, but they're in the top 20, which is pretty good. that's enough to have a lot of value, and the pentagon is already deploying this kind of software to patch systems. all of these things are not intelligent like us. they don't understand the broader context, whether they're being used in the way they're intended; if the environment changes in ways they're not prepared for, they can't see the bigger picture. we can build things that work well and do what they were taught to do.
but if the situation changes, they don't understand that the war has ended, or that the environment has changed. >> so in your book you do a balanced job of talking about the reasons why we may want to leverage ai and the fears that we have. what is the moral issue at the heart of all of this? >> the essential question is how you use technology that could be beneficial without losing humanity in the process. let me give a story from my time in afghanistan that might illustrate this. i was on a ranger sniper team. we were on the pakistan border.
when the sun came up, well, there are not a lot of trees in that part of afghanistan, so the rocks didn't give us the cover we wanted, and the village beneath us saw us. it wasn't long before a girl came out, sent by someone to scout our position. she had a couple of goats behind her, but she was watching us and reporting on us; she had a radio and was reporting back. she circled around us maybe five or six times. she wasn't sneaky, she was staring right at us. so we watched her as she watched us, and eventually she left. then taliban fighters came. we took care of them, and the gunfight that followed brought out the entire village.
later on, we talked about how we would deal with something like that: if there's a civilian we saw herding goats or something, we could come up to them, see if we could grab hold of them and pat them down, so we would know whether we were compromised or not. one thing that never came up was the idea of shooting this girl. it wasn't a topic of conversation. under the laws of war, that would've been legal. the laws of war do not set an age for combatants; they determine combatant status based on your actions. if you're scouting for the enemy, you are an enemy combatant, and it would be legal to kill you. if you built a robot to comply with the laws of war, she would've been shot.
i think that would've been wrong morally, but allowable under the law. this is the challenge: humans understand context. there are other situations people may face that are more difficult in that way, that weigh very hard competing moral choices. do we want a war where we know machines are making those decisions? and what would that mean for warfare, or for us, if we did that? >> this idea of human judgment and the ability to intervene has a correspondence in the civilian space as well. you talk about examples where ai can help society, but you also talk about issues with it. how do we go about regulating and mitigating these negative effects?
>> there are two things. one, in general, if you look at it in objective terms, with human development indicators, the increase in the wellness of humanity is correlated with the development of the scientific process. the overall thrust is headed in the right direction, and artificial intelligence is just one area of that work, an area yielding new discoveries across a broad range. attempting to curb, stop, or drastically reduce the effort people can apply to it would be disastrous for scientific progress. now, what paul talked about, that was a difficult call.
here's another difficult one from the book, from research i cited earlier. a study was done of judges adjudicating the cases of people up for parole. there was a time pattern to whether they would grant or deny parole: around noon they would start to have a high rate of rejection. in the morning the rate of approval was high, then it fell, and after one o'clock it increased again. the psychologists in the study said it was because the brain requires glucose to think through hard decisions; when it lacks glucose, we default to one thinking system.
the default was to not let them go. the implication is that this little bug in the human brain is ruining dozens, hundreds of people's lives. it's just one example; with every cognitive bias there are issues and concerns. on both sides there are examples that are terrible. i don't worry about whether a machine that does not yet exist at that level can display benevolence. i worry about the benevolence of man. all these non-intelligent systems completely controlled by human beings carry risk too.
given the balance, there are examples of what paul is talking about, and examples on the other side as well. that alone doesn't qualify us to stop or curb development. the other part is this: during the time of george w. bush, restrictions were imposed on stem cell research in the united states. the consequence is that china became the largest sequencer in the world, and close to a hundred human stem cell experiments were carried out in china. they're now the leader in that space. so if you decide you're not going to do something, it doesn't mean it won't happen. you can't guarantee that by not doing it yourself, it won't happen, so it's not a simple decision.
then there's the prisoner's dilemma. no two parties have perfect information about each other's motives. by assuming the other party is doing the right thing, you lose; by taking the middle course, you gain. so the cooperative decision never really happens, because no one can assume the other party is doing the right thing.
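The standard prisoner's dilemma payoff table makes that logic concrete. The numbers below are generic textbook values, not from either book; "defect" stands for secretly continuing development:

```python
# Rows: my choice; columns: the other side's. Payoffs are (mine, theirs).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # both honor the ban
    ("cooperate", "defect"):    (0, 5),  # i honor it, they cheat: i lose
    ("defect",    "cooperate"): (5, 0),  # i cheat, they honor it: i gain
    ("defect",    "defect"):    (1, 1),  # both cheat: worse than mutual trust
}

# Whatever the other side does, defecting pays more for me...
for theirs in ("cooperate", "defect"):
    best = max(("cooperate", "defect"),
               key=lambda mine: payoffs[(mine, theirs)][0])
    print(f"if they {theirs}, my best reply is: {best}")
# ...so without verification, both sides defect and land on (1, 1).
```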
this is the problem even with nuclear weapons, but there you can fly drones and photograph large areas of land where nuclear reactors are being built. it's easier to tell a country, hey, you agreed you wouldn't, and now you are. but how can you verify, across an entire country and every computer connected to the internet, that some lines of source code are not present? it's not a verifiable activity. artificial intelligence is not a big building or a large aircraft. it can be lines of code that i might have in my pocket right now on a five-dollar disk. there's no way to validate the absence of that in a country, and that creates a situation where you have to assume it's there. that kills the whole idea: you can't ban it, because the folks who shouldn't have access to it probably will have it, behind closed doors.
>> i also want to invite any audience member who has questions to please feel free to use the microphone. so, on the inevitability of using ai: there's a lack of transparency about when it's even in use. we hear about the black box issue, but often we don't know when ai is being used at all. >> the benevolence, or the lack of it, comes from human minds. for example, social media has caused a lot of harm recently, in terms of individual privacy, and by becoming a platform for elections to be hacked, among many other documented things. how does that happen? you're connected to people who forward news and updates and so on,
but you don't get to decide what you see. there's a facebook algorithm in the middle that sits there and decides, and you can't directly influence that algorithm. why? why can't you have complete control? this is the question to ask. if facebook's model is ads, that's one thing. but if their model depends on understanding your psychology to a deep extent, and they're unwilling to give you any control to change the filter they use to target you, that is far darker than just trying to optimize the sale of an ad. the solution is to give the algorithm away, give it to the user. so i'm not asking for a ban on social networks.
ultimately, once technology is out of the box, you can't put it back in. >> i don't know if you've addressed the social consequences of this in your books, but it occurs to me that my father's prediction to me when i was 11, in 1965, when he brought a minicomputer into our home and said the world will never be the same again and we're not ready for the consequences, is really true. first we saw the hollowing out of the middle class, of how we grow food, and it seems to me that as ai comes into the world it's going to profoundly change how professionals who see themselves as part of the upper middle class work, and whether or not they're able to work.
you said we need to figure out how to value something else, but we've never had a society in human history where we rewarded people for anything other than work. if we don't need as many people to work, what do we do with them? >> that's a multi-trillion-dollar question. there are lots of proposals. you've heard about universal basic income. there are concepts that go back to the basic concept of rent: we each own a bit of the earth as human beings, and rent accrues from what we own on earth. my own suggestion is what i call the idea coin. not a literal piece of currency.
we're already connected on a network, and we contribute ideas that might be valuable to other people. as people find value in those ideas, their judgment of how valuable they are attaches a sense of value to that concept. and because i anticipate that on a system like this there will be ais and human beings operating at the same time, and the faster ais would take all the money away, i've also proposed embedding a diminishing reward function: for the first contribution you get ten, for the next eight, then seven, six, and so on.
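A sketch of that diminishing reward, assuming the "ten, eight, seven, six" schedule continues downward by ones to a floor; the talk only gives the first few terms, so the continuation and all names here are assumptions:

```python
def idea_reward(n, schedule=(10, 8, 7, 6), floor=1):
    """Reward for a contributor's n-th valued idea (n starts at 1).

    Follows the "ten, eight, seven, six" schedule from the talk, then
    keeps falling by one down to a floor -- an assumed continuation,
    since only the first few terms are given.
    """
    if n <= len(schedule):
        return schedule[n - 1]
    return max(schedule[-1] - (n - len(schedule)), floor)

# A fast AI flooding the network with ideas sees rapidly shrinking
# returns, while an occasional human contributor stays near full value.
print([idea_reward(n) for n in range(1, 12)])
# -> [10, 8, 7, 6, 5, 4, 3, 2, 1, 1, 1]
```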
the issue we have with finance capitalism now is that there are massive concentrations of money; embedding a function like this, making it fundamentally part of our economy, can mitigate that effect. there will still be rich and poor; we don't want to make everyone equal. but when inequality gets to its extreme edges, this can prevent it from going further. i'm not suggesting it's occurred, but i'm worried that governments are not properly talking about it. today you have russia thinking about these things, and china, more holistically. dubai, i was there two weeks ago and met with the first ai minister. in the united states there's a lot of work to do. paul and i both serve on an ai task force.
we're trying to get something going, but it's not at the level of other countries; we don't have that level of traction. we can find a solution, but we're behind the times, and even behind other countries. >> you spoke about whether ai has benevolent ends or not. i was wondering about unintended consequences, about things doing exactly what they're supposed to do, like the retailer that predicted a girl was pregnant and started sending baby notifications. or you think about autonomous cars and how you handle the infamous trolley problem.
you can crash and hit the bus full of nuns; it's not clearly a right or a wrong, but there's a middle ground where not all of us would come up with the same answer. is it enough to put one person in the middle of the kill chain, or is it something else entirely? >> putting a person there just to push a button is not an effective solution. we've seen examples where there have been people in the loop of highly automated lethal weapon systems, and people over-trust the automation.
we're looking for humans to be actively engaged in this slow type of thinking, understanding the context, and for humans to feel responsible, not "i'm just here to push a button, the machine is doing the work." in 2003, an air defense system shot down two friendly aircraft. humans were in the loop for those engagements, and afterwards, when investigators interviewed the operators, one thing they said was that they believed that if they overruled the automation while it was trying to target scud missiles, and they let a missile come in and people died, they would be in more trouble than if they just trusted the automation. so it's not just about how the system was designed; you have to ensure that humans feel responsible for the decisions they make.
>> paul gave you a fantastic answer; i'll answer from a different angle. i think about where we are in terms of improving human nature. you can ask me what improving is and what human nature is. even now, my belief is that we in america are blessed with a lot. geography has been kind to us. it could fill a book.
so we have a lot to be thankful for, and our situation is not the average situation across the world. we could've been somewhere else; we could have been born in a place where there are lots of stresses. what i have observed is that men and women, when subjected to stress, start to lose the higher-level functions of the brain and degenerate rapidly and quickly. the reason i'm interested in autonomy and artificial intelligence is that i feel that, short of reprogramming humanity's brains, the only way to create a better world is to reduce the amount of ancient stresses that cause us to fall back on the reptilian part of the brain. when we do that, we do worse.
whether that proves to be true is social engineering, but we see it happen: in richer countries, people generally behave better. what does that mean? does being richer make you a better country? i've traveled the world, and i can tell you there are wonderful people everywhere. what makes people better is when they know there's food on their table, that there's fundamental security, that nobody's coming after them in the dark of night, that their children have a future. there are consistent reactions to that. using automation to provide that is what -- argued for many years ago. he said the technology has already arrived for us to take care of our population.
now it's our choice if we don't do it. we used to dump grain into the ocean because we wanted to keep the price of grain high, as opposed to saying: if i gave this away free to some part of the world that needs it, could that reduce anxieties and negative behaviors, plant the seed of a new society that could one day become an economy, a customer? my way of thinking is that our best bet is not the politicians. it is not self-anointed messiahs. it's in science and technology, and in the people working on science and technology. that's my view.
>> thank you for being here. this is super fascinating. i have a few questions. the first one is around the difficult choices you mentioned, where machines could remove some of the psychological toll on people. i'm thinking in particular about policing in america: it's been clear that bias plays heavily into the decisions that law enforcement makes, and maybe ai could alleviate some of those conditions. the problem is that the group of people programming ai is not very diverse.
i'm wondering how you would address that in situations where you're creating something for law enforcement, which has to deal delicately with bias. >> the issue sometimes is not about the bias of the programmer. most of the current techniques are supervised learning techniques. you require millions of examples, each with a label of some object or state, maybe an animal or a door, and you require a large number of them to properly train the model. the issue is actually in getting a broad enough set of data: when you're talking about human images, the quality control problem manifests itself as poor recognition for certain ethnicities.
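A small sketch of that failure mode, with synthetic data standing in for labeled images: no rule about any group is written anywhere, but because one group is underrepresented in the training set, a standard supervised model works far worse on it. All data, numbers, and names below are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, informative_feature):
    """Synthetic stand-in for labeled images: only one feature tracks
    the label, and which feature it is differs by group (a crude proxy
    for distribution shift between demographic groups)."""
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 2))
    X[:, informative_feature] += 2.0 * y
    return X, y

# group a dominates the labeled data; group b is barely represented.
Xa, ya = make_group(5000, informative_feature=0)
Xb, yb = make_group(50, informative_feature=1)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# per-group audit on fresh samples: no rule about any group was ever
# programmed, yet the model is far worse on the underrepresented one.
Xa_test, ya_test = make_group(2000, informative_feature=0)
Xb_test, yb_test = make_group(2000, informative_feature=1)
print("group a accuracy:", model.score(Xa_test, ya_test))  # high
print("group b accuracy:", model.score(Xb_test, yb_test))  # near chance
```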
this goes beyond just faces. if you didn't have enough pictures of doors, it wouldn't recognize doors very well either. it's quality control and more data. the programmer isn't programming in any rules. he's not saying, if these features are true then it's a caucasian male or a black female; those distinctions are being decided by the data. so we need better quality control and more data. >> a lot of the problems are that you have this training data, then the system is deployed in the real world, and it's a different distribution. maybe the training data doesn't include people of all ethnicities.
so sometimes it's not that programmers are trying to be bad or biased; it's just bad data. another problem could be the goals. there's an interesting article by a journalist who was looking at youtube videos. as she started watching videos, youtube recommended content that was more inflammatory: she watched videos of donald trump at a campaign rally, and eventually it recommended more extremist right-wing speeches, neo-nazi stuff.
then she created a separate account and started watching clinton videos, and it recommended extreme left-wing videos. she started watching jogging videos, and it started recommending stuff about marathons. the system is trying to maximize the amount of time you're spending on youtube, for advertising purposes, and people tend to click on more inflammatory types of content. so the goals really do matter; in this case, they wanted to sell an ad.
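A toy version of that objective problem: an epsilon-greedy recommender that optimizes clicks alone drifts toward whatever gets clicked most, with no term for anything else. The content classes and click rates below are invented for illustration:

```python
import random
random.seed(1)

# invented click rates: more inflammatory content draws more clicks.
CTR = {"mainstream": 0.05, "edgy": 0.08, "extreme": 0.12}

shown = {k: 0 for k in CTR}
clicked = {k: 0 for k in CTR}

def estimate(k):
    # optimistic start so every class gets tried at least once
    return clicked[k] / shown[k] if shown[k] else 1.0

for _ in range(10_000):
    # epsilon-greedy: mostly serve the class with the best observed
    # click rate, explore 10% of the time.
    if random.random() < 0.1:
        pick = random.choice(list(CTR))
    else:
        pick = max(CTR, key=estimate)
    shown[pick] += 1
    if random.random() < CTR[pick]:
        clicked[pick] += 1

# the loop optimizes clicks and nothing else, so the feed drifts toward
# the most-clicked, most inflammatory class.
print(shown)
```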
and some of this is about who is building the tools we have to live with in this world: when the community building them is not diverse at all, is very white and male, and you add corporate interests, you get a range of other problems. >> thank you. we have time for one more. >> this has to do with the social aspects of ai. you mentioned that by the 2030s a large percentage of the population won't have the jobs they have today. given that our federal government gets most of its tax revenue from income tax on our labor, how do you see the government earning a large portion of its revenue? do you foresee a robot tax? >> the statistic i gave of 30% unemployment was from a report about the u.k. in the 2030s; the u.s. numbers are worse.
but my view is that there are going to be a few different ways to compensate, though i'm not suggesting all of it can be compensated. i think as we migrate toward services, a lot of caregiving-type professions employ more people than they used to, and that trend will continue: a lot of caregiving, and things we will always want human beings to do because they're associated with the human touch. the other thing is that i find so many people are youtube content creators; so many have big instagram channels. these are jobs that didn't exist before.
the independent programmer who doesn't work for a company can live anywhere in the world, working through companies online, and such people can earn hundreds of millions of dollars of foreign exchange for their countries. some democratization of work is already happening, alongside the jobs we will always want people to do; healthcare is a big one, where a doctor would be replaced before the nurse would. still, how will we make up the difference? among the things that have been talked about and are being experimented with, one is universal basic income: can you give everyone $10,000 or $15,000 as a base income? the country has some fundamental assets; the u.s. is the richest country in the world.
i don't mean in terms of the dollars we have, but the natural gifts we have been given. so there are things to monetize through machines so that we can share the bounty of that automation. you can look at that as a robot tax yielding resources back to humanity. the idea coin concept is another one that could lead to something. there might be five to eight of these proposals that come about. ultimately, the fact that some are already being tried gives me hope. -- are forward-looking and are thinking about ways in which they do things.
their environment is different, so that's good; they'll come up with other ideas. over the next years, that's what we have to do. technology has outpaced policy innovation, and now it's time for policy experimentation; we need to figure out how we can try these policies that might work. >> there's a range of predictions on automation. they run from people predicting mass unemployment to people saying there'll be so many jobs created we'll all be millionaires. the best analysis i have seen is work that basically said roughly half of the tasks people perform are automatable, but that only about 5% of jobs are entirely replaceable by a machine.
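As a back-of-the-envelope illustration of that tasks-versus-jobs distinction, consider the sketch below; the per-job automatable fractions are entirely made up, and only the overall shape (about half of tasks, few whole jobs) echoes the analyses he describes:

```python
# invented automatable share of each job's tasks -- toy numbers only.
jobs = {
    "toll booth operator": 1.00,
    "truck driver":        0.80,
    "paralegal":           0.60,
    "nurse":               0.30,
    "teacher":             0.25,
    "therapist":           0.10,
}

task_share = sum(jobs.values()) / len(jobs)
fully_replaceable = [j for j, f in jobs.items() if f >= 0.95]
disrupted = [j for j, f in jobs.items() if 0.0 < f < 0.95]

print(f"share of tasks automatable: {task_share:.0%}")     # roughly half
print(f"jobs entirely replaceable:  {fully_replaceable}")  # the rare cases
print(f"jobs disrupted, not gone:   {len(disrupted)} of {len(jobs)}")
```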
those 5% are the toll booth operators. most jobs will see some disruption: some tasks automated, and new tasks and jobs created. the best work i've seen argues that automation has already been responsible for the suppression of median wages in the united states, particularly for less educated workers. automation picks up routine labor generally performed by those with less education, so we've seen a really stark drop-off in their wages.
we've done some work where we looked at the automation data and compared across demographic categories, and it shows a lot of this falling on the youngest in the workforce. you might think about people in their 50s, for whom it is hardest to adjust, but the bulk of it will land on people who are just starting to enter the workforce. they have the least amount of education. they also have the most time; the challenge is that we need education policies to make it possible for them to get a good education.
if you think about people who are using that lower-income job as a first rung, one thing automation is doing is taking away the stepladder they're using now. there are policy solutions that could help with that. >> i want to thank you both for an interesting conversation today. [applause] >> you've given us a lot to think about. you each have 30 seconds to give one last thought. >> i'll have to think of something. a lot of people worry that as computers become faster, there's no value left in the human mind. consider, though, that there is an infinity of ideas for machines and humans to discover.
machines will discover as much of that infinity as we will, which is almost nothing. against infinity, speed doesn't matter; what matters is perspective. that's the reason we continue to have intrinsic value as human beings, regardless of how fast the machine thinks. >> is the machine better than humans? one question i like to ask is: if we had all the technology we could imagine, what would we want humans to be doing, and why? what decisions do we want humans to make in our society? >> perfect. thank you. [inaudible conversations]