Key Capitol Hill Hearings CSPAN November 14, 2016 9:19am-11:20am EST
9:19 am
coming up on c-span3, a look at the use of artificial intelligence by the military. then former ambassador robert gallucci discusses nuclear threats on the korean peninsula. we talk about the state of u.s. infrastructure and modernization efforts. live coverage beginning at 12:30 p.m. eastern on c-span. today, republican congressman kevin brady and u.s. trade representative michael froman join experts for a discussion on trade policy, how it factored in the election, and what to expect from the trump administration. we begin our live coverage, hosted by politico, at 5:00 p.m. eastern on c-span2.
9:20 am
tonight on "the communicators," scott keyo, president of audi america, talks about autonomous cars, the hype from the auto industry about they're almost ready and with when they will be on the market. >> see what pittsburgh is doing at carnegie mellon. lo look, in the automotive business b, we're used to a lot of hype. and i think when it comes to everyday matters, a little bit of marketing hype is okay. when it comes to matters such as this, i think it's a little disingenuous because words are flippantly thrown around. when someone says auto pilot, self-driving, but what a consumer thinks, i come out of my home, i hit a button and that car will take me anywhere in america i want under all conditions. we all know that's not the case. >> tonight at 8:00 eastern on
9:21 am
c-span2. >> next, a discussion with technology experts and federal officials on artificial intelligence and its use by the military. cohosted by carnegie mellon university and the carnegie endowment for international peace, it's 90 minutes. good afternoon. thank you all for joining us this afternoon, and welcome to those of you who are joining us just now. also welcome to those of you on the live stream who are joining us today. the hashtag for the event is #carnegiedigital. it's a pleasure for me to welcome you all here today together with david brumley, who i will introduce in just a moment. this is the second panel of the first part of the carnegie colloquium.
9:22 am
there will be a second part taking place on december 2nd in pittsburgh to which you are also invited. in case you're interested, please make sure to drop your business card off outside or send us an e-mail. this second panel focuses on autonomy and counter-autonomy in the context of military operations. as i explained earlier this morning for the first panel, this event is designed to combine the tech expertise of carnegie mellon university with the expertise of the carnegie endowment. it will be followed by a panel discussion with experts from around the world. so we are particularly pleased and delighted to have people from israel and india who came all the way specifically for this event. it's now my pleasure to introduce you to david brumley, director of the security and privacy institute at carnegie mellon university. he's also ceo of a company called forallsecure, which won the darpa cyber grand challenge this year.
9:23 am
it's a great pleasure to have him here. with that, i look forward to the panel discussion. thank you. [ applause ] >> thank you, everyone. if you read the headlines today, you'll come across headlines such as: russia is building robots to fight on the battlefield. the u.s. navy is developing swarms of unmanned drones. and darpa commissions a fully autonomous cyber bot competition. these highlight the increasing role of autonomy in the military. in the second panel we'll take an international perspective on what autonomy and counterautonomy mean in military operations. as tim mentioned, my name is david brumley. i'm a professor and director of cmu's security and privacy institute. i also consider myself a hacker, as i run the hacking team that many people have talked about. my job for the next ten minutes
9:24 am
is to give a high-level overview of the issue, why it's so exciting, why it's so timely, and why it's so important to get absolutely right as we go forward. this panel's issue in a nutshell is that countries around the world, including the u.s., russia, israel, china, and india, are increasingly deploying and investing in artificial intelligence and autonomy in their operations. autonomous technology, once the work of science fiction, is here today. for example, in pittsburgh, you can use your uber app to summon a completely autonomous vehicle to take you home from a steelers game. don't just think physical. think of cyber space. think of social. for example, in august this year, darpa demonstrated that it's possible to build fully autonomous cyber bots for full-spectrum offense and defense. it then went on to demonstrate that these bots can supplement human capabilities in the defcon competition.
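a minimal sketch of the kind of autonomous find-and-patch loop being described -- the function names and checks below are hypothetical placeholders, not darpa's or any team's actual cyber grand challenge code:

```python
# hypothetical sketch: a service defends itself with no human in the loop.
# fuzz_for_crash, generate_patch, and field are stand-ins for real fuzzing,
# patch-synthesis, and deployment machinery.
import random
from typing import Optional

def fuzz_for_crash(binary: bytes) -> Optional[bytes]:
    """stand-in for a fuzzer: occasionally 'finds' a crashing input."""
    return b"\x41" * 128 if random.random() < 0.3 else None

def generate_patch(binary: bytes, crash: bytes) -> bytes:
    """stand-in for automatic patch synthesis (e.g., inserting a bounds check)."""
    return binary + b"<bounds-check>"

def field(binary: bytes) -> None:
    print(f"deployed patched service ({len(binary)} bytes) without a human decision")

def autonomous_defense(binary: bytes, rounds: int = 5) -> bytes:
    # the whole detect-patch-redeploy cycle runs autonomously, round after round
    for _ in range(rounds):
        crash = fuzz_for_crash(binary)
        if crash is not None:
            binary = generate_patch(binary, crash)
            field(binary)
    return binary

autonomous_defense(b"vulnerable-service-v1")
```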
9:25 am
we also need to think about social networks, where autonomous systems can be used to sway the opinion of a population. key pros include faster and better decisionmaking in weapons systems and cyber space operations, and it even creates the possibility of fully robotic soldiers in warfare. these are all significant benefits that lower costs and lead to better protection of human life. however, there are significant policy, legal, and ethical questions. many questions revolve around how much control we should cede to machines. what sort of actions should we allow machines to take, and when? and how do we handle the case when machines make mistakes, when there are bugs that could be exploited by adversaries? autonomy results from delegation of a decision to an authorized entity to take action within specific boundaries. we'll be talking about delegation of a decision. in the context of this panel, we delegate that decision to a computer program.
9:26 am
an app, if you will. everyone is familiar with apps like games and web browsers. but these are not autonomous. they follow a fixed set of rules and interact with the user in a very limited way. an autonomous system must be more than an app following a prescriptive set of rules. it must be able to make a decision about how its actions will affect the environment. today we focus on autonomous decisions where we delegate a decision to take action, and that action has been ceded to a computer app. that app interacts with the world and the world interacts with it. i also want to set the stage for the size and the scope of the investment in autonomy. i want to use the u.s. department of defense and its history as an illustrative lens.
9:27 am
autonomy and ai are center stage. this strategy is called the third offset strategy. when i first heard this phrase, offset strategy, i didn't really understand what it meant. it seeks to offset a numerically superior force with technical supremacy. an offset strategy allows someone like the u.s. to win without matching the enemy tank for tank or plane for plane. to get a sense of the scale, the very first offset was our nuclear weapons strategy. the u.s. invested heavily in nuclear weapons, especially battlefield and tactical nuclear weapons, because it provided an effective deterrent. we didn't have to match the enemy tank for tank, plane for plane. in the mid-'70s, russia reached nuclear parity with the u.s. and the offset was no longer an offset. the u.s. and other countries started looking for other offsets. the u.s. came up with the second offset, where the idea was that by using accurate guided munitions delivered by effective delivery systems, you could achieve the same effect without the collateral damage. this investment led to huge
9:28 am
advances in science that went beyond the military domain. things like gps wouldn't have been possible if the u.s. didn't invest in this idea. we expect the investment and the radical change in international policy to be just as significant. the race to autonomy to implement these sorts of offset strategies is not only happening in the u.s. it's also happening in other countries. for example, russia and china, which i mentioned just a few minutes ago, are investing in roboticized armies. in 2014, a bank of america report stated that companies invested more than $2 billion in autonomous systems, led by tech companies such as facebook and google. we don't get to just deploy autonomous systems and call it done, though. once we deploy them, they themselves may become targets. that leads to a notion of counterautonomy, where adversaries may go after the
9:29 am
autonomous systems themselves as a way of getting at their intended target. as an example, just to kind of put this in scope, there's a very famous chess computer called rybka. it was defeated because someone found a flaw in the engine. in chess, if you go more than 50 moves without moving a pawn, it's a draw. the chess engine had a flaw where it would try to avoid a draw under all circumstances. this player would go after the autonomous system by offering a piece as a sacrifice. the computer thought it was a piece up. the player would make 49 moves without a pawn move. the computer would say, oh, no, a draw is coming up, and it would try to avoid it. and the player could go to town. this is going after the algorithm, not just the chess game. autonomy is going to be huge. it's critical we get it right. the stakes are extremely high for many reasons. one of them is autonomy will drive us to take decisive action faster and faster. these actions will be in the cyber domain and the kinetic domain.
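to make the chess example above concrete, here is a toy sketch -- hypothetical code, not the actual engine's evaluation function -- of how a blanket draw-avoidance penalty can be exploited:

```python
# toy illustration of a draw-avoidance flaw in a chess evaluation function.
# the numbers and rule handling are invented for the example.

DRAW_PENALTY = 10_000  # flaw: an impending 50-move draw is scored as worse than anything

def evaluate(material_balance: int, halfmove_clock: int, move_resets_clock: bool) -> int:
    """score a candidate move for the engine (higher is better)."""
    score = material_balance
    # if the 50-move counter is about to expire and this move does not reset it,
    # the engine treats the move as a near-certain draw and avoids it at any cost
    if halfmove_clock >= 98 and not move_resets_clock:
        score -= DRAW_PENALTY
    return score

# the exploit: sacrifice a piece, then shuffle for ~49 moves. with the clock about
# to expire, the engine prefers ANY clock-resetting move -- even a losing one --
# over the "draw", throwing away its material advantage.
quiet_move     = evaluate(material_balance=+3, halfmove_clock=99, move_resets_clock=False)
losing_capture = evaluate(material_balance=-2, halfmove_clock=99, move_resets_clock=True)
print(quiet_move, losing_capture)  # -9997 vs -2: the engine picks the losing capture
```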
9:30 am
remember what i said: autonomy is a delegation to an authorized entity to take action within a specific boundary. i want to think about a couple of different dimensions. first, what decision is being delegated? second, in what circumstances? and third, what are the appropriate boundaries for using this sort of technology? and to dig a little deeper, what decision is being delegated is a distinct question. countries are putting stakes in the ground on how they're going to think about this. the deputy secretary of defense in 2014, when he was questioned whether a computer would ever take lethal action, said humans in the united states' conception will always be the ones who make the decision to use lethal force, period, end of story. but the pace of technology makes applying these high-level philosophies and principles to different situations difficult. for example, should an autonomous system shoot a suicide bomber before they act?
9:31 am
is that okay? is that defense? is that offense? when is the decision ceded? he goes on to say, and he qualifies this, that there may be times when it's okay for the computer to take control. for example, suppose you've got 60 missiles coming at you. there's no way a human is going to be able to sort all that out. the human will make the decision, but make it ahead of time, for the computer to be able to react to that. this isn't a hypothetical conversation. it's here today. for example, consider for a minute the fire and forget missile systems. we've all probably heard of these in the newspaper. one example is the uk brimstone missile, which groups that one of our panelists
9:32 am
serves on have examined, and it illustrates that there are no clear lines for when we've ceded control. the fire and forget systems are often described as autonomous. some will say they're semi-autonomous. it really just depends on which definition you're looking at. the british air force described it as effective against all known and projected armored threats. brimstone's radar seeker scans and searches, comparing returns to known target signatures in its memory; the missile rejects returns which do not match and continues searching and comparing. the missiles can be programmed not to search for targets, allowing them to safely overfly friendly forces, or only to accept targets in a designated box area, thus avoiding collateral damage. this is an interesting question with fire and forget, because the control has been ceded ahead of time. someone has decided to use lethal action, but it was up to the computer to identify who to take lethal action against. there's another, more subtle question: what do we do when there's a bug in the software, so that it maybe misidentifies where it's supposed to go?
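a rough sketch of the kind of accept-or-reject logic just described -- the signatures, coordinates, and box are invented for illustration, not the actual missile software:

```python
# illustrative sketch: compare radar returns to stored target signatures and only
# accept targets inside a pre-designated box set before launch. all values are made up.
from dataclasses import dataclass

@dataclass
class RadarReturn:
    signature: str                 # stand-in for a radar profile; here just a label
    position: tuple[float, float]  # grid coordinates of the return

KNOWN_TARGET_SIGNATURES = {"tank_profile_a", "tank_profile_b"}   # loaded before launch
ENGAGEMENT_BOX = ((10.0, 10.0), (20.0, 20.0))                    # designated by the operator

def inside_box(pos, box):
    (x1, y1), (x2, y2) = box
    return x1 <= pos[0] <= x2 and y1 <= pos[1] <= y2

def accept_target(ret: RadarReturn) -> bool:
    # reject returns that do not match a known signature, and reject anything
    # outside the box the human operator designated before firing
    return ret.signature in KNOWN_TARGET_SIGNATURES and inside_box(ret.position, ENGAGEMENT_BOX)

print(accept_target(RadarReturn("tank_profile_a", (12.0, 15.0))))  # True: engage
print(accept_target(RadarReturn("truck_profile", (12.0, 15.0))))   # False: keep searching
print(accept_target(RadarReturn("tank_profile_a", (30.0, 5.0))))   # False: outside the box
```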
9:33 am
so finally, what are the constraints? again, a very realistic question today. if we go back to the uber example in pittsburgh, suppose a pedestrian walks out in front of a self-driving car and it can only miss the human by driving off a bridge. who should you save, the driver or the pedestrian? a good question. there's no clear solution. and in military operations, we often have similar questions. who are we going to save when given the choice? how are we going to program the objective functions in these military operations? so with that framing, i would like to introduce our moderator and speakers. our moderator is the vice president for studies at the carnegie endowment for international peace, george perkovich. george, can you please step up. his work is primarily on nuclear strategy and nonproliferation issues and on south asian security. george is the author of the prize-winning book "india's nuclear bomb," called an extraordinary and definitive account of 50 years of indian policy making. george has been a member of the national academy of sciences committee on arms control and international security, the council on foreign relations task force on
9:34 am
nuclear policy, and many other such advisory committees. thank you, george, for joining us today. our panelist is daniel reisner -- can you please come up. daniel is a partner at the herzog fox &amp; neeman law office. he joined in 2008 as the firm's public international law, security, and defense partner. he is recognized as one of israel's leading public international law experts; for ten years he served as head of the israel defense forces' international law department, where he was the senior lawyer responsible for advising the israeli leadership. i hope you can advise us on this issue as well. i would also like to invite mary wareham, advocacy director of the arms division at human rights watch, where she led the human rights watch advocacy against particularly problematic weapons that pose significant threats to civilians. she is also serving as the
9:35 am
global coordinator of the campaign to stop killer robots, and she is one of the people i quoted earlier on the uk brimstone. she worked for the vietnam veterans of america foundation, assisting jody williams and coordinating the international campaign to ban landmines, co-laureate of the 1997 nobel peace prize. finally, general panwar, who served in the corps of signals of the indian army. he retired after 40 years of active military service in april this year. his last appointment was commandant of the military college of telecommunication engineering, which carries out training in the fields of ict, electronic warfare, and cyber operations, and is also designated a center of excellence for the indian army for these disciplines. the general officer has received many awards.
9:36 am
i want to call out a few of them. he has been recognized by the president for distinguished service in the defense forces. he has also received an award from the department of defense production for r&amp;d work. and last year he was conferred the coveted distinguished alumnus award by the indian institute of technology bombay and is the only defense officer to ever hold such an honor. with that, thank you, panel, and i'll turn it over to george. >> great, thanks a lot. thank you. what we want to do is have as much of a conversation as possible, first amongst ourselves up here, and then with you all, to basically draw out a number of the dilemmas in this area and to help identify the questions that might be the most worth pursuing as different countries and different actors move down this agenda. so to start us, i want to ask the general to build on what david said a bit. certainly there must be other drivers beyond dealing with numerical asymmetries that would
9:37 am
make autonomous systems attractive to a military and to a government in terms of problems they solve and advantages they confer. can you give us your perspective: what are the attractions of autonomy in this space? >> i'll start by saying that one can't get away from the fact that weapons are meant to destroy and kill. but they are supposed to destroy and kill military potential, and the idea is not to affect the noncombatants. the noncombatants have to be saved. a major question you have to ask is, does ai have the potential of reducing the unintended destruction of noncombatants? i feel that in a sense, by its
9:38 am
character, artificial intelligence has great potential towards this goal. having said that as an opening, let's see how warfare has actually been changing in the last few decades. there are two things which have happened. on one front, there is a change in the nature of warfare from the conventional to what is normally referred to as fourth generation warfare, where the lines of politics and military are blurring. so there is a different context in fourth generation warfare. india happens to have the context of both conventional warfare as well as fourth generation warfare, and so some of the things in the discussions which come up, at least my examples, really get related to
9:39 am
how the benefits turn up here. the other change in warfare which is happening has to do with the information age. now, here again, you have on the one hand cyber warfare and electronic warfare, which is one thing that is happening in the information age. coming to the relationship with artificial intelligence: because of information coming into the weapons systems, what you have been getting over the years is greater and greater precision in the weapons systems. now ai, again, has the potential of increasing this precision, and discrimination, as we'll be discussing, i'm sure, as part of the panel. and that is where, again, the aspect of having fewer and fewer noncombatant casualties is going to come in. now, when it comes to specifics as to what the types of systems are, there is an increasing degree of
9:40 am
what ai can do. so let's start with just four different examples. first, you can have a defensive system, for example the handling and defusing of ieds. there the adversary is not involved, and ai can do a lot in coming up with such systems in any case. at the next level, you have defensive ai. so we talk of, you know, systems like phalanx, which have been deployed for decades now. there, the missiles are coming in, you're destroying the missiles; autonomous systems, ai autonomous systems, are in place so that casualties are reduced. and third, you have
9:41 am
precision coming in now. so you can have offensive systems. for example, armed drones which are autonomous -- you already have armed drones in effect, but now you have autonomous armed drones without the pilots -- that's the third level, where the offense is coming in. at the fourth level, if the graduation of the ai takes place and it develops to the extent where it can also mimic the empathy and judgment aspects, when it graduates to that stage, further savings will be possible. there are many other benefits which we can talk about, but in increasing degrees of complexity as a.i. graduates, i would say these are the four areas we can talk of as a starting point. i think i'll stop right there. >> thank you. that was a brilliant setup. you raised a number of issues
9:42 am
that i think we'll dive further into, including the questions of offense, defense, and other functions. let me turn to mary and in a sense ask you to respond, but in particular to the extent this capability allows one to be more discriminating and precise. when you look at kind of parsing what can be advantageous in these capabilities from what should be avoided, can you home in on what the distinctions should be? >> you talked about the dangerous tasks autonomy has been used for in the military -- cleaning ships, disposal of explosives, robots to assist the soldiers -- and now we're moving into a phase where we see greater autonomy in weapons systems, and
9:43 am
that's seen with an autonomous aircraft that can fly great distances and carry a great payload. we mentioned some of these systems in the first report we did back at human rights watch, called "losing humanity." in our view, they were not fully autonomous. they had a degree or nature of autonomy in them, but they were not fully autonomous. we called for a ban on fully autonomous systems. the call is for a preemptive ban on future systems and not the ones that are in use today. but we did that because we looked at where the technology was headed.
9:44 am
we said we're concerned about where this is headed. we're worried about this. that was part of the rationale behind forming this campaign to stop killer robots, which launched in 2013 and is still going. it's a global coalition. i coordinate it on behalf of human rights watch. you know, this is not a campaign against autonomy in the military sense. it's not a campaign against artificial intelligence. there are many people working in autonomy and artificial intelligence. it's a campaign to draw the line and establish how far we want to take this. so you can view the call of the campaign as being a negative one, calling for a preemptive ban on the development, production, and use of fully autonomous weapons, or you can view it in a positive way, in terms of how we want to retain or keep meaningful human control over weapons systems -- not over
9:45 am
every aspect of the weapons system, but over the two critical functions of the weapons system, which in our mind are the selection of a target and the use of force. we know that sounds very easy. it's harder to put into practice, but this is where the debate has been centering for the last few years when it comes to autonomous weapons systems. >> let me draw you out and then i'll turn to daniel. you talk about drawing the line, and what i take as drawing the line is basically at target selection and the decision to fire, as it were, saying there should be a human there. i get that in a sense, but in terms of objectives, if an objective were to minimize casualties or the risk of indiscriminate -- civilian or
9:46 am
non-targeted -- deaths, i would say: if different versions of these weapons could be demonstrated to provide more precision and reduce collateral damage and inadvertent deaths, why should it matter whether a human was in the loop or not? i'm not trying to argue with you. i'm trying to draw you out about the principle of a person in the loop as distinct from the outcomes -- i'm related to people that i don't want in the loop. i'm of croatian descent. tell me what's wrong with that.
9:47 am
>> there are many benefits to employing autonomy in the military sphere. our concern is we're going to have stupid systems that are weaponized before we have the smart ones that can do the level four, the mimicking of human judgment and empathy. our concern is we're going to have stupid autonomous weapon systems being deployed before we have these super smart ones, which are further in the future as we understand it. it was first the roboticists and the ai experts that came to us and said, you don't understand what could go wrong when these are deployed in the field. we have many technical concerns.
9:48 am
there will be unanticipated consequences, and unanticipated things will happen there. but then the other kinds of elements of the campaign that have come on board -- the faith leaders and nobel peace laureates -- are worried this will make it easier to go to war, because you can send the machine rather than the human. of course, we want to try and keep humans out of fighting as much as possible. but the fear is that if the human soldiers are not in there, just the machines, it will be a worse situation on the battlefield for civilian populations.
9:49 am
this is why we see a need to draw the line. >> daniel, let me draw you in on any of this, but in particular how you have thought about whether there's a valid difference between offense and defense, or territoriality. i'm listening to mary going, i totally get that if you're operating on someone else's territory. >> let me start by saying autonomous weapons systems are already here. the issue is no longer only forward facing. it's also current facing. and while we don't know all the autonomous systems out there, because some of them are closely guarded secrets, we know a lot of them. and i think mary is right in one respect of that. the capability to deploy
9:50 am
autonomous systems is still outpacing the capability to train them to be human replacements. now i say that in spite of their advantages in speed or numbers of calculations, et cetera. one of the problems we face is that what we want to train the autonomous weapon system to do, we're not sure how to do. let me go into that for one minute, because you'll see i'm sort of sitting in between the two positions. i used to train soldiers to comply with the laws of war. and when we train human beings to do so, we have a system. it's more or less the same in military organizations. we have a specific set of rules. there's the principle of discrimination. you have to discriminate between a legitimate combatant and a non-combatant.
9:51 am
when we think about how to try to teach a computer to do this, we're not sure how to do that as human beings. artificial intelligence doesn't learn like a human being. it learns differently, and there are different ways to teach computers, but none of them are putting them in a classroom, giving them a lecture, and then taking them into the field to try out a few dry runs. we learned that the old ways we taught don't work on computers. the first point i want to stress is we see a chasm opening between the ability to deploy the autonomous systems and the capability of teaching them what the rules are. obviously, that gap will close
9:52 am
as computer systems continue to develop. and that is quite possible, but to be fair i think the military hardware is outpacing the ai side currently. that's my first point. my second point is about what is going to be relatively easy to field. again, we discussed this in e-mail before the panel. most autonomous systems today are still stationary. why? because movement for autonomous systems is complex. it's difficult. you have object avoidance. you need lots of different types of capability.
9:53 am
now the oldest autonomous weapon system on the planet is the land mine. some people would say it's semiautonomous because of the way it works, but if you want to go into detail, take the acoustic naval mines deployed in the 1950s and 1960s: there were small computers on board, and they would only target enemy vessels with a particular acoustic signature. they were actually, i think, the first really solid autonomous weapon systems in the world. those have been around for a long time. however, they don't actually go around and try to find targets. that adds a level of complexity which is huge. so the autonomous machine guns connected to land radars, which the south koreans and other countries use, those are staying in one place. when you go into a territory where a machine has to learn the environment -- and this is a
9:54 am
very complicated machine vision problem, looking at territory it does not know and identifying human from non-human and friend from foe -- to do that is a complicated problem. now, i say that with one final comment. i'm not even 100% sure the rules we have actually fit robots. and i'll explain why. you see, we built the rules for combat today for humans, and they come with a few hidden assumptions. one, human beings make mistakes and we're okay with that. we accept a certain level of risk in combat for human soldiers. you're allowed to make a mistake if you're a soldier. it's not a war crime to make a mistake. it's a war crime to do something really nasty. i'll tell you a very sad story. in one of the israeli military operations 14 years ago, there
9:55 am
was -- the terrorists had fielded one-ton ieds under the roads to blow up tanks. and tanks couldn't withstand the blasts, and they were blowing up. one israeli tank was traveling in the location, and the crew were really on guard for that event. suddenly they had this huge boom from the bottom of the tank. they were sure they had gone over an ied. they were searching from the turret. they look into the periscope and they see two people running away from the site. they shoot them and manage to hit them. only ten minutes later did they realize it wasn't an ied.
9:56 am
the tank had gone over a huge boulder, which hit the bottom of the chassis of the tank, and it sounded like a huge explosion. the two people were innocent. the reality is the crew had killed two innocent people because they were in a combat situation. there was a military court-martial. they weren't found guilty. are we willing to give computers the benefit of a mistake? now, remember, human beings get self-defense as a defense in criminal proceedings. are we going to give autonomous systems self-defense, the defense of necessity? all of our system is geared for human beings, so the bottom line is not only is it difficult to train the robots for the rules, i'm not 100% sure the rules are ready for artificial
9:57 am
intelligence. >> i'm processing that, because i think you're onto something that's obviously extremely important in terms of whether the challenge is to develop new rules to deal with ai, or new rules to deal with an even broader category, and what the expectations are. my sense is we can all learn a lot in terms of how we think about this, and how we might think about it by coming at it from the liability side rather than trying to define autonomy or non-autonomy, what should be avoided, and how we go about it.
9:58 am
i think if we go about it in a case-law sense, the potential is enormous. i want to come back to a number of things that you said. does it, from any one perspective -- is the distinction between stationary and mobile an important distinction? if one thinks about prohibitions or what could be avoided, does it matter? and then, relatedly, the defense of one's territory versus action outside one's territory. do you want to jump in on those two points? >> can i say a couple of things? >> sure. >> i think the feeling was that stupid autonomous systems will be deployed before the actually intelligent ones, and so that is not acceptable.
9:59 am
well, that is actually presuming, i feel, that the designers of the weapons and the people who are deploying them are doing it in an irresponsible manner. that's a fear which is there, but i don't think that's the way inductions of technology into the armed forces are done. we have to look at what is inherently wrong with fully autonomous weapon systems. it is not autonomy we're campaigning against; it's fully autonomous weapons systems, and meaningful human control is what's being looked at. and there is a question of what a fully autonomous weapons system actually is. weapons systems that are fully autonomous are those which can select and engage targets without human intervention. that "and" is important. what is meant by selection? only selection is acceptable. only engagement is also acceptable.
10:00 am
but selecting and engaging is where the line is being drawn, and the reason behind that is that between the selecting and the engaging there's a decision point. and that decision to kill is what is at issue today -- that the decision should be left to a human, not the machine, is one point of view. i would like to make the point that if we're looking at the various technologies in the kill chain, as we talk about it, where you first identify, you navigate to objects -- in fact, if we look at the u.s. dod directive 3000.09, it specifically brings this out. in all these functions, autonomy is permitted. nobody would even object to it. it is only the decision to kill. the point which i really want to make is the complexity of ai is
10:01 am
10:02 am
technology is concerned. coming to the question that you asked about defense and offense: actually, any military person would know it's not about defense and offense. there's no difference between the two -- there's just an aspect of mobility coming into it. the autonomous systems which are meant for defending and for going into offense would really be of the same nature. i don't see any difference between the two.
10:03 am
>> what about territoriality? you can avoid that distinction by saying you can operate it on your territory, but not outside your territory. >> okay. i'll elaborate a little more on that. i brought out this aspect of conventional warfare vis-a-vis fourth generation warfare scenarios. i'll take an example from india. if you have a line of control, there's a sanctity to it and it cannot be crossed. if we're looking at that scenario, then if you try to defend, that defense involves -- there's not much mobility.
10:04 am
you could have non-mobile robots looking at the defense. when we have gone into a conventional operation, then when you're talking about defense, you're also talking of mobility. so you attack. what i'm saying is, depending on which backdrop you are looking at, defense may or may not involve mobility. that's why i'm saying that in general, to try and draw a distinction between offense and defense may not be very correct from a technology point of view. however, it would be more acceptable to those who do not want to delegate to the machines. >> i'll ask daniel to comment on the territoriality thing. i'm thinking of the iron dome, which operates over israeli air space. then there's the wall. i'm thinking of analogies because the general is talking about the line of control, which demarcates the part of kashmir that india controls. you can imagine that kind of boundary being a place where one might put autonomous weapons to prevent infiltration that's not supposed to be coming across. on the other hand, presumably,
10:05 am
like in the last month when india's -- when there's movement going back and forth, you might want to turn those systems off so you don't hurt your own people. given israel's experience and your experience there, does the distinction of territoriality matter practically or legally, or no? >> if we take, for example, the iron dome system, it has been made public that the iron dome system has three different settings. you have the manual setting, the semiautomatic, and the automatic setting. and it's a missile defense system, right? and the idea is you want to shoot down the missile over a safe location, so part of the
10:06 am
algorithm there is to know where the missile is going to land. the israeli system works like this: it calculates where the missile is going to hit, because it is on a ballistic trajectory, so it's not going to deviate from its track. you automatically do a lot of things. then, if it's going to hit in a dangerous place, it calculates where to shoot it down so that it minimizes damage. theoretically, boundaries are not relevant for that. if you can catch the missile early enough, we wouldn't care if it landed in another country's territory. the idea being that the system is not supposed to take boundaries into consideration; it's supposed to take human lives into consideration.
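a rough sketch of the interception logic just described -- names, coordinates, and thresholds are invented for illustration; this is not the actual iron dome algorithm:

```python
# purely illustrative: decide whether and where to intercept an incoming rocket
# based on its predicted ballistic impact point, not on borders.
from dataclasses import dataclass

@dataclass
class Track:
    predicted_impact_km: tuple[float, float]   # ballistic trajectory -> impact point computable early
    seconds_to_impact: float

def impact_endangers_people(point: tuple[float, float]) -> bool:
    """placeholder lookup against a map of populated areas (hypothetical bounding box)."""
    (x1, y1), (x2, y2) = ((0.0, 0.0), (5.0, 5.0))
    return x1 <= point[0] <= x2 and y1 <= point[1] <= y2

def plan_engagement(track: Track):
    # 1. if the rocket will land harmlessly, do nothing -- no interceptor is fired.
    if not impact_endangers_people(track.predicted_impact_km):
        return None
    # 2. otherwise pick an intercept point whose debris falls where it does least damage;
    #    note that nothing in this logic looks at boundaries, only at predicted harm.
    return {"action": "intercept", "aim_point": "earliest point over open ground",
            "must_fire_within_s": track.seconds_to_impact}

print(plan_engagement(Track(predicted_impact_km=(2.0, 3.0), seconds_to_impact=40.0)))
print(plan_engagement(Track(predicted_impact_km=(9.0, 9.0), seconds_to_impact=40.0)))
```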
10:07 am
my gut feeling is that the stationary versus mobile issue is just a technological difference of complexity, and the geography is not a real issue. although, again, following in the general's footsteps, i think people will find it easier to accept that you would field such things in your own territory than that you have sent them into another country. on the moral, public opinion side, there are arguments to be made that these are additional steps down the road. but from a technological and even from a legal side, i don't really think that there is that distinction. i don't think it holds. >> on the complexity, let me
10:08 am
mention again that the systems which will be targeting, let us say, mobile targets would be many times more complex. let me just paint a picture. again, i'll take this example from conventional warfare. for example, you have a battle in an area of 10 kilometers by 10 kilometers. it is a contested environment where there are no civilians present. now, this has to do with military capability. here are the models. so today, how this battle would be fought is that 100 tanks would be contesting amongst each other. the blue forces would be destroying the tanks, so they're
10:09 am
on par, the two states. now let's say one side has ai technology, and you have pilotless autonomous armed drones instead of tanks. i'm trying to analyze what the complexity is, as compared to today's technology, of these armed drones picking up those tanks and destroying them. i think the complexity gap is hardly anything for the technology which is there. drones are already in place. you only have to pick up tank signatures in a desert. so if a country develops such a military capability, those lives would be saved. in such a scenario the complexity is not there. the complexity is there where there's a terrorist who is mixed up in a population, and you have to
10:10 am
somehow distinguish the terrorist from the population -- there's no external distinction at all, so that's a complex problem. i just wanted to comment on that complexity. >> mary, come in and sort all this out for us. >> thanks. i'm just thinking about the international talks we've been participating in -- three weeks of talks over the last three years. they look for points of common ground where the governments can agree, because there are about 90 countries participating. at the last meeting they said fully autonomous weapons systems do not exist yet. there was pretty widespread acknowledgment that what we're concerned about, the lethal autonomous weapons systems, are still to come. and the other thing the states seem to be able to agree on is that international law applies, international humanitarian law applies, the testing of your weapons and doing that through article 36 of course applies to
10:11 am
all of this, and the kind of notion of what we are talking about: we're talking about weapon systems that select and engage targets without further human intervention. what they haven't been able to do yet is break it down and really get into the nitty-gritty details here. i think that's where we need to spend at least a week to talk through the elements or the characteristics that are concerning to us.
10:12 am
is it mobile rather than stationary? is it targeting personnel rather than materiel targets? is it defensive or offensive? although those words are not so helpful for us either. what kind of environment is it operating in? is it complex and complicated, like an urban environment? are we talking about out at sea or out in the desert? what is the time period in which it is operating? because it's no coincidence this campaign to stop killer robots was founded by people who worked for the campaign to ban anti-personnel land mines; we're concerned one of these machines could be programmed to go out and search for its target not just for the next few hours, but for weeks, months, years in advance. where's the responsibility if you're putting a device out like that? that sums up the kind of breakdown we need to have in the process to really get our heads around what the most problematic aspects are here, because not every aspect is problematic, but that will help us decide where to draw the line and how to move forward. >> if states have agreed that the laws of armed conflict and international law would apply, it seems to me that's a
10:13 am
different circumstance than if they don't agree. dan is shaking his head. tell me why you're shaking your head, but pick up on this too. >> mary's absolutely right. you know, when i grew up, there was a band called supertramp. >> sure. we're dating ourselves. probably when you were a teenager. >> one of my favorite songs when i was growing up had the opening lyric "take a look at my girlfriend, she's the only one i've got." we don't have a plan b. as a very old-time international lawyer who deals with this issue, i don't have an alternative set of rules to apply to this situation. so we have no choice but to say at the international convention that we will apply existing rules. but we don't know how to do that. >> right. >> that's one of the problems.
10:14 am
the rules don't work as easily on robots as they did on humans, and they don't work on humans as easily as we thought they would. in reality, when we're asked to translate that into practice, we'll have a huge new challenge. that's one of them. >> let me jump right in on this, and we can continue it as a conversation. that seems to me one of the strongest arguments for at least a pause, if not a ban -- a moratorium -- to the extent what you just said obtains. the argument is, let's wait until we can sort this out. tell me what's wrong with that, if anything -- whether the problem is that it is not practical, or a problem from a legal point of view.
10:15 am
>> i am also a cynical international lawyer, okay? and the reason i am is because i used to do this for a living. international law is often a tool and not an end. if you look at the list of the countries participating in the process, you will not be surprised that the primary candidates for fielding such weapons are less involved than the countries who are not supposed to be fielding those weapons. in fact, if we take the land mine issue as a specific example: with the countries who joined the anti-personnel land mine regime, the world is divided into two groups. as a result, it is not a universal rule of international law. it is only binding on the member states, which creates a very bad principle of international law, which is that international law is different for every single
10:16 am
country. this is part of international law. it is how the system works, but it is one of the fallacies of the system. for example, for canada, it's unlawful to develop or field an anti-personnel land mine, but for israel it is totally legitimate to do so. if israel and canada were to fight, israel could use them and canada could not, which shows you how stupid international law can be. i say that to tell you what happens with autonomous weapons systems. i know who is going to field them. the countries that are going to field them are not the countries that are going to be abiding by whatever results from that process. and the last thing i want to have happen is to tie the hands of the normal countries who have very complicated project and approval processes for fielding weapons -- like india, who came up with the robotic revolution 15 years ago. they took this
10:17 am
problem on board as one of the issues they need to tackle. i would trust them much more to handle this issue effectively than a country where i know they don't care about the collateral damage as much. so my problem with the proposed ban -- my concern -- is that it will achieve the opposite result. the good guys, who will take care only to field systems after they know they can achieve all of the good results we think they can, won't field them until they're ready with a small mistake probability. but the other people will field them earlier, and that is not necessarily a reality i want to live in, so that is where i come in on the discussion. >> mary, how do you respond to that? >> the treaty body we're talking about is a geneva-based convention. all the countries that are interested in developing
10:18 am
autonomous weapons technology are participating in it, so nothing would be discussed in this body without the agreement of all of these countries. we do have china, russia, the united states, israel, south korea, and the u.k. in there debating this subject. and just to come back on the land mine side, we do have 162 countries that have banned these weapons. we have gone from 50 countries producing them down to 10 as a result of the international treaty. and the international treaty includes former major users and producers. we're not talking about doing a land mines treaty here on autonomous weapons -- not yet, anyway.
10:19 am
we're talking about trying to deal with it within this particular framework. we want this to work, because if we can't do this with everybody around the table, you might end up doing these other efforts. at the moment, at least there's consensus to talk about it. there's not consensus on what to do about it yet. >> what has been the thinking on a moratorium as distinct from a ban? i ask it for the following reasons. if there's also the possibility that smart versions of these weapons could be more discriminating and have other positive values from a humanitarian and other points of view, then an indefinite or permanent ban seems to be a position one might want to question. on the other hand, because people stipulate that they don't quite know how to apply international law and other things to this, an argument for a moratorium until that's worked out would -- even just put that way in a bar --
10:20 am
make sense to people, i think, which is how i try to think about things. take me through the moratorium versus the ban. i know you're working on a ban, so i'm not asking you to endorse something you're not working on. >> the moratorium call came from a u.n. special rapporteur. christof heyns issued a report in 2013 in which one of his major findings was that there should be a moratorium until the international rules of the road were figured out here. when he was on his way out, he issued more reports calling for a ban. that was his initial position, and then he moved towards the permanent ban.
10:21 am
we haven't talked about a whole lot of the other concerns that are raised with these weapons systems, but the moral concern -- that you're ceding the responsibility of taking a human life -- is something we take issue with. we're already seeing the effects of weapons with some degree of autonomy in them, and they don't want to cross that moral line. there are a lot of countries talking about security and stability and what happens when one side has these weapon systems and the other doesn't. what's the nature of -- what does it do for the nature of conflict and for war fighting when you have one side who has all the nice high-tech technologies and who can use them, and the other side that cannot? the question here is, are we going to level the playing field so everybody has these weapon systems, or is it better that
10:22 am
nobody have them, because at the moment there's still time to sort this out. there's still time to put out some rules, and there's still some time to prevent the fully autonomous weapons systems from coming into place. >> i think you used the terminology fully autonomous weapons system. if we put a moratorium only on the fully autonomous weapon systems -- which again implies a human in the decision to kill -- then we are not putting a moratorium -- i mean, this proposal is not trying to put a moratorium on the use of ai and autonomy in all the other functions in the kill chain. the decision to kill does not require ai. it is just an implementation question of how that system works on the ground. in effect, nothing will happen on the ground, because all the
10:23 am
underlying technologies will get developed. the last part of what you said was on -- >> in terms of who has these weapon systems, the haves and the have-nots. >> that rationale, the whole idea of developing this technology to have that military capability, to have predominance over adversaries -- that logic cannot be applied to a particular type of system per se. the idea of this new technology, other than having a technological edge over adversaries, is that bringing ai into weapons systems is going to lead to a cleaner form of warfare. even vis-a-vis the standard bombs being dropped from aircraft, ai is better. more intelligence, so more discrimination; even if you
10:24 am
don't have the aspects of empathy and judgment, which come at a much later stage, it will lead to more precise targeting of what you want to target. to that extent, on the one hand you're building military capability; on the other hand, you're leading to a cleaner form of warfare. i would say just saying moratorium is not going to lead to anything positive on the ground. if ultimately the conventions
10:25 am
decide from other points of view -- if you look into the future at this aspect of ai taking over the human race, et cetera, if you're looking at it from that perspective, and from that point of view you are developing a ban on the technology at this stage, that is worth considering as a point, but not from the issue of the decision to kill. i'm really speaking how i feel. >> dan, and then i want to open it to the broader discussion here. >> i think the point i want to make is there are several different agendas, all legitimate, at work here. one school of thought says we're not ready to field such fully autonomous systems yet. i think they are currently right. i think we haven't solved the
10:26 am
technological requirements to ensure the statistical accuracy of our systems in a complex environment -- not in the simple one you have described, general. i haven't heard of anyone who has solved the ai problem of doing that yet. it requires so many schools of technology. it requires target identification. remember, this is in a combat situation. it would need accurate target identification in complicated environments. you need a machine to be able to do so under a lot of stress,
10:27 am
physical stress. there are lots of challenges, which i call technical, but they're really intelligence and technical difficulties. but they will be solved. i'm a million percent confident they will be solved, just not today. one group is saying, wait before you allow a machine to push a button that kills a human being. that is one school of thought. another group said something wiser: we don't want machines to kill people, period, irrespective of how good they are at doing it. we don't think this should take place. now, this is an important moral and philosophical discussion on a totally different level, which has nothing to do with the technology involved. i will point out here that we have already undergone a partial robotic revolution in the civilian sphere. they've become invisible already. but if i go back in time, one of my favorite stories is that the first elevators in the world were built in chicago when they had the first high-rises. like you saw in the old movies,
10:28 am
there were elevator operators who used to operate the elevator. what happened is they built a high-rise which was too high for human operators. it had to move too far. so they built the first ever machine-operated elevator in chicago. now the problem was that when people walked into the elevator and didn't find the operator, they thought it wasn't working. they put up a sign. i have a copy of that sign, explaining this is the first ever machine-operated elevator and it's perfectly safe to use. no one would use that elevator at the beginning, because they thought it was unsafe. how could a machine know where to stop? an elevator is a very primitive form -- today, especially with the quite complicated software you put in them -- of an autonomous machine that can kill you. traffic lights are autonomous systems. if they make a mistake, people can die. we have long accepted the fact that computers can make decisions for us which can kill us. what's happened for the first time is we have reached the stage where we're thinking about whether they can do it on purpose.
10:29 am
this is a decision point, and we need to decide if we're crossing it or not. being the cynic i am, i think we have already crossed it. i'm happy we're having the discussion now and not 20 years from now. and the final school of thought -- i think the general voiced it perfectly -- is the question of, do we want cleaner wars. there are two schools of thought on that one. one says the more accurate missile systems are better -- coming from israel, remember, we are the advocates of accurate missile systems, because the fewer civilians we hit, the less israel is targeted for doing something wrong. we have a vested interest in using more accurate systems. there is a legitimate counterparty saying part of the reason why there are not so many wars is because it is dirty and
10:30 am
civilians die, et cetera. if you just kill the combatants, you'll be happier to go to war. i'm not saying i agree with that position, but i'm showing you the different schools of thought which are converging around this issue. each one is a separate discussion, and you need to choose which one you want to focus on at a given moment. >> that was a great summation and taxonomy of the discussion. let's open it to discussion. you all know the procedure. i call on you and then you say who you are. somebody will bring you a microphone. there's a lady here about midway and then the gentleman you're walking right by. ladies first, at least for the next eight days or until january 20th. >> i'm from the center for naval analyses.
10:31 am
i was wondering how you think this discussion applies to cyber warfare, particularly in scenarios where cyber weapons could be lethal. >> actually, the cyber domain is very much part of this discussion of autonomous systems, of how autonomy should come into play as far as warfare is concerned. the current heated debate is about human lives, about killing human beings. and cyber, while in a sense it can affect human lives, does so in a different way. so if you're talking about a cyber attack, and an autonomous response from the adversary to kill that attack which was coming in, that's very much part of autonomy playing a part in this warfare in the cyber domain.
10:32 am
but there's no objection to that. to that extent, i think that field is getting developed and will progress without any legal and ethical issues involved in it. that's what i would say. >> dan, you want to jump in on that? >> yeah, i actually think it's part of the discussion. one of the reasons i say so is because i don't actually know where cyber ends and kinetic begins anymore. i used to know. i don't know anymore. one of the discussions, for example, we've been having on fielding robotic systems is what
10:33 am
type of protection do you want to give those systems against being hacked. for example, one of the ideas that came up in a discussion a few years ago is that maybe we need to create a kill switch with which you can turn off a malfunctioning -- we call them w.a.r.s., weaponized autonomous robot systems. i agree cyber attacks are generally not focused on killing human beings, but indirectly they can do tons of damage. there's a subject of cyber autonomy which is scarier than anything we've discussed so far: in the cyber world there is the possibility of self-replication. we don't know how to create an autonomous fighting vehicle which will create a copy of itself and go out onto the battlefield. however, we already know how to do that with computer viruses. i actually think the cyber
10:34 am
autonomy world is even scarier, because it has the potential of us losing control more than the kinetic side, but that's another issue for discussion. >> it's halloween, so scary is okay. the gentleman right there, burt, and then we'll go back. >> i'm a professor at george mason university. i want to pick up on a point that dan raised about the varying levels of autonomy that we have in technology currently, and that we're almost on the cusp of different types of autonomous systems that can take lives. and i want to point to one that already does, and that's self-driving cars. >> yeah. >> right? they make moral decisions to kill. they're going to crash as a matter of the laws of physics or statistical probability, and they need to be programmed to make a decision that is a life and death decision. i would like to hear a little bit about maybe some of the
10:35 am
distinctions the panelists see between this type of technology and lethal autonomous weapons. >> i've done some work on that, and you mentioned that in your opening comments. the short answer is of course no one has a good answer for what we're supposed to do with an autonomous car, right? being a procedural lawyer, the question then becomes not what do we do, but who is responsible for doing it. so we now have a discussion which goes something like this. option number one -- this was in a discussion two weeks ago, by the way, with some of the companies that do this -- option number one is you allow the guy who buys the car to make the decision when he buys the car. when you choose the color of the
10:36 am
car, by the way, would you rather commit suicide if the car encounters the following situation, or would you prefer not to, sir? one of the people in the meeting said let's agree we give different colors to those cars so you know who they are on the road. it's a true discussion. that is one option. the other is to say, no, the car comes hard wired with a decision. do we tell the people who buy the car what this is? you can't, because it's an algorithm. it is way too complicated to explain. of course, the car won't automatically kill you. it will go through a process of decision making, and it will do its best effort to come up with whatever the guy who wrote the code told it to do. there's no way we can summarize that in a way the customer will understand. i'm taking you through this because when we try to move the analogy to the warfare side, the main difference is that on the warfare side this is all intentional. but the reality on the warfare
10:37 am
side is the distinction part. when you have different people on the battlefield and you want to identify who is a foe and who is a non-combatant, then you need to find a way to optimize what you're going to do so that you hit only the combatants. it is actually exactly the same question if you take away all the fluff. then the questions arise of who is going to make the decision. are you going to ask the commander to tell you in advance what level of civilian casualties is acceptable, which is option one? this is easier for me to say, for example, because that's how military operations work today. or are you going to allow the manufacturer of the autonomous weapon system to hard wire that into the system, and me, if i went back into my military career and i'm back being a
10:38 am
colonel in the israeli army, i have no idea what it is going to do when i press the button. i have no way of controlling that. so the questions are exactly the same, although the scenario is different and i think you're right. i think we're facing the same dilemmas now on the civilian front as we're going to face on the military front in the future. >> i think one of the things which is happening in these discussions is that we are talking about autonomous systems in general. there are grades of autonomous systems to be used in different contexts. in today's context that may appear to be simplistic, but in yesterday's context picking out tanks with an autonomous vehicle was not a simple affair.
10:39 am
a simple situation is you tell your autonomous systems to take all the enemy airfields. you said that was an easy system. the next, less simple one is what i painted with tanks. when you come closer, i can paint another situation where a company is going in for an attack and there are bunkers. that's a more difficult, closer situation where it could now end up in a close quarter battle where aspects like empathy would come in. this is a more complicated situation. the broad point which i'm making is, what we are banning
10:40 am
or what we are deploying -- let's not talk about banning -- has to be in a graded fashion. to whatever level the technology is reached, to that extent that autonomous system should be permitted to be deployed in a responsible manner. already, there are autonomous systems on the ground. they've been there for decades. you painted mines as the most primitive autonomous system. there's a convention against mines for similar reasons. let's not talk about mines. they're already there. as you perfect technology in a responsible manner, they should be deployed. rather than talking in general, the moral aspects of the question that were raised just
10:41 am
now will come at a much later stage of technology. if autonomous systems can mimic empathy and judgment, that will be much later, if the technology is perfected to that extent. that brings me to a second point, which is about who is accountable. the point of accountability was raised. is it the developer, the manufacturer, the commander, the state? there are different levels of accountability. i wouldn't say that if an autonomous system malfunctions on the ground the commander and the state are off the hook -- the state in any case cannot absolve itself of responsibility. it is responsible in every case. but even the commander who is there, the military person who is there, is responsible.
10:42 am
the more complex we make these systems -- and it seems we will move in this direction with military systems as well -- in other words, do we want cleaner wars, is the question? do we want fewer traffic fatalities? the answer may be yeah, but i would rather -- it's easier to be in a system where the driver and the soldier are accountable than in one where, even if it's safer and cleaner, a big supplier is accountable or the state is accountable. it's interesting, the issues this is going to bring up, including financial reasons: i would rather not take on the liability, i would rather you have the liability. brave new world. this gentleman here in the middle. yep, then why don't we take two. this lady with the blue and
10:43 am
white striped shirt here. if we can get another microphone to her. let's take two in the interest of time, we are bumping up against it. >> hi. keeping in the theme of things that are scary, i want to touch on autonomous deterrents. that could be where you put in an input of what to do for a second strike after a nuclear attack, or, in the realm of cyber, you launch an attack before your systems go offline completely. the question is, how do you integrate the questions about deterrence and autonomous weapons systems with the effect where you don't tell your enemy you have these capabilities for operational security reasons, but that makes it more likely things will get out of control? >> let's take -- while you are thinking about that, let's take the other question, then parallel process. >> my name is lauren green, i
10:44 am
was a holistic essay assessor and scored the test of english as a foreign language for 6 1/2 years, until an algorithm took the place of the human reader. i'm becoming a journalist. my question is, are you not critically aware that artificial intelligence meant to make computers think as humans is destroying our reasoning process, because we are granting these machines so much importance that we cancel our own reasoning out of the process? >> wow. i didn't do that well on my s.a.t.s. i'm not sure i understand the question. i'm trying to process it. >> you're trying to create a system where a robot serves in place of a person in deciding when to strike, where to strike and how to strike.
10:45 am
the process, maybe a computer system, to reason when it would be appropriate to strike. so, we are granting this algorithm we are creating more weight than our own thinking and spontaneous thinking. it's cancelling out our own ability to think spontaneously and reasonably, even as demonstrated today with the explanations that you have provided, maybe lacking a real critical target in your arguments. there was a lot of just open processing without really making a definitive answer in some cases. also, the process for deterring autonomous weaponry is entirely too slow. i think most people are critically aware, there's a lot of apathy against -- a lot of
10:46 am
apathy against altogether cancelling out the prospect of fully autonomous weaponry. i'm wondering if that's because so much money is invested in the artificial intelligence process and not enough in human capacity. >> okay. >> the first part of the question was about delegating -- you are saying a machine can be more reasonable, make more reasonable decisions, be safer, and make the correct decision better than a human? >> i think the opposite. >> she's questioning that. >> she's asserting, not questioning it. she's saying that's what it is, that it's not going to work. we are destroying our own capacity to reason and think by
10:47 am
pursuing it. >> okay, is she saying a machine will never develop to a state where it can do better than humans? isn't that what she is asserting? >> yes. >> that's for the scientists to, you know, say. >> why would we ever want that? >> it's not holistic technology. it's whether it will be able to understand natural language. you can see what is happening today. so, on the aspect of reasoning, my own belief, with a layman's knowledge of ai, is that anything the human mind can do, including aspects like empathy and mimicking judgment at any level, it is not far off from developing to that stage. there is no scientific reason to believe otherwise. >> it is only mimicking judgment, it's not rationally judging.
10:48 am
>> dan, you want to jump in on this? >> i want to talk about the two questions together. it's all a question of delegation. you used that word in your introduction. you're questioning whether it's right to delegate some forms of decisions to machines, on the assumption that it's a bad idea. i do not agree with you on every scenario, but i think it's a good question. you went a step further: should we delegate authority to use a significant amount of power in a disastrous situation where human beings may not be able to respond quickly enough, effectively enough or intelligently enough to counterattack or whatever. these are great questions
10:49 am
because they raise the question of what are we developing ai for? okay? now, it started off -- if we forget the first few years when it was a scientific experiment -- as something that's supposed to make our lives better and easier. that's the idea behind this whole field. so, for example, if it can make a good decision quicker than a human being and save a life, most people would say that's a good thing. and, as we are seeing technology develop, i, personally, being a technological layman working in this field, can tell you i have seen numerous examples where computers are better than humans at making a decision. human beings are scared, they are tired, they don't have all the information and sometimes act on what we call instinct, which turns out to be a
10:50 am
subconscious thinking program which is sometimes good and sometimes really, really bad. it may not always be a good thing to delegate authority to a machine. i think the decision we need to make is where we agree that the machines come to help -- help us. your scenario is an extreme scenario, so i personally would rather not let the machine make that decision, but i can definitely identify parts of life where i want machines to help me out, where i really like the fact that they do. but i do not want them to replace us in the things which i care about, and this is the type of discussion which i think we should have now, before we let technology companies and market pressure push us in a direction we are not necessarily willing to go. >> if no one else wants to -- go ahead. >> just to say, we hear quite a bit from the artificial
10:51 am
intelligence community, the guys out in silicon valley, about how artificial intelligence can be beneficial to humanity. this is their big catch phrase, and they're investing money in ways in which it could be beneficial to humanity, but delegating authority to the machine on the battlefield without human control is the line which many draw. we haven't talked about policing or border control. we're talking about armed conflict at the moment, but this is not just in the realm of armed conflict. it's broader than that. the point at which the campaign to stop killer robots comes in is the point at which it's weaponized. there's a broader, bigger debate and we don't have the answers to much of it. >> great, i want to thank not just the panelists but all of you for at least here beginning the process of this debate and helping us really, i think, home in on what some of the key questions and issues are, so thank you all
10:52 am
again and thanks. [ applause ] >> thank you all for coming for this first part of the carnegie colloquium. we hope to see many of you december 2nd in pittsburgh or joining us via the livestream; the question about cyber deterrence will be one of the panels we will be looking at on december 2nd. in the meantime i encourage you to download the carnegie app with content from our latest analysis, and, last but not least, i would like to ask you all to join me in thanking the team that has helped with this event, particularly lauren and rachel, who have helped with the organization. so thank you very much.
10:53 am
republican donald trump is elected as the next president of the united states and the nation elects a republican-controlled u.s. house and senate. follow the transition of government on c-span; we'll take you to key events as they happen, without interruption. watch live on c-span, watch on demand at cspan.org or listen on our free c-span radio app. >> today, republican congressman kevin brady and u.s. trade representative michael froman have a discussion on trade policy and how it factored in the election, and also what to expect from the trump administration. we begin our coverage at 5:00 p.m. eastern on c-span2.
10:54 am
today jeh johnson talks about infrastructure and modernization efforts, live coverage at 12:00 p.m. eastern on c-span. >> now a look at north korean nuclear threats. former ambassador robert gallucci discusses nuclear negotiations on the korean peninsula. from the institute of korean-american studies, this is an hour and 40 minutes. >> good afternoon, everyone. thank you for joining us today. i know we are competing with election day, and i'm impressed that we have a good turnout, and thank you. and thank you for your vote for ambassador gallucci.
10:55 am
i'm synja kim, president of ics, and i'd like to welcome all of you to the ics fall symposium 2016. it is wonderful to have ambassador gallucci here with us again today. he last spoke at the ics spring symposium about korean peninsula issues in may of 2006, that is ten years ago, right in this room, the kennedy caucus room, so welcome back, ambassador. many things have happened in and around the korean peninsula in the last ten years, and many ideas and theories have continued to flood in on the way to a possible resolution of the korean peninsula issues, yet no promising signs have been formulated. then the most recent development in kuala lumpur captured much of our attention: the
10:56 am
informal dialogue between the u.s. team led by ambassador gallucci, one of the most highly respected diplomats and north korea experts of the last quarter century and the chief architect of the 1994 agreed framework, and, for the north korean side, deputy foreign minister han sun, the highest ranking north korean official engaged in such track two dialogue. today we are delighted and privileged that ambassador gallucci has generously accepted our invitation to join us, and he will share with us his insights and experience from the meeting and his vision toward peace on the korean peninsula and in the region at the dawn of the new administration in the united states. as far as the format of the
10:57 am
proceeding today, after the presentation of the ambassador, there will be a q&a session between the ambassador and the discussants, and then the floor will be open to the audience for your q&a. our discussants today, all ics fellows, are joseph vosco, non-resident senior associate, ics; william brown, who may be standing in line at the voting booth at the moment, adjunct professor at georgetown university; peter huessy, president of geostrategic analysis and senior director of strategic studies at the mitchell institute; tong kim, who will also be joining us a little later, washington correspondent and columnist of "korea times" in seoul, korea; and larry niksch, senior associate at csis.
10:58 am
with that, i hope you enjoy the program, and let's welcome ambassador gallucci. >> she is a ninth grader, this is how we do it. >> thank you, dr. kim, for this great opportunity to introduce the honorable robert gallucci. ambassador gallucci served as dean of the school of foreign service until he left in july 2009 to become president of the john d. and catherine t. macarthur foundation. bob was appointed dean in 1996 after 21 years of distinguished service in a variety of government positions focusing on international security. as ambassador at large and special envoy for the u.s. department of state, he dealt with the threats posed by the proliferation of ballistic missiles and weapons of mass
10:59 am
destruction. bob was chief u.s. negotiator during the north korean nuclear crisis of 1994, and served as assistant secretary of state for political-military affairs and also as deputy executive chairman of the un special commission overseeing the disarmament of iraq following the first gulf war. he earned his bachelor's degree at the state university of new york at stony brook and his master's and doctoral degrees at brandeis university. ladies and gentlemen, please join me in welcoming the honorable robert gallucci. >> thanks, lena. good afternoon, everybody.
11:00 am
i am pleased and honored with this invitation, happy to be with you. as was noted, it's been ten years. i wish i could say it was ten years of progress, but i don't think that would be exactly appropriate, under the circumstances. i am, notwithstanding the title -- sang ju titled this and i didn't stop him -- going to mention kuala lumpur. there was a small group of us, four of us, who traveled the 20 hours or so to meet with a north korean delegation for a couple of days, and that was just about three weeks ago. a word about that. for the dprk side in that meeting, i think one of the main
11:01 am
things they wanted to do was to explain to us why they were concerned about u.s. policy, and specifically why we were there, which is to say that they did not wish to meet with the u.s. government. that's why we were meeting, as i'm sure you know, in the track two mode rather than government-to-government track one. for our side, we explained what we understood to be the washington perspective these days on north korea, on north korea policy. we focused particularly on the dangers, the threats that we could see, threats or dangers to the region and to the united states. we did not represent anyone
11:02 am
except ourselves, so we didn't issue any warnings, only observations. the key question, i think, on the minds of the representatives -- led, as i think sang ju said, by vice foreign minister ahn -- the key discussion was what should both sides, the u.s. and the dprk, expect, and what should they want to happen early next year, with the new administration in washington? i think we could usefully talk about that. you have a distinguished panel here. you all have, it's clear to me, been around the block on this issue. this is not your first rodeo, as they say, with north korea, so we can have a useful discussion
11:03 am
about that. what i want to do with my time this afternoon is lay out what i think are six key questions that are, for me at least, the most important, the most timely for consideration, and all the questions i want to ask are framed in terms of what does the dprk actually believe, and then i'll give the subjects. so let's try this out and see if this works, if this is useful. first question, does the dprk believe its own narrative on recent history? in other words, what do they think caused the collapse of the agreed framework of 1994 and brought us to the events that began in 2002? they have a narrative.
11:04 am
what do they think about the failure to implement the agreements of 2005 and 2007 and '08 -- what happened? what do they believe was the role, if any, of the dprk in the construction of a plutonium production reactor in syria, which was destroyed by the israelis in 2007? what is their explanation for the failure of the leap day agreement, the events of 2011 and 2012? now, the question i ask is, does the dprk believe its own narrative on this recent history? my answer to that is, incredibly,
11:05 am
yes. let me be clear about this. i have no doubt that the dprk acted inconsistently with the terms of the agreed framework or, to put it in the vernacular, cheated on the agreed framework with their deal to accept uranium enrichment centrifuge technology and equipment from pakistan during the middle to late '90s and then into the next decade. i have no doubt that the agreed framework excluded this through the reference to the north/south declaration on denuclearization. that's not their view. that's my view. so i'm asking, do they believe their own view as they present it? i'm saying i think they do, incredibly. i have no doubt that it was the north koreans who built
11:06 am
al kibar in syria. they say it wasn't us. i say it was them. do they, some of them, believe that north korea is innocent of that? i believe that some of them actually think they didn't do it. i have no doubt they did. i have no doubt that, over the last decade or so, since i last spoke here, the dprk bears the principal responsibility for both sides adopting postures that both have characterized as strategic patience. in other words, i believe they bear most of the responsibility for the failure of engagement to succeed between the dprk and the u.s. side. but for whatever it's worth to you all, i believe also that some in the dprk believe their
11:07 am
own rhetoric on recent history. they believe they have been wronged, wronged by the united states of america. what i'm trying to say here is, on the first point, there's room for possible misunderstanding between the dprk and the u.s. side. one of my favorite movies is "cool hand luke," and there's a line in that movie where the bad guy says to the good guy, "what we have here is a failure to communicate." this is supposed to be irony, because it wasn't a failure to communicate. i am not telling you that all that's going on between the dprk and the united states of america and the republic of korea is a failure to communicate. i am not saying that. i'm saying that in this interpretation of recent history, there's room for
11:08 am
misunderstanding, and i think there is some. that's one of the things i conclude. second question, does the dprk believe that when it achieves the capability of mating an icbm with a nuclear weapon that could reach the continental united states, it will change everything. answer? i think dangerously, yes, they do think that. they think everything will change when they can threaten the united states, continental united states with an icbm with a nuclear warhead. i note here that some in the u.s. defense community would agree. they think u.s. vulnerability to a new third country with nuclear
11:09 am
weapons will alter our relationship in fundamental ways. i don't. they do. i believe the u.s. deterrent will remain credible vis-a-vis the dprk, just as it has been vis-a-vis russia and china. i believe that the u.s. extended deterrent to its allies in northeast asia, seoul, tokyo, will remain credible just as our extended deterrent in nato has remained credible vis-a-vis russia and before that, the soviet union. but here comes the interesting part. but what will change is the dprk's vulnerability.
11:10 am
ladies and gentlemen, even those of us who are opposed to preventative war would support, indeed insist on a preemptive strike, a preemptive strike, if we judged a north korean strike against the rok, japan, or the united states as being imminent. do you see what i'm saying here? preventative war? no. preemptive strike, yes. and what the north koreans will achieve is that they will create a vulnerability that they do not now have when they get that capability. so i'm arguing here that the dprk's security may be fatally
11:11 am
compromised rather than enhanced by this capability that they are so dedicated to achieving. the third question, does the dprk think that its current nuclear weapons capability, the ability to strike the republic of korea and japan with ballistic missiles, armed with nuclear weapons will deter the united states and its allies from responding to provocations in the dmz or at sea? the answer to that question, i think, possibly yes, they do think their nuclear weapons capability gives them this deterrent. i believe they are wrong, if
11:12 am
they believe that, but i think they may believe it. the united states and russia have long experience, going back to the time of the united states and the soviet union, with nuclear weapons and with deterrence, but we know mistakes are still possible between us. the question here that i'm posing is, what does the dprk think nuclear weapons are good for, besides deterring an enemy from attacking them with nuclear weapons? or, to put it differently, when is the threat of the first use of nuclear weapons by a state credible, particularly when that state is dealing with another nuclear weapons state?
11:13 am
what good are nuclear weapons to the dprk, is the question. my answer is that they're only relevant, they're only useful, when national survival is at risk. they're certainly not useful for small gains. they're not credible. they're not useful to protect them against retaliation for incidents at the dmz or at sea. but as it turns out, my answer really isn't very important. kim jong-un's answer is very important, and i worry that he may expect more of his nuclear weapons capability than a good appreciation for deterrence would warrant. fourth question. does the dprk think that if a
11:14 am
new administration in washington, and we're going to get one, begins by proposing talks about talks, negotiations, rather than immediately seeking tougher sanctions, do they believe that would be a sign of weakness? answer? i think maybe. let me be clear about my own view here. i would like to see the new administration in the united states that takes office in january 2017, in consultation with the rok and japan -- i would like to see that new administration pretty early on, maybe after a policy review,
11:15 am
seek talks about talks with the dprk, with only one condition, and that is that while they're talking, there be no tests of nuclear missiles or weapons even at the very preliminary stage. those of you who are very attentive on this issue will note that one of the candidates, secretary clinton, is being quoted as saying that's not what she would do, and i know some who advised her believe a different course would be more prudent, something i would call the iranian model, where instead of seeking talks early on, you immediately seek tougher sanctions earlier on in order to create the right state of mind
11:16 am
in pyongyang. show your toughness first, so that talks would be a way of releasing that pressure. so that is an alternative view. it's not mine. i told you what mine would be, but this question is on the minds of those who will be in the next administration, and i believe it deserves thought and discussion, and i hope we can have some here. the fifth question, does the dprk believe it can keep its nuclear weapons program and still negotiate a peace treaty, the end of the u.s./rok exercises and sanctions relief?
11:17 am
in other words, does the dprk believe it can take its nuclear weapons program off any negotiating table? i believe it isn't sure whether it could do that. i would note that some who are in this administration now certainly believe they will, they, meaning the dprk, will never give up their nuclear weapons program. if we went around and we asked everybody here to comment on that, i dare say at least half of you would say they'll never give up their nuclear weapons program. i believe by saying that, you give the dprk hope that they can keep it. my view is that we should
11:18 am
destroy that hope. explicitly, we should not, repeat not, settle for a freeze on their nuclear weapons program, unless the freeze were simply a step to denuclearization. to put this another way, i'm opposed to talks with the dprk if they take their nuclear weapons program off the table. i believe that to engage in talks when they cannot, by agreement ahead of time, produce denuclearization would legitimize the dprk's nuclear weapons, and i am opposed to that. sixth and final question, does the dprk believe it can resist international pressure to
11:19 am
improve its human rights behavior? as with the previous question, i believe the dprk isn't sure it can get away with that. i can tell you from firsthand experience that they are concerned that the phrase "improving human rights behavior" is code for ending the kim regime. our position, i believe, should be the following: that we cannot address legitimate dprk security concerns unless we ultimately reach a political settlement with the dprk, and probably one that includes a treaty of peace, and since i believe that, i do not think --