Key Capitol Hill Hearings CSPAN November 29, 2016 1:00pm-3:01pm EST
1:00 pm
when machines make mistakes, when there are bugs that could be exploited by adversaries. what does autonomy mean? to quote the board, autonomy results from delegation of a decision to authorize and take action within specific boundaries. we'll be talking about delegation of a decision. in the context of this panel, we delegate that decision to a computer program. an app, if you will. everyone is familiar with apps and web browsers, but apps are not autonomous. they follow a fixed set of rules and interact with the user in a very limited way. an autonomous system must be more than an app following a prescriptive set of rules. it must be able to make a decision about how its actions will affect the environment. putting it all together, today we focus on autonomous decisions where we delegate a decision to take action and that action has been ceded to a computer app. that app interacts with the world and the world interacts
1:01 pm
with it. i also want to set the stage for the size and the scope of the investment in autonomy. i want to use the u.s. department of defense and its history as an illustrative lens. the u.s. is crafting its strategy where autonomy and ai are center stage. this strategy is called the third offset strategy. when i heard this phrase, offset strategy, i didn't really understand what it meant. let me explain it a little bit. it seeks to offset a numerically superior force with technical supremacy. an offset strategy allows someone like the u.s. to win without matching the enemy tank for tank, plane for plane. to get a sense of the scale, the very first offset was our nuclear weapons strategy. the u.s. invested heavily in nuclear weapons, especially battlefield and tactical nuclear weapons, because it provided an effective deterrent. we didn't have to match the enemy tank for tank, plane for plane. there was always this looming threat.
1:02 pm
in the mid-70s, everything changed. russia reached nuclear parity with the u.s. the offset was no longer an offset. the u.s. and other countries started looking for other offsets. the u.s. came up with the second offset, where the idea was that using accurate guided munitions delivered by effective delivery systems, you could achieve the same effect as nuclear weapons without the collateral damage. this investment led to huge advances in science that went beyond the military domain. things like gps wouldn't have been possible if the u.s. didn't invest in this idea. precision munitions. those are two offsets. now the third offset. we expect the investment and the radical change in international policy to be just as significant. the race to autonomy is not only happening in the u.s. to implement these sorts of offset strategies; it's also happening in other countries. for example, russia and china, which i mentioned just a few minutes ago, are investing in roboticized armies. it's also happening in industry.
1:03 pm
for example, a 2014 bank of america report states that japanese companies invested more than $2 billion in autonomous systems, led by tech companies such as facebook, google, and hitachi. we don't get to just deploy autonomous systems and call it done, though. once we deploy them, they themselves may become targets. that leads to a notion of counterautonomy, where adversaries may go after the autonomous systems themselves as a way of getting at their intended target. as an example, just to kind of put this in scope, there's a very famous chess computer called rybka. it played at the grandmaster level. it was defeated in a five-minute tournament because someone found a flaw in the engine. that flaw was this: in chess, if you go more than 50 moves without moving a pawn, it's a draw. the chess engine had a flaw where it would try to avoid a draw under all circumstances. this player would go after the autonomous system by offering a piece as a sacrifice. the computer thought it was a piece up. the player would move 49 moves
1:04 pm
without a pawn move. the computer would say, oh, no, a draw is coming up. it would try to avoid it. and the player could go to town. this is going after the algorithm, not just the chess game. autonomy is going to be huge. it's critical we get it right. the stakes are extremely high for many reasons. one of them is that autonomy will drive us to take decisive action faster and faster. these actions aren't just in the cyber domain but also in the kinetic domain. remember what i said. autonomy is a delegation to an authorized entity to take action within a specific boundary. i want to think about a couple of different dimensions. first, what decision is being delegated? second, in what circumstances? and third, what are the appropriate boundaries for using this sort of technology? and to dig a little deeper, what decision is being delegated is a difficult question. countries are putting stakes in the ground on how they're going to think about this. the discussion is very timely.
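to make that chess anecdote concrete, here is a minimal sketch, purely illustrative and not the engine's actual code, of how a hard-coded "avoid the draw at all costs" rule can be exploited; the function names and numbers below are assumptions made up for the example.

```python
# illustrative sketch only: a simplified "avoid the draw at all costs" rule.
# in chess, the 50-move counter resets on a pawn move (or capture); an engine
# that is ahead in material and penalizes an approaching draw too heavily will
# prefer an objectively bad counter-resetting move over a sound quiet move.

DRAW_PENALTY = 10_000  # flawed rule: a draw is treated as worse than any material loss

def adjusted_score(material_score, moves_since_pawn_move, resets_counter):
    """score a candidate move for the engine (higher is better for the engine)."""
    score = material_score
    # if the 50-move draw is imminent and this move does not reset the counter,
    # apply the huge penalty -- this is the exploitable flaw.
    if moves_since_pawn_move >= 49 and not resets_counter:
        score -= DRAW_PENALTY
    return score

# the exploit described here: the engine is a piece up after accepting the
# sacrifice, the opponent shuffles for 49 moves, and now a quiet sound move
# scores worse than a pawn push that gives material back.
quiet_sound_move = adjusted_score(material_score=300, moves_since_pawn_move=49, resets_counter=False)
unsound_reset_move = adjusted_score(material_score=-200, moves_since_pawn_move=49, resets_counter=True)
assert unsound_reset_move > quiet_sound_move  # the engine picks the bad move
```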
1:05 pm
for example, the deputy secretary of defense in 2014, when he was questioned whether a computer would ever take lethal action, said humans in the united states' conception will always be the ones who make the decision to use lethal force, period, end of story. but the pace of technology makes applying these high-level philosophies and principles to different situations difficult. for example, should an autonomous system shoot a suicide bomber before they can act? is that okay? is that defense? is that offense? when is the decision ceded? he goes on to say, and he qualifies it, that there may be times when it's okay for the computer to take control, for example, suppose you've got 60 missiles coming at you. there's no way a human is going to be able to sort all that out. the human will make the decision but make it ahead of time for the computer to be able to react to that. this isn't a hypothetical conversation. it's here today. for example, consider for a minute the fire-and-forget missile systems. we've all heard of these, probably in the newspaper. one example is the uk
1:06 pm
brimstone missile, which a group that one of our panelists serves on has written about, illustrates that there are no clear lines in the definition of autonomy or in when we've ceded control. fire-and-forget systems are often described as autonomous. some will say they're semi-autonomous. it really just depends on which definition you're looking at. the uk air force describes the missile as a fire-and-forget weapon against all known and projected armored threats. brimstone's radar seeker searches, comparing returns to known target signatures in its memory; the missile rejects returns which do not match and continues searching and comparing. the missiles can be programmed not to search for targets, allowing them to safely overfly friendly forces, or only to accept targets in a designated box area, thus avoiding
1:07 pm
collateral damage. an interesting question: someone has decided to use lethal action, but it was up to the computer to identify who to take lethal action against. there's another, more subtle question: what do we do when there's a bug in the software, such that it misidentifies where it's supposed to go? finally, what are the constraints? again, a very realistic question today. if we go back to the uber example in pittsburgh, suppose a pedestrian walks out in front of a self-driving car and it can only miss the human by driving off a bridge. who should you save? the driver or the pedestrian? a good question. there's no clear solution. in military operations, we often have similar questions. who are we going to save when given the choice? how are we going to program the objective functions in these military operations? so with that framing, i would like to introduce our moderator and speakers. our moderator is george
1:08 pm
perkovich, vice president for studies at the carnegie endowment for international peace. george, can you please step up. his work is primarily on nuclear strategy and nonproliferation issues and on south asian security. george is the author of the prize-winning book, "india's nuclear bomb," called an extraordinary and definitive account of 50 years of indian policy making. george has been a member of the national academy of sciences committee on arms control and international security, the council on foreign relations task force on nuclear policy, and many other such advisory committees. thank you, george, for joining us today. our first panelist is daniel reisner. can you please come up. he is a partner at the herzog law office. he joined in 2008 as the firm's public international law, security, and defense partner. he is recognized as one of israel's leading public law experts, with 19 years in the field, 10 of which he served as head of the international law department of the israel defense forces.
1:09 pm
he was the senior lawyer responsible for advising the israeli leadership. i hope you can advise us on this issue as well. i would like to invite mary wareham, advocacy director at human rights watch, which advocates against particularly problematic weapons that pose significant threats to civilians. she is also serving as the global coordinator of the campaign to stop killer robots, one of the groups i quoted earlier on the uk brimstone. she worked for the vietnam veterans of america foundation, assisting jody williams and coordinating the international campaign to ban landmines, co-laureate of the 1997 nobel peace prize. finally, general panwar. he retired in april this year after 40 years of active military service in the corps of signals, indian army.
1:10 pm
his last appointment was commandant of the military college of telecommunication engineering, which carries out training in the fields of ict, electronic warfare, and cyber operations, and is also designated a center of excellence for the indian army for these disciplines. the general officer has received many awards. i want to call out a few of them. he has been the recipient of awards from the president for distinguished service in the defense forces. he has also been awarded by the department of defense production for r&d work. and last year he was conferred the coveted distinguished alumnus award by the indian institute of technology bombay and is the only defense officer to ever hold such an honor. with that, thank you, panel, and i'll turn it over to george. >> great, thanks a lot. [ applause ] great, thank you. what we want to do is have as much of a conversation as possible, first amongst ourselves up here, and then with you all, to basically draw out a number of the dilemmas in this area and to
1:11 pm
help identify the questions that might be the most worth pursuing as different countries and different actors move down this agenda. so to start us off, i want to ask general panwar to build on what david said a bit. certainly there must be other drivers beyond dealing with numerical asymmetries that would make autonomous systems attractive to a military and to a government in terms of problems they solve and advantages they confer. can you give us your perspective: what are the attractions of autonomy in this space? >> i'll start by saying that one can't get away from the fact that weapons are meant to destroy and kill. but they are supposed to destroy and kill military
1:12 pm
potential, and the idea is not to affect the noncombatants. the noncombatants have to be saved. a major question you have to ask is, does ai have the potential to reduce the negative of destroying noncombatants? i feel that in a sense, by its character, artificial intelligence has great potential towards this goal. having said that as an opening, let's see how warfare has actually been changing in the last few decades. there are two things which have happened. on one front, there is a change in the nature of warfare from the conventional to what is normally referred to as fourth generation warfare, where the lines of politics and military
1:13 pm
are blurring. so there is a different context in fourth generation warfare. india happens to have the context of both conventional warfare as well as fourth generation warfare, and so some of the things which come up in the discussions, at least my examples, really relate to how the benefits turn up here. the other change in warfare which is happening has to do with the information age. now, here again, you have on the one hand cyber warfare and electronic warfare, one thing that is happening in the information age. coming to the relationship with artificial intelligence: because of information technology coming into the weapons systems, what you have been having over the years is greater and greater precision in the weapons systems. now, ai, again, has the potential
1:14 pm
of increasing this precision, and discrimination, as we'll be discussing, i'm sure, as part of the panel. and that is where, again, the aspect of having fewer and fewer noncombatant casualties is going to occur. now, when it comes to specifics as to the types of systems we are talking about, there are increasing degrees of what ai can do. so let's start with just four different examples. you can have a defensive system. in a defensive system, like, for example, the handling and defusing of ieds, the noncombatant or adversary is not involved, and ai can do a lot in coming up with such systems in any case. at the next level, you have defensive ai. so we talk of, you know, systems like phalanx, which have
1:15 pm
been deployed for decades now. there, the missiles are coming in and you're destroying the missiles; autonomous systems, ai autonomous systems, are in place so that casualties are reduced. and third, you have precision coming in now. so you can have offensive systems. for example, if you have drones, armed drones which are autonomous -- okay, you already have armed drones in effect, but you have autonomous armed drones without the pilots -- that's the third level, where the offense is coming in. at the fourth level, if the graduation of the ai takes place and it develops to the extent where it can also mimic the empathy and judgment aspects, when it graduates to that stage,
1:16 pm
further savings will be possible. there are many others you can talk about, but in increasing degrees of complexity, i would say these are four areas we can take as a starting point. >> thank you. that was a brilliant setup. you raised a number of issues that i think we'll dive farther into, including the questions of offense, defense, and other functions. let me turn to mary and in a sense ask you to respond, but in particular on the extent to which this capability allows one to be more discriminating and precise. presumably that's good. when you look at kind of parsing what can be advantageous in these capabilities from what should be avoided, how do
1:17 pm
you home in on what the distinctions should be? >> it was great to hear your introductions. you talked about the dangerous tasks autonomy has been used for in the military: for cleaning ships, for disposal work, robots to assist the soldiers. and now we're moving into a phase where we see greater autonomy in weapons systems, and that's seen with a very large autonomous fighter aircraft that can fly great distances and can carry a payload. we're also looking at autonomous weapons systems that are ground-based and stationary, that can select targets that way on the dmz in korea and elsewhere. we mentioned some of these systems in the first report we did on this topic at human rights watch, called "losing humanity." we called them precursor weapons systems. in our view, they were not fully
1:18 pm
autonomous. they had a degree or nature of autonomy in them, but they were not completely autonomous. in that report we called for a preemptive ban on fully autonomous systems. the call is for a ban on future systems and not the ones we have today. but we did that because we looked at where the technology was headed. we talked to people -- actually, roboticists came to us and said, where is this headed? we're worried about this. that was part of the rationale behind forming this campaign to stop killer robots, which launched in 2013 and is still going. it's a global coalition. i coordinate it on behalf of human rights watch. you know, this is not a campaign against autonomy in the military sense. it's not a campaign against artificial intelligence. there are many people working in autonomy and artificial intelligence who are part of this campaign.
1:19 pm
it's a campaign to draw the line and establish how far we want to take this. so you can view the call of the campaign as being a negative one, calling for a preemptive ban on the development, production, and use of fully autonomous weapons, or you can view it in a positive way, in terms of how we want to retain or keep meaningful human control over weapons systems: not over every aspect of the weapons system, but over the two critical functions, which in our mind are the selection of a target and the use of force. those are the two things we're concerned to retain human control over. we know that sounds very easy. it's harder to put into practice, but this is where the debate has been centering for the last few years when it comes to autonomous weapons systems. >> let me draw you out and then i'll turn to daniel. you talk about drawing the line, and what i take as drawing the
1:20 pm
line is basically at target selection and the decision to fire, as it were, saying there should be a human there. i get that in a sense. but in terms of objectives, if an objective, for example, going back to what the general said, were to minimize casualties or the risk of indiscriminate, civilian, or nontargeted deaths, and if different versions of these weapons could be demonstrated to provide more precision and reduce collateral damage and inadvertent deaths, why should it matter whether a human was in the loop or not? i'm trying to understand.
1:21 pm
i'm not arguing with you. i'm trying to draw you out about why the principle of a person in the loop, as distinct from outcomes -- thinking in terms of persons in the loop, i'm related to people that i don't want in the loop. i'm of croatian descent. we can talk. there's a lot of passion, a lot of history. if you're under fire, the idea of somebody cool and detached might seem welcome. tell me what's wrong with that. >> yeah. like i said, there are many benefits to employing autonomy in the military sphere. our concern is what they have been telling us: we're going to have stupid systems that are
1:22 pm
weaponized before we can have the smart ones that can do the level four, the mimicking of human judgment and empathy. our concern is we're going to have stupid autonomous weapons systems being deployed before we have these super smart ones, which are further in the future as we understand it. in terms of concerns, it was first roboticists and the ai experts that came to us and said, you don't understand what could go wrong when these are deployed in the field. we have many technical concerns. there will be unanticipated consequences, and unanticipated things will happen there. but then the other kinds of elements of the campaign that have come on board, the faith leaders and nobel peace laureates, are especially concerned about this making it easier to go to war, because you can send in the machines rather than the human soldier. i guess that at human rights watch we look at it from the
1:23 pm
perspective of the protection of civilians, the noncombatant, the collateral damage which i've heard about here this morning. of course, we want to try and keep them out of the fighting as much as possible. but the fear is that if the human soldiers are not in there and it's just the machines, it will be a worse situation on the battlefield for civilian populations. this is why we see a need to draw the line. >> daniel, let me draw you in on any of this, but in particular on how you have thought about whether there's a valid difference between offense and defense, or territoriality. i'm listening to mary, going, i totally get that if you're operating on someone else's territory. but on one's own territory, is there a distinction? so just take us through your thinking. >> let me start by saying autonomous weapons systems are already here.
1:24 pm
the issue is no longer only forward facing. it's also current facing. and while we don't know all the autonomous systems out there, because some of them are closely guarded secrets, we know a lot of them. and i think mary is right in one respect. the capability to deploy autonomous systems is still outpacing the capability to train them to be human replacements. now, i say that in spite of the fact that computers can beat human beings in chess and in fact in anything that requires thinking in speed or numbers of calculations, et cetera. one of the problems we face is that what we want to train the autonomous weapon system to do, we're not sure how to do. let me go into that for one minute, because you'll see i'm sort of sitting in between the two positions. i used to train soldiers to comply with the laws of war.
1:25 pm
and when we train human beings to do so, we have a system. it's more or less the same in military organizations. we have a specific set of rules. there's the principle of discrimination: you have to discriminate between a legitimate combatant and a non-combatant. there's the principle of proportionality. a few basic principles. however, none of them are easy; none of them are really basic. in fact, after more than 30 years, it's still very difficult to explain what is allowed and not allowed. when we think about how to try to teach a computer to do this, we realize we're not really sure what we want to do with human beings. the second challenge is that artificial intelligence doesn't learn like a human being. it learns differently, and there are different ways to teach computers, but none of them are putting them in a classroom and giving them a lecture and then
1:26 pm
taking them into the field and trying out a few dry runs. we learned that the old ways we taught don't work on computers. the first point i want to stress is we see a chasm opening between the ability to deploy the autonomous systems and the capability of teaching them what the rules are. obviously, that gap will close as computer systems continue to develop. computing goes into its next revolution, which my friends tell me will be the year when we have the next level of artificial intelligence, et cetera. that is quite possible. to be fair, i think the military side is outpacing the training side currently. that's my first point. my second point is, it's going to be relatively easy to field these. we discussed this in e-mails before the panel. most autonomous systems today
1:27 pm
are still stationary. most. why? because movement for autonomous systems is complex. there are a lot of different types of capabilities in putting something in place. now, the oldest autonomous weapons system on the planet is the land mine. some people would say it is semiautonomous because of the way it works. if you want to go into detail, take the acoustic mines from the 1960s and '70s; they actually have small computers on board with the signatures of enemy ships, and they would only target enemy vessels which meet specific acoustic signatures. they would lie in wait on the bottom of the ocean to operate. those are very primitive. they have been running 40, 50 years now. they were among the first real autonomous weapons systems in the world.
1:28 pm
those have been around for a long time. however, they don't go and try to find a target. that adds a level of complexity which is huge. the autonomous machine guns connected to land radars that the south koreans and other countries i know of field, they are staying in one place. when you go into a territory where the machine has to learn the environment, that is a very complicated machine experiment: looking at the terrain and identifying human from nonhuman and friend from foe. to do that is a complicated experiment. now, i say that with one final comment. i'm not even 100% sure the rules we have actually fit robots. and i'll explain why. you see, we built the rules for combat today for humans. and they come with a few hidden assumptions.
1:29 pm
one, human beings make mistakes, and we are okay with that. we accept a certain level of risk in combat for human soldiers. you're allowed to make a mistake if you are a soldier. i'll tell you a very sad story. in one of the military operations 14 years ago, the terrorists had fielded one-ton ieds under the roads to blow up tanks. and tanks couldn't withstand the blast. they were blowing up very big. one tank was traveling in a location where they were really on guard for that. and suddenly they heard this huge boom from the tank. they thought surely they must have gone over an ied. they were searching from the turret. so they looked in the periscope, saw two people, and they shot them. then they realized it wasn't an ied. the tank had gone over a huge boulder and it sounded like a huge explosion.
1:30 pm
the two people were innocent. the reality is, in a combat situation, the crew killed two people because they thought they were in a confrontational situation. they were court-martialed but weren't convicted. would we be willing to reach the same conclusion with an automated system? are we willing to give the computer the benefit of a mistake? human beings get self-defense in
1:31 pm
criminal proceedings. are we going to give autonomous systems self-defense, the defense of necessity? if you act to prevent a bigger harm, you're allowed to -- all of our systems are geared for human beings. so the bottom line is, not only is it difficult to train the robots, i'm not even 100% sure the rules are ready for artificial intelligence. >> i'm processing that. i think you're onto something that's obviously extremely important in terms of whether the challenge is to develop rules to deal with ai or rules to deal with an even broader category, and what the expectations are. my sense is that we can all learn a lot in terms of how we think about this, and how we might think about it coming from the liability side rather than
1:32 pm
trying to define autonomy or not and saying, well, autonomy should be avoided. how do you go about it? but i think case law helps enormously. so i want to come back to that. but i want to pick up on a couple of things you said and bring the gentlemen in, and mary also. from anyone's perspective, is the distinction between stationary and mobile an important distinction? especially, mary, in the sense that if one thinks about prohibitions or what could be avoided, does it matter? relatedly, the distinction between defense of one's own territory versus action out of territory, which implies mobility. so i'm just trying to see how to
1:33 pm
think about this. do you want to jump in on those two points and then -- >> can i say a couple of things? >> sure. >> before answering that, a couple of points on what mary said in the last interaction. i think the fear was that stupid autonomous weapons can be deployed before smart ones, and that that is not acceptable. that presumes, i feel, that the states fielding the weapons and the people who are deploying them are doing it in an irresponsible manner. that's a fear which is there, but i feel that's not the way induction into the armed forces is done. we have to look at what is inherently wrong with fully autonomous weapons systems. it is fully autonomous systems
1:34 pm
and meaningful human control that are being looked at. and there is a weakness in defining what this fully autonomous system is. weapons systems which are fully autonomous are those which can select and engage targets without human intervention. now, that "and" is important. what is meant by selection? selection only is accepted. engagement only is also accepted; after all, we are only engaging, we are not selecting, and only engaging is acceptable. but selecting and engaging together is where the line is being drawn. the reason behind that is that between selecting and engaging there's a decision process. the decision to kill is what, it is felt, should not be ceded to the
1:35 pm
machines; the decision should be left to the human. that is one point of view. it could possibly come up. and i would like to make the point that if we are looking at the various technologies in the chain we talk about, where you first identify the target, nobody objects to the navigation, nobody objects to the automation, the autonomous functioning in the tracking, in the selection, including in the -- in fact, if we look at the 2009 document, it specifically says that autonomy in all of these functions is permitted, and nobody would object to it. it is only the decision to kill. the point we want to make is that the complexity is not
1:36 pm
going into that decision. the decision is one aspect of it. as long as the human is there, there is no technology involved. ai brings the rest of the functions which go into making an autonomous system. that is one point i want to make. if you are thinking of this as banning technology, you are leaving out the most important part as far as technology is concerned. now, coming to the question you asked about the defense side: any military person would know that when you say defense, it's not only defense. offense and defense are both part of defense. so conceptually there's no real different aspect coming into it. defense also comes with offensiveness, so going into offense would really be of the same nature.
1:37 pm
i don't see any difference between the two. >> how about territoriality, on your own territory? you can avoid that question about offense and defense by saying you can operate on your territory but not outside your territory. >> okay. i have a little more on that. i'll take an example from india. so, for example, you have a border, a line of control, the sanctity of which cannot be crossed. if we are looking at that scenario, then if you try to defend, that defense does not involve crossing. you can have nonmobile robots also looking at defense.
1:38 pm
but when you move into a conventional operation, then in your concept of defense you're always going across. so you attack. so what i'm saying is, depending on which backdrop you are looking at, crossing may or may not be involved. that's why i'm saying that in general, the distinction between offense and defense may not really be correct from a technology point of view. however, it would be more acceptable to those who do not want to delegate to the machines. a defensive system would be more acceptable than something offensive. that's one distinction again. >> daniel, just on the territoriality. i was thinking of iron dome, the israeli system which operates over israeli airspace.
1:39 pm
and then there's a wall. so i'm thinking of analogies, because the general is talking about the line of control which separates the parts of kashmir, where there has been firing for the last month or so and lots of movement. in better times there's not meant to be. so you can imagine that kind of boundary where one might put autonomous weapons so that there is not infiltration coming across. on the other hand, presumably, like the last month when there's movement going back and forth, you might want to turn those systems off so you don't hurt your own people, or manage that in some way. so given israel's experience and your experience here, does the distinction of territoriality matter practically or legally, or no? >> first of all, realistically, if we take the iron dome system,
1:40 pm
it has been very public. the iron dome system has three different settings: a manual setting, semiautomatic, and automatic. so it's a missile defense system, right? the idea is you want to shoot down the missile in a safe location. part of the algorithm there is for the computer to not only know -- every system works like this. first of all, it identifies, with the radar, the incoming missile or whatever is coming in. then it calculates where it's going to hit, because it's on a ballistic trajectory, so it's not going to deviate from its track, right? you know where it's hitting. you automatically do things. you warn people in that specific area that they should take cover, et cetera. but then it calculates, if it's going to hit in a dangerous
1:41 pm
place, where to shoot it down so as to minimize the damage. now, theoretically at least, boundaries are not relevant for that. so if you can catch the missile early enough, we wouldn't care if it landed in another country's territory, although the calculation of landing it in an unpopulated area would still be the same for the system. but the idea is that the system is not supposed to take boundaries into consideration. it is supposed to take saving human lives into consideration. so my gut feeling is that the stationary versus mobile issue is just a technological difference in complexity, and the geography is not a real issue. again, following the general, i think people would find it easier to accept that you would field such things in your own territory than that you send them to another country. on the moral, public opinion side, there are arguments to be made that these are initial steps down the road.
1:42 pm
but from the technological and even from the legal side, i don't really think that. i don't think it holds. >> just on the complexity which you mentioned again. the statement, i think, is that systems which will be targeting, let us say, mobile targets are much more complex. now let me just paint the picture. and, again, i take an example from conventional war. for example, you have, in an area of let's say 10 kilometers by 10 kilometers, 100 tanks. that's the number of tanks you have, and so it is a combat environment. now, instead of what you normally do -- this has to do with military capabilities, these are the models --
1:43 pm
ai is coming into place in the military. so the 100 tanks are up against each other. the blue forces will be destroying the red tanks, and they are equal, on par. but one side has ai technology and autonomous systems. so now i'm trying to analyze what the complexity is of this technology, of these drones picking up those tanks and destroying them. i think the complexity gap is hardly anything for the technology which is there. you only have to pick up signatures in a desert, which is pretty simple. in such a scenario, the complexity is not there. the complexity is there where the terrorist is mixed up in a population, and so there is no
1:44 pm
external distinction. the terrorist and others are mixed up in a population, and to distinguish the terrorist from the others, where there's no outward distinction at all, that's a complex problem. i just wanted to point out where the complexity lies. >> mary, come in and sort all this out for us. >> i'm just thinking about the international talks we have been participating in for three years. not three years of talks, but three weeks of talks over the past three years. they look for points of common ground where the governments can agree. there are about 90 countries participating in this. i thought i heard them all saying these don't exist yet. autonomous weapons do not exist yet.
1:45 pm
we do have some types of autonomous systems in place at the moment. but there was pretty widespread acknowledgment that what we're concerned about, the lethal autonomous weapons systems, are still to come. and the other thing the states seem to agree on is that international law applies. international humanitarian law applies. the testing of your weapons, and doing that through article 36, of course it applies to all of this. and the kind of notion of what are we talking about: we're talking about a weapons system that has no further human intervention. there's a fair amount of convergence around that as well. what they haven't been able to do is break it down and get into the nitty-gritty details here. that's where i think they need to spend a week just talking through the aspects of the elements or characteristics that are concerning to us. is it that it's mobile rather than stationary, or that it is targeting personnel rather than materiel targets?
1:46 pm
is it defensive, offensive? although those words are not so helpful for us either. what kind of environment is it operating in? is it complex and cluttered like an urban environment is, or are we talking about out at sea or out in the desert? and finally, this one has not really been talked about, but what is the time period in which it is operating? because it's no coincidence this campaign to stop killer robots was founded by the people who campaigned to stop land mines. we are concerned that one of these machines could be programmed to go out and search for its target not just for the next few hours, but for weeks, months, years in advance. and then where is the responsibility if you're putting a device out like that? so that's some of the kind of breakdown we need to have in the process to really get our heads around what are the most
1:47 pm
problematic aspects here. not every aspect is problematic. that will help us understand where we draw the line and how we move forward. >> to pick up on that, if states have agreed that the laws of conflict and other international law would apply, then it seems we have a different set of circumstances we should play out than if they don't agree. dan is shaking his head. tell me why you're shaking your head. pick up on this too. >> mary is absolutely right. you know, when i grew up, there was a band called supertramp. >> yeah. we're dating ourselves. >> one of my favorite songs growing up had the opening lyric, take a look at my
1:48 pm
girlfriend. she's the only one i've got. now, international law is like that. we have no alternative. we don't have a plan b. so as a very old-time international lawyer who deals with this issue, i don't have an alternative set of rules to apply to this. so we have no choice but to say at the international convention, we will apply existing rules. the part we're not telling them is we don't know how to do that. that's one of the problems. the rules don't work on robots as easily as on humans. they don't even work on humans as easily as you think they do. and because of that, in principle, i am one million percent convinced international law will be applied to artificial intelligence as if it were human beings. in reality, we will be asked to translate it, and we will have a huge new challenge. so that's one of them. >> let me jump right in on this, and we can continue as a conversation. that seems to me one of the strongest arguments for at least a pause, if not a ban: a moratorium.
1:49 pm
to the extent that what you just said obtains, let's wait until we can sort this out then. so tell me what's wrong with that, if anything, and whether the problem is that it's not practical or whether it's a problem from a legal point of view. >> okay. so i am also a cynical international lawyer. and the reason i am is because i used to do this for a living. now, if you look at the list of countries participating in the process, you will not be surprised that the primary candidates to field such weapons are less involved than the countries who are not going to be fielding those weapons. in fact, if we take the land mine as a specific example, the countries that joined were, with very few notable exceptions, all the countries who do not have land mines.
1:50 pm
so the world is divided into two groups. there is the group that says no more land mines. and, with the exception of three countries, the others have not joined the regime. as a result, it is not a rule of international law for all states, which creates a bad principle of international law, because international law is now different for every single country. this is part of international law. it is how the system works. for example, for canada, it is unlawful to develop or field an anti-personnel land mine. for israel it is totally legitimate to do so. in the unlikely event israel and canada were to fight, israel could use them and canada could not. what will happen with autonomous weapons systems, and why i am not waving the ban flag together with mary, is because i know the countries that are going to field them are not going to be abiding by any type of result
1:51 pm
from that process. the last thing i want to have happen is for the normal countries, who have very complicated processes for fielding weapons, like india, which came up with a robotics revolution 15 years ago -- i think they have the biggest program in the world today, as a sidebar -- but they took this problem on board as one of the issues they need to tackle. i would trust them much more to handle it effectively than a country where they don't care about the collateral damage as much. my concern is the process could achieve the opposite. the good guys, who will take care to only field systems after they know they can achieve all the good results we think they can, won't field them until they're ready, with a small
1:52 pm
margin for mistakes, probably. but other people will field them earlier. and that is not necessarily a reality i want to live in. so that is where i come in. >> mary, how do you respond to that? >> i mean, just to say, the treaty is called the convention on conventional weapons, a geneva framework convention. all the countries interested in developing autonomous weapons technology are involved in it and participating in it. nothing would be discussed without the agreement of all of these countries. we do have china, russia, the united states, israel, south korea, and the uk in there debating this subject. and just to come back on the land mine side, we have 162 countries now who have banned these weapons. 45 million have been destroyed from stockpiles. we have gone from 50 countries producing them down to 10 as a result of the international treaty. and the international treaty includes former users and
1:53 pm
producers of land mines. that problem is in stocks that have been mass manufactured. we're not talking about doing a land mines-style treaty on autonomous weapons. not yet. we are talking about doing it in this particular framework. we are quite sincere in wanting this to work. if we cannot do this with everybody around the table, then you might end up doing those kinds of efforts. right now there is consensus at least to talk about it, not so much on what to do about it yet. >> and how about -- what has been the thinking about a moratorium as distinct from a ban? and i ask it in this form: if there's also the possibility that smart versions of these
1:54 pm
weapons could be more discriminating and have other positive values from a humanitarian and other points of view, then a kind of indefinite or permanent ban seems to me something one would, a priori, want to question. on the other hand, because people stipulate that they don't quite know how to apply international law and other things, arguing for a moratorium until that is worked out would make sense to people, which is how i try to think about things. so take me through the moratorium versus the ban. i know you're working on a ban. i'm not asking you to endorse something you're not working on. you, and then i'll ask the general. >> the moratorium call came from the un special rapporteur on summary executions, christof heyns, who issued a report in 2013 in which one of his major findings was that there should be a moratorium. so it wasn't a proposal from the campaign.
1:55 pm
when he was on his way out this year, he issued more reports calling for a ban. we haven't talked about a whole lot of other concerns that are raised with these weapons systems. but the moral concern, that you are ceding the responsibility to take a human life to a machine, is something that people are not comfortable with, and they want to debate this. it's not just countries like the holy see; it is countries who have been the victims of drone attacks. they have already seen the effect of some degree of autonomy and they don't want to cross that moral line. there are also a lot of countries talking about security and instability and what happens when one country has the weapons and the other country doesn't. what does it do for the nature
1:56 pm
of conflict and war fighting when one side has the high-tech whiz-bang and the other side does not? the other question is, are we going to level the playing field so everybody has these weapons systems, or is it better that nobody have them? at this moment there's still time to sort this out. there's still time to put down the rules. and there is still time to prevent the fully autonomous weapons from coming into place. >> i think, again, you used the terminology of fully autonomous. that is very important. we would put a moratorium only on the fully autonomous system which replaces a human in the decision to kill. and that's all. that's all that it bans. so we are not putting a moratorium on -- this proposal is not trying to put a moratorium on the use of autonomy in all the other functions. essentially there is no moratorium on ai deployment in any of these systems.
1:57 pm
the decision to kill does not require ai. it is the implementation of a weapons system. so that's one part. so when you say moratorium, in effect, nothing will happen on the ground. the second part that you mentioned was -- the last part of what you said was on? >> in terms of who has the system -- >> the third offset strategy of the u.s. now, that rationale is to have this military capability, to have this technology, to have this dominance. so that logic can only
1:58 pm
drive particular attention to such systems. and the idea of this new technology, other than having a technological edge, is also that bringing ai into the weapons systems will lead to a cleaner form of war. take pgms, for example, vis-a-vis the standard bombs that drop from aircraft. they are better because you do less noncombatant loss, in lives and in property. in a similar manner, more intelligence means more discrimination. even if we don't have the other stage, it will lead to more
1:59 pm
precise targeting of what you want to target. so to that extent, on the one hand you're building military capability; on the other, it is a cleaner form of war, which is a benefit of having it. in summary, i would say that just declaring a moratorium is not really doing anything positive on the ground. if we decide from other points of view -- if you look at it from the perspective of the human race, and from that point of view one wants to sort of ban the buildup of technology at this stage, well, that is worth considering as a point. >> thank you. dan, and then i want to open it to the broader discussion. >> i think the point i want to
2:00 pm
make is that several different legitimate agendas are at work here. one school of thought says we're not ready to field such autonomous systems yet. i think they are currently right. i think we haven't solved the technological requirements to ensure the statistical accuracy, not in the simple cases, but in the complex ones -- i don't think anybody has solved the ai problem yet. it requires so many different schools of technology. you need a machine to be able to do this under a lot of stress, physical stress. lots of challenges. i call them technical, but they are really intricate technical difficulties.
2:01 pm
but they will be solved. they are just not ready today. the one group is saying, wait until you're sure before you allow a machine to press a button which shoots and kills a human being. that is one school of thought. another group actually says something wider: we don't want machines to kill people, period. irrespective of how good they are at doing it, we don't think this should take place. now, this is a more philosophical, important discussion on a totally different level, which has nothing to do with the technology involved. i will point out here that we have already gone through a partial robotic revolution in the civilian sphere. the robots have become invisible already. but if i go back in time, one of our favorite stories is, you know, the first elevators in the world were built in chicago when they had the first high-rises. like you saw in the old movies,
2:02 pm
there were elevator operators who used to operate the elevators to stop at the floor. but then they built buildings too high for human operators. so they built the first machine-operated elevator. the problem is, when people went in and there was no operator, they said, this is the first ever machine-operated elevator, it is perfectly safe to use. nobody would use it in the beginning, because they thought it was unsafe. how can a machine know where to stop? an elevator is a very primitive form of what we have today, especially compared with an autonomous machine that can kill you. traffic lights. aircraft. they make decisions for humans; if they make a
2:03 pm
mistake, people can die. we have long accepted the fact that computers can make decisions for us that can kill us. what is happening for the first time is that we have reached a state where they can do it on purpose. this is a decision point, and we need to decide if we're crossing it or not. being the cynic i am, i think we have already crossed it. but i'm glad we're having the conversation now and not 20 years from now. and the final school of thought: do we want it? there are two schools of thought on that one. one says we want the more accurate missile systems. coming from israel, remember, we are advocates of accurate missile systems, because the fewer civilians we hit, the less
2:04 pm
israel is targeted for doing something wrong. so we have a vested interest in being more accurate. the other says the cleaner you make the battlefield, the easier it is to fight. part of the reason there are not so many wars is that war is dirty. if you just kill the combatants, people will be happier to go to war. i'm showing you the different schools of thought. they are converging around this issue. each one is a separate discussion, and you need to choose which one you will focus on, because each one does something different. >> that was a great summation and taxonomy of the discussion. i want to thank you and each of our panelists. a terrifically sharp and informed conversation. let's open it to discussion. you know the procedure. i call on you. you say who you are.
2:05 pm
they bring you the microphone. there is a lady here midway and a gentleman. ladies first at least for the next eight days or january 20th. >> diane from the center for analysis. i was wondering how you think this discussion apply toss cyber warfare. particularly thinking where cyber weapons could be useful. >> okay. cyber warfare, cyber domain is very much part of this discussion of automated systems. it is about human lives. killing human lives. and cyber is very innocent. it can affect human lives but in a different sense. when you talk cyber defense or let's say cyber attack, an
2:06 pm
autonomous attack, that's part of the cyber domain. that's what i would say. >> dan, do you want to jump in on that? >> yeah. i think it's part of the discussion. one of the reasons i say so is because i don't actually know where cyber stops and kinetic begins anymore. i used to know. i don't know anymore. one of the discussions, for example, we've been having on fielding robotic systems is what type of protection do you want
2:07 pm
to give those systems against being hacked? because -- one of the examples that came up in a discussion a few years ago is, maybe we need a kill switch on weaponized autonomous systems. then someone said, but someone can hack it. the reality is that i think most of the discussions are the same. i totally agree with the general that direct cyber attacks are usually not focused on killing human beings, but indirectly they can do tons of damage. so think of a system that would create a copy of itself and go out into the battlefield. however, we already know how to
2:08 pm
do that with computer viruses. so i actually think the cyber autonomy world is even scarier, because it has the potential of us losing control more than the kinetic side. but that's another discussion. >> halloween is the scariest discussion. >> that gentleman there. >> jessie kirkpatrick, professor at george mason university. i want to pick up on a point that dan raised about the sort of varying levels of autonomy we have in technology currently. we're also on the cusp of different types of autonomous systems that can take lives. and i want to point to one that already does, and that's
2:09 pm
self-driving cars. right? they make moral decisions to kill. they are going to crash as a matter of the laws of physics or statistical probability. and there will need to be programming for what is a life and death decision. so i would like to hear a little bit about the distinction that the panelists see between this type of technology and autonomous weapons. >> you mentioned that in your opening comments. the short answer: no one has a good answer as to what they are supposed to do with an autonomous car. being a procedural lawyer, the question becomes not what do we do but who is responsible for doing it. and so we now have a discussion that goes something like this. option number one -- and this is from a discussion two weeks ago, by the
2:10 pm
way, with some of the companies that do this. option one, you allow the guy who buys the car to make the decision. would you rather sacrifice yourself, or would you prefer not to, sir? i don't know how many of you would buy the car with, you know, that presumption. but it is a decision. one of the people in the meeting said, let's agree we give different colors to those cars so you know who they are on the road. it's a true example. another way of doing it is saying, no, the car comes hardwired with the decision. do we tell the people who buy the car? the answer is you can't, because it is an algorithm and it is way too complicated to explain. will it automatically kill you? it will do whatever the guy who wrote the code told it to do. there is no way we can summarize that in a way the customer will understand.
2:11 pm
i'm taking you through this because when we move the analogy to the warfare side, the main difference, and i thought this was your question, is that on the warfare side this is all intentional, right? but the reality is that on the warfare side the big problem the general is afraid of is the distinction part. when you have people in the battlefield and you want to identify who is a noncombatant, you need to optimize what you are going to do so you minimize harm. it is exactly the same question if you take away all the fluff. then who is going to make the decision? are you going to ask the commander what level of casualties is acceptable, which is option one? that is actually easier for me to say. that's how major operations work today. or are you going to allow the manufacturer of the autonomous
2:12 pm
weapons system to hardwire that into the system, and me, if i went back into my military career, i would have no idea what the system is going to do when i press the button. for all i know it is going to kill one or two civilians or none at all. if it does, i have no way of controlling that. so the questions are exactly the same, although the scenario is different. you're exactly right. we are facing the same issues on the civilian front as we will face on the military front in the very near future. >> i think one of the things which is happening in these discussions is that we talk about the systems in general. there are degrees of systems
2:13 pm
that can be used in different contexts. so while daniel talked of a simplistic situation, in today's context that may or may not be simplistic. a system going after, say, all the enemy airfields, that is much easier. you go in and bomb; you send that system. the next, less complicated case is what is in between. when you come closer to another situation, where a company is going in against bunkers, when they go into this, then to that extent there is more of an issue. that is a more difficult, closer situation. a human would be there. we are seeing whether humans are there.
2:14 pm
what we are deploying, i would say, has to be, whatever technology level is reached, that type of system should be permitted to be deployed in a responsible manner. as such, we are already in that regime. there are already systems on the ground. they have been there for decades. mines are being -- i mean, there's a convention against mines for similar reasons. so they are already there. as a new technology, there is a
2:15 pm
responsible manner in which they should be deployed. in general, the moral aspects of the question come at a much bigger stage. if at all the systems can mimic empathy and judgment, that will be much later. if it is perfected to that extent -- that brings me to a second point, and that is about who is accountable. the accountability question was raised.
2:16 pm
is it the manufacturer, the commander, the state? if the system malfunctions on the ground, the manufacturer is definitely responsible. but the person who is there is also responsible: the system is supposed to be deployed within its bounded capability. that is an angle which has to be really strong for the more complex systems that will be deployed. >> i suspect, as with the vehicles, if we go in this direction with military systems, that latter point will be more debatable. do we want cleaner wars, was the question; do we want fewer traffic fatalities. the answer may be yes, but i'd rather -- it's easier to accept a system where the driver and the soldier are accountable than one that is safer and cleaner but where a big supplier is accountable, or the
2:17 pm
state is accountable. it's interesting what issues this is going to bring up, including for financial reasons. i would rather not take on the liability; i would rather you have the liability. brave new world. this gentleman here right in the middle, yeah. why don't we take two -- and this lady with the blue and white striped shirt here, if we can get another microphone. let's take two in the interest of time; i think we're bumping up against it. >> in keeping with the things that are scary, you talked about autonomous offensive and defensive capabilities, but we haven't talked about autonomous deterrence, where you would give an autonomous system a role in second-strike nuclear retaliation, or in the realm of cyber, a retaliatory attack before your systems go
2:18 pm
off entirely. how do you bring that into autonomous weapons systems? you make it much more likely that things will get out of control. >> let's take the other question and then we can parallel process. >> hi. my name is lori green. i was a holistic essay assessor and scored the test of english as a foreign language for six years, until the advent of an autonomous system replaced the human reader, and now i'm becoming autonomous -- becoming a journalist.
2:19 pm
are you not aware that artificial intelligence, and programming weaponry to think about humans, is destroying our own language processing, because we are granting these machines so much importance that we cancel our own reasoning out of the process? >> wow. i didn't do that well on my s.a.t., so i don't know if i understand your question, but i am trying to process it. >> we are trying to create a system, like a robot, that is going to think like a human about reasoning -- when to strike, or what to strike, or how to strike -- and in the process maybe even allow a computer system to reason about when it would be appropriate to strike. so we are granting this algorithm so much importance, and it's canceling out our own ability to think spontaneously and reasonably, even as demonstrated i think
2:20 pm
even today with some of the explanations that you provided, lacking a real critical target in your arguments. there was a lot of just open processing without really arriving at a definitive answer in some cases. also, the process for deterring autonomous weaponry is entirely too slow. i think most people are critically aware there is a lot of apathy toward the idea of just altogether canceling out the prospect of fully autonomous weaponry, and i'm wondering if that's because so much money is invested in the artificial intelligence process and not enough in human capacity. >> i think the first part of the question was about delegating. you're saying that a machine can be more reasonable, take more reasonable decisions, and be able to arrive at the correct decision in a better manner than a human?
2:21 pm
>> no, i think the opposite of that. >> she's questioning that. >> she is saying what it is -- that it is not going to work, that we're destroying our own capacity to reason and think by pursuing it. >> okay, she's saying a machine with ai will never do better than humans? is that what she is saying? >> yes. >> that is for ai to say. >> why would we ever want that? >> it is not the technology; it is how the ai is -- whether it will be able to understand international language. we would say no, but you see what is happening here today. the aspect of reasoning -- in fact, my own belief, with a layman's
2:22 pm
knowledge of ai, is that for anything the human mind can do, including mimicking empathy at any level, ai is not at that stage, and there is no scientific reason to believe otherwise. >> it is only mimicking judgment. it is not rationally judging. >> dan, do you want to jump in on this? >> i want to talk about the two questions together. it is all a question of delegation. you used that word in your introduction, and you're questioning whether it is right to delegate some forms of decisions to machines, and you have an assumption that it is a bad idea. i do not totally agree with you on every scenario, but i think it is a legitimate question. you went one step further: should we delegate authority to use a significant amount of power
2:23 pm
in a disastrous situation where human beings may not be able to respond quickly enough, effectively enough or intelligently enough in order to counterattack, or whatever. and these are great questions because they raise the question of what we are developing a.i. for. if we set aside for a minute the pure science and scientific gains, it is supposed to be something that makes our lives better and easier; that runs throughout the entire culture around it. for example, if it can make a good decision quicker than a human being and save a life, most people would say that's a good thing. and as i was seeing technology develop, i personally, being a technological layman who was working in this field, can tell you that i have seen numerous
2:24 pm
examples where computers are much better than human beings at making the decision which i want them to make. human beings are scared, human beings are tired, human beings don't have all the information, and human beings act on instinct, which turns out to be a decision that is sometimes really good and sometimes really, really bad. now, it may not always be a good thing to delegate authority to a machine, and i think the decision we need to make is to what degree the machines come in to help. in your scenario, which is an extreme scenario, i personally would rather not want a machine to make that decision. but i can definitely identify parts of life where i want machines to help me out, where i really like the fact that i don't need to trust human beings with all the communications. but i do not want them to replace us in things which i
2:25 pm
care about. and this is the type of discussion i think we should have now, before we let technological companies and market pressure push us in a direction we are not necessarily willing to go. >> if no one else -- go ahead. >> just to say we heard quite a bit from the artificial intelligence community, the guys in silicon valley, about how artificial intelligence can be beneficial to humanity. this is their big catch phrase, and they're investing money in trying to determine the ways in which it could be beneficial to humanity, but delegating authority to a machine on the battlefield is a line which many of them draw. we haven't talked about policing. we haven't talked about border control. we are talking about armed conflict, but it is not just armed conflict we're concerned about.
2:26 pm
it is much broader than that, but for the campaign to stop killer robots, the focus is the point at which it is weaponized. it is a much broader, bigger debate, and we don't have all the answers to it. >> well, i want to thank all the panelists and all of you, at least here beginning the process of this debate and helping us really, i think, home in on what some of the key questions and issues are. thank you all again. and thanks, dan, general, and mary. >> thank you all for coming for this first part. we hope to see many of you on december 2nd in pittsburgh, where you can also join us via live stream. the question of cyber deterrence will be one of the panels we'll be looking at on december 2nd. in the meantime, i encourage you to download the carnegie app. and i would like, last but not least, to thank the team helping with this event, lauren and rachel, who helped with the organization.
2:27 pm
thank you very much. >> now to a conference on artificial intelligence, privacy and security. this is about consumer privacy and legal issues tied to massive data collection and sharing. >> good morning, everybody. welcome to the carnegie endowment for international peace. my name is tim maurer, and i co-direct the cyber policy initiative. together with david brumley, who is the director of the security and privacy institute at carnegie mellon university, we
2:28 pm
are delighted to welcome all of you here in person. for those joining us on the live stream online, the hashtag for this event is carnegie digital. i now have the pleasure of introducing ambassador bill burns for the welcoming remarks, and look forward to this day with you. thank you very much. [ applause ] >> well, good morning everyone. let me begin by congratulating tim and david for putting together this extraordinary colloquium. i'm delighted to launch it with suresh, whose remarkable leadership of carnegie mellon reminds me how fortunate i am to be part of the extended carnegie family. as president of the carnegie endowment over the past nearly three years,
2:29 pm
and as a diplomat for 33 years before that, i've had the privilege of welcoming heads of state, military generals, foreign ministers, university presidents and distinguished thinkers and doers of all stripes, but i've never had the privilege of introducing a robot, let alone several. so it's a great pleasure to welcome them and their friends. like all of you, i look forward to getting a glimpse of our robotic future in today's program. robots are not today's only first. today is also the first of two events we're holding for the first time with carnegie mellon university, one of the premier universities and a fellow member of the impressive group of institutions founded by andrew carnegie more than a century ago. andrew carnegie created institutions at a critical historical juncture. the foundations of international order that had prevailed for most of the 19th century were beginning to crack. catastrophic war and disorder loomed.
2:30 pm
the last great surge of the industrial revolution was transforming the global economy. the carnegie endowment, together with sister organizations, sought to help establish and reinforce the new system of order that emerged out of the two world wars, a system that produced more prosperity in the second half of the 20th century than andrew carnegie could have imagined. it is hard to escape the sense that the world is again at a transforming moment: the underpinnings of order are strained, with a return of rivalry and conflict after many years of decline; the growing use of new information technologies both as drivers of human advancement and as levers of disruption and division in and among countries; the shift in economic dynamism; and the rejection by societies in many regions of western-led order in favor of angry nationalism. here at carnegie we're trying to meet these challenges head on across our programs and six global centers.
2:31 pm
with carnegie mellon, we are focusing on one of the most significant of these challenges: the intersection of emerging technologies and innovation in international affairs. technology's capacity, as all of you know very well, to simultaneously advance and challenge global peace and security is increasingly apparent. in too many areas, the scale and scope of technological innovation is outpacing the development of rules and norms intended to maximize its benefits while minimizing its risks. in today's world no single country will be able to dictate those rules and norms. as a global institution with expertise, decades of experience in nuclear policy, and significant reach into some of the most technologically capable governments and societies, the carnegie endowment is well positioned to identify and help bridge these gaps. earlier this year we launched a cyber policy initiative to do just that. working quietly with government officials, experts and businesses in key countries, our team is developing norms and measures to manage the cyber threats of greatest strategic significance. these include threats to the integrity of financial data, unresolved tensions between governments and private actors regarding how to actively defend against cyber attack, systemic corruption of the information and communication technology supply chain, and attacks on
2:32 pm
command and control of strategic weapons systems. our partnership with carnegie mellon seeks to deepen the exchange of ideas among our scholars, the global policy community, and the technical experts and practitioners wrestling with the whole range of governance and security issues. today's event will focus on artificial intelligence and its implications in the civilian and military domains. tim and david have curated panels with diverse and international perspectives. on december 2nd we'll reconvene in pittsburgh for an equally exciting conversation on internet governance and cyber security norms. our hope is that this conversation will be the beginning of a sustained collaboration between our two institutions and with all of you.
2:33 pm
there's simply too much at stake for all of us to tackle this problem separately. we can and, indeed, we must tackle it together if we hope to sustain andrew carnegie's legacy. i'd like to conclude by thanking the carnegie corporation of new york for making this colloquium possible. let me welcome to the stage suresh, extraordinary leader of
2:34 pm
an extraordinary institution and terrific co-conspirator in this endeavor. thank you all very much. [ applause ] >> thank you, bill. i also want to thank tim and david for all their efforts. welcome to the inaugural carnegie colloquium, part of an initiative to shape new modes of cooperation in artificial intelligence, machine learning, and cyber security. first and foremost, i would like to thank ambassador bill burns for hosting this event today. as two organizations that reflect the strong legacy of andrew carnegie, carnegie mellon university and the carnegie endowment for international peace have formed a powerful partnership to examine
2:35 pm
technology and diplomacy across a set of emerging areas critical to our collective future. it's my sincere hope that this event, as well as the follow-up colloquium which will take place at carnegie mellon university on december 2nd, will form the basis for a broader relationship between our institutions. let me also add my thanks to dr. gregorian, president of the carnegie corporation of new york, who provided support for both of these events. in fact, this grew out of a conversation that ambassador burns and i had a few months ago, and dr. gregorian was very enthusiastic and supportive of this effort. to understand cyber and security, we must
2:36 pm
first recognize cmu as a place where pioneering work in artificial intelligence took place decades ago. ever since herbert simon and allen newell created artificial intelligence in the 1950s, before the terminology was even broadly recognized, cmu has remained at the cutting edge of this field. carnegie mellon took the bold step a generation later to create its software engineering institute, which has served the nation through the department of defense and served industry by
2:37 pm
acquiring, developing, operating, and sustaining innovative software systems that are affordable, enduring, and trustworthy. designing safe software systems and attempting to recreate the learning abilities of the human brain were a natural progression toward two of the modern world's most pressing concerns, cyber security and privacy. to meet this challenge, carnegie mellon's cyber security and privacy research is multi-disciplinary, encompassing a broad range of disparate disciplines. it incorporates faculty across the university in areas of policy development, risk management, and modeling. our aim is to build a new generation of technologies that deliver quantifiable computer security and sustainable communication systems,
2:38 pm
as well as the policy guidelines to maximize their effectiveness. cmu's premier research center on the subject is cylab, a visionary public-private partnership that has become a world leader in technological research, education, and security awareness among cyber citizens of all ages. by drawing on the expertise of more than 100 cmu professors from various disciplines, cylab is a world leader in the technical development of artificial intelligence and cyber offense and defense, and is a pipeline for public and private sector leadership in organizations as varied as the nsa and google. the work of cylab was featured
2:39 pm
in a nova program and in a "60 minutes" report on machine learning, among many other aspects. in particular, the professor's facial recognition programming helped match a very blurry surveillance photo of the boston marathon bomber against a database of 1 million faces. you'll have an opportunity to see the professor's work in action today during the lunchtime demonstrations downstairs. today you'll hear from cylab's director david brumley, who led a cmu team just a couple of months ago that won this year's super bowl of hacking, darpa's $2 million cyber grand challenge. congratulations, david. [ applause ] just a week after that, david took a team of cmu students to def con, where they won yet another hacking competition.
2:40 pm
you'll hear from andrew moore, dean of the school of computer science, who was also featured in the "60 minutes" report on artificial intelligence. i would also like to acknowledge dr. jim garrett, the dean of the college of engineering at carnegie mellon university, who joins us along with rick siger, who played an important role in helping put together this event, which brings together carnegie mellon and the carnegie endowment. cmu's advancements in cyber security will be highlighted in the colloquium today, which is an outgrowth of the partnership between our two organizations. you will learn more about this in the two panel discussions today.
2:41 pm
we hope that these discussions on the future of consumer privacy, autonomy, and military operations will lay a strong foundation for future colloquia and better inform ongoing thinking on technology and diplomacy in these critical areas. we'd like to welcome you to the colloquium today, and i would also like to close by thanking the ambassador again. [ applause ] >> so we will now get started with the first panel discussion. before we start, let me briefly outline two key ideas that have been driving this event. when david and i started with the planning for this, the first one was essentially to bring together technical experts from carnegie mellon university and policy experts from the carnegie endowment. that is why each panel is preceded by a stage-setting
2:42 pm
presentation by one of the technical experts from carnegie mellon university, which will be followed by the panel discussion. the second idea was to draw on carnegie mellon's global network to bring in people from around the world for the panel discussion. i'm particularly pleased to not only welcome partners from pittsburgh but also to welcome panelists who have come, for example, all the way from hong kong. if you're interested in joining the event on december 2 in pittsburgh, be sure to drop your business card off outside or send us an e-mail. i would like to introduce andrew moore, the dean of computer science at carnegie mellon university. the computer science school at carnegie mellon has repeatedly been ranked as the number one graduate program by u.s. news in the past few years. prior to becoming dean two years ago, andrew was vice president of engineering at google commerce; he has been on the faculty of cmu since 1993 and was on the program committee for the advancement of artificial intelligence in 2005. keeping with the global theme of
2:43 pm
this event, he hails from bournemouth in the united kingdom. thank you very much. >> so this is an interesting and exciting time in the world of artificial intelligence for many people. for regular consumers it holds great promise. for companies it's an absolutely critical differentiator, and for societies in general we do have options here to make the world a much better place through careful application of technology. what i'd like to do to set the stage here is talk about two things which at first sight sound like clear goods: personalization -- i'll explain what that means -- and privacy, two extremely important issues. then i'm going to run through a series of cases where these two great principles start to bump into each other, and they will get increasingly
2:44 pm
sophisticated, and by the end of this stage setting i hope to have everyone squirming in their seats, because it's so annoying that two wonderful and important things, privacy and personalization, which seemed like clear goods, lead us to very difficult societal and technical challenges. so that's what i'm going to try to do in the next couple of minutes. let's begin with privacy. it's a clear right, and almost all of us would agree that anyone who intentionally violates privacy by revealing information which they gained in confidence is doing something bad, and there are laws in our international legal system and in all our domestic legal systems which deal with that issue. so that's important. personalization is probably one of the most critical features of a world based on artificial intelligence
2:45 pm
and machine learning, and i'll explain places where it's obviously good. many great institutions, including carnegie mellon under dr. suresh's leadership, have developed ways to help children learn more effectively. if it turns out that i as a child have problems understanding when to use the letters "ck" while i'm writing, it makes a lot of sense for an automated tutor to personalize its instruction so it can practice that particular issue with me. no doubt about it, that seems like a sensible thing. if i'm a patient in a hospital and it becomes pretty clear that, unlike most patients, i cannot
2:46 pm
tolerate more than a certain amount of ibuprofen within 20 minutes of a meal, as we learn that, of course it makes sense to personalize my treatment. so that is good, and at the moment there's no difficulty involved. here's where it gets interesting. some aspects of personalization -- like, for instance, how i'm likely to react to some liver cancer medications -- it's not like we can personalize by just looking at what's happened to me over my lifetime. when you're building a personalization system, the way you power it is to find out about me and then ask the question: to make things good for andrew, what should i do, and what can i learn from other people like andrew? and that is suddenly where you begin to see this conflict. other people like andrew is something which can help me a lot, because if it turns out that everyone who's over 6'3" with a
2:47 pm
british accent is virulently opposed to, for example, the electric light orchestra, that's an extremely useful thing to know, so i can make sure it's never recommended to me. so it makes sense to use other people's information in aggregate to help personalize things for me, and in many examples that can really make things better. recommendations of movies is an obvious one, and then when you start to think of information on the web -- for example, if i like to browse news every day and we notice that i'm typical of people who perhaps in the mornings are very interested in policy-related news but in the evening, when i'm relaxing, tend to like technology-related news -- that's useful information to make sure i'm a happier person when i'm reading the news. so this is the upside of personalization. personalization uses machine
2:48 pm
learning. machine learning is exactly the technology which looks at data and figures out the patterns needed to usefully answer: what would other people like andrew want? and part of that definition is what it means for someone to be like me or dissimilar to me. it's the thing which powers ads in gmail and movie recommendations, and the thing which helps the personalized medicine initiative figure out how to treat you, since you'll probably need different treatment from someone else.
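(as a purely illustrative aside, here is a minimal sketch of that "people like andrew" idea: a toy user-based collaborative filter in python. the users, items, and ratings are invented, and this is a generic nearest-neighbor approach, not how gmail ads, movie recommendations, or personalized medicine systems are actually built.)

```python
# toy "people like andrew" recommender: user-based collaborative filtering.
# ratings: user -> {item: rating}; every name and number here is made up.
from math import sqrt

ratings = {
    "andrew": {"jazz_doc": 5, "tech_news": 4, "elo_album": 1},
    "beth":   {"jazz_doc": 4, "tech_news": 5, "policy_news": 4},
    "carlos": {"elo_album": 5, "policy_news": 2},
}

def cosine_sim(a, b):
    """similarity between two users over the items both have rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[i] * b[i] for i in shared)
    na = sqrt(sum(a[i] ** 2 for i in shared))
    nb = sqrt(sum(b[i] ** 2 for i in shared))
    return dot / (na * nb)

def recommend(user, k=2):
    """score items the user hasn't seen by similarity-weighted ratings of others."""
    me = ratings[user]
    scores, weights = {}, {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine_sim(me, theirs)
        for item, r in theirs.items():
            if item in me:
                continue  # only recommend unseen items
            scores[item] = scores.get(item, 0.0) + sim * r
            weights[item] = weights.get(item, 0.0) + sim
    ranked = sorted(((s / weights[i], i) for i, s in scores.items() if weights[i] > 0),
                    reverse=True)
    return [item for _, item in ranked[:k]]

print(recommend("andrew"))  # ['policy_news'] on this toy data
```

(note that the recommender only works because everyone's ratings sit in one shared table; that pooling of other people's data is exactly where the tension with privacy comes from.)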
2:49 pm
and now i'm going to go through four examples of increasing squirminess, of why this stuff is hard -- why privacy and personalization actually start to conflict with each other. the first is the simple case of things we'd like to think society is going to handle. if someone publishes unauthorized data about me, they are breaking the law, and that should be remedied. that's the simplest case, and the responsibility there in a good company or a well-functioning government is that you actually have the legislation in place, you have clear rules, and if somebody does, for example, look up the bank account of a famous celebrity just so they can blog about it, that person is going to get fired, and in some cases the consequences are serious and there's a more significant penalty. now, cases two, three, and four are ones where it starts to get a little fuzzier. case two: someone uses your data in a way you didn't expect, but it turns out you kind of agreed to it. a famous example is a firefighter in everett, washington, who was suspected of actually starting fires, and one
2:50 pm
of the ways in which the police really got to understand that this was a serious risk was that they went to his grocery coupon supplier, looked at the things this particular person had purchased in the last couple of months, and found a huge number of fire-starting kits. in another case, purchase records were used against someone who was suing a supermarket over a slip-and-fall accident. now, those uses are not actually illegal; they were covered under the terms of service and the laws of the land. but at that point we have already hit something that the general public is going to be very uncomfortable with, and it's the thing that we all feel
2:51 pm
uneasy when we sign the terms of service. those are difficult ones, and then we have the ones that are very interesting for those who are trying to do good and can quite easily do bad. the next case is using machine learning to really help people, and accidentally the machine learning system starts to look like a bigot, or to make decisions that most of us would not make. a good example is a little experiment with google ads, where researchers looked at the
2:52 pm
ads that are shown in response to job-search queries. the experiment revealed the user to be either male or female, and horribly, it turned out that the ads shown when the person was revealed to be female were for jobs with lower pay. you look at that and say, if it was a person doing that, they're a jerk. and in fact, just this morning there was an example with facebook and the ads it is introducing, and the ethics of it, and why would they do
2:53 pm
this. the reason was that the machine learning system had just observed patterns in the data. "all else being equal" is a very difficult and dangerous phrase to use, and it's a defense -- i will show people the ads they are most likely to click on, and it's not my fault that society is set up in such a way that the data shows women clicking on lower-paying ads.
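(to make the mechanism concrete, here is a small, purely hypothetical audit in python of which job ads a system has served to different groups. the impression log is fabricated, and this is a generic fairness check, not the actual experiment or any real ad platform's pipeline.)

```python
# toy audit: average advertised salary per group in a served-ads log.
# all rows are fabricated for illustration.
from collections import defaultdict

impressions = [
    {"group": "female", "ad": "exec_role",  "salary": 180_000},
    {"group": "female", "ad": "admin_role", "salary": 45_000},
    {"group": "female", "ad": "admin_role", "salary": 45_000},
    {"group": "male",   "ad": "exec_role",  "salary": 180_000},
    {"group": "male",   "ad": "exec_role",  "salary": 180_000},
    {"group": "male",   "ad": "admin_role", "salary": 45_000},
]

def mean_salary_by_group(log):
    totals, counts = defaultdict(float), defaultdict(int)
    for row in log:
        totals[row["group"]] += row["salary"]
        counts[row["group"]] += 1
    return {g: totals[g] / counts[g] for g in totals}

averages = mean_salary_by_group(impressions)
gap = max(averages.values()) - min(averages.values())
print(averages)                              # {'female': 90000.0, 'male': 135000.0}
print(f"pay gap in served ads: {gap:,.0f}")  # a large gap is a flag for human review
```

(a model trained only to maximize clicks on a log like this will happily reproduce the gap; the audit doesn't fix anything by itself, it just makes the pattern visible so a person can decide whether it is acceptable.)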
2:54 pm
2:55 pm
finally, i'm going to mention the ninja-hard case. now, this is pretty simple. it is the case that if you really want to preserve privacy, you can cost other people their lives. there are examples of this in many law enforcement situations. but another simple one is in medicine, where if you're involved in a drug trial -- suppose you had 20 hospitals all trying out some new drug treatment on 20 different patients -- then it is definitely in the interests of those patients for the hospitals to pool their data, to actually share data with each other, so that one central body can do the machine learning with a large n and statistical significance, to find out if the treatment is working or not. now, if you decide not to do that, because you're so worried about privacy that you're not going to let the hospitals reveal details about the patients to each other, then you can still actually get some statistically significant results as to whether the medication is effective or not,
2:56 pm
it's just going to take you considerably longer, many more patients will have to be in the trial, and you'll have to wait longer before you get the answers. matt fredrikson, a computer science faculty member at carnegie mellon, has shown very clear cases of the actual analysis of privacy levels against years of lives saved. unfortunately, and that's exactly what this room doesn't want to hear, there's a tradeoff curve there. it's almost certain in our minds that we don't want to be on either extreme end of that tradeoff curve, but we do need to decide where we are in the middle of it.
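(the point about "considerably longer" can be made concrete with a back-of-the-envelope power calculation. the sketch below, in python, uses the standard two-sample sample-size approximation and fabricated numbers; it is not matt fredrikson's analysis, just an illustration that adding privacy noise to shared measurements means more patients for the same statistical confidence.)

```python
# patients needed per arm to detect a treatment effect at 5% significance
# and 80% power, with and without privacy noise added to shared values.
# all numbers are illustrative, not from any real trial.
from math import ceil

Z_ALPHA = 1.96   # two-sided 5% significance
Z_BETA = 0.84    # 80% power

def patients_per_arm(effect, sd):
    """classic approximation: n = 2 * ((z_alpha + z_beta) * sd / effect) ** 2."""
    return ceil(2 * ((Z_ALPHA + Z_BETA) * sd / effect) ** 2)

effect = 5.0     # true improvement we hope to detect
sd = 12.0        # patient-to-patient variability
noise_sd = 8.0   # extra noise added to each shared value for privacy

print(patients_per_arm(effect, sd))                            # 91 pooled, no noise
print(patients_per_arm(effect, (sd**2 + noise_sd**2) ** 0.5))  # 131 with noise
```

(that is the tradeoff curve in miniature: more noise, or less pooling, buys privacy at the price of more patients and more waiting before you know whether the drug works.)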
2:57 pm
so, hopefully we're squirming. i've tried to show you that no extreme position -- personalization at the expense of privacy, or privacy at the expense of personalization -- is good. none of those extreme positions are useful. we have to use our technological smarts and our policy smarts to try to find the right place in the middle. and that's the setup for this panel discussion. [ applause ] >> thank you. at this point i would like to introduce our panelists. yuet tham, who is an expert on cross-border compliance and international agreements regarding data use -- if you want to come up to the chair. paul timmers, the director for the sustainable and secure society at the european commission, who has been head of its ict for inclusion and e-government units. so we have experts here from asia and from europe who are helping us discuss this issue. next i'm pleased to introduce ed felten, a computer scientist. he's a
2:58 pm
computer scientist and the deputy director of the white house office of science and technology policy, who has been leading a bunch of intense strategic thinking about artificial intelligence over the next few years. and then i would like to introduce our moderator, ben scott, senior adviser at new america, who is the senior adviser to the open technology institute and also a nonresidential fellow at the center for internet and society at stanford. >> thank you very much, andrew, for that introduction. we're going to jump right into the discussion with our expert panelists who, as you see, strategically represent different regions of the world, so they can offer perspectives on these questions from across the globe. if i may quickly
2:59 pm
summarize the policy conundrum, it is this. machine learning in a.i. benefits from the personalization of data used in learning algorithms. personalization requires large data sets; it requires the collection and processing of data at a large scale. that raises two key questions. one is, what are the rules governing the collection and processing of data for commercial uses of a.i.? and the other, what are the rules for the collection and processing of data for government uses of a.i.? underneath that sits the basic question of algorithmic accountability. if you decide it is unacceptable to have an algorithm that reflects gender bias in employment practices, how do you regulate that? and if you decide you regulate that at the national level, how do you
3:00 pm
coordinate that at the international level when data markets are global? these are the problems that we are all facing in government, and i think it's fair to say that the technological reach of machine learning and artificial intelligence has exceeded the grasp of our policy frameworks to contain and shape these new forms of digital power in the public interest. so what i'd like to do is start by setting a baseline of where different parts of the world are coming down on these issues, and what the building blocks look like at the regional level. there have been lots of efforts in the u.s. to address these questions. there have been lots of debates in the european union to address these questions. i would say less so in asia, although i'll be interested to hear more from yuet about what's happening in asian markets. but i want to first begin by allowing all of our panelists to speak from their own perspectives about what's happening in this field in their region. what is the approach to regulating or