Military Technology Development — C-SPAN, June 5, 2018, 2:12pm-2:59pm EDT
2:12 pm
part of C-SPAN's 50 Capitals Tour and our stops in Pierre, South Dakota, and Bismarck, North Dakota. >> Former Navy Secretary Richard Danzig outlined his new report about the potential risks of emerging technology, speaking at the Center for a New American Security. He compared the technology development efforts of the U.S. and other countries. Speakers included former Joint Chiefs of Staff Vice Chairman Cartwright and several military experts. This is about an hour. >> Welcome, everyone, I'm Paul. I'm pleased to announce the
2:13 pm
launch of a new report talking about how we think about technology risk. Secretary Danzig has been a leader on emergent technologies for a long time, and I'm excited this new report offers valuable contributions to how we think about risk for emerging technologies in the national security space. A number of countries have expressed a desire to work in artificial intelligence. China announced a national plan on artificial intelligence, and the United States also announced research in it. We have a great lineup of speakers. I'm going to give brief -- I'll briefly mention them. You have their full bios in your handout.
2:14 pm
We're going to have a great discussion, and I hope it will address how we look at risk in advanced technology. >> Thank you, not only for this event but for your role in supporting it. We are quite grateful for the contributions you all made. This report, I'll just briefly mention, stems from an observation about our present thinking about maintaining technological superiority. It's probably the case that no single theme has been as dominant in recent times in national security discussion. I think this is a consequence of a certain normalization of the world. The present national security establishment grew out
2:15 pm
of the World War II experience and the Cold War experience. This was an experience where the U.S. strongly dominated. At the end of World War II we had a vast collection of refugees from Europe. We had 50% of global G.D.P., the structure of industry, and the like. This led naturally to our dominance, but we nurtured it and sustained it. Now we see, as I say, a more normal world, in which China is moving toward equality with us, and notably, if you project as some economists have to 2050, you might see a world in which Chinese economic
2:16 pm
capability measured in G.D.P. would be 60% greater than ours. There are questions about our technology superiority. My report is not actually, though, about how we maintain that. Lots of people have spoken about that. The focus is on a different thing, which is that superiority is not synonymous with security. The race itself generates risk. What we see moving toward is increasingly complex technologies, which we cannot really fully comprehend, being employed in military contexts where they're connected to highly destructive weapons, nuclear weapons being the most dramatic example. And these technologies, whether for intelligence or information systems or biology, will create potential for unexpected
2:17 pm
effects, for accidents, for emergent defects, things that no one could predict, interactive effects, and they're liable to sabotage. It's striking, for example, in our exploration of self-driving cars, we naturally insist on what would be tens to hundreds of millions of miles of driving to encounter all the unexpected kinds of contingencies, and we debate what it means when one company or another has an accident. In the military we can't anticipate the same way. We don't do the equivalent of hundreds of millions of miles of
2:18 pm
driving in combat. We can't see how these systems interact with ours. This compounds and intensifies the risks. It's also the case that we eliminate a lot of third-party review, and in fact have complications ourselves internally with different technology operators. Some of the users will not gain access to special access program materials and the like until they're actually called upon to use them. The risks are substantial. I would just underscore that the nature of those risks is particularly intense when you look back at the history of our managing other technologies. I give some examples in the paper; just two I'd highlight to you. The Castle Bravo nuclear test in 1954, where there were calculations of the fallout range done by very smart,
2:19 pm
analytic people, and those people concluded that the potential radiation dispersion is x. They go ahead with the test, it turns out to be 3x, and some 600 people were exposed to radiation. Another example I offer in the paper is associated with the influenza virus, where remarkably there's a quiet consensus among biologists, you can find traces of it in the literature, that the dissemination of that virus was probably caused by the escape of the virus from a government freezer, probably a military-related thing, or possibly the result of an open-air vaccine test which was conducted around that time. Either way you have a worldwide
2:20 pm
pandemic. Clearly this didn't arise from a natural cause but from interaction with technology. The paper lays out recommendations about what to do about all this; we can get into that in the discussion. In the discussion we highlight two aspects of this. One is diagnostic: is this the right picture of the world? Should we be worried about this in the way that I've described? And then the second is prescriptive: what is it that we ought to do about this? So our intent really is not to have a presentation from these three very skilled experts, but really to have a discussion amongst the four of us. I would encourage this trio, and if you know this trio you know they probably don't need much encouragement, to be finding some things they don't like in this report.
2:21 pm
We decided amongst ourselves that General Cartwright would start. >> I'm the knuckle dragger. This discussion is important to have at the front end, as early as possible, and it should be redone as new technologies are introduced. For me, on the operational side of the equation, the one area that I think has to be explored a little more than probably was given opportunity in the paper is that risk is a continuum, and you operate on that continuum. Audiences are in different places on it. Demographically, people are in different places on that continuum of risk, what they're willing to tolerate, what they aren't. Understanding where you
2:22 pm
are on the continuum, and how much risk you're willing to take. For me, with a problem of somebody who appeared on the flank of a unit and was able to take advantage of a position that we had, and we lost people, I'm willing to say at DARPA, I'm willing to take the 20% solution right now, untested, anything that will get me lives protected, even if it's not all of them; I'm willing to take that risk. If it's "should we introduce this," that's a different decision. And more on these decisions is laid out in the paper. I think that's important to understand. The question in the first case, when they're willing to take the risk, is at what point do you intervene and how do you
2:23 pm
continue to intervene as they move along, to understand the risks that you signed up to in urgency and how they look now in the light of day, when you've got time to look at how they were practiced, how they were fielded, what effect they had, etc. Oftentimes we get down that road and relook, but that continuum is important to me. We've got to understand it, got to make sure we have a process for both ends of the continuum. >> Recognizing that point, the paper argues that these technologies have unpredictable aspects, that they are opaque and complex. Do you think that's right? >> Very much. It's difficult. As you start down a path, you start down for a reason, for a desired outcome you'd like to enhance or create. We can often start down these paths and all of a sudden the
2:24 pm
outcome maybe looks good, but the second or third order effects may look bad, or may look even better. I go back to stealth and precision. We started down that path with a sense that if we could put people in harm's way less often, have better management of collateral damage, understand our targets better with precision on the intelligence side of the equation, then our deterrents would be better. Our coercive effect at war and at imposing will would be more effective. Much of that is true. But the second and third order effects of it have been very interesting in the societal construct and the expectation construct. I'll never forget the first night of the second time we went into Iraq. I was watching Baghdad; the bombing did not interrupt rush hour.
2:25 pm
People kept going. >> Whereas in Washington, everything interrupts. >> Everything. But it has changed the thought process. The effect was something we never expected, which was a substantial reduction in the amount of men and equipment on the battlefield, because we didn't have to move 155 rounds forward in truckloads to keep up with the artillery. It changed the game. It changed the game on how much it took in the iron mountain, so to speak, to get forward. Those kinds of second and third order effects are beneficial. There are some that are equally not beneficial. You have to reassess.
2:26 pm
>> On this point that there's a continuum of risk, currently national security agencies are at one position on this continuum. Collectively, they're around some value. Part of the reason you're writing this is you think they ought to be someplace else, right? That the risks from accidents are being too highly discounted, or just not considered at all, so they're discounted differently, in favor of immediate operational concerns. A question that I have is very much like what the general said about where do we want to be. How will we know when we're done? How will we know when we're appropriately balanced? What will be the point at which you would be comfortable writing the counter to this, saying we're worrying too much now? And I don't feel like we yet have a principled way of answering that question other
2:27 pm
than hearing which voices are yelling the loudest. >> Let me disagree with both of you, actually. Valerie, we'll invite you in in a moment. I don't think we do or should dial up risk across this continuum. I'm closer to the normal accident theory as referenced in the paper. I think the implication of the emergent effects and other things is, we can't really calculate this. I don't think we can or should dramatically dial down risk or dial it up. I think what we ought to do is a better job of recognizing and preparing for it, so we can take certain steps to repress the likelihood of what we see, or give us a better ability to respond, even in concert with our
2:28 pm
opponents, to inadvertent consequences. I can say more about what I think. Do you want to say anything? You don't have to speak if you agree with them, not me. >> When I read this, one thing that came to mind, I remember the quote in the movie "Jurassic Park," I can't remember it exactly, but the technologists were so busy worrying about whether they could, they never stopped to think about whether they should. That's kind of a little bit of what I thought about. So from that perspective, I wrestled with whether technologists are really the right people, in and of themselves, to be deciding how technology should be used. They can inform the discussion, but it needs a greater community
2:29 pm
that probably brings motivation beyond what the technologists themselves bring to the table in terms of trying to -- >> I wonder if it increases or undermines faith in national security that you're quoting "Jurassic Park." Did you have more you were wanting to say? >> I did, but it'll come out. >> In the last part of the paper, there's an appendix in which I cite the excellent questions that he provides to program managers and directors: will they have unintended consequences and how do we deal with that. Do these questions make a
2:30 pm
difference? >> Definitely. I'm not just saying that because he's my boss, but I've seen situations with myself and others in our extended family where having these questions brought to us leads us to different action. It's often the case, we're technologists, we're scientists, that we're thinking of what can be done and what can be built, seeing the immediate first order ramification of that for operational use, and then not considering the second and third order effects. But as soon as the question is asked, what are the larger effects, things start to look different. One, more complex, but also that maybe it's not such a good idea to go down that route. So one of the questions is about how quickly an adversary would be able to replicate your capability once you build it. They could build it completely on their own, but if you build it, how long until they get some of the benefit of your efforts? And you could have adversaries
2:31 pm
that are nation states or terrorist groups. And the answer is almost always that it will help them, right? By building a capability, you're contributing to your adversaries having that capability. So trying to be technologically superior is all well and good if you can stay superior, but the fact that you're running faster means that adversaries can draft off you. So if you can find a way to build defenses before the weapons, so that it doesn't matter if you or your adversary builds it, the new capability no longer matters, that leads to better game theory, more stable situations, where you have more strategic stability but also sort of breathing space in order to be
2:32 pm
able to now start to address technological access. >> Can you provide an example where somebody, an entity that is second to the game, has received an advantage? >> By advantage, do you mean superior to us? I mean the Cold War wasn't good, and the USSR having nuclear weapons was assisted by us having nuclear weapons, right? I don't think I'm in a very controversial position in saying that these two countries offsetting each other was not stable. That was just one. >> We were always in a position of power. >> You think they're trying to -- >> I do.
2:33 pm
2:34 pm
The question of unintended consequences does come up. We do need to think about them, maybe arguably not as much as we could. But generally we look to see how we might be surprised; we are always looking at where the surprise might come from. >> The project seems to be an example. Can we spend a moment on that? >> It's not in my office. It's becoming a fairly democratized tool, looking at both unintentional effects, if something gets out into the environment, how do we know there's a gene that's been edited and how do we
2:35 pm
turn it off? >> I'm struck by the fact that creators have said they created something that escaped into the environment. In the context of some of the D.N.A. work, we can make changes but also make our new creations dependent on some supply of ingredients that are not naturally available, so we can control it. These strike me as examples of this kind of thing. >> But you're never going to completely eliminate the surprise. >> This is central to my view.
2:36 pm
We should never be surprised again -- my view, and I've argued with generals about this, is that that's wrong. We are going to be surprised. It is the nature of technology, and we should stop creating the illusion that it would be otherwise. Just as you said, we will be surprised by what happens. The fundamental comment you offered at the outset: the risk is there, it will always be there no matter what we do. The question is, what do we do about it? And I'd like to come back to the prescriptive side of this, but if any of you want to say more about that before we return. >> I think there's one other case that brings it into the current environment. Much of the challenge we have, it's easy to look back at where it all ended up and why it ended
2:37 pm
up there. That's one approach, but most of the time the challenge is, we're in an area where we do not understand the risk. We can't apply standard metrics and observation to it because we don't understand it. Of course, this is a world of complexity, of interaction of parts. In the engineering we have done to date, at least on the military side, high reliability is done by wiring something in hard, so when you push a button you know exactly what's going to happen at the other end. The world we're entering into, where things are connected together, ideally to make them better, does not yield the same result every time you push the button. You can see it in self-driving cars. In general, complexity is really being paced by computational power. We're trying to straight-line a
2:38 pm
forecast into the future and the downside risks against an unknown line. We're just using what we know and turning cyber into, you know, an existential risk. Maybe, maybe not. We don't know. So the exploration piece of this, which takes time, deductions, strategies, it's a very good approach. This complexity issue, that the outcome is not the same for the same action, is what we're starting to enter into. So when you look at a learning algorithm, you look at deep learning, if it takes a curve driving a car, that's a data point, one of millions, and it just keeps learning. It will never take that curve the same way, because the background information is different. People don't understand that.
2:39 pm
>> I think the 20th century mentality not only obscures the point you're making but also says if we can assess these things well, we can do it at the beginning and have it. These systems evolve. The biological systems evolve because evolution is a biological phenomenon. Artificial intelligence systems evolve because that's the nature of their self-learning capabilities. It's not just them. Even if you take traditional digital technology, traditional as we understand it after 40 years of experience, 90% of what happens on a software system is after the software is introduced. The Microsoft system has to interact with the Adobe system, etc. As the systems grow more complex and more interactive, they're used out there in the field on many ships.
2:40 pm
We're constantly changing the system in a variety of ways. So one of the things that recently the accident report observed is, new people come on and the system has evolved in ways beyond them. This needs to be reflected not just at the beginning but also continuously. That's a different kind of talent. >> So I absolutely agree that not all of the risk is front-loaded; some is back-loaded or appears in operation, for all the reasons you just said. And I'm also very sympathetic to the notion that the risks are wild. They are very difficult to quantify or identify, what not. And so I'm very much in favor of being broad-minded when considering all the things that can go wrong. But when one starts to bring in this concept of, it's just undefinable risk, it's
2:41 pm
infinite risk, that's what it shakes out to mathematically, that leads to a tricky spot. You still have to act, you still have to choose between options. And in choosing between options, you are hopefully explicitly, but often implicitly, making judgments about relative risk. If we're choosing to invest $1 million or $1 billion toward addressing some category of risk, that means we think it is worth doing so and not spending that amount of money on something else with a more well-defined benefit, right? So we are in fact saying something implicitly about the quantitative structure of these risks. So saying they're undefinable doesn't get us all the way to an operational reality. This is not so terrible, because in fact even with these wild risks, there are lots of things
2:42 pm
we know. So yes, the automated vehicle will drive around and could potentially do all sorts of things. We know it's not going to sprout wings and fly off and try to -- >> I'm not sure about that. >> We actually -- we still have a lot of bounds on this behavior. So I think it's the case that you used the analogy of a map, or if you didn't write it, then I thought it while reading. So we have all this unforeseen terrain we haven't explored yet, right. We have some detail about the terrain that we have been in, but not perfect detail. But we have all these areas of the vast continent of technology we haven't explored yet. We don't know what's there. We do know something. If there is a mountain over there and I ask you what's on the other side of the mountain, you're not going to say, a sun. There's not a star there. There's going to be some combination of dirt, water, and air. You actually know a lot. So if one recognizes that, that
2:43 pm
allows you to put bounds on the risk and now begin to make statements. >> I agree with what you just said. One thing, though, that I observe that I think intensifies your point is, given the proliferation, the dissemination of science, it's the case that this has a drive of its own. This modern technology is autocatalytic; all these parts contribute to each other. You not only have to decide what you're going to do, you have to decide in light of what others are doing. You not only have to attend to the risk associated with us doing things but also the risk of them doing them. One of the more aggressive propositions prescriptively in the paper is, you have to substantially step up talking with opponents as well as our allies about these risks, or we intensify them. And part of the planning process
2:44 pm
is trying to develop some robust mechanisms and contingency planning against events occurring. An example I use is, if you had a biological pathogen that was highly infectious that broke out in some other country, we wouldn't be able to say, that's just their concern. This is obviously going to be ours. But we don't have joint planning around these kinds of things. We don't have the equivalent of the nuclear circumstance, where we, for example, educate other countries about how to control their weaponry. What do you think of this? Is this pie in the sky? Is it impossible? Desirable? Both? >> Yeah. In a standard taxonomy of policy to norms to regimes to law, new technology can be managed, as long
2:45 pm
as we can account for the unknowns down at the level of policy, regimes, and norms. It's in regimes and norms that you reach out to your allies and your adversaries in other venues to have a conversation. One example, for all its warts, is a treaty whose preamble talks about conventional capabilities that look like strategic, defenses versus offenses. It's a venue by which both sides can say, I'm looking in this area. I'm giving you a heads-up, so if you don't like it you can say something about it. We can talk about attributes of that system at some point. We may listen to you or we may not, but at least we'll have a conversation. We may elect in norms and regimes to not go down that way at all, even though it may offer some
2:46 pm
perceived advantage to you. It's robust, but it's still voluntary. >> My problem -- go ahead. >> You talk about the example of biological weapons. Our decision was to renounce them; they upped their efforts in that area. So I would just -- I agree we should be pursuing all potential paths toward awareness of unintended consequences and mitigation strategies for them. But I also think we need to understand again that just as technological superiority does not equal national security, neither does development of treaties and agreements equate to that as well.
2:47 pm
The opioid crisis we have today is an example. We had studies initially, and now people are dying every day from overdoses. There's always going to be that unintended consequence or misuse, whether intentional or not. We need to keep thinking about it and not be too satisfied with any sort of arrangement we might be able to have. >> Do you want to comment? >> Yeah, so, I'm very much in the camp that there are opportunities through talking with allies and adversaries to create good game theory, to create better interactions with each other such that even following everyone's self-interest will yield a better situation. The difficulty is that people may have desires or incentives
2:48 pm
to defect, and with some of these technologies it can be difficult to tell when they have. You mentioned that with regards to bio weapons, and that involves detection and supply chains and what not. And also dual use: the proliferation of technology in the commercial and private sector doesn't help. So some technologies are more amenable to this than others. But for the ones that are not so amenable, the strategy shouldn't be to say, well, that's it, we're done, this is just a negative development for humanity and we're just going to have to hope we can live with it, because the alternative is that we don't, right? I say no, we still in fact have a way we can try to address these things with technology. As a technologist, I look for the opportunities to differentially develop technology that will change this game.
2:49 pm
We have programs dedicated to detecting if an organism has been biologically engineered, as one small piece of the pipeline for making bio weapons more difficult. This is one example of where a technologist can try to imagine what's over that mountain rather than this mountain, and choose to go that way because it will lead to better dynamics. >> I want to make two concluding points and then open up to questions and comments from the floor. General Cartwright's nice construction of how we move from technology innovation through policy, norms, regimes, and so forth:
2:50 pm
the technology innovation leads, and the others lag so much. As we hit technology accelerations, which we're all witnessing, the importance of that delay, as these technologies become more potent, becomes more and more troublesome. Maybe I can afford two years before society catches up or before the regime internationally gets there, if I'm dealing with a relatively controlled weapon with relatively few competitors. When there is quickly a large number of people this proliferates to, it matters a lot more. My problem as a lawyer is how laggard my profession is in this regard. The second thing I want to note is, a lot of what we do in pursuing technology superiority is justified, and I think several comments from the panel suggest
2:51 pm
this, on theories of deterrence. It's risky, but less risky than letting our opponents get ahead. If we maintain superiority, we can deter undesirable acts. This theory is time-tested and has power. But nobody thinks deterrence substantially deters everything. So the most potent rationale for our advancement in the technology superiority race doesn't help us in the context of what I'm flagging here. I do not want a world in which we cease to take risks or in which we don't pursue technology superiority, but I think our establishment very much underinvests, for all kinds of reasons, in learning about the kinds of things that this discussion has been about. You'll get a chance to say more. I'll look to see if anybody from
2:52 pm
the audience wants to ask a question or make a comment. You don't have to do the normal Washington thing of disguising your comment as a question. If you want to simply make a comment, just be brief. Go ahead. And if you could identify yourself, since we have a television audience. It's not me making these derogatory noises. >> In your argument, you're not worshipping the value of humans in the loop. In fact, you're a little bit cautious. I was looking for recommendations at the end of the paper on what we should do, maybe to get humans out of the loop constructively, but you didn't quite go that far. I wonder what you'd tell policymakers and those responsible for A.I. development initiatives now, how to keep humans from screwing things up. >> I would say that there is a
2:53 pm
deus ex machina, there's a figure off the stage, the human who will control the ultimate decision making. The Pentagon emphasis is constantly on, humans are making these decisions. I think it's overstated in its significance. There are circumstances where it matters. I like the fact that we are adding, in situations [inaudible], a second variable, in this case a human variable, in addition to the machine. The machine is digital and the human is analog; it's a check on the system. The reality is most human decisionmakers are highly reliant on machine information, on algorithmic data, on algorithmic
2:54 pm
analysis of flight trajectories of ballistic missiles and the like. With very little time, the human is typically inclined to go along with the machine, or in some instances is as likely to add error of a different kind as to be corrective. My view about this is we should stop putting so much emphasis on the human at the end and place much more emphasis on the kind of upfront analysis, cushioning, creation of consequence management activities and the like, because it's at the front end that most of our opportunities exist. But for some of the reasons we have talked about, I think the front end is undervalued. >> I want a few seconds on that one. Again, looking forward is a lot
2:55 pm
harder than looking back. But I'm not sure we know in the A.I. construct where the human actually belongs, number one. Number two, I believe the paper brought that up well in stating that until now we offered pattern recognition. We offered for a long time the ability to see something and identify it. Much of the five senses have been overtaken by other kinds of sensors that are more effective. I can see 30 miles out at facial quality, instead of my own eyes in an airplane seeing from maybe 3,000 or 4,000 feet. So things have changed. I don't know that we know exactly where the human belongs. And I think that to think otherwise is to go against human nature in and of itself. If I ask anybody in this room to
2:56 pm
create an automated process of, you know, putting their coat on in the morning, we'd do it just like we do it today. We'd try to write a program for that, and then over time the machines would do it differently, because we'd find different ways of doing it. We experienced this in fly-by-wire. Use that as an example, a classic example. That was the most dangerous thing in the world. We should never even think about going there, because if it's not metal connected to metal, I've lost control. So what did we do? We told people that they were making the inputs. But it was seven years before we told pilots the stick wasn't connected to the flight controls. That's not necessarily a path to go down, but we didn't know where the person input piece fit, how we should describe it. It wasn't the same as before. That's the key. We are on a path right now where
2:57 pm
we're doing the normal thing we did with the pony express and the steam engine to the car, etc. We're competing. If we put Big Blue out and we win at chess, we put Watson out and win at "Jeopardy," we're competing with a machine. We have not entered into the stage of partnership by a long shot. We don't know what that would look like. The potential here is we offer diversity in how we make decisions, just by our own uniqueness and by the uniqueness of machines. How does that get implemented? I'm not sure. But I think the last point I'd make on this is, don't forget about the disruption that is caused by the people that are left behind. Our education system drops out at least one full generation, if not two, in any one of these transitions. Right now we can't stand that.
2:58 pm
That's just -- that's hugely debilitating to culture, to governance, structure, etc. We've got to think more about how we do this. Your example of a ship: we have over-the-air transaction and change capability, so how does the next crew understand how to interact with that machine? Where does the learning come from? Our education process right now is dropping those generations out; we've got to figure out how to fix that. >> I'd like to add on to that. You mentioned that we need to better understand what human on the loop means and how we're using that phrase. I'm going to say something somewhat pretentious. I want to make clear it's me, not DARPA, speaking. Anybody who has worked with me has heard me say that I think machine learning is a misnomer. I don't think we have machine learning. If they were learning, they'd be able to tell you when they were
2:59 pm
giving you a wrong answer. We have machine training systems. The information they give back to us is only as good as what we programmed in or trained them to do, and a function of the input data that we are asking them to evaluate. So the notion that these machines are at fault or somehow misleading us is in itself a misleading statement. They are as good as we -- >> Live now to the White House for what President Trump is calling a celebration of America. This came about after canceling a scheduled event with Super Bowl champions the Philadelphia Eagles. Mr. Trump tweeted the White House will host the United States Marine Corps Band and hear the national anthem. There you see the president, live coverage now here on C-SPAN. >> Ladies and gentlemen, please join us in singing our