
Military Technology Development | CSPAN | June 5, 2018, 6:55am-7:55am EDT

6:59 am
...which we don't really fully comprehend, being employed in military contexts where they are connected to highly destructive weapons, nuclear weapons being the most dramatic example. These technologies, with artificial intelligence or biology, create the potential for
7:00 am
unexpected effects, from accidents to interactive effects, and liability to sabotage. If you think about how we construct civilian technologies, in the exploration of self-driving cars, companies spend $100 million and tens of millions of miles of driving to encounter contingencies and debate what they mean. In the military context, we can't anticipate in the same way or do the equivalent of $100 million of driving in combat involving high-end, sophisticated adversaries, or anticipate how their systems interact with ours, and this intensifies the risk.
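A minimal sketch of the validation arithmetic behind that contrast, using the standard rule-of-three bound; the target failure rate below is an assumption for illustration, not a figure from the panel or the paper:

```python
# Rule-of-three sketch: after n failure-free trials, an approximate 95% upper
# confidence bound on the per-trial failure rate is 3/n. All numbers here are
# illustrative assumptions.
target_rate = 1e-8                # assumed goal: under 1 failure per 100M miles
miles_needed = 3 / target_rate    # failure-free miles needed to support the claim
print(f"{miles_needed:,.0f} failure-free test miles")   # 300,000,000
# There is no equivalent way to accumulate 300M "miles" of high-end combat.
```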
7:01 am
Because of secrecy and the like, we eliminate a lot of third-party review, and we have complications ourselves with silos, where different technology operators and users will not gain access to special access program materials and the like, so the risks are substantial. And this was particularly intense; if you look back at how we managed other technologies, to highlight one for you, the nuclear tests in 1954, where there was a calculation of the fallout range by smart analytic people,
7:02 am
and those people concluded an envelope of potential radiation dispersion ahead of the test, and it turned out to be 3x that, exposing people to radiation far beyond the predicted envelope. Another example I offer is associated with the 1977 H1N1 influenza virus, where, remarkably, there is a quiet consensus among biologists, you can find the traces in the literature, that the dissemination of that virus was probably caused by the escape of a military-related strain, or by the open-air vaccine tests the Chinese conducted around that time, and you have a worldwide pandemic which, for reasons we can get into if it interests the panel or you all, clearly doesn't
7:03 am
arise from a natural cause but rather from interactions with technology. To highlight two aspects of this: one is diagnostic, is this the right picture of the world, should we be worried about this in the way that I described; and the other is prescriptive, what is it we ought to do about this. The intent is not to have presentations from these skilled experts but to have a discussion among the panelists, and I encourage this trio, you probably don't need much encouragement, to push back as well as hopefully finding things they like in this report. We decided amongst ourselves to begin
7:04 am
with Gen. Cartwright, the most courageous of the four of us. >> This discussion is important at the front end, as early into the game as possible, and should probably be redone on multiple occasions as new technologies are introduced. I think, for me, on the experience side of the equation, the one area that has to be explored a little more, if given opportunity and favor, is that risk is a continuum, and you operate on that continuum; audiences are in different places on it, and demographically people are in different places on the continuum of the risk they are willing to tolerate. So for me, understanding where you
7:05 am
are on the continuum adjusts what steps you take as you move forward and what steps you are willing to take. For me, with a problem where somebody appeared on the flank of the unit and was able to take advantage of a position we had, and we lost people, I am willing to take the 20% solution right now, anything that will get me lives protected even if it is not all of them; I am willing to take that risk. If it is the approach of introducing precision and stealth, that is a different calculation, more in line with the decisions that are laid out in this paper, and it is important to understand that. Beyond the question in the first place of whether you are willing to take the risk: how do you continue to
7:06 am
intervene, and with what urgency? How do those decisions look in the light of day, looking back at how they were practiced and fielded, etc., and how do you re-wicker them along the continuum, making sure we have a process? >> The paper argues these technologies have unpredictable aspects, and on the military side of the equation, the effects we set out to enhance or create when we start down those paths, the second and third order effects, can look
7:07 am
bad, or even better. With stealth/precision, we started down that path with a sense that if we could put people in harm's way less often, have better management of collateral damage, and, with all the targets and precision on the intelligence side of the equation, deterrence would be better and the coercive effect of imposing will would be more effective. Much of that is true. The second and third order effects have been very interesting in the societal construct and the expectations of that construct. I will never forget the first night of the second time we went into Iraq, watching Baghdad: the bombing did not interrupt rush hour, traffic just
7:08 am
kept going, and the traffic lights worked and everything. >> Everything interrupts. >> It changed the thought process. A second order effect was something we never expected, which was a substantial reduction in the movement of men and materiel on the battlefield, because we didn't have to move 155 rounds forward in truckloads to keep up with the artillery. It fundamentally changed the game on how much we hauled, the iron mountains, so to speak. Those kinds of effects are beneficial. There are some that are equally non-beneficial, and we have to reassess. >> Do you disagree? >> On this point of the continuum of risk: currently
7:09 am
the national security agencies are at one position on this risk continuum; collectively they are around here, and part of the point of your writing this is that you think they ought to be someplace else, that the risks from accidents are too highly discounted or not considered at all in favor of immediate operational needs. A question that I have very much, as you said, is: where do we want to be? How will we know when we are done? How will we know when we are appropriately balanced? What will be the point at which you would be comfortable writing the counter to this paper, that we are doing too much now? I don't feel like we yet have a principled way of answering that question other than hearing which voices are yelling the
7:10 am
loudest. >> Let me disagree with both of you; Valerie is hopefully on my side. I don't really think we do, or should, dial risk up or down across the continuum. I am closer to the Perrow "normal accidents" view referenced in the paper, and I think the implication of the emergent effects and other things is that we can't calculate this terribly well. I don't think we can or should dramatically dial risk down or dial it up. What we ought to do is do a better job of recognizing and preparing for it, and of repressing the likelihood where we see it, in
7:11 am
concert with our opponents, given the inadvertent consequences. >> Valerie will take sides here. >> When I read this, one of the first thoughts that came to mind, I remember the quote from Jurassic Park: the technologists were so busy worrying about whether they could, they never stopped to think about whether they should. That is what this is about. From that perspective, I wrestle with whether technologists are the right people to be deciding how technology could be used. They can inform the discussion, but it needs the greater community that probably brings different
7:12 am
motivations than what technologists themselves bring to the table in terms of trying... >> That increases my faith in national security and DARPA. >> Let me ask: did you have more you wanted to say? >> In the last part of the paper there is an appendix, an excellent set of questions for program managers and directors: think about these considerations before we embark on the project, will it proliferate, and how can we cushion that. Do these questions make things look any different? >> I am not just saying this because he is my boss, but I have seen instances with myself and other
7:13 am
program managers in our extended family where questions brought to us lead to different action, and it is often the case that our scientists are thinking of what is to be built and the immediate first order effects, and not considering second or third order effects. As soon as the question is asked, as to what the larger effects are, things start to look different. One of the questions is how quickly an adversary would be able to replicate it: if you build it, how long until you get the benefit of your effort before your adversaries and terrorist groups have it too? And the answer is almost always that it will
7:14 am
help them; by building a capability you are contributing to your adversaries having that capability. Having this race condition, trying to be technologically superior, is all well and good if you can stay superior. The fact that it also means your adversaries can draft off of you is not desirable. If you can find opportunities to build defenses before the weapons, so that it doesn't matter whether you or your adversary builds it, the new capability no longer matters, and you get more stable situations, where you have more strategic stability but also breathing space in order to address technological actions.
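A minimal sketch of that race-condition logic as a toy payoff matrix; the payoff numbers are invented for illustration, not anything from the panel:

```python
# Rows are our move, columns the adversary's; tuples are (our payoff, theirs).
no_defense = {("build", "build"): (-1, -1),   # costly arms race
              ("build", "hold"):  (2, -2),    # unilateral advantage drives racing
              ("hold",  "build"): (-2, 2),
              ("hold",  "hold"):  (0, 0)}
# With an effective defense, the weapon confers no advantage, only its cost.
with_defense = {("build", "build"): (-1, -1),
                ("build", "hold"):  (-1, 0),
                ("hold",  "build"): (0, -1),
                ("hold",  "hold"):  (0, 0)}

def best_response(payoffs, their_choice):
    # Our payoff-maximizing move given the adversary's choice.
    return max(("build", "hold"), key=lambda m: payoffs[(m, their_choice)][0])

print(best_response(no_defense, "hold"))    # "build": racing dominates
print(best_response(with_defense, "hold"))  # "hold": no incentive to race
```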
7:15 am
>> Can you provide an example where an entity second to the game achieved an advantage? >> To be superior to what end? The Cold War wasn't good. Their having nuclear weapons was not offset by our having nuclear weapons. I don't think I am in a controversial position: it was not stable, it was a bit risky. That is just one example. >> We were always in a position of technological superiority. >> Do you think DARPA is trying to think of those kinds of defenses? >> I do.
7:16 am
There are examples in our portfolio in the context of AI where we are anticipating these sorts of challenges in those programs, the challenges of how we might address unintended consequences. Our mission is to ensure we never have a Sputnik moment, so we are looking to see where the technologies can go.
7:17 am
We don't have a Heilmeier-style catechism for this, but in the course of addressing those questions, what is the impact, the questions of unintended consequences do come up, and we think about the misuse of things, not as much as we could in some cases, but we do look to see how technologies might be defeated and how we might be surprised, looking at where the surprise comes from. >> This project seems relevant to that; spend a moment on it. >> Gene editing has gotten out there as a fairly democratized technology. We are looking at the unintentional: if something gets out into the environment, how do we know the gene has been edited, how do we turn it off, whether the release is intentional or not, and how do we mitigate the effects?
7:18 am
>> The creators built in termination dates so that if it escaped into the wild, its effects would be limited. In the context of how DNA works, with CRISPR and the like, it is dependent on a supply of ingredients being available; giving it a termination, that is an example of this kind of thing. >> You don't completely eliminate the surprises. [inaudible] That is the proposal. >> We should never be surprised
7:19 am
again. I have argued with DARPA directors about that. >> Surprise is the nature of technology, and we should stop creating the illusion that it will be otherwise. As you said, we will be surprised by what happens, and per the fundamental comments at the outset, the risk is there. We need to recognize it will always be there no matter what we do, and the question is then what we do about it, which gets back to the prescriptive side of this, if you want to say more about that. >> One other piece, bringing it into the current environment. Much of the challenge we have to deal with: it is easy to look back, have the vision of where it ended up, and backtrack; that
7:20 am
is one approach. But for most of the challenge, we are in an area where we do not understand the risk. We can't apply standard metrics and observation to it, because we don't understand it. This is a world of complexity, of interaction of parts. The engineering we have done on the military side, high reliability, is done by wiring something in hard, so that when you push a button you know what is going to happen. The world we are entering into, where things are connected together to make them more effective, does not yield the same result every time you push the button. You can see that in self-driving cars and genetics; the complexity is paced by computational power, and we draw straight-line forecasts into the future while the downside risks run
7:21 am
against an unknown line. Are we turning cyber into an existential risk? Maybe, maybe not; we don't know. The exploration piece of this takes time, and strategies are something these guys are very good at, but this complexity issue, where the outcome is not the same for the same action, is what we are starting to enter into. Looking at deep learning, if a car takes a curve while driving, that is a data point, one of millions, and the background information is different every time. We don't understand that stuff yet. >> I would add that I think the
7:22 am
20th century mentality, the idea that if we can assess these things we can do it once at the beginning, not only obscures the point we are making, but these systems evolve. The biological systems evolve because evolution is a biological phenomenon. The artificial intelligence systems evolve because that is the nature of their capabilities. If you take traditional digital technology, traditional as we have come to understand it after 40 years, 90% of what happens on a software system happens after the software is introduced: it is patched, modified, and interacts with other things; the Microsoft system has to interact with the Adobe system. The net effect is that the systems are more complex and interactive as they are used in the field, on Navy ships, with crews constantly changing the system in a variety of ways,
7:23 am
and people come aboard, and the system evolves in ways beyond its designers' dreams. This points to the need for reflection at the beginning and continuously, and that is a different kind of challenge. >> I agree that a lot of the risk isn't frontloaded; a hefty percentage is backloaded, or appears in operation, for all the reasons you just said. And I am sympathetic to the notion that the risks are really wild and difficult to quantify or identify, and I am very much in favor of being broad-minded when considering all the things that could go wrong. But where it starts to worry me is when we bring in the concept of undefinable risk, because what that shakes
7:24 am
out to mathematically leaves you in a tricky spot, because you have to act and you have to choose between actions. In choosing between options you are hopefully explicitly, but often implicitly, making judgments. If we are choosing to invest $100, or $1 million, towards a category of risk, that means we think it is worth doing so over something with a more well-defined benefit. We are in fact saying something implicitly about the structure of these risks, even if it doesn't get us all the way to operational reality. This is not so terrible, because even with these wild risks there are lots of things we know. The automated vehicle will
7:25 am
drive around, and we know it won't sprout wings and fly off; we actually have a lot of bounds on its behavior. I think it is the case, you use the analogy of maps somewhere in here, or maybe I imagined it while I was reading: we have this unseen terrain. We have some detail about the terrain we have been in, but we have these areas of technology we haven't explored yet; we don't know what is there, but we do know some of it. If there's a mountain and I ask what is on the other side of the mountain, you're not going to say a star; there is some combination of dirt, water, and air. So you actually know a lot, and if one recognizes that, that allows you to put bounds on the risks and begin to make these statements.
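A minimal sketch of the implicit judgment described above: choosing to fund a hedge against a vaguely defined risk, instead of a well-defined benefit, implies a lower bound on the probability being assigned to that risk. All figures are invented for illustration, and the sketch assumes the hedge would fully prevent the loss:

```python
# Illustrative only: invented figures showing that choosing between options
# implies a judgment about an "undefinable" risk.
known_payoff = 5_000_000            # option A: well-defined expected benefit
loss_if_risk_hits = 1_000_000_000   # option B: hedge against a wild risk

# Funding B over A is consistent in expectation only if the probability p of
# the wild risk satisfies p * loss_if_risk_hits >= known_payoff.
implied_min_p = known_payoff / loss_if_risk_hits
print(f"Choosing the hedge implies p >= {implied_min_p:.1%}")   # p >= 0.5%
```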
7:26 am
>> I agree with what you said. One thing I observed that intensifies your point: given the proliferation, this has a drive of its own, with countries responding to each other, and you not only decide for yourself but, in light of the presumption that others will do lots of things, you attend to the risks associated with this. But on the risks, one of the more aggressive propositions prescriptively in the paper is that we ought to substantially step up talking with our opponents as well as our allies about these risks, incentivizing them,
7:27 am
and making part of the planning process the development of robust mechanisms, contingency planning for events occurring. If you had an infectious biological pathogen that broke out in another country, we wouldn't be able to say that is just their concern. But we don't have joint planning around these kinds of things, or the equivalent of the nuclear circumstance, where we educate other countries about how to control the weaponry. Is this pie-in-the-sky, naïve and impossible? >> Yes. In a standard taxonomy of policy, from norms to regimes to the law, new technology will
7:28 am
tend to be managed, as long as we can manage for the unknowns, through policy, norms, and regimes, and with norms and regimes you reach out to your allies and adversaries through treaties and other venues to have a conversation. I will use as an example the New START treaty, for all of its warts: the preamble talks about conventional capability that looks like strategic, and defenses versus pure offense. It is a venue by which both sides can raise concerns, looking at this area and giving you a heads-up, so if you don't like something you can say so. We can talk about attributes of that system; we may listen to you or not, but we will have a conversation. We may elect in regimes not to go down a path even though it may cede a perceived advantage to you. It is robust but still
7:29 am
voluntary. >> You talk about the example of biological weapons; the treaty there was used as a ruse to cover efforts in that area. I agree we should be pursuing all potential paths towards awareness of unintended consequences and mitigation strategies, but we need to understand again that technological superiority does not equal national security, nor does the development of treaties and agreements equate to it either. I think the ultimate example of the choice we have is
7:30 am
drug regulation: substances studied initially, and now hundreds of people dying per day from overdoses. There will always be the unintended consequence or misuse, intentional or not, and it behooves us to think about that and not be too satisfied with any arrangements we might be able to have.
7:31 am
>> There is also a huge proliferation of technology in the commercial world, which doesn't help. Some technologies are more amenable to this than others. But for the ones that are not so amenable, should the strategy be that we say we are done, this is a negative development for humanity, and we will have to hope we can live with it, because the alternative is that we don't? I say no, we can try to address these things with technology. As a technologist I look for those opportunities to differentially find or develop technologies that will change this game. So we have programs relating to whether an organism has
7:32 am
been engineered, making bioweapons more detectable so that the treaties regarding those weapons are more enforceable, a service that the U.S. does for the world. This is just one example of the places where technologists can try to imagine what's over this mountain versus that mountain and choose to go the way they think will lead to better dynamics. >> I just want to make two concluding points and then open this up to questions and comments from the floor. One is on General Cartwright's nice construction of how we move from technology innovation through norms and regimes and so forth. I think I'm the only lawyer on this panel. The problem is there is such a delay: our institutions of governance, both domestically and
7:33 am
internationally, follow technology innovation, and they lag so much. As we get technology acceleration, which I think we are all witnessing, the importance of that delay grows as these technologies become more and more troublesome. Maybe I can afford 25 years before society catches up, or before the regime internationally gets there, if I'm dealing with a relatively controlled weapon held by relatively few competitors. When it moves very quickly, or a large number of people have access, it worries me a lot more. My problem as a lawyer is how laggard my profession is in this regard. The second thing I want to note is that a lot of what we do in pursuing technology is justified, and several comments from the panel said just this: well, this is risky, but it is a lot
7:34 am
less risky than letting our opponents get there first; if we maintain superiority, we can deter undesirable acts. That logic is time-tested and has power. But no one thinks superiority substantially deters accidents and unintended effects. The most potent rationale for advancement in the technology superiority race doesn't help us in the context of what I'm flagging here, and that's one of the reasons it's a particularly important topic. I do not want a world in which we cease to take risk or don't pursue technology superiority. But I think our establishment very much underinvests, for all kinds of reasons, in worrying about the kind of thing that this discussion and the report have been about. You'll get a chance to say more. Let's see if anyone from the audience wants to ask a question or make a comment. You don't have to do the normal
7:35 am
Washington thing of disguising your comment as a question; if you want to make a comment, just be brief. And please identify yourself; we have a television audience. It is not me making these derogatory noises, Paul. >> Richard, you are not worshiping the human in the loop, and are even a little bit skeptical of it. I was looking for recommendations at the end of the paper on what we should do to get humans out of the loop in certain ways. You didn't go that far. I'm wondering what you would tell policymakers about how to keep humans from screwing things up. >> Well, I would say that there is in our present circumstances a kind of
7:36 am
invocation of this figure, the human who will control the ultimate decision-making. The Pentagon's emphasis is constantly on the humans in the end making these decisions. I think its significance is overstated. There are circumstances where it matters. I like the fact that we are adding to situations a second variable, in this case the human being, that is different; I like that, it is a check on the system. But the reality is that most human decision-makers are highly reliant on machine information, on sensor data, on algorithmic analysis that has led to conclusions about flight
7:37 am
trajectories of ballistic missiles, where they are coming from, where they are going to, and the like. The human, given very little time, is typically going to go along with the machine, or in some instances is likely to add error of a different kind rather than to be corrective. Maybe the moral of this is that we should stop putting so much emphasis on the human at the end and place much more emphasis on the upfront analysis, cushioning, the creation of consequence management activities, and the like. It's at the front end that most of our opportunities exist, and for some of the reasons we've talked about, the front end is undervalued. >> Two seconds on that one. Again, looking forward is a lot harder than looking back.
7:38 am
But I'm not sure we know, in the AI construct, where the human actually belongs, number one. Number two, I believe the paper was very good in stating that up until now we humans offered pattern recognition; we offered some of the ability to see something and identify it, those types of things. Much of that has been overtaken by other kinds of sensors that are more effective. I can see 30 miles out at facial quality, instead of my own eyes from an airplane seeing maybe 3,000 or 4,000 feet. So things have changed. I don't know that we know exactly where the human's place is; to think otherwise is to go against human nature in and of itself. If I asked anybody in this room to create an automated process for putting your coat on in the
7:39 am
morning, we would do it just like we do it today. We would try to write a program for that, and over time the machines would do it differently, because we would find different ways of doing it. We have experienced this. As an example, a classic example, fly-by-wire was once the most dangerous thing in the world: we should never think about going there, because if it is not metal connected to metal, I've lost control. So what did we do? We told people they were making the inputs; it was seven years before the inputs were connected to the flight controls. That is not necessarily a path to go down, but we didn't know where the person's input piece belonged, or how we should describe it, because it wasn't the same as before. We are on a path right now like the normal thing we did when the Pony Express and the steam engine
7:40 am
and the car, et cetera, were competing. If you look at Deep Blue, and at Watson on Jeopardy, we are competing with the machine right now. We have not entered into the partnership, and we don't know what that looks like, quite frankly, because the competition helps us understand why we do the things we do. The potential here is that we offer diversity in how we make decisions, just by our uniqueness versus machines. How does that get implemented? I'm not sure. But the last point I would make is, don't forget about the disruption that is caused by the people left behind. Our education system drops at least one generation, if not two, in any one of these transitions. Right now we can't afford that; it is debilitating to culture, governments, structure, et
7:41 am
cetera. We've got to think about people: when we have an over-the-air transaction that changes a capability, how does the next crew understand how they should interact? Where does that come from? Our education process right now is dropping generations out, and we are going to have to figure out how to fix that. >> I would like to add on to that. You mentioned machine learning and that we need to better understand what that means and how we are using that phrase. I will say something somewhat contentious right now, so I want to make clear that it's me and not DARPA speaking: I think the term machine learning is a misnomer. I don't believe we have machines that learn. If they were learning machines, they would tell you when they were giving the wrong answer. I believe we have machine-trained systems. The information that they give back to us is only as good
7:42 am
as what we have programmed and trained them to do, and a function of the input data we are asking them to evaluate. So the notion that these machines are at fault, or are somehow misleading us, is in itself a misleading statement; they are as good as we have made them at this point. That is not to say that in the future we might not impart the equivalent of learning into machines, into AI. In order to do that, they need to be able to do some sort of equivalent of reasoning about whether the question they are answering actually makes sense. Does it hold up? We've all seen examples of an image of a building with some noise added to it, and now the machine says it's an ostrich.
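A minimal sketch of the kind of fragility referenced here, using a toy linear classifier rather than a real image network; the weights, labels, and epsilon are invented for illustration, though real attacks on image classifiers follow the same gradient-sign idea:

```python
import numpy as np

# Toy linear "classifier": score > 0 -> "building", otherwise "ostrich".
# The weight vector stands in for a trained model; everything here is invented.
rng = np.random.default_rng(0)
w = rng.normal(size=1024)             # model weights
x = rng.normal(size=1024) + 0.1 * w   # an input the model scores as "building"

def label(v):
    return "building" if w @ v > 0 else "ostrich"

# FGSM-style perturbation: nudge every "pixel" a tiny amount against the weights.
eps = 0.2
x_adv = x - eps * np.sign(w)

print(label(x), "->", label(x_adv))        # building -> ostrich
print(float(np.max(np.abs(x_adv - x))))    # each pixel moved by at most 0.2
```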
7:43 am
We can look at that and know from experience that it's still a building even though some minor things have changed; we're not past that point yet. Human on the loop, human in the loop: there are humans in the loop through the whole process, because we are the ones training and programming these machines. >> Another question. >> What role does the private sector play in establishing the key elements of your paradigm? >> Would you like to comment? >> So, certainly, industry is
7:44 am
advancing technology, but there are differences in acceptable performance, because the consequences of the machine making a mistake in deciding what advertisement you're going to see when you go to Google are different from the consequences of a mistake when you're trying to do target recognition. There is certainly a role for industry in advancing technology, but it can't carry the full burden; it's got to be with the DoD in national security. >> I think we have all seen the shift over the years; there is now, by some measures, twice as much R&D being done in the private sector as by the government. The companies obviously have
7:45 am
market incentives, many of them global, and they drive the proliferation of these technologies whether the companies want to participate in military matters or not. The companies clearly have a significant role to play, and I think it's an important question for us to figure out. It's rather clear that companies in more authoritarian states are heavily influenced by the maneuverings of those states. The United States is quite a different venue, in which some companies are very cooperative with the national security mission and others aggressively reject it. We need to sort that out more, and we haven't figured that out. At the moment it's a complication for us that doesn't exist in anything like the same form for them. But that is a whole other topic.
7:46 am
>> I would just add that, at least from my perspective, if you go into the private sector, particularly small businesses and private labs, things like that, the risk tolerance is a lot higher. In other words, they're willing to take a lot more risk and willing to fail, which is unique to this country. And so it is not one-size-fits-all; I tend to look more towards [inaudible] and DARPA in the early stages, and then follow it as it changes. They are very good at incremental improvement. Just the basic process of incremental improvement is much better in the private sector than it is in the military sector, because they keep doing it; it is their way to survive. >> From the back row, further
7:47 am
back. Go ahead. >> My name is Tony Spadaro. The panel addressed the issue of a new technology we invest in and the adversary then using it. On the hypothetical that we have a foreign entity that is a risk to life, with a new innovative technology that we did not get to first: do we have the structures and agility to play catch-up and perhaps become superior? >> I would argue that we do; we have demonstrated that historically. Whether we would be successful is another question. But the opportunity, particularly on at least the military side of the equation, is to have the urgent-need process kick in: I was fine,
7:48 am
and I don't know why I am no longer fine; I've been outmaneuvered, or theirs flies higher or goes faster, whatever it is; what are we going to do about it? Get those guys on it: why did that happen, and what do we have to do? If it is a big breakout, there are a lot of stopgap-type things you can do to manage it. We should not think of that as an aberration; that should really be in the normal thought process. >> To address that problem, combined with a previous question in the air: we are in a very global economy, and we've already heard that the bulk of our R&D, particularly with regard to applications relevant to defense, is happening in the private sector. And so I would struggle
7:49 am
to see it happening very often that there would be a breakout or advance of some kind in another country that stayed walled off from other countries, including ourselves. That is in fact part of my point: I would have to be concerned about one thing, and it seems like I don't have to be concerned about the other. >> We are coming up against a deadline. >> An observation on the prescriptive part of the report, with its emphasis on processes and policies and the recognition that, since we can't control these technologies, we have to deal with potential consequences. It seems to me that we might adopt and adapt, or embrace, the private sector's learning in a complex global innovative society, and that is agility, and
7:50 am
the agility of systems depends upon the agility of people. The general point suggests a recommendation on the human capital side, because we won't get there by investing in technology alone; if we have agility in people as well, we may be in a better position. >> I have a comment on that. >> I think it is very clear in our new National Defense Strategy that agility is very much placed at a premium, and the strategy insists on delivering performance, so it will be part of this strategy in terms of making sure we are leveraging the manpower appropriately to achieve that. >> I don't know that I have ever been in a public meeting with regard to technology and what the U.S. government should do that did not say at some point that the U.S. government should be
7:51 am
faster. The issue is how you actually do that. The comment about people power is absolutely true, but another part is bureaucratic: there are a multitude of reasons why the private sector is more flexible than government, and part of it is that they are able to smash things and build up much quicker than we are. >> I would just add that I think there are a number of very concrete things we need to do. For example, on the personnel side, which you rightly point to as really important: the amount of waste associated with our uniformed services, where special talent do not see potential careers in the military that are nearly as gratifying for them, is of serious concern from my standpoint.
7:52 am
The system does not account for these requirements or have agility. For example, if you are Navy enlisted with technical skills, you can rise up to middle rank pursuing a technical career, but if you want to rise beyond that, there is no technical path you can follow. If you are a young officer who has particular skills associated with information systems and the like, we have developed a community now that gives you some shot at promotion, but the likelihood that you will ultimately assume a role responsible for budget, policy, and operations is much lower than if you were in the unrestricted line rather than the restricted one. Examples of that kind continue. We need to stop being simply rhetorical, as the question implies, and be much more operational in this regard.
7:53 am
On the acquisition side, if for example we said the speed of technological change is such that we don't get full value from, say, a carrier that will have a 30-year lifetime, or some ship with a 40-year lifetime, we would discount the value 30 or 40 years out because of this change, and we would insist on buying platforms where you build agility in. Then you would begin to reform that system, even in a bureaucratic way. I could go on, but I would get in a lot of trouble, not only for what I'd say, and we are overrunning our time. I want to thank the panelists; it's generous of you to come and do this. The Center for a New American Security, and especially the Open Philanthropy Project, supported this, and it
7:54 am
wouldn't have happened without them. They should be recognized. Thank you all very much. [applause] [inaudible conversations]
7:55 am
