Military Technology Development -- CSPAN -- June 4, 2018, 11:07am-12:23pm EDT

11:07 am
[inaudible conversations] >> all right. welcome, everyone. thanks for coming here today. i'm the director of the technology and national security program here at the center for a new american security.
11:08 am
11:09 am
>> i'm quite grateful for all the hard work and valuable contributions you all made. this report, as i'll just briefly mention, stems from an observation about the centrality in our present thinking of maintaining technological superiority. it is probably the most repeated phrase in national security discussion, and i think this is a consequence of a sort of normalization of what was actually an anomalous position. the present national security establishment really grew out of the world war ii experience, the
11:10 am
whole cold war experience. and this is an experience in which the united states strongly dominated in technology innovation. at the end of world war ii we had this vast collection of refugees from europe. we had 50% of global gdp and an intact structure of industry and the like, and this led naturally to our dominance. but we nurtured it, interestingly, to offset soviet union advantages in terms of manpower with technology. now we see a more normal world in which technologies diffuse much more substantially, in which china's economic capabilities begin to come towards equality with ours. notably, if you project, as some economists have, to 2050, you may see their economic capability measured by gdp in dollars would be 50% greater than ours.
11:11 am
my point is that superiority is not synonymous with security. it generates risk, and the proliferation of technologies particularly compounds those risks. so what we see as we move forward is increasingly complex and opaque technologies, which we cannot really fully comprehend, being employed in military contexts where they are connected to highly destructive weapons, nuclear weapons being the most dramatic example. these technologies, whether artificial intelligence or information systems or biology, create potential for unexpected effects or accidents,
11:12 am
emergent effects, things no one could predict, and they are liable to sabotage. if you think about how we try to control civilian technologies to minimize those unintended consequences, it is striking that, for example, in the military context we can't anticipate in the same way. we can't do the equivalent of 100 million miles of test driving in combat, because we don't have those combat situations involving high-end adversaries. we can't see how their systems will interact with ours.
11:13 am
this compounds and intensifies the risk. it is also the case that we have secrecy that eliminates a lot of third-party review, and in fact we have complications ourselves internally, with silos of different technology operators, and some of these users will not gain access to special access program materials and the like. so the risks are substantial. i would underscore that the nature of those risks is particularly intense when you look back at the history of our managing other technologies. i give some examples; just to highlight two for you: the nuclear test in 1954, where there is a calculation of the fallout range done by very smart analytic people, and those people conclude
11:14 am
that the potential radiation dispersion reaches at most a certain range, and they go ahead, and it turns out to be much greater, with people exposed to radiation with lasting effects. another example that i offer in the paper is associated with the 1977 h1n1 influenza virus, where remarkably there is a consensus amongst biologists that the dissemination of that virus was probably caused by the escape of the virus from a government, probably military-related, laboratory, or possibly as a result of an open-air vaccine trial around that time. either way, you have a worldwide pandemic which, for reasons we can get into with the panel, and you all can see clearly, doesn't
11:15 am
arise from a natural cause, but rather from human intervention. the paper lays out recommendations about what to do with all of this, and we can get into that in discussion. i'd like in the discussion to really highlight two facets of this. one is diagnostic: is this the right picture of the world? should we be worried about this in the way i described? the second is prescriptive: what is it that we ought to do about this? our intent really is not to have presentations from these very skilled experts, but really to have a discussion amongst the four of us. i've encouraged this trio, and if you know this trio, you probably don't need much encouragement, to be critical as well as hopefully finding some things they like in this report. we will go back and forth between one another. we decided amongst ourselves general cartwright would start.
11:16 am
>> i'm the knuckle dragger. i think, you know, this discussion is important to have at the front end, early in the game, and it should probably be redone as new technologies emerge. i think for me, on the experience side of the equation, the one area that has to be explored a little more, and there was probably opportunity for more of it in the paper, is that risk is a continuum, and you operate on that continuum, and your audiences are in different places on it. different people are in different places on that continuum of risk and what they are willing to tolerate. for me, understanding where you are on that continuum addresses
11:17 am
what steps you take as you go forward, what steps you are willing to take, and what risk you're willing to take. for me, if the problem is somebody appearing on the flank of a unit, able to take advantage of a position we hold, and we are losing people, i'm willing to go and take the 20% solution right now, untested, if it will get me protection, even if it's not all of the protection i'd want. if instead the question is whether we should introduce a technology into the force broadly, that's a very different calculus, and more in line with many of the decision processes laid out in this paper. i think that's important to understand. the question is, in the first case, when you're willing to take it, at what point do you intervene, and how do you continue to intervene as things move along, and understand the
11:18 am
risks you signed up to in urgency: how do they look in the light of day when you've got time to look at how they were practiced, what effect they actually had, et cetera. oftentimes we don't get back down that road, but that continuum is important, and you've got to understand it and make sure we have processes for both ends. >> the paper argues that essentially these technologies have unpredictable aspects, that they are opaque and complex. do you think that's true? >> yes, very much so. you know, it is very difficult. as you start down the path, you start generally with a desired outcome that you would like to enhance or create. we can often start down those paths and all of a sudden the outcome looks good, but the
11:19 am
second and third order effects may look bad or may look even better. i'll go back to the stealth and precision strike discussion. we started down that path with the sense that if we could put people in harm's way less often, have better management of collateral damage, and understand our targets better on the precision and the intelligence side of the equation, then our deterrence would be better and the coercive effect of imposing will would be more effective. much of that is true, but the second and third order effects have been interesting in a societal construct, in an expectations construct. i'll never forget the first or the second time in iraq, watching baghdad. bombing did not interrupt it. it just kept going.
11:20 am
traffic lights worked, nothing was interrupted. >> everything. >> the second thing is something we never expected, which was a substantial reduction in the amount of men and equipment on the battlefield. we didn't have to move 155 rounds forward in truckloads to keep up with the artillery. it just fundamentally changed the game, changed the game on how much it took to move the iron mountain, so to speak. those kinds of third order effects are beneficial. there were some that were equally nonbeneficial. you have to reassess. [inaudible] >> on this point, there is a continuum of risk.
11:21 am
currently, the national security agencies are at one position on that operational continuum. a question that i have, though, very much like what jim just said, is where do we want to be? how will we know when we are done? how will we know when we are appropriately balanced? what is the point at which you would be comfortable writing the counter to this report, saying we are doing too much now? i don't feel like we yet have a principled way of answering that question other than hearing which voices are yelling the loudest.
11:22 am
loudest. >> i will disagree with both of you actually. and valerie will hopefully be on my side. i don't really think that we should dial up but that continuum. i'm closer to what is referenced in t paper and the implication the emergent effects and everything is we can't really calculate this very well. i don't tank we can or should dramatically dial down risk or dial it up. i think what we have to do as a better job of recognizing and preparing for us so we can take certain steps that repress the likelihood that we see or give us a better ability to respond even in concert with our opponents to inadvertent consequence.
11:23 am
i can say more about what i think. >> valerie will pick a side here right now. >> valerie, you don't have to speak if you are going to agree with them and not me. >> so, when i read the paper, one of the first thoughts that came to mind was actually a quote from the movie jurassic park. i don't remember it exactly, but: your technologists were so busy worrying about whether they could, they never stopped to think about whether they should. that is a little bit of it. and so, from that perspective, i wrestle with whether technologists are really the right people to, in and of themselves, be deciding how technology should be used. i think they can inform that discussion, but it needs the greater community that probably brings motivations beyond what
11:24 am
technologists themselves really bring to the table in terms of trying to address it. [inaudible] >> darpa citing jurassic park. >> i'm sorry, valerie, did you have more to say? >> i'll add to that after the break. >> in the last part of the paper, there is an appendix in which i cite jason and the excellent questions that he now provides to program managers and directors, saying: think about these considerations before you embark on your project. will it proliferate? how will we deal with others having it? will it have unintended consequences? do these questions make any difference day-to-day? >> definitely. i have seen instances, both with myself and with other program managers
11:25 am
in our extended family, where having these questions brought to us leads us to different action. it is often the case that technologists and scientists are thinking of what can be done and what can be built, seeing the immediate first-order ramifications, and not considering the second and third order effects. as soon as the question is asked about larger effects, then things start to look different: more complex, and maybe it's not such a good path to go down. one question is about how quickly an adversary would be able to replicate your capability once you build it. they could develop it completely on their own, but if you build it, how long until they get some of the benefit? your adversaries could be nation states or terrorist groups. the answer is almost always that
11:26 am
it will help them. so, by building the capability, you are contributing to your adversaries having that capability, which you don't want. so we have this condition where trying to be superior is all well and good if you can always stay superior, but the fact that you are running faster also means that your adversaries can draft off of you. this is not desirable. so if you can find opportunities to build the defenses before the weapons, so that it doesn't matter whether you or your adversary builds the weapon, then the new capability no longer matters. that leads to a more stable situation where you have more strategic stability, but also some breathing space in order to address these questions.
11:27 am
>> can you provide an example where an entity -- [inaudible] >> i don't need them to be. by the advantage, do you mean superior to us? the cold war wasn't good, right? the ussr having nuclear weapons left us in a very contentious position, where i think it was not stable, or perhaps a bit risky. so that is just one example. >> we were always in a position of superiority in that conflict. >> you think that darpa is trying to -- [inaudible] >> i do. i think there are examples in our
11:28 am
portfolio, particularly in the context of a.i., where we are anticipating these challenges. trusted autonomy, those sorts of programs, are looking to address the challenges of how we might address unintended consequences. but you know, our mission is to ensure we never have another sputnik. we are looking to see where the technology can go. we don't have a set catechism, but in the course of addressing these questions, the questions
11:29 am
of unintended consequences do come up, and we do think about them, maybe arguably not as much as we could, but generally we do look to see how our technologies might be defeated, how we might be surprised. we are always looking at that. >> this project seems to be an example -- can you spend a moment on that? >> the idea is that gene editing is becoming a fairly democratized tool, and the program is looking both out into the environment -- how do we know that there is a gene that's been edited -- and at how do we turn it off, looking to get ahead of that. >> i'm struck by the fact that the
11:30 am
creators created organisms such that if they escaped into the wild, the effects would at least be limited. an example of that in the context of some of the dna work: not only genetic changes, but we can also make an organism dependent on some nutrient supply that is not naturally available. those strike me as examples of those kinds of things. >> you're never going to completely eliminate the surprises. >> this is essential to my view. it comes back to what you said about sputnik, and what you correctly said about darpa: we should never be surprised again. i've argued with generations about this.
11:31 am
that is wrong. we are going to be surprised. we should stop creating the illusion that we will be otherwise. just as you said, we will be surprised by what happens. that goes to the fundamental comments you offered at the outset. my sense is the risks are what they are. we need to recognize they will always be there no matter what we do. the question now is, what do we do about it? i would like to come back to the prescriptive side of this, if any of you want to say more about it. >> i think there's one other piece, and that is bringing it into the current environment. much of the challenge is that it is easy to look back and have the vision of where it all ended up and therefore backtrack the way to why it ended up there. most of the time the challenge is that
11:32 am
we are in an area that we do not understand, and we can't apply standard metrics and observation because we don't understand it. this world of complexity, of interaction -- in the engineering we do today on the military side, it is done by wiring something in hard, so when you push a button you know exactly what's going to happen. the world we are entering into, where things are taped together to make them more effective, does not yield the same result every time you push the button. you can see that in self-driving cars. you can see that in genetics. the complexity is really being paced by computation. we are trying to use great minds to forecast the future and the
11:33 am
downside risk against an unknown. we are taking what we know and turning cyber into an existential risk -- maybe, maybe not. we don't know. there is the exploration piece of this, which takes time: introductions, strategies, things that we are very good at. but this complexity issue, where the outcome is not the same for the same action, is what we are starting to enter into. when you look at a learning algorithm: it takes a curve driving a car, and that is a data point, one of millions, and it just keeps learning. it will never take the curve in the same way, because the background information is different. and we just don't understand that. >> i would just add that i think the 20th century mentality
11:34 am
not only obscures the point you are making; it also assumes that if we can assess these things, somehow we can do it at the beginning. but the systems evolve. obviously the biological systems evolve; evolution is a biological phenomenon. the artificial intelligence systems evolve, because that is the nature of their self-learning capability. but even if you take traditional digital technology, it continues to change after the software is introduced. it's modified, and it interacts: the microsoft system has to interact with the adobe system, et cetera. the net effect is that the systems grow more complex and more interactive as they are used out there in the field, on navy ships. we are constantly changing the
11:35 am
system in a variety of ways. as one of the recent reports observed, they evolve beyond their original tuning. this points to the need for assessment not just at the beginning, but also continuously, and that's different. >> i absolutely agree that a lot of the risk isn't frontloaded. some hefty percentage is backloaded, or appears in operation. and i am also very sympathetic to the notion that the risks are really wide: they are difficult to quantify or identify. so i am very much in favor of being broad-minded in considering all the things that can go wrong. but when you start to bring in this concept of undefinable risk, it becomes infinite risk, and that's what it nets out to
11:36 am
mathematically. that leaves you in a tricky spot, because you have to act and choose between options. in choosing between options you are, hopefully explicitly but often implicitly, making judgments. so if we are choosing to invest $100, or $100 million, or $100 billion toward some category, that means we think it is worth doing so and not spending that amount of money on something else with a more well-defined benefit. so we are in fact stating something implicitly about the quantitative structure of these risks. saying they are undefinable doesn't get us all the way to some operational reality. this is not so terrible, because even with these wild risks, there are lots of things we know. so yes, the automated vehicle
11:37 am
will drive around and could potentially do all sorts of things, but we know it is not going to sprout wings and fly off. we actually have a lot of bounds on its behavior. i think there is an image you use somewhere in here -- or if you didn't write it, then i thought of it while i was reading. we have all this unforeseen terrain that we haven't explored yet. we have some detail about the terrain that we've been in, but we have all these areas of technology that haven't been explored yet. we don't know what is there. but we do know some things. there is a mountain over there, and if you ask what is on the other side of the mountain, you are not going to say nothing. there is going to be some combination of land, water and air. you actually know a lot. if one recognizes that, that allows you to put bounds on the
11:38 am
risk and begin to make these statements. >> i largely agree with what you just said. one thing i observe that i think intensifies your point is that, given the proliferation, there is a driver here: technology is autocatalytic. its parts all contribute to each other. you not only have to decide for yourself what you're going to do; you have to decide in light of what others are doing. you not only have to attend to the risks associated with us doing things, but the risks of what they are doing. one of the more progressive propositions prescriptively is that we ought to substantially step up talking with our opponents, as well as our allies, about these risks, and that part of the planning process is trying to develop
11:39 am
some robust mechanisms of contingency planning against events occurring. if you had a biological pathogen that was highly infectious but broke out in some other country, we wouldn't be able to just say, that is just their concern. it is obviously going to be ours. we don't have joint planning around these things. we don't have the equivalent of the nuclear circumstance, where we educate other countries about how to control their weapons. what do you think of this? pie in the sky, naïve and impossible, desirable, both? >> yeah. in the standard taxonomy of policy to norms to regimes to law, we tend to manage new technology as long as we can, with the unknowns
11:40 am
handled in policy, regimes, and norms. it is in regimes and norms that you reach out to your allies and your adversaries, in treaties and other venues, to have conversation. i use as an example the s.t.a.r.t. treaty: for all of its warts, the preamble talks about conventional weapons that look like strategic weapons. that is a venue by which both sides can say, i'm looking in this area, and give the other a heads-up, so that if you really don't like it you can say something about it. we can talk about the attributes of that system at some point. we may listen to you, we may not, but at least we'll have a conversation, and we may elect as regimes not to go down a path at all, even though that may cede some perceived advantage. i mean, it is robust, but it is
11:41 am
still voluntary. >> you actually talk about the example of the biological weapons convention. our decision was viewed as a ruse, and the soviets stepped up their efforts in the area. so i agree that we should be pursuing all potential paths towards awareness of unintended consequences and mitigation strategies. but i also think we need to understand, again, that just as technological superiority does not guarantee security, neither does the development of treaties and agreements. i think, you know, the opioid crisis that we have today is an example of the limits of regulations.
11:42 am
we had regulations and studies initially, and hundreds of people are dying per day. there's always going to be that unintended consequence, whether intentional or not, but it behooves us to keep thinking and not be too satisfied. >> yeah, so i am very much in the camp that there are opportunities, through talking with allies and adversaries, to create better interactions with each other, and that even if everyone is self-interested, this will yield a better situation. now, the difficulty is that people can have desires or incentives to defect, and with some of these technologies it can be very
11:43 am
difficult to tell when they have. you mentioned that with regard to the bio weapons of the ussr, and that involves detection and whatnot. there is also dual use, where a simpler version of the technology is what the private sector does anyhow. some technologies are more amenable to this than others. but for the ones that are not so amenable, do we say, well, that is it, we are done, we are just going to have to live with it? the alternative is that we don't. i say no, we still have agency here, and we try to address these things with technology. as technologists, we look for those opportunities to differentially find or develop technology that will change this game. we have programs on whether or
11:44 am
not an organism has been engineered, and that is one small piece of the pipeline for making bio weapons detectable, so that those treaties can be more enforceable and thus lead to better outcomes. so this is, i think, just one example of the places where technologists can try to imagine what is over the horizon and choose to go that way, because it will lead to better dynamics. >> two concluding points, and then we'll open it up to questions and comments from the floor. one is, general cartwright referenced how technology innovation moves through norms. the problem, and i think i'm the only lawyer here, the problem is there is such a delay as our institutions, both domestically and especially
11:45 am
internationally, follow the technology innovation. they lag so much. as we get technology acceleration, which we are all witnessing, i think the import of that delay, as the technologies become more potent, becomes more and more troublesome. maybe i can afford 25 years before society catches up, or before the regime gets better, if i am dealing with a relatively controlled weapon where there are relatively few competitors. when there are very quickly a large number of players, it worries me more. my problem as a lawyer is how laggard my profession is in this regard. the second thing i just want to note is that a lot of what we do in pursuing technology superiority is justified by deterrence, and several comments on the panel suggest this. this is risky, but it's a lot
11:46 am
less risky than letting our adversaries get ahead: if we maintain superiority, we can deter undesirable acts. i think the theory is time-tested and has power. but nobody thinks that deterrence substantially deters accidents. so the most potent rationale for advancement in the technology superiority race doesn't help us in the context of what i'm talking about here, and that is one of the reasons it is particularly important. i do not want a world in which we cease to take risk, or in which we don't pursue technology superiority. but i think our establishment needs to invest, for all kinds of reasons, in worrying about the kinds of things this discussion has been about. you'll get a chance, i think, to say more. we'll take questions, and let me make a
11:47 am
request: you're not to do the normal washington thing of disguising your comment as a question. and if you could identify yourself, since we have an audience. it's not me making these derogatory noises. [inaudible] >> [inaudible] i was looking for recommendations at the end of the paper on what we should do, but you didn't quite go that far. i wonder if you would help policymakers develop initiatives now on how to keep humans from screwing things up. >> well, i would say that in the present
11:48 am
circumstance there is a kind of mantra from official stages, and that is that the human will control the ultimate decision-making. pentagon emphasis is constantly that humans in the end are making these decisions. i think that is overstated in its significance. there are instances where it matters. i like the fact that we are adding, in some situations -- [inaudible] a second variable, in this case a human being, that is different; it is analog, in effect. i like that. it is a check on the system. but the reality is that most human decision-makers are highly reliant on machine information: on sensor data, on all the current machine analysis that leads to
11:49 am
conclusions about the flight trajectory of the ballistic missiles, where they are coming from, where they are going to, and the like. the human, given very little time, is in fact typically inclined to go along with the machine, or in some instances to introduce error of a different kind than the error the human is there to correct. my view about this is that we should stop putting so much emphasis on the human at the end, and place much more emphasis on the kind of upfront analysis, questioning, and creation of consequence management activities and the like, because it is at the front end that most of our opportunity exists. for some of the reasons we've talked about, the front end is undervalued. >> again, looking forward is a lot harder than looking back.
11:50 am
i am not sure we know, in the a.i. construct, where the human actually belongs. remember, the paper was very good in stating that up until now we offered pattern recognition. we offered for a long time the ability to see something and identify it. much of what the five senses did has been overtaken by other sensors that are more effective. i can see 30 miles out at facial quality, instead of with my own eyes from an airplane at 3,000 or 4,000 feet. so things have changed. i don't know that we know exactly where the human belongs, and i think to think otherwise is to go against human nature in and of itself. if i asked anybody in this room to create an automated process for putting your coat on in the
11:51 am
morning, we would do it just like we do it today. but if we tried to write a program for that, over time the machines would do it differently, because they would find different ways of doing it. to a pilot, that is the most dangerous thing in the world: we should never even think about going there, because if it's not metal connected to metal, i've lost control. so what did we do? we told people that they were making the inputs. it was seven years before we told pilots about the stick. it was just too disruptive. that's not necessarily the path to go down, but we didn't know where the person fit into the piece, how we should describe it. that is the key. and so, we are on a path right now where we are doing the normal things that we did with the
11:52 am
car, et cetera. we are competing. if we put big blue out and win at chess, and win at jeopardy, we are competing with the machine right now. we have not entered into the next stage, and we don't know what that looks like, because the competition only helps us understand so much. but i do agree with the paper that the potential here is that we offer diversity in how we make decisions, just by our own uniqueness, and how does that get implemented? the last point i would make on this is: don't forget about the disruption that is caused, or the people left behind. our education system drops at least one full generation, if not two, in any one of these transitions, and right now we can't stand that. that is usually debilitating, and
11:53 am
we've got to think more about that. in your example, when we have these kinds of capabilities, how do people understand how they interact? where does the learning come from? our process right now is losing generations. >> i would like to add onto that. you mentioned that we need to understand what that means. i will say something contentious right now. anybody who's worked with me has heard me say that machine learning is a misnomer. if they were really learning machines, they would be able to tell you the answer. i believe we have machines that are
11:54 am
trained. so the information they give back to us is only as good as what we either programmed them to do or trained them to do, and a function of the input that we are asking them to evaluate. so the notion that these machines are at fault, or somehow misleading us, is in itself a misleading statement; they are as good as we have made them at this point. that is not to say that in the future we might not impart the equivalent of learning into machines, into a.i. but in order to do that, they need to be able to do some sort of equivalent of reasoning about whether the answer they are giving actually makes sense. does it hold up? we know the example of an image
11:55 am
of a building where there is some noise added to it, and the machine now says it is something else, while we can look at it and know from experience that it is still a building. we are not past that point yet. humans in the loop can be quite biased, but they are in the loop through the whole process right now. they are the ones training these machines. >> another question. >> do we see the private sector establishing these norms? >> if one of you would like to comment.
11:56 am
>> so, certainly the industry is advancing technology. but there are differences in the acceptable performance, because the consequences of the machine making a mistake in deciding what advertisements you are going to see when you go to google are different from the consequences of a mistake when you are trying -- [inaudible] and that sort of thing. so there's certainly a role for industry in advancing the technology, but it can't carry the full burden for the dod and national security. >> i think we all see the shift in where the r&d is being done. in the 1950s the government did the bulk of it; now, by some measures, twice as much r&d is being done in the private sector. the companies have obviously
11:57 am
market incentives, many of them global market incentives, with the proliferation that implies, whether they want to participate in these matters or not. the companies clearly have a significant role to play. it's an important question for the u.s. to figure out. it is rather clear that companies in more authoritarian states are heavily influenced, sometimes used, for the maneuvering of those states. in the united states there is quite a different view: some companies are very sympathetic to and cooperative with the national security mission, and others aggressively reject it. we need to sort that out more; we have not figured that out. at the moment it is a complication for us that doesn't exist for them. but that's a whole other big topic.
11:58 am
>> i would just add, at least from my perspective, that if you go into the private sector, particularly small business and things like that, the risk tolerance is a lot higher. in other words, they are willing to take a lot more risk, which is unique to this country. so it is not kind of a one-size-fits-all where you look towards darpa and whatnot. and they are very good at incremental improvement; the basic process of incremental improvement is much better in the private sector than the military sector. it is their way to survive.
11:59 am
>> i think we should call on someone from the back row. go ahead. >> my name is tony spadaro. my question is about the new technology that we invest in, and the secondary effect of others using it to catch up. hypothetically, if a foreign entity gets to a new integrated technology that we do not get to, do we have the decision structures and the agility to play catch-up and perhaps become superior? >> i would argue that we do. we have demonstrated that historically. whether we would be successful at it is another question. the opportunity, particularly on the military side of the equation, is to have a joint urgent
12:00 pm
operational need. i was flanked; i don't know how. i was fine, then i've been outmaneuvered; they jump higher, whatever it is. what am i going to do about it? i've got a place to go right away, where i point immediately: why did that happen? if it is a big breakout, then there are a lot of soft power things you can do to manage it. but we should not think of that as an aberration. it should be in the norm of thought. >> tied to the previous question about the private sector: we are entering -- we are in a very global economy, and we've heard that the bulk of our innovation, in regards to application, is happening in the private sector. and so, we have -- i would
12:01 pm
struggle to see it happening very often that there would be a breakout or a big advance of some kind happening in another country that wouldn't quickly spill out to other countries, including ourselves. that is in fact part of my concern about us developing certain things. so if i'm concerned about one thing, it seems like i don't have to be as concerned about the other. >> we are coming up against our deadline. i'll squeeze in one more. >> an observation: in the prescriptive part of the report there are some processes and policies, and the recognition that there are things we can't control that we have to deal with. it seems to me we might adapt to or embrace the complex emergence with agility, and the agility of systems
12:02 pm
depends upon the agility of people. it seems that argues for investment on the human capital side, that deals with the people -- [inaudible] >> i have a comment on that. >> i think it is very clearly an empty space; addressing the manpower in an appropriate way has to be part of this strategy. >> i don't know that i have been in any public meeting of this kind that did not state that the u.s. government should be
12:03 pm
faster. now, the issue is how do you actually do that? the comment about people power is absolutely true. another part is bureaucratic flexibility and what not. there are a multitude of reasons that the private sector is more flexible than government; part of it is they are able to build up much quicker. >> i would just add a few things, for example, on the personnel side, that are really important. there is an amount of talent associated with our uniformed services, in terms of people of exceptional abilities, and that they do not see potential careers that are nearly as gratifying is a serious concern.
12:04 pm
the system does not account for that. in the navy, with technical skills you can rise up to the middle ranks pursuing a technical career, but the narrowness of the technical path, if you are a young officer with particular skills, is real. we have developed some communities now that give you some shot at promotion, but still the likelihood is that the central roles for budget and operations will ultimately be assumed by the unrestricted line. and the examples continue. so i think we need to stop being simply rhetorical and be much more operational.
12:05 pm
and on the acquisition side, for example, an implication of this technology and technological change is that we shouldn't give full value to the out-years of a 30-year lifetime; you discount the value now because of the change, and you recognize that a lifetime will be shorter or will change, or you build agility in. then you begin to reform that system, even bureaucratically. i would get in a lot of trouble because we are overrunning our time. i want to thank the panelists; it's generous of you to come and do this. the center for a new american security and especially the open philanthropy
12:06 pm
project made this possible, and it wouldn't have happened without them. they should be recognized. thank you all very much. [applause] [inaudible conversations]
12:14 pm
>> we serve so many rural areas. 35% [inaudible] access. so in many instances, they work very closely with programs where they can partner with companies like tcs to bring broadband to customers, so that those that have inadequate
12:15 pm
broadband can get a richer, more robust broadband in the future. i do think it's important, as the administration and congress consider infrastructure, like the proceedings and other concepts, that broadband is determined to be a matter of infrastructure to our country in national policy. that is a change, because typically we think of infrastructure as roads, bridges, railways, et cetera, which are important and need to be helped. but you cannot survive today as a business or as an individual in the economy without having a robust broadband experience. >> we take you live this afternoon to a discussion of the online capabilities of isis. the former n.y.p.d. director and the cofounder of the now defunct
12:16 pm
revolution muslim website discuss their experience working against each other. new america is hosting the event this afternoon. it should start in just a moment. ♪
>> again, moments away from the start of a forum on the online capabilities of isis, hosted by new america; it should start in just a moment here. we will let you know about some other programming coming up. tonight on "the communicators," the american cable association president and ceo, along with vice president anders petersen, talking about the issues facing rural broadband providers. "the communicators" tonight at 8:00 eastern on c-span2. stephen halbrook will talk about the second amendment at the long island federalist society. mr. halbrook has argued three cases before the supreme court. that's at 8:30 a.m. eastern on c-span2. listen to it with the c-span
12:21 pm
app. ♪
