Military Technology Development CSPAN June 4, 2018 7:10pm-8:01pm EDT
>> tuesday testimony from education secretary betsy devos on president trump's 2019 budget request for her department. she will speak before a senate appropriations subcommittee. that gets underway live at 10:15 eastern on c-span 3. later, the usa gymnastics president and former michigan state university president will discuss sexual abuse of athletes during their tenures. they will testify before a senate panel, live at 3:00 p.m. eastern, also on c-span 3. earlier today the center for a new american security held a discussion on potential risks and benefits of emerging military technologies. we heard about a new report which compares the technology development efforts of the u.s. and foreign militaries. this is about an hour. >> all right. welcome, everyone. thanks for coming here today. i'm director of the technology and national security program here at the center for a new american security. i'm very pleased today to announce the launch of a new report by the honorable -- talking about how we think about technology risks. the secretary has been a leader on emerging technologies for a long time and i'm very excited about this new report, a valuable contribution to how we think of the risks of emerging technologies in the national security space. we live in an interesting time where a number of countries have stated their clear desire to move aggressively on artificial
intelligence among other technologies. last year china released a new national strategy for artificial intelligence, and the united states has made it a centerpiece of its attempts to regain and reinvigorate u.s. military technological superiority. we have a great lineup of speakers here today. i'm just going to briefly introduce them. you've got their full bios in your handout there. so we're going to have a great discussion and i'm very excited that this report will contribute to what i hope will be a robust discussion within the national security space for how we think about risk as we pursue these new technologies. with that i will turn it over to the secretary. >> thank you, paul. thank you particularly, not just for this event but for your role in the
production of this report, and to the center for a new american security generally; i am quite grateful for the hard work and contributions you all made. this report, i will just briefly mention, stems from an observation about the centrality in our present thinking of the importance of maintaining technological superiority. it is probably the case that no single theme has been as dominant in recent times in national security discussions. i think this is a consequence of a sort of normalization of the world. the present national security establishment really grew out of the world war ii experience and the cold war experience, and this was an experience in which the united states strongly dominated in technology innovation. we were advantaged at the end of world war ii: we had this vast collection of scientists, many of them refugees from europe, we had 50% of global gdp and an intact structure of industry and the like, and this led naturally
to our dominance, but we nurtured it and sustained it and offset soviet union advantages in terms of manpower with our technology skills. now we see, as i say, a more normal world, one in which technology is diffused much more substantially, in which china's economic capabilities begin to come towards rough equality with us, and notably, if you project as some economists have to 2050, you might see a world in which chinese economic capability measured by gdp in dollars would be 50% greater than ours. so there are questions about our technology superiority. my report is not actually, though, about how we maintain that. a lot of people have written and spoken about that. the focus is on a different thing, which is that superiority is not synonymous with security.
it generates risks, and the proliferation of technologies particularly compounds those risks. so what we see as we move forward is increasingly complex and opaque technologies, which we cannot really fully comprehend, being employed in military contexts where they are connected to highly destructive weapons, nuclear weapons being the most dramatic example. and these technologies, whether in artificial intelligence or information systems or biology, all create potential for unexpected effects, for accidents, for emergent effects, that is, things no one could predict, for interactive effects, and indeed they are liable to sabotage. if you think how we try and control civilian technologies to minimize those unintended consequences, it's striking that, for example, in our exploration of self-driving cars
we naturally insist on what will be 100 million miles or many tens of millions of miles of driving to encounter all the unexpected kinds of contingencies, and we debate it each time one company or another has an accident. but in the military context, we can't anticipate in the same way. we can't do the equivalent of 100 million miles of driving in combat because we don't have those combat situations involving high-end sophisticated adversaries, and we can't see how their systems will interact with ours, and this compounds and intensifies the risks. it's also the case that we, because of secrecy and the like, eliminate a lot of third-party review, and in fact have complications ourselves internally with silos and different technology operators, and some of them do not
gain access to special access program materials and the like until they are actually called upon to use them. so the risks are substantial. i would underscore that the nature of those risks is particularly intense when you look back at the history of our managing other technologies, and i give some examples in the paper. just two that i would highlight to you are the nuclear test in 1954, where there was a calculation of the fallout range done by very smart analytic people, and those people concluded that the envelope of potential radiation dispersion was x. they go ahead with the test and it turns out to be 3x, and some 600 people are exposed to radiation with lasting life-long effects. another example that i offer in the paper is associated with the 1977 h1n1 influenza virus, where remarkably there's a quiet
consensus amongst biologists -- you can find traces of this in the literature -- that the dissemination of that virus was probably caused by the escape of the virus from a government freezer, probably a military-related thing, or possibly as a result of an open-air vaccine test that the chinese conducted around that time. either way, you have a worldwide pandemic which, for reasons we can get into if it interests the panel and you all, pretty clearly doesn't arise from natural causes, but rather from these interactions of technology. the paper lays out some recommendations about what to do about all this, and we can get into that in the discussion. so i'd like in the discussion to really highlight two aspects of this. one is diagnostic. is this a right picture of the world?
should we be worried about this in the way that i described? and then the second is prescriptive. what is it that we ought to do about this? so our intent really is not to have presentations from these three very skilled experts, but really to have a discussion amongst the four of us, and i have encouraged this trio -- and if you know this trio, you know they probably don't need much encouragement -- to be critical as well as hopefully finding some things they like in this report, and we'll go back and forth between one another. we decided amongst ourselves that general cartwright would start because he's the most courageous of the four of us. [laughter] >> i think, you know, one, this discussion is important to get at the front end, as early into the game as possible, and it should probably be redone on multiple occasions as new
technologies are introduced. i think for me, on the experience side of the equation, the one area that i think has to be explored a little more than probably was given opportunity in the paper is that risk is a continuum, and you operate on that continuum, and your audiences are in different places on it. demographically people are in different places on that continuum of risk, what they are willing to tolerate and what they aren't. so for me, understanding where you are on that continuum adjusts what steps you take as you move forward, what steps you're willing to take and how much risk you're willing to take. for me, with a problem of somebody appearing on the flank of a unit and being able to take advantage of a position that we had, and we lost people, i'm willing to take the 20%
right now. the 20% solution, untested, anything that will get me lives protected, even if it's not all of them -- i'm willing to take that risk. if it's the question of should we introduce precision and stealth into the force? that's a very different calculus, and more in line with much of the decisions and processes laid out in this paper. i think that's important to understand. the question is, in the first case, when you're willing to take the risk, at what point do you intervene, and how do you continue to intervene as we move along to understand the risks you have signed up to in your urgency and how they look in the light of day, what effect they actually had, etc.? oftentimes we will get down that road, but that continuum is important to me, i think. and you've got to understand it and we've got to make sure we have a process for both ends of
the continuum. >> recognizing that point, the paper argues that essentially these technologies are -- have unpredictable aspects, that they are opaque and complex. do you think that's right? >> yes, very much so. it's very difficult as you start down a path, you start down there generally on the military side of the equation anyway for a reason, for a desired outcome that you would like to enhance or create. we can often start down those paths and all of a sudden the outcome is -- maybe looks good but the second and third order effects may look bad or may look even better. i will go back to the stealth/precision. you know, we started down that path with a sense of if we could put people in harm's way less often, have better management of collateral damage, and understand our targets better
and precision in the intelligence side of the equation, then our deterrence would be better and our coercive effect of war and imposing will would be more effective. much of that is true, but the second and third order effects of it have been very interesting in a societal construct, number one, and the expectation construct. i will never forget the first night of the second time we went into iraq, watching baghdad -- the bombing did not interrupt the rush hour. people just kept going. and traffic lights worked. everything. >> actually if it had been washington, everything interrupts -- >> everything. [laughter] >> but i mean it has changed the thought process, and the second order effect was something we never expected really, which was a substantial reduction in the amount of men and equipment on the battlefield because we
didn't have to move forward in truckloads to keep up with the artillery and stuff. it just fundamentally changed the game. it changed the game on how much it took in the iron mountain, so to speak, to get forward. those kinds of second and third order effects are beneficial here. there are some that are equally nonbeneficial, and you have to reassess. >> what did you disagree with in this report? >> on this point of there's a continuum of risk: so currently the national security agencies are at one position on this risk continuum. there's diversity within it, but collectively they are around some value. part of the reason that you're writing this is that you think that they ought to be someplace else; right? that the risks from accidents are being too highly discounted or just not considered at all, so they are discounted infinitely;
right, in favor of immediate operational use or whatnot. so a question that i have, though, is very much like jim just said, about where do we want to be? right? how will we know when we're done? how will we know when we're appropriately balanced; right? what will be the point at which you would be comfortable writing the counter to this, saying we're doing too much now; right? and i don't feel like we yet have a principled way of answering that question other than hearing which voices are yelling the loudest; right? >> let me disagree with both of you actually, and valerie, we will invite you in hopefully on my side. [laughter] >> i don't really think we should or do dial risk up or down along the continuum. i'm closer to the view of normal
accidents that's referenced in the paper, and i think the implication of the emergent effects and other things is that we can't really calculate this terribly well. i don't think we can or should dramatically dial down risk or dial it up. i think what we ought to do is do a better job of recognizing and preparing for it so that we can take certain steps that will depress the likelihoods that we see or give us a better ability to respond, even in concert with our opponents, to inadvertent consequences. i can say more of what i think. do you want to weigh in on that, jeff? or valerie? >> i think valerie has a side on that. i'm interested. >> valerie, you don't have to speak if you agree with me. [laughter] >> when i read this, one of the first thoughts that came to mind was actually i remembered the
quote in the movie jurassic park -- i can't remember it exactly, but your technologists were so busy worrying about what they could do without thinking about whether they should, and that's kind of what this report is about. and so from that perspective, i wrestle with whether technologists are really the right people to, in and of themselves, be deciding how technology should be used. i think they can inform that discussion, but it needs a greater community that probably brings motivations beyond what technologists themselves really bring to the table in terms of trying to address -- >> -- increases my faith in national security, citing jurassic park. [laughter] >> jeff, let me just ask you -- i'm sorry, valerie, did you have more?
>> i did, but it will come out in the course of the conversation. >> jeff, in the last part of the paper, there's an appendix in which i cite what i think are excellent questions that are provided now to program managers and directors like you, saying think about these considerations before you embark on your project: will it proliferate? how would we deal with others if it does? will it have unintended consequences? how can we cushion that? do these questions make any difference? >> definitely, and i'm not just saying that because he's my boss. but i've seen instances, both with myself and other program managers and others in our extended family, where having these questions brought to us leads us to different action. and it is often the case -- we're technologists, we're scientists -- that our thinking is of what can be done and what can be built, seeing the immediate first order ramification of that for operational use and then not considering the second and third
order effects, and as soon as the question is asked, what are those larger effects? then things start to look different. one, more complex, but also that maybe it's not such a good idea to go down that route. one of the questions is about how quickly would an adversary be able to replicate your capability once you build it; right? they could build it completely on their own, but if you build it, how long until they get some of the benefit of your effort? right? and you could have adversaries that are nation states or terrorist groups. and the answer is almost always that it will help. so by building a capability, you are contributing to your adversaries having that capability, which you don't want; right? so we're having this race condition where we're trying to always be technologically superior, which is all well and good if you can always stay superior,
but the fact that you are running faster also means that your adversaries can draft off you, and this is not desirable. if you can find opportunities to build, say, defenses before the weapons, so that it doesn't matter if you or your adversary builds it, the new capability no longer matters; right? that leads to a more stable situation where you have more strategic stability, but also sort of breathing space in order to be able to now start to address the technological aspects. >> can you provide an example where somebody, you know, an entity second to the game, has achieved an advantage? >> i don't need them to be -- by advantage, do you mean superior to us? >> well, an advantage. >> i mean the cold war wasn't good; right? and the ussr having nuclear
weapons was assisted by us having nuclear weapons; right? and i am not in a very controversial position in thinking that having these two pointing nukes at each other was not stable, right, or desirable -- perhaps a bit risky; right? that's just one obtuse example. >> we were always in a position of technology superiority during that time. >> well, do you think they are trying to think of the kind of defenses, valerie, that -- >> i do. i think there are examples in our portfolio, particularly in the context of ai, where we are anticipating these sorts of challenges, and those sorts of programs are looking to address the challenges of how we -- of
how we might address unintended consequences. but you know, our mission is to ensure we never have another sputnik moment. so we are looking to see where the technology can go. but in the course of addressing these questions -- what is the impact? the questions of unintended consequences -- that dialogue does come up and we do think about them. maybe, you know, arguably not as much as we could in some cases, but generally we do look to see, you know, how might they be defeated, how might they -- how might we be surprised? so we're always looking at where
might the surprise come from. >> the project seems to be a good example of this. maybe you want to spend a moment on that? >> it's not in my office. the idea is that gene editing has gotten out there. it's becoming a tool. it is looking at both unintentional -- you know, something gets out into the environment, how do we know that there's a gene that has been edited and how do we turn it off? whether it's been intentional or unintentional. so we're looking at tools to help detect when that's happened and then to mitigate the -- >> i'm also struck by the fact that the creators created a termination date in it so that if it escaped into the wild, its effects would at least be limited. and there's an example of that in the context of some of the dna work, we can program not only genetic changes using
techniques, but we can also make our new creation dependent on some supply of ingredient that's not naturally available, so we can control it, or we can give it a termination date. so those strike me as examples of this kind of thing. >> but you're never going to completely eliminate the surprises. >> correct. >> you know, if there's a date, if your computer didn't have the right date time on it, then you are vulnerable. >> this is essential to my view and comes back a little bit to what you said about sputnik and what you said, valerie, about darpa -- we should never be surprised again. my view -- and i have argued with generations of darpa directors about this -- is that's wrong. we are going to be surprised. it is the nature of technology now, and we should stop creating the illusion that it will be otherwise. and just as you said about the genes, we will be surprised by what happens, and this comes back to the fundamental comments that you offered, jim, and you offered, jeff, at the outset.
my sense is the risk is there. we need to recognize it will always be there no matter what we do. and the question is now, what do we do about it? i would like to come back to the prescriptive side of this, if any of you want to say more about that before we turn to that. >> one other piece to the risk, and that is really bringing it into the current day environment. much of the challenge that we have when we deal with this -- it's easy to look back and have the vision of where it all ended up and therefore backtrack your way into why it ended up there. that's one approach here. but most of the time the challenge that we have is we're in an area where we do not understand the risk. we can't apply standard metrics and observation to it because we don't understand it. and in this world of complexity, of the interaction of parts, in the engineering we have done to date
at least on the military side, high reliability is done by wiring something in hard, so when you push a button, you will know exactly what is going to happen on the other end. the world we are entering into, where things are connected together ideally to make them more effective, does not yield the same result every time you push the button. you can see that in self-driving cars. you can see that in genetics. you can see that the complexity is really being paced by computational power, and we're trying to straight-line a forecast of the future and the downside risks against an unknown line here. we're just taking what we know and turning cyber into, you know, an existential risk -- maybe, maybe not, we don't know. and so there's the exploration piece of this, which takes time and deductions, strategies, things that these guys are very good at. but this complexity issue -- that the outcome is not the same for the same action -- is what we're starting to enter into. so when you look at a learning algorithm, when you look at deep learning, if it takes a curve driving a car, that's a data point, one of millions. and it just keeps learning, and it will never take that curve the same way because the background information is different every time. we just don't understand that stuff yet.
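to make that non-repeatability concrete, here is a minimal sketch -- not from the report, with an invented one-weight model and invented noise -- of an online learner that updates after every pass, so the same nominal curve never produces quite the same steering output twice:

```python
import random

class OnlineSteeringModel:
    """Toy online learner: one weight mapping curve sharpness to a steering angle.
    Because the weight is updated after every pass from noisy feedback, the same
    nominal curve is never handled identically twice."""

    def __init__(self, weight=1.0, learning_rate=0.05):
        self.weight = weight
        self.learning_rate = learning_rate

    def steer(self, curve_sharpness):
        return self.weight * curve_sharpness

    def learn(self, curve_sharpness, error):
        # gradient-style update from whatever feedback this particular pass produced
        self.weight -= self.learning_rate * error * curve_sharpness

model = OnlineSteeringModel()
same_curve = 0.8  # identical nominal input, lap after lap

for lap in range(5):
    angle = model.steer(same_curve)
    # the "background information" differs every lap: sensor noise, traffic, surface
    noisy_target = 0.75 + random.gauss(0.0, 0.1)
    model.learn(same_curve, error=angle - noisy_target)
    print(f"lap {lap}: steering angle {angle:.3f}")
```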
>> i would just add that i think the 20th century mentality not only obscures the point you are making, it is also that if we can assess these things, somehow we do it at the beginning and we have it. but these systems evolve. obviously the biology systems evolve, because evolution is a biological phenomenon. the artificial intelligence systems evolve because that's the nature of them, of their self-learning kind of capabilities. but it is not just them. even if you take traditional digital technology, traditional as we have now come to understand it after 40 years of experience, 90% of what happens on a software system happens after the software's introduced. it's patched. it's modified. it interacts with other things. the microsoft system has to interact with the adobe system, etc. and so the net effect is that as the systems grow more complex, and more interactive, they are used out there in the field on navy ships. we're constantly changing the system in a variety of ways. so one of the things that recent navy accident reports observe is new people come on and the system evolves beyond their training. this requires not just reflection at the beginning but continuously, and that's a different kind of challenge. >> i absolutely agree that a lot
of the risk isn't front loaded; some hefty percentage of it is back loaded or appears in operation, for all the reasons you just said. and i'm also very sympathetic to the notion that the risks are really wild. they are very difficult to quantify or identify, whatnot. and i'm very much in favor of being broad-minded when considering all the things that can go wrong. but when it starts to bring in this concept of it's just undefinable risk, so it's infinite risk and that's what it shakes out to mathematically, that leaves you in a tricky spot, because you still have to act and you still have to choose between options. and in choosing between options, you are hopefully explicitly, but often implicitly, making judgments about relative risk; right? so if we're choosing to invest $100, $100 million, $100 billion towards addressing some category of risk, that means that we think that it is worth doing so and not spending that amount of money on something else with a more well-defined benefit; right? so we are in fact saying something implicitly about the quantitative structure of these risks. so saying that they are undefinable doesn't get us all the way to some operational reality.
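the implicit arithmetic being described here can be written down directly; a minimal sketch, with every number invented for illustration rather than taken from the panel or the report:

```python
# If we spend a budget mitigating a hard-to-quantify risk instead of an
# alternative with a well-defined expected benefit, we are implicitly claiming
# the mitigation is worth at least that much. All figures below are invented.

forgone_benefit = 250e6           # well-defined benefit of the alternative, in dollars
assumed_harm_if_event = 50e9      # assumed cost of the feared event
assumed_mitigation_share = 0.5    # fraction of that harm the spending would avert

# Consistency requires: probability * harm * share >= forgone benefit, so the
# choice implies a floor on the event probability we must believe in.
implied_minimum_probability = forgone_benefit / (
    assumed_harm_if_event * assumed_mitigation_share
)
print(f"implied minimum probability of the event: {implied_minimum_probability:.1%}")
# -> 1.0%: the allocation quietly asserts the risk is at least about one percent,
#    even if no one ever wrote that number down.
```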
and this is not so terrible, because in fact even with these wild risks, there are lots of things we know. so yes, the automated vehicle will drive around and could potentially do all sorts of things, but we know it is not going to sprout wings and fly off and you know -- >> i'm not so sure about that. [laughter] >> we actually have a lot of bounds on its behavior. so i think it's the case that you use the analogy of maps somewhere in here, or if you didn't write it, then i thought of it while reading. so we have all this unforeseen terrain that we haven't explored yet; right? we have some detail about the terrain that we've been in, but not perfect detail, and we have these vast areas of technology that we haven't explored yet and we don't know what's there. but we do know some things. if there's a mountain over there and i ask you what's on the other side of the mountain, you are not going to say a sun; right? there's not a star there. there's going to be some combination of dirt, water and air; right? so you actually know a lot. so if one recognizes that, that allows you to put bounds on the risks and now begin to make these statements. >> so i largely agree with what you just said. one thing, though, that i observed that i think intensifies your point is, given the proliferation of the science and the like -- you use the case that this has a drive of its own -- i think of modern
technology as autocatalytic; it combines all these parts together. you not only decide for yourself what you are going to do, but you have to decide in light of the presumption that others will do a lot of things. you not only attend to the risks of us doing things but also the risks of their doing things. one of the more aggressive propositions prescriptively in the paper is that we ought to substantially step up our talking with our opponents as well as our allies about these risks, trying to sensitize them to them. part of the planning process is trying to develop some robust mechanisms of contingency planning against events occurring. the example i use is, if you had a biological pathogen that was highly infectious that broke out in some other country, we wouldn't be able to say, well, that's just their concern. this is obviously going to be ours, but we don't have joint
planning around these kinds of things. we don't have the equivalent of the nuclear circumstance, where we lend controls, we educate other countries about how to control their weaponry. what do you think of this? is this pie in the sky, naive and impossible, desirable, both? >> yeah. [laughter] >> you know, in a standard progression of policy to norms to regimes to law, new technology we tend to manage, as long as we can with the unknowns, down at policy, regimes and norms, and it is in regimes and norms that you reach out to your allies and your adversaries, your treaties and other venues, to have a conversation. i will use an example, the treaty -- the preamble talks about conventional that looks like
strategic, defenses versus pure offense. i mean, it is a venue by which both sides can say to you, i'm looking in this area. you know, and i'm giving you a heads up so that if you really don't like it, you can say something about it. we can talk about the attributes of that system at some point. we may listen to you. we may not. but at least we're going to have a conversation. and we may elect, in norms and regimes, to not go down that path at all, even though there may be some perceived advantage to you. so i mean it is robust, but it is still voluntary and it is difficult -- >> go ahead, valerie. >> you actually talk about the example of the bio -- biological weapons; right? >> right. >> so our decision not to pursue them was viewed as a ruse by the former soviet union, so they upped their efforts in that area.
so i would just -- i agree that we should be pursuing all potential paths towards anticipating consequences and mitigation strategies for them, but i also think we need to understand again that just as technological superiority does not equal national security, neither does the, you know, development of treaties and agreements equate to that. >> sure. >> as well. i think, you know, we have -- the opioid crisis that we have today is an example: we have regulation, we had studies initially, and, you know, hundreds of people are dying a day in the u.s. from overdoses. so there's always going to be that unintended consequence or misuse, whether intentional or not, that behooves us to be
thinking about and not being too satisfied with any sort of arrangements we might be able to have in these discussions. >> jeff? do you want to comment on this? >> yeah, so i'm very much in the camp that there are opportunities through talking with allies and adversaries to create good game theory; right? to create better interactions with each other so that even if everyone is self-interested, we will yield a better situation. now, the difficulty is that people may have desires or incentives to defect, and with some of these technologies it can be very difficult to tell when they have. you mentioned this with regard to bio weapons and the ussr, and that involves detection and supply chains and whatnot, or also dual use -- the proliferation of technology in the commercial and private sector doesn't help -- and so on. so some technologies are more amenable to this than others. for the ones that are not so amenable, the strategy shouldn't be, well, that's it, we're done, that is just a negative development for humanity and we're going to have to hope we can live with it, because the alternative is that we -- right? i say no, we still have agency here and we can try to address these things with technology. so as a technologist, i look for those opportunities to differentially fund or develop technologies that will change this game; right? so we have programs relating to detecting whether or not an organism has been biologically engineered, so that those treaties in regards to bio weapons can be more enforceable and thus lead to better outcomes not just for the u.s. but for the world.
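the link between detection and the incentive to defect can be put in rough expected-payoff terms; a minimal sketch, with all payoffs and probabilities invented for illustration rather than drawn from any real treaty analysis:

```python
# Toy expected-payoff sketch: does a treaty signatory gain by secretly defecting?
# All numbers are invented placeholders purely to illustrate the logic.

def expected_value_of_defecting(p_detect, gain_if_undetected, penalty_if_caught):
    """Expected payoff of cheating, relative to complying (complying = 0)."""
    return (1 - p_detect) * gain_if_undetected - p_detect * penalty_if_caught

gain = 10.0     # assumed strategic gain from a covert program
penalty = 40.0  # assumed cost of being caught: sanctions, lost trust, escalation

for p_detect in (0.05, 0.25, 0.50):
    ev = expected_value_of_defecting(p_detect, gain, penalty)
    decision = "defect pays" if ev > 0 else "comply pays"
    print(f"detection probability {p_detect:.0%}: expected value {ev:+.1f} -> {decision}")

# With weak detection (5%) cheating has positive expected value; once detection
# technology makes getting caught likely enough, the same self-interested actor
# is better off complying -- the sense in which better detection makes a treaty
# "more enforceable".
```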
so this is i think just one example of the places where the technologists can try to imagine what's over that mountain versus over that mountain, and choose to go that way because they think it will lead to better dynamics. >> i want to make two concluding points and then open this up to questions and comments from the floor. one is, in general cartwright's nice construction of how we move from technology innovation through norms to regimes and so forth, the problem -- and i think i'm the only lawyer on this panel -- the problem is that there's such a delay when our institutions of governance, both domestically and especially internationally, follow the technology innovation; they lag so much, and as we get technology acceleration, which we're all witnessing, i think, the importance of that delay, as these technologies become more potent, becomes more and more troublesome. maybe i can afford 25 years
before society catches up or before the regime internationally gets there if i'm dealing with an environment where there are relatively few competitors, but when there are very quickly a large number of people this proliferates to, it worries me a lot more. my problem as a lawyer is how laggard my profession is in this regard in developing this. the second thing i want to note is that a lot of what we do in pursuing technology superiority is justified, and i think several comments from the panel suggest this, on theories of deterrence. well, this is risky, but it's a lot less risky than letting our opponents run ahead. if we maintain superiority, it can deter undesirable acts, and i think that theory is time tested and has power. but nobody thinks that deterrence substantially deters accidents and unintended effects. so the most potent rationale for
advancement in the technology superiority race doesn't help us in the context of what i'm flagging here. and that's one of the reasons i think it's a particularly important topic. i don't want a world in which we cease to take risk or in which we don't pursue technology superiority, but i think our establishment very much underinvests, for all kinds of reasons, in worrying about the kind of things that this discussion has been about and what this report is about. you will get a chance i think to say more. let's just see if anyone from the audience wants particularly to either ask a question or make a comment. you don't have to do the normal washington thing of disguising your comment as a question. if you want simply to make a comment, just be brief. go ahead. if you can identify yourselves, since we have a television audience. >> sure. >> it's not me making these derogatory noises, paul.
[laughter] >> your argument -- you're a little bit caustic. i was looking for recommendations at the end of the paper on what we should do, maybe to get humans out of the loop in constructive ways, but you didn't quite go that far. i'm wondering what you would tell policymakers for ai development initiatives now. how to keep humans from screwing things up? >> yeah, i would say that in the present circumstance, there's a kind of invocation of this figure from off the stage, and that is the human will control the ultimate decision making. and the pentagon emphasis is constantly on, well, humans in the end are making these decisions. and i think it's overstated in
its significance. there are circumstances where it matters. i like the fact that we are adding in situations where -- [inaudible] -- a second variable, in this case a human being in addition to the machine; that's different. the machine is typically digital. the human is analog in effect. i like that. it's a check on the system. but the reality is that most human decisionmakers are highly reliant on machine information, on sensor data, on algorithmic analysis that has led to conclusions about the flight trajectories of ballistic missiles, where they are coming from and where they are going to and the like, and the human, given very little time, is in fact typically inclined to go along with the machine, or in some instances is as likely to add error of a different kind as the human is to be corrective.
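a back-of-the-envelope version of that claim, with rates that are purely assumed for illustration rather than measured or taken from the paper:

```python
# Illustrative arithmetic for "the human is as likely to add error as to be
# corrective" under severe time pressure. All rates below are assumptions.

machine_accuracy = 0.99      # the machine's recommendation is right 99% of the time
p_override = 0.05            # the hurried human overrides 5% of recommendations,
                             # independently of whether the machine was right
p_override_correct = 0.50    # an override made under time pressure is near a coin flip

follow_machine = machine_accuracy
with_human_check = machine_accuracy * (1 - p_override) + p_override * p_override_correct

print(f"machine alone        : {follow_machine:.3f}")
print(f"machine + human check: {with_human_check:.3f}")
# -> about 0.99 versus about 0.97: under these assumptions the last-second human
#    check lowers overall accuracy rather than raising it.
```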
my view about this is we should stop putting so much emphasis on the human in the end and place much more emphasis on the kind of up-front analysis, cushioning, creation of consequence management activities and the like, because it's at the front end that most of our opportunities exist. but for some of the reasons we've talked about, i think the front end is undervalued. >> i want two seconds on that one. >> again, looking forward is a lot harder than looking back. but i'm not sure we know in the ai construct where the human actually belongs, number one. number two, i believe the paper was very good in stating that, you know, up until now, we offered pattern recognition. we offered for a long time the ability to see something and
identify it, those types of things. much of the five senses have been overtaken by other kinds of sensors that are more effective. i can see 30 miles out, you know, at facial quality, instead of an airplane seeing from maybe 3 or 4,000 feet. so things have changed. i don't know that we know exactly where it is, and i think that to think otherwise is to go against human nature in and of itself. if i ask anybody in this room to create an automated process of, you know, putting your coat on in the morning, we do it just like we do it today now. we try to write a program for that, and then over time the machines would do it differently, because we'd find different ways of doing it. we experience this in fly-by-wire. classic example.
that's the most dangerous thing in the world, we should never think about going there, because if it's not metal connected to metal, i have lost control. so what did we do? we kind of told people that they were making the inputs. okay. but it was seven years before we told pilots that the stick was not connected to the flight controls. it just was too disruptive. okay? that's not necessarily a path to go down, but we didn't know where the person input piece fit, how we should describe it, because it wasn't the same as before. that's the key.
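a minimal fly-by-wire sketch of that "stick not connected to the controls" point; the control law, gain and limits here are invented for illustration and do not describe any real aircraft:

```python
# The pilot's stick deflection never reaches the control surface directly; a
# software control law decides the actual command, including shaping and limits
# the pilot may not even be told about. All values below are invented.

def control_law(stick_deflection_deg: float, airspeed_kts: float) -> float:
    """Map stick input to an elevator command with software-imposed shaping."""
    gain = 0.5 if airspeed_kts > 250 else 1.0    # soften response at high speed
    command = gain * stick_deflection_deg
    return max(-15.0, min(15.0, command))        # hard envelope-protection limit

# The pilot pulls 25 degrees; the surface only ever sees the software's answer.
print(control_law(25.0, airspeed_kts=300))   # 12.5, not 25
```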
and so we are in a path right now where we're doing the normal thing that we did with the pony express and then the steam engine, the car, etc. we're competing. you know, if we put big blue out and we win at chess, and we win at jeopardy, we're competing with the machine right now. we have not entered into that stage of partnership by a long shot. we don't know what that looks like, quite frankly, because the competition helps us understand where we do good things. but i do agree with the paper that the potential here is that we offer diversity in how we make decisions, just by our own uniqueness and then by the uniqueness of machines. and how does that get implemented? you know, i'm not sure. but i think the last point i would make on this is don't forget about the disruption that is caused by the people that are left behind. our education system drops out at least one full generation, if not two, in any one of these transitions. and right now we can't stand that. that's debilitating to culture, to governance, structure, etc. we have got to think more about how we do -- your example of the ship -- when we have over-the-air transaction and change capability, how does the next crew understand how they should interact with that machine? where does their learning come from? our education process right now is dropping those generations out. we've got to figure out how to
fix that. >> i'd like to add on to that. you mentioned that we need to better understand what human on the loop really means and how we're using that phrase. i'm going to say something somewhat contentious right now, so i want to make clear that it is me, not darpa, speaking. i think machine learning is a misnomer. i don't believe that we have machines that learn. if they were really learning machines, they would be able to tell you when they are giving you a wrong answer. i believe we have machine-trained systems. so the information that they give back to us is only as good as what we've either programmed them or trained them to do, and a function of the input data that we're asking them to evaluate. so the notion that these
machines are at fault or somehow misleading us is in itself a misleading statement. they are as good as we have made them at this point. that's not to say that in the future we might not impart the equivalent of learning into machines, into ai. in order to do that, they need to be able to do some sort of equivalent of reasoning about whether the question that they are answering actually makes sense. does it hold up? we know -- you've all seen the examples of an image of a building and there's some noise that's added to that, and now the machine says it is an ostrich, but we can look at that and we know from experience that it's still a building, even though some minor things have changed. we're not past that point yet.
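for readers who haven't seen those building-to-ostrich examples, here is a minimal numerical sketch of the idea using a toy linear model; the "classifier", its weights and the input are all invented, and real attacks of this kind target deep image networks rather than anything this simple:

```python
import numpy as np

# Toy linear "classifier": score > 0 means "ostrich", otherwise "building".
rng = np.random.default_rng(0)
weights = rng.normal(size=100)

# Start from a random input, then shift it so the model calls it a "building"
# with a score of -1.0.
image = rng.normal(size=100)
image -= (weights @ image + 1.0) / (weights @ weights) * weights

def label(x):
    return "ostrich" if weights @ x > 0 else "building"

print("original :", label(image), round(float(weights @ image), 2))

# Nudge every "pixel" by a tiny epsilon in the direction that raises the
# "ostrich" score. Each pixel barely changes, but the effect accumulated
# across 100 pixels is enough to flip the label.
epsilon = 0.02
adversarial = image + epsilon * np.sign(weights)

print("perturbed:", label(adversarial), round(float(weights @ adversarial), 2))
```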
so human on the loop, human in the loop can be pointwise, but there's humans in the loop through the whole process right now, because we are the ones that are training and programming these machines at this point. >> let's let another question in, and then, jeff, i think you probably have things to say on this. >> -- establishing these norms in your paradigm? >> would one of you like to comment? >> i have. >> go ahead. >> so certainly the industry is advancing the technology, but there are differences in the acceptable performance because the -- you know, the consequences of a machine making a mistake in deciding what advertisement you're going to see when you go on to google are