Debate on Using AI in Operating Nuclear Weapons | C-SPAN | January 28, 2025, 8:22am-9:27am EST

8:22 am
>> today, former service members and advocates discuss the community care program before the senate veterans affairs committee. the hearing is being held amid concerns about how president trump's executive order on a government hiring freeze could impact veterans' access to healthcare. watch live at 10:30 a.m. eastern on c-span3, the c-span now free mobile video app, or online at c-span.org. >> c-span, democracy unfiltered. we're funded by these television companies and more, including midco. ♪ ♪ >> where are you going? or maybe the better question is how far do you want to go? and how fast do you want to get there? now we're getting somewhere. so let's go. let's go faster. let's go further. let's go beyond.
8:23 am
>> midco supports c-span as a public service along with these other television providers, giving you a front-row seat to democracy. up next, policy analysts and academics debate the use of artificial intelligence in the military's nuclear command, control, and communications systems. the nc3 system is a deterrence and detection system which provides the authority for the president to use nuclear weapons during times of war. this discussion is hosted by the center for strategic and international studies in washington, d.c.
8:24 am
>> good morning, and welcome everybody to csis. my name is heather williams, director of the project on nuclear issues at csis, affectionately known as poni. we are delighted to welcome you to csis today, both the folks in the room and everybody online joining live, for this debate about reliance on artificial intelligence in nuclear command, control, and communication, nc3. i have a couple housekeeping items to go over and then we'll jump into the substance of the debate. this debate is on the record and is being recorded. after the formal debate we want time for discussion and q&a. if you're in the room please use the qr code that is behind you
8:25 am
and there's an online form for folks joining there. i need to share with you our building safety precautions for those of you in the room. we feel secure in our building, but as a convener we have a duty to prepare for any eventuality. i'll be your responsible safety officer, so please follow my instructions should the need arise, and make sure you know where the closest exit is, probably either behind you or behind me. we'll turn over to the program and to the 2025 poni debate. the series began in 2009 to encourage a dynamic and free-flowing exchange of ideas about some of the most pressing nuclear issues. in today's worsening security environment, nuclear saber rattling and emerging technologies make these debates more timely than ever. the last debate was on u.s. nuclear targeting policy with frank miller and james acton in january 2024, which you can find online if you're interested. the csis project on nuclear issues was started in 2003 to develop the next
8:26 am
generation of nuclear policy, technical, and operational experts. today we have our 2025 nuclear scholars initiative class, welcome to all of you, thank you for being here, along with many folks in our mid-career cadre. thank you for joining and being part of the poni family. the nuclear mission is a core part of our programming that provides young professionals with a unique venue to interact and engage in dialogue with senior experts. today's debate is going to focus on ai integration in nc3. my colleague had a discussion with general cotton of u.s. strategic command where he outlined a bit of a vision for ai and taking advantage of efficiencies in the nuclear enterprise. two days before that, former president biden and president xi jinping made a statement that affirmed the need to have human control over the decision to use nuclear weapons. this
8:27 am
comes at a time when the entire u.s. nuclear triad is modernizing, including the command and control systems. the concern is that ai will make decisions whether humans are aware or not. how can ai enhance decision making, what are the risks associated with that, and what are the risks we can live with? to better look at this challenge we're going to host this debate. sara is affiliated with a project here at csis and is a founder and ceo, and paul is at the center for a new american security. we're delighted to have chris andrews in the discussion for the debate, a fellow at the national defense university and a member of our mid-career cadre. the motion on the table today, the motion is that the u.s. should increase its reliance on
8:28 am
artificial intelligence to enhance decision making in its nc3 systems. the format of the debate: sara will argue in the affirmative of the motion and paul in the negative. they'll make remarks, then we'll sit back down and have a conversation, and each will have four minutes of rebuttal to what they heard from each other. chris will pose questions and they will have time to answer those. once we get to that point in the debate we'll open it up to you, so think about what the burning questions are out of the discussion, and i'll moderate as those go through. at the end they will have time for final remarks and we will end at 10:00. so with that, i'm going to turn over to sara to get us started with her opening comments. sara, please.
8:29 am
>> good morning. i'm shorter, also not wearing heels. first, i just want to say thank you for being here, for attending both in person and online; the discussion of these issues is vitally important to our national security. there has been, i think, a widely acknowledged deficit in the interest of young people in these kinds of debates, which can seem either too historically rooted or religious in nature and not grounded in actual methodology and practice, and so i'm always thrilled to come here and support poni in what they're doing, because we need more critical thinkers in this area. so, let me just start with that. while i thank you, i am also going to say i'm dismayed at having flashbacks to high school debate team, which is not always how i want to start my morning, but this
8:30 am
is going to be awesome. my name is sara, and i'm going to be speaking in the affirmative of this case. i will tell you i'm not going to bury the lede, and my assertion is probably a little exaggerated for the purposes of debate, but i'm a personality, so here we go. my basic assertion is that ai tools and techniques are appropriate for use across just about the entire nc3 system, with the exception of authorizing automated weapons release authority for the deployment of nuclear weapons with humans not on the loop, right? to get started, i think we all need to acknowledge that there are several different and varying definitions of what artificial intelligence is, and paul has literally written the book on this, so he will win that portion of the debate. i will tell you that for my purposes what i am using is basically the definition that
8:31 am
comes out of title 15, section 9401, subsection 3. it is also the definition that was used in the executive order that was issued by president trump yesterday as well. i read that, and i will be honest, i'm not sure what it says; i'm sure it says insightful things. the term artificial intelligence in that definition means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments, abstract such perceptions into models through analysis in an automated manner, and use model inference to formulate options for information or action. okay. so, i mean, the first thing
8:32 am
that you do is define the terms, so there's that. the second term that i think is really important to define before we get started is what an nc3 system is. what does it look like? the dod has been helpful in defining, in public documents, what nuclear command and control systems are. i will tell you that there are over 200 programs of record in the department of defense, everything from radios all the way up to space-based sensing architectures. essentially there are two layers of nc3 systems: one is that persistent sensing and communications layer, and the second is called the thin line, which must be able to endure through not only an initial strike but a counterstrike. then there are the nuclear surety requirements, right, which
8:33 am
are really defined as an always-never situation. they must always be available to be used and employed by the authorizing authority, who is the president of the united states, and they may never accidentally be employed or be employed when that is not authorized. right? and so, those are just some basic definitions. my three basic arguments to support why nuclear command and control is a mission set that appropriately lends itself to artificial intelligence tools and techniques: the first is that ai tools and techniques are already used almost throughout the entire stack of nuclear command and control, on both the hardware and the software aspects. the second is that ai tools and techniques can actually help to expand the decision space for the human decision maker that needs to make decisions about
8:34 am
the appropriate use of, and response to the use of, a nuclear weapon. and the third is that ai tools can actually help the national security community better model, plan, exercise, and otherwise increase the readiness of our nuclear forces and planners. so, i'm just going to expand on all of those. the first is that ai is used throughout the nuclear weapons enterprise and specifically the nc3 enterprise. this is just kind of a blanket fact. ai tools are already used to do things like design and engineer the integrated circuits for cpus and gpus on the hardware side of the equation, and they are used and employed for robust image and signal processing. the analytics for the missile warning sensors and
8:35 am
capabilities, the integrated tactical warning and attack assessment certified systems, all use ai to be able to push data and analyze it in a way that is actually usable to the forces that need it. modern space-based sensing systems already employ ai tools and techniques across their entire stack, everything from the physical layer all the way up to the application layer, and in this way ai tools and techniques are already employed in the nc3 system. the second argument is that ai tools and techniques can help to expand the decision space for human decision makers. there's no doubt that when you're talking about deployment of nuclear weapons these are questions of existential humanity, and decision makers deserve every second they can get to be able to postulate not only the consequence of an incoming strike but also what the counter-response is. that fixed time for
8:36 am
an incoming icbm requiring a response is generally thought to be around 25 to 30 minutes, with very little flex. the argument here is that you should use ai tools and machine learning techniques to be able to process data, do characterization, do data analytics, and determine flight paths so that you can actually expand the decision space for the human who needs to make a decision about what is going to happen after that incoming strike, right? this is an argument about data processing, about where you want to do that data processing, allowing machines to do what machines do best, quite frankly, pattern deviation and characterization. you can have a machine do that and then allow a human who
8:37 am
hopefully, you know, will actually have more time to think about the consequence of their action and the employment of a counterstrike capability. and then the last one is that ai tools can actually be used to model, to war game, to exercise, to force plan, to optimize against force structure, and to think about how we would actually employ a nuclear weapon. quite frankly, most of the investment in supercomputers comes from nnsa precisely to do this; they have invested heavily in supercomputing to model the effects of nuclear weapons. so that is my argument. i think we need everything, every tool that american innovation can give us, to preserve our security, and i think ai in nc3 is an appropriate use. >> thank you. >> please.
8:38 am
>> this is the most awkward part. [laughter] >> great, okay. all right. well, thank you, sara, for the wonderful opening comments, and thank you, heather and chris, and csis for hosting the discussion, and thank you all for coming in person and online. should we take the most dangerous weapons humanity has ever built, which have the potential to kill millions, and integrate into their command and control a completely unreliable technology that we do not understand? no, we should not do that. in fact, it's such a bad idea that the idea of integrating ai into nuclear command and control has been fictionalized as one of the dumbest things that humanity could ever do.
8:39 am
skynet, the ai villain that attempts to wipe out humanity in the terminator series, is what you get when you integrate ai into nuclear weapons command and control. make no mistake, that's what we would be doing. even if we keep a human in the loop and integrate ai into decision making, we would be ceding judgment to ai. humans were in the loop in 2003 when the highly automated patriot air and missile defense system shot down two friendly aircraft. the human operators were nominally in control, but in practice they were operating a very complex, highly automated system that they did not understand, in a complicated real-world environment that had novel challenges. the automation failed, they did not understand it, and the humans were not in control, the machine was. and that allowed the system to shoot down two friendly aircraft. now imagine if those were nuclear missiles and the consequence of that mistake was nuclear war.
8:40 am
those are the stakes we are talking about. so we can't just think about what might be theoretically possible if everything were to work perfectly; we need to acknowledge the reality of military operations under friction and the fog of war. the u.s. military is not immune from mistakes and neither is the u.s. nuclear enterprise. the list of mistakes and near mishaps is terrifying. in fact, there have been literally dozens of nuclear mishaps and near misses in u.s. history, and not all of them are in the distant past. in 2007, the air force left nuclear-armed missiles unattended for hours; they were flown across the country to louisiana and no one noticed they were missing. the idea that the u.s. military is going to integrate this technology in a way that's completely reliable into the most high-consequence mission, and that it will increase resilience, is a total
8:41 am
fantasy. but it's even worse than that, because even if by some miracle the u.s. military were to integrate ai into nuclear decision making in a way that appears to be reliable in peacetime, we have every reason to believe that it would fail when we need it the most, in a crisis or in wartime, because ai systems are terrible at adapting to novelty. if they're presented with situations outside of the scope of their training data, they're effectively blind and unable to adapt. so, what is the training data set that we would use for nuclear war? thankfully, we don't have one, but that means that when our systems are most needed, in a wartime situation or a crisis or a threat of war, they would be operating outside of their training data and could not be trusted. and the real risk is that ai might work in peace
8:42 am
time and then fail in wartime. and there's a brittleness with ai: humans can adapt to novel situations and ai can't. an ai that is capable in one area may look like it's capable in another area even when it's not, and it can fall apart dramatically. people can overtrust ai with catastrophic consequences. we've seen that with self-driving cars: they've driven into concrete barriers, semi trucks, pedestrians, and there have been accidents. people see that the system is good in one area, and they assume that it's capable in others, too. if the situation is outside the training data, ai doesn't know how to adapt.
8:43 am
ai systems don't understand context critical for making the right decision. when soviet officer stanislav petrov, sitting in a bunker in 1983, received an alert that five nuclear missiles were inbound from the united states, his thought was, why only five? that didn't make sense. if the u.s. were to launch a surprise attack, they would send an overwhelming number of missiles. so he had context that was important to add to the information he was receiving. he had another piece of important context: he knew that the soviet union had recently deployed a new satellite early warning system, and new technology doesn't always work as advertised; it breaks and might be malfunctioning. he reported to his superiors that the system was malfunctioning. what would ai do in that situation? it would do what it was trained to do. it wouldn't know the broader
8:44 am
context to the decision making or know about past mistakes. and a nuclear crisis is exactly the situation where we would expect an ai system to fail catastrophically: it's a novel event and context is important to making the right decision. but that's not all. that's just how ai systems can fail on their own. that doesn't get into all the nasty ways that an ai system could be hacked or manipulated, which adversaries would try to do. ai systems can be fooled with spoofing attacks and seeded with hidden back doors that could later be exploited by adversaries. this can be done in such a way that the system looks like it's functioning normally and you can't detect that it's malfunctioning. now, ai is getting better over time, but that sometimes introduces new problems. the most advanced ai systems have been shown under some conditions to engage in
8:45 am
spontaneous strategic deception, lying to their own users, and we don't know how to make them stop. i could go on well past my time limit with the failures of ai systems; there are many. the point is not that ai is not good for anything. it's good for many things, including in the military, but not where we require zero tolerance for failure, and that's what we need in nuclear command and control. ai is a long, long way from the level of assurance that we need in the nuclear enterprise. and lastly, of course, an ai system will never be able to understand the emotional gravity of what's at stake, quite literally the fate of humanity hanging in the balance. petrov knew that if he got it wrong millions of people could die, and ai will never understand what that means. it will never feel sick to its
8:46 am
stomach about the consequences of getting it wrong. so the answer is a clear no. integrating ai into nuclear decision making will not enhance resilience. it will degrade our decision making, raise the risk of inadvertent escalation, and lull us into a false sense of security and confidence by appearing to be reliable in peacetime and then failing when we need it the most. thanks so much and i look forward to the discussion. >> thank you. thank you both so much for your opening remarks. thank you for disagreeing with each other; it would be a boring debate otherwise. we'll turn to rebuttal, and paul has the tricky part of having just made a statement and then giving a rebuttal. i'll quickly summarize how i'm understanding the debate thus far
8:47 am
and we can dive in. sara's argument: ai tools and techniques should be applied across nc3, with the exception of automated weapons release, based on the points that ai is already used in the software and hardware, that it can expand the decision space when time is short, and that it can help with planning, modeling, and exercising. paul's remarks: this is an unreliable technology that shouldn't be applied to the most dangerous weapons, based on historical examples of automation failures and humans not being in control when they thought they were, nuclear mishaps, ai's inability to adapt, and the lack of training data. what i'm hearing is that the main point of disagreement is first about the reliability of ai itself, the technology, but also its susceptibility to hacking and spoofing, and about ai in the nuclear context specifically, because the consequences are so high and because they're unique.
8:48 am
in that way. and hopefully i've had the issues and i'll let you duke it out. paul, i'll innovate you to speak about anything you heard her say. >> and the awkward part, and commenting and addressing some things you raised. i really like that you brought up the always-never dilemma. this is crucial to the challenge of integrating ai into nuclear decision making. that's a tall order. that's a tall order for people, for our organizations today to say, okay, we always want nuclear command and control to work to convey an authorized order from the president to employ nuclear weapons, but we never want the situation where there is an accident l an or an unauthorized use and there's no way that ai's good enough to meet that criteria. no way in our system it's capable to say it's always
8:49 am
going to give the right answer. maybe 80% of the time, maybe 90% of the time. in a lot of domains, that's going to be fine. you compare it to a human, a human radiologist looking at an x-ray and giving the right answer. but that's not the case in the nuclear enterprise. you brought up specific examples of ways to use ai, and i want to tackle some of these. ai in designing chips, for example. i mean, we're doing that now. that's a sensible thing to do. i would distinguish between things that are involved in decision making and things that are in the broader nuclear enterprise, more general or adjacent to it. so we shouldn't be luddites and say we're going to use an archaic way to design chips and we're not going to use the best way to do it. that makes sense; it's a sensible way to use ai.
8:50 am
similarly, should we have autopilots on planes? yes, we should have autopilots on planes if ai is able to do that better. there's a key difference between things that are simple, repeatable tasks, where we have good data on what that task is, like taking off and landing planes, which we can test in peacetime and which don't change in wartime, versus decision making involving nuclear weapons, or even information that feeds into that decision making, which might be different in a crisis. and i think that's really important. you mentioned that one of the goals of ai is that we could buy time for decision making. anything we can do to buy time is great; i think we should try to do that. the challenge with ai comes if you start to bring it into the decision making process. let's say there's some recommendation brought up to decision makers: okay, our algorithm predicts x.
8:51 am
the goal would be to buy time; we get a little bit of earlier warning. the challenge is, do we, in the course of doing that, add more uncertainty? right? there's a question about whether it's right, and that's a really important question, but also, do policy makers trust the ai system, or are we simply adding more noise into the mix in an already chaotic environment, adding more uncertainty for decision makers: i don't know if i can trust this information, i don't know if i can trust this recommendation from an algorithm. is that helping us get to better decision making, or is that simply creating more confusion for the human decision maker? at the end of the day we've got to acknowledge the reality of human cognition and human psychology: how do people respond to these systems? that has been an important issue with self-driving cars.
8:52 am
sometimes the way people treat ai is, we're going to let the ai do everything it can and then we're going to have the human fill in the gaps. >> fifteen seconds. >> i think what we've seen with self-driving cars is the idea that you're going to be driving down the road at 70 miles an hour, the ai automation is doing fine, and then in a split second the human is going to recognize something is wrong and intervene and take over. it's not realistic. it's not how human cognition functions. we've got to think about how humans can interface with these systems and how we can support that to make it more effective. thank you. >> thank you. i think that's great. what's interesting here is a couple of things. one, thankfully, there is a real lack of experience and lived training data for nuclear weapons deployment scenarios. i hope that continues to be the case. one of the interesting things here is that there's certainly a lot of
8:53 am
risk with bias in training data. that is part of what the executive order that president trump signed yesterday was trying to address. other administrations have put out robust executive orders on it, and the department of defense has also done strategies about the dangers of, and limits on, ai throughout entire weapon systems. i see that. the challenge here, and i heard this coming through, is that i don't believe ai is appropriate to obviate the need for human judgment. i think ai tools and techniques are exactly that. they are tools and techniques that can be applied when you are analyzing petabytes of data that are going to impact millions of
8:54 am
people's lives. and if you can crunch that data and do that pattern recognition, classification and determination, and flight path discrimination any quicker, then you absolutely should do that left of launch. you should make sure that all of that is as robustly vetted as possible, and then the scariest thing about nuclear weapons, at the end of the day, is that it is still a human judgment, right? there is no recourse, in my mind, for ai to replace human judgment. so when i say i am for ai, quite frankly, all the way up until you get to automated weapons release authority, for me that exception also includes things like coa generation. i don't think ai should be doing coa generation for what our responses with nuclear weapons would be.
8:55 am
it is not a replacement for actual rigorous planning. there is always a risk, right, and as someone who is a mathematician, i know there's always risk as you approach zero. the always-never scenario, it's not a scenario, it's a requirement, and it's very hard. i have very large tattoos, right? i used to walk into the pentagon and people would not know what to do with me. they were like, what is she going to say about our nuclear weapons, you know? i would be like, look at me, you know? i was younger. i was cooler. i had big old tattoos, and i'd be like, i'm a relatively risk-tolerant person. yeah. but the one area where i would not choose to accept risk is nuclear command and control, because it
8:56 am
is nuclear command and control. it is literally a discussion about existential humanity. i find myself in the awkward position now of being out of the pentagon, of being much more optimistic about the pentagon's ability to not only integrate innovation that we're making here in america that can help our economy and our national security, but to do it in a responsible way that ensures the nuclear surety requirements of the united states and its allies. >> you're at time. eight seconds left, perfect. great. thank you both for respecting the time limits. thank you for teasing out your arguments and engaging with each other. we will now turn over to chris, one of our mid-career cadre, who is going to pose a few questions. you each get one question and then you have two minutes to answer. chris, thank you for doing this. >> thanks, heather. always a delight and privilege
8:57 am
to do this each year. i do have to legally say that i am not asking these questions on behalf of my employer, dod, or anybody else. they're coming straight from the heart. [laughing] a couple data points stood out to me in your excellent remarks. first and foremost, i can join you in celebrating that there's a limited data set for real-world applications of weapons release scenarios. that's great. the second set of data points that stood out to me are the many accidents; dozens of near misses and losses of control of nuclear weapons is alarming. history is far more terrifying than we have time to get into. it seems to me like those examples highlight the need to potentially augment human judgment or support it in some way. it would have been a nice reminder, for example, if there had been a good automated alert that said, did you make sure you kept track of your nuclear weapons today before taking off? it seems there would be an opportunity to mitigate risk and
8:58 am
augment what is already a history of flawed human judgment that frankly we're lucky to have escaped from. my question to you, paul, is what would have to change about ai's reliability, its performance, or its integration into difficult problems in nc3 that would change your position, make you more excited about it, and say this is a system that could help us reduce risk? >> that's a great question. to be clear, i don't want to say we should never use technology to improve the safety of our nuclear enterprise; of course we should. simple things like automation, like a check where somebody is asked, did you double-check the whatever? that could be helpful, right? the main risk is when we start to see more sophisticated ai systems, and that could be using neural networks, deep learning, or complicated rules-based systems; you can get to a point where the human operators don't understand them very well, the inner workings of the system, and they
8:59 am
are complex enough that when you're dealing with the real-world environment, there are chances the ai system will fail. if there were a way to improve nuclear surety, because history shows the need, should we? yes, absolutely. but i would contrast the nuclear problem with a related problem, which is driving, right? we're in the process of developing self-driving cars. they will be better than humans. one could argue based on some of the data that we are there now. but the baseline is that humans are terrible drivers; like 30,000 people die on the roads every year in america. it's a harrowing experience, right? the baseline is bad. but also, in order to get there, and this is a real key difference with the challenges we're talking about in the nuclear space, we put self-driving cars out on the road.
9:00 am
they're in a real-world operating environment which is the same as the one they will operate in, and there have been accidents. i don't know that we could ever get to the place where you have good self-driving cars that are safe without having some accidents along the way, and i don't think the nuclear command and control enterprise is a space that will tolerate that kind of accident. >> that's an excellent point, i think. i'll turn now to sarah. i find it persuasive that, as paul agreed, it's important to incorporate new technology to make systems more efficient. there's a big difference between the character of ai versus just getting more efficient computing to analyze data more effectively, more efficiently, and expanding that to space. so my question then becomes, once we've improved our capability to analyze and collate data, once we make the decision to lean into ai as part of our nc3 apparatus, decision-making
9:01 am
scenarios aside, how might our adversaries perceive the risk space? keeping in mind recent statements by president biden and xi jinping agreeing to keep a human in the loop, so to speak, it seems like there might be an opportunity for adversaries to say, look, we're losing ground in the race in nc3, and in ai more broadly outside of nc3. sara, how might our adversaries look at nc3 and strategic stability? >> this is a great question, and exactly why i think this debate is prescient right now, right? because, you know, at the end of the day, and paul articulated this as well, ai isn't emotional, it doesn't understand context. it certainly doesn't understand how to be flexible, and then
9:02 am
when you end up applying that to how you would anticipate adversarial leadership decision making, right, and you mirror that against what adversaries think our decision making process could be, there's room there for real consternation, right? it's no surprise that the russians and the chinese particularly have invested heavily in ai, and no surprise that they have different thresholds for use of strategic systems and for how that would impact a nuclear weapons scenario, a deployment scenario. at the end of the day, i am less -- i feel like it is less
9:03 am
compelling to use ai for that rote generative kind of coa development from the nuclear command and control perspective, and i think it's much more compelling on the data crunching side, allowing computers to do what computers do best and humans to do what humans do best. it ends up as an optimization, and this is a multi-factor optimization, so clearly the enemy or the adversary always gets a vote there, and they have spoken not only in their resources and how they've decided to develop these things, but also in their writing about how they would choose to employ them in a strategic way.
9:04 am
outstanding. thank you, sara. thank you, paul. >> thank you both, and thank you for really great questions. questions are coming in; if you're in the room please use the qr code, and if you're online, use the online form. two questions for sara have come in. the first one, from one of our nuclear scholars from oak ridge national laboratory: how can we verify the ai's training data, knowing the data were put in by humans and could be skewed? and an anonymous attendee asked how to counter automation bias when humans are asked to make decisions under uncertainty and time pressure. kind of related questions there about related limitations. >> they are both great questions and they are related.
9:05 am
at the end of the day, i think everyone on both sides of this debate recognizes that there is limited training data. one of the most challenging endeavors right now is figuring out how to unbias training data and algorithms, and that is something that, ironically, places like oak ridge are looking at. but it is also incumbent upon the department to really look at and make available what they believe that training set looks like, what that data set looks like. there are entire offices now that, quite frankly, aren't just looking at the ethical use of ai, but are also looking at the human interaction with it, how humans perceive ai, what that looks like, and, you know, quite frankly, more
9:06 am
resources and more study need to be put to that. i don't have an easy answer for that. i will tell you, i don't think most people do. but the training data, i mean, i think one of the challenges with this kind of a prompt or this kind of a debate is that definitionally everybody just thinks "ai." there are, what, like how many people in the room right now? >> a little over 100. >> there's over 100 people in the room and at least 200 definitions of what ai is right now, and when you apply that against nc3, we are all thinking about this and contextualizing it in a different way. for me, ai and machine learning is really about pattern deviation, classification, and really combing through petabytes of data and allowing a human to be a human. you can agree or not agree, but to be
9:07 am
human is to be inherently flawed; that's the beauty and nature of being a human. but that's different for me than coa generation, than generative ai, which i'm sure a lot of people are also into as well. >> thank you. a question for paul. this is also coming from one of our nuclear scholars from this year, from adam reynolds. you might have to do a little explaining on this one as well. would advances in explainable ai potentially change your opposition to ai in nc3, in a scenario where, with xai, the human in the loop could explain the decision and concur with it? you're going to have to explain explainable. >> yes, so one of the problems for ai systems, particularly for ones that use deep neural
9:08 am
networks, deep learning networks, is that their answers are often not explainable after the fact; we have a hard time understanding why the ai system did that. that's fundamentally different from traditional rules-based systems. with rules-based software, like we have in our computers and in airplane autopilots, sometimes it's not predictable how it will respond in different situations, something like the 737 max crashes, but after the fact we can look at the lines of code and figure that out. a neural network is a massive set of connections, trillions of artificial neurons. with something like chatgpt, you go, why did it say that? there's no answer for that; it's buried in trillions of connections. sometimes you get really weird situations. for example, claude has this
9:09 am
sort of affinity for animal rights, and it wasn't specifically designed to be pro animal rights. the designers don't totally know why and don't know how to sort of level that out in a way that doesn't then cause adverse effects across the system. so that's a real problem for any kind of high-consequence application. now, the goal of explainable ai as a field of study is ai that you can look into and get a better sense of what it's doing. ideally you could peer into the neural network and say this is causing this activation and that's why we're getting some kind of output. that's different, of course, than an ai that explains itself, which systems can do today. an ai language model, for example, can tell you why it did something, but that's maybe not the same as why it actually did it, just like with people. people
9:10 am
could give you an explanation of why they did something, but that may not be why they did it. they may actually not know themselves why they, you know, acted a certain way. so i think that if you could get to truly explainable ai that's understandable, where we really had a good sense of why the model is functioning a certain way and the ability to adjust expectations, that would be an absolute game changer, but we're a long way from that right now. >> we're getting a lot of questions coming in about accountability, responsibility, and governance, so i'm going to have a question for each of you on this topic. the first one, this is for sarah, comes from adam howser: how would you address the responsibility gap ai poses in the case of failure or inadequacy, and who should be held accountable? and the question for paul, i'm going to bundle a couple here, but given the potential unpredictability of ai systems
9:11 am
when they're under stress, what safeguards should be prioritized? and then i would also add onto this: you know, in one of your books, i think it was army of none, you capture kind of the history of governance and arms control, with a fantastic historical table of arms control agreements, and you look at different tools of governance and how these could be applied. is your thinking on ai governance the same as when you wrote that book, or has anything changed with technological developments? but, sarah, we'll go to you first. do you want to take adam's question? >> adam's question was about responsibility? >> how would you address the responsibility gap that ai poses in the case of failure or inadequacy, and who should be held accountable? >> again, since my argument is that we should not use ai to automate weapons release authority and we should be using ai to enhance or augment
9:12 am
human judgment, then the human judging the consequence, right, that human needs to be accountable. i mean, ai is not a replacement for human judgment. ai is not a replacement for, i mean, it is artificial intelligence. you know what it's not a replacement for, what i'm constantly looking for? real intelligence, genuine intelligence; it's really hard to find. it's a glib answer, but it is the correct answer, which is to say these are tools and techniques. they are not means and ends in themselves, and while we're talking about ai, there are plenty of other tools and techniques that we can use to optimize human judgment that we should be applying across all of our defense missions. and so that gap in responsibility or accountability essentially always comes back to an empowered individual who is
9:13 am
making a decision, which is the root of, you know, what defense and national security should be. >> quick follow-up to that. >> yeah. >> would we always know who that person was? >> for nuclear employment you always know who your command authority is. >> for example, for some of the inputs going in, like where the information is coming from, if you rely on ai for situational awareness, some of the things you outlined, and if you did have ai contributing to that decision-making process and something went wrong one way or the other, would you always be able to trace back ultimately who is responsible? >> no, you would not. >> okay. thanks. paul. >> yeah, so on the first question about safeguards, what safeguards should you have in place: i think in general, when we think about implementing ai in military operations, i would think about bounding the risk. think about, okay, if this ai
9:14 am
automated system fails, what are the consequences of that, and what can i do to bound that risk in a way that puts all of those types of guardrails in place, right? so if it's a conventional munition, or if it's a drone that has some kind of automation on it: what is the payload of this system? if it's engaging a target, how big are the explosives, how many weapons are we talking about here? what's the range, what are the consequences, and are there other things, maybe not in the ai space specifically, that we can do to sort of bound that risk? and then think about, okay, if something goes wrong, how bad can it get before there's time for a human to make an informed decision to intervene, to change things? in the nuclear space specifically, you know, i think the statement in the nuclear posture review that states that a human would be in the loop is excellent; i'm very glad to see that. i think it's a huge diplomatic
9:15 am
win that china agreed, that xi agreed with biden earlier this fall, to maintain human control over nuclear weapons. i think the next step is, what does that mean in practice? to actually get at some of what we're talking about here: okay, how does that cascade forward into guidance internally about what the left and right limits are on the ways that ai could be employed? in terms of what has changed in ai governance, the book has only been out a couple of years now, and things have moved very, very quickly. so i think that the underlying ideas about governance and the issues surrounding autonomous weapons are not different, but the technology has changed remarkably in a couple of years. the book talks about deep learning and neural networks, but the technology is much more advanced now. and as the technology gets more capable, you know, for a long time people have assumed that ai is getting more
9:16 am
advanced, that it's heading towards human-level intelligence. that's pervasive now in discussions about whether we're on the verge of agi, by which people often mean human-level intelligence. i think implicitly people envision human-like intelligence. what we're seeing is that the ai we're building does not think like humans. it's different. it thinks in some strange and alien ways, and that's a new development. the technology is powerful, but it's one that we need to be mindful of: even if the outputs look human-like, which is what it's designed to do, what's going on under the hood is not, and we need to be conscious of that when we use the technology. >> we're running short on time and there are questions about adversaries' use of ai, so i'm going to pull just one, the most distinct one, and i'm going to have to ask you to be brief in your response if you can. this comes from usa today. the question was, does china's looming adoption of at
9:17 am
least a partial launch on warning posture raise the mutual stakes of ai integration into nc3? whoever wants to take that one. >> all right. give it to me again. >> does china's looming adoption of at least a partial launch on warning posture raise the stakes of ai integration? if you're further compressing the decision-making space in a potentially protracted regional crisis with china, would incorporation of ai be even more problematic? >> i mean, i think i challenge the premise of the question, because at the end of the day, i think everyone has agreed that there does need to be a human on the loop making that decision. the ai integration that i would
9:18 am
foresee is very much left of launch, and so i don't actually-- and at the end of the day there is no risk tolerance on nc3 anyway. so the question about increasing-- does it increase the risk? the risk is already there. it is already acknowledged. that requirement for us is always going to be there. that will be enduring across administrations, across technologies; it will exist, so-- >> anything to add? >> i guess i would maybe broaden the question a little bit to say that, in my mind, there are a couple of things that are raising the stakes of getting this right in terms of nuclear stability, and chris kind of mentioned this earlier: the
9:19 am
increased salience of nuclear weapons, a tri-polar world, and technology, ai, cyber, space. that's changing the nuclear balance, so i think that the stakes of getting this right, of integrating these technologies in a way that is stabilizing and not destabilizing, are critical at a time when the landscape is changing in a way that it really hasn't for decades. so having folks, frankly all of you, invested in all of this, and having a lot of brains working to figure out the right way to integrate technology in a way that's stabilizing, is absolutely essential. >> with that i want to give each of you a chance to offer, i'm sorry, we have to cut it down to 30 seconds each, any final points you want to make. sarah, we'll start with you.
9:20 am
>> yeah, i think, again, i will assert that ai tools and techniques are, one, tools and techniques that need to be applied with critical judgment by humans consistently across any c2 system. two, they're already being used in the hardware and the software aspects of nc3 systems. three, i think that this is a multi-factor optimization challenge that is definitionally challenging, and it's not that the same ai techniques and tools need to be used across the entire spectrum of that kill chain and scenario; different ai and ml techniques are used earlier versus later. i think later, when you get into the actual counterstrike or employment decision making, ai becomes less effective and less appropriate. certainly on the side of
9:21 am
automation, i think it's heavily weighted earlier, in use in planning, in characterization and identification, so that it's more process intensive up front. so that's my general argument. >> awesome. well, first of all, thank you, heather and chris, for a great discussion and a lovely debate. you know, sarah, you made this point earlier, machines do what machines do best and humans do what humans do best, and i couldn't agree more. the question is, what are those tasks? machines are really good at things where we have lots of data and we have clear metrics for better performance, where we can all agree that more of something is better. so driving is a good example. we can collect data out in the real world and we all agree that having fewer accidents is better. that's what we want. we want something that's safer. so humans are better in places
9:22 am
where we don't have good data, where there's a lot of novelty, and where the right answer depends a lot on context. and what does that look like in the nuclear enterprise? there are certainly some applications for ai and automation that make sense, and you can conceive of them in nuclear operations: taking off and landing aircraft. we can make machines now, and have been able to for a while, that are better than humans at that. and are there roles for computer vision for tracking adversary movements and identifying, okay, this is a mobile missile launcher? absolutely. that's a sensible application, where we can say this is a mobile missile launcher, this is what it is. but let's take the next step then and say, okay, could we have a predictive ai system that could do indications and warning of a surprise attack? that would be a terrible application, because we don't have the data, and what would make a good answer would depend highly on other
9:23 am
pieces of context. so we need to be very cautious as we think about how we're using this technology and be mindful of its failures. thanks so much. >> thank you all so much. the final thought that i'm going to leave you all with: quite a few of us have mentioned this statement between biden and xi about a human in the loop, and i think we've gotten a lot into the details of what that means in practice and what that looks like. i put that out there as something to watch. in particular, yesterday we heard that president putin and president trump are interested in restarting arms control talks and trying to reengage in some sort of dialogue, hopefully on risk reduction measures, and historically russia has been reluctant to join in a human-in-the-loop statement. if the arms control talks go forward this might be a topic for discussion, and i think it made today's debate more timely. hopefully everybody has a bit of a better understanding of the nuance of
9:24 am
the technology, and we ended with a bit more agreement than we started with, which is nice; it's nice to come around there. but i think there's a bigger trend that doesn't just apply to policy but to everybody's day-to-day life, which is the human relationship with technology: how much do you trust it, how much do you rely on it, and when and where. in the context of nuclear weapons, when the consequences are so big and the scale is so big, it raises ethical questions as well. i'm grateful to the three of you for participating in this discussion. a big thank you to the csis av team as always for helping us pull this off, and the biggest thanks of all to the poni team for organizing and coordinating everybody, and caroline horn who just walked in, thanks, caroline. and last but not least, thank you to all of you in the room and online. i'm sorry we couldn't get to all the questions; there were a lot of questions and the debate generated a lot of
9:25 am
interest. please stay engaged with poni and we look forward to the next event. thanks, everybody. that's it. [applause] [inaudible conversations]
