tv Key Capitol Hill Hearings CSPAN October 1, 2014 7:00pm-8:01pm EDT
7:00 pm
kickstarter, chobani yogurt and gabby giffords. you'll hear about new efforts to cure cancer. the discussion of the origins of the universe and the future of finance. that's tonight starting at 8:00 eastern here on c-span3. >> our campaign 2014 debate coverage continues. tonight at 8:00 on c-span, live coverage of the minnesota governor's debate between incumbent governor democrat mark dayton, republican candidate jeff johnson and independence party candidate hannah nicollet. thursday night live coverage of the oklahoma governor's debate between joe dorman and governor mary fallin. also thursday on c-span2, the nebraska governor's debate and saturday night on c-span at 8:00 p.m. eastern, the debate between john lewis and ryan zinke.
7:01 pm
c-span campaign 2014. more than 100 debates for the control of congress. now a discussion on federal science policy and the use of simulation to address national threats. the chief scientist for the national nuclear security administration and a special adviser to the energy secretary addresses the university of tennessee's howard baker center for public policy in knoxville. this is about an hour. >> thank you. that was an overly generous introduction. thank you, again, taylor. i think the hospitality here at the baker center and the university of tennessee has been remarkable. it's a wonderful place, and i'm happy to be here. you know, as an academic who ended up in washington for some reason, i wanted to give you my
7:02 pm
personal take on computational science, what we do and kind of how i view this. i think it's an interesting story, and i hope you'll find it interesting, too. as a beta test, i guess this can fail and still be successful as part of the learning, so we can look at it that way. so i have some framing thoughts on computational science. i guess i should project this. so there are just a few topics i'd like to talk to you about today. tell you a little bit about how i think about it. where i see the challenges. some examples of what we've done and how we use it.
7:03 pm
and where we're headed. depending on time i'll cover some of these in different ways. i think there is no easy or simple way to explain how we apply simulation these days. certainly from popular culture we have a sense that simulation can do remarkable things. you know, you only have to go to the theater or look at all the content out there where virtualization is really part of almost anything you see these days. but when you have to temper it in reality and make decisions, and there are consequences to those decisions, it's a little bit different. i wanted to tell you a little bit about that world. the degree of trust in simulation is still emergent. there is not a unique way to characterize how well we think
7:04 pm
we are predicting something and how much we trust it. and there's a lot of work to be done there. there are some places we do it by statute, and there are other places where you really need champions and advocates at the right time to say, hey, these tools can be brought to bear, here are some experts that can help, and i hope to give you a few examples of that. really, trust -- there isn't an easy way to explain why you trust simulation or why you don't. i think for everybody it's somewhat experiential, and there is a personal aspect to that, and you see it among scientists and i see it in washington among scientists. there are some that believe it and there are some that don't. again, you can trace this back in many ways, you can trace it back 500 years to descartes and bacon and deductive and
7:05 pm
inductive reasoning and other ways to approach the world. either you have to prove it empirically before you can take the next step, or you believe that you can deduce things, that you can set up some type of intuitively derived set of premises and from that build your understanding. those are two lines of thought that exist today. you'll find a collection of scientists, some of whom will say, unless you do the experiments, i don't believe anything you predict. and so, again, the whole idea of trust and when you call upon simulation to help you is still deeply rooted in personal issues that are hard to capture. i hope you keep that in mind as you go through some of the examples today. i'll try and cover a collection of different topics and try to show you some of the commonality of what's behind them, and i hope you find it interesting. prediction is really part of our
7:06 pm
everyday life. you deal with it whether you're trying to figure out, you know, what's going to happen in march in the ncaa tournament, or the world cup in rio, or the gold medal count in the sochi olympics. you know, prediction comes in many places. you predict things by yourself. i would say that among all the predictions you do, the consequences are probably fairly limited. the consequences of making a bad prediction are not severe -- maybe you get wet because you didn't expect it to rain. but i would say that's not a high consequence. right now, we're turning to simulation quite a bit more.
7:07 pm
i'm sorry about that. we're turning to it in understanding a number of more serious problems, more societal problems, and i view them as being in two categories. you know, as an academic, i resonate, and certainly resonated in my previous career, with the class of -- let me call them, for no better name, output-based simulations. this is the kind of problem that a scientist poses. it is typically well defined. you know the degrees of freedom, in scientific parlance. you know what to measure. you have a hamiltonian, you have a theory, and you're trying to solve the theory. and it's then an exercise in mathematics and in controlling your approximations to solve that. and you have something you want to measure.
7:08 pm
maybe you're studying protein conformation or maybe you want to measure the mass of the proton -- you can pick your quantity, but it's precise, you know the degrees of freedom, and it's typically a matter of controlling the approximation when you put it on the computer. it also has the benefit that you are the specialist. when you solve that kind of problem, you're the master of that domain and you control it. the other class of problems that i see -- let me call them outcome-based -- are the ones that i find interesting. they're technically imprecise. they're ill posed. they're typically societally based. they're things that impact people. you want to know why things are going to happen and why they're important to you. often you don't know what the degrees of freedom are. you don't know where to start.
7:09 pm
you can't control the models or approximations, and you don't know how precise your answer is. but that's the place we need the most help. typically these are multidisciplinary problems where you have to work with other people. you have to ask questions outside your comfort zone. and they are hard. i think discovery lies there in general, and this is the class of problems i would like to illustrate today. you know, in this second class of outcome-based problems, we don't ask scientifically precise questions, but the things we care about are: what do you have to do and when? what is your confidence that you can actually help here? what does it mean? what does it mean to you that it happened? what are the risks? what are the risks it might happen again? and the question is, how do you bring science into answering questions that are not scientifically precise?
7:10 pm
where do you start? and how do you do that quickly? and what tools do you have at your disposal to help inform that? often you don't know if you're even asking the right questions. and often you have to ask, are the right people asking the right questions? and in that case, are you even positioned to answer them? but when you think about societal issues -- and i'll talk a little bit about fukushima, the underwear bomber, you know, the satellite shootdown, the oil spill, collections of things that impacted people where science helped inform the decisions to be made -- again, these are real issues, oftentimes urgent, but the quality of the question you want to answer through simulation is like that. it is not precise. what is the measure of "what does it mean"?
7:11 pm
because the average person wants to know what it means to them. how it will impact their life. you know, whether you'll have electricity, or whether you can get gasoline or groceries, or whether your lifestyle is impacted. that's really the societal issue that you're concerned about. how do you manage the interface of science, which is typically precise questions and precise methods, with the imprecise needs of questions of this quality? i want to mention maybe one additional quick digression: at the same time that we're interested in solving these problems, we certainly have a changing world. there was a nice little piece a couple of years ago, i remember, that jack did comparing the ipad 2 to the cray supercomputers, and these days, with iphones and the amount of computing you
7:12 pm
carry in your pocket, it's remarkable. project out 10, 15 years, the kind of time scales that departments have to think about for planning big infrastructure: what is the future we're thinking about? what are the tools we have to have, and how do we work through there so that the country can be responsive to answer these kinds of questions? today there's a growing set of issues we worry about, whether energy, security or climate, health, critical infrastructure. you know, there are more and more places where we turn to computational science to inform our decisions, because many of these things can't be tested or instrumented or done before they happen, and so these are places where virtualization is an important step in characterizing the risks and decisions that we might have to make. among the problems we have, there are again two categories: data rich and data poor. i just want to distinguish those
7:13 pm
just to keep that in the back of your mind. you know, there are some problems, sensor data, weather data, places where you have nothing but data, and your problem with simulation is to figure out what it means. what are causes and effects, what are simply correlated signals and what are causative. and that's not always easy. solving the inverse problem from a rich set of data is a very hard problem in trying to figure out what really impacts what. there are problems that are data poor. certainly the nuclear weapons program is an example, but i'll give others -- i would say that tony mezzacappa's supernova work is a data-poor place. tony would certainly love to instrument the next supernova beforehand and get all the data you want, but you can't do that. and if you get data, you'll be
7:14 pm
happy. but you can only get a very limited set of measurements, and making sense of that is really model dependent. and so among these classes it's not just simulation broadly; there are different qualities of questions we ask, there are different kinds of data, and different assumptions we have to make about the models we need. so, to give you a sense of some of the things that we have turned to simulation for in the past few years, certainly while i've been in washington, i thought it might be a little illuminating. i would be remiss, being at oak ridge, also with y-12, certainly a place where the department invests heavily in the nuclear security mission, not to say a little bit about the
7:15 pm
nuclear weapons program. i think it's an interesting tour de force of simulation, and i just want to capture a couple of things there for you, just for that reason. in the bottom corner there is a cartoon illustration of the kind of complexity in how we understand weapons now without testing them. we stopped testing in 1992. you know, in the record year this country did 98 nuclear tests, and the integrated amount is 1,054 tests over our history, kind of our legacy. but the problem scientifically is really multi-scale. it starts at the nuclear scale, at the scale of nuclear interactions, fission and fusion processes, 10 to the minus 15 meters, and it spans the size of the weapon, the meter
7:16 pm
size. it's more than a 15 order of magnitude problem. for those in washington, what i say is you can think of it in terms of the federal budget, which is about 3.5 trillion dollars. it's like managing the federal budget at the .3 cent level. understanding cause and effect across 15 orders of magnitude is nontrivial. there have to be assumptions in there. how do you qualify the trust in the choices you made at the .3 cent scale, at the $100 scale, the thousand, million, trillion dollar scale, to say that i have confidence that i know where the federal budget is going and i can tell you where it's going to be next year? it is a very tough and challenging problem, but it is a place where the laboratories have certainly excelled in doing that.
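A quick arithmetic check of the budget analogy above, as a minimal sketch; the $3.5 trillion figure and the 15 orders of magnitude are the speaker's numbers, and the rest is just unit conversion:

```python
# Sanity check of the "federal budget at the .3 cent level" analogy.
# A weapon spans roughly 10^-15 m (nuclear interactions) up to ~1 m (device size),
# i.e. about 15 orders of magnitude.
weapon_span_orders = 15

budget_dollars = 3.5e12  # ~$3.5 trillion federal budget (speaker's figure)
smallest_scale = budget_dollars / 10**weapon_span_orders
print(f"15 orders below the budget: ${smallest_scale:.4f} (~{smallest_scale * 100:.2f} cents)")
# -> about $0.0035, i.e. roughly a third of a cent, matching the ".3 cent" analogy.
```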
7:17 pm
anyway, there are a lot of questions we ask these days, at the bottom of that slide. we want to know whether they're safe, whether we have options to make them more secure. we need to know what other people are doing. we worry about terrorism and proliferation, and these are very broad questions that we are starting to turn these tools to. but in view of time, let me go perhaps to things that at least you might find more interesting. i remember february 1st, 2003. i hadn't been in government very long. it was a saturday morning. i was returning from a conference in san diego and i was at the terminal there. saturday morning, kind of at the end of the terminal, is a little round area that has the gates, and in the middle was the bar. i was waiting to board. i kind of looked over at the television sets and i was watching the re-entry of the space shuttle and i couldn't quite make sense of it. i was looking at that knowing that the shuttle was passing
7:18 pm
overhead at the time, and there were like three or so bright lights coming down. and i couldn't tell what that was. and it was the shuttle breaking up on re-entry. it's a moment that's etched in my mind. you think you know what you're looking at and you really have no idea what you're seeing. and on the monday after that, sandia national laboratories was already in touch with nasa to ask, you know, what is it we can do to help with the kinds of tools that are available, is there something we could do to assist in understanding this problem? it is a post hoc issue, but nevertheless it is a useful thing to do. nasa understood, as you can see in the video, that foam had come off, so they took some high resolution movies -- let me see
7:19 pm
if i can get this -- and they were able to see the foam coming off. and if you calculate the relative speed, it's about 700 feet per second that this block of foam, maybe a cubic foot or so, came off and struck the shuttle. at the time they had a tool called crater. you can read about it. there's a very good report from the columbia accident investigation board, the gehman commission, that went through this. a very thoughtful, detailed report. the tool that they had, crater, which really had its genesis in micrometeorite impacts in the '60s, grew into their tool of choice in the late '70s and early '80s, but it was used outside of its domain of validity and no one knew. again, one of these legacy codes where people had retired. those that understood where you could use it were no longer
7:20 pm
there. there wasn't a sense of how predictive it was. it was viewed as a conservative tool, and it told you that there wasn't a problem. the shuttle was on its 28th flight, and so it was known that the foam hit it, and it was viewed that this was not a problem. we had a look at the problem. nasa certainly reached out to a number of places to do the analysis. one of the things they looked at at sandia is the strength properties of the front end of the wing. what i have there in blue is a picture of the simulation of the reinforced carbon-carbon front ends of the wings that were used to understand what happened. anyway, sandia decided to do a detailed analysis of failure modes. the question here is what went wrong. what is the failure mode?
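As a rough back-of-the-envelope sketch of why a lightweight foam block at that speed is dangerous, here is the kinetic energy implied by the figures above; the speed and size are the speaker's figures, while the foam density is an assumed round value, not from the talk:

```python
# Rough kinetic-energy estimate for the foam strike (illustrative only).
FT_PER_S_TO_M_PER_S = 0.3048
LB_TO_KG = 0.4536

speed = 700 * FT_PER_S_TO_M_PER_S   # ~213 m/s (speaker's figure)
mass = 2.0 * LB_TO_KG               # ~0.9 kg for one cubic foot of foam (assumed ~2 lb/ft^3)

kinetic_energy = 0.5 * mass * speed**2  # joules
print(f"impact energy ~ {kinetic_energy / 1000:.0f} kJ")
# -> on the order of 20 kJ, delivered over a small area of the wing leading edge.
```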
7:21 pm
so you're re-entering the atmosphere. you're going from kind of noncontinuum to continuum modeling. it's a challenging place to understand, and you're trying to figure out what could have happened. an important part of the analysis was to get a piece of aged reinforced carbon-carbon material. and what they found is that the aged properties depended on the number of re-entries, that the strength is degraded each time you re-enter. i think 41 of the 44 tiles on the shuttle were original to the shuttle, and each time they re-enter, oxygen penetrates the micropores and it weakens the strength properties. so sandia started to characterize. they managed to get small amounts of aged reinforced carbon-carbon and characterize
7:22 pm
the strength properties, and started to do some analyses of what failure modes could have happened. in the end, what they found is that a cubic foot of foam, roughly, hitting at 700 feet per second would break through and cause these to fail. they discovered this finally in march, so march of 2003, and it wasn't until july of 2003 that the experiments were done at southwest research institute which demonstrated it. and the thing that actually got my attention was this. i saw this on cnn. in my mind, yeah, we saw this a few months ago, but for those who are empirically driven, this was when the answer was obtained. and you could see a picture of
7:23 pm
the simulation at the same time of the foam hitting the wing and the experiment. it demonstrated that this was a failure mode. as the shuttle started to re-enter, the hot gas entered the wheel well and started to melt the inside of the wing, and it caused the catastrophic failure of the shuttle. but it was a place, again, where simulation tested the different scenarios. it showed that it wasn't the wheel well problem, which was originally thought to be the primary cause of failure; it happened through the foam hitting the wings. you had to characterize the material. it was a very complex set of simulations because, again, what you're asking is how did it fail, and you're not sure exactly what to measure. in 2006 we launched a satellite
7:24 pm
and it basically never made it into operation. it was launched from vandenberg and it was in kind of a polar, tumbling state for a couple of years. and we were approached to try and understand what we could do about this. it was a classified project, now declassified. the code name was burnt frost, and the issue was with what confidence could we provide the president that one could shoot this thing down -- confidence in a scenario in which you could shoot this out of the sky. the issue here was a large hydrazine tank. toxic. it was frozen. it's not a hard calculation to see, you know, just from
7:25 pm
the thermal considerations, that it wouldn't melt upon re-entry; it would simply pass through re-entry. and being uncontrolled, you can't steer it into the ocean. it goes wherever it goes. and so we were asked to try and understand this, and it was an interesting project over a couple of months. there's a movie here i'll run -- i'm not playing the music because i don't like it. the team put this together as a record of their efforts, but it has a couple of nice pictures in there, so i clipped it and put it in here. you might recall that in 2007, i think, the chinese shot down, or hit, one of their own satellites, and it was in a fairly high
7:26 pm
orbit. there are still pieces 540 miles up that we worry about. the question here is whether we could shoot down this satellite at a low enough point so that there wouldn't be debris left, that you would hit the tank. and the problem is, as the satellite is coming in in an uncontrolled way, you know, it hits the atmosphere and you don't know where it's going to finally hit. as it hits and it starts to change its trajectory, it accelerates dramatically. so if you wait too long, it's going too fast and you'll never hit it. if you hit it too high, you'll leave the debris in orbit. so there's a small window that you have to guess at -- do more than guess -- to try and understand whether there is a kill shot for the satellite. it was finally decided that at about 153 miles up one could do that. at first the simulations gave about an 80% confidence that
7:27 pm
this could be done in the window of time. i think the satellite made about 16 revolutions per day, and so you had a couple of tries to do it before it was too late. and basically the satellite looked like a hydrazine tank, a coke can, and then it had its solar panels. if you hit the coke-can part, it would be like a bullet through paper. it had no impact. you had to hit the tank, and you had to predict the telemetry. when it was done, it was done knowing what the shots were. it was known to be a kill shot. it was a place where, again, the initial estimate was 80% confidence. the decision at the top at the time was that's not good enough.
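A small sanity check of the orbital numbers above, as a hedged sketch: a circular orbit at roughly the 153-mile intercept altitude has a period of about an hour and a half, consistent with the "about 16 revolutions per day" figure. The circular-orbit assumption and the physical constants are mine, not from the talk:

```python
import math

# Circular-orbit period at ~153 miles altitude (the intercept altitude mentioned above).
MU_EARTH = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6        # mean Earth radius, m
altitude = 153 * 1609.34 # 153 miles in meters (~246 km)

r = R_EARTH + altitude
period_s = 2 * math.pi * math.sqrt(r**3 / MU_EARTH)  # Kepler's third law, circular orbit
revs_per_day = 86400 / period_s

print(f"orbital period ~ {period_s / 60:.1f} minutes, ~{revs_per_day:.1f} revolutions per day")
# -> roughly 89-90 minutes, about 16 revolutions per day, matching the speaker's figure.
```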
7:28 pm
let's continue working on this. when it could be done with 95% confidence, then it was done, and it was really a remarkable missile shot done from this aegis cruiser. but, again, it was kind of a time-urgent problem. it came up, you know, by surprise. we have tools. we have people who understand satellites. we have people who understand thermal mechanics, who understand failure, characterization, codes, all of this. you have to somehow grab all of this, put it together, and try and see, can you actually address this question. fukushima was, again, a problem of this quality. i remember this again pretty vividly. we were just watching it on tv in the office that
7:29 pm
morning trying to figure out what this meant without having a sense of the reactor facilities yet. it was simply about the tsunami itself, which was devastating. one of the things the department does is nuclear emergency response, so it is a place where we have the ability to send things, robots, into harsh environments. air sampling. something borne out of the nuclear testing days is atmospheric modeling, because we know quite a bit about where radiation goes, and so there are still many resident skills, at livermore, that can be used to monitor. we were brought into this in a couple of ways. one, for emergency response, including teams here at oak ridge who were called to task to
7:30 pm
help, but the questions that arose, that came to us in part, were the following: what is the danger? how bad can it get? at any given time there are 5,000 to 6,000 student visas in japan. every year there are 500,000 to 600,000 u.s. tourists there, u.s. military on bases, so there is a large u.s. population there, and a question that comes up is, do we evacuate u.s. citizens? there was going to be a midday meeting in tokyo, which meant a meeting in the middle of the night for us. what can we add to this conversation? a call went out to the livermore director to mobilize the center. one of the issues is what are
7:31 pm
the conditions? you have to capture what is coming out and how much. the initial conditions, i would say, were not well known at the time, but the questions were significant. if you decide to evacuate u.s. citizens, it's a logistics problem. how do you get all the people out of narita? whose airplanes? it's not a simple thing to do if you decide there are citizens at risk. people also wanted to know, well, what does it mean to people on the west coast of the united states? so for specific u.s. interests there were a lot of questions that we cared about quite a bit. what are dose rates? which isotopes? things of a higher degree of refinement. the initial estimate from the simulations that were done was that tokyo was not at risk and we did not have to worry about that.
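To give a flavor of the kind of atmospheric-dispersion calculation behind such dose estimates, here is a minimal textbook Gaussian plume sketch. It is not the model the national labs actually used for Fukushima (their emergency-response tools are far more sophisticated, with real wind fields and terrain); the source term, wind speed, release height, and Briggs-type dispersion coefficients below are all illustrative assumptions:

```python
import math

def plume_concentration(x, y, z, Q, u, H):
    """Ground-reflected Gaussian plume concentration (g/m^3 if Q is in g/s).

    x: downwind distance (m), y: crosswind offset (m), z: receptor height (m),
    Q: emission rate, u: wind speed (m/s), H: effective release height (m).
    Uses rough Briggs rural coefficients for neutral (class D) stability -- an assumption.
    """
    sigma_y = 0.08 * x / math.sqrt(1 + 0.0001 * x)
    sigma_z = 0.06 * x / math.sqrt(1 + 0.0015 * x)
    crosswind = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection term
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * crosswind * vertical

# Illustrative numbers only: 1 g/s release, 5 m/s wind, 20 m release height.
for km in (1, 10, 100):
    c = plume_concentration(km * 1000, 0, 1.5, Q=1.0, u=5.0, H=20.0)
    print(f"{km:>3} km downwind, plume centerline: {c:.2e} g/m^3")
```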
7:32 pm
but i have to say, it's not easy to do these kinds of scientific problems through the conventional way of peer review. you know, you can't pull together a team of your best people in the middle of the night and say, hey, you haven't met each other before, but why don't you work together and answer this question in an hour. so the question is, how do we do that? how do we become more responsive to harnessing the skills we have in this country in a way which can address these questions, which seem to come up almost annually? there's almost always something that comes up where science can inform. this was a case where we did quite a bit of air sampling and air modeling. we did quite a bit of support for japan, and i think there was a very positive story that came out of this, understanding again what happened at the site and what it means to japan, and then what it means to u.s. citizens as well
7:33 pm
or to the continental united states. there are other places. we had been working for a good year or so on trying to look at governance models -- how we work with other agencies, how they can come and partner with us at our national laboratories to solve some of their interesting problems. what is the way we can engage other agencies to answer their strategic issues using tools like we have at oak ridge or other national laboratories? we had had a conversation with janet napolitano on december 18th on this, you know, saying that the partnership model is part of our effort to develop a stronger strategic relationship
7:34 pm
between agencies, which turned out to be timely in a number of ways. one week later, on december 25th, 2009, there was mr. abdulmutallab, the underwear bomber, who was stopped from igniting the petn that he kept in his underwear on that flight. it started a relationship between the department of energy and the department of homeland security and aviation security to try and answer some of these questions: how do we protect against this? could this happen again? what are really the risks of this happening? and it was an interesting problem. for this particular type of issue, it's really a competition of different effects going on, of all the elastic
7:35 pm
energy stored in the airplane and whether you can dissipate it before catastrophic cracks propagate through the skin and the ribs of the aircraft. and we worked on this with them for some time. i would have to say it has been a valuable thing. i can't say too much more about this other than there are a lot of interesting issues in aircraft security here, and there was quite a bit learned from this. but it was a place where, again, we had to become aviation experts to answer a number of these questions, because it was time urgent to figure out what the risks are out there and whether we have to protect against different kinds of threats than we expected on airplanes. i have a few other examples, but
7:36 pm
let me perhaps go towards simulation. you know, i want to say a couple of things about the tools here before i get to some summary points. we talk about simulation as something simple, but for those certainly here at the laboratories who program on these, it's a tour de force. a computer is maybe 100 or 200 racks of systems, each rack weighing more than a car. they suck up remarkable amounts of energy. they have millions of processors that you have to somehow work across to solve a single problem. it takes teams of experts and people to attack these from across a broad set of disciplines. it is extremely nontrivial to deliver any of these kinds of simulations or products.
7:37 pm
it really takes champions. these systems take megawatts of power. i remember at livermore when we were starting up white -- i think it was the white supercomputer -- it runs at about 4.7 megawatts when it's working. when it's idling it's about 2.5. so when they were running the first simulation, running the first benchmark -- something that jack likes very much, the linpack benchmark that his organization and the top 500 track annually and have done for many years -- someone pressed return, it started, and suddenly there was, you know, a 2.5 megawatt spike in the local power grid, which is equivalent to, you know, a couple of thousand homes. there was a call from the local power company asking what was going on, because someone had started a calculation.
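A quick sketch of that "couple of thousand homes" comparison; the per-home figure below is an assumed average household draw, not from the talk:

```python
# Sanity check of the "couple of thousand homes" comparison for the power spike.
spike_watts = 2.5e6          # the ~2.5 MW step the speaker describes
avg_home_draw_watts = 1.2e3  # assumed average continuous household draw (~1.2 kW)

homes = spike_watts / avg_home_draw_watts
print(f"~{homes:.0f} average homes")  # -> roughly 2,000 homes
```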
7:38 pm
these are not just computers; they're very complex things that you really have to think about in different ways. when we had the first, you know, large system at sandia, up there in the top corner, 10,000 processors, i think it was about the size of a basketball court. there was later a chip by intel which has effectively the same computational power -- a picture of a colleague there, raj hazra, holding it in 2011, the equivalent power of that machine from 1996. we're looking ahead at the technology, keeping in mind that portable electronics, basically a $600-plus billion portable electronics market, can't be steered very much by federal investment, but perhaps strategic investment at the margins can still drive, you know, quality computers for the problems that we need to solve in years to come.
7:39 pm
it's a challenge. the exascale initiative is kind of a code word for what the department is struggling with right now, but the kind of system we're looking at would probably be, in the best case, a 20 megawatt type of system. 10 to the 18 calculations -- operations per second -- is the notional goal for this, but we need them to be functional and useful.
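Dividing those two notional targets gives the energy-efficiency challenge in a single number; this is just arithmetic on the speaker's figures:

```python
# Implied energy efficiency of the notional exascale target mentioned above.
ops_per_second = 1e18  # 10^18 operations per second
power_watts = 20e6     # ~20 MW power budget

ops_per_joule = ops_per_second / power_watts
print(f"{ops_per_joule:.0e} operations per joule (~{ops_per_joule / 1e9:.0f} gigaflops per watt)")
# -> about 5e10 operations per joule, i.e. roughly 50 gigaflops per watt.
```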
7:40 pm
let's see. i think, since i have a tendency to talk a little bit too long, let me go to thinking a little bit about the future. so, going back to where i started. you know, there isn't a natural place where anyone stops to say, well, what is simulation? what can it do for us? who should be working on this for us? often we end up in crises, and we end up in places where we are responding to something and doing the best we can with what we have. it's important to start looking ahead and asking, you know, where could we add value? i just take a couple of things that the president has mentioned recently -- the climate action plan, certainly his nuclear security agenda from a number of speeches, and the nuclear posture review -- places where you could imagine there could be a role for simulation in a substantive way. the question is, well, how do we do that? who's going to do it? it's easy to say it. the question is, who does what? if there isn't a central place to think about this, it's incumbent on people, on those invested in the outcomes, to think about that and try and make things happen.
7:41 pm
decisions are typically not made by scientists. i don't say that is good or bad; i simply observe that the kinds of questions we're faced with often are not scientific, and the problems are often not well defined. we want to know what it means to people. we want to know what it means to the economy. we want to know very big societally based questions. when you try and dissect these, they typically cover a number of different disciplines, a number of skills, many fields of specialty. rallying the right people to try and address them can be nontrivial, and it can be somewhat unnatural. i would say it doesn't overlay on university structures very well either. there isn't a natural place to go to try and address some of these questions. and peer review is typically not available. you don't have time to sit back with your team of experts and
7:42 pm
get your panel together and go through and figure out whether what you've done is right. if you're trying to understand whether you need to evacuate people, you just don't have time for that. so the question is, how do you build in a sense of pedigree, of quality, of prediction, so we don't end up doing something foolish? i think that is very nontrivial. i think it's a real problem, and it requires scientific attention, because we typically stop at error bars on a number of interest to us, and that doesn't translate to the average person and to the kind of meta questions that are emerging now. simulation is certainly showing its value. we find it in more and more places, largely because there are champions out there who pull it along and know when to inject it, but it still isn't a natural place to go. many of the problems we get -- i mean, we don't have oil spill
7:43 pm
simulation experts, you know, that we call on for, you know, underwater crises. we don't have the experts for -- pick your topic -- and we can't afford to constitute them for every problem we have, so we have to figure out how to create a more responsive infrastructure from the tools and people we have. so i think there's a lot to offer. i think there is a lot of promise, but we're going to have to figure out, again, how to transmit the degree of confidence in anything we do, perhaps understanding how we can be more responsive. there are washington issues there, i'm sure, but there are places where universities could see themselves. there are places where national labs could see themselves, knowing when to inject
7:44 pm
this into a conversation and how you do it, even asking, you know, are these the right questions to be asking. progress here, success against the next set of threats, requires urgency among a broader set of scientists -- social scientists, to health, to the physical sciences and mathematics -- and certainly industry and labs and government. i don't see the number of issues diminishing, i see them growing. i see the complexity increasing. i see the kinds of things that we are expecting people to answer becoming a little more refined, and i think we have to be prepared for that. but i think there's a very positive story on what this
7:45 pm
country does in simulation and how we turn it to these problems, and i hope i've made at least some impression that this is of interest. so thank you again for your time, and i'm happy to take questions. [ applause ] >> we are recording these presentations, and we'd like the questions and answers to be done through the microphones. we have microphones that we can pass around. we're also going to have a reception after the question and answer period so you can ask dimitri questions. anybody want to go first? kelly? tony, do you have a question? >> i enjoyed the presentation, especially the nasa thing since i was working there. do you think we could have saved
7:46 pm
columbia? >> so that's -- >> a lot of people at nasa thought we could have. >> that i don't -- i wouldn't consider myself qualified to answer that. i think even if, you know, we had in a timely way discovered what the issue was, mitigation is an entirely different problem. you know, i -- i couldn't answer that with any confidence. i did think about that question though. i thought, well, this is the right -- the next question to ask, but i don't have anything in there that's good to help you with that. >> so, coming back to the nasa example. i think one of the interesting pieces there was you talked about the crater code being used outside of its valid parameter set, but you also raised the point that that was -- there was a human factors piece associated
7:47 pm
with that in that the skill set or the knowledge or the depth of knowledge at least of what was actually in that code and how it was developed was lost. as we move now into a realm where we're no longer talking about codes that are thousands of lines long but codes that are millions if not trillions of lines long, how do we deal with that problem? >> yeah. so this is really at the heart of, you know, a field called, you know, uncertainty quantification. there isn't a good answer. it is a place -- we are working with universities, you know, around the country to try and understand. certainly there's work at our laboratories, but you're exactly right. i think an experienced code writer maybe will remember where all the right punctuation is in the code for maybe 50,000 lines of code, but when you have 100 million lines of code, it is harder to figure out what is
7:48 pm
really in there, and so there are all kinds of sources of uncertainty that come into a complicated code, from the use of material properties or data, to assumptions and models, to places where there's a mix of empiricism and calibration. there is no methodology to propagate uncertainty through the entire spectrum of sources of uncertainty. and i think even qualifying the potential sources of uncertainty is hard. so, it's a place where we need work. i don't think there is a good answer for that, but it's really where we have to look, because ultimately someone's going to make a decision, and you have to have a simple, distilled amount of information on the degree of trust you have in what came out.
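One common, if partial, approach to the problem described above is sampling-based uncertainty propagation: treat the uncertain inputs (material properties, calibration constants) as distributions, run the model many times, and look at the spread of the output. A minimal sketch, with a toy model standing in for a real multiphysics code and input distributions that are purely assumptions:

```python
import random
import statistics

def toy_model(strength, load):
    """Stand-in for an expensive simulation: a simple safety margin."""
    return strength - load

def propagate_uncertainty(n_samples=10_000):
    """Monte Carlo propagation of input uncertainty through the toy model."""
    outputs = []
    for _ in range(n_samples):
        # Assumed input distributions -- in practice these come from material
        # data, calibration studies, and expert judgment.
        strength = random.gauss(100.0, 10.0)  # e.g. material strength
        load = random.gauss(70.0, 15.0)       # e.g. applied load
        outputs.append(toy_model(strength, load))

    mean = statistics.mean(outputs)
    stdev = statistics.stdev(outputs)
    prob_failure = sum(1 for o in outputs if o < 0) / len(outputs)
    return mean, stdev, prob_failure

if __name__ == "__main__":
    mean, stdev, p_fail = propagate_uncertainty()
    print(f"margin: {mean:.1f} +/- {stdev:.1f}, estimated failure probability ~ {p_fail:.1%}")
```

This only captures parametric uncertainty; it says nothing about model-form error or the unknowns buried in a hundred million lines of code, which is the harder part of the question being raised here.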
7:49 pm
you know, some of these simulations look convincing, with great color maps and great details and meshes and so forth, but, you know, it's a cartoon, and so the question is to what extent is there confidence and knowledge behind it? and i would say that it's still an open scientific question. i think that is a problem that needs to be worked on in years to come. it's really at the heart of complex simulation. >> good. >> dimitri, you said something earlier that really affected me, i'm sure it did everyone here, when you said these things happen annually. obviously there's a pressing need to look ahead. so, with regard to how we set things up to better prepare to respond, you know, such as calling up livermore and harnessing a team, what are your
7:50 pm
thoughts on that? how can we set up a better framework given, as you said, we can expect these things to happen, unfortunately, yearly? >> yeah, good question. you know, building in responsiveness is hard because of how we support people and fund people. everyone is busy, and it requires people who want to be involved in this, so we need to know who is out there and whether they're willing, at the ready, to help. and when something urgent happens, you go through in your mind, who do we have? and we have to think a bit more
7:51 pm
broadly about the tools in this country and the experts in this country, and there is probably a next step there in understanding what that means, or whether there are barriers in funding or regulations that we would have to change. you know, with the oil spill, the lab director from sandia, tom hunter, left his post for months. so finding the subset of those inclined to dedicate themselves
7:52 pm
to these problems, whether short or long term, is part of understanding that. but we do need a bit of an inventory so that we can be more responsive and understand whether there are barriers that have to be changed through legislation or policy changes or simply communication. wikileaks was another great example -- i didn't go into it, i skipped over it. it was also a nice place where suddenly there's a massive data set, on the order of a million kind of mixed-media things. and what's in there? aside from doing any keyword search -- is my name in there? is our department in there? -- there's something a little more sophisticated you can do. there are graph analysis methods, and the things that we looked at are: what is the content, what is the knowledge involved, what are the relationships of information, what do you distill from this
7:53 pm
when you look at it in its entirety. there are interesting and complex problems in there; sometimes you need materials science people, sometimes you need algorithms and graphs, but ultimately you need computer people. it's a core base, but it's a mix of things you would need at the ready for any of these.
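A minimal sketch of the kind of graph analysis being described: link documents to the entities they mention, then use centrality over the whole graph to find what ties the collection together, rather than searching keyword by keyword. The toy data and the use of networkx here are my assumptions, purely for illustration:

```python
import networkx as nx

# Toy stand-in for a large mixed-media document set: document -> entities mentioned.
documents = {
    "cable_001": ["embassy_a", "official_x", "project_p"],
    "cable_002": ["official_x", "official_y"],
    "memo_017":  ["project_p", "official_y", "site_s"],
    "memo_042":  ["site_s", "official_x"],
}

# Bipartite graph: document nodes connected to the entities they mention.
graph = nx.Graph()
for doc, entities in documents.items():
    for entity in entities:
        graph.add_edge(doc, entity)

# Centrality over the whole graph highlights entities that tie many documents together --
# relationships you would miss with a one-document-at-a-time keyword search.
centrality = nx.degree_centrality(graph)
entities_only = {n: c for n, c in centrality.items() if n not in documents}
for entity, score in sorted(entities_only.items(), key=lambda kv: -kv[1]):
    print(f"{entity:12s} {score:.2f}")
```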
7:54 pm
>> i'm going to take the microphone for a second for a follow-on comment that i hope you can elaborate on. there was a goodyear component to your presentation -- i know that goodyear worked with sandia to sort of model the next-generation tire. and we need these computer folks, the jacks and the tonys and the gregs of the world. with our corporate engagement strategies and our partnerships at oak ridge, many of our partners are coming to us asking to simulate experiments ahead of spending a lot of time and money confirming those findings. that speaks to the fact that, although this is not routine now, it will be more routine in a future view. do you see that coming within that realm of r&d? >> i hope so. we have a few examples of where things like that have worked, where the partnerships with businesses have worked. my personal characterization of the goodyear story was, it required the company to be in crisis before it forced adoption as a new paradigm for tire development. in the end, it led to the top-selling tire in germany, you know, their really marquee product. the president of american tire championed this when goodyear was against the ropes. we have found in talking with
7:55 pm
different companies, you often find the people doing simulation, and those engaged in it want to inject this new way of thinking into the business model. typically what we hear is, they don't care: you're too expensive, what have you done for me now, we need this by next quarter. and it showed value in 2000 -- it saved the company, which was the last global u.s. tire company. but it took that long; it was beyond thinking through quarterly profits, and so that is hard to do. when companies are at risk, you know, there are a few stories out there -- well, that's when
7:56 pm
leadership looks at all the options on the table, and they're willing to change the model dramatically. i don't know of too many examples where it happens when things are going well. >> i think we talked about the outcome of the models, but the input is very important, of course. can we trust the data that we have? and of course you need to draw on data in the public sector and the
7:57 pm
private sector and really document them, as you mentioned with the computer code that was not applied to what it was intended for and we didn't know about its limitations. i think it could be done more. what do you see could be done in terms of making the data more useful, so you can trust them better? >> one way to address that is we need to do what is useful to do in the first place. i don't think we should have standing armies at the ready for whatever
7:58 pm
might happen, because we'll never get it right and we'll waste money, and you don't want people idled. so given that you can't anticipate what's going to happen, the move toward simply doing better at what we need to do anyway is probably what we should be doing. if there are places where we can improve standards, if we can improve the quality of what we think we're doing, the methodologies, that should be something that we do. but the only other thing we can do is structure ourselves so that we can then be responsive, to draw upon what we have learned and throw it at the next crisis. >> before we wrap up and thank
7:59 pm
dimitri for his excellent presentation, we have guests from the department of energy, from the national nuclear security administration, and guests here from oak ridge national lab, so i'm delighted that our friends in the community were able to join us today. i'll catch up with you at the reception; we will have a reception shortly. i'm very blessed with the fact that i have a wonderful colleague, susan valentine, who helps me organize these things, and the conversation today was co-hosted by the baker center and the institute for nuclear security, with their help. and thank you for coming on a very cold day that's warming up, and if you would like to join us after this for some refreshments and more questions with dimitri -- thank you for your time here.
8:00 pm
>> coming up on c-span3, we're going to bring you a number of panels from the 2014 ideas festival in new york. we'll start with the discussion on technology and social media. then we'll hear from a nasa scientist working on mars exploration. later, cancer biologist andrew hessel looks at cancer research. and then we'll look at virtual currency. this weekend on the c-span networks, friday night at 10:00 eastern on c-span, a conversation with retired justice john paul stevens, and the founder and former chairman of microsoft, bill gates, on the ebola outbreak in west africa. and