
Key Capitol Hill Hearings  CSPAN  March 19, 2014 12:00pm-2:01pm EDT

12:00 pm
>> [indiscernible] [applause] [captions copyright national cable satellite corp. 2014] [captioning performed by national captioning institute] >> let me just finish because these people yelling "no-fly zone" -- i am for a no-fly zone.
12:01 pm
>> vice president joe biden has wrapped up a two-day trip to lithuania and poland where he sought to reassure nato allies in the region that the u.s. will defend them against any russian aggression. he told a pair of baltic leaders that "we are in this with you together." he spoke after russia's president declared ukraine's crimean peninsula part of russia as militiamen took over ukrainian headquarters in crimea. ukraine is not part of nato. the united nations security council is meeting today on the russian issue. meanwhile, the european union is considering economic sanctions. the eu meeting is scheduled for tomorrow and friday in brussels. take a look at our live coverage here on c-span at 1:00 p.m.
12:02 pm
eastern. a heritage foundation discussion on the federal reserve and monetary policy that will be followed by new federal reserve chairman janet yellen holding her first meeting today as head of the fed open market committee. she will talk to reporters after that meeting; the news conference is scheduled to begin at 2:00 p.m. eastern. at 5:00, we will hear from two former national security advisers, one who served under president carter, and one who served under president ford and the first president bush. next up, a discussion of science policy and the use of simulation to solve problems and national threats. the chief scientist for the national nuclear security administration recently addressed the university of tennessee's howard baker center for public policy in knoxville. this is an hour. [applause]
12:03 pm
>> thank you. that was an overly generous introduction. the hospitality here at the baker center has been remarkable. it's a wonderful place, and i'm happy to be here. as an academic who ended up in washington for some reason, i wanted to give you my personal take on computational science, what we do, and kind of how i view this. i think it is an interesting story. i hope you will find it interesting, too. as a beta test, i think this can fail and still be part of your learning, so we could look at it that way. i have, i guess, some framing thoughts on computational science. i guess i should project this. let's see.
12:04 pm
there we go. there are just a few topics i would like to talk to you about today: tell you a little bit about how i think about it, where i see the challenges, some examples of what we've done and how we use it, and where we are headed. depending on time, i will cover some of these in different ways. there is no easy or simple way to explain how we apply simulation these days. certainly from popular culture we have a sense that simulation can do remarkable things. you only have to go to the theater or look at all the content out there where virtualization is part of almost anything you see these days, but
12:05 pm
when you have to temper it with reality and make decisions and there are consequences to those decisions, it's a little bit different, and i wanted to tell you a little bit about that world. the degree of trust is still emerging. there is not a unique way to characterize how well we think we are predicting something and how much we trust it, and there's a lot of work to be done there. there are some places we do it by statute and other places where you really need champions and advocates at the right time to say that these tools could be brought to bear, and i hope to give you a few examples of that. really, trust -- there is not an easy way to explain why you trust simulation or why you do not. for everybody, it is somewhat experiential.
12:06 pm
there is a personal aspect to that, and you see it among scientists, and i see it in washington among scientists. some believe it and some do not. again, you can trace this back in many ways. you can trace it back 500 years to descartes and bacon and deductive and inductive reasoning and different ways to approach the world. either you believe empirically that until you test it, you cannot do the next step, or you believe that you can set up some intuitively derived set of premises, and from that build your understanding. those are two lines of thought that exist today. you will find a collection of scientists, and some will say that unless you do the experiment, they do not believe anything you predict. again, the whole idea of trust and when you call upon simulation to help you is still deeply rooted in personal issues
12:07 pm
that are hard to capture, and i hope you keep that in mind as we go through some of the examples today. i will try to cover a collection of different topics and try and show you some of the commonality of what is behind these, and i hope you find it interesting. prediction is really part of our everyday life. you deal with it, whether you are trying to figure out what is going to happen in march in the ncaa tournament, or the world cup in rio, or the sochi olympics. prediction comes in many places. you predict things by yourself. i would say that among all the predictions you do, the consequences are probably fairly limited. the consequences of making a bad prediction are typically not severe. maybe you will get wet because
12:08 pm
you did not expect it to rain. maybe you did not fill out your bracket in march very well and did not win the pool. but i would say that is not a high-consequence type of decision. today, we are turning to simulation quite a bit more to help us -- whoops, i'm sorry about that -- to help us understand a number of types of more serious problems, more societal problems, and i view them as being in two categories. as an academic, i resonate -- and certainly i resonated in my previous career -- with the class of output-based types of simulations. this is the kind of problem that a scientist poses. it is typically well-defined. you know what to measure.
12:09 pm
in scientific parlance, you know the degrees of freedom. you know what to measure. you have a theory, and you are trying to solve it. then it is an exercise in mathematics, in controlling your approximations to solve that, and you have something you want to measure. maybe you are studying protein conformation, or maybe you want to measure the mass of the proton. you can pick your quantity, but it is scientifically precise. you know the degrees of freedom, and typically, it's a matter of controlling the approximation when you put it on the computer. it also has the benefit that you are the specialist. when you solve that kind of problem, you are the master of that domain, and you control it. the other class of problems that i see -- let me call them outcome-based -- are the ones that i find more interesting these days. these are the ones that are technically imprecise.
12:10 pm
they are ill-posed. they are integrally societally based. they are things that impact people. you want to know why things are going to happen and why they are important to you. often, you do not know what the degrees of freedom are. you do not know where to start. you might not be able to control the models or the approximations. you do not know how precise your answer is, but that is the place where we need the most help. typically, these are multidisciplinary kinds of problems where you have to work with other people. you have to ask questions outside your comfort zone, and they are hard. i think discovery lies there in general, and this is the class of problems i would like to illustrate today. in this second class of outcome-based problems, we do not ask scientifically precise questions, but the thing we care about is what you have to do and when.
12:11 pm
you know, what is your confidence that you can actually help? what does it mean? what does it mean to you? what happened? what are the risks? what are the risks it might happen again? how do you bring science into answering questions that are not scientifically precise? where do you start? how do you do that quickly? what tools do you have at your disposal to help inform that? often, you do not even know if you are asking the right questions. often, you have to ask if the right people are asking the right questions. in any case, are you even positioned to answer them? when you think about societal decisions -- i will talk about fukushima, the underwear bomber, the satellite shootdown, the oil spill -- you know, collections of things that impacted people where science helped inform the
12:12 pm
decisions to be made -- again, you know, real problems, real issues, often time-urgent, but the quality of the question you want to answer through simulation is like that, so it is not precise. what is the measure of "what does it mean?" the average person wants to know what it means for them and how it will impact their life. whether you will have electricity, or whether you can get gasoline or groceries, or is your lifestyle impacted. that is the societal issue you are concerned about, so how do you manage the needs of science with the imprecise nature of questions of this quality? i want to mention maybe one additional quick digression.
12:13 pm
at the same time that we are interested in solving these problems, we have a changing world. there was a piece a few years ago comparing the ipad 2 to the cray supercomputer, and with the level of computing you carry in your pocket, it's remarkable. if you project out 10 or 15 years, the kind of timescales departments have to think about for planning big infrastructure, what is the infrastructure we are thinking about? what are the tools we have to have? how do we work through this so the country can be responsive to answer these types of questions? today, there's a growing set of questions we worry about, whether it is energy or security or climate, health, critical infrastructure. there are more and more places where we think that there is a role for computational science to inform us in decisions because many of these things
12:14 pm
cannot be tested or instrumented or done before it happens, so these are places where virtualization is an important step in characterizing the risks and decisions we might have to make. among the kinds of problems we might have, there are again two categories -- data-rich and data-poor. i just want to distinguish those to keep them in the back of your mind. there are some problems -- sensor data, weather data, places where you have nothing but data, and your problem with simulation is to figure out what it means, causes and effects, correlated signals. that is not always easy. solving the inverse problem from a rich set of data is a very hard problem in trying to figure out what really impacts what.
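A minimal sketch of why that inverse problem is hard, with everything here invented for illustration (the matrix, the noise level, and the Tikhonov weight are assumptions, not anyone's actual analysis): when two candidate "causes" leave nearly the same signature in the data, noise alone decides which one a naive fit blames, and a stable answer requires imposing some regularizing assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ill-posed inverse problem: recover source strengths s from noisy
# sensor readings d = G s + noise, where the two columns of G (the candidate
# "causes") are nearly collinear -- a caricature of correlated signals.
x = np.linspace(0.0, 1.0, 50)
G = np.column_stack([x, x + 1e-3 * rng.normal(size=50)])
s_true = np.array([1.0, 2.0])
d = G @ s_true + 0.01 * rng.normal(size=50)

naive = np.linalg.lstsq(G, d, rcond=None)[0]            # unstable: the noise picks the answer
alpha = 1e-2                                            # Tikhonov (ridge) weight -- an assumption
ridge = np.linalg.solve(G.T @ G + alpha * np.eye(2), G.T @ d)

print("true :", s_true)
print("naive:", naive)   # wildly different splits of the "blame" fit the data about equally well
print("ridge:", ridge)   # stable, but only as trustworthy as the prior you imposed
```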
12:15 pm
there are problems that are data-poor. certainly a nuclear weapons program is an example, but i would say the supernova work is also data-poor. tony would certainly love to instrument the next supernova beforehand and get all the data you want, but you cannot do that. if you get data, you will be happy, but you can only get a very limited set of measurements, and making sense of that is really model-dependent. among the classes, it's not just simulation broadly. there are different qualities of questions we ask. there are different kinds of data and different assumptions we have to make on the models we need. here is a sense of some of the things that we have turned to simulation for in the past few years -- certainly while i have been in washington.
12:16 pm
i thought it might be a little illuminating. i would be remiss, being at oak ridge, also with y-12, in places where the department invests heavily, not to say a little bit about the nuclear weapons program. i think it is an interesting tour de force of simulation. i just want to capture a couple of things for you for that reason. in the bottom corner is just kind of a cartoon illustration of the kind of complexity in how we understand weapons now without testing them -- we stopped testing in 1992. you know, in the record year this country did 98 nuclear tests. the integrated amount is 1054
12:17 pm
tests over our history -- kind of our legacy. but the problem scientifically is really multi-scale. it starts at the nuclear scale, at the scale of nuclear interactions for fission and fusion processes, and it spans the size of the weapon, the meter size, and even beyond. it's more than a 15 order of magnitude problem. for those in washington, you can think about it in terms of the federal budget, which is about $3.5 trillion -- it's like managing the federal budget at the 3.5-cent level.
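A rough check of that analogy, using only the numbers quoted above:

```latex
\[
\$3.5\ \text{trillion} \;=\; 3.5\times 10^{12}\ \text{dollars} \;=\; 3.5\times 10^{14}\ \text{cents},
\]
```

so resolving the budget to a single cent spans roughly 14 orders of magnitude, comparable to simulating consistently from the scale of nuclear interactions (about $10^{-15}$ m) up to the meter scale of the full system.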
12:18 pm
the question is how you qualify the trust in the predictions you make at the different scales -- to say that you have confidence in where the federal budget is going and that you can say where it will be next year. it's a challenging problem, but it is a place where the laboratories have excelled. anyway, there are a lot of questions we ask these days, just at the bottom of that slide. we want to know whether they are safe, whether we have more options to make them more secure. we need to know what other people are doing. we worry about terrorists and proliferation, and there are very broad questions we are starting to turn these tools to, but in view of time, let me go perhaps to more interesting things, or at least things that you might find more interesting. i remember february 1, 2003. i had not been in government very
12:19 pm
long. i was returning from a conference in san diego. at the end of the terminal is a little round area that has the gates in the middle, and i was waiting to board. i looked over at the television sets and i was watching the reentry of the space shuttle, and i could not quite make sense of it. i was looking at that, knowing that the shuttle was passing overhead at the time, and there were three or so bright lights coming down. i could not tell what that was, and it was the shuttle breaking up on reentry, but it was kind of a moment that is etched in my mind. one of those things where you are staring -- you think you know what you're looking at and you really have no idea what you are seeing. on the monday after that, the national laboratory was already in touch with nasa to ask what they could do to help with the kinds of tools that were available. was there something they could do to assist in understanding the problem? it was a post
12:20 pm
hoc issue, but nevertheless something we needed to do. nasa understood, as you could see in the video, that foam had come off. they took some high-resolution movies -- see if i can get this -- and they were able to see the foam coming off, and if you calculate the relative speed, it's about 700 feet per second that this block of foam came off and struck the shuttle. at the time, they had a tool. you can read about it. there's a very good report from the columbia accident investigation board that went through this. very thoughtful and detailed report. one of the things they found -- the model, the tool that they had, which really had its
12:21 pm
genesis in micro-meteorite impacts in the 1960's, grew into their tool of choice in the late 1970's and 1980's, but it was used outside its domain of validity, and no one knew. those that understood where you could use it were no longer there. there was not a sense of how predictive it was. it was viewed as a conservative tool, and it told you there was not a problem. the shuttle was on its 28th flight, so it was known that the foam hit it, and it was viewed that this was not a problem. we had a look at the problem. nasa certainly reached out to a number of places to do the analysis. one of the things that they found is that the strength properties of the front end of the wing -- what i have in blue is a picture of the simulation
12:22 pm
of the reinforced carbon front ends of the wings that were used to understand what happened -- anyway, they started a detailed analysis of failure modes. the question here is what went wrong. what is the failure mode? you are reentering the atmosphere, going from non-continuum to continuum modeling, trying to figure out what could have happened. an important part of the analysis was to get a piece of aged reinforced carbon material, and they found the aged properties depended on the number of reentries, that the strength is degraded each time you reenter. i think 41 of the 44 tiles on the shuttle were
12:23 pm
original. each time they reenter, oxygen penetrates the micro pores and reduces the strength properties. they started to characterize -- they managed to get small amounts of aged reinforced carbon, characterize the stress and strength properties, and started to do some analyses of what failure modes could have happened. in the end, what they found is that a cubic foot of foam, roughly, hitting at 700 feet per second would break through and cause these to fail. they discovered this finally in march. march of 2003.
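A back-of-the-envelope check of what those two quoted numbers imply, with an assumed density for the tank insulation foam (a very light material, a couple of pounds per cubic foot); this is illustrative arithmetic, not the analysis that was actually run:

```python
# Rough kinetic-energy check of the quoted impact conditions (assumed numbers).
FOAM_DENSITY_LB_FT3 = 2.4      # assumed external-tank foam density, lb/ft^3
VOLUME_FT3 = 1.0               # "a cubic foot of foam"
SPEED_FT_S = 700.0             # quoted relative impact speed

mass_kg = FOAM_DENSITY_LB_FT3 * VOLUME_FT3 * 0.4536   # lb -> kg
speed_m_s = SPEED_FT_S * 0.3048                        # ft/s -> m/s
kinetic_energy_j = 0.5 * mass_kg * speed_m_s**2

print(f"{mass_kg:.2f} kg at {speed_m_s:.0f} m/s -> about {kinetic_energy_j/1000:.0f} kJ")
# Roughly 25 kJ: comparable to a small car rolling at city speed, delivered
# to a thin reinforced-carbon panel in a few milliseconds.
```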
12:24 pm
it was not until july of 2003 that the experiments were done at southwest research institute, which demonstrated it -- and the thing that actually got news attention was, i remember seeing this on cnn. in my mind, we saw this a few months ago, but for those who are empirically driven, this was when the answer was obtained. you could see a picture of the simulation at the same time as the foam hitting the wing, and it demonstrated that this was a failure mode. as the shuttle started to reenter, the hot gas entered the wheel well and started to melt the inside of the wing, and it caused the catastrophic failure of the shuttle. but it was a place again where simulation tested the different scenarios. it showed that it was not the wheel well problem, which was originally thought to be the primary cause of failure -- that it happened through the foam
12:25 pm
hitting the wing. you had to characterize the material. it was a very complex set of simulations because, again, you are asking how it failed, and you are not sure what to measure. in 2006, we launched a satellite, and it basically never made orbit. it was in kind of a cold, tumbling state for a couple of years, and we were approached to try to understand what we could do about this. it was a classified project -- now declassified. the program name was burnt frost. the issue was, with what confidence could we provide the president that one could shoot this thing down -- what were the
12:26 pm
modeling confidences of the scenario in which you could shoot this out of the sky. the issue here was a large hydrazine tank, toxic material. it was frozen. it is not a hard calculation to see just from the thermal considerations that it would not melt upon reentry. it would pass through reentry, and being uncontrolled, you cannot steer it into the ocean. it goes wherever it goes. we were asked to try to understand this. it was an interesting project over a couple of months. there was a movie -- i'm not playing the music because i do not like it, but the team put this together as kind of an homage to their efforts, but it
12:27 pm
has a couple of nice pictures, so i clipped it and put it in there. you might recall that in 2007, the chinese shot down, or hit, one of their own satellites, and it was in a fairly high orbit. as a consequence, there are still over 2000 pieces of debris in orbit at about 540 miles up that we worry about. the question is, what if we could shoot this satellite down at a low enough point so there would not be debris left? as the satellite is coming in in an uncontrolled way, it kind of skips over the atmosphere, and you do not know where it is going to hit. as it hits, it changes its trajectory and accelerates dramatically. if you wait too long, it goes too fast and you will never hit it.
12:28 pm
there is a small window where you have to guess -- or do more than guess -- to try to understand whether there is a kill shot for the satellite. it was finally decided that at about 153 miles up, one could do that. at first, the simulations gave about 80% confidence that this could be done in the window of time. i think the satellite made about 16 revolutions per day, so you had a couple of tries to do it before it was too late. basically, the satellite looked like a hydrazine tank and something that looks like a coke can. if you hit the coke can part, it would be like a bullet through paper. you would have no impact. you really had to hit the tank, and predict the telemetry.
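A quick consistency check of those orbital numbers from Kepler's third law alone, assuming a circular orbit and no drag (the real tumbling, decaying satellite is of course harder than this):

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0       # mean Earth radius, m

def revs_per_day(altitude_miles):
    """Circular-orbit revolutions per day at a given altitude (rough check only)."""
    a = R_EARTH + altitude_miles * 1609.344               # semi-major axis, m
    period_s = 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH)  # Kepler's third law
    return 86_400.0 / period_s

print(revs_per_day(153))   # roughly 16 revolutions per day, matching the quoted figure
print(revs_per_day(540))   # the higher debris orbit circles noticeably less often
```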
12:29 pm
when the shot was done, it matched exactly what the predictions were. it was known to be a kill shot. it was a place where, again, the initial estimate was 80% confidence. the decision at the top at the time was, that's not good enough. "let's continue working on this." when it could be done with 95% confidence, then it was done, and it was really a remarkable missile shot done from this aegis cruiser. again, it was kind of a time-urgent problem. it came up by surprise. we have tools. we have people who understand satellites. we have people who understand thermal mechanics, who understand failure and characterization and codes and all this, and you have to somehow grab all of this, put it together, and try to see if
12:30 pm
you can actually address this question. fukushima, again, was a problem of this quality. i remember this pretty vividly. we were watching on tv in the office that morning, trying to figure out what this meant, without having a sense of the reactor facilities yet. it was about the tsunami itself, which was just devastating. it is a place where we have the ability to send things, robots, into very harsh radioactive environments. it's a place where we can do air sampling. something born out of the old nuclear testing days is atmospheric modeling, because we
12:31 pm
cared quite a bit about where radiation goes, so there are still many resident skills that can be used to monitor. we were brought into this in a couple of ways -- one for emergency response, including teams here at oak ridge who were called to task to help, but the questions that arose that came to us in part were the following -- what is the danger? how bad can it get? at any given time, there are 5000 to 6000 student visas for u.s. students in japan. every year, there are 500,000 to 600,000 u.s. tourists, so there is a large u.s. population there, and the question that comes up is -- do we even evacuate u.s. citizens? there was going to be a mid-day
12:32 pm
meeting in tokyo, which meant a meeting in the middle of the night, and we had little more than an hour to figure out what we could add to this conversation. the call went out in the middle of the night to the livermore director to mobilize the center. you can say you want to model it and you have great atmospheric models, but you still have to try to capture what is coming out of it and how much. the initial conditions, i would say, were not well known at the time, but the questions were significant because if you decide to evacuate u.s. citizens, it is a logistics problem. how many airplanes, whose airplanes? it's not a simple thing to do if you decide that there are citizens at risk. people also wanted to know what it means for people on the west coast of the united states. for specific u.s. interests, there were a lot of questions that we cared about quite a bit.
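To make the source-term point concrete, here is a textbook Gaussian-plume sketch -- not the laboratory's operational atmospheric models, and with made-up release rates and Briggs-style dispersion coefficients assumed for neutral conditions. However good the meteorology, the predicted concentration scales linearly with the poorly known emission rate.

```python
import numpy as np

def ground_level_concentration(q_release, x, y, u_wind=5.0, h_release=30.0):
    """Textbook Gaussian-plume ground-level concentration (g/m^3).

    q_release : emission rate (g/s) -- the 'source term' that was so poorly known
    x, y      : downwind / crosswind distance from the source (m)
    u_wind    : mean wind speed (m/s), assumed
    h_release : effective release height (m), assumed
    """
    # Briggs-style dispersion coefficients for neutral stability (illustrative only).
    sigma_y = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)
    sigma_z = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = np.exp(-h_release**2 / (2.0 * sigma_z**2))   # ground reflection folded in
    return q_release / (np.pi * u_wind * sigma_y * sigma_z) * lateral * vertical

# The meteorology can be excellent and the answer still scales linearly with the
# uncertain source term: a 10x error in q is a 10x error in the predicted dose.
for q in (1e3, 1e4):   # hypothetical emission rates, g/s
    print(q, ground_level_concentration(q, x=10_000.0, y=0.0))
```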
12:33 pm
what are those rates? which isotopes? and things of a higher degree of refinement. the initial estimate from the simulations that were done was that tokyo was not at risk and we did not have to worry about that, but i have to say, it's not easy to do these kinds of scientific problems through the conventional way of peer review. you can't pull together your best team of people in the middle of the night and say, "you've never worked together before, but why don't you answer this in an hour?" how do we become better at harnessing the skills in the country in a way that can answer these questions, which seem to come up almost annually? i think this was a case where we did quite a bit of air sampling
12:34 pm
and air modeling. we did provide quite a bit of support for japan, and i think there was a very positive story that came out of this -- what happened at the site, what it means to japan, and what it means to u.s. citizens as well, or to the continental united states. there are other places -- we had been working for a good year or so on trying to look at governance models, how we work with -- how many agencies can come and partner with us at our national laboratories to solve some of their interesting problems. what is the way we can engage other agencies to answer their strategic issues using the tools we have, like at oak ridge or
12:35 pm
other national laboratories. we had had a conversation with janet napolitano on december 18 on this, saying that the partnership model is part of our effort to develop stronger strategic relationships between agencies, which turned out to be timely in a number of ways. one week later, on december 25, 2009, there was the underwear bomber, who was stopped from igniting the petn that he kept in his underwear on that flight. it started a relationship between the department of energy and the department of homeland security on aviation security, to try to answer some of these questions of how we protect
12:36 pm
against this. could this happen again? what are the risks of this happening? it was an interesting problem. for this particular type of issue, it is a competition of different effects going on, of all the elastic energy stored in the airplane and whether you can dissipate it before catastrophic cracks propagate through the skin and the ribs of the aircraft. we worked on this with them for some time. i would have to say it has been a valuable thing. i cannot say too much more about this other than there are a lot of interesting issues in aircraft security here, and there was quite a bit learned from this.
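The textbook form of that competition is the linear-elastic fracture criterion; this is a generic statement of the idea, not the specific models used in that work:

```latex
\[
K_I \;=\; Y\,\sigma\sqrt{\pi a} \;\ge\; K_{IC}
\qquad\text{or, equivalently,}\qquad
G \;=\; \frac{K_I^2}{E'} \;\ge\; G_c ,
\]
```

where $\sigma$ is the stress in the pressurized, elastically loaded skin, $a$ the crack length, $Y$ a geometry factor, and $K_{IC}$ (or the critical energy release rate $G_c$) the material's resistance: the stored elastic energy feeds the crack unless it can be dissipated before this threshold is crossed.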
12:37 pm
but it was a place where we had to become aviation experts to answer these questions, because it was time-urgent to figure out what the risks were. i have a few other examples, but let me perhaps go towards simulation. i want to say a couple of things about the tools here before i get to some summary points. we talk about simulation as something simple, but for those, certainly here at the laboratories, who program on these, it's a tour de force. a computer is maybe 100 or 200 racks a system, each weighing more than a car. they suck up remarkable amounts of energy.
12:38 pm
they have millions of processors you somehow have to work across to solve a single problem. it takes teams of experts and people to attack these from across a broad set of disciplines. it is extremely nontrivial to deliver any of these kinds of simulations or products. it really takes champions. these systems take megawatts of power. i remember when we were starting up the white supercomputer. it runs at about 4.7 megawatts when it is working. when it's idling, it's about 2.5. they were running the first simulation around the benchmark, something that jack likes very much. his organization has tracked the top 500 annually for many years. the run started
12:39 pm
and suddenly, there was a 2.5-megawatt spike in the local power grid, which is a couple of thousand homes. these are not just computers. they are very complex things that you really have to think about in different ways. when we had the first, you know, large system up there in the top corner, 10,000 processors, i think it was about the size of a basketball court. there is a chip from intel which has effectively the same computational power. there's a picture of a colleague there holding this back in 2011 -- the equivalent power of this machine from 1996. we are looking ahead at the
12:40 pm
technology, keeping in mind that portable electronics -- basically a 600-plus-billion-dollar portable electronics market -- cannot be steered very much by federal investment, but perhaps strategic investment at the margins can still drive quality computers for the problems that we need to solve in the years to come. it's a challenge. the kind of system we are looking at would probably be, in the best case, a 20-megawatt type of system. 10 to the 18th calculations, operations per second, are notional goals for this, but we need to be functionally useful. let's see.
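Those two notional targets pin down an energy budget per operation; dividing the numbers quoted above:

```latex
\[
\frac{20\ \text{MW}}{10^{18}\ \text{ops/s}}
\;=\; \frac{2\times 10^{7}\ \text{J/s}}{10^{18}\ \text{ops/s}}
\;=\; 2\times 10^{-11}\ \text{J} \;=\; 20\ \text{pJ per operation},
\]
```

which is only a rough figure of merit, but it shows why the power envelope, and not just peak speed, dominates the planning for such a system.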
12:41 pm
since i have a tendency to talk a little bit too long, let me go to thinking a little bit about the future. going back to where i started, there is not a natural place for anyone to stop and ask, what is simulation? what can it do for us? often, we end up in crises, and we end up in places where we are responding to something and doing the best we can with what we have. it's important to start looking ahead and asking -- where could we add value? i picked a couple of things the president has put out recently. the climate action plan. certainly, his nuclear security agenda from a number of speeches. places where you could imagine there could be a role for simulation in a substantive way. the question is -- how do we do
12:42 pm
that? who is going to do it? it's one thing to say that. the question is -- who does what? if there is not a central place to think about this, it's incumbent on people, on those invested in the outcomes, to think about that and try and make it happen. decisions are typically not made by scientists. i do not say that is good or bad. i simply observed that the kinds of questions we are faced with often are not scientific, and the problems are often not well defined, but we want to know what it means to people, what it means to the economy. we want to know very big societally-based questions. when you try to dissect these they typically cover a number of different disciplines, a number of different skills, many fields of specialty. rallying people to try to
12:43 pm
address them can be nontrivial and somewhat unnatural. it does not overlay on university structures, either. there is not a natural place to go to try to address some of these questions. peer review is typically not available. you do not have time to sit back with your team of experts and get your panel together and go through whether what you have done is right. if you are trying to understand whether you need to evacuate people, you just do not have time for that. the question is how you build in a sense of pedigree, quality, prediction, so we do not end up doing something foolish. i think that is very nontrivial. it's a real problem that requires scientific attention because we typically stop at error bars, and that does not translate to the average person and to the kind of meta-questions that are
12:44 pm
emerging now. simulation is certainly showing its value. we find it in more and more places, largely because there are champions out there who pull it along and know a way to inject it, but it is still not a natural place to go. many of the problems we get -- we do not have oil spill simulation experts that we call on for underwater crises. we do not have the experts for pick-your-topic, and we cannot afford to contract them for every problem we have, so we have to figure out how to create a more responsive infrastructure from the tools and people we have. i think there's a lot to offer. there's a lot of promise, but we are going to have to figure out, again, how to transmit the degree
12:45 pm
of confidence in anything we do. perhaps understanding how we can be more responsive -- there are washington issues, i'm sure, but there are places where the universities could see themselves. there are places national labs could see themselves. where you know when to inject this into a conversation and how you do it. even asking if these are the right questions to be asking. progress here -- success against the next set of threats, of urgency -- i think will require communication and greater partnership among the different entities, among a broader set of scientists, from social scientists to health, to physical sciences and mathematics. certainly industry, labs, and
12:46 pm
government. i don't see the number of issues diminishing. i see them growing. i see the complexity increasing. i see the kinds of things that we are expecting people to answer becoming a little more refined, and i think we have to be prepared for that, but i think there's a very positive story on what this country does in simulation and how we turn it to these problems. i hope i have made at least some impression that this is of interest. thank you for your time, and i'm happy to take questions. [applause] >> we are recording these presentations, and we would like the questions and answers to be done to the microphones. we have microphones we can pass around. also, you can ask dimitri questions at the reception. if you have a question --
12:47 pm
anybody want to go first? tony? tony, do you have a question? >> i enjoyed the presentation. do you think we could have saved columbia? a lot of people at nasa thought we could have. >> i would not consider myself qualified to answer that. even if we had in a timely way discovered what the issue was mitigation is an entirely different problem. i did think about that question though, and i did think that this was the right next question to ask, but i do not think there is anything to help you with that.
12:48 pm
>> coming back to the nasa example, i think one of the interesting pieces there was you talked about the crater code being used outside of its valid parameter set, but you also raised the point that there was a human factor associated with that, and that the skill set or the knowledge, or the depth of knowledge, at least, of what was actually in that code and how it was developed was lost. as we are no longer talking about codes that are thousands of lines long but millions if not trillions of lines long, how do we deal with that problem? >> this is really at the heart of, you know, a field called uncertainty quantification. there is not a good answer. it is a place we are working with universities around the country to try to understand.
12:49 pm
certainly there is work at our laboratories, but you are exactly right. an experienced code writer might remember where all the right punctuation is in the code for maybe 50,000 lines of code, but when you have one million lines of code, it's hard to figure out what is in there. there are all kinds of sources of uncertainty, from assumptions and models to places where there is a mix of empiricism and calibration. there is no methodology to propagate uncertainty through the entire spectrum of sources of uncertainty, and i think even qualifying all the potential sources of uncertainty is hard.
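A minimal sketch of the easy end of that problem -- forward Monte Carlo propagation of input uncertainty through a stand-in model. The physical model, the input distributions, and the sample size here are all invented for illustration; the hard parts being pointed at (model-form error, calibration, assumptions buried in a million lines of code) are exactly what this does not capture.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(k, q):
    """Stand-in for an expensive code: steady temperature rise of a slab,
    delta_T = q * L / k, with fixed thickness L = 0.1 m (illustrative only)."""
    return q * 0.1 / k

# Suppose the inputs are only known with some spread (measurement/model uncertainty).
k_samples = rng.normal(loc=2.0, scale=0.3, size=100_000)   # conductivity, W/m/K
q_samples = rng.normal(loc=5e3, scale=1e3, size=100_000)   # heat flux, W/m^2

outputs = simulator(k_samples, q_samples)

# Forward propagation: a distribution of outcomes, not a single number, is what a
# decision-maker would need distilled into a statement of trust.
print("mean   :", outputs.mean())
print("5-95%  :", np.percentile(outputs, [5, 95]))
```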
12:50 pm
so it is a place where we need work. i don't think there is a good answer for that, but it is really where we have to look. ultimately, if someone is going to make a decision, you have to have a simple distilled amount of information on the degree of trust in what came out. it is a cartoon, so the question is -- to what extent is there confidence and knowledge behind it? i think that is a problem that needs to be worked on in years to come. it's really at the heart of complex simulation. >> dimitri, you said something
12:51 pm
earlier that really affected me -- i'm sure it did everyone here -- when you said these things happen annually. obviously, there is a pressing need to look ahead. with regard to how we set things up to be better prepared to respond, what are your thoughts on that? how can we set up a better framework given, as you said, we can expect these things to happen, unfortunately, yearly? >> that's a good question. building in responsiveness is hard because of how we support people and fund people. everyone is busy, and it requires people who want to be involved in this. we need to know who is out there and whether they are willing, at
12:52 pm
the ready to help. when something urgent happens, you go to the short list. who comes to your mind? it is kind of human nature -- who do we have? we have to get a little better than that and think broadly about the experts resident in this country. there is probably a next step in understanding what that means, whether there are barriers in funding or regulations we would have to change. i don't know. with the oil spill, the lab director, tom hunter, really left his job for about four months. we had laboratory people doing only that. it was a nontrivial commitment so the question is -- would you
12:53 pm
be ready to leave what you are doing today if we say, "i really need you"? can you afford that? it's not for everybody. finding a subset of those who are inclined to dedicate themselves to some of these problems, which might be short or long term, is part of that, but understanding what assets we have is another. we do need a bit of an inventory so that we can be more responsive and understand whether there are barriers that have to be changed through any legislation or policy changes or simply communication. wikileaks was another great example. i did not go into it -- i skipped over it. another place where suddenly there was a massive data set. aside from doing any keyword
12:54 pm
search -- is my name in there? is our department in there? -- there is something more sophisticated you could do. there are graph analysis methods we could look at. what's the content? what is the knowledge involved? what do you distill from this when you look at it in its entirety? there are interesting and complex problems, so sometimes you need material science people. sometimes you need algorithms and graph people. ultimately, though, you need computer people. >> i noticed that you had a goodyear component to your presentation. i know goodyear worked at length
12:55 pm
to model the next-generation tire. many of our corporate partners are coming to us, asking to simulate experiments ahead of spending a lot of time and energy and money actually confirming the experiments, so we are seeing a groundswell of interest in that, and it speaks to the fact that although this will not be commonplace, it will be more routine in the future. do you see that coming in that realm? >> i hope so. we have a few examples of where things like that work, where the partnerships with businesses have worked. my personal characterization of the story was that it required the company to be in crisis before it forced adoption as a new paradigm for tire development.
12:56 pm
in the end, it led to the top-selling tire in germany. they are really marquee products. i think the president of north american tire was the one who finally championed this in 2004, when goodyear was against the ropes. we found, in talking with different companies, that you often find people doing simulation, and those engaged in thinking about the world this way, often not having the top cover from their leadership. there is not the pull from the top, from the management, from the leadership, saying, "i want you to inject this new way of thinking into our business model." typically, what we hear is, "they don't care. you are too expensive. what have you done for me now? we need this by next quarter." but the story was a decade long. it started in 1993 and showed value in 2004. it saved the company.
12:57 pm
but it took that long. it was beyond thinking through quarterly profits, so that is hard to do. when companies are at risk, there are a few stories out there where leadership looks at all the options on the table, and they are willing to change the model dramatically. i don't know of too many examples where it has happened where things are going well. >> hopefully, that will change. >> capturing the r.o.i. is something we need to do more of. >> i think we have time for one more question. >> thank you for a very stimulating conversation. you talked much about how to prepare in terms of the people and the science. you talked about, you know,
12:58 pm
assumptions, how much we can trust and believe in the data and the outcome of the models. but the input is very important, of course. of course, you need to draw on data that is in the public sector and the private sector and the academic sector. can you do more in terms of validating existing data sets and documenting them well enough that they are not applied for something other than what they were originally intended for, without knowing about the limitations? the nation has not been very successful, i think, in implementing metadata standards, and i think more could be done. what do you think could be done in terms of making the data sets more useful so we can trust
12:59 pm
them better? >> i think one of the things -- you know, one way to address that is we need to do what is useful to do in the first place. i don't think we should have standing armies at the ready trying to anticipate things that might happen, because we will never get it right, and we will waste money, and you do not want people idled. given that you cannot anticipate what is going to happen, the move towards simply doing what we are doing better, for what we need to do anyway, is probably what we should be doing. if there are places where we can improve standards, if we can improve the quality of what we think we are doing, the methodology, that should be something that we do and we try and capture, but the only other thing we can do is structure
1:00 pm
ourselves so that we can then be responsive, to draw upon what we have learned to throw it at the next crisis. >> thank you. before we wrap up, i thank dimitri for his excellent presentation.
1:01 pm
>> thank you all for that, and thank you for coming on a very cold day that's warming up. if you join us for refreshments, that would be great. help me thank dimitri for his time here. >> we're live at the heritage foundation for a discussion on the federal reserve's mission, including former federal reserve bank economist jerry dwyer. this discussion comes as the new federal reserve chairman janet yellen will be leading her first open market committee meeting. we'll follow that meeting this
1:02 pm
afternoon at 2:30 eastern. the fed will release a statement and economic projections at 2:00 eastern, and again, the fed chair with her news conference at 2:30. this event at the heritage foundation should get started shortly. >> good afternoon, and welcome to the
1:03 pm
heritage foundation. i remind our internet viewers that questions can be submitted via e-mail to us at any time during the program. we will, of course, post the program within 24 hours on our home page for your future reference as well. for those in-house, it's always nice for you to make that last courtesy check as we begin that cell phones have been turned off, especially for those who are recording our festivities today. hosting our discussion is norbert michel. dr. michel is our research fellow in financial regulation. he studies and writes about housing finance, including reform of fannie mae and freddie mac. he focuses on the best way to address difficulties at large financial companies. he researches monetary policy and other issues related, appropriately today, to the fed.
1:04 pm
before joining heritage, he was a tenured professor at nicholls state university college of business. he previously served here at heritage as a tax policy analyst, and he has also worked with global energy company entergy. he holds a doctorate degree in financial economics. please join me in welcoming my colleague, norbert michel. >> thank you very much. we appreciate everybody coming out for the event and all who are watching online. this is an incredibly important issue, and institutionally it's one that heritage has not been as engaged in as they are in some other issues. that has all changed. one of the reasons that has all changed is our leadership changed in a good direction. we have senator jim demint now
1:05 pm
as our president, who will come up and give us a brief introduction. >> thank you, norbert. i guess only the future will tell if it changed in a good direction. this issue -- i'm so thankful that we're starting the debate here at heritage. it seems that in congress there were only two kinds of people involved with this issue: those who weren't interested or those who approached it with some form of hysteria. it was difficult to get a debate on how it related to everything else we're doing. you don't have to think too long about monetary policy to realize it's the foundation of economic
1:06 pm
policy and the valuation of everything material we do in this country. the fact that congress, which was given by the constitution the responsibility for our currency, does so little on it should cause every american to take pause at where we are. the federal reserve controls more about the market than anyone in congress does, and congress has no control over it, effectively. without making judgments on what the conclusion needs to be at heritage, we want to have an open debate and a discovery process about the federal reserve, america's monetary system, and how that affects the american economy and the global economy, which it very much does. today we'll begin by looking a little bit at the history of the federal reserve, talk about monetary policy and what it is in its essence.
1:07 pm
what are the next steps to begin to find out more about our whole monetary system and what do we need to do to provide recommendations to congress. which ultimately is where i hope will be at heritage. what should our monetary policy be and what should we -- what should our approach to the federal reserve be. we're beginning that process today and norbert i hope this is not the end but the beginning of a discovery process so that heritage can help lead the way in where we should go with america's monetary policy. i want to thank our guest here today. we got a lot of talent and expertise. i look forward to watching this online myself. thank you very much. >> i really hope it's not the end. i don't want to move again. our first panelist is dr. larry
1:08 pm
white from george mason. he's professor of economics at george mason. for those of you who don't know, he specializes in monetary theory and is one of the world's scholars on the theory and history of money and banking. he's the author of "the clash of economic ideas" and "free banking in britain." in 2008, professor white received a distinguished scholar award at the association for private enterprise education, and he's also an adjunct scholar at cato as well as a member of the academic advisory council. very happy to have him here, and i'll turn it over to him. >> thank you, norbert. i'm going to try to give you
1:09 pm
kind of a whirlwind tour through federal reserve history so we have the track record in front of us that we want to evaluate. if you go back to the foundation of the fed, you'll find out that the federal reserve act doesn't say anything about monetary policy. this issue came up when alan greenspan was visiting the daily show. i believe the title was "the crisis wasn't my fault." somebody wrote a very good question for jon stewart, which was to ask alan greenspan, if you believe in the free market, why do we have the federal reserve setting interest rates? greenspan's response was, well, that's a good question. actually, you didn't need a central bank -- i guess he meant monetary policy -- when we were on the gold standard. actually, the gold standard
1:10 pm
continued at the time the federal reserve act was being passed. greenspan was saying, we didn't need to have a federal reserve to run monetary policy at the time the federal reserve act was passed. that's water over the dam. how well has the fed done compared to the pre-fed gold standard? that's one way to compare the fed's track record to an alternative. there are other ways -- you can compare it to other leading central banks in the world. that's the standard i'm going to use. you see a dramatic change in the behavior of the price level almost immediately after the federal reserve gets up and running in 1914. of course, something else happened, which was world war i began. the nations in europe all left the gold standard. so the international gold standard was pretty much kaput, and the constraint on the federal reserve system was pretty much
1:11 pm
neutralized, which meant the fed had a free hand to print money. and it did, to support the wilson administration's war efforts, and you see what happens to the behavior of the price level. it had been pretty much confined within a small range until the money printing to finance the war expenditures begins, and the price level jumps. if we think of it in terms of the inflation rate -- i want to sort of break down the fed's experience, the fed track record, in terms of decades. you see the period of world war i on this slide, and it's characterized by inflation bumping around 20% and continuing at 20% even after the war ends, and actually reaching a peak of about 22% before there's a massive correction in 1921.
1:12 pm
associated with that was a deep recession, from which the economy rebounded quickly. you can see the period of actual deflation as part of the correction from the high price level created during the war. if you sort of measure the areas of inflation versus deflation, you can find out that the price level remained higher than it had been before the war began. there was still inflation in the system that remained to be wrung out, and that would come later in the decade. 22% inflation -- not a very good starting point. the fed's second decade was the roaring 1920's. i'm not going to go into this point here. i and some other economic historians think the fed had some hand in amplifying the boom in the 1920's to the point it
1:13 pm
couldn't be sustained. then came the great depression, which was much deeper than necessary to correct the previous excesses. the money supply shrank a great deal, much more than necessary to restore equilibrium to the system. we had several years of 10% deflation, which put the economy through quite a wringer. the roosevelt administration, together with congress, decided to try to bring the price level back up by devaluing the dollar and taking the right of gold ownership away from u.s. citizens. they had to turn their gold in to the federal reserve system so the fed would have more reserves. that was the idea. gold ownership remained illegal
1:14 pm
for several decades. the u.s. not only had the international gold standard pretty much neutralized; the u.s. was off the domestic gold standard. there were still efforts to restore gold convertibility between central banks, so the u.s. could still pay gold or lose gold to other central banks in the world, or gain gold from them. but the system that had been counted on, when the federal reserve act was passed, to restrain the growth in the quantity of money was pretty much nonfunctional now. now it's up to the discretion of the federal reserve system how much money they want to create. the price level eventually recovers, but then the depression is followed
1:15 pm
by world war ii, during which inflation rises, of course, as money is being printed to buy the treasury debt that's being used to finance the war. it didn't have to happen. you can fight a war with debt without monetizing the debt. but again, inflation reaches about 20% until price controls are imposed, and so there's a lot of inflation during world war ii that's underreported. when the price controls came off, after the war, inflation shot up even more. you can see it reached nearly 20% again, until the fed kind of took control of the money supply again, brought inflation down, and in 1951 came to an agreement with the u.s. treasury that we are no longer bound by the wartime rules to monetize all the debt you want us to
1:16 pm
monetize. the fed gained a greater measure of independence in what's known as the accord between the fed and the treasury. in the period following that, inflation came down and actually stayed fairly low and stable for the next decade. the period from 1954 to 1964 was not bad. i don't want to be accused of only heaping criticism on the fed. it's actually a pretty remarkably stable and low inflation rate during this period. the website that generated these graphs keeps changing the scale on them. it's hard to compare from one graph to another. you have to look at the numbers on the axis to see that inflation is between one and two percent. that's pretty good compared to the previous decades. in the 1960's, the fed started to become more ambitious.
1:17 pm
the result of that was the peacetime great inflation, where inflation rose up to 8%, came back down again, and then reached double digits. finally the decision was made to put somebody in charge who knew how to fight inflation, and that was paul volcker. you can see he dramatically did bring inflation down. but it was a painful experience. the reason it happened was that the fed had sort of been inspired by the idea that a little bit of inflation can buy you a lot of unemployment relief. if you look at the dates on these numbers, it's got the unemployment rate on the horizontal axis and the inflation rate on the vertical
1:18 pm
axis. it looked like there was a stable trade-off, and the fed was working the trade-off. they kept reducing the unemployment rate in '65, '66, '68 by increasing the inflation rate and getting the unemployment rate to fall a little bit more, although they were facing diminishing returns. this trade-off turns out to be not sustainable. it only works if people don't anticipate the inflation that's coming. when workers anticipate the inflation, they'll start raising the wages they ask for, and it will take more inflation just to get the same rate of unemployment. that's the theory milton friedman presented in 1967 and 1968.
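Friedman's expectations-augmented version of the curve is usually written roughly as:

```latex
\[
\pi_t \;=\; \pi_t^{e} \;-\; \beta\,(u_t - u_n), \qquad \beta > 0,
\]
```

so only inflation in excess of what people already expect ($\pi_t > \pi_t^{e}$) pushes unemployment below its natural rate $u_n$; once expectations catch up, the short-run trade-off shifts and the long-run curve is effectively vertical.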
1:19 pm
here's the 1970's, with the original phillips curve at the bottom. suddenly the economy wasn't working the way it was supposed to. even the head of the fed, arthur burns, said in a speech that the economy is not working the way the textbooks tell us. inflation and unemployment are going up together. his critics told him, you do what you always need to do, which is control money growth. which volcker eventually did. the period after the volcker disinflation is another relatively mild period of fed behavior, as judged by the inflation rate. again, i should say that the inflation rate isn't a direct measure of what the fed is doing, but it's an indirect measure, and it's something the fed can control if they put their mind to it. so they can be held responsible for it. not on a month to month basis, but on a year by year basis.
1:20 pm
anyway, this 20-year period from 1984 to somewhere around 2004 has become known as the great moderation. there was a lot of optimism that the fed had finally settled on the right model of the economy and now knew what to do. now it was steering the ship in the appropriate way. the inflation rate was behaving itself, not quite as well as it had in the 1950's, but better than it had in the 1960's and 1970's. there was a lot of optimism. those of us who sort of started out as critics of the fed weren't getting much of a hearing in those days. but that all fell apart, too. what happened, of course, was the housing bubble or boom followed by the great recession. i'm going to argue that the housing boom, of course, began during greenspan's tenure, but it continued and then fell apart
1:21 pm
during the bernanke era. bernanke came in as someone who was very concerned about combating deflation. he was worried about preventing it even when negative inflation wasn't evident. so he encouraged greenspan toward expansionary monetary policy. the irony is that in the middle of bernanke's era there's a period of deflation. at the beginning of the quantitative easing program, the fed started paying interest on reserves, which led to banks bottling up so many bank reserves and therefore lending so much less. that, combined with the velocity of money slowing down -- something the fed is not responsible for, but which they should be offsetting, if they could forecast better than they
1:22 pm
have. anyway, irony of ironies, there's a very harmful deflation in the middle of bernanke's tenure. in a congressional hearing somebody was criticizing bernanke for his inflation record, and he said he had the best record. he must have meant the average rather than the variation: if you include a year of deflation, you do bring down your average. one way to diagnose what happened during this decade is in terms of the taylor rule, which is a standard way of evaluating whether the fed is setting its interest rate target appropriately. the taylor rule roughly describes the great moderation years, so it seems to be a guideline for keeping problems contained, keeping inflation contained. compared to that, the fed held interest rates too low for too long.
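for reference, a minimal sketch of the rule being invoked here, using the coefficients and the 2% values from taylor's original 1993 formulation; the example inputs at the bottom are illustrative numbers, not figures taken from the talk.

```python
def taylor_rule_rate(inflation, output_gap,
                     real_rate=2.0, inflation_target=2.0):
    """prescribed fed funds rate (percent) under the 1993 taylor rule.

    inflation and output_gap are in percent. the rule raises the rate
    half a point for each point of excess inflation and for each point
    of output gap.
    """
    return (real_rate + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

# illustrative mid-2000s-style inputs: ~3% inflation, economy near potential.
# the rule prescribes roughly 5.5%, well above the ~1% rate actually set then.
print(taylor_rule_rate(inflation=3.0, output_gap=0.0))
```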
1:23 pm
these parallel bands describe where the fed funds rate, the overnight interest rate, should have been -- the fed kept the rate much below that, inspired by, i think, bernanke's fear of deflation. that made housing mortgage loans cheaper than they would have been. it's true this is a short-term interest rate and most housing loans are long-term loans, but the proportions changed during the housing boom, so housing finance became more sensitive to short-term interest rates, because short-term financing was so much cheaper than financing long term. if you look at the rate at which housing finance was expanding, it was a double-digit rate for a number of years.
1:24 pm
well, if you are lending ten percent more to the housing market every year, either housing prices go up ten percent or the quantity expands, or both. the u.s. economy was building more houses than necessary; housing projects had to be abandoned midway through completion. part of the reason for this overexuberance in housing was housing regulatory policy and the many policies intended to put more people in owner-occupied housing. but the fed provided the fuel for this, or provided the punch bowl for the party. it made it possible to finance more housing than was really needed. during the crisis -- here i'm switching away from monetary policy for a minute -- the fed
1:25 pm
itself switched away from pursuing monetary policy exclusively or predominantly and started pursuing what you might call credit allocation programs. i had to shrink the type to fit all of these on one page, and i even left a few out. it's a long list of special directed lending programs, in some cases for a particular firm. in the bear stearns case and with aig, there were bailouts financed by the fed; citibank and bank of america had special lines of credit. at the end of the list, i have the quantitative easing programs, one and three, which have been described as monetary policy, defended as monetary policy by the fed, but they're not monetary policy, because they are purchases of mortgage-backed securities, which is an unusual
1:26 pm
asset for the fed to acquire. if you ask why they are buying mortgage-backed securities instead of treasury securities, the usual thing they buy, the answer is that the switch from treasuries to mortgage-backed securities doesn't have any impact on the size of the change in the monetary aggregates, but it changes what securities get purchased and what prices get supported. it was an attempt to support the price of a particular kind of financial asset, mortgage-backed securities, which the fed thought was underpriced. so the fed is substituting its judgment about how financial assets ought to be priced for the market's judgment. they are trying to influence relative prices and the allocation of resources in the financial sector. that's not monetary policy. it's also not -- none of these items on the list are -- lender of last resort policy, which is the traditional role of the fed. the
1:27 pm
person who described this most classically was walter bagehot. his approach was: if you have centralized reserves in your banking system -- you don't need to do that, but if you have -- then the holder of those reserves has a responsibility to make liquidity available to banks that are fundamentally solvent but which are having temporary difficulties. you're supposed to lend to illiquid banks short term, but at a high interest rate, to make them regret having gotten into trouble and to make it clear that it's not going to be a subsidy operation. it's not supposed to make it comfortable for a bank to get into liquidity trouble. that's not the way the fed pursued it.
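a minimal sketch of that classical rule as just stated -- lend to solvent banks on good collateral, at a penalty rate -- with a hypothetical function name and a made-up penalty spread purely for illustration.

```python
def bagehot_lending_decision(is_solvent, has_good_collateral,
                             market_rate, penalty_spread=3.0):
    """classical lender-of-last-resort rule: lend freely to solvent banks
    on good collateral, but at a rate high enough that borrowing is a
    last resort rather than a subsidy. returns the lending rate in
    percent, or None to refuse the loan. the 3-point spread is purely
    illustrative.
    """
    if is_solvent and has_good_collateral:
        return market_rate + penalty_spread
    return None  # insolvent or unsecured: no central-bank rescue

# a solvent but illiquid bank can borrow, at a rate it will regret
print(bagehot_lending_decision(True, True, market_rate=5.0))   # 8.0
# an insolvent institution gets no loan under the classical rule
print(bagehot_lending_decision(False, True, market_rate=5.0))  # None
```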
1:28 pm
the fed lent at low interest rates during the financial crisis; one estimate was about $13 billion. if you look into the details there, there are some things you can quibble with. still, the fed was one of the cheapest lenders in town in some of these programs. a more general point: making these policies up as they went along is not consistent with a bedrock principle of a free society, which is the rule of law -- people ought to be able to know in advance what the rules are going to be and to count on them being enforced impartially. the executive branch of government should not be playing favorites, deciding to make up new rules or deciding not to enforce old rules. here's another piece of evidence that the qe policies are not monetary policy. if you look at the path of m2, which is a standard measure of the money supply, it remains on a fairly smooth path. at the same time the monetary base, the fed's balance sheet
1:29 pm
liabilities, is jumping. the first jump is qe1, the next jump is qe2, and the ongoing jump we're in now is qe3. those don't change the monetary aggregates; qe is not designed to change the path of the money supply. let me wrap up by going back to what i said i was going to do, which is to compare. during the prefed period, which is shown in gold here, the price level ends almost exactly where it began: in 1914, it's the same as it was in 1879. the average inflation rate is within a hair of zero. between 1971 and 2013, the average inflation rate has been over 4%. that's quite a difference.
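the arithmetic behind that comparison, as a small sketch; the index values here are placeholders chosen to match the description -- a flat price level over 1879-1914 and a price level several times higher over 1971-2013 -- not the actual series shown on the slide.

```python
def avg_annual_inflation(p_start, p_end, years):
    """geometric average annual inflation rate, in percent."""
    return ((p_end / p_start) ** (1.0 / years) - 1.0) * 100.0

# prefed period: the price level ends where it began, so ~0% on average
print(avg_annual_inflation(p_start=100.0, p_end=100.0, years=1914 - 1879))

# 1971-2013: a price level a bit more than five times higher works out
# to an average a little over 4% a year
print(avg_annual_inflation(p_start=100.0, p_end=540.0, years=2013 - 1971))
```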
1:30 pm
the fed has been worse on inflation. the fed has created more uncertainty about what the future value of the dollar is going to be 5, 10, 20, or 30 years from now. that discourages long-term investment because it makes it harder to issue long-term bonds. a paper that george and bill and i published in 2012 has measures of uncertainty which show uncertainty higher today than it was in the prefed period. a lot of people defend the fed by saying, okay, inflation is higher, but haven't we smoothed out the business cycle? the answer is no, you haven't -- certainly not over the fed's entire history, because remember the great depression happened on the fed's watch. maybe the great depression is just a practice period that
1:31 pm
shouldn't count against the fed. i stole that joke from george. even if you take out the great depression and just start in the postwar period, you find that the volatility of the real economy is basically back to where it was in the prefed period, despite the economy being much better diversified. so it seems like the fed has made a negative contribution, if you think the economy on its own would have been more stable. lastly, one argument people gave for leaving the gold standard was that we would save all the resources it takes to dig gold out of the ground. we haven't done that. we made the price level so unstable that people now buy gold as an inflation hedge. the real price of gold is higher now than when we were on the gold standard. there's more gold mining now, for people to buy bars and coins to store in their safe deposit boxes or in their backyards.
1:32 pm
finally, the fed hasn't reduced unemployment. you shouldn't expect it to, because unemployment is not the sort of variable that depends on the price level, for the reasons that friedman and phelps gave us. in summary, the fed hasn't improved on the prefed period. it hasn't delivered the benefits we could have hoped for. it's added moral hazard to the system through its bailout policies. it's not run a monetary policy that has given us benefits: it raised inflation, and it hasn't smoothed out business cycles. how can we get better results? well, here's an analogy i thought of last night -- maybe i should have thought about it more. here it is. the federal reserve system is like an early 20th century train that was designed with a
1:33 pm
self-governing engine, but early in its history, the self-governing engine was replaced by strapping some rockets on the train -- the ability to print money ad lib. that leads to a train that is in great danger of jumping off the tracks if the rockets aren't precisely controlled. what do you do about that? you could remove the rockets and try to go back to a self-governing monetary system. if that's politically impossible, the alternative would be imposing some kind of rule on federal reserve monetary policy: either a taylor rule or a price level target or, better than that, a nominal income target. there are various ways we can think about making fed policy more predictable and less
1:34 pm
discretionary. i will quit there, thank you. >> thank you, larry. our next presentation is by george selgin. professor selgin is at the university of georgia college of business. he is also a senior fellow at the cato institute. dr. selgin's research covers a broad range of topics within the field, including monetary history and macroeconomic theory. he is the author of "the theory of free banking: money supply under competitive note issue." he has also written numerous articles that have appeared in academic journals. he is also coeditor of econ journal watch. i'll turn it over to george.
1:35 pm
>> the first thing i need to do is find my slides. keep pressing. thank you all, by the way, for giving me a chance to talk about how the fed manages to give people the impression that it's been doing a wonderful job. larry referred to our joint work with my colleague on how the fed has actually performed in its first hundred years.
1:36 pm
he summarized some of the results. in terms of price level stability and predictability, business cycle fluctuations, crises, and banking failures, things have not been better since the fed's establishment than they were in the decades preceding it. that fact is all the more important when you consider that the banking system we had before the fed's establishment was crippled. i'll talk about that. it's not that we couldn't have done any better than we did before 1914; but we've done worse. that certainly is a very unfortunate record. the thing i want to talk about is how the fed has managed to convince so many people that it has done a very good job, that it has accomplished its mission. the fed's reputation is rather
1:37 pm
high, although there are plenty of us who criticize it. this is to a considerable extent the result of the fed's own very successful propaganda efforts. it's those efforts i want to talk about. what those efforts amount to is creating a myth of u.s. monetary history, including the history of the period before the creation of the federal reserve. what i want to do is point out some of the ways in which the fed's propaganda misrepresents that history, and does so in a way that makes the fed appear to be a much more successful institution than it actually has been. one of the first pieces of propaganda that's crucial to this effort is the claim that before the fed, and particularly before the civil war, banks simply weren't regulated, and that therefore the problems with the banking system were consequences of a lack of regulation and
1:38 pm
particularly a lack of federal regulation. that's not true at all. banking has been regulated from the very beginning of u.s. history -- very strictly regulated. as one historian put it in his famous book on banks and politics prior to the civil war, the issue was between prohibition and state control, with no thought of free enterprise. this is coming from someone who's not himself necessarily a fan of free enterprise, but the facts are that no state allowed freedom of entry and freedom from regulation of its banks; some prohibited banking altogether. one thing that almost all states did was to prohibit any kind of
1:39 pm
branch banking. not only did states prevent out-of-state banks from doing business across their borders, they didn't allow their own banks to set up any branches. this fact alone was a source of tremendous weakness in the u.s. banking system, because the banks in question naturally could not diversify their assets or their liabilities. that single fact was probably one of the most important causes of failures in the banking industry in the united states right up until the great depression, and even to some extent afterwards. another consequence of the fact that banks couldn't branch was the lack of a uniform currency. one of the federal reserve banks has a video, which i show a shot of here: a couple of farmers trying to do a deal involving a horse and staring at a batch of
1:40 pm
nonuniform bank notes, trying to figure out what they're worth. well, that was a problem before the civil war. what the fed doesn't tell you is that this was not a natural consequence of lacking a single source of currency, which is how the fed makes it seem. it's simply the fact that banks couldn't branch, so when their notes traveled far from their single offices, they would tend to go to a discount reflecting the cost of getting them back to where they could be redeemed. countries that had many banks -- that is, competing banks -- but had branching, canada in fact being a close-to-home example, had uniform currencies even when they didn't have monopoly banks. they had competing notes all circulating at their full gold values, because there was always a nearby branch where they could
1:41 pm
be redeemed. on the eve of the civil war -- in this case, a little bit into the civil war -- the discounts on state bank notes, despite the lack of branch facilities, had actually fallen a bit. these are calculations i did myself. i'm sorry, the numbers aren't legible. if you had bought up every state bank note in the north, brought them all to a central market like new york or chicago, and sold them for what gold you could get from a broker there, then having paid full face value for the notes, your loss would have been less than one percent of that face value. i mention this because fed apologists argued that we had to get rid of the state banks and nationalize the currency system because of this nonuniform
1:42 pm
currency. they argued that's exactly why this was done during the civil war. during the civil war, national banks were set up that were nationally chartered; they got the right to issue notes under particular circumstances, and subsequent to that legislation, state banks were taxed out of the currency business by a prohibitive tax. this wasn't because the currency wasn't uniform -- you can see that problem wasn't so severe by the civil war. indeed, state banks would have continued in business in many regions of the country and would have competed until that prohibitive tax put them out of business. the real reason was to finance the civil war. you can understand that more readily if you realize that a condition for national banks to issue currency was that the currency had to be backed 110% by u.s. government securities. the idea was to have a captive market for bonds, union bonds, so
1:43 pm
that the war could be financed. as was so often the case with financial legislation, and u.s. financial legislation in particular, it wasn't a case of trying to provide a better monetary system; it was a case of fiscal emergencies driving monetary legislation, even when the consequences for the monetary system, especially in peacetime, were not so great. in this case, the consequences were not so great specifically because after the civil war we had some of our most calamitous financial panics. the federal reserve system's publications emphasize the fact that, as this quote says, bank runs and financial panics continued to plague the nation after the civil war. what they don't tell you is that the reason for these panics was the bond requirement, which meant the supply of currency was now linked to outstanding government debt.
1:44 pm
if the government retired its debt -- if it retired it all -- the total supply of national bank notes would have to fall to zero. the scarcity of government debt, which became greater and greater in the last decades of the 19th century as the u.s. government ran surpluses and retired its debt, meant that the country was occasionally deprived of adequate currency. the other thing that these fed propaganda sources will tell you -- the way they express the problem -- is that it was a problem of the absence of a central bank. if you put it that way, the solution is a central bank. let's see. here is a graph showing you, first, the smoother line. that's the stock of national bank notes starting in 1880. it's shrinking dramatically, from a peak of just about $350 million to less than half
1:45 pm
that amount. in a growing economy, you can imagine this was not a healthy situation. the supply also couldn't adjust seasonally at all, and in those days there was a tremendous peak demand for currency every harvest season. the fed wants you to believe that this was an inherent problem of not having a central bank. but wait, what's this other plot? that's the plot of the supply of currency in canada during the same period. it shows not only secular growth, consistent with the fact that canada too was a growing economy, but lovely seasonal peaks. here's a currency supply that must be administered by a central bank -- those canadians must have gotten the idea before we were smart enough to get around to it. except that canada didn't have a central bank until 1935. what it had was a competitive system of large nationally branched banks
1:46 pm
which supplied currency that seems to adjust just as a perfect central planner might adjust it -- not that our actual monetary central planners have been capable of adjusting currency that well. that system also was robust: there were no crises in canada, and therefore no movement to create a central bank until the great depression, when the bank of canada was created. canada had zero bank failures during the great depression. why it set up the bank of canada, that's another talk. american reformers were very conscious, by the way, before the creation of the fed, of the advantages of the canadian-style system. they tried through numerous pieces of legislation to get a system like it into existence for the u.s. -- the baltimore plan, the carlisle plan -- all of them ran
1:47 pm
afoul of opposition from the unit bankers and also opposition from wall street, which interestingly was aligned with the unit bankers. when the country bankers needed access to the new york money market, they had no choice but to deal with wall street correspondents, so this coalition fought any reform that would have introduced nationwide banking. the populists, let's not forget, opposed any plan that would have allowed private banks to continue to be responsible for supplying currency; they wanted government to handle it all. we got the federal reserve act. not the best solution to our problem -- probably the third, fourth, or fifth best you can think of, every one of those other plans
1:48 pm
having been wiser than this one. here's ben bernanke in a speech: after 1907, congress began to say, wait a minute, maybe we need to do something about this, maybe we need a central bank. that's nice and short, but it gives a misleading impression of what the true alternatives were. i'm going to run out of time, of course -- i'm trying to summarize fed propaganda, and that could take forever -- but i'll do at least a little bit more. one of the things the federal reserve's publications stress is how independent it is. the fed has never been independent in any meaningful sense. when it was first formed, the secretary of the treasury and the comptroller of the currency were respectively president and vice president of the federal reserve board. believe me, they had some sway. larry explained to you how they
1:49 pm
got the fed to finance world war i. this control of the fed by fiscal forces, by the treasury, continued very clearly right up through world war ii, when the fed was still a handmaiden to the treasury. there's a myth, again perpetuated by the fed, that in 1951 a treasury accord was arrived at -- that's true -- and that this accord ended the fed's being pressured by or influenced by the treasury, its being a mere handmaiden to the treasury. one fed source credits martin with battling for the accord on the fed's part. one problem with that: martin was involved in the negotiations, but for the treasury, not the fed. he was rewarded with the chairmanship of the fed by
1:50 pm
truman. he fired the sitting chairman and gave martin the job. martin himself subscribed to what he later called the notion of the independence of the fed within the government. it meant that the fed could be independent so long as it did whatever the administration wanted it to do. you can read places where he says things like, to the fed board, we've got to do what nixon wants, we've got to do what the administration wants, because they are threatening to take away our independence if we don't. even volcker -- volcker's anti-inflation campaign had nothing to do with fed independence. it was the administration
1:51 pm
that turned the fed's focus to fighting inflation. the fed was still doing the administration's bidding; it just so happened that the bidding was not to inflate. the way the fed -- two minutes, great -- the way the fed tries to claim its performance has actually been successful with regard to such things as price level stability, it has to be careful, because there are statistics about these things. here's an example, i think from the atlanta fed: fluctuations in the purchasing power of gold made gold a poor standard on which to base our measure of value. that made trade difficult, since no one knew what a dollar would buy from day to day. the publication goes on to say that when we finally got away from the gold standard, finally we could have a stable measure of value. well, maybe we could have, but we didn't, as you can see from this chart.
1:52 pm
the other thing the fed does with regard to claims about being successful at dealing with inflation is to suggest that the fed is always fighting inflation. here you have a typical example in this cartoon: there's inflation, and there's the fed superman jamming his elbow into it, trying to get that inflation out of the system. another federal reserve bank of atlanta publication says that if the price level begins to rise, the central bank will try to adjust monetary policy. but how come it's beginning to rise? who did that? i was going to talk about deflation. sometimes deflation is good. there are bad deflations, which involve collapses of spending, and all the worst ones have happened
1:53 pm
since the fed. this episode comes to mind as another problem for the fed: banking crises. goodness. they like to talk about what bernanke, in his george washington university lectures, called the bank-run theory of banking crises: if you have a good central bank, it can come to the rescue. he uses frank capra's movie to illustrate how this can happen. there's a good reason for that: there have never been random panics in u.s. history. the big run in 1933 was a panic all right, but it was because people suddenly realized that fdr might devalue the dollar, so they were running for gold. the fed wasn't legally allowed to bail out building and loan associations, and it didn't do many banks any good
1:54 pm
even though the fed was there during the great depression. here's a record of actual bank failures in the u.s. you can do this either by total number of failures or by deposits. look what happens: it's actually worse after the creation of the fed. now, it's not all the fed's fault -- there's a lot going on -- but anyone claiming that the fed improved things would have to explain why these numbers go way up after 1914. after 1934, you have fewer and smaller bank failures, but that's not the fed, that's the fdic -- which created a big moral hazard problem of its own. far worse was the doctrine of too big to fail. it really starts with the bailout of continental illinois bank in chicago in 1984. that's what i wanted to add to what larry had to say about the fed's
1:55 pm
conduct in the crisis. among the things the fed did wrong was to not act according to the classical notion of the lender of last resort. the fed, and bernanke in particular, paid a lot of lip service to it: lend freely, at a high rate, on good collateral. the whole idea is that you're not going to hold out the hope of a rescue for any insolvent institution. the problem was the fed bailed out insolvent institutions. it bailed out -- the problem, though, is what they said when they bailed out bear. when they bailed out bear stearns, geithner and bernanke were saying, we're doing this because it's big. they didn't say, we're doing this
1:56 pm
only because it's solvent. that was too big to fail. as soon as that doctrine was established, lehman and other financial institutions that were bigger had every reason to think they would get bailed out. when lehman didn't get bailed out, of course, the disaster was huge. i'm going to skip a bunch of stuff. i want to talk about this claim, one of the biggest propaganda successes of the fed -- one minute -- the claim that it has done a great job combating the recent crisis and expediting recovery. i'm reminded of the episode of the beverly hillbillies where the word gets out that granny
1:57 pm
has a cure for the common cold. people are trying to get the cure out of her so they can make hay with it. but then at the very end, granny says, you take some of this and in a week to ten days you will be good as new. the difference is that the recovery from the recent crisis has been one of the worst recoveries ever. it's been extremely slow. there's never been a recovery worse than this -- nothing like it. of course, the implication, especially if you go back and look at 19th century crises, is that doing nothing would have been better than what the fed did. we have economists who claim that if it hadn't been for the fed, we would have had another great depression. if you cannot claim that the fed
1:58 pm
-- if the fed creates the second worst crisis in u.s. history, instead of calling that a bad thing, you refer to the first worst crisis in order to claim this one is a success. this is what i call making lemonade out of your very worst lemons. thank you very much. >> thank you, george. our next panelist is dr. jerry dwyer, who is currently the bb&t scholar at clemson and is also at trinity college dublin. this is going to sound like we played some cruel joke on jerry; it's not true. he's a very good guy. he's a former fed employee -- a former vice president at the federal reserve bank of atlanta, where he was director of the center for financial innovation and stability.
1:59 pm
we're very happy to have jerry here. when jerry is done, we will have a little bit of time for questions from the audience. >> i'm not really sure what the right word is. i'm pretty libertarian, actually. in some ways, i feel like i've been invited into a lion's den, but i don't really feel that way. i don't know; you'll find out what i think about stuff. that's about all i can say. if this means anything to you, sometimes i've been introduced as milton friedman's last
2:00 pm
student. after me, he quit. make of it what you want. the federal reserve and what it did in the financial crisis -- i view this as a brief, now that i'm not at the federal reserve, on what i think about the various things that were done. these are arguments that can be made, and you can decide what you think. at the end of the day, you can't really even imagine conclusive proof or evidence about whether something is correct or incorrect. a lot of the discussion is about counterfactuals, and i will discuss them: if you had done something different, then something different would have happened. the answer is, we don't know for sure what would have happened. the fed has done a lot of things. they bailed out some of the creditors of bear