
Hearing on Regulating Artificial Intelligence - C-SPAN - September 12, 2023, 2:35pm-5:21pm EDT

2:35 pm
>> we have really important issues facing this country. as republicans we need to stay focused on the border. we need to stay focused on crime, especially in urban areas. we need to stay focused on inflation. those are issues americans care about, and they want to see a change in leadership as a result of them. we should not start going down these paths that really bear no fruit. we are not going to get an impeachment through the senate. and the idea that somehow january 6 defendants are being treated differently than other prisoners in a jail that has a history of real abuse and poor conditions, it's just not true. so we can waste our time on issues that are not important, or we can focus on issues that are. the reality is that the impeachment process is one that is going on right now. the judiciary committee, the oversight committee, and the ways and means committee are all investigating. they are developing good
2:36 pm
information about hunter biden. i agree with dan, your last guest, that there is not a strong connection at this point between the evidence on hunter biden and any evidence -- >> you can watch all of our programs on our website, c-span.org. we take you live to capitol hill, where microsoft's president is expected to be joined by a law professor and a scientist to testify about proposed legislation that regulates artificial intelligence. live coverage on c-span 3. >> and also to chairman durbin, whose support has been invaluable in encouraging us to go forward here. i have been grateful especially to my partner in this effort, senator hawley, the ranking member. he and i, as you know, have produced a framework, basically a blueprint for a path forward to achieve
2:37 pm
legislation. our interest is in legislation. and this hearing, along with the two previous ones, has to be seen as a means to that end. we are very results oriented. i know you are from your testimony. and i have been enormously encouraged, and emboldened, by the response so far, just in the past few days, and from my conversations with leaders in the industry like mr. smith. there is a deep appetite, indeed a hunger, for rules and guardrails, basic safeguards for businesses and consumers, for people in general, from -- potential perils. there is also a desire to make use of the tremendous potential benefits. and our effort is to provide
2:38 pm
for regulation in the best sense of the word. regulation that permits and encourages innovation, new businesses, technology, and entrepreneurship, and at the same time provides those guardrails, enforceable safeguards, that can encourage trust and confidence in this growing technology. it is not a new technology entirely. it has been around for decades. but artificial intelligence is regarded as entering a new era. and make no mistake, there will be regulation. the only question is how soon, and what? and it should be regulation that encourages the best in american free enterprise but at the same
2:39 pm
time provides the kind of protections that we do in other areas of our economic activity. to my colleagues who say there is no need for new rules, that we have enough laws protecting the public: yes, we have laws that prohibit unfair and deceptive competition. we have laws that regulate airline safety and drug safety. but nobody would argue that simply because we have those rules, we don't need specific protections for medical device safety or car safety. just because we have rules that prohibit discrimination in the workplace does not mean we don't need rules that prohibit discrimination in voting. and we need to make sure that these protections are framed and targeted in a way that applies to the real risks involved. risk-based rules. managing the risks is what we
2:40 pm
need to do here. so, our principles are pretty straightforward, i think. we have no pride of authorship. we have offered this framework to encourage comment. we won't be offended by criticism from any quarter. that is the way we can make this framework better and eventually achieve legislation, we hope, i hope, at least by the end of this year. the framework basically involves establishing a licensing regime for companies that are engaged in high risk ai development. creating an independent oversight body that has expertise with ai and works with other agencies to administer and enforce the law. protecting our national and economic security, to make sure we aren't enabling china or russia and other adversaries to interfere in our democracy or violate human rights.
2:41 pm
requiring transparency about the limits and use of ai models. this includes rules like watermarking, digital disclosure when ai is being used, and data access for researchers. and ensuring ai companies can be held liable when their products breach privacy, violate civil rights, and endanger the public. deep fakes, impersonation -- we've all heard those terms. we need to prevent those harms. senator hawley and i, former attorneys general of our states, have a deep and abiding affection for the potential enforcement powers of those officials, state officials. but the point is there must be
2:42 pm
effective enforcement. private rights of action as well as federal enforcement are very, very important. so let me just close by saying, before i turn it over to my colleague: we are going to have more hearings. the way to build a coalition in support of these measures is to disseminate as widely as possible the information that is needed for our colleagues to understand what is at stake here. we need to listen to the kind of industry leaders and experts that we have before us today. and we need to act with dispatch, more than just deliberate speed. we need to learn from our experience with social media that if we let this horse get out of the barn, it will be even more difficult to contain than
2:43 pm
social media. and we are still seeking to act on social media and the harms that it portends right now, as we speak. we are literally at the cusp of a new era. i asked sam altman, when he sat where you are, what his greatest fear was. i said mine, my nightmare, is the massive unemployment that could be created. that is an issue that we don't deal with directly here, but it shows how wide the ramifications may be. and we do need to deal with potential worker displacement and training. this new era is one that portends enormous promise but also perils. we need to deal with both. i will turn it now to ranking member senator hawley. >> thank you for organizing this hearing.
2:44 pm
this is now the third of these hearings that we have done. i've learned a lot in the previous couple. i think some of what we are learning about the potential of ai is exhilarating. some of it is horrifying. and i think what i hear the chairman saying, and what i agree with, is we have a responsibility here now to do our part to make sure that this new technology, which holds a lot of promise but also peril, actually works for the american people. that it is good for working people. that it is good for families. that we don't make the same mistakes that congress made with social media. 30 years ago now, congress basically outsourced social media to the biggest corporations in the world. that has been, i would submit to you, nearly an unmitigated disaster. we had the biggest, most powerful corporations, not just in america but in the globe, and in the history of the globe, doing whatever they want with social media. running
2:45 pm
experiments basically every day on america's kids. inflicting mental health harms the likes of which we have never seen. messing around in our elections in a way that is deeply, deeply corrosive to our way of life. we cannot make those mistakes again. so, we are here, as senator blumenthal said, to try to find answers. to try to make sure that this technology is something that actually benefits the people of this country. i have no doubt, with all due respect to the corporate representatives in front of us, the heads of these corporations, i have no doubt it will benefit your companies. what i want to make sure is that it benefits the american people. that is the task we are engaged in. i look forward to this today. thank you, mr. chairman. >> i want to introduce our witnesses. as is custom i will swear them in and ask them to submit their testimony. welcome to all of you.
2:46 pm
william dally is nvidia's chief scientist. he joined in january 2009 as chief scientist after spending 12 years at stanford university, where he was chairman of the computer science department. he has published over 250 papers. he holds 120 issued patents. he is the author of 4 textbooks. brad smith is the vice chair and president of microsoft. as microsoft's vice chair and president, he is responsible for spearheading the company's work on, and representing it publicly in, a wide variety of critical issues involving the intersection of technology and society, including artificial intelligence, cybersecurity, privacy, environmental sustainability, human rights, digital safety, immigration, philanthropy, and products and business for nonprofit customers.
2:47 pm
we appreciate your being here. professor woodrow hartzog is professor of law and class of 1960 scholar at boston university school of law. he's also a nonresident fellow at the cordell institute for policy in medicine and law at washington university, a faculty associate at the berkman klein center for internet and society at harvard university, and an affiliate scholar at the center for internet and society at stanford law school. i could go on about each of you at much greater length with all of your credentials, but suffice it to say they are very impressive. if you would now stand, i will administer the oath. do you solemnly swear the
2:48 pm
testimony you are about to give is the truth, the whole truth, and nothing but the truth? >> why don't we begin with you, mr. dally. >> chairman blumenthal, ranking member hawley, thank you for the privilege of testifying today. i'm delighted to discuss artificial intelligence's journey and its future. nvidia is at the forefront of accelerated computing and generative ai, technologies with the potential to transform industries, address global challenges, and profoundly benefit society. since our founding in 1993, we have been committed to developing technology to empower people and improve the quality of life worldwide. today over 40,000 companies use nvidia's platforms across media and entertainment, scientific computing, healthcare, financial services, internet services, automotive, and manufacturing to solve the
2:49 pm
world's most difficult challenges and bring new products and services to consumers worldwide. at our founding in 1993 we were a 3-d graphics startup, one of dozens of startups competing to create an entirely new market for accelerators to enhance computer graphics for games. in 1999 we invented the graphics processing unit, or gpu, which could perform calculations in parallel. we launched -- we recognized the gpu could theoretically accelerate any application that could benefit from parallel processing. this bet paid off. researchers worldwide innovate on nvidia gpus. through collective efforts we have made advances in ai that will bring tremendous benefits to society across sectors such as healthcare, medical research, education, business, cybersecurity, climate, and beyond. however, we also recognize that, like any new product or service, ai products and services have risks. those who make and use or sell ai-enabled products and services are responsible for
2:50 pm
their conduct. fortunately, many uses of ai applications are subject to existing laws and regulations that govern the sectors in which they operate. ai-enabled services in high risk sectors could be subject to enhanced certification requirements when necessary, while other applications with less risk of harm may need less stringent licensing or regulation. with clear, stable, and thoughtful regulation, ai developers can work to benefit society while making products and services as safe as possible. for our part, we are committed to the safe and trustworthy development and deployment of ai. for example, nemo guardrails, our open source software, empowers developers to guide generative ai applications to produce accurate, appropriate, and secure text responses. nvidia has implemented model risk management guidance, ensuring a comprehensive assessment and management of risk associated with nvidia-developed models. today nvidia announces it is endorsing the white house's
2:51 pm
voluntary commitments on ai. we can and will continue to identify and address the risks. no discussion of ai would be complete without addressing what is often described as frontier ai models. some have expressed fear that frontier models will evolve into uncontrollable artificial general intelligence, which could escape our control and cause harm. fortunately, uncontrollable artificial general intelligence is science fiction, not reality. at its core, ai is a software program that is limited by its training, the inputs provided to it, and the nature of its output. in other words, humans will always decide how much decision-making power to cede to ai models. as long as we are thoughtful and measured, we can ensure trustworthy and ethical deployment of ai systems without suppressing innovation. we can spur innovation by ensuring ai tools are widely available to everyone, not concentrated in the hands of a few powerful firms. i will close with two observations. first, the genie is already out of
2:52 pm
the bottle. ai algorithms are widely published and available to all. ai software can be transmitted anywhere in the world at the press of a button, and many ai development tools, frameworks, and foundational models are open source. second, no nation, and certainly no company, controls a chokepoint to ai development. leading u.s. computing platforms are competing with companies from around the world. while u.s. companies may currently be the most energy-efficient, cost-efficient, and easiest to use, they are not the only viable alternatives for developers abroad. other nations are developing ai systems with or without u.s. components, and they will offer those applications in the worldwide market. safe and trustworthy ai will require multilateral and multi-stakeholder cooperation, or it will not be effective. the united states is in a remarkable position today, and with your help we can continue to lead on policy and innovation into the future. nvidia stands ready to work with you to ensure the development and deployment of generative ai and accelerated
2:53 pm
computing serve the best interest of all. thank you for the opportunity to testify before this committee. >> thank you very much. mr. smith. >> ranking member hawley, members of the subcommittee, my name is brad smith, the vice chair and president of microsoft. thank you for the opportunity to be here today. more importantly, thank you for the work that you have done to create the framework you have shared. i think you put it very well: first, we need to learn and act with dispatch. ranking member hawley, i think you offered real words of wisdom. let's learn from the experience the whole world had with social media. and let's be clear eyed about the promise and the peril in equal measure as we look to the future of ai. i would first say i think your framework does that. it doesn't attempt to answer every question, by design. but it is a very strong and positive step in the right direction.
2:54 pm
and puts the u.s. government on the path to be a global leader in ensuring a balanced approach that will enable innovation to go forward with the right legal guardrails in place. as we all think about this more i think it is worth keeping three goals in mind. first let's prioritize safety and security. which your framework does. let's require licenses for advanced ai models and uses in high risk scenarios. let's have an agency that is independent and can exercise real and effective oversight over this category. and then let's couple that with the right kinds of controls that will ensure safety of the sort we've already seen i think start to emerge in the white house commitments that were launched on july 21st. second, let's prioritize as you do the protection of our
2:55 pm
citizens and consumers. let's prioritize national security, always, in a sense, the first priority of the federal government. but let's think as well, as you have, about protecting the privacy, the civil rights, and the needs of kids, among many other things, and ensure we get this right. let's take the approach that you are recommending, namely, focus not only on those companies that develop ai, like microsoft, but also on companies that deploy ai, like microsoft. in different categories we are going to need different levels of obligations. and as we go forward, let's think about the connection between, say, the role of a central agency that will be on point for certain things, as well as the obligations that frankly will be part of the work of many agencies, and indeed our courts as well. and let's do one other thing as
2:56 pm
well. maybe it is one of the most important things we need to do, so we ensure that the threats that many people worry about remain part of science fiction and don't become a new reality: let's keep ai under the control of people. it needs to be safe. and to do that, as we have encouraged, there need to be safety brakes, especially for any ai application or system that can control critical infrastructure. if a company wants to use ai to, say, control the electrical grid, or all of the self driving cars on our roads, or the water supply, we need to learn from so many other technologies that do great things but also can go wrong. we need a safety brake, just like we have a circuit breaker in every building and home in this country to stop the flow of electricity if that is needed. (a minimal sketch of what such a brake could look like in software follows below.) then i would say let's keep one third goal in mind as well.
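Neither the framework nor the testimony specifies how a "safety brake" would be implemented; that remains an open design question. As a purely illustrative sketch, assuming a hypothetical AI controller for something like grid load management, a supervisor could trip to a safe, human-controlled state whenever a proposed command leaves a pre-approved envelope, much like a household circuit breaker. Every name, unit, and threshold below is invented:

```python
# Illustrative sketch only: a software "safety brake" in the spirit of the
# circuit-breaker analogy in the testimony. All names and thresholds here
# are hypothetical, not any company's actual safety architecture.
from dataclasses import dataclass


@dataclass
class Command:
    setpoint_mw: float  # requested grid adjustment, in megawatts


class SafetyBrake:
    """Wraps an AI controller and trips to a safe state when a proposed
    command leaves the pre-approved operating envelope."""

    def __init__(self, controller, max_step_mw: float = 50.0):
        self.controller = controller
        self.max_step_mw = max_step_mw
        self.tripped = False

    def act(self, sensor_reading: float) -> Command:
        if self.tripped:
            # Brake has tripped: no AI actuation until a human resets it.
            return Command(setpoint_mw=0.0)
        cmd = self.controller.propose(sensor_reading)
        if abs(cmd.setpoint_mw) > self.max_step_mw:
            self.tripped = True  # out of envelope: trip and hold safe state
            return Command(setpoint_mw=0.0)
        return cmd
```

The design choice mirrors a physical breaker: the brake fails closed (no actuation) and requires an explicit human reset, so a misbehaving model cannot talk its way back into control.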
2:57 pm
this is the one where i would just urge you to consider doing a bit more to add to the framework. let's remember the promise that this offers. right now, if you go to state capitols, or you go to other countries, i think there is a lot of energy being put into that. when i see what governor newsom is doing in california, or governor burgum in north dakota, i see them at the forefront of figuring out how to use ai to, say, improve the delivery of healthcare. advance medicine. improve education for our kids. and maybe most importantly, make government services more efficient -- or use the savings to provide more and better services to our
2:58 pm
people. that would be a good problem to have the opportunity to consider. in sum, professor hartzog has said this is not a time for half measures. it is not. he is right. let's go forward as you have recommended. let's be ambitious and get this right. thank you. >> thank you very much. professor hartzog, i read your testimony and you are very much against half measures. we look forward to hearing what the full measures you recommend are. >> that is correct, senator. chair blumenthal and members of the committee, thank you for inviting me to appear before you today. i am a professor of law at boston university. my comments today are based on a decade of researching law and technology issues, and i'm drawing from research on artificial intelligence policy that i conducted as a fellow with colleagues at the cordell institute at washington university in st. louis.
2:59 pm
committee members, up to this point ai policy has largely been made up of industry led approaches like encouraging transparency, mitigating bias, and promoting principles of ethics. i would like to make one simple point in my testimony today: these approaches are vital, but they are only half measures. they will not fully protect us. to bring ai within the rule of law, lawmakers must go beyond these half measures to ensure that ai systems and the actors that deploy them are worthy of our trust. half measures like audits, assessments, and certifications are necessary for data governance. but industry leverages procedural checks like these to dilute our laws into managerial box-checking exercises that entrench harmful surveillance based business models. a checklist is no match for the staggering fortune available to those who exploit our data, labor, and privacy to develop and deploy ai systems. it is no substitute for
3:00 pm
meaningful liability when ai systems harm the public. today i would like to focus on three popular half measures and why lawmakers must do more.
3:39 pm
if you are doing that and you don't understand the content, you can ask the ai tutor to help you solve the problem, and i think it is not only for the kids, but it's usable for the parents, and i think it's good. let's just say a 14-year-old, or whatever is the age of eighth grade algebra. i try to help them with their homework, and i think we want kids, in a controlled way with the safeguards, to use something that way. >> i'm not talking about tutors. i'm talking about your ai chatbot. famously, earlier this year, your
3:40 pm
chatbot -- a technology journalist wrote about this -- your chatbot was urging this person to break up their marriage. do we want kids to be having those conversations? >> of course not. >> would you commit to raising the age? >> i don't want a chatbot to break up anybody's marriage. >> i don't either. >> yeah, we are not going to make the decision based on the exception. it goes to -- we have multiple tools. age is a very bright line. >> it is. that's why i like it. my point is, there is a safety architecture that we can apply. >> your safety architecture, did it stop an adult, did it
3:41 pm
stop the chatbot from having the discussion with an adult in which it said, you don't really love your wife. your wife isn't good for you. she doesn't really love you. this is an adult. can you imagine the kind of things that your chatbot would say to a 13-year-old? i'm serious about this. do you think it's a good idea? >> wait a second. let's put that into context. at a point when the technology had reached 20,000 people, the journalist spent two hours of the evening ignoring his wife and interacting with a computer, trying to break the system, which he managed to do. we didn't envision that use. the next day we had fixed it. >> are you telling me that you've envisioned all the questions that 13-year-olds will ask, and that their parents should be absolutely fine with that and just trust you the same way the new york times writer did? >> as we go forward, we have an increasing capability to learn from the experience of real people. >> that's what worries me.
3:42 pm
that's exactly what worries me. what you are saying is, we have to have some failures. i don't want 13-year-olds to be your guinea pig. i don't want 14-year-olds to be your guinea pig. i don't want you to learn from their failures. if you have to learn from failures, go right ahead; just let's not learn from the failures of america's kids. this is what happened with social media. we had social media companies who made billions of dollars giving us a mental health crisis in this country. they got rich, and kids got depressed and committed suicide. why would we want to run that experiment again with ai? why not just raise the age? >> we shouldn't want anybody to be the guinea pig, regardless of the age or anything. >> let's rule kids out right now. >> let's also recognize that technology does require real uses. what's different about this technology and, from my view,
3:43 pm
with the social media experience is that we not only have the capacity, but we have the will, and we are applying that will to fix things in hours and days. >> yes, to fix things after the fact. i mean, i'm sorry, this sounds to me like you are going to say, trust us, we are going to do well with this. i'm just asking why we should trust you with our children? >> i'm not asking you to trust us, although i hope we will work every day to earn it, and that's why you have the licensing obligation. >> there is no licensing obligation now. >> i'm asking you to make a choice now, to say -- you can go tell every parent in america now, microsoft is going to protect your kids. we would never use your kids as a science experiment ever.
3:44 pm
never. therefore, we are not going to target your kids, and we are not going to allow your kids to be used by our chatbots as a source of information if they are younger than 18. >> i think there are two things that you are talking about. >> i'm talking about kids. it's very simple. >> we don't want to use kids for monetizing, et cetera, but i am equally of the view, i don't want to cut off an eighth grader today from the ability to use this tool that will help them learn algebra or math in a way that they couldn't a year ago. >> with all due respect, it wasn't algebra or math that your chatbot was recommending when it was trying to break up some reporter's marriage. >> we are mixing things. >> no, we are not. we're talking about your chatbot. >> we are talking about the chatbot, we are talking about chat, and i'm talking about the protection of children and how we make technology better. and, yes, there was that episode back in february, on valentine's day, and six months later, if that journalist tried to do the same thing again, it would not happen. >> you want me to be done?
3:45 pm
>> i just don't want to miss my vote. >> you are very kind. thank you. some of us haven't voted yet, and i wanted to turn to you, mr. dally. in march, nvidia announced a partnership with getty images to develop models that generate images and provide royalties to content creators. why was it important to the company to partner with getty and pay for the images in their library when developing generative ai models? >> we believe in respecting people and their rights -- the rights of the photographers who produced the images that our models are trained on -- and we didn't want to infringe on those. we did not just grab a bunch of
3:46 pm
images off the web to train our model. we partnered with getty and we trained our models, and when people use picasso to generate images, the people who provided the original content are remunerated. we see this as a way of going forward in general, where the people who are providing the ip that trains these models should benefit from the use of that ip. >> today the white house announced eight more companies that are taking steps to move towards safe, secure, and transparent development of ai, and nvidia is one of those companies. can you talk about the steps that you have taken and what steps you plan to take for the responsible development of ai? >> we've done a lot already. we have implemented nemo guardrails so that we can basically put guardrails around our large language models, so that inappropriate prompts to a model don't get a response, and if the model inadvertently were to
3:47 pm
generate something inappropriate, that is detected and intercepted before it can reach the user of the model. we have a set of guidance that we provide for all of our generative models on how they should be appropriately used. we provide model cards to say where the model came from and what it is trained on, and we test these models very thoroughly; the testing depends on the use. for certain models, we test them for bias. we want to make sure that when a model refers to a doctor, it doesn't automatically assume it's a him. we test them for safety. we have a variant of our nemo model called bionemo that is used in the medical profession, and we want to make sure that the advice it gives is sound. there are a number of other metrics. >> very good. thank you. professor, do you think congress should be more focused on regulating the inputs and design of generative ai, or focus more on outputs and
3:48 pm
capability? >> certainly. i think the area that has been ignored up to this point has been the design of and inputs to a lot of these tools. to the extent that area could use some revitalization, i would encourage attention to inputs and outputs, design and use. >> i suggest you look at these election bills, because, as we have been talking about, i think we have to move quickly on those, and the fact that it is bipartisan has been a very positive thing. >> absolutely. >> i want to thank mr. smith for wearing a purple vikings tie. i know that was an ai generated message that you got too. i know this was a smart move with me after their loss on sunday. i will remind you on thursday night. >> i can assure you that it was
3:49 pm
an accident. >> very good. thank you all. i see we have a lot of work to do. thanks. >> thank you, mr. chairman. mr. smith, i want to come to you first to talk about china and the chinese communist party and the way that they have gone about -- and we have seen a lot of it on tiktok -- these influence campaigns that they are running to influence certain thought processes within the american people. i know that you all just did a report on china. you covered some of the disinformation and some of the information that you have obtained. talk to me a little bit about how microsoft and the industry as a whole can combat some of these campaigns. >> i think there are a couple of things that we can think more about and do more about.
3:50 pm
the first is, we all should want to ensure that our own products and systems and services are not used by foreign governments in this manner, and i think that there is room for the evolution of export controls and next-generation export controls to help prevent that. i think there is also room for a concept that has worked since the 1990s in the world of banking and financial services: know your customer requirements. we have been advocates for those, so that if there is
3:53 pm
and know your customer requirements. we have set requirements in effect so that they are deployed to data centers. >> let me come to you, professor. i think one of the things we have to look at with ai is the detrimental impacts. we don't always want to look at the doomsday scenarios, but we are looking at some of the reports on surveillance, with the ccp surveilling the uyghurs and with iran surveilling women, and i think there are other countries that are doing this same type of surveillance. what can you do to prevent that? how do we prevent that? >> senator, i've argued in the past that facial recognition technology is a certain source
3:54 pm
of surveillance, that these technologies are fundamentally dangerous, and that there is no world in which they would be safe for any of us. we should prohibit them outright: a prohibition on biometric surveillance in public spaces and a prohibition on emotion recognition. these are what i refer to as strong, bright-line measures that draw absolute lines in the sand, rather than the procedural ones that are entrenching this kind of harmful surveillance. >> mr. chairman, can i take another 30 seconds? because mr. dally was shaking his head in agreement on some things and i was catching that. do you want to weigh in before i close my questioning on either of these topics? >> i was in general agreement. we need to be very careful about who we sell our technology to, and we try to
3:55 pm
sell to people who are using this for good commercial purposes and not to suppress others, and we will continue this, because we don't want to see this technology misused to oppress anybody. >> awesome. thank you. >> thank you, senator blackburn. my colleague senator hawley mentioned that we have a forum tomorrow, which i welcome. i think anything to aid in our education and enlightenment is a good thing, and i just want to express the hope that some of the folks who are appearing in that venue will also cooperate here before the subcommittee. we will certainly be inviting more than a few of
3:56 pm
them, and i want to express my thanks to all of you for being here, but especially mr. smith, who also has to be here tomorrow to talk to my colleagues privately. our efforts are complementary, not contradictory. i'm very focused on election interference, because elections are upon us, and i want to thank my colleague for taking a first step towards addressing the harms that can result from all of the potential perils that we have identified here. it seems to me that authenticating the truth, the embodiment of true images and voices, is one
3:57 pm
approach, and banning the deep fake impersonations is another approach. obviously banning anything in the public realm and in the public discourse endangers the first amendment, which is why exposure is often the remedy that we see, especially in campaign finance. so maybe i should ask all of you whether you see banning certain kinds of election interference as workable. and i raise the specter of foreign interference, of frauds and scams that could be perpetrated as they were in 2016, and i think it is one of those nightmares that should
3:58 pm
keep us up at night. we are an open society, and we welcome free expression, and ai is a form of expression, free or not, whether it is generated wholesale or simply touching up some of the background in a tv ad. maybe each of you could talk a little bit about what you see the potential remedies are. >> it is a great concern that the american public may be misled by fakes of various kinds. as you mentioned, the use of technology to authenticate a true image and voice at its source, and tracking that provenance, will let us know what a real image is.
3:59 pm
if we insist that ai content, ai generated content, be labeled as such, then people are tipped off that this is generated and not the real thing. you know, i think we need to avoid having some foreign entity interfering in our elections, and at the same time, ai generated content is speech, and i think it would be a dangerous precedent to try to ban it. i think it is much better to have exposure, as you suggested, than to ban something outright. >> three thoughts. number one, this is a critical year for elections, not only for the united states, but for the united kingdom, india, across the european union -- over 2 billion people. this is a global issue for the
4:00 pm
world's democracies. and number two, i think you are right to focus on the first amendment, because it is such a critical cornerstone for american political life and the rights that we all enjoy. i will also be quick to add, i don't think the russian government qualifies for protection, and if they are seeking to interfere in our elections, then i think that the country needs to take a strong stand, and a lot of thought needs to be given to how to do that effectively. this goes to the heart of your question and why it is such a good one. i think it is going to require some real thought, discussion, and an ultimate consensus to emerge around one specific scenario. let's imagine, for a moment, that there is a video that involves a presidential candidate. and then let's imagine that someone uses ai to put different words into the mouth of that candidate, and uses ai technology to perfect that to a
4:01 pm
level that it is difficult for people to recognize as fraudulent. then you get to this question: what should we do? and, at least as i've been trying, and we've been trying, to think this through, i think we have two broad alternatives. one is, we take it down, and the other is, we relabel it. if we do the first, then we're acting as censors, and i do think that makes me nervous. i don't think that's really our role to act as censors, and the government really cannot, i think, under the first amendment. but relabeling to ensure accuracy, i think that is probably a reasonable path. but really, what this highlights is the discussion still to be had, and i think the urgency for that conversation to take place. >> and i will just say, and then i want to come to you, professor hartzog, that i agree emphatically with your point about the russian government or the chinese
4:02 pm
government or the saudi government as potential interference. they're not entitled to the protection of our bill of rights when they are seeking to destroy those rights, and purposefully trying to take advantage of a free and open society to, in effect, decimate our freedoms. so i think there is a distinction to be made there in terms of national security, and i think that rubric of national security, which is part of our framework, applies with great force in this area. and that is different from a presidential candidate putting up an ad that, in effect, puts words in the mouth of another candidate. and as you may know, we began these hearings with
4:03 pm
introductory remarks from me that were an impersonation, taken from my comments on the floor, taking my voice from speeches that i made on the floor of the united states senate, with content generated by chatgpt that sounded exactly like something i would say, in a voice that was indistinguishable from mine. and, obviously, i disclosed that fact at the hearing. but in real time -- as mark twain famously said, a lie travels halfway around the world before the truth gets out of bed -- we need to make sure that there is action in real time if you're going to do the kind of identification that you suggested. real-time meaning real-time in a campaign, which is measured in minutes and hours, not in days and months.
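None of the witnesses specifies a concrete authentication mechanism here; as a purely illustrative sketch of the authenticate-at-the-source idea mr. dally raised, a campaign or publisher could attach a cryptographic tag to media at creation, and platforms could verify it before labeling content as authentic. The key and workflow below are hypothetical, and real provenance standards such as C2PA use public-key signatures rather than the shared-secret HMAC used here for brevity:

```python
# Illustrative sketch only: tagging media at the source so later copies
# can be checked. The key, names, and workflow are all hypothetical.
import hashlib
import hmac

PUBLISHER_KEY = b"hypothetical-secret-held-by-the-publisher"


def sign_media(media_bytes: bytes) -> str:
    """Attach this tag when the content is created at the source."""
    return hmac.new(PUBLISHER_KEY, media_bytes, hashlib.sha256).hexdigest()


def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Content that fails this check (or carries no tag) gets labeled
    as unverified rather than taken down."""
    return hmac.compare_digest(sign_media(media_bytes), tag)


ad = b"...campaign video bytes..."
tag = sign_media(ad)
print(verify_media(ad, tag))            # True: authentic, label as verified
print(verify_media(ad + b"edit", tag))  # False: altered, label as unverified
```

Note that verification of this kind supports the relabel-rather-than-remove approach mr. smith describes: an invalid or missing tag triggers a label, not a takedown.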
4:04 pm
professor hartzog? >> thank you, senator. like you, i'm nervous about just coming out and saying we're going to ban all forms of speech, particularly when you're talking about something as important as political speech. and like you, i also worry about disclosure alone as a half measure. earlier in this hearing, it was asked, what is a half measure? and i think that goes towards answering your question today. i think the best way to think about half measures is an approach that is necessary, but not sufficient, that risks giving us the illusion that we've done enough. but ultimately, and i think this is the total point, it doesn't really disrupt the business model and the financial incentives that have gotten us here in the first place. and so, to help answer your question, one thing that i would recommend, and i applaud your
4:05 pm
bipartisan framework for doing this, is bringing lots of different tools to bear on this problem. think about the role that surveillance advertising plays in powering a lot of these harmful technologies and ecosystems, which allows the lie not just to be created, but to flourish and to be amplified. and so i think about rules and safeguards that we could adopt to help limit those financial incentives, borrowing from standard principles of accountability: use disclosure where it's effective; where it's not effective, you have to make it safe; and if you can't make it safe, it shouldn't exist. >> i'm going to turn to senator hawley for more questions, but i think this is a real conundrum. we need to do something about it. we need more than half measures. we can't delude ourselves, with a false sense of comfort, into thinking that we've solved the problem if we don't provide effective enforcement. and, to be very
4:06 pm
blunt, the federal elections commission often has been less than fully effective, a lot less than fully effective, in enforcing rules relating to campaigns. so, there again, an oversight body with strong enforcement authority, sufficient resources, and the will to act is going to be very important if we're going to address this problem in real time. senator hawley? >> mr. smith, let me just come back to something you said, thinking about workers now. you talked about wendy's, i think it was; they're automating the drive through, and you talked about, you know, this being a good thing. i just want to press on that a little bit. is it a good thing that workers lose their jobs to ai, whether it's at wendy's or
4:07 pm
whether it's at walmart or whether it's at the local hardware store? i mean, you pointed out, your comment was, that there's really no creativity involved in taking orders in the drive through. but that is a job, oftentimes a first job for younger americans. and, hey, in this economy, where the wages of blue-collar workers have been flat for 30, 40 years and running? what worries me is that oftentimes, what we hear from the tech sector, to be honest with you, is that jobs that don't have creativity, as tech defines it, don't have value. i'm, frankly, scared to death that ai will replace lots of jobs that tech types think aren't creative, and will leave more blue-collar workers without any place to turn. but my question for you is, can we expect more of this, and is it really progress for folks to lose the kind of jobs that, you know, i expect that's not the best paying job in the world, but at
4:08 pm
least it's a job? and do we really want to see more of these jobs lost? >> well, to be clear, first, i didn't say whether it was a good or bad thing. i was asked to predict what jobs would be impacted, and identified that job is one that likely would be. so, but let's, i think, step back. because i think your question is critically important. let's first reflect on the fact that, you know, we've had about 200 years of automation that have impacted jobs. sometimes for the better, sometimes for the worse. in wisconsin, where i grew up, or in missouri, where my father grew up, if you go back 150 years, it took 20 people to harvest an acre of wheat or corn , and now it takes one. so 19 people don't work on that acre anymore. and that's been an ongoing part of technology. the real question is this. how do we ensure that technology advances so that we help people get better jobs, get the skills they need for
4:09 pm
those jobs, and hopefully do it in a way that broadens economic opportunity rather than narrows it? i think the thing we should be the most concerned by is that since the 1990s, and i think this is the point you're making, if you look at the flow of digital technology, you know, fundamentally, we've lived in a world that has widened the economic divide. those people with a college or graduate education have seen their incomes rise in real terms. those people with, say, a high school diploma or less have seen their income level actually drop, compared to where it was in the 1990s. so what do we do now? well, i'll at least say what i think our goals should be. can we use this technology to help advance productivity for a much broader range of people? including people who didn't have the good fortune to go, say, where you or i went, to
4:10 pm
college or law school. and can we do it in a way that not only makes them more productive, but actually reaps some of the dividends of that productivity for themselves in a growing income level? i think it's that conversation that we need to have. >> i agree with you, and i hope that that's what ai can do. you talked about the farm; it used to take 20 people to do what one person could do. it used to take thousands of people to produce textiles or furniture or other things in this country. we're now at zero. so we can tell the tale in different ways. i'm not sure that seeing working-class jobs go overseas, or be replaced entirely, is a success story. in fact, i'd argue it's not at all. it's not a success story. i'd argue more broadly that our economic policy of the last 30 years has been downright disastrous for working people. and tech companies and financial institutions and certainly banks, wall street, they have reaped huge
4:11 pm
profits, but blue-collar workers could barely find a good paying job. i don't want ai to be the latest accelerant of that trend. and so i don't really want every service station in america to be manned by some computer such that nobody can get a job anymore, get their foot in the door, start their climb up the ladder. that worries me. let me ask you about something else here, in my expiring time. you mentioned national security. critically important. of course, there's no national security threat that is more significant for the united states than china. let me just ask you, is microsoft too entwined with china? you have the microsoft research asia lab that was set up in beijing back in the late 1990s. you've got centers now in shanghai and elsewhere. you've got all kinds of cooperation with chinese state owned businesses. i'm looking at an article here from protocol magazine, where one of the contributors said that microsoft had been the alma mater of chinese big tech.
4:12 pm
are you concerned about your degree of entanglement with the chinese government? do you need to be decoupling in order to make sure that our national security interests aren't fatally compromised? >> i think it's something that we need to be and are focused on. to some degree, in some technology fields, microsoft is the alma mater of the technology leaders in every country in the world, because of the role that we played over the last 40 years. but when it comes to china today, we are and need to have very specific controls on who uses our technology, and for what, and how. that's why we don't, for example, do work on quantum computing, or provide facial recognition services, or focus on synthetic media, or a whole variety of things, while, at the same time, when starbucks has stores in china, i think it's good that they can run their services in our data
4:13 pm
center rather than in a chinese company's data center. >> just on facial recognition: back in 2016, your company released a database of 10 million faces without the consent of the folks who were in the database. you eventually took it down, although it took three years. china used that database to train much of its facial recognition software and technology. i mean, isn't that a problem? you said that microsoft might be the alma mater of many companies' ai, but china's unique, no? i mean, china is running concentration camps using digital technology like we've never seen before. and isn't that a problem for your company to be in any way involved in that? >> we don't want to be involved in that in any way, and i don't believe we are. >> are you going to close your research centers in china, your microsoft research asia lab in beijing, your center in shanghai? >> i don't think that will accomplish what you're asking us to accomplish. >> you're running thousands of people through your centers out into the chinese government and
4:14 pm
chinese state owned enterprises. isn't that a problem? >> first of all, there's a big premise there, and i don't embrace the premise that that is, in fact, what we're doing. >> well, which part is wrong? >> the notion that we're running thousands of people through and then they're going into the chinese government. >> is that not right? i thought you had 10,000 employees in china whom you recruited from chinese state owned agencies, chinese state owned businesses. they come work for you, and then they go back to the state owned entities. >> we have employees in china; in fact, we have about that number. to my knowledge, that is not where they're coming from, and that is not where they're going. we are not running that kind of revolving door. and it's all about what we do, and who we do it with. that, i think, is of paramount importance, and that's what we're focused on. >> you condemn what the chinese government is doing in all of that? >> we do. we do everything we can to
4:15 pm
ensure that our technology is not used in any way for that kind of activity, in china and around the world, by the way. >> but you condemn it, to be clear. >> yes. >> what are the safeguards that you have in place such that your technology is not further enabling the chinese government, given the number of people you employ there and the technology developed there? >> well, take something like facial recognition, which is a part of your question. we have very tight controls that limit the use of facial recognition in china, including controls that, in effect, make it very difficult, if not impossible, to use it for any kind of real-time surveillance at all. and, by the way, the thing we should remember: the u.s. is a leader in many ai fields. china is the leader in facial recognition technology and the ai for it. >> well, in part because of the information that you helped them acquire, no? >> no. it's because they have the world's most data. >> well, yeah, but you gave them --
4:16 pm
>> no. i don't think that. >> you don't think you had anything to do with it? >> i don't think -- when you have a country of 1.4 billion people, and you decide to have facial recognition used in so many places, it gives that country massive data. >> but are you saying that the database that microsoft released in 2016 wasn't used by the chinese government to train their facial recognition? >> i am not familiar with that. i'd be happy to provide you with information. but, my goodness, the advance in that facial recognition technology -- if you go to another country where they're using facial recognition technology, it's highly unlikely it's american technology. it's highly likely that it's chinese technology, because they are such leaders in that field, which i think is fine. i mean, if you want to pick a field where the united states doesn't want to be a technology leader, i'd put facial recognition technology on that list. but let's recognize it's
4:17 pm
homegrown. >> how much money has microsoft invested in ai development in china? >> i don't know, but i will tell you this. the revenue that we make in china, which accounts for, what, about 1 out of every 6 humans on this planet -- you know, it's 1.5% of our global revenue. it's not the market for us that it is for other industries or even some other tech companies. >> sounds, then, like you can afford to decouple. >> but is that the right thing to do? >> yes. and again, a regime that is fundamentally evil, that is inflicting the kind of atrocities on its own citizens that you alluded to, that is doing what it's doing to the uyghurs, that is running modern-day concentration camps -- yeah, i think it is. >> but there are two questions that i think at least are worthy of thought. number one, do you want general motors to manufacture cars, let's just say, sell cars in china? do you want to create jobs for people in michigan or missouri
4:18 pm
so that those cars can be sold in china? if the answer to that is yes, then think about the second question. how do you want general motors in china to run its operations, and where would you like it to store its data? would you like it to be in a secure data center run by an american company, or would you like it to be run by a chinese company? which will better protect general motors' trade secrets? i'll argue we should be there so that we can protect the data of american companies, european companies, japanese companies. even if you disagree on everything else, that, i believe, serves this country well. >> you know what, i think you're doing a lot more than just protecting data in china. you have major research centers, thousands, tens of thousands of employees. and to your question, do i want general motors to be building cars in china? no, i don't. i want them to be making cars
4:19 pm
here in the united states with american workers. and do i want american companies to be aiding, in any way, the chinese government and their oppressive tactics? i don't. would you like me to yield to you now? you ready? >> i have been very hesitant to interrupt. the discussion, the conversation here has been very interesting. i'm going to call on senator ossoff, and then i have a couple of follow-up questions. >> thank you, mr. chairman, and thank you all for your testimony. just getting down to the fundamentals, mr. smith: if we're going to move forward with a legislative framework, a regulatory framework, we have to define clearly, in legislative text, precisely what it is that we're regulating. what is the scope of regulated activities, technologies, and products? so how should we consider that question? and how do we define
4:20 pm
the scope of technologies, the scope of services, the scope of products that should be subject to a regime of regulation that is focused on artificial intelligence? >> i think there are three layers of technology on which we need to focus in defining the scope of legislation and regulation. first is the area that has been the central focus of 2023, in the executive branch and here on capitol hill. it's the so-called frontier, or foundation, models that are the most powerful, say, for something like generative ai. in addition, there are the applications that use ai, or, as senators blumenthal and hawley have said, the deployers of ai. if there is an application that calls on that model in what we consider to be a high risk scenario, meaning it could make a decision that would have an impact on, say, the privacy rights, the civil liberties, the rights of children or needs of children, then i think we
4:21 pm
need to think hard and have law and regulation that is effective to protect americans. and the third layer is the data center infrastructure, where these models and these applications are actually deployed. we should ensure that those data centers are secure, that there are cybersecurity requirements that the companies, including ours, need to meet. we should ensure that there are safety systems at one, two, or all three levels if there is an ai system that is going to automate and control, say, something like critical infrastructure, such as the electrical grid. so those are the areas where we would say to start, with some clear thinking, and a lot of effort to learn and apply the details, but focus there. >> as more and more models are trained and developed to higher levels of power and capability, there will be a proliferation, there may be a proliferation of
4:22 pm
models. perhaps not the frontier models. perhaps not those at the bleeding edge that use the most compute of all, but powerful enough to have serious implications. so is the question which models are the most powerful at a moment in time? or is there a threshold of capability or power that should define the scope of regulated technology? >> i think you've just posed one of the critical questions that, frankly, a lot of people inside the tech sector and across the government and in academia are really working to answer. and i think the technology is evolving, and the conversation needs to evolve with it. let's just posit this. there's something like gpt-4 from openai. let's just posit it can do 10,000 things really well. it's expensive to create, and
4:23 pm
it's relatively easy to regulate in the scheme of things, because there's one or two or 10. but now let's go to where you're going, which i think is right. what does the future bring in terms of proliferation? imagine that there is an academic, a professor at hartzog's university, who says, i want to create an open source model. it's not going to do 10,000 things well, but it's going to do four things well. it won't require as many nvidia gpus. it won't, you know, require as much data. but let's imagine that it could be used to create the next virus that could spread around the planet. then you'd say, well, we really need to ensure that there is safety architecture and control around that as well. and that's the conundrum. that's why this is a hard problem to solve. it's why we're trying to build safety architecture in our data centers, so that open-source models can, say, run there and still be used in ways that will prohibit that kind of harm from taking place.
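The witnesses leave open where a licensing line would actually be drawn. One proxy that has been widely discussed outside this hearing (and is not proposed by anyone on the panel) is estimated training compute, using the standard rule of thumb that training a transformer costs roughly 6 x parameters x tokens floating-point operations. A toy version of such a threshold test might look like the sketch below; the threshold value itself is invented:

```python
# Toy illustration of a compute-based licensing threshold. The 6*N*D rule
# of thumb is a standard approximation for transformer training cost; the
# threshold itself is hypothetical, chosen only for illustration.
def training_flops(params: float, tokens: float) -> float:
    """Rough estimate of total training compute, in FLOPs."""
    return 6.0 * params * tokens

LICENSE_THRESHOLD_FLOPS = 1e25  # hypothetical regulatory line

def requires_license(params: float, tokens: float) -> bool:
    return training_flops(params, tokens) >= LICENSE_THRESHOLD_FLOPS

# A frontier-scale run trips the threshold; a small academic model does not.
print(requires_license(1e12, 1e13))  # True  (6e25 FLOPs)
print(requires_license(1e9, 1e10))   # False (6e19 FLOPs)
```

A compute proxy has the virtue of being measurable before deployment, but, as the exchange above suggests, it would miss small specialized models whose risk comes from what they can do rather than how big they are.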
4:24 pm
but as you think about a licensing regime, this is one of the hard questions. who needs a license? you don't want it to be so hard that only a small number of big companies can get it, but then you also need to make sure that you're not requiring people to get it when they really, we would say, don't need a license for what they're doing. and, you know, the beauty of the framework, in my view, is it starts to frame the issue. it starts to define the question. >> let me ask this question. is it a license to train a model to a certain level of capability? is it a license to sell, or license access to, that model? or is it a license to purchase, or deploy, that model? who is the licensed entity? >> that's another question that is key, and it may have different answers in different scenarios. but mostly, i would say, it should be a license to deploy. you know, i think that there may well be obligations to disclose to, say, an independent authority when a training run begins, depending on what the goal is, and when the
4:25 pm
training run ends, so that an oversight body can follow it. just the way, say, might happen when a company is building a new commercial airplane. and then, you know, the good news is, there's an emerging foundation of, call it, best practices for how the model should be trained, what kind of testing there should be, what harms should be addressed. that's a big topic that needs discussion. >> when you say a license to deploy, do you mean, for example, if a microsoft office product wishes to use a gpt model for some user-serving purpose within your suite, you would need a license to deploy gpt in that way? or do you mean that gpt would require a license to offer it to microsoft? and putting aside whether or not this is a plausible commercial scenario, the question is, what's the structure of the licensing arrangement?
4:26 pm
>> in this case, it's more of the latter. imagine, look, think about it like boeing. boeing builds a new plane. before it can sell it to united airlines, and united airlines can start to fly it, the faa is going to certify that it's safe. now imagine we're at, call it, gpt-12, whatever you want to name it. you know, before that gets released for use, i think you can imagine a licensing regime that would say that it needs to be licensed after it's been, in effect, certified as safe. and then you have to ask yourself, well, how do you make that work so that we don't have the government slow everything down? and what i would say is, you bring together three things. first, you need industry standards, so that you have a common foundation and a well-understood way as to how training should take place. second, you need national regulation. and third, if we're going to have a global economy, at least in the countries where we want these things to work, you probably need a level of
4:27 pm
international coordination. and i'd say, look at the world of civil aviation. that's fundamentally how it has worked since the 1940s. let's try to learn from it and see how we might apply something like that or other models here. >> mr. dally, how would you respond to the question in a field where the technical capabilities are accelerating at a rapid rate, future rate unknown? where, and according to what standard or metric or definition of power, do we draw the line for what requires a license for deployment, and what can be freely deployed without oversight by the government? >> you know, i think it's a tough question, because i think you have to balance two important considerations. the first is, you know, the risks presented by a model of whatever power, and then on the other side is the fact that, you know, we would like to ensure that the u.s.
4:28 pm
stays ahead in this field. and to do that, we want to make sure that, you know, individual academics and entrepreneurs with a good idea can move forward and innovate and deploy models without huge barriers. >> so it's the capability of the model, it's the risk presented by its deployment without oversight. is that -- because the thing is, we're going to have to write legislation. and the legislation is going to have to, in words, define the scope of regulated products. and so we're going to have to bound that which is subject to a licensing arrangement, or wherever we land, and that which is not. >> i think it is -- >> and so how do you -- i mean -- >> it's dependent on the application, because if you have a model which is, you know, basically determining a medical procedure, there's a high risk with that, you know, depending on the patient outcome. if you have another model which is, you know, controlling the temperature in your building, if it gets a little bit wrong,
4:29 pm
it may, you know, use a little bit too much power, or maybe you're not as comfortable as you would be, but it's not a life-threatening situation. so you need to regulate the things that have high consequences if the model goes awry. >> and i'm on the chairman's borrowed time, so just tap the gavel when you want me to stop. >> you had to wait, so i will give you a couple -- >> okay, professor, and i'd be curious to hear from others concisely, with respect for the chairman's follow-ups. how does any of this work without international law? i mean, isn't it correct that a model, potentially a very powerful and dangerous model, for example, whose purpose is to unlock cbrn or mass-destructive virological capabilities to a relatively unsophisticated actor, once trained, is relatively lightweight to transport? and without, a, an international
4:30 pm
legal system, and, b, a level of surveillance that seems inconceivable into the flow of data across the internet, how can that be controlled and policed? >> it's a great question, senator, and, with respect to being efficient in my answer, i'll simply say that there are going to be limits, even assuming that we get international cooperation, which i would agree with you on. i mean, we've already started thinking about ways in which, for example, within the eu, which has already deployed some significant ai regulation, we might design frameworks that are compatible with that, that require some sort of interaction. but ultimately, what i worry about is actually deploying a level of surveillance that we've never before seen in an attempt to perfectly capture the entire chain of ai, and it's simply
4:31 pm
not possible. >> and i share the concern about privacy, which is in part why i raised the point about how we can know what folks are loading a lightweight model, once trained, onto, perhaps, a device that's not even online anymore. >> right. there are limits, i think, to what we'll ever be able to know. >> either of you want to take a stab before i get gaveled out here? >> i would just say you're right, there's going to be a need for international coordination. i think it's more likely to come from like-minded governments than, perhaps, global governance, at least in the initial years. i do think there's a lot we can learn. we were talking with senator blackburn about the swift system for financial transactions. and, you know, somehow we've managed, globally, and especially in the united states, for 30 years, to have know-your-customer obligations for banks. money has moved around the world. nothing is perfect. i mean, that's why we have laws. but it's worked to do a lot of good, to protect against, say,
4:32 pm
terrorist or criminal uses of money that would cause concern. >> well, i think you're right in that these models are very portable. you could put the parameters of most models, even the very large ones, on a large usb drive and, you know, carry it with you somewhere. you could also train them in a data center anywhere in the world. so, you know, i think it's really the use of the model, and the deployment, that you can effectively regulate. it's going to be hard to regulate the creation of it, because if people can't create them here, they'll create them somewhere else. i think we have to be very careful if we want the u.s. to stay ahead, if we don't want the best models created somewhere else where, you know, the regulatory climate has driven them. >> thank you. thank you, mr. chairman. >> thank you, senator ossoff. i hope you are okay with a few more questions. we've been at it for a while. you've been very patient. >> do we have a choice? >> no. but thank you very much. it's been very useful.
4:33 pm
i want to follow up on a number of the questions that have been asked. first of all, on the international issue, there are examples and models for international cooperation. mr. smith, you mentioned civil aviation. the 737, the 737 max, i think i have it right. when it crashed, it was a plane that had to be redone in many respects. and companies, airlines around the world, looked to the united states for that redesign, and that approval. civil aviation, atomic energy. not always completely effective, but it has worked in many respects. and so i think there are international models here, where, frankly,
4:34 pm
the united states is a leader by example, and best practices are adopted by other countries when we support them. and, frankly, in this instance, the eu has been ahead of us in many respects regarding social media, and we are following their leadership by example. i want to come to this issue of data centers, whether they're in china or, for that matter, elsewhere in the world, requiring safeguards so that we are not allowing our technology to be misused in china against the uyghurs, and preventing that technology from being stolen, or people we trained there from serving bad purposes. are you satisfied, mr. smith, that it is possible, in fact, that what you are doing in china
4:35 pm
is preventing the evils that could result from doing business there in that way? >> i would say two things. first, i feel good about our track record and our vigilance, and the constant need for us to be vigilant about what services we offer, to whom, and how they're used. it's really those three things. and i would take from that what i think is probably the conversation we'll need to have as a country about export controls, more broadly. there's three fundamental areas of technology where the united states is today, i would argue, the global leader. first, the gpu chips from a company like nvidia. second, the cloud infrastructure from a company like, say, microsoft. and the third is the foundation model from a firm such as
4:36 pm
openai, and, of course, google and aws and other companies are global leaders as well. and i think if we want to feel good about creating jobs in the united states by inventing and manufacturing here, as you said, senator hawley, which i completely endorse, and feel good that the technology is being used properly, we probably need an export control regime that weaves those three things together. for example, there might be a country in the world, let's just set aside china for a moment, leave that out. let's just say there's another country where you all, and the executive branch, would say, we have some qualms, but we want u.s. technology to be present, and we want u.s. technology to be used properly, the way that it would make you feel good. you might say, then, well, let nvidia export chips to that country to be used in, say, a data center of a company that we trust, that is licensed even here for that use, with the
4:37 pm
model being used in a secure way in that data center, with a know-your-customer requirement, and with guardrails that put certain kinds of use off-limits. that may well be where government policy needs to go, and how the tech sector needs to support the government and work with the government to make it a reality. >> i think that answer is very insightful, and raises other questions. i would kind of analogize this situation to nuclear proliferation. we cooperate over safety, in some respects, with other countries, some of them adversaries. but we still do everything in our power to prevent american companies from helping china or russia in their nuclear programs. part of that nonproliferation effort is through export
4:38 pm
controls. we impose sanctions. we have limits and rules around selling and sharing certain chokepoint technologies related to nuclear enrichment, as well as biological warfare, surveillance, and other national security risks. and our framework, in fact, envisions sanctions and safeguards precisely in those areas, for exactly the reasons we've been discussing here. last october, the biden administration used existing legal authorities as a first step in blocking the sale of some high-performance chips, and equipment to make those chips, to china, and our framework calls for export controls and sanctions and legal restrictions. so i guess a question that we will be discussing, we're not going to resolve it today,
4:39 pm
regrettably, but we would appreciate your input going forward, and i'm inviting any of the listening audience here in the room or elsewhere to participate in this conversation on this issue and others. how should we draw a line on the hardware and technology that american companies are allowed to provide anyone else in the world, any other adversaries or friends? because, as you observed, mr. dally, and i think all of us accept, it's easily proliferated. >> if i could comment on this. >> sure. >> you drew an analogy to nuclear regulation, and mentioned the word chokepoint. and i think the difference here is that there really isn't a chokepoint. and i think there's a careful balance to be made between, you
4:40 pm
know, limiting, you know, where our chips go, and what they're used for, and, you know, disadvantaging american companies and the whole food chain that feeds them. because we're not the only people who make chips that can do ai. i wish we were, but we're not. there are companies around the world that can do it. there are other american companies, there are companies in asia, there are companies in europe. and if people can't get the chips they need to do ai from us, they will get them somewhere else. and what will happen then is, you know, it turns out that chips aren't really the things that make them useful. it's the software. and if all of a sudden the standard chips for people to do ai become something from, you know, pick a country, singapore, all of a sudden, all the software engineers will start writing all the software for those
4:41 pm
chips. they'll become the dominant chips, and, you know, the leadership of that technology will shift from the u.s. to singapore, or whatever other country becomes dominant. so we have to be very careful to balance, you know, the national security considerations and the abuse-of-technology considerations against preserving the u.s. lead in the technology area. >> mr. smith? >> yeah. it's a really important point, and what you have is the argument and counterargument. let me, for a moment, channel what senator hawley often voices that i think is also important. sometimes you can approach this and say, look, if we don't provide this to somebody, somebody else will, so let's not worry about it. i get it. but at the end of the day, you know, whether you're a company or a country, i think you do have to have clarity about how you want your technology to be used. and, you know, i fully recognize that there may be a day in the future, after i retire from microsoft, when i
4:42 pm
look back, and i don't want to say, oh, we did something bad, because if we didn't, somebody else would have. i want to say, no, we had clear values, and we had principles, and we had in place guardrails and protections, and we turned down sales so that somebody couldn't use our technology to abuse other people's rights. and if we lost some business, that's the best reason in the world to lose business. and what's true of a company is true of a country. and so i'm not trying to say that your view shouldn't be considered. it should. that's why this issue is complicated, how to strike that balance. >> professor hartzog, do you have any comment? >> i think that was well said, and i would only add that it's also worth considering, in this discussion about how we safeguard these incredibly dangerous technologies, the risk that could happen
4:43 pm
if they, for example, proliferated. if it's so dangerous, then we need to revisit the existential question again, and i just bring it back to thinking not only about how we put guardrails on, but how we lead by example, which i think you brought up, which is really important. and we don't win the race to violate human rights, right? that's not one that we want to be running. >> and it isn't simply chinese companies importing chips from the united states and building their own data centers. most ai companies get their capabilities from cloud providers. we need to make sure that the cloud providers are not used to circumvent our export controls, or sanctions. mr. smith, you raised the know-your-customer rules. knowing your customers would require ai cloud providers whose models are
4:44 pm
deployed to know what companies are using those models. if you're leasing out a supercomputer, you need to make sure that your customer isn't the people's liberation army, that it isn't being used to subjugate uyghurs, that it isn't being used to do facial recognition on dissidents or opponents in iran, for example. but i do think that you've made a critical point, which is, there is a moral imperative here. and i think there is a lesson in the history of this great country, the greatest in the history of the world, that when we lose our moral compass, we lose our way. and when we simply pursue economic or political interests, sometimes it's very shortsighted, and we wander into
4:45 pm
a geopolitical swamp and quicksand. so i think these kinds of issues are very important to keep in mind when we lead by example. i want to just make a final point, and then, if senator hawley has questions, i'm going to let him ask. but on this issue of worker displacement, i mentioned at the very outset, i think we are on the cusp of a new industrial revolution. we've seen this movie before, as they say. and it didn't turn out that well. in the industrial revolution, where workers were displaced en masse, those textile factories and the mills in this country and all around the world went out of business, essentially, or
4:46 pm
replaced the workers with automation and mechanization. and i would respond by saying, we need to train those workers. education, you alluded to it, and it needn't be a four-year college, you know? in my state of connecticut, electric boat, pratt & whitney, sikorsky, defense contractors are going to need thousands of welders, electricians, tradespeople of all kinds, who will have not just jobs, they'll have careers, that require skills that, frankly, i wouldn't begin to know how to do, and i haven't the aptitude to do. and that's no false modesty. so i think there are tremendous opportunities here, not just in the creative spheres that you have mentioned, where, you know, we may think higher human
4:47 pm
talents may come into play, but in all kinds of jobs that are being created daily already in this country. and as i go around the state of connecticut, the most common comment i hear from businesses is, we can't find enough people to do the jobs we have right now. we can't find people to fill the openings that we have. and that is, in my view, maybe the biggest challenge for the american economy today. >> i think that is such an important point. it's really worth putting everything we think about jobs in that context, because i will certainly endorse, senator hawley, what you were saying before, that we want people to have jobs, we want them to earn a good living, et cetera. first, let's consider the demographic context in which jobs are created. the world has just entered a shift of the kind that it literally hasn't seen
4:48 pm
since the 1400s. namely, populations that are leveling off or, in much of the world now, declining. one of the things we look at, for every country, measured over five-year periods, is whether the working-age population is increasing or decreasing, and by how much. from 2020 to 2025, the working-age population in this country, people aged 20 to 64, is only going to grow by 1 million people. the last time it grew by a number that small, you know who was president of the united states? john adams. that's how far back you have to go. and if you look at a country like italy, take that group of people, over the next 20 years, it's going to decline by 41%. and what's true of italy is true almost to the same degree in germany, and it's already happening in japan and korea. so we live in a world where, for many countries, we suddenly encounter what you actually
4:49 pm
find, i suspect, when you go to hartford or st. louis or kansas city: people can't find enough police officers, enough nurses, enough teachers. and that is a problem we desperately need to focus on solving. so how do we do that? i do think ai is something that can help, even in something like a call center. one of the things that's fascinating to me, we have more than 3,000 customers around the world running proofs of concept. one fascinating one is a bank in the netherlands. they said, you go into a call center today, the desks of the workers look like a trading floor on wall street. they have six different terminals. somebody calls, and they're desperately trying to find the answer to a question. you know, with something like gpt-4, with our services, six terminals can become one. somebody who's working there can ask a question, the answer comes up, and what they're finding is that the person
4:50 pm
who's answering the phone, talking to a customer, can now spend more time concentrating on the customer and what they need. and i appreciate all the challenges. there's so much uncertainty. we desperately need to focus on scaling. but i really do hope that this is an era where we can use this to, frankly, help people fill jobs, get training, and focus more, let's
4:51 pm
4:52 pm
to, i think, america's economy in the future, and ai can help promote development of that workforce. >> senator hawley, anything? >> you all have been really patient, and so has our staff. i want to thank our staff for this hearing. but most important, we're going to continue these hearings. it is so helpful to us. i can go down our framework and tie the proposals to specific comments made by sam altman, or others who have testified before us. we will enrich and expand our framework with the insights that you have given us. so i want to thank all of our witnesses and, again, look forward to continuing our bipartisan approach here. you
4:53 pm
made that point, mr. smith. we have to be bipartisan, and adopt full measures, not half measures. thank you all. this hearing is adjourned.
4:54 pm
4:55 pm
4:56 pm
4:57 pm
>> this year, book tv marks 25 years of shining a spotlight on leading nonfiction authors and their books. from author talks, interviews, and festivals, book tv has provided viewers with a front-row seat to the latest literary discussions on history, politics, and so much more. you can watch book tv every sunday on c-span 2, or online at booktv.org.
4:58 pm
book tv. 25 years of television for serious readers. >> c-span's student cam documentary competition is back, and this time, we're celebrating 20 years with this year's theme, looking forward while considering the past. >> the youth of today are the leaders of tomorrow. it is imperative that we take care of them and lay a groundwork that will help them succeed as they progress through life. >> with more awareness, we can work together to prevent fentanyl from becoming the world's next pandemic. >> inflation really matters, so it's important to understand the ramifications of allowing inflation to go out of control. >> we're asking middle and high school students to create a 5 to 6 minute video addressing one of two questions. we want to know, in the next 20 years, what is the most important change that you'd like to see in america? or, over the past 20 years,
4:59 pm
what has been the most important change in america? you'll need to show supporting and opposing perspectives. as we do each year, we're awarding $100,000 in total prizes, with a grand prize of $5,000. and because we're celebrating 20 years, every teacher who has students participate in this year's competition has the opportunity to share a portion of an additional $5,000. the deadline for students to submit documentaries is friday, january 19th, 2024. for more information about this year's contest and rules, visit our website at studentcam.org.
5:00 pm
efforts to improve va
5:01 pm
financial management. officials from the va spoke specifically about its business transformation program, which is the va's third attempt to modernize its financial and accounting systems. >> good afternoon, the subcommittee will come to order. we are here to review va's progress in the financial management business transformation program.
5:02 pm
fmbt is the third attempt to evolve the va's hodgepodge of aging financial systems. these systems are a serious problem. every year, the financial audit produces a clean opinion despite carrying the same weaknesses and deficiencies for a decade. at the same time, purchasing continues to be much like the wild west. it has been 10 years since a former executive blew the whistle on billions of dollars of unauthorized commitments. nothing has fundamentally changed. with so many purchase cards at so many different facilities, and no central tracking, the department is practically helpless to enforce its policies, much less root out waste and fraud. basic financial management functions are stretching the capabilities of the systems,
5:03 pm
like maintaining records when va transferred cares act and arp funds. there were basic questions about how the funding was handled that, during last month's hearing, the va was struggling to answer. additionally, one witness showed contempt at members for even trying to perform our basic oversight duties. this situation is untenable. i appreciate that our witnesses today are attempting to solve it. simply put, the fmbt program has to succeed. after a false start in 2016 and 2017, va relaunched the effort in 2018. since then, the integrated financial and acquisition management system has been implemented in the national cemetery administration, a few offices within the veterans benefits administration, the office of information technology, the office of the inspector general, and part of the office
5:04 pm
of acquisition, logistics, and construction. from the information we have, the system seems to be relatively successful in those offices. there is still reason to be concerned. these organizations only add up to a few thousand users and a small fraction of the va's budget. implementing the system in a major organization like the veterans health administration, and among the big spenders within the veterans benefits administration, keeps getting pushed out. now it is not scheduled for rollout until 2024. meanwhile, the program's implementation cost continues to rise. i'm not suggesting that we have another ehrm, let me be clear. i believe most of the premises of fmbt are sound, but it is suffering from similar problems, like poor coordination between organizations within the va, struggles to fit va's processes
5:05 pm
with commercial software, and extremely long schedules. it's been 3 1/2 years since the subcommittee last examined the fmbt program. i think veterans and taxpayers are overdue for an update. i appreciate the witnesses joining us today to help us better understand the challenges that you face. i look forward to working to overcome the difficulties and deliver this system successfully. with that, i will yield opening statement time to the ranking member, mr. mccormick. >> i'm happy to say we are having this hearing on the modernization program that is so pivotal to the future of va. the use of the aging financial management system has led to manual workarounds which impede va's and congress's ability to conduct oversight of spending. the va is the second largest federal agency, and it relies on an infrastructure that is
5:06 pm
decades old. this program has largely gone unnoticed. this is a good thing; when i.t. programs go well, usually you don't hear about them. unfortunately, this program is now experiencing delays. given the importance of this program, this committee needs to understand the underlying issues. this program is foundational to creating not only financial efficiency, but also the department's accountability and oversight by congress. i hope to hear from va and cgi federal today how we can ensure the successful and timely development. i will relay a point i make at every hearing: va obviously does not have the management infrastructure in place to coordinate and ensure the success of these large modernization efforts. there are bills that have cosponsors that would at the
5:07 pm
very least start moving them in the right direction. i hope that we can start acting on them soon here in the house. modernization is mandatory, not optional. it is in everyone's interest to do this in a way that does not adversely affect veterans and employees, and does not waste billions of dollars. commitment to management and standardization of processes is essential to our future success. thank you again, chairman, i look forward to hearing from our witnesses this morning. >> thank you. i will now introduce the witnesses on our first panel. first, from the department of veterans affairs, we have ms. teresa, the deputy assistant secretary for financial management. we also have mr. charles tapp, the chief financial officer for the veterans health administration, and mr. daniel mccue, deputy chief information officer for software product management at the office of information and
5:08 pm
technology. next we have mr. sydney goetz, senior vice president for cgi federal. then we have mr. nick dall, deputy assistant inspector general for evaluations at the office of inspector general of the department of veterans affairs. if you would please rise and raise your right hand. do you solemnly swear, under penalty of perjury, that the testimony you provide is the truth, the whole truth, and nothing but the truth? >> thank you. let the record reflect that all witnesses have answered in the affirmative. you are now recognized for five minutes to deliver your opening statement. >> good afternoon, chairman and all members of the subcommittee. thank you for the opportunity to testify today about the department of veterans affairs financial management business transformation program
5:09 pm
, and its implementation of the integrated financial and acquisition management system. i am accompanied by daniel mccue, deputy chief information officer for product management. the va cannot continue to rely on its legacy financial management system due to the enormous risk it presents to va operations. it is becoming increasingly difficult to support from a technical and functional standpoint, cannot correct new audit findings, and is not compliant with internal control standards. i'm proud to report ifams is no longer a proof of concept; it is successfully replacing the 1980s-era financial management system. it has been successfully up and running for almost three years. we have completed six successful deployments of ifams, with 4,700 users across the enterprise. that's the national cemetery administration, a portion
5:10 pm
of the veterans benefits administration, and major staff offices, including the office of information and technology and the office of inspector general. ifams users have collectively processed 3.5 million transactions representing almost $10 billion in treasury disbursements. it is stable, achieving 99.9% uptime. on june 12, va went live with its largest deployment to date, increasing the current user base by 60%. it was also the first time va went live simultaneously with both the finance and acquisition components of ifams, which demonstrates that ifams is a viable solution capable of becoming the next-generation financial and acquisition solution for va. it is important to understand it is not just a new core accounting and acquisition system; it is crucial to transforming the business processes and capabilities so we can meet our goals and objectives in compliance with
5:11 pm
financial management legislation and continue to successfully execute our mission to provide veterans with the health care and benefits they have earned and deserve. with so much at stake, both in terms of taxpayer dollars and the department's ability to serve veterans, it is vital that va accurately track and report how funds are used. fortunately, ifams significantly improves fund tracking abilities and, among many other benefits, will ensure proper tracking of expenditures. the rollout of ifams is increasing the transparency, accuracy, timeliness, and reliability of financial information. va is gaining enhanced planning, analysis, and decision-making because of improved data integrity, functionality, and business intelligence. we are demonstrating these achievements through a range of metrics based on industry best practices. ifams' changes are part of va's strategy to resolve long-standing financial material weaknesses and strengthen
5:12 pm
internal controls. for example, in contrast to our current system, which cannot capture transaction approvals, ifams routes documents to approving officials and supports documentation being attached to the transaction. additionally, ifams requires additional levels of approval. it also eliminates the need for an external tool to adjust financial reporting. perhaps most importantly, ifams complies with reporting requirements of the department of treasury to capture various account attributes and conform to the u.s. standard general ledger. the va's current system is unable to meet those requirements, which has led to extensive and inefficient manual workarounds; ifams remediates all of these. our success has been and continues to be built on partnership, mutual respect, and two-way collaboration with our users. ifams has established a dedicated chief experience officer to
5:13 pm
coordinate interactions and change management activities. the change management practice places a heavy emphasis on improvement, using customer feedback, our own observations of audit findings, and industry best practices. we establish lessons learned each wave and incorporate those lessons into the next wave's deployment. fmbt continues to remain on budget. our successes would not be possible without ongoing support from congress, and we appreciate the opportunity today to discuss this important initiative. we will continue to work judiciously to modernize va's financial and acquisition management system for veterans, and provide you with updates as we make further progress. although we are encouraged by our success, we are keenly focused on the difficult work that lies ahead and our steadfast commitment to see this initiative through. chairman rosendale, ranking member mccormick,
5:14 pm
and subcommittee members, this concludes my opening statement. i would be happy to answer any questions. >> thank you. your statement will be entered into the hearing record. mr. goetz, you are now recognized for five minutes to deliver your opening statement. >> chairman rosendale, ranking member mccormick, and other distinguished members, thank you for the opportunity to appear today. my name is sydney goetz, and i'm a senior vice president at cgi federal. for the last five years, i have served as project integrator on contract with the va for the financial management business transformation program, known as fmbt. at the subcommittee's invitation, i'm here to provide the requested status update and underscore cgi federal's ongoing commitment to the success of the va fmbt program. as you know, in 2016, the va established the fmbt program to modernize its 30-year-old
5:15 pm
legacy core financial management system, fms, in compliance with applicable regulatory requirements. to accomplish this complex modernization effort, the va selected cgi federal to deploy its momentum enterprise resource planning program. momentum, known to the va as the integrated financial and acquisition management system, or ifams, is a financial management system that is operational in many government agencies. to mitigate program risk, the fmbt program is migrating users from the va's legacy financial and acquisition systems to ifams using an incremental deployment approach. each deployment, or wave, delivers specifically configured capabilities to a defined set of va organizations using an agile-based implementation methodology. to date, the fmbt program has completed six waves, deploying ifams to 4,700 users at 20
5:16 pm
different va offices. while there are still milestones and challenges ahead, to be sure, we are already delivering benefits to the va's finance and acquisition user community. these benefits include improved strategic and daily decision-making, process automation, compliance with federal accounting regulations, maintaining clean audits, and accommodating new regulatory requirements. a prime example of how ifams is helping the va user community is the power of its real-time transaction processing and on-demand reporting. today, its users can easily generate financial and acquisition reports and drill down into current, accurate data on demand. this is because when transactions are entered, they are first verified to meet va standards and then automatically update budgets and the general ledger in real time. its users also have the capability to refresh reports hourly rather than daily, and can run
5:17 pm
most reports at the enterprise level, administration level, or lower levels of the va organization. before ifams, some similar reports took days to produce through manual, resource-intensive, spreadsheet-based processes. as with other complex programs, success often depends on the stakeholders' focus on key performance factors. the same holds true here, where the team's focus on collaboration and transparency, enterprise-wide standardization, continued improvements, diligent change management, and execution of its risk-based incremental delivery approach has kept the program moving forward. to illustrate this point, let me share how the team is maximizing the value of user acceptance testing. in the first few waves, users performed hands-on testing towards the end of each wave. this is a common and standard approach. lessons learned told us that we would improve user adoption by
5:18 pm
having users perform iterative hands-on testing of ifams' functionality much earlier in each wave. we refined our program implementation methodology and applied this approach to the three most recent waves. by helping users gain an earlier appreciation of ifams, the team gains useful feedback that informs change management and training. it also has allowed us to identify and resolve issues earlier, saving both time and resources. i end this testimony where i began, by reiterating cgi federal's unwavering commitment to collaborating with the fmbt program to deliver ifams to the entire user base for the advancement of our veterans. i look forward to answering your questions. >> your written statement will be entered into the hearing record. mr. dall, you are now recognized for five minutes to deliver your opening statement. >> subcommittee members, thank
5:19 pm
you for the opportunity to discuss our oversight of va's management challenges. since 2015, the audit of va's financial statements has reported a material weakness due to problematic financial management systems. full implementation of ifams could help resolve this persistent material weakness and increase the transparency, accuracy, timeliness, and reliability of financial information across va. accordingly, we began oversight of ifams implementation shortly after it went live in november 2020. prior modernization efforts failed in part because of poor planning and flawed execution, combined with challenges in transitioning from legacy systems. decentralized oversight, unrealistic timelines, inadequate engagement of stakeholders and end users, and minimal testing have plagued i.t. projects, resulting in changes in direction and vendors, and
5:20 pm
at steep costs. in the most recent audit of va's financial statements, the auditor found three material weaknesses and two significant deficiencies. the material weakness most pertinent to this testimony focuses on the limited functionality of the current system to meet va's financial management and reporting needs. over time, the va's complex and antiquated financial system has deteriorated and no longer meets increasingly stringent requirements mandated by the treasury department and omb. deficiencies in va's financial management system are illustrated in findings we made related to va's use of covid-19 funding, which showed va lacked assurance that those funds were spent as intended. generally, our three reports found va is complying with transparency reporting requirements; however, we identified concerns with the completeness and accuracy of va's reporting. a major cause for this is reliance on several systems, from
5:21 pm
payroll to purchase card transactions, requiring manual entries by staff, which increases the risk of reporting errors. the practice of manual expenditure transfers led to a lack of transparency and accountability over vha purchases, as seen in our audit. we found vha staff not properly documenting the transfers, and inadequate guidance from vha's office of finance. this happened because of financial reporting system limitations and a lack of oversight, which resulted in vha medical facility staff determining on their own what constituted adequate procurement documentation. additionally, staff did not follow basic controls, like documenting purchasing authority, splitting duties between requesting and purchasing items, and verifying items were received. as a result, we reported an estimated $187 million
