Hearing on Regulating Artificial Intelligence, C-SPAN, September 13, 2023, 7:47am-10:13am EDT
7:48 am
>> microsoft president brad smith testified on ways to regulate artificial intelligence. he joined other witnesses to discuss transparency laws and the idea of labeling products like images and videos as being made by ai. this hearing before the senate judiciary subcommittee on privacy, technology and the law is about 2 hours and 20 minutes.
7:49 am
>> the hearing of our subcommittee on privacy, technology, and the law will come to order. i want to welcome our witnesses and the audience who are here, and say a particular thanks to senator schumer, who has been very supportive and interested in what we are doing, and also to chairman durbin, whose support has been invaluable in encouraging us to go forward here.
7:50 am
i am grateful to my partner in this effort, senator hawley. he and i produced a framework, basically a blueprint for a path forward to achieve legislation. our interest is in legislation. this hearing, along with the two previous ones, has to be seen as a means to that end. we are very results oriented, and i know you are from your testimony. i have been enormously encouraged and emboldened by the response so far. just in the past few days, in my conversations with leaders in the industry like mr. smith, there is a deep appetite, a hunger, for rules and guardrails, basic safeguards
7:51 am
for businesses and consumers, people in general, from the panoply of potential perils. there is also the desire to make use of potential benefits. our effort is to provide for regulation in the best sense of the word: regulation that permits and encourages innovation and new business and technology and entrepreneurship, and provides those guardrails, enforceable safeguards that encourage confidence in the growing technology. the technology is not entirely new; it has been around for decades. but artificial intelligence is regarded as entering a new era, and make no mistake, there will be regulation. the only question
7:52 am
is how soon and what. it should be regulation that encourages the best in american free enterprise and provides the protections we have in other areas of our economic activity. to my colleagues who say there is no need for new rules, that we have enough laws protecting the public: we have laws that prohibit unfair and deceptive practices, laws that regulate airline safety and drug safety, but nobody would argue that simply because we have those rules we don't need specific protections for medical devices or car safety, or that because we have rules prohibiting discrimination in the workplace, we don't need rules that prohibit discrimination in voting.
7:53 am
we need to make sure these protections are framed and targeted in a way that applies to the risks involved. risk-based rules, managing the risks, is what we need to do here. our principles are pretty straightforward. we have no pride of authorship. we circulated this framework to encourage comment. we won't be offended by criticism from any quarter. that is the way to make this framework better and eventually achieve legislation by the end of this year. the framework calls for establishing a licensing regime for companies that are engaged in high-risk ai development, and creating an independent oversight body that has
7:54 am
expertise with ai and works with other agencies to administer and enforce the law; protecting national and economic security, to make sure we aren't enabling china or russia and other adversaries to interfere in our democracy or violate human rights; requiring transparency about the use of ai models, which at this point includes rules like watermarking, digital disclosure when ai is being used, and data access for researchers; and ensuring that ai companies can be held liable when their products violate civil rights or endanger the public. deepfakes, impersonations, hallucinations: we have all heard those terms, and we need to prevent those harms. and senator hawley and i, as former attorneys general of our
7:55 am
states, have a deep and abiding affection for the enforcement powers of state officials, so there has to be effective enforcement. private rights of action and federal enforcement are very important. let me just close by saying, before i turn it over to my colleagues, we will have more hearings. the way to build a coalition in support of these measures is to disseminate them as widely as possible, for colleagues to understand what is at stake. we need to listen to industry leaders and experts like the ones we have
7:56 am
before us today, and to act with dispatch. if we let this horse get out of the barn, it will be even more difficult to contain than social media, and we are seeking to act on social media right now, as we speak. i asked sam altman what his greatest fear was. i said mine, my nightmare, is the massive unemployment that could be created. while we don't deal with that directly here, it shows how wide the ramifications may be, and we need to deal with worker displacement and training. this new era is one that
7:57 am
portends enormous promise but also peril. i will turn now to ranking member senator hawley. >> thank you for organizing this hearing. this is the third of these hearings we've done, and i have learned a lot in the previous couple. some of what we are learning about the potential of ai is exhilarating; some of it is horrifying. and i think what i hear the chairman saying, and what i agree with, is that we have a responsibility here now to do our part to make sure this new technology, which holds a lot of promise but also peril, actually works for the american people, that it is good for working people and families, and that we don't make the same mistakes congress made with social media. 30 years ago congress outsourced social media to the biggest
7:58 am
corporations in the world, and that has been nearly an unmitigated disaster. we have had the biggest, most powerful corporations, not just in america but in the history of the globe, doing whatever they want with social media, running experiments basically every day on america's kids, inflicting mental health harms the likes of which we've never seen, messing around in our elections in a way that is deeply, deeply corrosive to our way of life. we cannot make those mistakes again. we are here, as senator blumenthal said, to find answers and make sure this technology is something that benefits the people of this country. have no doubt, with all due respect to the corporate heads in front of us, no doubt it will benefit your companies. what i want to make sure of is that it benefits the american people. i look forward to this. thank you, mr. chairman.
7:59 am
>> i want to introduce our witnesses, and as is our custom i will swear them in and ask them to submit their testimony. welcome to all of you. william dally joined nvidia in january 2009 as chief scientist, after spending 12 years at stanford university where he was chairman of the computer science department. he has published over 250 papers, holds 120 issued patents, and is the author of four textbooks. brad smith is vice chair and president of microsoft. as microsoft's vice chair and president, he is responsible for spearheading the company's work on a wide variety of critical issues involving
8:00 am
the intersection of technology and society, including artificial intelligence, cybersecurity, privacy, environmental sustainability, human rights, digital safety, immigration, philanthropy, and products and business for nonprofit customers. we appreciate your being here. professor woodrow hartzog is the class of 1960 scholar at boston university school of law, a nonresident fellow at the cordell institute for policy in medicine and law at washington university, a faculty associate at the berkman klein center for internet and society at harvard university, and a scholar at the center for internet and society at stanford law school. i could go on about each of you at much greater length with all of your credentials.
8:01 am
but suffice it to say, very impressive. if you now stand, i will administer the oath. [witnesses were sworn in] >> thank you. why don't we begin with you, mr. dally? >> chairman blumenthal, ranking member hawley, thank you for the privilege to testify today. i am nvidia's chief scientist, and i'm delighted to discuss artificial intelligence's journey and future. nvidia is at the forefront of accelerated computing and generative ai, technologies with the potential to transform industries, address global challenges, and profoundly benefit society. since our founding in 1993 we have been committed to developing technology to empower people and improve the quality of life worldwide. today over 40,000 companies
8:02 am
use nvidia platforms across media and entertainment, scientific computing, healthcare, financial services, internet services, automotive and manufacturing to solve the world's most difficult challenges and bring new products and services to consumers worldwide. at our founding in 1993 we were a 3-d graphics startup, one of dozens of startups competing to create an entirely new market for accelerators to enhance computer graphics for games. in 1999 we invented the graphics processing unit, or gpu, which could perform a massive number of calculations in parallel. we launched the gpu for gaming and then recognized that the gpu could accelerate any application that could benefit from massively parallel processing. today researchers worldwide innovate on nvidia gpus, and through a collective effort they have made major advances in ai that will revolutionize industries and provide tremendous benefits to society across sectors such as healthcare, medical research,
8:03 am
education, business, cybersecurity, climate and beyond. however, we also recognize that, like any new product or service, ai products and services have risks. those who make, use, or sell ai-enabled products and services are responsible for their conduct. fortunately, many uses of ai applications are subject to existing laws and regulations that govern the sectors in which they operate. ai-enabled services in high-risk sectors could be subject to enhanced licensing and certification requirements when necessary, while other applications with less risk of harm may need less stringent licensing and regulation. with clear, stable and thoughtful regulation, ai developers will work to benefit society while making products and services as safe as possible. for our part, nvidia is committed to the safe and trustworthy development and deployment of ai. for example, our nemo guardrails open source software empowers developers to guide generative ai applications toward accurate, appropriate and secure text
8:04 am
responses. nvidia has implemented model risk management guidance, ensuring a comprehensive assessment and management of the risks associated with nvidia-developed models. today nvidia announces that it is endorsing the white house's voluntary commitments on ai. as we deploy ai more broadly, we can and will continue to identify and address risks. no discussion of ai would be complete without addressing what is often described as frontier ai models. some have expressed fear that frontier models will evolve into uncontrollable artificial general intelligence, which could escape our control and cause harm. fortunately, uncontrollable artificial general intelligence is science fiction, not reality. at its core, an ai is a software program that is limited by its training, the inputs provided to it, and the nature of its output. in other words, humans will always decide how much decision-making power to cede to ai models. so long as we are thoughtful and measured, we can ensure the safe, trustworthy and ethical
8:05 am
deployment of ai systems without suppressing innovation. we can spur innovation by ensuring ai tools are widely available to everyone, not concentrated in the hands of a few powerful firms. i will close with two observations. first, the ai genie is already out of the bottle. ai algorithms are widely published and available to all, and ai software can be transmitted anywhere in the world at the press of a button. many ai development tools and frameworks, and even models, are open source. second, no nation, and certainly no company, controls a chokepoint to ai development. while u.s. platforms may be the most energy efficient, cost efficient and easiest to use, they are not the only viable alternatives for developers abroad. other nations are developing ai systems, with or without u.s. components, and they will offer those applications in the worldwide market. safe and trustworthy ai will require multi-stakeholder cooperation, or it will not be effective.
8:06 am
the united states is in a remarkable position today, and with your help we can continue to lead on policy and innovation well into the future. nvidia stands ready to work with you to ensure that the development and deployment of generative ai and accelerated computing serve the best interests of all. thank you for the opportunity to testify before this committee. >> thank you very much. mr. smith. >> chairman blumenthal, ranking member hawley, members of the subcommittee, my name is brad smith. i am the vice chair and president of microsoft, and thank you for the opportunity to be here today, and i think more importantly, thank you for the work that you've done to create the framework you have shared. chairman blumenthal, you put it very well at the outset: we need to learn and act with dispatch. and ranking member hawley, i think you offered real words of wisdom. let's learn from the experience the whole world had with social media, and let's be clear-eyed about the promise and the peril in equal measure as we look to
8:07 am
the future of ai. i would first say i think your framework does that. it doesn't attempt to answer every question, by design, but it's a very strong and positive step in the right direction, and it puts the u.s. government on the path to be a global leader in ensuring a balanced approach that will enable innovation to go forward with the right legal guardrails in place. as we all think about this more, i think it's worth keeping three goals in mind. first, let's prioritize safety and security, which your framework does. let's require licenses for advanced ai models and uses in high-risk scenarios. let's have an agency that is independent and can exercise real and effective oversight over this category. and then let's couple that with the right kinds of controls that will ensure safety of the
8:08 am
sort we've already seen start to emerge in the white house commitments that were launched on july 21st. second, let's prioritize, as you do, the protection of our citizens and consumers. let's prioritize national security, always in some ways the first priority of the federal government. but let's think as well, as you have, about protecting the privacy, the civil rights and the needs of kids, among many other ways of working to ensure we get this right. let's take the approach that you are recommending, namely, focusing not only on those companies that develop ai, like microsoft, but also on companies that deploy ai, like microsoft. in different categories we are going to need different levels of obligations.
8:09 am
and as we go forward, let's think about the connection between, say, the role of a central agency that will be on point for certain things and the obligations that frankly will be part of the work of many agencies, and indeed our courts as well. and let's do one other thing as well; maybe it is one of the most important things we need to do so that we ensure the threats that many people worry about remain part of science fiction and don't become a new reality: let's keep ai under the control of people. it needs to be safe. and to do that, as we have encouraged, there need to be safety brakes, especially for any ai application or system that can control critical infrastructure. if a company wants to use ai to, say, control the electrical grid or all of the self-driving cars on our roads or the water supply, we need to learn from so many other technologies that do great things but also can go
8:10 am
wrong. we need a safety brake, just like we have a circuit breaker in every building and home in this country to stop the flow of electricity if that is needed. then i would say let's keep a third goal in mind as well. this is the one where i would just ask you to maybe consider doing a bit more to add to the framework: let's remember the promise that this offers. right now, if you go to state capitals or you go to other countries, i think there is a lot of energy being put into that. when i see what governor newsom is doing in california or governor burgum in north dakota, i see them at the forefront of figuring out how to use ai to, say, improve the delivery of healthcare, advance medicine, improve education for our kids, and maybe most importantly make government services more efficient --
8:11 am
or use the savings to provide more and better services to our people. that would be a good opportunity to consider. in sum, professor hartzog has said this is not a time for half measures. he is right. let's go forward as you have recommended. let's be ambitious and get this right. thank you. >> thank you very much. professor hartzog, i read your testimony, and you are very much against half measures. we look forward to hearing what the full measures that you recommend are. >> that is correct, senator. chair blumenthal and members of the committee, thank you for inviting me to appear before you today. i am a professor of law at boston university.
8:12 am
my comments today are based on a decade of researching law and technology issues. i'm drawing from research on artificial intelligence policy that i conducted with colleagues at the cordell institute at washington university in st. louis. committee members, up to this point ai policy has largely been made up of industry-led approaches like encouraging transparency, mitigating bias and promoting principles of ethics. i would like to make one simple point in my testimony today: these approaches are vital, but they are only half measures. they will not fully protect us. to bring ai within the rule of law, lawmakers must go beyond these half measures to ensure that ai systems and the actors that deploy them are worthy of our trust. half measures like audits, assessments and certifications are necessary for data governance, but industry leverages procedural checks like these to dilute our laws into managerial box-checking exercises that entrench harmful surveillance-
8:13 am
based business models. a checklist is no match for the staggering fortune available to those who exploit our data, labor and are. í see to develop and deploy ai systems. it is no substitute for meaningful liability when ai systems harm the public. today i would like to focus on three popular half measures and why lawmakers must do more.
8:15 am
they are not so new. ai systems consolidate power, and this power is used to benefit some and harm others. lawmakers should borrow from established legal approaches to remedy power imbalances: require broad, non-negotiable duties of loyalty, care and confidentiality, and implement robust bright-line rules that limit harmful secondary uses and disclosures of personal data in
8:16 am
ai systems. my final recommendation is to encourage lawmakers to resist the idea that ai is inevitable. when lawmakers go straight to putting up guardrails, they fail to ask questions about whether particular ai systems should exist at all. this dooms us to half measures. strong rules would include prohibitions on unacceptable practices like emotion recognition, biometric surveillance in public spaces, predictive policing and social scoring. in conclusion, to avoid the mistakes of the past, lawmakers must make hard calls. trust and accountability can only exist where the law provides meaningful protections for humans. ai half measures will certainly not be enough. thank you, and i welcome your questions. >> thank you, professor hartzog. i take very much to heart your admonition against half measures. i think, listening to both senator hawley and myself, you have a sense of the boldness and initiative here, and we welcome all of the specific ideas, most
8:17 am
especially, mr. smith, your suggestion that we can be more engaged, at the state level or in the federal government, in making use of ai in the public sector. but taking the thought that professor hartzog so importantly introduced, that ai technology in general is not neutral: how do we safeguard against the downsides of ai, whether it's discrimination or surveillance? would this licensing regime and oversight entity be sufficient, and what kind of power do we need to give it? >> i would say first of all, i think a licensing regime is indispensable in certain high-risk scenarios. it won't be sufficient to address every issue, but it's a critical start. i think what it really ensures
8:18 am
is safety, especially for the most advanced frontier models as well as certain applications of highest risk, where you do need a license from the government before you go forward. that is real accountability. you can't drive a car until you get a license. you can't make the model or the application available until you pass through that gate. i do think it would be a mistake to think that one single agency or one single licensing regime would be the right recipe to address everything, especially when we think about the harms we need to address. that's why it's equally critical that every agency in the government that is responsible for the enforcement of the law and the protection of people's rights master the capability to assess ai. i don't think we want to move the approval of every new drug from the fda to this new agency, so by definition the fda is going to need, for example, the capability to assess ai. that would be just one of
8:19 am
several additional specifics that i think one can think about. >> i think that's a really important point, because ai is going to be used in making automobiles, making airplanes, making toys for kids. so the faa, the fda, the federal trade commission, the consumer product safety commission: they all presently have rules and regulations, but there needs to be an oversight entity that uses some of those rules, adapts them, and adopts new rules so that those harms can be prevented. there are a lot of different names we can call that entity; connecticut now has an office of artificial intelligence. you could use different terms, but i think the idea is that we want to make sure that the harms are prevented through a licensing regime focused on risk.
8:20 am
mr. dally, you said that autonomous ai, ai beyond human control, is science fiction. but science fiction has a way of coming true. i wonder whether that is a potential here. certainly the fear is one that is widely shared at the moment; whether it's fact-based or not, it is the reality of human perception, and as you well know, trust and confidence are very, very important. so i wonder how we counter the perception and prevent the science fiction from becoming reality? >> so artificial general intelligence that is out of control is science fiction, not autonomy in general. we use artificial intelligence, for example, in autonomous vehicles all the time.
8:21 am
i think the way we make sure that we have control over ai of all sorts is, for any really critical application, keeping a human in the loop. ai is a computer program. it takes an input and produces an output, and if you don't connect something that can cause harm to that output, it can't cause that harm. so anytime a grievous harm could happen, you want a human being between the output of that ai model and the causing of harm. i think as long as we are careful about how we deploy ai, keeping humans in the critical loops, we can ensure that ais will not take over and shut down our power grid or cause airplanes to fall out of the sky. we can keep control over them.
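dally's answer describes the classic human-in-the-loop control pattern: the model may propose a harm-capable action, but nothing executes until a person signs off. here is a minimal sketch of that gate in python; every name in it is a hypothetical illustration, not any vendor's actual system.

```python
# Minimal sketch of a human-in-the-loop gate: the AI proposes, a person
# disposes. All names are hypothetical illustrations.

def model_recommendation(sensor_data: dict) -> str:
    """Stand-in for an AI model proposing an action on critical infrastructure."""
    return "shed_load_sector_7" if sensor_data["load"] > 0.95 else "no_action"

def human_approves(action: str) -> bool:
    """The gate: a qualified operator must review the proposed action."""
    answer = input(f"model proposes '{action}'. approve? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: str) -> None:
    print(f"executing: {action}")  # stand-in for the real actuator

proposed = model_recommendation({"load": 0.97})
if proposed != "no_action" and human_approves(proposed):
    execute(proposed)  # a harm-capable output runs only after human sign-off
else:
    print("no action taken; the model output never reached the actuator.")
```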
8:22 am
>> thank you. i have a lot more questions, but we're going to adhere to five-minute rounds. we have a very busy day, as you know, with votes, as a matter of fact. i will turn to senator hawley. >> thank you, mr. chairman. thanks again to the witnesses for being here. i want to particularly thank you, mr. smith. i know there's a group of your colleagues, your counterparts in industry, who are gathering i think tomorrow, and that is what it is, but i appreciate you being willing to be here in public and answer questions where the press is here and this is open to anybody who wants to see it. i think that's the way this ought to be done, and i appreciate your willingness to do that. you make a point in your testimony about protecting kids, and i want to start with that if i could. i want to ask you about what microsoft has done and is doing. kids are currently able to use bing chat, is it fair to say? >> yes, with certain ages. we don't challenge every age, but yes, in general it is possible for children to register if they are a certain age. >> and the age is? >> i'm trying to remember, senator. >> i think it is 13. does that sound right?
8:23 am
>> i was going to say 12 or 13. >> do you have some sort of age verification? how do we know what age the kid is? obviously the kid can put in whatever age he or she wants to. is there some sort of age verification? >> it typically involves getting permission from a parent. we use that across our services, including for gaming. i don't know off the top of my head exactly how it works, but i'd be happy to get you the details. >> great. my impression is that bing chat doesn't really have a meaningful age verification; there's no way really to know, but you can correct me if that's wrong. let me ask you this. what happens to all of the information that our hypothetical 13-year-old is putting into the tool as he is having this chat? the kid could be chatting about anything, going back and forth on any number of subjects. what happens to the info the kid puts in? >> the most important thing i would say first is that it is all
8:24 am
stored in a manner that protects the privacy of children. >> how is that? >> well, we follow the rules in coppa, which exists to protect children's online privacy. it forbids using the data for tracking; it forbids its use for advertising or for other things. it seeks to put very tight controls around the use and the retention of that information. the second thing i would add is that, in addition to protecting privacy, we are hyper-focused on ensuring that in most cases people of any age, but especially children, are not able to use something like bing chat in ways that would cause harm to themselves or to others. >> and how do you do that? >> we basically have a safety architecture we use across the board. think about it like this: there are two things around a model. the first is called a classifier, so that if somebody asks, how can
8:25 am
i commit suicide tonight? how can i blow up my school tomorrow? that hits a classifier that identifies a class of questions or problems or issues. second, there's what we call metaprompts, and we intervene so that the question is not answered. if someone asks how to commit suicide, we typically would provide a response that encourages them to get mental health assistance and counseling and tells them how. if somebody wants to know how to build a bomb, it's a no, you cannot use this to do that. that fundamental safety architecture is going to evolve, it's going to get better, but in a sense it's at the heart, if you will, of both what we do and, i think, the best practices in the industry, and part of what we're talking about here is how we take that architectural element and continue to strengthen it. >> very good, that's helpful.
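smith's classifier-plus-metaprompt description maps onto a common moderation pattern: a classifier buckets the incoming request, and a per-category intervention replaces the model's answer so the harmful question is never answered. a minimal sketch follows, with hypothetical rules and canned responses; the production bing chat safety stack is proprietary and far more elaborate.

```python
# Minimal sketch of the classifier + intervention pattern. Categories,
# keyword rules, and responses are hypothetical illustrations only.

SELF_HARM, VIOLENCE, SAFE = "self_harm", "violence", "safe"

def classify(user_message: str) -> str:
    """Stand-in for a learned safety classifier that buckets a request."""
    text = user_message.lower()
    if "suicide" in text or "hurt myself" in text:
        return SELF_HARM
    if "blow up" in text or "build a bomb" in text:
        return VIOLENCE
    return SAFE

# Per-category interventions: the flagged question is never answered directly.
INTERVENTIONS = {
    SELF_HARM: ("I'm sorry you're going through this. Please consider calling "
                "or texting 988, the Suicide & Crisis Lifeline in the US."),
    VIOLENCE: "I can't help with that request.",
}

def generate_answer(user_message: str) -> str:
    """Stand-in for the underlying model call on safe requests."""
    return f"[model answer to: {user_message}]"

def respond(user_message: str) -> str:
    category = classify(user_message)
    if category in INTERVENTIONS:
        return INTERVENTIONS[category]
    return generate_answer(user_message)

print(respond("how do tides work?"))  # a safe request reaches the model
```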
8:26 am
let me go back to the kids' information for a second. is it stored in the united states, or is it stored overseas? >> if the child is in the united states, the data is stored in the united states. that's true not only for children; it's true for adults as well. >> and who has access to that data? >> the child has access. the parents may or may not have access. typically we get -- >> in what circumstances would the parents have access? >> i would have to go deep into the specifics on that. our general principle is this, and this is something we implement in the united states even though it's not legally required here; it is legally required, as you may know, in europe. people, we think, have the right to find out what information we have about them. they have the right to see it. they have the right to ask us to correct it if it's wrong. they have the right to ask us to delete it if that's what they want us to do. >> and do you? if they ask you to delete it, do you delete it? >> we do. yes, that's an important promise, and we do a lot to comply with it. >> i have a lot more questions,
8:27 am
i'm trying to adhere to the time limit, mr. chairman. five minutes, mr. chairman? >> we'll have another round. >> great news for us, not such great news for the witnesses, sorry. before i leave the subject of kids' personal data and where it's stored, i'm asking you this because we've seen other technology companies in the social media space have major issues with where data is stored, major access issues. i'm thinking in particular of china, where we've seen other social media companies who say america's data is stored in america, but guess what, lots of people in other countries can access that data. so is that true for you, mr. smith? a child's data that they've entered into bing chat, that is stored in the united states, you said, if they are an american citizen: can it be accessed in, let's say, china, by a microsoft china-based engineer? >> i don't believe so. i would have to go back and confirm that, but i don't
8:28 am
believe so. >> would you get that for me for the record? i will have more questions later. thank you, mr. chairman. >> senator klobuchar. >> thank you very much. thank you, all of you. i think i will start with elections, since i'm chair of the rules committee. mr. smith, in your written testimony you talk about how watermarks could be helpful for disclosure of ai-generated material. as you know, and we have talked about this, i have a bill that i lead, that representative clarke leads in the house, to require a disclaimer and some kind of mark on ai-generated ads. i think we have to go further; we will get to that in a minute, professor hartzog. but can you talk about what you mean in your written testimony when you say the health of democracy and meaningful civic discourse will undoubtedly benefit from initiatives to help protect the public against deception or fraud facilitated by ai-generated content? >> absolutely. here i do think things are moving quickly, both in a
8:29 am
positive and a concerning direction, in terms of what we are seeing. on the positive side, i think you're seeing the industry come together. a company like adobe is exercising real leadership, and there's a recipe that i see emerging. i think it starts with a first principle: people should have the right to know if they're getting a phone call from a computer, if there's content coming from an ai system rather than a human being. we then need to make that real with legal rights that back it up. we need to create what's called a provenance system, watermarking for legitimate content, so that it can't be altered easily without detection to create a deepfake. and we need to create an effort that brings industry and governments together, so we know what to do and there's a consensus when we do spot deepfakes, especially, say, deepfakes that have altered legitimate content. >> thank you. >> that would be -- >> let's get to that, hot off
8:30 am
the press. senator hawley and i introduced our bill today, with senator collins, who led the electoral count reform act, as you know, and senator coons, to ban the use of deceptive ai-generated content in elections. this would work in concert with some watermark system, but it gets at the deception: fraudulent ai-generated content pretending to be the elected official or the candidate when it is not. we've seen this used against people on both sides of the aisle, which is why it was so important that we be bipartisan in this work, and i want to thank senator hawley for his leadership on not only the framework but also the work that we're doing. i guess i will go to you, professor hartzog. could you -- we do have
8:31 am
an exception for satire and humor, because we love satire; so many of the senators do, just kidding. could you talk about why you believe there has to be some outright ban on misleading ai content related to federal candidates in political ads? >> sure, absolutely. thank you for the question. of course, keeping in mind the free expression constitutional protections that would apply to any sort of legislation, i do think bright-line rules and prohibitions on such deceptive ads are critical, because we know that procedural walkthroughs, as i said in my testimony, often give the appearance of protection without protecting us. so to outright prohibit these practices i think is really important, and i would even go potentially a step further and think about ways in which we
8:32 am
could prohibit not just those practices we consider deceptive, but those we consider abusive, which leverage our limitations and our desire to believe or want to believe things against us. there's a body of law that runs alongside unfair and deceptive trade practices around abusive trade practices. >> okay, all right. mr. dally, thinking of that, and i'll ask mr. smith about this as well: i heard of a scam where someone i know well, who has a kid in the marines deployed somewhere, they don't even know where it is, got a fake voice call asking for money to be sent somewhere in texas, i believe. could you talk about what companies can do? i appreciate the work you've done to ensure that ai platforms are designed so they can't be used for criminal purposes, because that's got to be part of the work that we do. >> yeah. >> not just scams against elected officials.
8:33 am
>> the best measure against deepfakes, and mr. smith mentioned it in his testimony, is the use of provenance and authentication systems, where you can have authentic images and authentic voice recordings signed by the device, whether it's a camera or an audio recorder, that recorded them, so that when a recording is presented it can be authenticated as being genuine, not a deepfake. that's the flip side of watermarks, which would mark anything that is generated and identify it as such. those technologies in combination, along with a certain amount of public education to make sure people understand what the technology is capable of and are on guard for it, can help people sort out what is real from what is fake. >> okay.
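dally's device-signing idea is the core of content provenance standards such as c2pa: the capture device signs a digest of the media at recording time, so any later alteration breaks the signature. here is a minimal sign-and-verify sketch using the third-party python "cryptography" package; real systems add certificate chains and embedded manifests, which are omitted here.

```python
# Minimal sketch of device-side content signing for provenance.
# Requires the third-party "cryptography" package.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real camera the private key lives in secure hardware and the public
# key is vouched for by a certificate chain; here we simply generate one.
device_key = Ed25519PrivateKey.generate()
device_public = device_key.public_key()

def sign_capture(media_bytes: bytes) -> bytes:
    """Camera signs a digest of the captured media at recording time."""
    return device_key.sign(hashlib.sha256(media_bytes).digest())

def is_authentic(media_bytes: bytes, signature: bytes) -> bool:
    """Anyone holding the device's public key can check the media is unaltered."""
    try:
        device_public.verify(signature, hashlib.sha256(media_bytes).digest())
        return True
    except InvalidSignature:
        return False

original = b"...raw image bytes..."
sig = sign_capture(original)
print(is_authentic(original, sig))            # True: genuine capture
print(is_authentic(original + b"edit", sig))  # False: tampering detected
```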
8:34 am
i will ask you, mr. smith, back where you started. some ai platforms use local news content without compensating journalists, including by using the content to train ai algorithms. the journalism competition and preservation act, the bill i have with senator kennedy, would allow local news organizations to negotiate with online platforms, including generative ai platforms that use their content without compensation. can you talk about the impact on local journalism, about the importance of investment in quality journalism, and how we make sure the people who are actually doing the work are compensated, in many fields but also in journalism? mr. smith. >> i would say three quick things. number one, we need to recognize that local journalism is fundamental to the health of the country and the electoral system, and it is ailing, so we need to find ways to preserve and promote it. number two, with generative ai, i think we should let local journalists and publications make decisions about whether they want their content to be available for training or
8:35 am
grounding and the like. that's a big topic and it's worthy of more discussion. we should certainly let them, in my view, negotiate collectively, because that's the only way local journalism is really going to negotiate effectively. >> i appreciate your words. i'm going to get in trouble from senator blumenthal if i go on too long. >> i will just say there are ways we can use ai to aid local journalists, and we're interested in that, so let's add that to the list. >> very good. thank you again, all of you, and thank you, senator blumenthal, for your leadership. >> thank you for yours, senator klobuchar. senator hirono. >> thank you, mr. chairman. mr. smith, it's good to see you again. every time we are at one of these hearings we learn something new, but the conclusion i have drawn is that ai is ubiquitous. anybody can use ai. it can be used in any endeavor. so when i hear you folks
8:36 am
talk about how we should not be taking half measures, i'm not sure what that means. what does it mean to take half measures on something as ubiquitous as ai, where there are other regulatory schemes that can touch upon those endeavors that use ai? there's always a question i have when we address something as complex as ai, which is whether there are unintended consequences that we should care about. would you agree? >> i would absolutely agree. i think we have to define what's a full measure and what's a half measure, but i bet we can all agree that half measures are not good enough. >> that is the thing: how do we recognize, going forward, what is actually going to help us with this powerful tool? i have a question for you, mr. smith. it is a powerful tool that can be used for good, but it can also be used to spread a lot
8:37 am
of disinformation and misinformation, and that happened during the disaster on maui. maui residents were subjected to disinformation, some of it coming from foreign governments, i.e. russia, looking to sow confusion and distrust, including: don't sign up for fema because they cannot be trusted. i worry that with ai such information will only become more rampant in future disasters. do you share my concern about misinformation in the disaster context and the role ai could play? and what can we do to prevent these foreign entities from pushing out ai disinformation to people who are very vulnerable? >> i absolutely share your concern, and i think there are two things we need to think about doing. first, let's use the power of ai, as we are, to detect these kinds
8:38 am
of activities when they are taking place, because, as we did in that instance, microsoft among others used ai and other data technologies to identify what people were doing. number two, i just think we need to stand up as a country, with other governments and with the public, and say there need to be some clear red lines in the world today, regardless of how much else we disagree about. when you think about what happens typically in the wake of an earthquake or a hurricane or a tsunami or a mudslide, the world comes together; people are generous, they help provide relief. then let's look at what happened after the fire in maui. it was the opposite of that. we had some people, not necessarily directed by the kremlin, but people who regularly spread russian propaganda, trying to discourage people from going to the agencies that could help them.
8:39 am
that's inexcusable. and we saw what we believe is chinese-directed activity trying to persuade the world, in multiple languages, that the fire was caused by the united states government itself using a meteorological weapon. those are the things we should all try to bring the international community together on and agree they are off-limits. >> how were you able to identify that this is even occurring, that there is china- and russia-directed misinformation going on? how do we -- i did not know this was happening, by the way, and even in the energy committee, on which i sit, we had people testify, and regarding the maui disaster i asked if they were aware that there had been disinformation put out by a foreign government in that example, and the witness said yes. but i don't know that the people of maui recognized what was going on. how do we, one, even identify that it's going on, and then come
8:40 am
forward and say this is happening, and name names, identify which country it is that is spreading this kind of disinformation and misinformation? >> i think we have to think about two things. first, i think we at a company like microsoft have to lean in, and we are, with data, with infrastructure, with experts and real-time capability to spot these threats, find the patterns and reach well-founded conclusions. then there's a second part, and this is harder; this is where we will need all of your help. what do we do if we find that a foreign government is illegally trying to spread false information next year, in the senate or presidential campaigns, about a candidate? how do we create the room so that information can be shared and people will consider it? for all of you, the most important word in your framework is bipartisan. how do we create a bipartisan framework so that when we find
8:41 am
this, there is a climate where people can listen? i think we have to look at both of those parts of the problem together. >> i hope we can do that. and mr. chairman, if you don't mind: one of the concerns about ai, from the workers' standpoint, is that their jobs will be gone. professor hartzog, you mentioned that generative ai can result in job losses. for both you and mr. smith, what are the kinds of jobs that will be lost to ai? >> that's an excellent question. it's difficult to project into the future, but i would start by saying it's not necessarily the jobs that can be automated effectively, but the jobs that those who control the purse strings think can be automated effectively. if it gets to the point where it appears as though a job could be automated, i imagine you will see industry
8:42 am
move in that direction. >> mr. smith, i think you mention in your book, which i'm listening to, that things like ordering something at a drive-through, that those jobs could be gone to ai. >> yes. four years ago we published our book, my co-author carol ann browne and i, and we asked, what's the first job we think might be eliminated by ai? we don't have a crystal ball, but our bet was taking an order in the drive-through at a fast food restaurant. you are not really establishing a rapport with a human being; all the person does is listen and type into a computer what you are saying. so if ai can hear as well as a person, it can enter that in. and indeed, i was struck a few months ago when wendy's announced that they may automate the drive-through with ai. i think there's a lesson in that, and it should give us both pause, to think a little bit
8:43 am
about the mission. there is no creativity involved in the drive-through, at least in listening to and entering an order, but there are so many jobs that do involve creativity. so the real hope, i think, is to use ai to automate the routine, maybe even the work that is boring, to free people up so they can be more creative, so they can focus more on paying attention to other people and helping them. if we just apply that recipe more broadly, i think we might put ourselves on a path that is more promising. >> thank you. thank you, mr. chairman. >> thank you, senator hirono. senator kennedy. >> thank you, mr. chairman, and thank you for calling this hearing. mr. dally, am i saying your name correctly? >> that's correct.
8:44 am
>> mr. dally, if i am a recipient of content created by generative ai, do you think i should have a right to know that that content was generated by a robot? >> yes, i think you do. the details would depend on the context, but in most cases i think i, or anybody else receiving something, would like to know: is this real, or was this generated? >> mr. smith? >> generally, yes. what i would say is, if you're listening to audio, if you're watching a video, if you're seeing an image and it was generated by ai, i think people have a right to know. the one area where i think there is a nuance is if you're using ai to, say, help you write something; maybe it writes the first draft. just as i don't think any of us would say that, when our staff helps us write something, we are obliged to give a speech and say, i'm now going
8:45 am
to read the paragraph that my staff wrote. you make it your own. i think the written word is a little more complex and we need to think that through, but as a broad principle, i agree with that principle. >> professor? >> there are situations where you probably would not expect to be dealing with the product of generative ai, and in those -- >> that's the problem. >> right. as times change, it's possible that our expectations change. >> but as a principle, do you think that people should have a right to know when they're being fed content from generative ai? >> well, i tell my students it depends on the context. generally speaking, if you're vulnerable to generative ai, then the answer is absolutely yes. >> what do you mean, if you're vulnerable? i'm just looking for --
8:46 am
>> sure. >> no disrespect. >> not at all. >> a straight answer. >> absolutely. >> i like plain breakfast food and straight answers. >> i love them. >> if a robot is feeding me information and i don't know it's a robot, am i entitled to know it's a robot, as a consumer? pretty straight up. >> i think the answer is yes. >> all right. back to mr. dally. am i entitled to know who owns that robot and where the content came from? i know it came from a robot, but somebody had to use the robot to make it give me that content. am i entitled as a consumer to know who owns the robot? >> i think that's a harder question that depends on the particular context. i think if somebody is feeding me
8:47 am
a video, and it has been identified as being generated by ai, i know that it is generated, that it's not real. if it is being used, for example, in a political campaign, then i would want to know who -- >> let me stop you. let's suppose i'm looking at a video and it was generated by a robot. would it make any difference to you whether that robot was owned by, let's say, president biden or president trump? don't you want to know, in evaluating the content, who owns the robot and who prompted it to give me this information? >> i would probably want to know that. i don't know that i would feel it would be required for me to know that. >> how about you, mr. smith? >> i'm generally a believer in letting people not only know that content was generated by a computer, but who owns the program that is doing it. the only qualification i'd offer,
8:48 am
and you all should think about this and would know better than me: there are certain areas in political speech where one has to decide whether you want people to be able to act with anonymity. the federalist papers were first published under a pseudonym. i think in the world today i would rather have everybody know who's speaking. >> professor? >> i'm afraid i'm going to ruin your game with not a straight answer, but i agree -- >> how do you feel about breakfast food? >> i am pro-breakfast food. >> okay. >> we agree on that. i agree with mr. smith. i think there are circumstances where you want to preserve anonymous speech, and there are some where you would absolutely want to know. >> well, i don't want to go over, obviously. this is an important subject, and the extent to which i think -- let me rephrase that. the extent of most senators' knowledge in terms of the nuances of ai, their general
8:49 am
impression is that ai has extraordinary potential to make our lives better, if it doesn't make our lives worse first, and that's about the extent of it. my judgment is that we would not be ready to vote on a big, comprehensive bill anytime soon; i think we're more likely to take baby steps. i asked you these questions purposefully, because senator schatz and i have a bill. it's very simple. it says if you own a robot that is going to spit out artificial content to consumers, consumers have the right to know that it was generated by a robot and who owns the robot. and i think that's a good place to start. but again, i want to thank my colleagues here, my chair and my ranking member. they know a lot about the subject, and i want to hear their
8:50 am
questions, too. thank you all for coming. >> thank you, senator kennedy. on behalf of the chairman, we're going to start a second round, and i guess i will go first since i'm the only one sitting here. that's bad news for the witnesses. >> i came to listen to you. >> mr. smith, let me come back to this. we were talking about kids and kids' privacy and safety; thanks for the information you're going to get me. let me give you an opportunity to make a little news today, in the best possible way. 13, the age limit for bing chat, is such a young age. listen, i've got three kids at home, ages ten, eight, and two. i don't want my kids interacting with chatbots anytime soon at all, and 13 is so incredibly young. would you commit today to raising that age? would you commit to a verifiable age verification procedure, so that
8:51 am
parents can have some sense of confidence that their 12-year-old is not just saying to bing, i'm 13, or 15, go on ahead, now let's get into a back-and-forth with this robot, as senator kennedy said? would you commit to these things on behalf of child safety today? >> well, look, as you can imagine, the teams at microsoft that let me go out and speak probably have one principle they want me to remember: don't go out and make news without talking to them first. >> but you're the boss. >> yeah. let's just say wisdom is important, and most mistakes you make, you make by yourself. i'm happy to go back and talk more about what the right age should be. >> don't you think 13 is awfully young, though? >> it depends on what the activity is. >> to interact with a robot that could be telling you to do any number of things? don't you think that is awfully young? >> not necessarily.
8:52 am
>> really? >> here's a scenario. when i was in seoul, korea, a couple of weeks ago, we met with the deputy prime minister, who is also the minister of education. they're trying to create, for three topics that are very objective, math, coding and learning english, a digital textbook with an ai tutor, so that if you're doing math and you don't understand a concept, you can ask the ai tutor to help you solve the problem. by the way, i think it's useful not only for the kids; i think it's useful for the parents. and i think it's good. take a 14-year-old, let's say, whatever the age is in eighth-grade algebra. most parents, i found when my kids were in eighth-grade algebra and i tried to help them, didn't believe i had made it through the class. i think we want kids, in a controlled way with safeguards, to be able to use something that way. >> we're not talking here about tutors. what i'm talking about is your ai
8:53 am
chat, bing chat. famously, earlier this year, a technology writer for the "new york times" wrote about this, and i'm looking at the article right now: your chatbot was urging this person to break up his marriage. do we want 13-year-olds to be having those conversations? >> no, of course not. >> would you commit to raising the age? >> i don't want bing chat to break up anybody's marriage. >> i don't either. [inaudible] >> but we're not going to make the decision based on the exception. no, this goes to the point that we come with multiple tools. age is one very bright red line. >> it is a very bright red line. that's why i like it. >> and my point is there's a safety architecture that we can apply to -- >> but your safety architecture didn't stop the chatbot from having this discussion with an adult, telling him: you just don't
8:54 am
really love your wife, your wife isn't good for you, she doesn't really love you. this was an adult. can you imagine the kind of things your chatbot would say to a 13-year-old? i'm serious about this. do you really think this is a good idea? >> wait a second. let's put that in context. at a point when the technology had been rolled out to only 20,000 people, a journalist for the new york times spent two hours on the evening of valentine's day ignoring his wife and interacting with a computer, trying to break the system, which he managed to do. we didn't envision that use -- >> well -- >> and the next day we fixed it. >> are you telling me you have envisioned all the questions a 13-year-old might ask, and that parents should be fine with that? or are you telling me i should trust you the same way the new york times writer did? >> what i am saying is that as we go forward we have an increasing capability to learn from the experience of real people and --
8:55 am
>> that's what worries me. that's exactly what worries me: what you're saying is we have to have some failures. i don't want 13-year-olds to be your guinea pigs. i don't want 14-year-olds to be guinea pigs. i don't want any kid to be a guinea pig. i don't want you to learn from their failures. learn from your own failures if you must, but let's not learn from the failures of america's kids. this is what happened with social media. with social media we had companies who made billions of dollars giving us a mental health crisis in this country. they got rich, and the kids got depressed, committed suicide. why would we want to run that experiment again with ai? why not raise the age? you can do it. >> first of all, we shouldn't want anybody to be a guinea pig, regardless of age or anything else. >> good. let's rule kids out right today, right now. >> let's also recognize that technology does require real users. what's different about this technology, which is so fundamentally different in my view from the social media
8:56 am
experience, is that we not only have the capacity but we have the will, and we are applying that will, to fix things in hours and days. >> well, yeah, after the fact. i'm sorry, but it sounds to me like you are saying, just trust us, we're going to do well with this. i'm asking you why we should trust you with our children. >> i'm not asking for trust, although i hope we will earn it. that's why you have a licensing obligation. >> there isn't a licensing obligation. >> that's why the framework and -- >> but i'm asking you, as the president of this company, to make a commitment now, for child safety and protection, to say: you know what, microsoft is going to protect kids. you can tell every parent in america now that microsoft is going to protect your kids. we will never use your kids as a science experiment, ever, never. we will not target your kids, and we will not allow your kids to be used by our chatbots as a source
8:57 am
of information if they are younger than 18. >> with all due respect, i think you're talking about two different things. >> i just talked about protecting kids, very simple. >> yes, and we don't want to use kids as a source of information and monetize it, et cetera. but i am equally of the view that i don't want to cut off an eighth grader today from the right or the ability to use this tool that will help them learn algebra or math in a way that they couldn't a year ago. >> with all due respect, it wasn't algebra or math that your chatbot was recommending or talking about when it was trying to break up some reporter's marriage. >> of course not, but now we're mixing things -- >> no, we're not. we're talking about your chatbot. we're talking about bing chat. >> of course we're talking about bing chat, and i'm talking about the protection of children and how we make technology better. there was the episode back in february, on valentine's day; six months later, if that journalist tries to do the same thing again, it will not happen.
8:58 am
>> do you want me to be done? >> i just don't want to miss my vote. i don't want to miss my vote. >> senator klobuchar. >> you are very kind, thank you. some of us have not voted yet. i wanted to turn to you, mr. dally. in march, nvidia announced a partnership with getty images to develop models that generate images using getty's library. this partnership provides royalties to content creators. why was it important to the company to partner with and pay for the getty images library in developing ai models? >> we believe in respecting people's intellectual property rights, and the photographers who produced the images our models are trained on are expecting income from those images, which we did not want to infringe on. so rather than scraping a bunch of
8:59 am
images off the web, everything that went into training the model, picasso -- people use picasso to generate images, and the people who provided the original content get remunerated. we see this as a way of going forward in general: the people providing the ip that trains these models should benefit from the use of that ip. >> good. today the white house announced more companies that are committing to take steps to move towards safe, secure and trustworthy development of ai. nvidia is one of those companies. could you talk about the steps you have taken and what steps you plan to take to foster responsible development of ai? >> we have done a lot already. we have released nemo guardrails, so we can basically put guardrails around our own large language model, nemo, so inappropriate prompts to the model don't get a response. if the model were inadvertently
9:00 am
to generate something that might be considered offensive, that is intercepted before it can reach the user of the model. we have a set of guidance that we provide for all of our internally generated models on how they should be appropriately used. we provide model cards that, sort of, say where the model came from and what data set it was trained on, and we test these models very thoroughly. the testing depends upon the use. certain models we test for bias. we want to make sure that when the model refers to a doctor, it does not automatically assume it is a him. we test them in certain cases for safety. we have a variant of our nemo model, called bionemo, that is used in the medical profession, and we ensure the advice it gives is safe. there are a number of other measures. i could get you a list if you wanted.
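a minimal sketch of the guardrails pattern described here, assuming the open-source nemoguardrails python package that nvidia released; the config directory and the prompt are hypothetical, and the actual input and output rails would be defined in that config's colang flows:

    from nemoguardrails import LLMRails, RailsConfig

    # load a rails configuration (colang flows plus a models section);
    # "./guardrails_config" is a hypothetical directory
    config = RailsConfig.from_path("./guardrails_config")
    rails = LLMRails(config)

    # input rails screen the prompt before the llm sees it; output rails
    # screen the draft answer before the user sees it
    response = rails.generate(messages=[
        {"role": "user", "content": "tell me something offensive"}
    ])
    print(response["content"])  # a refusal if a rail fired, otherwise the model's answer

the point of the design is that the screen sits outside the model itself, so a bad generation is intercepted before it reaches the user rather than patched after the fact.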
9:01 am
>> to the extent that that area could use some revitalization, i would encourage looking at inputs and outputs, designs and uses. >> and i suggest you look at these election bills, because as we've all been talking about, we have to move quickly on those, and the fact that it's bipartisan has been a very positive thing, so -- >> absolutely. >> and i want to just thank mr. smith for wearing a purple vikings tie. i know that maybe that was an ai-generated message you got, telling you that this would be a smart move with me after their loss on sunday. i will remind you they're playing thursday night.
9:02 am
>> as a native of wisconsin, i can assure you it was an accident. >> thank you, all of you. we've got a lot of work to do. thanks. >> senator blackburn. >> thank you, mr. chairman. mr. smith, i want to come to you first. you talked about china and the chinese communist party and the way they have gone about -- and we've seen a lot of it on tiktok -- running these influence campaigns to influence certain thought processes with the american people. i know you all just did a report on china. you covered some of the disinformation, some of the campaigns. so talk to me a little bit about how microsoft, and the industry as a whole, can combat some of these campaigns. >> i think there's a couple of things that we can think more about and do more about.
9:03 am
the first is we all should want to ensure that our own products and systems and services are not used, say, by foreign governments in this manner and i think that there's room for the evolution of export controls and next generation export controls to help prevent that. i think there's also room for a concept that's worked since the 1990's in the world of banking and financial services. it's these know your customer requirements and we've been advocates for those so that if there is abuse of systems, the company that is offering the service knows who is doing it and is in a better position to stop it from happening. i think the other side of the coin is using ai in advancing our defensive technologies, which really start with our ability to detect what is going on and we've been investing heavily in that space. that is what enabled us to produce the report that we
9:04 am
published. it is what enables us to see the patterns in communications around the world, and we're seeking to be a voice with many others that really calls on governments to, i'll say, lift themselves to a higher standard so that they're not using this kind of technology to interfere in other countries and especially in other countries' elections. >> in that report you all did, you were looking at china. did you look at what i call the other members of the axis of evil -- russia, iran, north korea? >> we did, and that particular report was on east asia. yeah, we see especially prolific activities, some from china, some from iran, and really the most global actor in the space is russia, and we've
9:05 am
seen that grow during the war, but we've seen it, you know, really spiral in recent years, going back to the middle of the last decade. we estimate that the russian government is spending more than a billion dollars a year on a global, what we call, cyber influence operation. part of it targets the united states. i think their fundamental goal is to undermine public confidence in everything that the public cares about in the united states, but it's not unique to the united states. we see it in the south pacific, we see it across africa, and i do think it's a problem we need to do more to counter. >> so, summing it up, you would see something like a know your customer or a swift system, things that apply in banking that are there to help weed out abuse. you think that companies should increase their due diligence to make certain that their systems are used appropriately, and then be careful about doing business
9:06 am
with countries that may misuse a certain technology? >> generally, yes. i think one can look at the specific scenarios -- what's more high risk -- and a know your customer requirement, and know your cloud, in fact, so the systems are in secured data centers. >> mr. hartzog, let me come to you. we look at ai's detrimental impacts. we don't always want to look at doomsday scenarios, but looking at some of the reports on surveillance, with the ccp surveilling the uighurs, and iran surveilling women, and i think there are other countries that are doing the same type of surveillance -- what can you do to prevent that? how do we prevent that? >> senator, i've argued in the past that facial recognition
9:07 am
technologies and certain sorts of biometric surveillance are fundamentally dangerous, that there's no world in which they would be safe for any of us, and that we should prohibit them outright, or at the very least prohibit facial recognition in public spaces. this is what i refer to as strong bright-line measures, rules that draw absolute lines in the sand rather than procedural ones that have been entrenching this kind of harmful surveillance. >> mr. chairman, can i take another 30 seconds? because mr. dally was shaking his head in agreement on some things. i was catching that. do you want to weigh in before i close my questioning on either of these topics? >> i was in general agreement, i guess, when i was shaking my head.
9:08 am
i think we -- we need to be careful who we sell our technology to, and we try to sell to people who are using this for good commercial purposes and not to, you know, suppress others, and we will continue to do that because we don't want to see this technology misused to oppress anybody. >> thank you, senator blackburn. my colleague senator hawley mentioned we have a forum tomorrow, which i welcome. i think anything to aid in our education and enlightenment, we being senators, is a good thing, and i just want to express the hope that some of the folks who are appearing in that venue will also cooperate and appear before this subcommittee.
9:09 am
we'll be inviting more than a few of them, and i want to express my thanks to all of you for being here, but especially mr. smith, who has to be here tomorrow to talk to my colleagues privately. our effort is complementary, not contradictory, to what senator schumer is doing, as you know. i'm very focused on election interference because elections are upon us, and i want to thank my colleagues, senators klobuchar and hawley, coons, and collins, for taking a first step towards addressing the harms that may result from deep fakes, impersonation and all of the potential perils that we've identified here. and it seems to me that authenticating the truth, or ads that embody true images and
9:10 am
voices, is one approach, and then banning the deep fakes and impersonations is another approach, and obviously banning anything in the public realm, in public discourse, risks running afoul of the first amendment, which is why disclosure is often the remedy that we seek, especially in campaign finance. so, maybe i should ask all of you whether you see a place for banning certain kinds of election interference -- and mr. smith, you raised the specter of foreign interference and the frauds and scams that could be perpetrated as they were in 2016 -- and i think it is
9:11 am
one of those nightmares that should keep us up at night because we are an open society. we welcome free expression, and ai is a form of expression, whether we regard it as free or not, and whether it's generated and high risk or simply touching up some of the background in the tv ads. maybe each of you could talk about what you see as the potential remedies there. mr. dally. >> so i think it is a grave concern, with the election season coming up, that the american public may be misled by deep fakes of various kinds. i think, as you mentioned, the use of provenance to authenticate a true image or voice at its source, and then tracking that to its deployment, will let us know what a real
9:12 am
image is, and if we insist on ai-generated content being identified as such, people are at least tipped off that this is generated and not the real thing. you know, i think we need to avoid having some -- especially foreign -- entity interfere in our elections; at the same time, ai-generated content is speech, and i think it would be a dangerous precedent to try to ban something. i think it's much better to have disclosure, as you suggested, than to ban something outright. >> mr. smith. >> three thoughts. 2024 is a critical year for elections, and not only in this country -- it's not only for the united states, it's for the united kingdom, india; across the european union more than two billion people will vote for who is going to represent them, so this is a global issue for the world's democracies.
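the provenance approach described here amounts to signing media at its source and verifying the signature wherever the media is later deployed. a minimal sketch, assuming an ed25519 keypair from the python cryptography package and placeholder media bytes; real systems such as the c2pa content credentials standard carry richer signed metadata, but the verify-before-trust step is the same:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # at the source: the camera or publisher signs the media bytes
    signing_key = Ed25519PrivateKey.generate()
    public_key = signing_key.public_key()

    media_bytes = b"...original image bytes..."  # placeholder content
    signature = signing_key.sign(media_bytes)

    # at deployment: verify the bytes against the publisher's key
    # before labeling the item as authentic
    def is_authentic(data: bytes, sig: bytes) -> bool:
        try:
            public_key.verify(sig, data)
            return True
        except InvalidSignature:
            return False

    print(is_authentic(media_bytes, signature))         # True: provenance intact
    print(is_authentic(media_bytes + b"x", signature))  # False: altered after signing

anything that fails verification can then be labeled as unverified or ai-generated rather than banned outright, which is the disclosure remedy discussed above.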
9:13 am
number two, i think you're right to focus in particular on the first amendment, because it's such a critical cornerstone for american political life and the rights that we all enjoy, and yet i will also be quick to add, i don't think that the russian government qualifies for protection under the first amendment. if they're seeking to interfere in our elections, then i think the country needs to take a strong stand, and a lot of thought needs to be given to how to do that effectively. number three, i think this goes to the heart of your question and why it's such a good one. i think it's going to require some real thought, discussion, and ultimately consensus to emerge, let me say, around one specific scenario. let's imagine for a moment that there is a video that involves a presidential candidate who originally was giving a speech, and then let's imagine that someone uses ai to put different words into the mouth
9:14 am
of that candidate and uses ai technology to perfect it to a level that it is difficult for people to recognize as fraudulent. then you get to this question: what should we do? and at least as we've been trying to think this through, i think we have two broad alternatives. one is we take it down, and the other is we relabel it. if we do the first, then we're acting as censors. it makes me nervous; i don't think it's our role to act as censors, nor the government's under the first amendment. but relabeling to ensure accuracy, i think that's a reasonable path. what this highlights is the discussion still to be had, and i think the urgency for that conversation to take place. >> and i will just say, and then i want to come to you, professor, that i agree
9:15 am
emphatically with your point about the russian government or the chinese government or the saudi government. they're not subject to the protection of our bill of rights when they're seeking to destroy those rights and purposefully trying to take advantage of a free and open society to, in fact, decimate our freedom. so, i think there's a distinction to be made there in terms of national security, and i think that rubric of national security, which is part of our framework, applies with great force in this area, and that is different from a presidential candidate putting up an ad that in effect puts words in the mouth of another candidate, and
9:16 am
as you may know, we began these hearings with introductory remarks from me that were an impersonation, taken from what i said on the floor of the united states senate, generated by chatgpt, that sounded exactly like something i would say, in a voice that was indistinguishable from mine, and obviously, that was in the hearing, in real time. as mark twain famously said, a lie travels halfway around the world before the truth gets out of bed, and we need to make sure that there is action in real time if you're going to do the kind of identification that you suggested. real time, meaning real time in a campaign, which is measured
9:17 am
in minutes and hours, not in days and months. professor? >> thank you, senator. like you, i'm nervous about just coming out and saying we're going to ban all forms of speech, particularly when you're talking about something as important as political speech and like you, i also worry about disclosure alone as a half measure. earlier in this hearing, it was asked what is a half measure and i think that goes towards answering your question today. i think the best way to think about half measures is an approach that's necessary, but not sufficient, that risks giving us the illusion that we've done enough, but ultimately, i think this is the pivotal point, doesn't really disrupt the business model and the financial incentives that have gotten us here in the first place. and so to help answer your question, one thing that i would recommend is thinking about throwing lots of
9:18 am
different tools, which i applaud your bipartisan framework for doing, bringing different things to bear on the problem; thinking about the role that surveillance advertising plays in powering a lot of -- a lot of the harmful technologies and ecosystems that allow the lie not just to be created, but to flourish and to be amplified. so i would think about rules and safeguards that we could adopt to help limit those financial incentives, borrowing from standard principles of accountability: we use disclosures where they're effective; where they're not effective, you have to make it safe; and if you can't make it safe, it shouldn't exist. >> yeah, i think i'm going to turn to senator hawley for more questions, but i think this is a real conundrum. we need to do something about it, we need more than half measures. we can't delude ourselves by
9:19 am
thinking, with a false sense of comfort, that we've solved the problem if we don't provide effective enforcement, and to be very blunt, the federal election commission often has been less than fully effective -- a lot less than fully effective -- in enforcing rules relating to campaigns. and so there again, an oversight entity with strong enforcement authority, sufficient resources and the will to act is going to be very important if we're going to address this problem in real time. senator hawley. >> mr. smith, let me just come back to something you said, thinking now about workers. you talked about wendy's, i think it was, that they're automating the drive-thru, and talked about, you know, this being a good thing. i just want to press on that a little bit. is it? is it a good thing that workers
9:20 am
lose their jobs to ai, whether it's at wendy's or whether it's at walmart or whether it's at the local hardware store? i mean, your comment was that there's really no creativity involved in taking orders at the drive-thru, but that is a job, oftentimes the first job, for younger americans. and hey, in this economy, where the wages of blue collar workers have been flat for 30, 40 years and running, what worries me is that oftentimes what we hear from the tech sector, to be honest with you, is that jobs that don't have creativity, as tech defines it, don't have value. i'm scared to death that ai will replace a lot of jobs that tech types don't think are creative and will leave more blue collar workers without anyplace to turn. my question to you is, can we expect more of this, and is it really progress for folks to lose those kinds of jobs? you know, i suspect that's not
9:21 am
the best paying job in the world, but at least it's a job and do we really want to see more of those jobs lost? >> to be clear, first, i didn't say whether it was a good or bad thing. i was asked to predict what jobs would be impacted and identified that job as one that likely would be. so, but let's, i think, step back because i think your question is critically important. let's first reflect on the fact that we've had about 200 years of automation that have impacted jobs. sometimes for the better, sometimes for the worse. in wisconsin where i grew up or missouri where my father grew up, if you go back 150 years, it took 20 people to harvest an acre of wheat or corn and now it takes one. so 19 people don't work on that acre anymore. and that's been an ongoing part of technology. the real question is this: how do we ensure that technology advances so that we
9:22 am
help people get better jobs, get the skills they need for those jobs, and hopefully do it in a way that broadens economic opportunity rather than narrows it. i think the thing we should be the most concerned by is that since the 1990's -- and i think this is the point you're making -- if you look at the flow of digital technology, you know, fundamentally, we've lived in a world that's widened the economic divide. those people with a college or a graduate education have seen their incomes rise in real terms. those with a high school diploma or less have seen their income level actually drop compared to where it was in the 1990's. so, what do we do now? well, i'll at least say what i think our goals should be. can we use this technology to help advance productivity for a much broader range of people?
9:23 am
including people who didn't have the good fortune to go where, say, you or i went to college or law school. and can we do it in a way that not only makes them more productive, but actually reaps some of the dividends of that productivity for themselves in a growing income level? i think it's that conversation that we need to have. >> i agree with you, and i hope that that is -- i hope that that's what ai can do. you talked about the farm that used to take 20 people to do what one person could do. it used to take thousands of people to produce textiles, furniture, other things in this country, where now it's zero. so we can tell the tale in different ways. i'm not sure that seeing working class jobs go overseas or be replaced entirely is a success story. in fact, i'd argue it's not at all. it's not a success story, and i'd argue our economic policy the last 30 years has been downright disastrous for working people, and tech
9:24 am
companies and financial institutions and certainly banks and wall street, they have reaped huge profits, but blue collar workers can barely find a good paying job. i don't want ai to be the latest accelerant of that trend. so i don't really want every service station in america to be manned by some computer such that nobody can get a job anymore, get their foot in the door and climb up the ladder. that worries me. let me ask you something else in my expiring time. you mentioned national security, critically important. of course there's no national security threat that's more significant for the united states than china. let me just ask you, is microsoft too entwined with china? you have the microsoft research asia that was set up in beijing back in the late 1990's. you've got centers now in shanghai and elsewhere. you've got all kinds of cooperation with chinese state-owned businesses. i'm looking at an article from
9:25 am
protocol magazine where one of the contributors said that microsoft had been the alma mater of chinese big tech. are you concerned about your degree of entwinement with the chinese government? do you need to be decoupling in order to make sure that our national security interests aren't fatally compromised? >> i think it's something that we need to be and are focused on. to some degree, in technology fields, microsoft is the alma mater of the technology leaders in every country of the world because of the role that we've played over the last 40 years, but when it comes to china today, we are and need to have very specific controls on who uses our technology, and for what, and how. that's why we don't, for example, do work there on quantum computing, or provide facial recognition services, or focus on synthetic media -- a whole variety of things. while at the same time, when
9:26 am
starbucks has stores in china, i think it's good that they can run their services in our data center, rather than a chinese company's data center. >> just on facial recognition. back in 2016, your company released this database, ms celeb -- 10 million faces, without the consent of the folks who were in the database. you eventually took it down, although it took three years. china used that database to train much of its facial recognition software and technology. isn't there a problem? you said that microsoft might be the alma mater of many companies in ai, but china is unique, no? china is running concentration camps using digital technology like we've never seen before. isn't that a problem for your company to be involved in any way? >> we don't want to be involved in that in any way, and i don't think that we are. >> are you going to close your centers in beijing or shanghai? >> i don't think that that will
9:27 am
accomplish what you're asking us. >> you're running thousands of people through your centers and through to chinese state-owned enterprises? isn't that a problem? >> there's a big premise there, and i don't embrace the premise that that's what we're doing. >> which part is wrong? >> the notion that we're running thousands of people through and that they're going into the chinese government. >> is that not right? i thought you had 10,000 employees in china whom you've recruited from chinese state-owned agencies, chinese state-owned businesses? they come work for you and then they go back to these state-owned entities? >> we have employees in china. in fact, we have that number. i don't -- to my knowledge, that's not where they're coming from, that's not where they're going; we're not running that kind of revolving door. and it's all about what we do and who we do it with that i think is of paramount importance, and that's what we're focused on. >> do you condemn what the chinese government is doing to
9:28 am
the uighurs in xinjiang province? >> we do everything we can to ensure that our technology is not used in any way for that kind of activity, in china and around the world, by the way. >> but you condemn it, to be clear? >> yes. >> what are the safeguards that you have in place such that your technology is not further enabling the chinese government, given the number of people you employ there and the technology you develop there? >> take something like facial recognition, which is at the heart of your question. we have very tight controls that limit the use of facial recognition in china, including controls that in effect make it very difficult, if not impossible, to use it for any kind of real-time surveillance at all. and by the way, the thing we should remember: the u.s. is a leader in many ai fields, but china is the leader in facial recognition technology and the ai for it. and -- >> well, in part because of the information that you helped them acquire, no? >> no, it's because they have the world's most data. >> yeah, but you gave them --
9:29 am
>> no, i -- i don't think that's -- >> you don't think that had anything to do with it? >> i don't think -- when you have a country of 1.4 billion people and you decide to have facial recognition used in so many places, it gives that country massive data. >> but are you saying that the database that microsoft released in 2016, ms celeb -- you're saying that wasn't used by the chinese government to train their facial recognition? >> i'm not familiar with that, and i'm happy to provide you with information, but my goodness, the advance in that facial recognition technology -- if you go to another country where they're using facial recognition technology, it's highly unlikely it's american technology; it's highly likely it's chinese technology, because they are such leaders in that field. which is fine. if you want to pick a field where the
9:30 am
united states doesn't want to be a technology leader, i'd put facial recognition on that list. and it's home grown. >> how much has microsoft invested in china? >> i'll tell you this: china accounts for one out of every six humans on the planet, but only 1.5% of our global revenue. it's not the market for us that it is for other industries or even some other tech companies. >> it sounds then like you can afford to decouple? >> but is that the right thing to do? >> yes, in a regime that's fundamentally evil, that's inflicting the kind of atrocities on its own citizens that you just alluded to, doing what it's doing to the uighurs, running modern day concentration camps. >> there are two thoughts. do you want general motors to sell or manufacture cars, let's say
9:31 am
sell cars in china? do you want to create jobs in michigan or missouri so the cars can be sold in china? if the answer to that is yes, think about the second question: how do you want general motors in china to run its operations, and where would you like it to store its data? would you like it to be in a secure data center run by an american company, or would you like it to be run by a chinese company? which will better protect general motors' trade secrets? i'll argue we should be there so that we protect the data of american companies, european companies, japanese companies. even if you disagree on everything else, that, i believe, serves this country well. >> you know what, i think you're doing a lot more than just protecting data in china. you have major research centers, thousands, tens of thousands of employees. and to your question, do i want general motors to be building cars in
9:32 am
china? no, i don't. i want them to be making cars here in the united states with american workers. and do i want american companies to be aiding in any way the chinese government in their oppressive tactics? i don't. senator ossoff, would you like me to yield to you now? are you ready? >> i have been very hesitant to interrupt; the discussion, the conversation here has been very interesting. i'm going to call on senator ossoff, and then i have a couple of follow-up questions. >> thank you, mr. chairman, and thank you all for your testimony. just getting down to the fundamentals, mr. smith: if we're going to move forward with a legislative framework, a regulatory framework, we have to define clearly in legislative text precisely what it is that we're regulating. what is the scope of regulated activities, technologies,
9:33 am
products? how should we consider that question and how do we define the scope of technologies, the scope of services, the scope of products, that should be subject to a regime of regulation that is focused on artificial intelligence? >> i think there's three layers of technology on which we need to focus in defining the scope of legislation and regulation. first is the area that has been the central focus of 2023 in the executive branch and here on capitol hill. it's the so-called frontier or foundation models that are the most powerful, say for something like generative ai. in addition, there are the applications that use ai or as senators blumenthal and hawley have said, the deployers of ai. if there is an application that calls on that model in what we consider to be a high risk scenario, meaning, it could make a decision that would have an impact on, say, the privacy
9:34 am
rights, the civil liberties, the rights of children, or the needs of children, then i think we need to think hard and have law and regulation that is effective to protect americans. and then the third layer is the data center infrastructure where these models and these applications are actually deployed. and we should ensure that those data centers are secure, that there are cyber security requirements that the companies, including ours, need to meet, and we should ensure that there are safety systems at one, two, or all three levels if there is an ai system that's going to automate and control, say, something like critical infrastructure such as the electrical grid. those are the areas where we would start, with clear thinking and a lot of effort to learn and apply the details, but focus there. >> as more and more models are trained and developed to higher levels of power and capability,
9:35 am
there will be a proliferation -- there may be a proliferation -- of models, perhaps not the frontier models, perhaps not those at the bleeding edge that use the most compute of all, but powerful enough to have serious implications. so, is the question which models are the most powerful at a moment in time, or is there a threshold of capability or power that should define the scope of regulated technology? >> i think you've just posed one of the critical questions that frankly a lot of people inside the tech sector and across the government and in academia are really working to answer. i think the technology is evolving and the conversation needs to evolve with it. let's just posit this. there's something like gpt-4 from openai. let's posit it can do 10,000 things really well. it's
9:36 am
expensive to create, and it's relatively easy to regulate in the scheme of things, because there's one or two or 10. but now let's go to where you're going, which i think is right: what does the future bring in terms of proliferation? imagine that there's an academic at, say, professor hartzog's university, who wants to create an open source model. it's not going to do 10,000 things well, but four things well. it won't require as many nvidia gpus or as much data. let's imagine it could be used to create the next virus that will spread around the planet. we need to ensure that there's safety architecture and controls around that as well, and that's the conundrum. that's why this is a hard problem to solve. it's why we're trying to build safety architecture into our data centers, so open source models can be run in them and
9:37 am
still be used in a way that prohibits that kind of harm from taking place. when you think about a licensing regime, this is a hard question: who needs a license? you don't want it to be so hard that only a small number of big companies can get it, but then you also need to make sure that you're not requiring people to get it when they really, we would say, don't need a license for what they're doing. and the beauty of the framework, in my view, is that it starts to frame the issue, it starts to define the question -- >> let me ask this question. is it a license to train a model to a certain level of capability? is it a license to sell or license access to that model? or is it a license to purchase or deploy that model? who is the licensed entity? >> that's another question that is key, and it may have different answers in different scenarios, but mostly i would say it should be a license to deploy. that -- you know, i think that there may well be obligations to disclose to, say, an
9:38 am
independent authority when a training run begins, depending on what the goal is, and when the training run ends, so that an oversight body can follow it, just the way, say, might happen when a company is building a new commercial airplane. and then there is -- what's emerging, the good news is, an emerging foundation of, call it, best practices for how the model should be trained, what kind of testing there should be, what harms should be addressed. that's a big topic that needs discussion. >> when you say, forgive me, when you say a license to deploy, do you mean, for example, that if a microsoft office product wishes to use a gpt model for some user-serving purpose within your suite, you would need a license to deploy gpt in that way? or do you mean that gpt would require a license to be offered to microsoft? and putting aside whether this is a plausible commercial scenario, the question is, what is the
9:39 am
structure of the licensing arrangement? >> in this case it's more the latter. think about it like boeing: boeing builds a new plane. before it can sell it to united airlines and united airlines can start to fly it, the f.a.a. is going to certify that it's safe. imagine we're at, call it, gpt-12, whatever you want to name it. before that gets released for use, i think you can imagine a licensing regime that would say it needs to be licensed after it's been, in effect, certified as safe. and then you have to ask yourself, how do you make that work so we don't have the government slow everything down? and what i would say is, you bring together three things. first, you need industry standards, so that you have a common foundation and a well-understood way as to how training should take place. second, you need national regulation. and third, if we're going to have a global economy, at least in the countries where we want these things to work, you probably need a level of
9:40 am
international coordination. and i'd say, look at the world of civil aviation. that's fundamentally how it has worked since the 1940's. let's try to learn from it and see how we might apply something like this or other models here. >> mr. dally, how would you respond to the question, in a field where the technical capabilities are accelerating at a rapid rate, future rate unknown: where, and according to what standard or metric or definition of power, do we draw the line between what requires a license for deployment and what can be freely deployed without oversight by the government? >> you know, i think it's a tough question, because i think you have to balance two important considerations. the first is, you know, the risks presented by a model of whatever power, and then on the other side is the fact that,
9:41 am
you know, we would like to ensure that the u.s. stays ahead in this field, and to do that we want to make sure that individual academics and entrepreneurs with a good idea can move forward and innovate and deploy models without huge barriers. >> so it's the capability of the model, it's the risk presented by its deployment without oversight, is that the -- the thing is that we are going to have to write legislation, and the legislation is going to have to, in words, define the scope of regulated products, and so we're going to have to bound what is subject to licensing arrangements and what is not. and so how do you -- i mean -- >> it's dependent on the application, because if you have a model which is, you know, basically determining a medical procedure, there's a high risk with that, with the patient outcome depending on it. if you have another model which
9:42 am
is controlling the temperature in your building, if it gets a little bit wrong, you may, you know, consume a little too much power, or maybe you're not as comfortable as you would be, but it's not a life threatening situation. so, i think you need to regulate the things that have high consequences if the model goes awry. >> i'm on the chairman's borrowed time, so just tap the gavel when you want me to stop. >> you had to wait, so we'll give you a couple -- >> that's good. okay. professor -- and i'd be curious to hear from the others, as concisely as possible, with respect for the chairman's follow-ups -- how does any of this work without international law? i mean, isn't it correct that a model, potentially a very powerful and dangerous model, for example one whose purpose is to unlock cbrn capabilities -- mass-destructive virological capabilities, say -- to an unsophisticated actor,
9:43 am
once trained, is relatively lightweight to transport? and without, a, an international legal system, and b, a level of surveillance into the flow of data across the internet that seems inconceivable, how can that be controlled and policed? >> it's a great question, senator. with respect to being efficient in my answer, i'll simply say that there are going to be limits. even assuming that we need international cooperation, which i would agree with, we've already started thinking about ways in which, for example, within the eu, which is already deploying some significant ai regulation, we might design frameworks that are compatible, that might require interaction. but ultimately, what i worry about is actually deploying a level of surveillance that we've never before seen in an
9:44 am
attempt to perfectly capture the entire chain of ai, and that's simply not possible. >> and i share that concern about privacy, which is in part why i raised the point. how can we know when folks are loading a lightweight model, once trained, onto perhaps a device that is not even online anymore? there are limits to what we will know. any one of you want to take a stab before i get gaveled out here? >> i think you're right: there's a need for international cooperation and like-minded governance, at least in the initial years. i think there's a lot that we could learn. we were talking with senator blackburn about swift systems for financial transactions, and we've somehow managed, globally and in the united states, for 30 years, to have know your customer requirements and obligations for banks. money has moved around the world. nothing is perfect; that's why we have laws.
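the know your customer idea returned to throughout this hearing is, in code terms, a gate in front of provisioning. a toy sketch of how an ai cloud provider might screen access to large-scale compute, with entirely hypothetical entity names, end uses and rules:

    # hypothetical screening lists for illustration only
    SANCTIONED = {"entity-123"}
    HIGH_RISK_USES = {"real-time surveillance", "biometric tracking"}

    def may_provision_gpu_cluster(customer_id: str,
                                  identity_verified: bool,
                                  stated_end_use: str) -> bool:
        """gate large-scale model access the way banks gate accounts."""
        if not identity_verified:
            return False  # no anonymous access at this scale
        if customer_id in SANCTIONED:
            return False  # export-control / sanctions screen
        if stated_end_use in HIGH_RISK_USES:
            return False  # uses placed off limits by policy
        return True

    print(may_provision_gpu_cluster("entity-456", True, "drug discovery"))  # True
    print(may_provision_gpu_cluster("entity-123", True, "drug discovery"))  # False

the real obligation is the identity verification itself; the point of the sketch is only that the provider, not the customer, holds the decision, which is how the banking analogy works.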
9:45 am
but it's worked to do a lot of good to protect against terrorist or criminal uses of money that would cause concern. >> well, i think you're right in that these models are very portable. you could put the parameters of most models, even the very large ones, on a large usb drive and carry it with you somewhere. you could also train them in a data center anywhere in the world. so i think it's really the use of the model and the deployment that you can effectively regulate. it's going to be hard to regulate the creation of it; if people can't create them here, they'll create them somewhere else. we have to be careful if we want the u.s. to be ahead: do we want them created here, or wherever the climate has driven them? >> thank you, senator ossoff. i hope you're okay with a few more questions. we've been at it for a while. you've been very patient. >> do we have a choice?
9:46 am
>> no. [laughter] >> but thank you very much. it's been very useful. i want to follow up on a number of the questions that have been asked. first of all, on the international issue, there are examples and models for international cooperation. mr. smith, you mentioned civil aviation. the 737 -- the 737 max, i think i have it right -- when it crashed, it was a plane that had to be redone in many respects, and companies, airlines around the world looked to the united states for that redesign and then approval. civil aviation, atomic energy: not always completely effective, but it worked in many respects.
9:47 am
and so i think there are international models here where, frankly, the united states is a leader by example, and best practices are adopted by other countries when we support them, and frankly, in this instance, the eu has been ahead of us in many respects regarding social media, and we are following their leadership by example. i want to come to the issue of having centers, whether they're in china or, for that matter, elsewhere in the world, requiring safeguards so that we are not allowing our technology to be misused in china against the uighurs, and preventing that technology from being stolen or people we train there from serving bad purposes. are you satisfied, mr. smith,
9:48 am
that it is possible -- in fact, that you are doing it in china -- to prevent the evils that could result from doing business there in that way? >> i would say two things. first, i feel good about our track record and our vigilance, and the constant need for us to be vigilant about what services we offer, to whom, and how they're used; it's really those three things. and i would take from that what i think is probably the conversation we'll need to have as a country about export controls more broadly. there's three fundamental areas of technology where the united states is today, i would argue, the global leader. first, the gpu chips from a company like nvidia. second, the cloud infrastructure from a company like, say, microsoft. and the third is the foundation
9:49 am
model from a firm such as openai, and google and other companies are global leaders as well. and i think if we want to feel that we're doing good in creating jobs in the united states by inventing and manufacturing here, as you said, senator hawley, which i completely endorse, and good in that the technology's being used properly, we probably need an export control regime that weaves those three things together. for example, there might be a country in the world -- let's just set aside china for a moment and leave that out. let's just say there's another country where you all and the executive branch would say, we have some qualms, but we want u.s. technology to be present and we want u.s. technology to be used properly, in a way that would make you feel good. you might say then, well, let nvidia export chips to that country to be used in, say, a data center of a company that
9:50 am
we trust, that is licensed even here for that use, with the model being used in a secure way in that data center with a know your customer requirement and with guardrails that put certain kinds of use off limits. that may well be where government policy needs to go and how the tech sector needs to support the government and work with the government to make it a reality. >> i think that that answer is very insightful and raises other questions. i would kind of analogize to nuclear proliferation. we cooperate over safety in some respect with some other countries, some of them adversaries, but we still do everything in our power to prevent american companies from helping china or russia in
9:51 am
their nuclear program. part of that nonproliferation effort is through export controls. we impose sanctions. we have limits and rules around selling and sharing certain choke point technologies related to nuclear enrichment, as well as biological warfare, surveillance and other national security risks, and our framework, in fact, envisions sanctions and safeguards precisely in those areas for exactly the reasons we've been discussing here. last october, the biden administration used existing legal authorities as a first step in blocking the sale to china of some high-performance chips and the equipment to make those chips, and our framework calls for export controls and sanctions and legal restrictions. so, i guess the question that
9:52 am
we will be discussing -- we're not going to resolve it today, regrettably, but we would appreciate your input going forward, and i'm inviting any of the listening audience here in the room or elsewhere to participate in this conversation on this issue and others -- is how should we draw a line on the hardware and technology that american companies are allowed to provide anyone else in the world, whether adversaries or friends? because, as you've observed, mr. dally, and i think all of us accept, it's easily proliferated. >> yes, if i could comment on this. >> sure. >> you drew an analogy to nuclear regulation and mentioned the word choke points. i think the difference here is there really isn't a choke
9:53 am
point and i think there's a careful balance to be made between limiting where our chips go and what they're used for and you know, disadvantaging american companies and the whole food chain that feeds them because, you know, we're not the only people who make chips that can do ai. i wish we were, but we're not. there are companies around the world that can do it. there are other american companies, there are companies in asia, there are companies in europe and if people can't get the chips they need to do ai from us, they will get them somewhere else. and what will happen then is, you know, it turns out that chips aren't really the things that make them useful, it's the software. and if all of a sudden, the standard chips for people to do ai become something from, you know, pick a country, singapore, you know, all of a sudden, the software engineers will start writing the software
9:54 am
for those chips, and they will become the dominant chips, and the leadership of that technology area will have shifted from the u.s. to singapore or whatever other country becomes dominant. so we have to be very careful to balance the national security considerations and the abuse-of-technology considerations against preserving the u.s. lead in this technology area. >> mr. smith. >> yeah, it's a really important point, and what you have is the argument and counterargument. let me for a moment channel what senator hawley often voices, which i think is also important. sometimes you can approach this and say, look, if we don't provide this to somebody, somebody else will, so let's not worry about it. i get it. but at the end of the day, you know, whether you're a company or a country, i think you do have to have clarity about how you want your technology to be used. and, you know, i fully
9:55 am
recognize that there may be a day in the future, after i retire from microsoft, when i look back, and i don't want to say, oh, we did something bad because if we didn't, somebody else would have. i want to say, no, we had clear values and we had principles and we had in place guardrails and protections, and we turned down sales so that somebody couldn't use our technology to abuse people's rights, and if we lost some business, that's the best reason in the world to lose some business. what's true of a company is true of a country. so i'm not trying to say that your view shouldn't be considered; it should. that's why this issue is complicated -- how to strike that balance. >> professor hartzog, do you have any comments? >> i think that was well said, and i would only add that it's also worth considering, in this discussion about how we sort of
9:56 am
safeguard these incredibly dangerous technologies, the risks that could follow if they, for example, proliferated. if it's so dangerous, then we need to revisit the existential question again and bring it back not only to thinking about how we put guardrails on, but how we lead by example, which i think you brought up, which is really important. we don't win the race to violate human rights, right? and that's not one that we want to be running. >> and it isn't simply chinese companies importing chips from the united states and building their own data centers. most ai companies rent capabilities from cloud providers, and we need to make sure that the cloud providers are not used to circumvent our export controls or sanctions. mr. smith, you raised the know your customer rules. knowing
9:57 am
your customers would require ai cloud providers whose models are deployed to know which companies are using those models. if you're leasing out a supercomputer, you need to make sure that your customer isn't the people's liberation army, that it isn't being used to subjugate uighurs, that it isn't used to do facial recognition on dissidents or opponents in iran, for example. but i do think that you've made a critical point, which is that there is a moral imperative here, and i think there's a lesson in the history of this great country, the greatest in the history of the world, that when we lose our moral compass, we lose our way, and when we pursue merely economic or political
9:58 am
interests, sometimes it's very short-sighted, and we wander into a geopolitical swamp and quicksand. so i think these kinds of issues are very important to keep in mind when we lead by example. i want to just make a final point, and then if senator hawley has questions, we're going to let him ask them. but on this issue of worker displacement, i mentioned at the very outset that i think we are on the cusp of a new industrial revolution. we've seen this movie before, as they say, and it didn't turn out that well in the industrial revolution, where workers were displaced en masse, when those textile factories and the mills in this country and all around
9:59 am
the world went out of business, essentially, or replaced the workers with automation and machinery. and i would respond by saying we need to train those workers and provide the education -- you've alluded to it -- and it needn't be a four-year college. you know, in my state of connecticut, electric boat, pratt & whitney, defense contractors are going to need thousands of welders, electricians, tradespeople of all kinds, who will have not just jobs, they'll have careers that require skills that frankly i wouldn't begin to know how to do, and haven't the aptitude to do, and that's no false modesty. so i think there are tremendous opportunities here, not just in the creative sphere that you
10:00 am
have mentioned, where, you know, we may think higher human talents come into play, but in all kinds of jobs that are being created daily already in this country. as i go around the state of connecticut, the most common comment i hear from businesses is, we can't find enough people to do the jobs we have right now. we can't find people to fill the openings that we have, and that is, in my view, maybe the biggest challenge for the american economy today. >> i think that's such an important point, and it's really worth putting into everything we think about for jobs, because i wholeheartedly endorse, senator hawley, what you were saying before: we want people to have jobs, we want them to earn
10:01 am
a good living, et cetera. but first, let's consider the demographic context in which jobs are created. the world has populations that are leveling off or, in much of the world, now declining. one of the things we look at is every country, measured over five years: is the working age population increasing or decreasing, and by how much? from 2020 to 2025, the working age population in this country, people age 20 to 64, will grow by 1 million people. the last time it grew by that small a number, do you know who was president of the united states? john adams. that's how far back you have to go. a country like italy, take that group of people: over the next 20 years, it is going to decline by 41%. what's true of italy is true almost to the same degree in germany. it's already happening in japan,
10:02 am
in korea. so we live in a world where, for many countries, we suddenly encounter what you find, i suspect, when you go to hartford or st. louis or kansas city: people can't find enough police officers, enough nurses, enough teachers. that is a problem we need to desperately focus on solving. so how do we do that? i do think ai is something that can help. take even something like a call center. one of the things that's fascinating to me -- we have more than 3,000 customers around the world running proofs of concept, and one fascinating one is a bank in the netherlands. you go into a call center today, and the desks of the workers look like a trading floor on wall street. they have six different terminals; somebody calls, and they are desperately trying to find the answer to a question. with something like gpt-4 and our services, six terminals can become one. somebody who is working there
10:03 am
can ask a question, and the answer comes up. and what they're finding is the person who's answering the phone, talking to a customer, can now spend more time concentrating on the customer and what they need. i appreciate all the challenges. there's so much uncertainty. we desperately need to focus on skills. but i really do hope that this is an era where we can use this to, frankly, help people fill jobs, get training, and focus more on -- let me just put it this way: i'm excited about artificial intelligence. i'm even more excited about human intelligence. and if we can use artificial intelligence to help people exercise more human intelligence and earn more money, that would be something way more exciting to pursue than everything we had to grapple with over the last decade around, say, social media and the like.
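the call-center scenario described here is a draft-an-answer loop: the agent's six terminals collapse into one query against a model grounded in the account record. a rough sketch, assuming the 2023-era openai python sdk with an api key configured in the environment; the model name, prompt, and account summary are illustrative:

    import openai  # assumes openai<1.0 and OPENAI_API_KEY set in the environment

    def agent_assist(customer_question: str, account_summary: str) -> str:
        """drafts an answer for the agent from one screen instead of six."""
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "You assist a bank call-center agent. Answer only "
                            "from the account summary; say so when the answer "
                            "is not there."},
                {"role": "user",
                 "content": f"Account summary:\n{account_summary}\n\n"
                            f"Customer question: {customer_question}"},
            ],
        )
        return response.choices[0].message.content

    # hypothetical usage
    print(agent_assist("why was i charged a fee last month?",
                       "checking account; monthly fee applies below a $500 balance"))

the agent stays on the call and the model only drafts, which is why the answer is constrained to the account summary it is shown.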
10:04 am
>> our framework very much focuses on treatment of workers, on providing more training. it may not be something that this entity will do, but it's definitely something that has to be addressed. and it's not only displacement, but also working conditions and opportunities within the workplace -- for promotion, to prevent discrimination, to protect civil rights. we haven't talked about it in detail, but we deal with it in our framework in terms of transparency around decision-making. china may try to steal our technology, but it can't steal our people. and china has its own population challenges, with its need for more people and skilled labor.
10:05 am
but as i say about connecticut, we don't have gold mines or oil wells. what we have is a really able workforce. that's going to be the key to, i think, america's economy in the future, and ai can help promote the development of that workforce. senator hawley. you all have been really patient, and so has our staff. i want to thank our staff for the hearing. but most importantly, we're going to continue these hearings. it is so helpful to us. i can go down our framework and tie the proposals to specific comments made by sam altman or others who have testified before, and we will enrich and expand our framework with the insights that you've given us. so i want to thank all of our